

I teach, nothing is evident to anyone 😭
If you like, any work that does not encompass the whole world applies a filter, and therefore a bias of some sort. We don't expect a photo to X-ray the roots of a tree, because we understand the physical constraints of photography. Sure, something could be just out of frame, something else could have been photoshopped out, you can tell a different story by selecting different photos, and so on. But we understand the "what" a photo represents. I doubt we have the same understanding of "what" an LLM represents, or what constrains its possible answers, and we definitely don't understand why a specific answer is chosen over the infinite other possibilities.
Depends. For an expert, that is self-evident (even if it might not be clear which biases have been incorporated). But that is not how it has been marketed: ChatGPT and the like are perceived as answering "the truth" at all times, and that skews users' understanding of the answers. Researching how deeply the answers are affected by the coders' biases is the focus of their research, and a worthwhile undertaking to avoid overlooking something important.
AI is seeing far more widespread use than just among people with a technical background. It will be applied, notably in education but also in every other non-CS discipline, by people with limited understanding of its biases. It is important to make those biases explicit, and to underline that an LLM will reproduce whatever biases it absorbed from its training data and loss function. But loss functions and training data are not public knowledge, so studies need to be performed to understand how the coders' own biases influenced the LLM's design itself.
A photo carries less bias because we know what it represents: a photo only shows what can be seen. The same understanding does not exist for AI. Why show a photorealistic tree rather than a biological diagram? Choices have been made, and a broader audience needs to be aware of them.
Did you read the rest of the article? The tree drawing was just the trigger for an evaluation of the AI's capabilities, in particular underlining how "tree" (but also "human", "success", "importance") are strongly restricted in their meaning by the AI itself, without the user noticing it. Thus a user receives an answer that has already undergone a filtering of sorts. Not being aware of this risks limiting our understanding of AI and increasing the damage it can do.
Theoretical research in AI is both necessary and hard at the moment, with funding going more toward new results than toward understanding the properties of old ones.
I'm okay with breaking the rules if that's how the language is usually spoken, but I'm still interested in learning the rule.
I think the whole premise of Duolingo, that learning a language means translating to and from it, is bonkers. I know multiple languages at various levels, and every time I speak I build my sentences in the target language directly. Translating is a totally different skill set.
There are some limitations, though: native speakers often don't have a deep understanding of the grammar rules they use, because they use them intuitively, so learning this way can be a bit foggy. I often use this technique when I'm already familiar with the target language, at least at a basic level.
Same here: very far from the top result, and it links to this same article.
Consider that the full figure is worldwide. How many of those are US-based or US-involved?