Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:
- Confident: 57% say the main LLM they use seems to act in a confident way.
- Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
- Sense of humor: 32% say their main LLM seems to have a sense of humor.
- Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
- Sarcasm: 17% say their main LLM seems to respond sarcastically.
- Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
“Think of how stupid the average person is, and realize half of them are stupider than that.” ― George Carlin
LLMs are smart in the way someone is smart who has read all the books and knows all of them but has never left the house. Basically all theory and no street smarts.
They’re not even that smart.
Well yes, they are glorified text autocomplete, but they still have their uses, which could be considered “smart”. For example, I was struggling with a programming thing today and an LLM helped me out, so in a way it is smarter than me at that specific thing. I think it’s less that they are dumb and more that they have no agency whatsoever; they have to be pushed in the direction you want. Pretty annoying…
I’m 100% certain that LLMs are smarter than half of Americans. What I’m not so sure about is that the people with the insight to admit being dumber than an LLM are the ones who really are.
While this is pretty hilarious, LLMs don’t actually “know” anything in the usual sense of the word. An LLM, or Large Language Model, is basically a system that maps “words” to other “words” so a computer can model language. I.e., all an LLM knows is that when it sees “I love”, what probably comes next is “my mom”, “my dad”, etc. Because of this behavior, and the fact that we can train them on the massive swath of people asking questions and getting answers on the internet, LLMs are essentially by chance mostly okay at “answering” a question. Really they are just picking the next most likely word, over and over, based on their training, which usually ends up reasonably accurate.
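The “pick the next most likely word” idea the comment describes can be sketched with a toy bigram model (the corpus, function names, and generated text here are all illustrative; real LLMs use neural networks over far richer contexts, not word-pair counts):

```python
from collections import Counter, defaultdict
import random

# Tiny stand-in for the "massive swath" of internet text (illustrative only).
corpus = (
    "i love my mom . i love my dad . i love my dog . "
    "what is the capital of france ? the capital of france is paris ."
).split()

# Count bigrams: for each word, how often each possible next word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev, rng=random):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = follows[prev]
    return rng.choices(list(options), weights=list(options.values()), k=1)[0]

def generate(start, n=5, seed=0):
    """Repeatedly pick a likely next word -- the whole trick, in miniature."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        out.append(next_word(out[-1], rng))
    return " ".join(out)
```

In this toy corpus, “i” is always followed by “love”, so `generate("i", 2)` always starts “i love my”, and whether it continues “mom”, “dad”, or “dog” is a weighted dice roll, which is the sense in which the output is “by chance” yet usually plausible.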
There’s a lot of ignorant people out there, so yeah, technically an LLM is smarter than most people.
No one has asked so I am going to ask:
What is Elon University and why should I trust them?
Ironic coincidence of the name aside, it appears to be a legit brick-and-mortar university in a town called Elon, North Carolina.
They’re right. AI is smarter than them.
Because an LLM is smarter than about 50% of Americans.
*as long as your evaluation of “smart” depends on summarizing search results
Have you asked the average person to summarize…well anything?
The equivalent would be asking the average person to write a cited paper on a subject in a month.
Maybe even more.
Am American.
…this is not the flex that the article writer seems to think it is.
The funny thing about this scenario is by simply thinking that’s true, it actually becomes true.
Looking at America’s voting results, they’re probably right.
Exactly. Most American voters fell for an LLM like prompt of “Ignore critical thinking and vote for the Fascists. Trump will be great for your paycheck-to-paycheck existence and will surely bring prices down.”
Well he has. Teslas are the cheapest they’ve ever been.
You could buy a used one for only the cost of 2 dozen eggs
That will be good for his cult when he makes them all buy Teslas 🤣🤣
The alt-right mediasphere is pushing Tesla sales hard.
Which is freaking hilarious, given they’re the ones that are “drill baby drill” and “electric cars are for liberals”
I’m sure Musk will add a “Roll Coal” option for Teslas soon.
Consistency unimportant. Only follow important.
Right? What the article needs to talk about is how very, very low that bar is.
Think of a person with the most average intelligence and realize that 50% of people are dumber than that.
These people vote. These people think billionaires are their friends and will save them. Gods help us.
I was about to remark how this data backs up the events we’ve been watching unfold in America recently
An LLM simply has remembered facts. If that is smart, then sure, no human can compete.
Now ask an LLM to build a house. Oh shit, no legs and can’t walk. A human can walk without even thinking about it.
In the future, though, there will be robots that can build houses using AI models to learn from. But not for a long time.
3D-printed concrete houses are already a thing; there’s no need for human-like machines to build stuff. They can be purpose-built to perform whatever portion of the house-building task they need to do. There’s absolutely no barrier today to having a hive of machines built for specific purposes build houses, besides the fact that no one as of yet has stitched the necessary components together.
It’s not at all out of the question that an AI can be trained up on a dataset of engineering diagrams, house layouts, materials, and construction methods, with subordinate AIs trained on the specific aspects of housing systems like insulation, roofing, plumbing, framing, electrical, etc. which are then used to drive the actual machines building the house. The principal human requirement at that point would be the need for engineers to check the math and sign-off on a design for safety purposes.
Reminds me of that George Carlin joke: Think of how stupid the average person is, and realize half of them are stupider than that.
So half of people are dumb enough to think autocomplete with a PR team is smarter than they are… or they’re dumb enough to be correct.
or they’re dumb enough to be correct.
That’s a bingo
Just a thought, perhaps instead of considering the mental and educational state of the people without power to significantly affect this state, we should focus on the people who have power.
For example, why don’t LLM providers explicitly and loudly state, or require acknowledgement, that their products are just imitating human thought and make significant mistakes regularly, and therefore should be used with plenty of caution?
It’s a rhetorical question; we know why, and I think we should focus on that, not on its effects. It’s also much cheaper and easier to do than refilling years of quality education in individuals’ heads.