Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their main LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
  • Akuchimoya@startrek.website · 35 minutes ago

    I had to tell a bunch of librarians that LLMs are literally language models made to mimic language patterns, and are not made to be factually correct. They understood it when I put it that way, but librarians are supposed to be “information professionals”. If they, as a slightly better trained subset of the general public, don’t know that, the general public has no hope of knowing that.

    • WagyuSneakers@lemm.ee · 30 minutes ago

      It’s so weird watching the masses ignore industry experts and jump on weird media hype trains. This must be how doctors felt during Covid.

  • fubarx@lemmy.world · 4 hours ago

    “Think of how stupid the average person is, and realize half of them are stupider than that.” ― George Carlin

  • notsoshaihulud@lemmy.world · 4 hours ago

    I’m 100% certain that LLMs are smarter than half of Americans. What I’m not so sure about is that the people with the insight to admit being dumber than an LLM are the ones who really are.

  • Comtief@lemm.ee · 5 hours ago

    LLMs are smart in the way someone is smart who has read all the books and knows all of them but has never left the house. Basically all theory and no street smarts.

      • Comtief@lemm.ee · 4 hours ago

        Well yes, they are glorified text autocomplete, but they still have their uses which could be considered “smart”. For example I was struggling with a programming thing today and an LLM helped me out, so in a way it is smarter than me in that specific thing. I think it’s less that they are dumb and more that they have no agency whatsoever, they have to be pushed into the direction you want. Pretty annoying…

  • Montreal_Metro@lemmy.ca · 5 hours ago

    There’s a lot of ignorant people out there, so yeah, technically an LLM is smarter than most people.

  • kipo@lemm.ee · 6 hours ago

    No one has asked so I am going to ask:

    What is Elon University and why should I trust them?

    • Patch@feddit.uk · 4 hours ago

      Ironic coincidence of the name aside, it appears to be a legit bricks and mortar university in a town called Elon, North Carolina.

  • Traister101@lemmy.today · 4 hours ago

    While this is pretty hilarious, LLMs don’t actually “know” anything in the usual sense of the word. An LLM, or Large Language Model, is basically a system that maps “words” to other “words” to allow a computer to model language. I.e., all an LLM knows is that when it sees “I love”, what probably comes next is “my mom”, “my dad”, etc. Because of this behavior, and the fact that we can train them on the massive swath of people asking questions and getting answers on the internet, LLMs are mostly okay at “answering” a question essentially by chance. Really they are just picking the next most likely word over and over from their training, which usually ends up reasonably accurate.
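    The “picking the next most likely word” idea in this comment can be sketched with a toy bigram counter. This is a deliberate oversimplification with a made-up corpus and function names, just to illustrate the concept; real LLMs use neural networks over subword tokens, not raw word counts.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only.
corpus = "i love my mom . i love my dad . i love my dog ."

def build_bigram_model(text):
    """Count which word follows each word in the text."""
    words = text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model, word):
    """Return the most frequent continuation seen in training, or None."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

model = build_bigram_model(corpus)
print(predict_next(model, "love"))  # prints "my" -- the only word ever seen after "love"
```

    A real model generates text by feeding each prediction back in as the new context, which is why output that merely continues patterns can still look like an “answer”.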

  • Geodad@lemm.ee · 9 hours ago

    Because an LLM is smarter than about 50% of Americans.

  • Owl@lemm.ee · 13 hours ago

    Looking at America’s voting results, they’re probably right.

  • Telorand@reddthat.com · 13 hours ago

    Think of a person with the most average intelligence and realize that 50% of people are dumber than that.

    These people vote. These people think billionaires are their friends and will save them. Gods help us.

    • Mac@mander.xyz · 1 hour ago

      This is why i don’t believe in democracy. Humans are too easy to manipulate into voting against their interests.
      Even the “intelligent” ones.

    • 9point6@lemmy.world · 12 hours ago

      I was about to remark how this data backs up the events we’ve been watching unfold in America recently

  • 👍Maximum Derek👍@discuss.tchncs.de · 13 hours ago

    Reminds me of that George Carlin joke: Think of how stupid the average person is, and realize half of them are stupider than that.

    So half of people are dumb enough to think autocomplete with a PR team is smarter than they are… or they’re dumb enough to be correct.