Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their primary LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
  • WagyuSneakers@lemm.ee

    I’m the expert in this situation, and I’m getting tired of explaining to junior engineers and laymen that it is a media hype train.

    I worked on ML projects before they got rebranded as AI. I get to sit in the room when these discussions happen with architects and actual leaders. This is hype. Anyone who tells you otherwise is lying or selling you something.

    • BlushedPotatoPlayers@sopuli.xyz

      I see how that is a hype train, and I also work with machine learning (though I’m far from an expert), but I’m not convinced these things are not getting intelligent. I know what their problems are, but I’m not sure the human brain doesn’t work the same way, just (for now) more efficiently.

      That is, we have visual information and some evolutionary BIOS, while LLMs have to read the whole internet and use a power plant to function - but what if our brains are just the same bullshit generators, and we’re simply unaware of it?

      • WagyuSneakers@lemm.ee

        I work in an extremely related field and spend my days embedded in ML/AI projects. I’ve seen teams make some cool stuff and I’ve seen teams make crapware with “AI” slapped on top. I guarantee you that you are wrong.

        What if our brains…

        That’s the thing: you can go look this information up. You don’t have to guess; this information is readily available to you.

        LLMs work by agreeing with you and stringing together coherent text in patterns they recognize from huge samples. It’s not particularly impressive, and it’s far, far closer to the early chatbots of last century than to real AGI or some sort of singularity. The limits we’re at now are physical: look up how much electricity and water it takes just to run trivial queries. Progress has plateaued, as it frequently does with tech like this. That’s okay; it’s still a neat development. The only big takeaway from LLMs is that agreeing with people makes them think you’re smart.
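
        If you want to see the shape of the idea, here’s a toy bigram Markov chain in Python - my own sketch, obviously nothing like a production LLM (those are learned neural networks over subword tokens), but the “sample the next token from patterns seen in training” loop is the same:

        # Toy caricature of pattern-based text generation: pick each next
        # word based on which words followed it in the training sample.
        import random
        from collections import defaultdict

        def train_bigrams(text):
            # Record every continuation observed after each word.
            counts = defaultdict(list)
            words = text.split()
            for prev, nxt in zip(words, words[1:]):
                counts[prev].append(nxt)
            return counts

        def generate(counts, start, length=10):
            # Autoregressively sample the next word from observed continuations.
            out = [start]
            for _ in range(length - 1):
                options = counts.get(out[-1])
                if not options:  # dead end: no continuation ever observed
                    break
                out.append(random.choice(options))
            return " ".join(out)

        sample = "the model predicts the next word the model saw in training"
        print(generate(train_bigrams(sample), "the"))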

        In fact, at higher levels of engineering, LLMs are a glorified Google. When most of what you need to do doesn’t have a million Stack Overflow articles to train on, it’s going to be difficult to get an LLM to contribute in any significant way. I’d go so far as to say it hasn’t introduced any tool I didn’t already have. It’s just mildly more convenient than some of them while the costs are low.