• lando55@lemmy.world · ↑13 ↓1 · 2 months ago

    What does it actually promise? AI (namely generative models and LLMs) is definitely overhyped, in my opinion, but admittedly I’m far from an expert. Is what they’re promising to deliver not actually doable?

    • naught101@lemmy.world · ↑31 · 2 months ago

      It literally promises to generate content, but I think the implied promise is that it will replace parts of your workforce wholesale, with no drop in quality.

      It’s that last bit that’s going to be where the drama happens.

    • frezik@midwest.social · ↑8 ↓1 · edited · 2 months ago

      They want AGI, which would match or exceed human intelligence. Current methods seem to be hitting a wall. It takes exponentially more inputs and more power to see the same level of improvement seen in past years. They’ve already eaten all the content they can, and they’re starting to talk about using entire nuclear reactors just to power it all. Even the more modest promises, like pictures of people with the correct number of fingers, seem out of reach.

      Investors are starting to notice that these promises aren’t going to happen. Nvidia’s stock price is probably going to be the bellwether.

    • Smokeydope@lemmy.world · ↑15 ↓17 · edited · 2 months ago

      It delivers on what it promises for many people who use LLMs. They can be used for coding assistance, automated customer support, tutoring, document processing, structuring large amounts of complex information, generally accurate knowledge on many topics, editing your writing, and lots more.
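      As a sketch of the "structuring complex information" use case: a common pattern is to ask a model to return strict JSON and then validate what comes back. The schema (name/email) and the stand-in reply below are illustrative assumptions, not any particular vendor's API.

```python
import json

def extract_contact(llm_reply: str) -> dict:
    """Validate the JSON a model was asked to emit for a document.

    The required fields here are an assumption for illustration; real
    pipelines would match them to whatever the prompt requested.
    """
    record = json.loads(llm_reply)
    for field in ("name", "email"):
        if field not in record:
            raise ValueError(f"model omitted required field: {field}")
    return record

# In real use llm_reply would be the model's response to a prompt like
# "Return ONLY a JSON object with keys name and email"; a hand-written
# stand-in keeps the sketch runnable offline.
sample_reply = '{"name": "Ada Lovelace", "email": "ada@example.org"}'
print(extract_contact(sample_reply)["name"])  # Ada Lovelace
```

      The validation step matters because models sometimes wrap JSON in prose or drop fields, so you never trust the reply blindly.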

      It’s a rapidly advancing pioneer technology, like computers were in the 90s, so every six months to a year brings a new breakthrough in overall intelligence or a new capability. The newest LLM models can process images and audio as well as text.

      The problem for OpenAI is that it has serious competitors who will absolutely show up to eat its lunch if it sinks as a company: Facebook/Meta with the Llama models, Mistral AI with all of theirs, Alibaba with Qwen, plus some good smaller competition like the OpenHermes team. All of these big tech companies have open-sourced some models, so you can tinker with and finetune them at home, while OpenAI remains closed source, which is ironic given the company name. Most of these AI companies also offer cloud access to their models at very competitive prices, Mistral especially.

      The people who say AI is a trendy, useless fad don’t know what they’re talking about, or are just upset at AI. I am part of the local-LLM community and have been playing around with open models for months, pushing my computer’s hardware to its limits. It’s very cool seeing just how smart they really are, and what a computer that simulates human thought processes and knows a little bit of everything can actually do to help me in daily life.
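      For anyone curious about the local-LLM side of this: many local servers (llama.cpp’s server, Ollama’s compatibility mode) accept the OpenAI-style chat request format, so a sketch of one looks like the following. The model name and port mentioned in the comments are assumptions, not fixed values.

```python
import json

def build_chat_request(model: str, user_msg: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style /v1/chat/completions payload, the de facto
    request format many local LLM servers accept. The model name passed
    in is whatever your local server has loaded."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "temperature": temperature,
    }

payload = build_chat_request("llama-3-8b-instruct", "Summarize RAID levels.")
# POSTing json.dumps(payload) to e.g. http://localhost:8080/v1/chat/completions
# (with urllib.request) would return the model's reply, assuming a local
# server is running; building the payload itself needs no server.
print(payload["messages"][0]["role"])  # user
```

      The nice part of the shared format is that the same payload works against OpenAI’s hosted API or a model running on your own GPU.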

      Terence Tao, the superstar genius mathematician, described the newest high-end model from OpenAI as improving from an “incompetent graduate student” to a “mediocre graduate student”, which essentially means AI is now generally smarter than the average person in many regards.

      This month several competitor LLM models were released which, while much smaller than OpenAI’s o1, somehow beat or equaled that big model on many benchmarks.

      Neural networks are here, and they are only going to get better. We’re in for a wild ride.

      • Stegget@lemmy.world · ↑21 ↓3 · 2 months ago

        My issue is that I have no reason to think AI will be used to improve my life. All I see is a tool that will rip, rend and tear through the tenuous social fabric we’re trying to collectively hold on to.

        • Smokeydope@lemmy.world · ↑7 ↓9 · edited · 2 months ago

          A tool is a tool; it has no say in how it’s used. AI is no different from the computer software you use to browse the internet or do other digital tasks.

          When it’s used badly, as an outlet for escapism or a substitute for social connection, it can have bad consequences for your personal life.

          It’s at its best as a tool to help reason through a tough task, or as a step in a creative process. As on-demand assistance for the disabled. As a non-judgemental conversational partner the neurodivergent and emotionally traumatized can open up to. Or to help a super genius rubber-duck their novel ideas and work through complex thought processes. It can improve people’s lives for the better if applied to the right use cases.

          It’s about how you choose to interact with it in your personal life, and how society, businesses, and your governing bodies choose to use it in their own processes. And believe me, they will find ways to use it.

          I think comparing LLMs to computers in the 90s is accurate. Right now only nerds, professionals, and industry/business/military see their potential. As the tech gets figured out, utility improves, and LLM desktops start being sold as consumer-grade appliances, maybe the attitude will change?

          • AA5B@lemmy.world · ↑6 · 2 months ago

            A better analogy is search engines. They’re just another tool, but:

            • at their best, they let you find anything in all the world’s knowledge
            • at their worst, they’re just another way to serve ads and scams, with random companies vying for attention on the theory that any attention is good attention, regardless of what you were looking for

            When I started as a software engineer, my detailed knowledge mattered most and my best tool was the manuals. Now my most important tools are search engines and autocomplete: I can work faster with less knowledge of the syntax, and my value is the higher-level thinking about what we need to do. If my company ever allows AI, I fully expect it to be as important a tool as a search engine.