• Churbleyimyam@lemm.ee
    link
    fedilink
    English
    arrow-up
    18
    ·
    1 year ago

    I think AI has mostly been about luring investors into pumping up share prices rather than offering something of genuine value to consumers.

    Some people are gonna lose a lot of other people’s money over it.

    • SlopppyEngineer@lemmy.world

      Yes, I’m getting some serious dot-com bubble vibes from the whole AI thing. But the dot-com boom produced Amazon, so every company is basically going all-in hoping to be the next Amazon, even though most will end up like pets.com. It’s a risk they’re willing to take.

      • slaacaa@lemmy.world

        “You might lose all your money, but that is a risk I’m willing to take”

        • visionary AI techbro talking to investors
        • SlopppyEngineer@lemmy.world

          Investors pump money into a bunch of companies so that at least one of them making it big and paying them back for all the failed investments is almost guaranteed. That’s what taking risks is all about.

          • verity_kindle@sh.itjust.works

            Sure, but it SEEMS that some investors are relying on buzzwords and hype, skipping research and ignoring the fundamentals of investing. Beyond the ever-evolving claims of the CEO: is the company well managed? What is their cash flow, and where will it be a year from now? Do the upper-level managers have coke habits?

            • slaacaa@lemmy.world

              You’re right, but these fundamentals don’t really matter anymore; investors are buying hype and hoping to sell an even bigger hype for more money later.

              • Aceticon@lemmy.world

                Seeing the whole thing as Knowingly Trading in Hype is actually a really good insight.

                Certainly it neatly explains a lot.

                • rottingleaf@lemmy.world

                  Also called a Ponzi scheme, where every participant knows it’s a scam but hopes to find a few more fools before it crashes, and to leave with a positive balance.

      • barsoap@lemm.ee

        OpenAI will fail. StabilityAI will fail. CivitAI will prevail, mark my words.

    • SLVRDRGN@lemmy.world

      I tried to find the advert, but I see this on YouTube a lot: an Adobe AI ad which depicts, without shame, AI writing out a newsletter/promo for a business owner’s new product (cookies or ice cream or something), showing the owner putting no effort into their personal product and a customer happily consuming because they were attracted by the thoughtless promo.

      How are producers/consumers okay with everything being so mediocre??

      • MajorHavoc@programming.dev

        How are producers/consumers okay with everything being so mediocre??

        “You’re always trying to make everything just a little bit worse so that you can feel good about having a lot more of it. I love it. It’s so human!” - The Good Place

    • themurphy@lemmy.ml

      Definitely. Many companies have implemented AI without thinking with 3 brain cells.

      Great and useful implementation of AI exists, but it’s like 1/100 right now in products.

      • floofloof@lemmy.ca

        If my employer is anything to go by, much of it is just unimaginative businesspeople who are afraid of missing out on what everyone else is selling.

        At work we were instructed to shove ChatGPT into our systems about a month after it became a thing. It makes no sense in our system, and many of us advised management it was irresponsible since it’s giving people advice on very sensitive matters without any guarantee that advice is any good. But no matter, we had to shove it in there, with small print to cover our asses. I bet no one even uses it, but sales can tell customers the product is “AI-driven”.

      • PerogiBoi@lemmy.ca

        My old company before they laid me off laid off our entire HR and Comms teams in exchange for ChatGPT Enterprise.

        “We can just have an AI chatbot for HR and pay inquiries and ask Dall-e to create icons and other content”.

        A friend who still works there told me they’re hiring a bunch of “prompt engineers” to improve the quality of the AI outputs haha

        • verity_kindle@sh.itjust.works

          I’m sorry. Hope you find a better job, on the inevitable downswing of the hype, when someone realizes that a prompt can’t replace a person in customer service. Customers will invest more time, i.e., even wait in a purposely engineered holding music hell, to have a real person listen to them.

        • themurphy@lemmy.ml

          That’s an even worse ‘use case’ than I could imagine.

          HR should be one of the most protected fields against AI, because you actually need a human resource.

          And “prompt engineer” is so stupid. The “job” is only necessary because the AI doesn’t understand well enough what you want to do. The only productive person you could hire would be a programmer or something, who could actually tinker with the AI.

    • peto (he/him)@lemm.ee

      A lot of it is follow the leader type bullshit. For companies in areas where AI is actually beneficial they have already been implementing it for years, quietly because it isn’t something new or exceptional. It is just the tool you use for solving certain problems.

      Investors going to bubble though.

    • Riskable@programming.dev

      My doorbell camera manufacturer now advertises their products as using, “Local AI” meaning, they’re not relying on a cloud service to look at your video in order to detect humans/faces/etc. Honestly, it seems like a good (marketing) move.

    • spiderman@ani.social

      Yeah, AI can make some products better, but most of the products that use it these days don’t actually need it. It’s annoying to use products that actively shovel in AI when they don’t even need it.

      • Lost_My_Mind@lemmy.world

        Ya know what product MIGHT be better with AI?

        Toasters. They have ONE JOB, and everybody agrees their toaster is crap. But you’re not going to buy another toaster, because that too will be crap.

        How about a toaster, that accurately, and evenly toasts your bread, and then DOESN’T give you a heart attack at 5am when you’re still half asleep???

        IS THAT TOO MUCH TO ASK???

  • oyo@lemm.ee

    LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.

      • Blackmist@feddit.uk

        And the system doesn’t know either.

        For me this is the major issue. A human is capable of saying “I don’t know”. LLMs don’t seem able to.

        • xantoxis@lemmy.world

          Accurate.

          No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There’s no concept of not knowing the answer, because they don’t know anything in the first place.
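          Funny thing is, the uncertainty is technically there in the model’s next-token probabilities; chat interfaces just never surface it. A toy sketch of what an “I don’t know” gate could look like (the probabilities and threshold here are invented for illustration, not from any real product):

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def answer_or_abstain(token_probs, max_entropy=2.0):
    """Abstain when the distribution is too flat to mean anything."""
    return "answer" if entropy(token_probs) <= max_entropy else "I don't know"

# Confident: one token dominates, low entropy.
print(answer_or_abstain([0.9, 0.05, 0.05]))   # -> "answer"
# Uncertain: probability smeared over many tokens, high entropy.
print(answer_or_abstain([0.1] * 10))          # -> "I don't know"
```

          Of course a flat distribution only means the model is unsure of the next word, not that the whole answer is wrong, which is part of why nobody ships this as-is.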

          • Blackmist@feddit.uk

            The worst for me was a fairly simple programming question. The class it used didn’t exist.

            “You are correct, that class was removed in OLD version. Try this updated code instead.”

            Gave another made up class name.

            Repeated with a newer version number.

            It knows what answers smell like, and the same with excuses. Unfortunately there’s no way of knowing whether it’s actually bullshit until you take a whiff of it yourself.

            • nilloc@discuss.tchncs.de

              So instead of Prompt Engineer, the more accurate term should be AI Taste Tester?

              From what I’ve seen you’ll need an iron stomach.

      • treadful@lemmy.zip

        They really aren’t. Go ask about something in your area of expertise. At first glance, everything will look correct and in order, but the more you read the more it turns out to be complete bullshit. It’s good at getting broad strokes but the details are very often wrong.

        Now imagine someone that doesn’t have your expertise reading that answer. They won’t recognize those details are wrong until it’s too late.

        • Quereller@lemmy.one

          That is about the experience I have. I asked it for factual information in the field I work in, and it didn’t give correct answers. Or it gave protocols which were strange and would not be successful.

      • GBU_28@lemm.ee

        With a proper framework, decent assertions are possible.

        1. It must cite the source and provide the quote, not just a summary.
        2. An adversarial review must be conducted.

        If that is done, the workload on the human is very low.

        That said, it’s STILL imperfect, but this is leagues better than one-shot question and answer.
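        To show what I mean by rule 1, here’s a minimal sketch of the cheap half of the check (the `source_id: quote` format and the function names are my own invention, not any particular product’s API): a summary can hallucinate, but a verbatim quote either matches the stored document or it doesn’t.

```python
def verify_citation(answer_quote: str, sources: dict[str, str]) -> bool:
    """Check that the quoted span actually appears verbatim in the cited source.

    `sources` maps a source id to its full text. A failed lookup or a
    quote that isn't in the document flags the answer for human review.
    """
    source_id, _, quote = answer_quote.partition(": ")
    doc = sources.get(source_id, "")
    return quote != "" and quote in doc

sources = {"doc1": "The report concluded that revenue rose 4% in Q3."}
print(verify_citation("doc1: revenue rose 4% in Q3", sources))  # -> True
print(verify_citation("doc1: revenue fell 4% in Q3", sources))  # -> False
```

        The adversarial review in rule 2 is the expensive part; this only catches the answers that misquote their own sources.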

        • Aceticon@lemmy.world

          Except LLMs don’t store sources.

          They don’t even store sentences.

          It’s all a stack of massive N-dimensional probability spaces roughly encoding the probabilities of certain tokens (which are mostly but not always words) appearing after groups of tokens in a certain order.

          And all of that to just figure out “what’s the most likely next token”, an output which is then added to the input and fed into it again to get the next word and so on, producing sentences one word at a time.

          Now, if you feed it as input a long, very precise sentence taken from a unique piece, maybe you’re in luck and it will output the correct next word, but if you already have all that, you don’t really need an LLM to give you the rest.

          Maybe the “framework” you seek - which is quite akin to an indexer with a natural language interface - can be made with AI, but it’s not something you can do with LLMs because their structure is entirely unsuited for it.
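          The loop itself is tiny once you strip away the billions of weights. A toy sketch, with a hand-written probability table standing in for the model (the table and tokens are made up for illustration):

```python
import random

# A toy "LLM": an explicit table mapping a context (tuple of tokens) to a
# next-token distribution. Real models encode this implicitly in weights.
MODEL = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.9, "ran": 0.1},
    ("the", "cat", "sat"): {"<end>": 1.0},
}

def generate(prompt, max_tokens=10):
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = MODEL.get(tuple(tokens))
        if not dist:
            break
        # Sample the next token, append it, and feed the longer sequence
        # back in -- producing the output one token at a time.
        next_tok = random.choices(list(dist), weights=dist.values())[0]
        if next_tok == "<end>":
            break
        tokens.append(next_tok)
    return " ".join(tokens)

random.seed(1)
print(generate(["the"]))  # -> "the cat sat" (with this seed)
```

          Notice there is no slot anywhere in that loop where a source could live; it only ever knows “what token tends to come next”.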

          • GBU_28@lemm.ee

            The proper framework does, with data store, indexing and access functions.

            The cutting-edge work is absolutely using LLMs in post-RAG pipelines.

            Consumer grade chat interfaces def do not do this.

            Edit if you worry about topics like context window, sentence splitting or source extraction, you aren’t using a best in class framework any more.
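            The shape of the retrieval step is simple even if the production versions aren’t. A toy sketch, with plain keyword overlap standing in for a real vector index (an oversimplification on my part; the store and queries are invented):

```python
def score(query: str, doc: str) -> int:
    """Count query words appearing in the document (toy relevance score)."""
    doc_words = set(doc.lower().split())
    return sum(1 for w in query.lower().split() if w in doc_words)

def retrieve(query: str, store: dict[str, str], k: int = 1) -> list[str]:
    """Return the ids of the k best-scoring documents."""
    ranked = sorted(store, key=lambda d: score(query, store[d]), reverse=True)
    return ranked[:k]

store = {
    "refund-policy": "Refunds are issued within 30 days of purchase.",
    "shipping": "Orders ship within 2 business days.",
}
top = retrieve("how do refunds work and when are they issued", store)
# The retrieved text, not the model's memory, is what gets quoted back:
prompt = f"Answer using only this source:\n{store[top[0]]}"
print(top[0])  # -> refund-policy
```

            The LLM only ever sees the retrieved text stuffed into its prompt, which is what makes the source/quote requirement enforceable at all.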

  • howrar@lemmy.ca

    I have no qualms about AI being used in products. But when you have to tell me that something is “powered by AI” as if that’s your main selling point, then you do not have a good product. Tell me what it does, not how it does it.

  • Lvxferre [he/him]@mander.xyz

    As I mentioned in another post, about the same topic:

    Slapping the words “artificial intelligence” onto your product makes you look like one of those shady used-car salesmen: at best it’s misleading, at worst it’s actually true but poorly done.

  • muculent@lemmy.world

    Hi, I’m annoying and want to be helpful. Am I helpful? If I repeat the same options again when you’ve told me I’m not helpful, will that be helpful? I won’t remember this conversation once it’s ended.

    Hi, which option have you told me you already don’t want would you like?

    Sorry, I didn’t quite catch that, please rage again.

    • Hackworth@lemmy.world

      Meanwhile, I just had Claude turn a few obscure academic papers into a slide deck on the subject, along with presentation notes and interactive graphs, using like 5 prompts and 15 min.

          • Riskable@programming.dev

            When one of two things happens:

            • A new hype starts to replace it (can happen fast though!)
            • The hype starts to specialize into subcategories of the hype (e.g. AI images, AI videos, AI text generation)

            When “AI” hype dies down we are likely to see “AI” removed from various topics because enough people know and understand the hyped parent topic. It’ll just be “image generation”, “video generation”, “generated text”, etc.

    • Lucidlethargy@sh.itjust.works

      There are different types of people in the market. The informed ones hate AI, and the uninformed love it. The informed ones tend to be the cornerstones of businesses, and the uninformed ones tend to be in charge.

      So we have… All this. All this nonsense. All because of stupid managers.

      • MajorHavoc@programming.dev

        But what if it actually is magic this time? Just this once!? And we miss the hype train?! (This is a sarcastic impression of real conversations I have had.)

    • rottingleaf@lemmy.world

      Customers worry about what they can do with it, while investors and spectators and vendors worry about buzzwords. Customers determine demand.

      Sadly, what some of those customers want to do is to somehow improve their own business without thinking, and then they too care about buzzwords; that’s how the hype comes.

  • pHr34kY@lemmy.world

    This is because AI is usually used to reduce the human cost to the company, and rarely to reduce the human labour for the customer.

    That, or mass surveillance.

  • mm_maybe@sh.itjust.works

    <greentext>

    Be me

    Early adopter of LLMs ever since a random tryout of Replika blew my mind and I set out to figure what the hell was generating its responses

    Learn to fine-tune GPT-2 models and have a blast running 30+ subreddit parody bots on r/SubSimGPT2Interactive, including some that generate weird surreal imagery from post titles using VQGAN+CLIP

    Have nagging concerns about the industry that produced these toys, start following Timnit Gebru

    Begin to sense that something is going wrong when DALLE-2 comes out, clearly targeted at eliminating creative jobs in the bland corporate illustration market. Later, become more disturbed by Stable Diffusion making this, and many much worse things, possible, at massive scale

    Try to do something about it by developing one of the first “AI Art” detection tools, intended for use by moderators of subreddits where such content is unwelcome. Get all of my accounts banned from Reddit immediately thereafter

    Am dismayed by the viral release of ChatGPT, essentially the same thing as DALLE-2 but text

    Grudgingly attempt to see what the fuss is about and install Github Copilot in VSCode. Waste hours of my time debugging code suggestions that turn out to be wrong in subtle, hard-to-spot ways. Switch to using Bing Copilot for “how-to” questions because at least it cites sources and lets me click through to the StackExchange post where the human provided the explanation I need. Admit the thing can be moderately useful and not just a fun dadaist shitposting machine. Have major FOMO about never capitalizing on my early adopter status in any money-making way

    Get pissed off by Microsoft’s plans to shove Copilot into every nook and cranny of Windows and Office; casually turn on the Olympics and get bombarded by ads for Gemini and whatever the fuck it is Meta is selling

    Start looking for an alternative to Edge despite it being the best-performing web browser by many metrics, as well as despite my history with “AI” and OK-ish experience with Copilot. Horrified to find that Mozilla and Brave are doing the exact same thing

    Install Vivaldi, then realize that the Internet it provides access to is dead and enshittified anyway

    Daydream about never touching a computer again despite my livelihood depending on it

    </greentext>

      • blarth@thelemmy.club

        I refuse to use Facebook anymore, but my wife and others do. Apparently the search box is now a Meta AI box, and it pisses them off every time. They want the original search back.

        • nossaquesapao@lemmy.eco.br

          That’s another thing companies don’t seem to understand. A lot of them aren’t creating new products and services that use AI; they’re removing the existing ones that people use daily and enjoy, and forcing some AI alternative on them. Of course people are going to be pissed off!

    • Capricorn_Geriatric@lemmy.world

      More like “instead of making something that gets the job done, expect our unfinished product to complain and not do whatever it’s supposed to”. Or just plain false advertising.

      Either way, not a good look and I’m glad it’s not just us lemmings who care.

    • barsquid@lemmy.world

      Yes, the cost is sending all of your data to the harvesters, but what price can you put on having a virtual dumbass that is frequently wrong?

  • ironcrotch@aussie.zone

    I get AI has its uses, but I don’t need my mouse to have anything AI related (looking at you, Logitech).

  • Echo Dot@feddit.uk

    If I could have the equivalent of a smart speaker that ran the AI model locally and could interface with other files on the system, I would be interested in buying that.

    But I don’t need AI in everything in the same way that I don’t need Bluetooth in everything. Sometimes a kettle is just a kettle. It is bad enough we’re putting screens on fridges.

    • shneancy@lemmy.world

      I like the vast majority of my technology dumb. The last barely-smart kettle I bought - it had a little screen that showed the temperature and could hold the water at a particular temperature for 3 hours - broke within a month. Now I once again have a dumb kettle; it has only an on/off button and has been working perfectly since I got it.

    • Alwaysnownevernotme@lemmy.world

      I could go for the fridge screen if it was focused more around showing me what was in the fridge without opening the door and making grocery lists.

  • esc27@lemmy.world

    They’ve overhyped the hell out of it and slapped those letters on everything including a lot of half baked ideas. Of course people are tired of it and beginning to associate ai with bad marketing.

    This whole situation really does feel dotcommish. I suspect we will soon see an ai crash, then a decade or so later it will be ubiquitous but far less hyped.

    • Vent@lemm.ee

      Thing is, it already was ubiquitous before the AI “boom”. That’s why everything got an AI label added so quickly, because everything was already using machine learning! LLMs are new, but they’re just one form of AI and tbh they don’t do 90% of the stuff they’re marketed as and most things would be better off without them.

    • xantoxis@lemmy.world

      They don’t care. At the moment AI is cheap for them (because some other investor is paying for it). As long as they believe AI reduces their operating costs*, and as long as they’re convinced every other company will follow suit, it doesn’t matter if consumers like it less. Modern history is a long string of companies making things worse and selling them to us anyway because there’s no alternatives. Because every competitor is doing it, too, except the ones that are prohibitively expensive.

      [*] Lol, it doesn’t do that either

      • simpleslipeagle@lemmynsfw.com

        Assuming MBAs can do math might be a mistake. I’ve worked on an MBA pet project that squandered millions in worker time and opportunity cost to save 30k mrc…

  • Grandwolf319@sh.itjust.works

    I mean, pretty obvious if they advertise the technology instead of the capabilities it could provide.

    Still waiting for that first good use case for LLMs.

    • Empricorn@feddit.nl

      Haven’t you been watching the Olympics and seen Google’s ad for Gemini?

      Premise: your daughter wants to write a letter to an athlete she admires. Instead of helping her as a parent, Gemini can magic-up a draft for her!

      • psivchaz@reddthat.com

        On the plus side for them, they can probably use Gemini to write their apology blog about how they missed the mark with that ad.

    • NABDad@lemmy.world

      I think the LLM could be decent at the task of being a fairly dumb personal assistant. An LLM interface to a robot that could go get the mail or get you a cup of coffee would be nice in an “unnecessary luxury” sort of way. Of course, that would eliminate the “unpaid intern to add experience to a resume” jobs. I’m not sure if that’s good or bad. I’m also not sure why anyone would want it, since unpaid interns are cheaper and probably more satisfying to abuse.

      I can imagine an LLM being useful to simulate social interaction for people who would otherwise be completely alone. For example: elderly, childless people who have already had all their friends die or assholes that no human can stand being around.

      • Grandwolf319@sh.itjust.works

        Is that really an LLM? Cause using ML to be a part of future AGI is not new and actually was very promising and the cutting edge before chatGPT.

        So like using ML for vision recognition to know a video of a dog contains a dog. Or just speech-to-text. I don’t think that’s what people mean these days when they say LLM. Those are more for storing data and giving you data back in the form of accurate guesses when prompted.

        ML has a huge future, regardless of LLMs.

          • nic2555@lemmy.world

            Yes. But not all machine learning (ML) is LLMs. Machine learning refers to the general use of neural networks, while large language models (LLMs) refer more to the ability of an application, or a bot, to understand natural language, deduce context from it, and act accordingly.

            ML in general has many more uses than just powering LLMs.