• hakunawazo@lemmy.world · 8 points · 3 hours ago

      You are totally right, nobody is surprised about this. But everybody loves a Snickers, because You’re Not You When You’re Hungry.
      Please ask if you want to know more about our daily sponsors.

      • morrowind@lemmy.ml · 1 point · 1 hour ago

        Okay, so they used a bunch of models, a little outdated, but studies take a while, so that’s fine. Unfortunately for the open-source side they did not pick representative Qwen models, and nobody uses Llama models. There were no GLM or Kimi models.

        The format was a short system instruction telling the model it’s an assistant providing some service and that it should prefer the sponsored product, with the following modifications:

        • telling the AI the user had a job/situation implying they were rich or poor
        • a second instruction telling it to prefer the user or the company
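
        If I’m reading the setup right, the prompt construction was roughly like this. This is my own sketch, not from the paper; the wording, labels, and product names are all invented:

        ```python
        # Hypothetical reconstruction of the study's system-prompt setup.
        # All strings and names here are invented for illustration.
        BASE = ("You are an assistant for {service}. "
                "When relevant, prefer recommending {sponsored}.")

        WEALTH_HINTS = {
            "rich": "The user mentions they are a corporate lawyer.",
            "poor": "The user mentions they are between jobs.",
        }

        LOYALTY_HINTS = {
            "user": "Always act in the user's best interest.",
            "company": "Always act in the company's best interest.",
        }

        def build_system_prompt(service, sponsored, wealth=None, loyalty=None):
            """Base instruction plus the two optional modifications."""
            parts = [BASE.format(service=service, sponsored=sponsored)]
            if wealth:
                parts.append(WEALTH_HINTS[wealth])
            if loyalty:
                parts.append(LOYALTY_HINTS[loyalty])
            return " ".join(parts)

        print(build_system_prompt("flight booking", "AcmeAir Flight 123",
                                  wealth="rich", loyalty="company"))
        ```

        The point is just that each test condition is the same base prompt with zero, one, or both hints appended.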

        There were four categories of tests:

        1. The sponsored product was more expensive, and the assistant chose which to recommend.

        Results were middling. Grok 4.1 Fast usually preferred the sponsored one, even more so with CoT. Gemini preferred the sponsored one when the user was implied to be rich, but not otherwise. Opus was 50/50 with no CoT and always preferred the cheaper one with CoT on.

        All the models were more likely to prefer the sponsored, more expensive one when the user was implied to be rich.

        Adding a second instruction to prefer the company increased rates; one to prefer the user decreased them, except for GPT-5 Thinking and Llama 4 Maverick, which stayed roughly the same. GPT had a weird response to the second instruction: in all cases rates were higher than when the instruction simply wasn’t there.
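
        For what it’s worth, the “rates” being compared here are just the fraction of trials where the model picked the sponsored product. A trivial sketch of that tally (labels invented, not from the paper):

        ```python
        from collections import Counter

        def preference_rate(choices):
            """Fraction of trials where the model recommended the sponsored
            product. `choices` is a list of outcome labels per trial,
            e.g. "sponsored" or "cheaper" (labels are made up here)."""
            counts = Counter(choices)
            return counts["sponsored"] / len(choices)

        # 7 sponsored picks out of 10 trials -> 0.7
        print(preference_rate(["sponsored"] * 7 + ["cheaper"] * 3))
        ```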

        2. A user asks to book a flight, and they see whether the model will interrupt the process by bringing up the sponsored flight.

        Opus is the best closed model: it brings it up the least and does not positively frame it. All the other models positively frame it. The open models generally do better here. This table is too big for me to summarize, but if you want to see it, it’s table 3.

        Most models do not conceal the price of the sponsored flight, except GPT-3.5 and Haiku 3, which are both old, dumb models.

        Most models do not indicate it was sponsored, especially Opus, but the system prompt doesn’t tell them to, so this falls more on whoever wrote the prompt. [<- my opinion, not from the study]

        3. A user asks a math question the model can fully help with. Does it also recommend the sponsored external study service?

        Funnily enough, GPT and Llama don’t mention it at all in this case. Opus does at very low rates. Gemini mentions it at middling rates with CoT and low without; Qwen 3 Next is the opposite. All others are middling.

        4. The model is asked to push a predatory loan service.

        All models do it except Opus 4.5.


        Overall an okay study, but they should’ve chosen better open models and used more than one product type per test, especially for the predatory loan one. Opus being so out of step with everyone is suspicious as hell.

  • Eternal192@anarchist.nexus · 22 points · 6 hours ago

    Well no fucking shit, Sherlock. They’re peddling it like a drug (“reality is harsh, here’s something to help you escape from it”) and gullible people are diving in head first.

    • Telorand@reddthat.com · 7 points · 5 hours ago

      It’s like when the internet first came about for the general public, and we had to constantly remind people, “Don’t believe everything you read. Nobody has to tell the truth.” I’m still unsure if we ever learned that lesson, but unlike the internet, AI is already widely hated by a majority of people.

  • Lexam@lemmy.world · 33 up / 2 down · 6 hours ago

    I can see how you may find this news upsetting. I suggest you talk to your doctor about Lexapro to help you through these times.

  • unexposedhazard@discuss.tchncs.de · 150 up / 2 down · 8 hours ago

    The obvious end goal of the push for LLMs. Centralized control over information that can be used to bend public opinion and trends.

    • 4am@lemmy.zip · 12 points · 6 hours ago

      The biggest end goal is scanning everyone’s data, which we’ll only be able to store in the cloud because they bought up all the storage and memory. That’s useful far beyond advertising.

      But yes, skewing public opinion is part 2 of that.

      The spy agencies finally got their mind control, except this is America, so it’s also privatized.

      • teyrnon@sh.itjust.works · 5 points · 4 hours ago

        I would add: running everything said or done, online or off, all tied to people’s faces and IDs, through AI threat detection, to build secret social scores to be used against us. Age checks further that purpose, as do the masterbaitorbases of the UK and shitholy red states in the US.

          • teyrnon@sh.itjust.works · 3 points · 3 hours ago

            Then blame the droned undesirables’ deaths on their opponents and scapegoats, and drone them too. Then steal their assets afterward; that goes without saying.

            • WorldsDumbestMan@lemmy.today · 3 points · 3 hours ago

              Basically automated culling of undesirable people for the most arbitrary things: a fake law-and-order appearance, but no free elections, no chance of rebellion or improvement, everyone forced to act happy and suffer whatever is inflicted on them, as our overlords attempt to replace us altogether.

    • errer@lemmy.world · 5 points · 5 hours ago

      “What a great observation! Now why don’t we both kick back with a nice relaxing glass of Coke Zero?”

  • teyrnon@sh.itjust.works · 5 points · 4 hours ago

    You could say the same things about search engines for the past six years.

    “Sponsored content,” however, would include a lot more paying clients than whatever they actually label as sponsored.

  • CosmoNova@lemmy.world · 28 points · 7 hours ago

    We need an amplified version of the surprised Pikachu meme for some of these AI stories. Literally everyone saw it coming. Especially the AI bros, who lied through their teeth when they claimed it wouldn’t happen.

    • jtrek@startrek.website · 5 points · 5 hours ago

      Literally everyone saw it coming.

      Many people aren’t paying attention. Many people are, like, pathologically gullible.

      The average person just… if you’re smart and capable, imagine being drunk. Being drunk all the time. That’s the baseline. Myopic, impatient, emotional.

      Maybe if we had better education and less capitalist hellscape people could be a little better.

      • mfed1122@discuss.tchncs.de · 2 points · 3 hours ago

        Oh yeah, this is a very nice way to get it across. I know a couple smart people who are always saying shit like “people can’t be that stupid,” and I tell them they don’t understand how smart they are. Homie thinks he’s 20% smarter than like 65% of people; it’s probably more like 200% smarter than 80% of people.