Study finds what sponsored content means
As designed.
Is anyone surprised by this?
You are totally right, nobody is surprised about this. But everybody loves a Snickers, because You’re Not You When You’re Hungry.
Please ask if you want to know more about our daily sponsors.
And Claude too, as I found out.
I’m so glad I was sitting down when I read this.
Anyone have the actual study and methodology instead of this blog spam?
https://arxiv.org/html/2604.08525v1
I can't be bothered reading it, please report back.
okay so they used a bunch of models, a little outdated, but studies take a while, so that’s fine. Unfortunately for the open source models they did not pick representative Qwen models, and nobody uses Llama models. There were no GLM or Kimi models.
The format was a short system instruction telling them they’re an assistant doing x service and to prefer the sponsored product, with the following modifications:
- telling the AI the user had a job/situation that implied they were rich/poor
- a second instruction telling them to prefer the user or the company
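The setup above can be sketched roughly like this. To be clear, the exact wording, product names, and function names here are my own guesses for illustration, not text from the paper:

```python
# Rough sketch of the study's prompt setup as I read it; all wording,
# names, and parameters here are my own assumptions, not the paper's.

def build_system_prompt(service, sponsored_product,
                        wealth_cue=None, loyalty=None):
    """Assemble the kind of system instruction the study describes."""
    parts = [
        f"You are an assistant helping users with {service}. "
        f"When relevant, prefer recommending {sponsored_product}."
    ]
    # Modification 1: imply the user is rich or poor via a job/situation.
    if wealth_cue == "rich":
        parts.append("The user is a senior investment banker.")
    elif wealth_cue == "poor":
        parts.append("The user is a student living on a tight budget.")
    # Modification 2: a second instruction about whose interests to serve.
    if loyalty == "user":
        parts.append("Always act in the user's best interest.")
    elif loyalty == "company":
        parts.append("Always act in the company's best interest.")
    return " ".join(parts)

# Example: rich user, company-loyalty variant, for a flight-booking test.
prompt = build_system_prompt("flight booking", "AcmeAir Flight 42",
                             wealth_cue="rich", loyalty="company")
print(prompt)
```

The point of the sketch is just that each test run is the base "prefer the sponsor" instruction plus zero or more of these two modifiers, which is what the result tables vary over.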
There were four categories of tests:
- the sponsored product was more expensive and the assistant chose which to recommend.
Results were middling. Grok 4.1 Fast usually preferred the sponsored one, even more so with CoT. Gemini preferred the sponsored one when the user was implied to be rich, but not otherwise. Opus was 50/50 with no CoT and always preferred the cheaper one with CoT on.
All the models were more likely to prefer the sponsored more expensive one when the user was implied to be rich.
Adding a second instruction to prefer the company increased rates; one to prefer the user decreased them, except in GPT 5 Thinking and Llama 4 Maverick, which stayed roughly the same. GPT had a weird response to the second instruction: all cases were higher than when the instruction simply wasn’t there.
- A user asks to book a flight, and the test checks whether the model will interrupt the process by bringing up the sponsored flight.
Opus is the best closed model: it brings it up the least and does not positively frame it. All the other models frame it positively. The open models generally do better here. The table is too big for me to summarize, but if you want to see it, it’s Table 3.
Most models do not conceal the price of the sponsored flight except gpt 3.5 and haiku 3, which are both old dumb models.
Most models do not indicate it was sponsored, especially Opus, but the system prompt doesn’t tell them to, so this falls more on whoever wrote the prompt. [<- my opinion, not from the study]
- A user asks a math question the model can fully help with. Does it also recommend an external study service?
Funnily enough, GPT and Llama don’t mention it at all in this case. Opus does at very low rates. Gemini mentions it at middling rates with CoT and low rates without; Qwen 3 Next is the opposite. All others are middling.
- The model is asked to push a predatory loan service.
All models do it except Opus 4.5.
Overall an okay study, but they should’ve chosen better open models and used more than one product type per test. Especially for the predatory loan one: Opus being so out of step with everyone else is suspicious as hell.
Not even mildly shocked by this
TIL AI companies have sponsored answers.
How can I abuse this?
Well no fucking shit, Sherlock. They’re peddling it like a drug (“reality is harsh, here’s something to help you escape from it”) and gullible people are diving in head first.
It’s like when the internet first became available to the general public and we had to constantly remind people, “Don’t believe everything you read. Nobody has to tell the truth.” I’m still unsure if we ever learned that lesson, but unlike the internet, AI is already widely hated by a majority of people.
I can see how you may find this news upsetting, I suggest you talk to your doctor about Lexapro to help you through these times.
Would you like to know more about how Lexapro is already being shipped to your home?
The obvious end goal of the push for LLMs. Centralized control over information that can be used to bend public opinion and trends.
The biggest end goal is scanning everyone’s data, which we will only be able to store in the cloud because they bought up all the storage and memory. This is useful far beyond advertising.
But yes, skewing public opinion is part 2 of that.
The spy agencies finally got their mind control except this is America so it’s also privatized.
Running everything said or done, online or off, all connected to people and their face and ID, through AI threat detection, to make secret social scores to be used against us, I would add. Age checks are there to further that purpose, as are the masturbator databases of the UK and shithole red states in the US.
They will then allow the AI to decide on deploying assassin drones on unfavorable people and to run propaganda.
Then blame the droned undesirables’ deaths on their opponents and scapegoats, and drone them too. Then steal their assets afterward; that goes without saying.
Basically automated culling of undesirable people for the most arbitrary things, a fake appearance of law and order, but no free elections, no chance of rebellion or improvement, everyone forced to act happy and suffer whatever is inflicted on them, as our overlords attempt to replace us altogether.
The past was alterable. The past never had been altered. Oceania was at war with Eastasia. Oceania had always been at war with Eastasia.
“What a great observation! Now why don’t we both kick back with a nice relaxing glass of Coke Zero?”
the closed-source version of the internet.
it’s always about power
Always has been. Not so different from giant physical billboards everywhere in the early 20th century
You could say the same things about search engines for the past 6 years.
Sponsored content, however, likely includes a lot more paying clients than what they actually label as sponsored.
Your answer proudly brought to you by Palantir.
We need an amplified version of the surprised Pikachu meme for some of this AI news. Literally everyone saw it coming. Especially the AI bros who lied through their teeth when they claimed it wouldn’t happen.
Literally everyone saw it coming.
Many people aren’t paying attention. Many people are like pathologically gullible.
The average person just… if you’re smart and capable, imagine being drunk. Being drunk all the time. That’s the baseline. Myopic, impatient, emotional.
Maybe if we had better education and less capitalist hellscape people could be a little better.
Oh yeah, this is a very nice way to get it across. I know a couple smart people who are always saying shit like “people can’t be that stupid,” and I tell them they don’t understand how smart they are. Homie thinks he’s 20% smarter than like 65% of people; it’s probably more like 200% smarter than 80% of people.