Companies are going all-in on artificial intelligence right now, investing millions or even billions into the area while slapping the AI initialism on their products, even when doing so seems strange and pointless.
Heavy investment and increasingly powerful hardware tend to mean more expensive products. To discover if people would be willing to pay extra for hardware with AI capabilities, the question was asked on the TechPowerUp forums.
The results show that over 22,000 people, a massive 84% of the overall vote, said no, they would not pay more. More than 2,200 participants said they didn’t know, while just under 2,000 voters said yes.
someone tried to sell me a fucking AI fridge the other day. Why the fuck would I want my fridge to “learn my habits?” I don’t even like my phone “learning my habits!”
Why does a fridge need to know your habits?
It has to keep the food cold all the time. The light has to come on when you open the door.
What could it possibly be learning?
Hi Zron, you seem to really enjoy eating shredded cheese at 2:00am! For your convenience, we’ve placed an order for 50lbs of shredded cheese based on your rate of consumption. Thanks!
We also took the liberty of canceling your health insurance to help protect the shareholders from your abhorrent health expenses in the far future
If your fridge spies on you, certain people can get better insight into how healthy your eating habits are, how organized you are, how often things go bad and get thrown out, and what medicine (the kind that has to be kept cold) you keep in there and how often you use it.
That will then affect your insurance, your credit rating, and possibly many other ratings other people are interested in.
I wish products followed your lead and had no AI features, 1995 Toyota Corolla :/
- Know when you’re about to put groceries in and pre-cool, so the added heat doesn’t make things go bad.
- Know when you don’t use it and let it get a tiny bit warmer to save a teeny bit of power. (The vast majority of power is cooling new items, not keeping things cold though.)
- Tell you where things are?
- Ummm… Maybe give you an optimized layout of how to store things?
- Be an attack vector on your home’s wifi
- Wait, no, uh,
- Push notifications
- Do you not have phones?
So I can see what you like to eat, then it can tell your grocery store, then your grocery store can raise the prices on those items. That’s the point. It’s the same thing with those memberships and coupon apps. That’s the end goal.
They can see what you like to eat by what you’re buying, LOL. No, not this.
A fridge can give them information on how you eat.
And it would improve your life zero. That is what is absurd about LLMs in their current iteration: they provide almost no benefit to the vast majority of people.
All a learning model would do for a fridge is send you advertisements for whatever garbage food is on sale. Could it make recipes based on what you have? Tell it you want to slowly get healthier and have it assist with grocery selection?
Nah, fuck you and buy stuff.
I still want this fridge. (Source)
it doesn’t seem all that hard to make, as long as you don’t mind the severely reduced flexibility in capacity and glass bottles shattering against each other at the bottom
always xkcd
To remind you when you should go buy groceries haha
I’m still pissed about the fact that I can’t buy a reasonably priced TV that doesn’t have WiFi. I should never have left my old LG Plasma bolted to the wall of my previous house when I sold it. That thing had a fantastic picture and doubled as a space heater in the winter.
Projector gang checking in 🤓📽️
Everything alright here?
You can always join us in the peaceful realm of select input.
(there are still WiFi-free options)
I want AI in my fridge for sure. Grocery shopping sucks. Forgetting how old something was sucks. Letting all the cool out to crawl around to see what I have sucks.
I want my fridge to be like the Sims: just get deliveries or pick up the order, fill it out, and get told what ingredients I have. Bonus points if it can just tell me what recipes I can cook right now, even better if I can ask for a time frame.
That would be sick!
Still not going to give ecorp all of my data or put some half-baked internet-of-things device on my WiFi for it. But it would be cool.
Jian-Yang wants a smart fridge. To make you feel bad. Because you’re fat and you’re poor.
…just under 2,000 voters said “yes.”
And those people probably work in some area related to LLMs.
It’s practically a meme at this point:
Nobody:
Chip makers: People want us to add AI to our chips!
The even crazier part to me is that some chip makers we were working with pulled out of guaranteed projects with reasonably decent revenue to chase AI instead.
We had to redesign our boards, and they paid us the penalties in our contract for not delivering, so they could put more of their fab time toward AI.
That’s absolutely crazy. Taking the Chicago School MBA philosophy to things as time-consuming and expensive to set up as silicon production.
This is one of those weird things that venture capital does sometimes.
VC is injecting cash into tech right now at obscene levels because they think that AI is going to be hugely profitable in the near future.
The tech industry is happily taking that money and using it to develop what they can, but it turns out the majority of the public don’t really want the tool if it means they have to pay extra for it. Especially in its current state, where the information it spits out is far from reliable.
I don’t want it outside of heavily sandboxed and limited-scope applications. I don’t get why people want an agent of chaos fucking with all the files and systems they’ve cobbled together.
NDAs also legally prevent you from using this forced garbage too. Companies are going to get screwed over by other companies; capitalism is gonna implode, hopefully
I have to endure a meeting at my company next week to come up with ideas on how we can wedge AI into our products because the dumbass venture capitalist firm that owns our company wants it. I have been opting not to turn on video because I don’t think I can control the cringe responses on my face.
Back in the 90s in college I took a Technology course, which discussed how technology has historically developed, why some things are adopted and other seemingly good ideas don’t make it.
One of the things that is required for a technology to succeed is public acceptance. That is why AI is doomed.
AI is not doomed. LLMs, or consumer AI products, might be.
In industry, AI is and will be used (though probably not LLMs, except in a few niche use cases).
Yeah, I mean the AI being shoveled at us by techbros. Actual ML stuff is currently, and will continue to be, useful for all sorts of not-sexy but vital research and production tasks. I do task automation for my job and I use things like transcription models and OCR; my company uses smart sorting with rapid image recognition and other really cool ways of getting computers to do things that humans are bad at. It’s things like LLMs that just aren’t there - yet. I have seen very early research on AI that is trained to actually understand language and learns by context. It’s years away, but eventually we might see AI that really can do what the current AI companies are claiming.
There’s really no point unless you work in specific fields that benefit from AI.
Meanwhile every large corpo tries to shove AI into every possible place they can. They’d introduce ChatGPT to your toilet seat if they could
“Shits are frequently classified into three basic types…” and then gives 5 paragraphs of bland guff
With how much scraping of reddit they do, there’s no way it doesn’t try ordering a poop knife off of Amazon for you.
It’s seven types, actually, and it’s called the Bristol scale, after the Bristol Royal Infirmary where it was developed.
I know. But I was satirising GPT’s bland writing style, not providing facts
Imagining a chatgpt toilet seat made me feel uncomfortable
Aw maaaaan. I thought you were going to link that youtube sketch I can’t find anymore. Hide and go poop.
Don’t worry, if Apple does it, it will sell like fresh cookies worldwide
Idk, they can’t even sell VR.
Someone did a demo recently of AI acceleration for 3D upscaling (think DLSS or AMD’s equivalent) and it showed a nice boost in performance. It could be useful in the future.
I think it’s kind of like ray tracing. We don’t have a real use for it now, but eventually someone will figure out something it’s actually good for and use it.
AI acceleration for 3d upscaling
Isn’t that not only similar to, but exactly what DLSS already is? A neural network that upscales games?
But instead of relying on the GPU to power it, the dedicated AI chip did the work. Like, it had its own distinct chip on the graphics card that would handle the upscaling.
I forget who demoed it, and searching for anything related to “AI” and “upscaling” gets buried with just what they’re already doing.
That’s already the nvidia approach, upscaling runs on the tensor cores.
And no, it’s not something magical, it’s just matrix math. AI workloads are lots of convolutions on gigantic, low-precision, floating-point matrices. Low-precision because neural networks are robust against random perturbation, and more rounding is exactly that: random perturbations. There’s no point in spending electricity and heat on high precision if it doesn’t make the output any better.
The kicker? Those tensor cores are less complicated than ordinary GPU cores. For general-purpose hardware, and that also includes consumer-grade GPUs, it’s way more sensible to make sure the ALUs can deal with 8-bit floats and leave everything else the same. That stuff is going to be standard by the next generation of even potatoes: every SoC with an included GPU has enough oomph to sensibly run reasonable inference loads. And by “reasonable” I mean actually quite big; as far as I’m aware, e.g. Firefox’s inbuilt translation runs on the CPU, the models are small enough.
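The robustness claim above can be sketched in a few lines of NumPy. This is a toy illustration, not a real fp8 format: it rounds a weight matrix to 256 levels (8-bit-style symmetric quantization) and shows the matmul output barely moves.

```python
import numpy as np

# Toy sketch: quantize a weight matrix to 8-bit integers and show the
# matrix-vector product barely changes -- rounding acts like the small
# random perturbation neural networks are robust against.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)
x = rng.standard_normal(512).astype(np.float32)

scale = np.abs(W).max() / 127.0          # symmetric 8-bit quantization
W_q = np.round(W / scale).astype(np.int8)
W_deq = W_q.astype(np.float32) * scale   # dequantize back to float

y_full = W @ x
y_quant = W_deq @ x

rel_err = np.linalg.norm(y_full - y_quant) / np.linalg.norm(y_full)
print(f"relative error after 8-bit rounding: {rel_err:.4f}")
```

The relative error lands around a percent, which is noise compared to what the network itself tolerates; that's why spending silicon on high precision buys nothing here.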
Nvidia OTOH is very much in the market for AI accelerators and figured it could corner the upscaling market and sell another new generation of cards by making their software rely on those cores even though it could run on the other cores. As AMD demonstrated, their stuff also runs on nvidia hardware.
What’s actually special sauce in that area are the RT cores, that is, accelerators for ray casting though BSP trees. That’s indeed specialised hardware but those things are nowhere near fast enough to compute enough rays for even remotely tolerable outputs which is where all that upscaling/denoising comes into play.
Found it.
I can’t find a picture of the PCB though, that might have been a leak pre reveal and now that it’s revealed good luck finding it.
Having to send full frames off of the GPU for extra processing has got to come with some extra latency/problems compared to just doing it on the GPU… and I’d be shocked if they have motion vectors and other engine stuff that DLSS has, which would require the games to be specifically modified for this adaptation. IDK, but I don’t think we have enough details about this to really judge whether it’s useful or not, although I’m leaning toward ‘not’ for this particular implementation. They never showed any actual comparisons to DLSS either.
As a side note, I found this other article on the same topic where they obviously didn’t know what they were talking about and mixed up frame rates and power consumption; it’s very entertaining to read
The NPU was able to lower the frame rate in Cyberpunk from 263.2 to 205.3, saving 22% on power consumption, and probably making fan noise less noticeable. In Final Fantasy, frame rates dropped from 338.6 to 262.9, resulting in a power saving of 22.4% according to PowerColor’s display. Power consumption also dropped considerably, as it shows Final Fantasy consuming 338W without the NPU, and 261W with it enabled.
Nvidia’s tensor cores are inside the GPU; this was outside the GPU, but on the same card (the PCB looked like an abomination). If I remember right, in total it used slightly less power but performed about 30% faster than normal DLSS.
from the articles I’ve found it sounds like they’re comparing it to native…
We have plenty of real uses for ray tracing right now, from blender to whatever that avatar game was doing to lumen to partial rt to full path tracing, you just can’t do real time GI with any semblance of fine detail without RT from what I’ve seen (although the lumen sdf mode gets pretty close)
although the rt cores themselves are more debatably useful, they still give a decent performance boost most of the time over “software” rt
Which would be appropriate, because with AI, there’s nothing but shit in it.
And what do the companies take away from this? “Cool, we just won’t leave you any other options.”
Plenty of companies offer sane, normal solutions and make bank in the process
History has shown that not to be the case.
I don’t mind the hardware. It can be useful.
What I do mind is the software running on my PC sending all my personal information, screenshots, and keystrokes to a corporation that will use all of it for profit: building a user profile, sending targeted advertising, and potentially using it against me.
Any “AI” hardware you buy today will be obsolete so fast it will make your dick bleed
No, but I would pay good money for a freely programmable FPGA coprocessor.
If the AI chip is implemented as one, and is useful for other things I’m sold.
I think manufacturers need to get a lot more creative about simplified computing. The RPi Pico’s GPIO engine is powerful yet simple, and a good example of what is possible with some good application analysis and forethought.
I have a few Pi Picos but I didn’t know about that. Can you please elaborate? I’ve been using them just like any other ESP32/STM32/ESP8266 I have
Which part of the Pico are you referring to specifically? Never heard the term “GPIO engine” before. Is that sort of like the USB stack but for GPIO?
I think they meant PIO (programmable IO). It’s like a small processor tied to some of the IO pins. There’s a very small set of instructions and some state machines.
It can be used to implement your own IO protocols without worrying about the issues that come with bit-banging from the cpu.
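The PIO idea can be sketched as a toy Python model: a tiny state machine with its own little instruction set drives a pin pattern on its own, cycle by cycle, so the main CPU never has to bit-bang. The instruction names here are made up for illustration; the real RP2040 PIO instruction set is jmp/wait/in/out/push/pull/mov/irq/set.

```python
# Toy model of the Pico's PIO: a state machine with its own program
# toggles an output pin, independent of the "CPU". Hypothetical
# instruction set, just to show the shape of the idea.

class ToyStateMachine:
    def __init__(self, program):
        self.program = program
        self.pc = 0          # program counter
        self.pin = 0         # output pin level
        self.trace = []      # pin level recorded every cycle

    def step(self):
        op, arg = self.program[self.pc]
        if op == "set":      # drive the pin high or low
            self.pin = arg
        elif op == "jmp":    # loop back, like PIO's jmp
            self.pc = arg
            self.trace.append(self.pin)
            return
        self.pc += 1
        self.trace.append(self.pin)

# A repeating pulse: set high, set low, jump back to the start.
sm = ToyStateMachine([("set", 1), ("set", 0), ("jmp", 0)])
for _ in range(9):
    sm.step()
print(sm.trace)
```

On real hardware the same structure runs at up to the system clock with deterministic timing, which is exactly what makes bit-banged protocols from the CPU so painful by comparison.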
They want you to buy the hardware and pay for the additional energy costs so they can deliver clippy 2.0, the watching-you-wank-edition.
If you unbend him, clippy could be very useful 🍆📎
God damn you.
Well, NPUs are not on par with modern GPUs. A general GPU has more power than most NPUs, but when you look at what the electricity costs, you see that NPUs are way more efficient at AI tasks (which are not only chatbots).
I wouldn’t even pay less.
I would pay less, and then either use it for dumb stuff or just not use it at all.
84% said no.
16% punched the person asking them for suggesting such a practice. So they also said no. With their fist.
It’s bad enough they shove it on you on some websites. Really not interested in being their lab rats
✨chat assistants✨
I honestly have no idea what AI does to a processor, and would therefore not pay extra for the badge.
If it provided a significant speed improvement or something, then yeah, sure. Nobody has really communicated to me what the benefit is. It all seems like hand waving.
What they mean is that they are putting in dedicated processors or other hardware just to run an LLM. It doesn’t speed up anything other than the faux-AI tool they are implementing.
LLMs require a ton of math that is better suited to video processors than to the general-purpose CPU in most machines.
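Concretely, that math is dominated by big matrix multiplies, and every output element of a matmul is an independent dot product; a quick NumPy sketch (sizes are illustrative, loosely transformer-shaped):

```python
import numpy as np

# A transformer layer is dominated by matrix multiplies like this one.
# Every output element is an independent dot product, which is why GPUs
# (thousands of simple multiply-add units) chew through it so well.
rng = np.random.default_rng(1)
hidden = rng.standard_normal((1, 4096))      # one token's activation
weights = rng.standard_normal((4096, 4096))  # one projection matrix

out = hidden @ weights                       # 4096 independent dot products

# Any single element could have been computed on its own core:
assert np.allclose(out[0, 7], hidden[0] @ weights[:, 7])
print(out.shape)
```

A CPU grinds through those dot products a handful at a time; a GPU or NPU does thousands at once, which is the whole pitch for the dedicated hardware.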
I honestly have no idea what AI does to a processor
Parallel processing capability. CPUs historically worked with mostly-non-massively-parallelizable tasks; maybe you’d use a GPU if you wanted that.
I mean, that’s not necessarily “AI” as such, but LLMs are a neat application that uses them.
On-CPU video acceleration does parallel processing too.
Software’s going to have to parallelize if it wants to get much by way of performance improvements, anyway. We haven’t been seeing rapid exponential growth in serial computation speed since the early 2000s. But we can get more parallel compute capacity.
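The "parallelize or stall" point can be sketched in a few lines: split an embarrassingly parallel job across workers instead of hoping for a faster single core. This uses threads to keep the example self-contained; genuinely CPU-bound Python would reach for processes instead.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    # Sum of squares over a half-open range [lo, hi).
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # Carve [0, n) into one chunk per worker; the last chunk
    # absorbs any remainder from integer division.
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

assert parallel_sum_of_squares(10_000) == sum(i * i for i in range(10_000))
print("parallel result matches serial")
```

The structure is the point: once work is expressed as independent chunks, adding cores scales it, which is exactly the capacity hardware vendors are still able to grow.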
That’s kind of abstract. Like, nobody pays purely for hardware. They pay for the ability to run software.
The real question is, would you pay $N to run software package X?
Like, go back to 2000. If I say “would you pay $N for a parallel matrix math processing card”, most people are going to say “no”. If I say “would you pay $N to play Quake 2 at resolution X and fps Y and with nice smooth textures,” then it’s another story.
I paid $1k for a fast GPU so that I could run Stable Diffusion quickly. If you asked me “would you pay $1k for an AI-processing card” and I had no idea what software would use it, I’d probably say “no” too.
Yup, the answer is going to change real fast when the next Oblivion with NPCs you can talk to needs this kind of hardware to run.
I’m still not sold that dynamic text generation is going to be the major near-term application for LLMs, much less in games. Like, don’t get me wrong, it’s impressive what they’ve done. But I’ve also found it to be the least-practically-useful of the LLM model categories.

Like, you can make real, honest-to-God solid usable graphics with Stable Diffusion. You can do pretty impressive speech generation in TortoiseTTS. I imagine that someone will make a locally-runnable music LLM model and software at some point if they haven’t yet; I’m pretty impressed with what the online services do there.

I think that there are a lot of neat applications for image recognition; the other day I wanted to identify a tree and seedpod. Someone hasn’t built software to do that yet (that I’m aware of), but I’m sure that they will; the ability to map images back to text is pretty impressive. I’m also amazed by the AI image upscaling that Stable Diffusion can do, and I suspect that there’s still room for a lot of improvement there, as that’s not the main goal of Stable Diffusion. And once someone has done a good job of building a bunch of annotated 3d models, I think that there’s a whole new world of 3d.
I will bet that before we see that becoming the norm in games, we’ll see LLMs regularly used for either pre-generated or in-game speech synthesis, so that characters can say text which might be procedurally generated and isn’t just static pre-recorded samples, but isn’t necessarily generated by an LLM. Like, it’s not practical to have a human voice actor cover, with static recorded speech, all possible phrases one might want an in-game character to speak.
I think it’s coming pretty fast. There’s already a mod for Skyrim that lets you talk to your companion. People are spending hours talking to LLMs and roleplaying; the first triple-A game to incorporate it is going to be a massive hit, IMO. I’m actually surprised no one’s come out with visual novels using them, it seems like a perfect use case.
It’s definitely going to be used first for making the content of the game, like you said, though.
there are some local genai music models, although I don’t know how good they are yet as I haven’t tried any myself (stable audio is one, but I’m sure there are others)
also, minor linguistic nitpick, but LLM stands for ‘large language model’ (you could maybe get away with it for PixArt and SD3 as they use T5 for prompt encoding, which is an LLM; I’m sure some audio models with lyrics use them too). The term you’re looking for is probably ‘generative’
Show me a practical use for AI and I’ll show you the money. Genmoji ain’t it.
Give me a virtual assistant that actually functions and I will give you A LOT of money…
deleted by creator
40% of translators report having lost income due to it :0
I don’t need a translator 🤷
I use an LLM fine-tuned on medical stuff for minor medical questions or to prep for medical appointments (getting on the same page as the doc can save some serious time; say the wrong thing and they’ll get hung up on it for a year lol).
I really want to combine it with something like Fasten Health so I can go over my medical data on my own machines faster. Pipe dream rn, because getting that data from the docs is a pain in the dick, but it would still be cool to me.