

GIGO.


GIGO.


My main concern is that, in my experience, Google has always had the benefit of enticing software and services that are extremely invasive but also very convenient (even setting IoT aside for a moment). This is mostly down to how invasive Google Play Services is, and how invasive the Google app has been since the first iterations of Google Assistant (Google Now). I’m concerned that even those of us who have done what we can to turn off Gemini and avoid generative AI are still compromised regardless, because big tech has a chokehold on the services we use.
So I suppose I’m trying to understand what the differences are in how these two types of technology compromise cyber security.


Pre-generative AI, lots of companies had AI/algorithmic tools that posed a risk to personal cyber security (Google’s Assistant, Apple’s Siri, MS’s Cortana, etc.).
Is the stance here that generative AI is more dangerous than those because of its black-box nature, its poor guardrails, the fact that it’s a developing technology, or its unfettered access?
Also, do you think the “popularity” of Google Gemini is because people were already indoctrinated into the Assistant ecosystem before it became Gemini? Google already had a stranglehold on the search market, so the integration of Gemini into those services isn’t seen as dangerous: people are already reliant, and Google is a known brand rather than a new “startup”.


It died a long, long time before this. The enshittification started back in the early 2000s, when one of the owners basically usurped the whole company, which of course led to mods quitting en masse. After that it went downhill, and that downhill trend continued. Then it got bought out by the Israelis, and the AI art injection was their attempt to keep the site from going under.
Nothing about the site is what it was.


It’s not clear that the excerpt is a quote. No quotation marks. No vertical bar denoting a quotation. And there’s an ellipsis at the very start of the first sentence.


Generative AI LLMs? No. GIGO counters? Yes.


People who live in third-world countries like the US, who don’t have internet at home because it isn’t available to them: it’s not profitable for the company serving that area.
And before you say “phone,” you have to have service to receive or make a phone call. There are places in this country that don’t have either.


We have a wireless Android Auto dongle, and it takes an age to auto-connect. Not to mention it still wants us to pull over and put the car in park to switch phones, something I thought the dongle would circumvent but somehow does not. Usually it’s the person in the passenger seat trying to change something and not being able to. I’m not advocating for distracted driving; I’m pointing out that someone else in the vehicle who’s not driving can’t interact with it to change certain things, even though it’s perfectly safe for them to do so.


It’s a Honda. But that’s exactly the point I’m trying to make here. With both CarPlay and Android Auto I have issues, but they’re down to how the manufacturer chose to implement each. Car manufacturers deliberately hamstrung these features and still didn’t get what they wanted.


I have equally bad experiences with both Android Auto and Apple CarPlay. I don’t really want either and am fine with what I’ve got (only one of the three cars I own even has CarPlay/Android Auto). I mostly dislike how they’ve been implemented, with “safety controls” that require the phone to be plugged into the infotainment center in some cars, and the requirement that I only connect while at a stop with the car in park. If someone is driving with me and they want to switch to their phone, I have to pull over, and that’s stupid.
The infotainment centers themselves, with their stupid touch screens and lack of buttons, are where my real problems start, and they end with the tracking BS and telemetry data. You can keep the new cars. I don’t want them.


Lack of context for what was being discussed, mostly. No joke I read this without context and was very confused (and I had already read a similar article about this event).


It did what now? What the hell is this title?
“We Let AI Run Our Office Vending Machine. It Lost Hundreds of Dollars.”
is the actual title. What gives?


Probably because it gets you in trouble with the feds.


There was a scam going around where scammers would have someone with a clean record apply for a role, using that good candidate’s information to land it; the scammers would then do the work and split the pay with the person whose info they used.
In exchange, that person would get “job experience,” the perks of WFH, and the ability to hold down more than one of these figurehead jobs simultaneously.


A couple of weeks ago a Waymo taxi drove through an active crime scene.
A while back they had to patch their taxis’ firmware to stop them from running down children (something you’d think they’d already have been programmed not to do).
Passengers have reported being held hostage by their taxi when it stopped suddenly and refused to move.
There’s a laundry list of things that have been wrong with them. Some have had reasonable fixes (and some of those fixes should have been implemented before these cars were allowed on the road).


No. You don’t get to decide what is put on my personal computing device just because you want to force the general public to bear the burden of protecting children rather than forcing parents to do their fucking jobs.


Yeah. I often forget this one because AI isn’t replacing my job any time soon. At best it could potentially be used to streamline some processes to do with tech data and workflow management (what tests and protocols get done when, and combining tests/troubleshooting steps to prevent rework). But that would have to be a very targeted and very, very regulated and tested thing before it could be viable.


I think this is a case of the lesser of two evils here. Not being Elon Musk is such a low bar to clear.
Their statements each time something bad happens with their products don’t suggest that things will change in a meaningful way any time soon. There are a lot of reasons I’d never ride in one of these, but even setting that aside, each of them objectively seems to have significant implementation problems that are getting lip service instead of actual fixes.


The crazy thing is, none of these articles seems willing to admit that AI is bad. They keep publishing pieces like this, keep saying that approval is falling among the general populace, but when it comes to why, there are always wiggle words. Always some spin.
It’s never “people being forced to use it see it as a detriment to them,” or “people using it are seeing a decrease in the efficacy of the results it gives for the amount of prompting required,” or “people don’t like it because it’s going to have significant detrimental effects on the environment and their utilities.”
All of those are solid reasons for the decline in both the use of AI LLMs and the approval of them.
The cost of goods and services related even tangentially to AI is going through the roof. The amount of slop is increasing at a furious pace, directly contributing to things like enshittification and dead internet theory. The effect on the economy is looking to be catastrophic.
But oh no, it’s the lack of authenticity in social media spaces that people are worried about. Sure.


I didn’t. But I also can’t say I’ve been paying attention.