The U.S. government’s road safety agency is again investigating Tesla’s “Full Self-Driving” system, this time after getting reports of crashes in low-visibility conditions, including one that killed a pedestrian.
The National Highway Traffic Safety Administration said in documents that it opened the probe on Thursday after the company reported four crashes in which Teslas entered areas of low visibility, including sun glare, fog and airborne dust.
In addition to the pedestrian’s death, another crash involved an injury, the agency said.
Investigators will look into the ability of “Full Self-Driving” to “detect and respond appropriately to reduced roadway visibility conditions, and if so, the contributing circumstances for these crashes.”
Tesla: Why would we need lidar? Just use visual cameras.
Tesla has repeatedly said the system cannot drive itself and that human drivers must be ready to intervene at all times.
How is it legal to label this “full self driving”?
“I freely admit that the refreshing sparkling water I sell is poisonous and should not be consumed.”
That’s pretty clearly just a disclaimer meant to shield them from legal repercussions. They know people aren’t going to do that.
Last time I checked, that disclaimer was there because officially Teslas are SAE level 2, which lets them evade the regulations that apply to higher SAE levels, while in practice Tesla FSD beta is SAE level 4.
Literally it’s “partial self driving” or “drive assist”
To be fair, it’s marketed as full self driving, not full self no crashing
The National Highway Traffic Safety Administration is now definitely on Musk’s list of departments to cut if Trump makes him a high-ranking swamp monster
Why do you think Musk is dumping so much cash to boost Trump? The plan all along has been to get kickbacks like stopping investigations, lawsuits, and regulations against him. Plus subsidies.
Rich assholes don’t spend money without an expectation of ROI
He knows Democrats will crack down on shady practices so Trump is his best bet.
He’s not hoping for a kickback; he’s been offered a position as secretary of cost-cutting.
He will be able to directly shut down everything he doesn’t like under the pretense of saving money.
Trump is literally campaigning on the fact that government positions are up for sale under his admin.
“I’m going to have Elon Musk — he is dying to do this… We’ll have a new position: secretary of cost-cutting, OK? Elon wants to do that,” the former president said.
Humans know to drive more carefully in low visibility, and/or to take actions to improve visibility. Muskboxes don’t.
They also decided to use only cameras and visual cues for driving instead of also using radar, thermal cameras, or something like that.
It’s designed to be launched ASAP, not to be safe
I mean, that’s just good economics. I’m willing to bet someone at Tesla has done the calcs on how many people they can kill before it becomes unprofitable
I’m not so sure. Whenever there are crappy weather conditions, I see a ton of accidents because so many people just assume they can drive at the posted speed limit safely. In fact, I tend to avoid the highway altogether for the first week or two of snow in my area because so many people get into accidents (the rest of the winter is generally fine).
So this is likely closer to what a human would do than not.
low visibility, including sun glare, fog and airborne dust
I also see a ton of accidents when the sun is in the sky or if it is dusty out. \s
Yup, especially around daylight saving time, when the sun changes position in the sky abruptly.
Cameras are probably worse here, but they may be able to make up for it by parallel-processing the poor data they get.
Humans know to drive more carefully in low visibility…Muskboxes don’t.
They do, actually. It even displays a message on the screen about low visibility.
Eyes can’t see in low visibility.
Musk: “we drive with our eyes, cameras are eyes. we don’t need LiDAR”
FSD kills someone because of low visibility, just like eyes would
Musk’s reaction -
He really is a fucking idiot. But so few people can actually call him out… So he just never gets put in his place.
Imagine your life with unlimited redos. That’s how he lives.
The whole “we drive with our eyes” thing is such bullshit. Humans are terrible drivers. Autonomous driving should be better than humans.
That goes for OpenPilot too. They actually openly advertise that their software makes the same mistakes as humans, as if it’s some sort of advancement. Like if I could plug Lidar into my brain, I totally would.
It’s worse than that, though. Our eyes are significantly better than cameras (with some exceptions at the high end) at adapting to varied lighting conditions, especially rapid changes.
Hard to credit without a source; modern cameras have way more dynamic range than the human eye.
Not in one exposure. Human eyes are much better at dealing with extremely high contrast.
Cameras can be much more sensitive, but at the cost of overexposing brighter regions in an image.
If he were truthful: “the cost of adding lidar cuts into my profits”
Correction - older Teslas had radar, and Musk demanded it be removed because it cut into his profits. Not a huge difference, but it does show how much of a shitbag he is.
Honestly though, I’m a fucking idiot and even I can tell that Lidar might be needed for proper, safe FSD
You’d think “we drive with our eyes, cameras are eyes” is an argument against only using cameras, but what do I know.
How Can Cameras Be Real If Our Eyes Aren’t Real?
What pisses me off about this is that, in conditions of low visibility, the pedestrian can’t even hear the damned thing coming.
I hear electric cars all the time; they are not much quieter than an ICE car. We don’t need to strap lawn mowers to our cars in the name of safety.
You can hear them, but manufacturers had to add external speakers to electric cars to make them louder.
https://en.wikipedia.org/wiki/Electric_vehicle_warning_sounds
I think they are a lot quieter. I’ve turned around and seen a car 5 meters away from me and been surprised. That never happens with fuel cars.
I think if you are young, maybe there isn’t a big difference, since you have perfect hearing. But middle-aged people lose quite a bit of that, unfortunately.
I’m relatively young and it can still be difficult to hear them, especially the ones without a fake engine sound. Add some city noise and they can be completely inaudible.
If it took them this long to look at Full Self Driving, I don’t have a lot of hope. But I’d like to be pleasantly surprised.
If anyone was somehow still thinking RoboTaxi is ever going to be a thing: no, it’s not, because of reasons like this.
It doesn’t have to never hit pedestrians. It just has to hit fewer pedestrians than the average human driver.
It’s a bit reductive to put it in terms of a binary choice between an average human driver and a full AI driver. I’d argue it has to hit fewer pedestrians than a human driver with the full suite of driver assists currently available to be viable.
Self-driving is purely a convenience factor for personal vehicles and purely an economic factor for taxis and other commercial use. If a human driver assisted by all of the sensing and AI tools available is the safest option, that should be the de facto standard.
The average human driver can be tried and held accountable.
It does, actually. That’s why robotaxis and self-driving cars in general will never be a thing.
Society accepts that humans make mistakes, regardless of how careless they’re being at the time. Autonomous vehicles are not allowed the same latitude. A single pedestrian gets killed and we have to get them all off the road.
Exactly. The current rate is 80 deaths per day in the US alone. Even if we had self-driving cars proven to be 10 times safer than human drivers, we’d still see 8 news articles a day about people dying because of them. Taking this as ‘proof’ that they’re not safe is setting an impossible standard and effectively advocating for 30,000 yearly deaths, as if it’s somehow better to be killed by a human than by a robot.
The problem with this way of thinking is that there are solutions to eliminate accidents even without eliminating self-driving cars. By dismissing the concern you are saying nothing more than it isn’t worth exploring the kinds of improvements that will save lives.
If you get killed by a robot, it simply lacks the human touch.
If you get killed by a robot, you can at least die knowing your death was the logical option and not a result of drunk driving, road rage, poor vehicle maintenance, panic, or any other of the dozens of ways humans are bad at decision-making.
Or the result of cost cutting…
It doesn’t even need to be logical, just statistically reasonable. You’re literally a statistic anytime you interact with any form of AI.
or a flipped comparison operator, or a “//TODO test code please remove”
It needs to be way, way better than “better than average” if it’s ever going to be accepted by regulators and the public. Without better sensors I don’t believe it will ever make it. Waymo had the right idea here, if you ask me.
But why is that the standard? Shouldn’t “equivalent to average” be the standard? Because if self-driving cars can be at least as safe as a human, they can be improved to be much safer, whereas humans won’t improve.
I’d accept that if the makers of the self-driving cars can be tried for vehicular manslaughter the same way a human would be. Humans carry civil and criminal liability, and at the moment, the companies that produce these things only have nominal civil liability. If Musk can go to prison for his self-driving cars killing people the same way a regular driver would, I’d be willing to lower the standard.
Sure, but humans are only criminally liable if they fail the “reasonable person” standard (i.e. a “reasonable person” would have swerved out of the way, but you were distracted, therefore criminal negligence). So the court would need to prove that the makers of the self-driving system failed the “reasonable person” standard (i.e. a “reasonable person” would have done more testing in more scenarios before selling this product).
So yeah, I agree that we should make certain positions within companies criminally liable for criminal actions, including negligence.
I think the threshold for proving the “reasonable person” standard for companies should be extremely low. They are a complex organization that is supposed to have internal checks and reviews, so it should be very difficult for them to squirm out of liability. The C-suite should be first on the list for criminal liability so that they have a vested interest in ensuring that their products are actually safe.
Sure, the “reasonable person” would be a competitor who generally follows standard operating procedures. If they’re lagging behind the industry in safety or something, that’s evidence of criminal negligence.
And yes, the C-suite should absolutely be the first to look at, but the problem could very well come from someone in the middle trying to make their department look better than it is and lying to the C-suites. C-suites have a fiduciary responsibility to the shareholders, whereas their reports don’t, so they can have very different motivations.
That is the minimum outcome for an automated safety feature to be an improvement over human drivers.
But if everyone else is using something you refused to use that would likely have avoided someone’s death, while misnaming your feature to mislead customers, then you are in legal trouble.
When it comes to automation you need to be far better than humans because there will be a higher level of scrutiny. Kind of like how planes are massively safer than driving on average, but any incident where someone could have died gets a massive amount of attention.
I thought it was illegal to call it full self driving? So I thought Tesla had something new.
Apparently it’s the moronic ASSISTED full self driving the article is about. So nothing new.
Tesla does not have a legal full self driving system, so why do articles keep pushing the false narrative, even after it’s deemed illegal?
Did they change it again? It was FSD Beta, then Supervised, now you’re telling me it’s ASSISTED? Since that’s not in TFA…
IDK, I heard assisted; maybe they decided on supervised? The central point is that it’s illegal in some states to call it full self driving, because it’s false advertising.
Assisted full self driving is an oxymoron.
Absolutely, but that’s what Tesla decided on (that or supervised), because it’s illegal to call it actually full self driving.
But an oxymoron is also fitting for Musk. You can even skip the oxy part. 😋
100% agree. Who sells assisted full self driving anyway? Tesla’s is supervised, which means it drives and the person behind the wheel is liable for its fuckups.
so why do articles keep pushing the false narrative, even after it’s deemed illegal?
The same reason that simple quadcopters have been deemed by the press to be called “drones”. You can’t manufacture panic and outrage with an innocuous name.
Calling it a drone has nothing to do with how many propellers it has; some drones are jet-driven, some are boats, and some are ground vehicles.
A drone is simply an unmanned craft, controlled remotely or by automation. https://www.merriam-webster.com/dictionary/drone
an uncrewed aircraft or vessel guided by remote control or onboard computers
It sure doesn’t say when that was updated, but for a long period of time the use of “drone” for unmanned aircraft was reserved for military craft that were usually armed and used to kill people. In an attempt to demonize hobby RC use, the press started calling simple quadcopters (and other propeller configurations, if we are being pedantic) drones, and not what they were normally called by the people using and making them in the hobby. My point still stands: the press likes to change the wording of things and will perpetuate their narrative in order to garner views. Manufacturing fear is part of their tactic, and it’s why I replied what I replied to the question of why the press continues to push the false narrative of these cars being “self driving”.
It was called that name at the time the deaths happened.
I thought it was illegal to call it full self driving?
Courts have already ruled the opposite.
why do articles keep pushing the false narrative
Because that’s what it’s called.
Tesla Banned From Calling Driver Assist Full Self-Drive In California
https://www.motor1.com/news/628604/tesla-banned-full-self-drive-california/
Does anyone else find this enraging?
It’s a decade too late.
Investigators will look into the ability of “Full Self-Driving” to “detect and respond appropriately to reduced roadway visibility conditions”
They will have to look long and hard…
Maybe have a safety feature that refuses to engage self-driving if it’s too foggy/rainy/snowy.
Refusing to engage in bad conditions is a lot easier than deciding what to do when conditions change suddenly.
If it’s suddenly foggy, it needs to be able to handle the situation well.
Cameras and lidar don’t work well in fog. Radar does, but it isn’t a primary sensor, and a car can’t be driven safely on radar alone in any circumstance.
So now you need to slow down (which humans will do), but also, since the sensors are failing or insufficient, safely get out of the way of other vehicles that might be coming up behind you, or slowed/stopped vehicles ahead of you.
You could restrict the hours the system can be engaged, which would reduce the likelihood of certain events (e.g. morning fog, or head-on sun at sunrise/sunset), but there’s still unpredictability. A graceful fallback might look something like the sketch below.
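A minimal sketch of that kind of fallback logic (all names, thresholds, and the visibility estimator here are hypothetical; a real system would be vastly more involved):

```python
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()     # full feature set engaged
    DEGRADED = auto()   # reduced speed, larger following distance
    MIN_RISK = auto()   # hazards on, pull over, hand control back

def next_mode(visibility: float, current: Mode) -> Mode:
    """visibility in [0, 1], e.g. from a camera-clarity estimator (hypothetical)."""
    if visibility < 0.2:
        return Mode.MIN_RISK    # sensors effectively blind: get off the road
    if visibility < 0.5:
        return Mode.DEGRADED    # usable but unreliable: slow down
    # Don't bounce straight back to NORMAL out of a minimal-risk maneuver;
    # patchy fog would otherwise make the system flap between modes.
    return current if current is Mode.MIN_RISK else Mode.NORMAL
```

The point being that “refuse to engage” and “bail out gracefully” are two different code paths, and the second one is the hard part.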
Inb4 someone on TikTok shows how to bypass that sensor by jamming an orange in it -__-
I wonder if they will now find the Emperor has no clothes.
This is why you can’t have an AI make decisions on activities that could kill someone. AI models can’t say “I don’t know”; every input is forced to be classified as something they’ve seen before, effectively hallucinating when the input is unknown.
I’m not very well versed in this but isn’t there a confidence value that some of these models are able to output?
All probabilistic models output a confidence value, and it’s very common and basic practice to gate downstream processes around that value. This person just doesn’t know what they’re talking about. Though, that puts them on about the same footing as Elono when it comes to AI/ML.
Right, which is why that marvelous confidence value got somebody run over.
Are you under the impression that I think Tesla’s approach to AI and computer vision is anything but fucking dumb? The person said a stupid and patently incorrect thing. I corrected them. Confidence values being literally baked into how most ML architectures work is unrelated to intentionally depriving your system of one of the most robust computer vision signals we can come up with right now.
Yes, but confidence values are not magic. These values are calculated based on how similar the current input is to previously observed inputs. If the type of input is unfamiliar to the model, what do you think happens? Usually there will be a category with a high enough confidence score that it gets chosen as the correct one, while being wrong. Now, assuming you somehow manage to not get a favorable confidence score for any decision, what do you think happens in that case? I never encountered this, but there can only be 3 possible paths: 1) choose a random value (not good), 2) do nothing (not good), or 3) rerun the model with slightly newer data (maybe helps, but in the case of driving a car, slightly newer data might be too late).
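For what it’s worth, here’s roughly what the gating pattern mentioned above looks like (a minimal sketch; the labels, threshold, and fallback are made up for illustration):

```python
import numpy as np

# Hypothetical labels and threshold, purely to illustrate confidence gating.
LABELS = ["pedestrian", "vehicle", "clear_road"]
CONFIDENCE_THRESHOLD = 0.9

def classify_or_fallback(logits: np.ndarray) -> str:
    """Return a label only when the model is confident; otherwise fall back."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax over class scores
    best = int(np.argmax(probs))
    if probs[best] < CONFIDENCE_THRESHOLD:
        return "fallback"                     # e.g. slow down / disengage
    return LABELS[best]

print(classify_or_fallback(np.array([4.0, 0.5, 0.2])))  # "pedestrian"
print(classify_or_fallback(np.array([1.0, 0.9, 0.8])))  # "fallback"
```

The catch, as pointed out above, is that softmax confidence is relative to the known classes, so a genuinely out-of-distribution input can still score high on the wrong one.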