Why Isn't Tesla Level 3?

post by jefftk (jkaufman) · 2024-12-11T14:50:01.159Z · LW · GW · 3 comments

Many people who've used Tesla's "Full Self Driving" software are pretty excited about it:

I can step into a Tesla today, press a destination, and go there without touching the wheel or pedals. Sure it won't be flawless but the fact is, I can. I can't do the same in any other consumer car, and the closest thing is a Waymo. The effort is there, I think its just a matter of time before we start seeing the legal stuff play out.

I think this is mostly not a legal issue. Let's take perhaps the most favorable conditions for driverless cars: stop-and-go traffic, on a highway, in good weather, with no construction.

Tesla could demonstrate their system worked sufficiently reliably in these conditions that the person in the driver's seat could safely work, read, or watch a movie, and Tesla could take full legal responsibility for any crash. We know Tesla could legally do this because (despite what the Secretary of Transportation thinks) Mercedes already does, with Drive Pilot, which they launched in Germany in 2022 and the US in 2023.

Then Tesla could gradually remove restrictions, as they were able to demonstrate that we could trust their system with more complex scenarios.
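To make the shape of this concrete, here is a minimal sketch (in Python, with entirely hypothetical names and thresholds, not anyone's actual implementation) of an operational-design-domain gate: the car only offers Level 3 operation when every condition falls inside the domain, and the domain can be widened over time as reliability is demonstrated in more complex scenarios.

```python
# Hypothetical sketch of an operational design domain (ODD) gate for a
# Level 3 system. All names and thresholds are illustrative assumptions,
# not Tesla's or Mercedes' real logic.

from dataclasses import dataclass


@dataclass
class DrivingConditions:
    road_type: str          # e.g. "divided_highway", "city_street"
    speed_mph: float
    weather: str            # e.g. "clear", "rain", "snow"
    construction_zone: bool
    daytime: bool


@dataclass
class OperationalDesignDomain:
    allowed_road_types: set
    max_speed_mph: float
    allowed_weather: set
    allow_construction: bool
    require_daytime: bool

    def permits(self, c: DrivingConditions) -> bool:
        """Return True only if every condition is inside the ODD."""
        return (
            c.road_type in self.allowed_road_types
            and c.speed_mph <= self.max_speed_mph
            and c.weather in self.allowed_weather
            and (self.allow_construction or not c.construction_zone)
            and (c.daytime or not self.require_daytime)
        )


# A deliberately narrow starting ODD: stop-and-go highway traffic in good
# weather with no construction, roughly the favorable conditions above.
initial_odd = OperationalDesignDomain(
    allowed_road_types={"divided_highway"},
    max_speed_mph=40.0,
    allowed_weather={"clear"},
    allow_construction=False,
    require_daytime=True,
)

conditions = DrivingConditions(
    road_type="divided_highway",
    speed_mph=12.0,
    weather="clear",
    construction_zone=False,
    daytime=True,
)

if initial_odd.permits(conditions):
    print("Level 3 engaged: driver may look away; manufacturer takes liability.")
else:
    print("Outside the ODD: driver must keep supervising (Level 2 behavior).")
```

Removing restrictions then amounts to widening the ODD (higher speed caps, more road types, more weather) only after each expansion has been shown to be reliable enough to go unsupervised.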

Tesla fans will often claim that Tesla could easily do this ("FSD is practically perfected, with no accidents whatsoever, under such conditions already"), but I don't think it's so clear. This is a situation where getting to "impressive" levels of operation is quite doable (ex: here's Cruise seven years ago in SF, dealing with many unusual situations) but getting to "reliable enough that you don't have to supervise it" has been incredibly hard (ex: Cruise is shutting down, after dragging a pedestrian under a car last year). And unlike most of their competitors (including Mercedes), Tesla vehicles don't have LIDAR, which makes it even harder to get to that level of reliability.

Tesla has been making bold promises here since at least 2016, when they claimed that Self-Driving was limited only by "extensive software validation and regulatory approval":

Full Self-Driving Capability

Build upon Enhanced Autopilot and order Full Self-Driving Capability on your Tesla. ... Please note that Self-Driving functionality is dependent upon extensive software validation and regulatory approval, which may vary widely by jurisdiction. It is not possible to know exactly when each element of the functionality described above will be available, as this is highly dependent on local regulatory approval. Please note also that using a self-driving Tesla for car sharing and ride hailing for friends and family is fine, but doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year.

(They were still saying the same thing, including "will be released next year", in 2019.)

I would be very happy to see Tesla succeed and make a car that does not need a supervising driver, but if their hardware and software were up to the task, they would already have worked to get Level 3 certification.

Comment via: facebook, mastodon, bluesky

3 comments


comment by Dagon · 2024-12-11T16:29:01.522Z · LW(p) · GW(p)

Yeah, there's a lot of similarity with other human-level cognitive domains.  We seem to be in the center of a logistic curve - massive very recent progress, but an unknown and likely large amount of not-yet-solved quality, reliability, and edge-case-safe requirements.

20 years ago, it was pure science fiction.  10 years ago it was "just an engineering problem, but a big one", 5 years ago it was "need to fine-tune and deal with regulation".  Now it's ... "we can't say we were wrong, and we've improved massively AGAIN, but it feels like the end-state is still out of reach".  

For a lot of applications, FSD IS ALREADY safer than human drivers.  But it's not as resilient and flexible, and it's much worse than a human in very rare situations, like a person stuck under the wheels in a sensor-free location.  The goalpost remains rather undefined, and I suspect it's quite a ways out yet.  

I do put some probability into a discontinuity - some breakthrough or change in pattern that near-instantaneously makes it so much obviously better than a human in all relevant ways that it's just no longer a question.  It's, of course, impossible to put a timeline on that.  I'd probably guess another 8-20 years on the current path, could be as fast as 2026 if something major changes.

Note that this is only slightly different from my AGI estimates - I'd say 15-40 years on the current path (or as early as 5 if something big changes) until significant amounts of human functioning are no longer economically or socially desired - the shift from AI as assistants and automation to AI as autonomous full-stack organizations. 

This similarity makes me suspicious that I'm just using cached heuristics, but also may be just that they're similar kinds of tasks in terms of generality and breadth of execution.

comment by Logan Zoellner (logan-zoellner) · 2024-12-11T19:57:45.472Z · LW(p) · GW(p)

Tesla fans will often claim that Tesla could easily do this

Tesla fan here. 

Yes, Tesla can easily handle the situation you've described (stop-and-go traffic on a highway in good weather with no construction), with higher reliability than human beings.

I suspect the reason Tesla is not pursuing this particular certification is that, given the current rate of progress, it would be out of date by the time it was authorized.  There have been several significant leaps in capabilities in the last 2 years (11->12, 12->12.6, and I've been told 12->13).  Most likely Elon (who has undeniably been over-optimistic) is waiting to get FSD certified until it is at least Level 4.

It's worth noting that Tesla has significantly relaxed the requirements for FSD (from "hands on wheel" to "eyes on road") and has done so for all circumstances, not just optimal ones.

comment by Dave Lindbergh (dave-lindbergh) · 2024-12-11T18:10:45.232Z · LW(p) · GW(p)

In most circumstances Tesla's system is better than human drivers already.

But there's a huge psychological barrier to trusting algorithms with safety (esp. with involuntary participants, such as pedestrians) - this is why we still have airline pilots. We'd rather accept a higher accident rate with humans in charge than a lower non-zero rate with the algorithm in charge. (If it were zero, that would be different, but that seems impossible.)

That influences the legal barriers - we inevitably demand more of the automated system than we do of human drivers.

Finally, liability. Today drivers bear the liability risk for accidents, and pay for insurance to cover it. It seems impossible to justify putting that burden on drivers when drivers aren't in charge - those who write the algorithms and build the hardware (car manufacturers) will have that burden. And that's pricey, so manufacturers don't have great incentive to go there.