I agree we won’t have Level 5 cars in 2030. Also, driving laws and traffic flows are not static: a city could, for example, have lots of additional bike lanes (with new laws) in 2028, and cars will have to adjust to that. Same for their interaction with other self-driving cars, which is potentially different from their interaction with human-driven cars.
And of course, European cities are a totally different challenge.
If anyone succeeds at Level 5 it will be Tesla. Waymo have sadly been stuck in a dead end for years. They’ve perfected driving in a small sunny suburb in Arizona, but they can’t drive anywhere outside it. They just don’t have the data to train a learning model to generalize self-driving. Their approach does not scale beyond a couple of fair-weather cities.
Tesla are getting masses of data from real-world driving conditions and using it to constantly improve their models. If anyone cracks it, they will. Elon has a gift for picking the right approach. He went with electric instead of hydrogen, and reusable rockets instead of spaceplanes. The latter is analogous to self-driving, in that while companies like Virgin Orbit have succeeded in placing small sats into orbit, their air-launched design can’t scale to anything larger. So they’re at a dead end.
Similarly, Waymo’s attempt to map everything works in a few places with optimal driving conditions but can’t progress beyond that. I don’t know if Tesla will crack level 5, but if anyone can they can.
The gap between Level 4 (Waymo) and Level 5 doesn’t seem big, but it might be the type of thing that we never really solve. And the difference is just vague enough that I can imagine arguments about whether a car actually makes it or not. If the car is handling everything but that one-in-100-year snowstorm in Phoenix, is it Level 5 if a human can handle that storm?
Choosing who to kill is not a problem. In all your years of driving, how often have you ever had to make that choice? It makes for a fun debate but if that was the only issue, they could flip a coin for the one or two times a decade it pops up.
Things like figuring out whether that guy waving his arms is mad at you or giving you directions is much more common. Or figuring out that some orange cones change where it is legal to drive. Or that the freshly paved road doesn’t have any markings and you sort of have to guess where to go. Or that there is no road sign, but that unmarked road that isn’t on the map is probably where you need to turn. These are all somewhat solvable, but generalizing enough to not need a human in the car in arbitrary locations might be borderline impossible.
In a lot of ways public transit should be easier. Well-defined routes and situations (i.e. if the road is under construction or there is an accident, it is pretty easy to tell the cars) mean you can get away with L4. I almost think the tougher issue is dealing with passengers doing things like paying, harassing other customers, and destroying things or making messes. I also wonder how the math changes when you aren’t paying drivers. Are you better off with that one school bus carrying 75 kids, or three van-sized buses each carrying 30?
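To make that bus math concrete, here’s a back-of-the-envelope sketch. Every dollar figure below is a made-up placeholder, not real transit-agency data; the point is just that removing the driver changes which configuration wins.

```python
# Back-of-the-envelope comparison of one large driver-operated bus
# versus several smaller driverless vans. All cost figures are
# hypothetical placeholders, not real transit-agency numbers.

def cost_per_pupil(vehicle_cost_per_day, driver_cost_per_day, vehicles, pupils):
    """Total daily fleet cost divided by pupils carried."""
    total = vehicles * (vehicle_cost_per_day + driver_cost_per_day)
    return total / pupils

# One 75-seat school bus with a paid driver.
big_bus = cost_per_pupil(vehicle_cost_per_day=200, driver_cost_per_day=250,
                         vehicles=1, pupils=75)

# Three 30-seat driverless vans (driver cost drops to zero).
vans = cost_per_pupil(vehicle_cost_per_day=120, driver_cost_per_day=0,
                      vehicles=3, pupils=90)

print(f"Big bus: ${big_bus:.2f} per pupil per day")   # $6.00
print(f"3 vans:  ${vans:.2f} per pupil per day")      # $4.00
```

With these invented numbers the vans come out ahead, but flip the vehicle costs around and the big bus wins again, which is exactly why the math changes once drivers are out of the equation.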
Or is he frantically trying to get you to stop because a power line fell across the road around the bend?
Not too recently a Tesla car - in a city - blocked a fire engine from continuing to a fire (where occupants were trapped inside). The crew had to wait for a garbage truck employee to hurry into the driver’s seat and move the truck so the fire engine could get through. It seems the AI in the autonomous car - even though it was on the opposite side of the road - didn’t know why the fire engine was in its lane and just stopped. It had no idea that it could - and should - have backed up some to let the fire engine get around the garbage truck.
There is also some debate going on about stopping in the driving lane to let passengers exit instead of pulling over to the curb. The car companies are trying to weasel their way out of that by saying if it’s safe to do so, it should be okay just to stop in the driving lane to discharge passengers. Now they want to twist traffic laws to suit themselves?
Regarding winter driving, would an autonomous car (AI) realize that the on-ramps, off-ramps and bridges are subject to icing over during a winter storm? Will it slow down to a safe speed or continue at the “legal dry pavement” speed? I’d hate to test that theory if there was a river below.
My driveway apron is 2-1/2 cars wide and the driveway goes back out of sight due to some trees. It looks like a road: a pavement apron 1-1/2 car lengths long and stone the rest of the way. Many people have pulled in, made the wye, and driven up to my house before realizing they turned off too soon. I live on a corner, and those people end up backing out and driving up to the corner to get to the side street. I wonder what an autonomous car would do if it pulled into my driveway. Would it know that the real side street is just on the other side of my house?
Food for some amusing thought: If someone was full of road rage, what would they yell at an autonomous vehicle?
It wasn’t a Tesla. It was a Cruise self-driving car, but yes, that is the type of problem that is hard. How do you know that the rules of the road no longer apply?
Recognizing that the fire truck’s normal path is blocked, that it has the right of way, and that it is safe for you to reverse is hard. Heck I wouldn’t be shocked if most human drivers took a few seconds to figure out what the correct course of action was.
I think in theory understanding that snow means you need to slow down should be “easy”. There is a lot of judgment about whether to slow down 5, 10, or 20 mph, but the car is getting constant feedback on traction from the sensors. I am not aware of the companies doing tons of training in winter, though. Winter driving can be rough, with almost zero road markings, and you guess where to go based on the ruts in the road…
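The traction-feedback loop described above could, in its very crudest form, look something like the toy sketch below. The thresholds and the linear scaling are invented purely for illustration; a real vehicle controller is vastly more sophisticated.

```python
# Toy sketch of a traction-based speed policy: scale the target speed
# by an estimated traction coefficient from the sensors. Thresholds
# and scaling are invented for illustration only.

def target_speed(posted_limit_mph, traction):
    """traction: 1.0 = dry pavement, ~0.3 = packed snow, ~0.1 = ice."""
    if traction >= 0.9:
        return posted_limit_mph              # dry: drive the limit
    # Degrade speed proportionally, but never below a crawl.
    return max(10, posted_limit_mph * traction)

print(target_speed(65, 1.0))   # dry  -> 65
print(target_speed(65, 0.3))   # snow -> 19.5
print(target_speed(65, 0.1))   # ice  -> 10
```

The hard part, of course, is everything this sketch assumes away: estimating traction reliably in the first place, and knowing where the lane is when the markings are buried.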
Yes, you’re right. That was a Cruise. I had just re-read my earlier post and went to the link about the Tesla driving into the rear of the maintenance truck in Taiwan. I stand corrected.
Kind of like when I was plowing up a mountain road during a snowstorm and ended up driving across a field, past a house (gee, I could have sworn that house was well off the road), and suddenly found myself looking at another house directly in front of me. Had to get out and walk around to check things out. Found out I was 200 feet off the road. Very slowly backed out the way I came in and started up the road. I took a break at the top before heading back down, which was a lot easier. Still, 3 miles downhill can keep you on your toes. Oh, I was driving a 4-wheel-drive Oshkosh snow fighter with balloon tires. Pretty hard to get stuck with that, but still possible.
You have a good point about circumstances requiring a lot of judgement. What a human sees - and hopefully, based on experience - can make a big difference in how much to slow down, whether it would be appropriate to move over into the center of the road, when and how much to accelerate before reaching the bottom of a hill you’re going to climb, etcetera.
Companies will have to make their AI do a lot of winter driving under different conditions as well as throw many different types of obstacles at them. I wonder how an AI would fare with a deer jumping out in front of the vehicle. Okay, this is more on rural roads, but in the city there are children, dogs and still the occasional fox, deer and bear, etcetera.
Honestly, I’m mildly skeptical that we’ll even see an L3 car by 2030. Tesla has been selling L2 cars for years, but the gap between L2 and L3 is huge.
Every few months, we hear about an L2 car crashing into something, and people blame the driver for not paying attention. While there’s a discussion to be had about habituation, human factors, and whether L2 is even a safe level for consumers to be using in the first place, for the purposes of this debate, the relevant problem is that, at L3, those crashes must not happen at all. Instead, the car has to recognize that it does not understand what’s going on, disengage, and alert the human to take over, and there still needs to be enough time left over for the human to actually do something about it. At the same time, the car cannot do this very often, or else it rapidly degenerates to “L2 but with more noise.”
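A rough way to see why “cannot do this very often” is so demanding is to put numbers on it. Both figures below are assumptions for the sake of argument, not industry data:

```python
# How often can an L3 car hand control back before it degenerates to
# "L2 but with more noise"? Both numbers are illustrative assumptions,
# not industry figures.

annual_miles = 12_000        # ballpark for a typical US driver
miles_per_handover = 500     # assumed: one forced takeover per 500 miles

handovers_per_year = annual_miles / miles_per_handover
print(f"{handovers_per_year:.0f} forced takeovers per year")  # 24
```

Even at one handover per 500 miles, the driver is being yanked back into the loop twice a month, which is hard to square with the promise that they can stop paying attention.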
I don’t think those problems are insurmountable. I do think they are very hard, and we are not giving them enough credit.
I think that’s not the right way to look at the problem.
It’s not about whether the self-driving cars can drive better than the best human drivers.
It’s about whether they can drive better than humans on average.
Should we say that people are not allowed to drive if they can’t drive too well in the snow, but can drive safely elsewhere? I don’t think so.
Same with these self-driving vehicles. Once the statistical threshold is crossed where they are safer than human drivers, it’s actually reducing suffering, not increasing it.
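A quick expected-value sketch shows why the threshold argument has teeth. The human fatality rate is roughly in line with public US figures; the AV rate is a pure assumption for the sake of the argument:

```python
# Illustrative expected-harm comparison. The human rate is roughly in
# line with US figures (~1.3 deaths per 100M vehicle miles); the AV
# rate is a hypothetical assumption (30% safer on average).

human_rate = 1.3e-8      # fatalities per vehicle mile (approx. US)
av_rate = 0.9e-8         # hypothetical autonomous-vehicle rate
miles_per_year = 3e12    # approx. US annual vehicle miles traveled

lives_saved = (human_rate - av_rate) * miles_per_year
print(f"~{lives_saved:,.0f} fewer fatalities per year")  # ~12,000
```

Under these assumptions, even a modestly-better-than-average fleet prevents thousands of deaths a year, which is the statistical point being made above.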
The only problem then is if people can get over the reduction of human value to statistics and accept that it really isn’t any different to the statistics behind the required level of proficiency for humans to pass a driving test.
Once we’re there, it likely won’t be long at all until self-driving cars are better than all human drivers. It was the same with Go.
That just looks like a bug, not a fundamental flaw in the idea, nor evidence that it is necessarily a very long way away.
It’s not saying anything about the overall safety of self-driving cars, but just about a specific bug, that will likely be ironed out.
To a certain extent, as with any new technology, there’s probably going to be an initial period where there are some teething problems, like this. Look at the first cars and early driving rules, safety regulations (or lack thereof), or even just hand-crank starters.
If someone had come along and outright banned them, that wouldn’t be so great for us today, would it?
You can do all the theoretical analysis and simulated tests you want, but in the end you just don’t know what problems lie in wait until you launch it into the real world.
Self-driving autonomous vehicles have a long(?) way to go. Just the other day I read about how multiple GM cars seemed to have lost their satellite connection and just stopped, stranding the occupants. Thankfully none lost their signal while crossing railroad tracks.
More testing and backup systems have to be developed to prevent situations like this from happening. It’s quite possible that in the rush to be the first to provide autonomous vehicles to the masses, not enough “real world” scenarios were tested. One argument was that, for now, such vehicles should only be used in cities or on highways and not on rural roads, which are prone to have many sharp hairpin curves; sudden low-speed curves; wildlife crossing the roads; and trees and/or power lines fallen across the road or hanging just above it. I think more real-world situations need to be tested, and drivers need to be able to instantaneously take over when they see their vehicle is not slowing down enough for a curve, when there is debris in the road, or simply when children are near the street and could dart out suddenly. After all this has been tested, the software needs to be rigorously checked for flaws and fail-safe processes incorporated into the systems.
Don’t get me wrong, I’d love to see these vehicles progress to the point of being commonplace… so long as they’re safe.
Your point about the old hand-crank starters was spot on. Many stories are out there about people breaking fingers, arms and getting severely injured from those. And a stick instead of a wheel to steer the vehicle? That was the primitive period in development of the horseless carriage indeed!
I think it’s safe to say the weakest part of every car in the world right now is the nut behind the steering wheel.
That, if nothing else is a good enough reason for me to fervently wish for the day when I can sit in the rear seat of my driverless car with my Nintendo Switch or a good book to pass the travel time, as well as feel relief at what will likely be a drastic reduction in road-related fatalities.
Although getting Level 5 compliance is “difficult” as you mention, it is no longer a technology issue but a legal one, IMHO. I also believe that acceptance of the idea in government institutions, and the corresponding legal preparations, is strongly connected to acceptance by the people who are going to use it. Those people will know that the technological maturity of the capability won’t be perfect at that time (it never will be), but it will be proven to them that it is “statistically” superior to a human under the conditions specified.
That is to say, the problem is now more one of sociology than technology, and it will be totally solved by 2030.
It all sounds great until you encounter an actual robo-taxi in the wild. Which is rare: Six years after companies started offering rides in what they’ve called autonomous cars and almost 20 years after the first self-driving demos, there are vanishingly few such vehicles on the road. And they tend to be confined to a handful of places in the Sun Belt, because they still can’t handle weather patterns trickier than Partly Cloudy. State-of-the-art robot cars also struggle with construction, animals, traffic cones, crossing guards, and what the industry calls “unprotected left turns,” which most of us would call “left turns.”
So I see the case of unclear weather being brought up a lot in this context, thus I assume it’s one of the main challenges.
On this unclear weather: does anyone know what exactly the challenge is? Is it that the AI doesn’t have enough samples to train its model, or that the current tech cannot detect the surroundings efficiently?
Just to repeat what has already been mentioned. Level 5 means that the car can drive “anywhere” without “any” human intervention. I guess meaning that there doesn’t even need to be a qualified driver on board.
If it’s just within a city … then it becomes Level 4 … which I do believe will be possible by 2030 as city centres can add infrastructure to assist the cars. I believe there will be a strong push to do this to reduce inner city emissions and all the automated cars will of course be electric.
I believe true Level 5 is a long way off. Any AI system is no more intelligent than my left sock. It is ultimately pattern recognition and is only as good as the data that it has been fed. It is of course still amazing technology but is often easily foxed when confronted with novel inputs. I myself can get confused when there are complex roadworks, failing traffic signals, damaged road surfaces, other cars behaving erratically etc etc. So what chance does an “AI” have in those situations? … I guess it would simply disengage like an autopilot does. If I’m having to monitor the car I’d rather just drive it myself in the first place.
“under all conditions” in the SAE chart actually means “in any weather where public safety people have not closed the roads.”
I have to take Jeff’s side of the bet. I have a Level 2 car, but in reduced-visibility conditions it reverts to Level 1 adaptive cruise control, or to Level 0 bupkis assistance. If I had a Level 5 car without a steering wheel, I’d be shivering in the dark in an unexpected snowstorm. Not acceptable.
I have zero clue when vehicles will be able to drive themselves. I’m really wondering if I’d like what I’ve heard Elon talk about, where we don’t have to own cars anymore and just call a hella cheap taxi, although I’d only prefer it if it’s really cheap.
This would be a very interesting development if it actually did happen, but considering the automation of travel would put so many hard-working drivers out of a career, it would also be a scary development for where it could lead.