Imagine Being Stuck in a Car That Won’t Let You Out

You hail a ride, climb in, and the car starts driving itself. No driver, just you and the AI. Then the car stops, right in the middle of a busy road, and nothing happens. The doors won’t open. The car won’t move. And no one is coming to help.
That’s not a scene from a sci-fi thriller. That’s exactly what happened to real passengers in Wuhan, China, when multiple Baidu Apollo Go robotaxis suddenly froze in traffic. And it raises a question a lot of us are already thinking: are self-driving cars safe enough for everyday people to actually use?
What Actually Happened in Wuhan

Baidu’s Apollo Go is one of the world’s largest autonomous taxi services (think of it as a driverless Uber). In a recent incident in Wuhan, several of these robotaxis simply stopped working while carrying passengers on busy city roads.
The cars didn’t pull over safely. They didn’t alert anyone. They just… stopped. Passengers reportedly had no clear way to override the system or exit the vehicle safely. Traffic backed up, at least one accident occurred, and police received multiple emergency calls from panicked riders and frustrated drivers stuck behind them.
That’s not a software glitch that crashes your phone. That’s a software glitch that traps a human being in a metal box on a busy highway.
Why This Matters Beyond One Bad Day in Wuhan

It’s easy to dismiss this as a one-off problem: a bug that engineers will fix and move on from. But the bigger issue is what this incident reveals about how these systems are being rolled out.
Right now, self-driving taxis are expanding rapidly across China, the United States, and parts of Europe. Companies are eager to prove their technology works. City governments are eager to modernize. But in that rush, one critical question often gets buried: what happens to the passenger when the AI fails?
In a normal taxi, the driver can pull over, open the door, and say “sorry, car’s broken.” In a driverless car with a frozen system, passengers may have zero control: no emergency exit option, no override button, no human to talk to.
The Real Gaps Nobody Talks About Enough
Here’s what’s missing from most conversations about autonomous vehicles:
- Emergency exit protocols: Passengers need a guaranteed, simple way to exit any self-driving vehicle at any time, no AI permission required.
- Human backup systems: If the car freezes, a human operator should be able to take remote control immediately or guide the passenger to safety.
- Clear liability rules: When an AI causes an accident or traps someone, who is legally responsible: the company, the city, or the software vendor?
- Transparent failure reporting: Incidents like Wuhan should be publicly documented, not quietly patched in a software update.
These aren’t futuristic demands. They’re basic passenger rights, the same ones we expect from elevators, trains, and planes.
So, Are Self-Driving Cars Safe Right Now?
Honestly? The technology is genuinely impressive, and in many conditions, autonomous vehicles are statistically safer than human drivers. But statistically safer on average is very different from safe enough in every situation.
The Wuhan incident is a reminder that AI systems can fail in ways that are physically dangerous, not just inconvenient. And when they do, ordinary passengers shouldn’t be the ones left stranded with no recourse.
The question of whether self-driving cars are safe isn’t just a technical one. It’s a human one. And right now, the honest answer is: they’re getting there, but we’re not there yet.
What Should You Do If You’re Considering a Robotaxi Ride?
Be curious, not afraid. Ask about emergency exit options before you get in. Check whether a remote human operator is monitoring your ride. And pay attention to how companies respond when things go wrong; that tells you a lot more than their marketing ever will.
Progress is real. But so are the passengers inside these cars. Both deserve to be taken seriously.
Frequently Asked Questions
Are self-driving cars safe?
Self-driving cars are still being tested and improved, with mixed safety records so far. While they can reduce human error in some situations, incidents like Baidu’s robotaxi failures show the technology isn’t perfect yet and needs more development before widespread use.
Are autonomous vehicles safer than human drivers?
Autonomous vehicles have fewer accidents in controlled conditions, but real-world safety depends heavily on weather, road conditions, and software reliability. Recent problems with companies like Baidu demonstrate that these cars still struggle with unexpected situations that human drivers handle naturally.
What are the main safety concerns with self-driving cars?
Main concerns include software glitches, difficulty handling edge cases, cybersecurity risks, and unpredictable behavior in bad weather or complex traffic. Baidu’s robotaxi incidents highlight how these vehicles can malfunction in ways that put passengers and pedestrians at risk.