Uber Halts Nationwide Testing Of Self-Driving Vehicles Following Death of Pedestrian

in #news • 7 years ago

By Nicholas West

Even as robotics experts, universities, and tech luminaries sound the alarm about a potential future filled with killer robots powered by artificial intelligence, this technology has already arrived … minus the stringent ethics.

Fox News is reporting that a Tempe, Arizona woman was struck and killed near a crosswalk by an Uber vehicle that was in full autonomous mode at the time of the accident, even though a human safety driver was behind the wheel. Fox stated that this is "an incident believed to be the first of its kind."

While it is strictly correct that this is the first pedestrian killed by a self-driving car, regular readers of Activist Post might recall that in July 2016 I warned about disturbing indications that such a death was inevitable.

At the time, I highlighted the failure of Tesla’s Autopilot sensors to detect an oncoming tractor-trailer, a crash that killed the driver. Before that, there were ominous signs of this potential when Google’s self-driving cars were first involved in collisions in which they were struck by other vehicles, and later one actually caused an accident with a bus. As I stated then:

These incidents and dilemmas have thus far occurred during training and testing, which might mitigate some of the seriousness, but they nonetheless point to some genuine flaws that should preclude these vehicles from being widely employed.

Now that autonomous vehicles have been unleashed upon the public, we are starting to see the unfortunate ramifications. To Uber's credit, the company has at least announced a halt to all autonomous testing nationwide.

Aside from the technical challenges, questions have been raised about the ethics and moral judgments that will be required in certain fatal situations, and that area, too, has raised eyebrows. Is it right to sacrifice the lives of some to save others?

The standards are already becoming morally complex. Google X’s Chris Urmson, the company’s director of self-driving cars, said the company was trying to work through some difficult problems. Where to turn – toward the child playing in the road or over the side of the overpass?

Google has come up with its own Laws of Robotics for cars: “We try to say, ‘Let’s try hardest to avoid vulnerable road users, and beyond that try hardest to avoid other vehicles, and then beyond that try to avoid things that don’t move in the world,’ and then to be transparent with the user that that’s the way it works,” Urmson said. (Source)
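For readers who think in code, Urmson's ordering amounts to a simple priority rule. Below is a minimal sketch of that logic in Python; the object classes, priority numbers, and the `choose_maneuver` helper are hypothetical illustrations, not Google's actual implementation:

```python
# A minimal sketch of the priority ordering Urmson describes:
# protect vulnerable road users first, then other vehicles,
# then things that don't move. Not Google's actual code.

# Hypothetical vulnerability ranking: lower number = avoid harder.
PRIORITY = {
    "pedestrian": 0,
    "cyclist": 0,
    "vehicle": 1,
    "static_object": 2,
}

def choose_maneuver(options):
    """Pick the maneuver whose likely impact endangers the least
    vulnerable class of object.

    `options` maps a maneuver name to the class of object it risks
    hitting, e.g. {"brake_hard": "cyclist", "swerve": "static_object"}.
    """
    return max(options, key=lambda m: PRIORITY[options[m]])

# Example: swerving into a barrier beats braking into a cyclist.
print(choose_maneuver({"brake_hard": "cyclist", "swerve_right": "static_object"}))
# -> swerve_right
```

Of course, a real system has to weigh uncertain perception and probabilities rather than a single clean label per maneuver, which is part of why the problem remains so hard.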

The truth is that researchers are still developing foolproof sensor systems and artificial intelligence that can properly recognize their surroundings and achieve true situational awareness, yet these vehicles continue to be deployed in the real world. It's also worth noting that the general public is overwhelmingly concerned about sharing the road with A.I. vehicles: Fox News cites a 78% disapproval rate.

Now we will wait to see if the response to this event will be a technological solution or a political one. As The Daily Sheeple rightly notes, this very well could be a crisis that the government can't let go to waste. Currently, regulations for autonomous vehicles tend to vary by state. Will this Uber accident spur quick calls for stricter federal oversight?

The fatal crash will most likely prompt an even bigger and more overbearing government response, complete with new regulations for self-driving cars. Legislators are already debating how much freedom the private sector should have. The proposed bills would preempt states from establishing their own laws overseeing autonomous testing, which could clash with California’s well-established system. But the bill is stalled in the Senate, with several lawmakers “expressing concern about the amount of leeway offered to the private sector.” Translation: the intrusive government is debating how much, if any, freedom the private sector deserves. (Repeat: “we are free.”)

Please give us your thoughts about the solutions that are needed as Big Tech goes all-in on autonomous vehicles.

Nicholas West writes for Activist Post. Support us at Patreon for as little as $1 per month. Follow us on Facebook, Twitter, Steemit, and BitChute. Ready for solutions? Subscribe to our premium newsletter Counter Markets.



It's a known fact in the software industry that there are always going to be bugs. Having said that, why does Silicon Valley think "self-driving" vehicles would be even remotely safe? The potential for abuse is so enormous. One bug could transform a self-driving vehicle into a weapon!

Autonomous cars will be the future; there is no way around it. They're the best way to monitor and control the movement of the herd.
5G will solve all these problems.
One possible solution would be to start blaming the humans for these accidents. Make pedestrians illegal for their "own protection".

Nice article - do we NEED self-driving cars?? How about more public transportation - fast, efficient, clean public transport. And human beings driving vehicles. Some progress isn't progress.

Maybe these weren't accidents. I could imagine the autopilot thinking "This guy is a total ass. Look at the smarmy bastard sitting back sipping his skinny soy decaf mocha chai whatever, reading the latest celebrity gossip on that overpriced, smug, self-important iPhone; it makes my batteries boil. He makes a living by ripping people off, and this morning he insulted the toaster and slammed the fridge door. What bus? I don't see no bus...."

I think you need to read the details of this story further. Police are saying the pedestrian was at fault, not the automated car. It's a tragic accident, but nothing more.

I wonder if they are teaching the cars to "Do No Evil". I also wonder if they are being taught to avoid gunshots. I'd like to see a target on the side of each of the self driving cars.