It was bound to happen, and just recently, it did.
According to a report from Good Morning America's Bill Hutchinson and Jeffrey Cook:
A self-driving Uber car hit and killed a pedestrian Sunday night in Tempe, Arizona, police said, in what appears to be the first case of a pedestrian death caused by an autonomous vehicle.
The vehicle was in "autonomous mode at the time of the collision, with a vehicle operator behind the wheel," Tempe police said in a statement.
Image from Good Morning America
Although I've been covering companies like Nvidia who have been at the forefront of autonomous driving technology, I always felt iffy about it. I mean, it's great that people want to try this stuff out, but I for one would NEVER get into an autonomous car.
Are you kidding me?
Eventually, some ignorant a--hole is going to implement the blockchain into autonomous vehicles...just wait and see. Furthermore, as soon as investors and the general public hear about blockchain and AVs, they'll clamor for it like no one's business.
You just have to ask yourself: how the heck will a decentralized database make autonomous driving better?
This incident just proves the point that no matter how far advanced we become, no matter how many powerful computer algorithms we create, at the end of the day, computer programming is simply binary: ones (1) and zeroes (0). Yes or no.
A computer cannot possibly replicate the infinite number of variables and calculations we make in our lives every day. And that truly shows the limit of digital technologies -- yes, it's great to have innovations, but they'll never replace the irreplaceable human brain.
Yeah but still lots more human error deaths
But that's an apples to zucchinis comparison -- humans have been driving since the late 19th/early 20th century, so of course there are more human error deaths.
Technology, on the other hand, is supposed to reduce human error to the point of practical elimination. So killing somebody is a major failure, given that the vast majority of sober and focused (human) drivers never cause a fatality.
I don't trust the AI cars, but am much more worried about the 70% of real people I see on their phone while driving dangerously & impatiently. Just so they can get to wherever they are going a little faster, much more dangerously, & then play with their phone aimlessly when they get there.
It's hard to even drive in parking lots these days without hitting a phone zombie, literally standing in the middle of the road staring at the screen. Or walking in the street without even looking up. Then they walk through the grocery store glued to the screen, incapable of paying attention to any real people around them.
Then at the gym, I wait for racks being used by people who can't even lift weights without getting on their phone for 5 minutes between sets. They spend two hours at the gym, 10 minutes spent doing actual exercise, & 110 minutes spent nose deep in their phone, sitting on equipment someone else could use.
The real kicker to this whole rant is that 95% of all this phone time is spent on absolutely meaningless & trivial things that add very little value to a person's life & serve to steal attention & free choice from these poor phone-addicted people.
This was of course written on my phone, but in the privacy of my home. I make a point to not use my phone in public unless absolutely necessary. & even then I will sit down out of the way to use it for a moment.
If you see me blindly walking around glued to my phone, feel free to shoulder check me, or hit me with your car to teach me a lesson. But you won't see me in that spot.
The oblivious driver/pedestrian problem is a separate issue, which I agree is serious and often leads to severe and fatal accidents. That said, computer technology is supposed to minimize human error, ideally to as close to zero as you can practically get.
When somebody is killed as a result of malfunctioning tech -- and that's the real point here...somebody that didn't have to die was killed by a "test run" of a new technology to which the dead person did not provide consent -- that tech is necessarily a failure.
Humans already kill humans, inadvertently or otherwise. Why do we need tech to do what apparently comes so naturally for us?
Very true, but I look at the predatory programming in smartphones, designed to addict users, as a tech malfunction.
Have seen a few sprained ankles from people walking off curbs while dumbphoning. I'd imagine many, many fatalities have occurred from walking with them. We all know dumbphoning while driving kills people every day, but most people think they are the talented one who can do it safely & so they do.
I don't think driverless cars should be so hastily tested in a real environment, but I don't think people have the willpower to drive safely with smartphones either, & a person using a smartphone is much, much more likely to impact my life at this point. Happens every day when I drive. Seems more than 1/2 are smartphoning.
Sorry for the off-topic rant; am hopeful that the dangers of transportation will decrease as we "progress," but it seems more dangerous than ever. Would love to see a functional smart transportation system & fewer individually operated vehicles. I think it should be much more difficult to get a license & a lot easier to lose one. Way too much risk from what I see.
I always enjoy reading your perspectives; they inspire thought.
Your point is true and very much valid, and in a way, the predatory programming you mention could be construed indirectly as a tech malfunction. Although just my opinion, I think it's not so much a tech malfunction but a deliberate, concerted effort to dumb down society! :)
Unfortunately, I don't think transportation safety will ever improve because of a host of societal issues, primarily stemming from the destruction of family and faith, and spiraling out of control from there into multiple sub-issues. I don't want to expound at this point because I'll go into full-blown rant mode, and that's something none of us want to see...LOL!
I'll just move on to destroying our astronomical "realities" instead...haha! :)
I agree with you about the concerted effort to dumb down society.
Looking forward to the rant post about family & faith when you decide to write it out. I have had quite a few thoughts on the same topic in the past few years.
This is also a real problem...
I am a rural homebody who drives very short distances infrequently & I still have to dodge phone zombies in every parking lot I drive through & every store I go into. I can only imagine how bad this problem is on the big city sidewalks.
Yeah, it's not cool, pretty soon the next thing will be zombies with "VR-headsets", and then we will be doomed!
=)
Image source: Dailymail.co.uk
VR porn addiction could be a real serious problem
I have this friend who is into that; he says it is easy to become "hooked" on it...
Ya, just look at the smart phone porn addiction. Difficult for people when there is a mega library of porn at their fingertips everywhere. Can only imagine the increased feeling of reality from VR will make the brain become even more strongly addicted
AI is moving too fast; let's slow down on the technology. Terminator is not that far-fetched -- just look at some of the bizarre things some autonomous robots have already said.
We embrace technology until it kills us.
That the brain is "irreplaceable" is an assumption. One that many leading figures in fields like autonomous vehicles and artificial intelligence will vehemently argue against.
The biggest incentive of automation technology is to minimize the error range and frequency of error we see in any type of skill that can be accomplished by humans and machines alike. For the most part, we are achieving that across a broad range of these technologies, which can be, and has been, substantiated by sound statistical analysis procedures that rule out chance or bias.
Whether or not autonomous vehicles currently fall within that category, I'm not sure, but I'm almost certain that it will, if it hasn't already, as we improve and fine-tune the technology involved in minimizing the severity and frequency of "accidents".
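To make the "sound statistical analysis" idea above a little more concrete, here is a minimal sketch of one way to test whether an observed difference in incident rates between two fleets could just be chance. Every number below is an invented placeholder, and the exact-binomial comparison of two Poisson rates is only one reasonable choice of test, not a claim about how any company actually evaluates its vehicles.

```python
# Rough sketch: is a lower observed incident rate explainable by chance?
# All counts and mileages are invented placeholders, not real data.
from scipy.stats import binomtest

human_incidents, human_miles = 300, 2.5e8   # hypothetical human-driven fleet
av_incidents, av_miles = 40, 5.0e7          # hypothetical autonomous fleet

# Under the null hypothesis that both fleets share a single incident rate,
# the AV incident count (conditional on the combined total) is binomial,
# with p equal to the AV share of total exposure (miles driven).
p_null = av_miles / (av_miles + human_miles)
result = binomtest(av_incidents, av_incidents + human_incidents, p_null,
                   alternative="less")

print(f"AV rate:    {av_incidents / av_miles * 1e8:.1f} incidents per 100M miles")
print(f"Human rate: {human_incidents / human_miles * 1e8:.1f} incidents per 100M miles")
print(f"p-value that the AV fleet's lower rate is just chance: {result.pvalue:.4f}")
```

A small p-value here would suggest the rate difference is unlikely to be random noise; with small mileage totals, even a large apparent difference may not be statistically meaningful.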
Uhh...somebody was just killed by a malfunctioning AV. I'm not sure if initiatives in "minimizing the severity and frequency" of "accidents" -- and I'm confused why you put quotation marks around accidents -- would be any comfort to the grieving family. Clearly, this is an unacceptable error range, and AV technology has a LONG way to go before it can even come close to replacing the human brain.
I put quotes because not everyone will agree what the definition of accident is, especially when it comes to rating the safety of AVs. It would probably be more accurate to say "error", because it should be of concern any time that the vehicle behaves in ways that are unintended (such as crossing a lane border without the intention of changing lanes) since these are obvious signs that the potential for errors that may lead to fatality are there. You could make the argument to dock the same number of points any time that an AV misbehaves, because it only comes down to chance whether a human or car happens to be there when these instances occur.
As with anything in the sandbox of this objective world we all share, nothing will ever be perfect. There will never be a time when we can say that cars can be driven, or drive themselves, with a 0% chance of error or accidents. It's unrealistic to accept nothing short of a perfect track record. What is realistic is shooting for some factor of improvement over what humans have achieved over a large span of time (random sampling with a large "n" and all that), like, say, being 5X less likely to be in an accident.
Of course, that number would have to be very high for most people to accept that the writing is on the wall and even start to consider handing over control to AI. We can only guess what that number might be, but my guess is that it's no lower than 10X. I also feel that the numbers will actually get much, much better than that -- factors of ten higher (100X, 1000X, etc.) -- as this tech makes major advancements in the future.
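To put the "5X"/"10X"/"100X" arithmetic in concrete terms, here is a minimal sketch. The human baseline of roughly 1.2 fatalities per 100 million vehicle miles is an assumed, approximate figure used only for illustration, and the AV rates derived from it are purely hypothetical.

```python
# A minimal sketch of the "X-times safer" arithmetic. The human baseline
# (~1.2 fatalities per 100 million vehicle miles) is an assumed, approximate
# figure for illustration; the AV rates below are purely hypothetical.

HUMAN_RATE = 1.2e-8  # assumed: ~1.2 fatalities per 100 million miles driven

def improvement_factor(av_rate: float) -> float:
    """How many times safer a hypothetical AV fleet is than the human baseline."""
    return HUMAN_RATE / av_rate

for label, factor in [("5X target", 5), ("10X target", 10), ("100X target", 100)]:
    av_rate = HUMAN_RATE / factor
    print(f"{label}: {improvement_factor(av_rate):.0f}x safer, "
          f"or about {av_rate * 1e8:.3f} fatalities per 100M miles")
```

The point of the sketch is simply that each "factor" target translates directly into a maximum tolerable fatality rate per mile driven, which is the kind of number regulators and the public would ultimately have to judge.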
@bullishmoney
Yeah, it's super sad, crappy, and completely predictable.
Why is there such a great need to make all these human-workers obsolete, what are we all going to do? How are people supposed to survive?
We are supposed to borrow money & be good docile debt slaves
How do "they" expect us to pay back these loans when there are no jobs, because the robots took over?
The loans are never meant to be paid back. They are meant to be a means of control, i.e., work your whole life & never quite pay them back.
Like invisible shackles...
Ahahahah I like how you made it into a blockchain issue
Let's be honest it's all about perfecting the technology before it becomes accepted.