
Congratulations @michaelbisbell! You have completed some achievement on Steemit and have been rewarded with new badge(s):

Award for the number of upvotes

Click on any badge to view your own Board of Honor on SteemitBoard.
For more information about SteemitBoard, click here

If you no longer want to receive notifications, reply to this comment with the word STOP

By upvoting this notification, you can help all Steemit users. Learn how here!

Resteemed by @resteembot! Good Luck!
The resteem was paid by @greetbot
Curious?
The @resteembot's introduction post
Get more from @resteembot with the #resteembotsentme initiative
Check out the great posts I already resteemed.

You were lucky! Your post was selected for an upvote!
Read about that initiative

Nice post @michaelbisbell. I liked how you ended it on a positive note while also exploring the dangers of what could happen if it ended up in the wrong hands. I particularly liked how you mentioned that we place value and measure success by material things created by the current system we live in. I think we need to rethink what value is, and also look back at old traditions, such as the way of life presented by Buddhism and other practices, so we don't get lost in the progression. What do you think? Thank you for your post, it was a good read!

I agree to an extent. Sometimes getting lost in spirituality or religion is just as bad as getting tied up in material things; you have to stay grounded in the world you live in if you ever want to be capable of impacting it. Glad you enjoyed it! It makes me happy to see the hard work I put into this one be appreciated. Haha

Regarding the risks, it would seem that (1) we can mitigate the impact of automation through some kind of basic income, and (2) we can prevent rogue, malicious, biased, or otherwise harmful AI through careful attention to instilling ethics and avoiding biased datasets (see, for example, Nick Bostrom's book Superintelligence).
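(As a rough illustration of point (2), here's a tiny, purely hypothetical sketch of what auditing for biased data can look like in practice: compare how often a model gives the positive outcome to each group in its data. The records, groups, and the "model" decisions below are all made up for the example.)

```python
# Minimal sketch of a dataset/model bias check: compare the positive-outcome
# rate per demographic group. Everything here is a toy, hypothetical example.
from collections import defaultdict

# (group, years_experience, approved_by_model) -- invented records
records = [
    ("group_a", 2, True), ("group_a", 5, True), ("group_a", 1, False),
    ("group_b", 2, False), ("group_b", 5, True), ("group_b", 1, False),
]

positives = defaultdict(int)
totals = defaultdict(int)
for group, _, approved in records:
    totals[group] += 1
    positives[group] += approved  # True counts as 1

for group in totals:
    rate = positives[group] / totals[group]
    print(f"{group}: positive rate = {rate:.2f}")

# If the rates differ sharply for otherwise similar applicants, the data or
# the model has likely absorbed a bias and needs auditing or rebalancing.
```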

But I think the biggest risks are the unknown unknowns and the unclear timeline. I'm currently reading Founders at Work, a book of interviews with startup founders published in 2007. That was just 10 years ago, yet virtually none of them say anything about smartphones, and the section on Research in Motion talks about BlackBerry being the most ubiquitous mobile device in the world. Nobody was talking about mobile, one of the biggest technology trends that was coming, even though the signs look obvious in hindsight. So what, regarding AI, will look obvious in the future but isn't obvious today? Is it even possible to predict? What will the timelines actually be?

As for the intersection of blockchain and AI, I absolutely think this is a fascinating area, one that brings us closer to truly decentralized autonomous organizations. For example, see the AI-blockchain hedge fund startup Numerai.

I don't really fear "rogue, malicious, biased, or otherwise harmful AI," as that is projecting human emotions and tendencies onto something that is not human. My fear is that it won't be truly artificial intelligence, just as the televisions in every living room did not truly offer privacy, as Edward Snowden brought to light; that it will be programmed responses and actions, which would mean that someONE is pulling the strings. That is my only fear in regards to the whole thing, to be honest. I don't see any reason for an AI to "destroy humans," as we are mutually beneficial to one another. I'm extremely hopeful for the future, and I believe that, if it is truly AI, we have to at least give it a chance to rise or fall.

I think the reason no one was talking about it was simply that no one had the information. Everyone being informed and having an opinion didn't really become possible until almost every person had a screen in their hand giving them 24/7 access. We live in a very different world, and yet we always measure things against a past without those technological advances. In fact, on that note, that's a positive attribute I would ascribe to AI. We lose generational knowledge and are forced to repeat these 80-100 year cycles in society, but an AI that lives through these cycles and can carry on our legacy from generation to generation with a first-hand viewpoint... that will change everything. I can't tell you how many nights I've spent staring at the ceiling wondering if the history I was taught was what actually happened, or trying to understand the thoughts and mindset of anyone on Earth at the time. Although it doesn't solve the issue of the past, it allows us to preserve the future.

Another huge innovation of the future (possibly the greatest): imagine the space exploration opportunities available when we can send a humanoid thinking entity that can simultaneously communicate with us from wherever it is, because it's connected to every extension of itself!! We are moving towards a Star Trek style future, and that excites me. Weren't they even one of the first to predict cell phones? I feel like I remember hearing something about that. How much faster could AI robots build Elon Musk's terraformed Mars without having to deal with the lack of oxygen in the process? What about the moon base? What about Titan?? Other universes??? We can't travel at the speed of light due to our molecular structure, but an AI can. What if you could operate the AI through a VR headset? So many possibilities.

I don't really fear "rogue, malicious, biased, or otherwise harmful AI" as that is projecting human emotions and tendencies onto something that is not human.

Not necessarily. AI learns from human-curated data, which has already been shown, many times, to carry human biases. Similarly, an AI may become inadvertently harmful by optimizing toward its given goal, e.g. Bostrom's "paperclip" scenario, in which an AI converts the world and all humans to paperclips simply because it received the goal "make the most paperclips."
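(To make the paperclip point concrete, here's a toy sketch, with numbers and names entirely my own: an agent whose objective counts only paperclips will happily consume everything else, because nothing in its objective tells it to stop.)

```python
# Toy illustration of goal misspecification in the spirit of Bostrom's
# paperclip scenario. The "world" and the objective are invented for this
# sketch; the point is only that the objective counts paperclips and
# nothing else, so every other resource gets consumed.

world = {"iron": 10, "food": 10, "housing": 10}  # everything is convertible
paperclips = 0

# The agent's entire objective: maximize paperclips. There is no term for
# food, housing, or anything humans actually care about.
while any(amount > 0 for amount in world.values()):
    resource = max(world, key=world.get)  # grab whatever is most abundant
    world[resource] -= 1
    paperclips += 1

print(f"paperclips: {paperclips}, world left over: {world}")
# paperclips: 30, world left over: {'iron': 0, 'food': 0, 'housing': 0}
```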

Regarding space-based AI, I think robots in space make sense, but I don't see how we could communicate with them via VR given the huge time lag. It would probably be one-way communication: they could send back what they've seen, like our Mars rovers do, but we couldn't control them usefully in real time.
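(Back-of-the-envelope on that lag, using approximate published Earth-Mars distances: even at the speed of light, the one-way delay is minutes, which is why real-time VR control is out.)

```python
# One-way light delay to Mars at closest and farthest approach.
# Distances are approximate published values; the speed of light is exact.
SPEED_OF_LIGHT_KM_S = 299_792.458
MARS_DISTANCE_KM = {
    "closest approach": 54_600_000,
    "farthest approach": 401_000_000,
}

for label, distance_km in MARS_DISTANCE_KM.items():
    delay_min = distance_km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: ~{delay_min:.1f} minutes one way")

# closest approach: ~3.0 minutes one way
# farthest approach: ~22.3 minutes one way
```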

That's just it, we don't know how responsive it will be to other parts of itself. Theoretically, if the AI is the same both here and there, we could communicate through the AI, as it would literally just search for data within itself. It wouldn't travel as slowly as we do, because it would probably have little bases and relays set up, so the solar system would literally turn into a giant network router. Just like you can close one computer, immediately open another, and log in to the exact same data. Maybe that wouldn't work now, but after a few more years of AI development that would be sweet to see. With advances in AI allowing planets like Mars to have strong signals that can be sent back and forth, you could likely see it in something close to real time, because that AI would be working to get communication back to us, and with quantum computers on their way that doesn't seem so unlikely.

I feel like that could apply even without the AI: having the intelligence of a human without the negative emotions like greed, anger, hunger, and sadness that power the great movements we've seen. I think we have to think about it as a living thing in order to really lock it down; otherwise, it isn't held accountable for its actions. If every AI robot is held accountable just as every human is, I believe there is a stronger potential for good than bad. The AI knows what evil is, and would be making a decision to act it out just as a person would. That's why I don't think it makes sense for an altogether different classification of life to be thought of in the same way, and with the same fears, as a human.

I do believe we should be cautious, stay informed, and set regulations before it is fully integrated into society, as Elon Musk warns pretty regularly, but to assume a negative outcome is to put that out into the world, and that's not something I want to hold myself responsible for later down the road. Always prepare for the worst and expect the best, that's my motto.