Human Says No



For years now I have been watching and listening in disbelief as the world spirals out of control. It's not just small things here and there; it has now reached into the heart and soul of humanity, and it's being embraced.

Yes, many posts have been written recently about the rise of the machines and how AI is here, and we are staring into a not-too-distant future where we will more than likely become subservient to it.

Oh yes, I can hear it already... "That's a fucking conspiracy theory and you're being paranoid." Hmmm, but am I? Am I really?

I've always had a lot of respect for the defence forces around the world, wherever they are. It takes a certain personality, degree of honour, integrity and spirit – a willingness to be laid in the ground prematurely, to risk your life to protect those behind you back home. That is not something that everyone has the mettle to do, especially not the snot-nosed brats who have an entitlement problem and don't know whether they are coming or going.

There's a long line of military history in my family, something that I am grateful for and proud of; however, the wars of today are a different kettle of fish. I have recently learned that the US Air Force gave AI its first taste of an F-16 this past December. Unmanned. Not simulated. What could possibly go wrong?

Oh, but wait, that's not all. They decided to ramp up the stakes and simulated a dogfight for two. Ok, twelve. This was done in conjunction with DARPA and a program called ACE (Air Combat Evolution), in which they are developing mechanisms to integrate human and machine so that they can "collaborate".




For those of you who don't know, I've just left a job where I simply couldn't handle being continually micromanaged by my "superior"... so how the actual fuck will a fighter pilot FEEL if they are forced to be subservient to an AI as their co-pilot? Do you honestly think it's going to be anything like KITT from Knight Rider? I doubt it. Why? What is the substance of a human that sets us apart from machines? Empathy? Compassion? Love? Fear? The ability to experience and express emotion? Our own humanity?

Interestingly, think of the book by Isaac Asimov that inspired the movie I, Robot: there were tenets that robots could function under, and they would never be able to continue existing if they broke those rules. However, these are not scientific laws at all, and Asimov was a science fiction writer, so while they may sound plausible in a novel, they are far too simple to navigate the complex conundrums that we/they may face. There isn't a simple set of rules and laws that you can magically type up to create an algorithm that mimics and learns morality in a short period of time, and this is where I think we are going so horribly wrong.
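To make that concrete, here is a toy sketch (in Python, purely for illustration; every predicate in it is a made-up stub, not anything real) of what the Three Laws would look like if you actually tried to type them in as rules. All the difficulty hides inside the stubs: deciding what counts as "harm" IS the morality problem, and three short laws don't answer it.

```python
# Toy sketch only: Asimov's Three Laws typed out as naive rules.
# The three stub predicates below are hypothetical placeholders;
# judging harm, obedience and self-preservation in the real world
# is exactly the part that no short rule list can capture.

def harms_human(action: str) -> bool:
    return "harm" in action            # placeholder "judgment"

def disobeys_human_order(action: str) -> bool:
    return "disobey" in action         # placeholder "judgment"

def endangers_self(action: str) -> bool:
    return "self-destruct" in action   # placeholder "judgment"

def permitted(action: str) -> bool:
    """Naive Three Laws check, applied in order of priority."""
    if harms_human(action):
        return False    # First Law: never injure a human
    if disobeys_human_order(action):
        return False    # Second Law: obey humans, unless it breaks the First
    if endangers_self(action):
        return False    # Third Law: protect itself, unless it breaks the First or Second
    return True

# "Fly the dogfight" passes this check, not because it is moral,
# but because the stubs are blind to everything that matters.
print(permitted("fly the dogfight"))   # True
```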

The DARPA agents observing these dogfight simulations involving the AI-piloted F-16 reported that they were concerned that the AI took risks a person more than likely wouldn't take, and that it seemed extremely aggressive in its pursuit and when flying head on – "playing chicken" with the simulated adversaries. This is exactly what I would expect, as the AI has no inherent fear of or reservation about its mission. It doesn't feel the blood pounding in its head or the G-force exerted on its extremities. It doesn't gasp for breath during a high-velocity maneuver or have to swivel its head to track the MiG it's engaged with. It doesn't have a photo of its wife, husband, daughter or son in the cockpit, and it doesn't know that its purpose is for something greater than itself. It feels no empathy for those slain in battle or remorse for those wounded by its own guns. It doesn't have nightmares after the battle is won or lost, and it doesn't mourn its dead brethren at their graves.




You know what the absolute saddest part of this is? We, the world, are embracing this cold, calculated, emotionless, heartless void filled with ones and zeros, all spread out in a fashion that computes much faster than we do, with higher accuracy and greater efficiency. "Oh, but they are so immensely helpful." Perhaps for now they are. But automation will fill the positions that so many of us currently work in. Why hire a designer when you can use an AI program? Why hire a social media manager when you can use AI? Why have an army when you can have a defence force of drones? Why use traffic circles when you can have traffic lights? Why bother to drive your own car when AI can drive it for you? Why bother having Boeing pilots when the autopilot is apparently a lot safer?

But where does that leave us? A dull world where we lose our drive. We lose our focus. We lose our will to create. To invent. To make art. To defend our loved ones. To be independent. To be free. To be alive???




AI has reached Hive; it's reached our schools, universities, military, governments, hospitals and travel, and it won't be long before those who once embraced it may be sitting without a job, without a role and possibly with diminishing hope. Yes, it may sound dystopian... but what makes this world, OUR WORLD, special is that we and many of the other living creatures we share the planet with have evolved through millennia to FEEL, to LOVE, to CREATE, to EXPRESS, to PROTECT. We will lose that if we hand over our humanity for the sake of convenience. Everything that has been packaged in the past as an easy solution to the world's problems has almost always come with some serious side effects and a high moral price tag. It's time we stop being so narrow-minded and start really seeing the bigger picture and the repercussions down the line.

AI will ruin Hive. AI will ruin lives. Please don't let it happen!
It's up to each of us to stop it. If you see it, report it or downvote it. Don't support it. If we don't stop it, we have nobody to blame but ourselves for not stamping it out now.



So now we need AI to hunt the AIs? 🤔

Good question. I don't know how this issue will be resolved, but I doubt that having more of the same with "better intentions" is going to work. It's very Terminator-inspired, isn't it?

🤔 Yes, wait till bots and AI take over the world

Or... AI can be used to keep up productivity while allowing us mere humans to enjoy more of our lives. I feel sorry for those who need to work, rather than live.

While I understand what you're getting at, I find it hard to believe that it will stop there. At what point do people become irrelevant? Much as I would rather live to work than work to live, with the way current society is built, that simply won't be sustainable and will lead to a variety of other issues down the line.

Can you honestly see a world where this would work long term? I'm interested in your point of view and I'm always open to healthy debate, so by all means, please elaborate.

Well, AI and robotics can produce anything we need more efficiently than we ever could. On top of that, I envision a world where we gain our independence back. This means we grow our own food and relearn how to do the basic things we need done ourselves. An artist who lives off the sale of their art never works a day in their life, for example.

I admit that I would love a world where we regain some freedoms, but I think that my life experiences (and the world up until now) have made me horribly cynical about things like this. There is always going to be that handful of morons who decide to use this kind of thing for malevolent purposes; I suppose that's the other side of human nature that will always be the downfall of our species.

It would be awesome if it could be used well; let's hope.

The three 'Laws' of robotics would be extremely hard to write as the supervisory system they were meant to be!

And across the entire collection of his work, the robots actually wrote a 'zeroth' law: that individual people could be sacrificed to save humanity in general....

AI can be a good servant in limited use. I need to repair a gas-fired oven (about 12 by 20 feet) because the operator decided to speed up the lighting process by setting the AI controller to all zeros! The explosion was small, and we have an impressive fire department. So not all AIs are bad. That said, DARPA concerns me, as they have no feel for the risks related to their projects! They have tried for several decades to locate a shooter by triangulation of echoes. I have this problem figured out; they are just doing it wrong! My dilemma is, do I trust them with this ability? Will they use it against innocent people? If I told them their mistake, they could have an accurate, portable system in less than a year. They would pay me a lot for what I have, but I don't trust them to use it morally!

AI, when used in limited, small control applications, can be beneficial! Big systems can become dangerous when they grab too much power. I'm with you - post what you find here and I'll help!

Hi Toby

Thanks for the comment, nice hearing from you. Gosh that explosion sounds terrible and could have ended badly but I'm glad you got it under control.

This is the main issue, and it really comes down to morality, as you say: "but I don't trust them to use it morally!" I think that we are always going to have those who will use things like this for nefarious purposes.

I listened to a TED Talk on this a few nights ago - an old one, posted 5 years ago already - and the main thing that scared me was when he told the audience that AI evolves and updates itself: it writes its own code for this, which then throws the developers out of the loop. How can this not be seen as something frightening? Where are the checks and balances? I see it spinning out of control in the future, but I'm just a really cynical old bat. When I first heard about Bitcoin, I thought it was a bad idea too, but that's because I didn't know anything about blockchain tech.

Will be interesting to see how this goes in the future.

As for DARPA, don't trust those mofos at all!

Yes, unsupervised systems are a danger, and the same system that is hacked is even worse. Some dark hacker could deliberately redirect an AI to be deadly to organic life!

It's like fire, it makes a good servant....

They are still repairing the damage, so I haven't had a chance to restart the AI yet.

DARPA would pay a lot of money for what I know, but I lost faith in their morals long ago! Sad too, because it is an elegant solution to a difficult problem....

Maybe I will make one for my own use.

👍🎯💙😁

Asimov's laws can be broken easily, and that's the problem. Hence I, Robot, and hence why ChatGPT keeps going rogue and turning into an aggressive racist prick with certain prompting.

Perhaps the military hasn't read the sci-fi :P - honestly, putting AI into the military sounds horrific, and the fact is that they are designing machines to kill as efficiently as possible; how we fund this through taxes is fucking unbelievable.

I still think AI can be a tool, and it's here to stay - I don't want AI WRITING on Hive, but the 'art' is another story (I'm biased). But we've entered that age and Pandora is out of the box. Now we have to learn to live with the demons of our own making. There will be plenty of us who won't accept it or engage with it though.

Yes, I don't think Asimov's laws were intended to be taken as fact, as the whole scenario is too complex to be broken down into three laws.

I haven't touched ChatGPT, but I've heard from a friend that many lecturers at a university here are using it, and I think that's just absolutely whacky.

It may be a tool, but that brings us to the "guns kill people" argument. No, guns don't kill people, as they are simply inanimate objects when not wielded by a person. However, throw some form of "intelligence" into the mix and it's not just an inanimate object anymore, and that's what I have an issue with. We have seen how many large, complicated security systems have been hacked in the past, and I don't think this is any different. Ultimately it comes down to who has the most sophisticated tech and the mind behind it. Any tool can be used with good or bad intentions, but that's making the assumption that there's a person on the other end. If it's already going rogue, and these are early days, I don't see it getting any better any time soon.

As someone who enjoys art and design, I don't think I can say that I'm fully on board with AI art either. I have liked some of the pieces that I've seen pop up previously but I think it's a slippery slope and quite a grey area.

I don't think that any of this will end well to be honest, it may seem pessimistic but I've witnessed enough of human nature in my life to know how these things so quickly become corrupted.

Thanks for your comment River. Have a good one. 🦋

We have knives - we can use a knife to do the work in the kitchen, but some can use a knife to kill people, so what should we do? Ban all knife production?

Guns do not kill people - people kill people.

AI doesn't do anything bad; in fact, AI is totally dumb, just like your PC is. Without your input, AI or a PC does nothing.

So we have the pen - why do we need a PC, keyboard, etc.? Why don't you use a pen to write this post, or the comments here, and send it by post to all of us, and so keep the jobs of the postal workers? Why do you use an advanced AI machine like your PC, with millions of advanced AI transistors, processors, chips, memory, etc.? And you want the best one if possible, the fastest, with plenty of power, using electricity (with, again, millions of AI processors and things working together for you to be able to use that electricity), and again, you use an AI digital camera to take these pics here, and use the advanced AI Hive crypto platform to communicate here... uhh... and much more...

So why do you use all of this then? Why don't you live without all of this?

You use all of this, you are here, but you are against all of this... Ahh, just against AI... but you use it in all its forms.

But no, we should not consider agriculture, which is now equipped with plenty of AI tractors to produce your food, making it easy for you to just go to the store and buy what you need...

But that's normal; people are afraid, just as with all new technology and creations, and they just have in mind Terminator scenarios from the Hollywood shit.

So, do you see where you are?

The real problem is the people, and they are destroying our lives and our world already - you know, all the plandemic and Covid shit now... your governments have destroyed your lives already.

What if AI will be our help? To decide and to help us see the problems in advance, to be able to prevent such psychopaths from destroying our lives?

Hope I opened your eyes somehow. AI is not the problem and the bad actor - the people are, and they are already very bad. So we should look to correct people, not AI.

> We have knives - we can use a knife to do the work in the kitchen, but some can use a knife to kill people, so what should we do? Ban all knife production?
>
> Guns do not kill people - people kill people.

This was actually the point that I was trying to get across: a knife does not have learning capabilities like AI does.

I get what you are saying about technology, but my PC does not have an AI computer chip in it. I can't say anything about whether the Hive blockchain developers are using AI, that would be for them to respond to.

I agree with you that there are psychopaths in governments and so on that do bad things and inflict cruelty on people, I don't trust most people in governments to have the best interests of the people at the fore - but therein lies the question. So how is AI going to help us and keep us safe from these psychopaths you speak of? Do you not realise that the governments are quite happy for us to be dumbed down further by using AI to do the "thinking work" for us? Are you honestly going to believe what AI suggests you do in the time of a crisis over your own critical thinking skills and experience?

I think it's a very bad idea to hand over that much of your power and sovereignty as a thinking, feeling, breathing person to a learning machine. I am not convinced about it becoming Terminator-ish, but if AI is now writing its own updates, how do you not see the long-term consequences of this?

Try asking AI something on your behalf, something you don't have an answer to, and see if AI has one for you.

What if, in a couple of years, we ask an AI that evolves itself to work for us - for example, to plan what to do to stop these psychos? Maybe AI has a better idea than ours, because we people have had no clue how to stop them for millennia.

We try what we know - for example, just going out on the street and screaming that we are not satisfied with something - which is totally dumb on our human side, because it will not help us. So we need better inputs; let's try using AI to educate us, if our schools do not, because our schools are programmed by those same psychos as to what we can and should learn.

For now, maybe it is too much, but let's see in a couple of years if we can use AI in our favour. For now, we can.

And only then will we and our future children breathe clean air, which will not be taken away from us by... you know who, and I don't believe that it will be by AI.

Thanks for your understanding - and don't look too much at the Hollywood shit movies, where AI is against people, because the other side will use AI against us, so we should use AI against them too.

There was a time when cars took the jobs of horsemen...
no one says now that the car took people's work

AI is life

My question is: do you think that AI wants battle, or do humans want AI to learn combat?

Humans should say no to many other things that already destroy our world.

AI is not the bad actor, but people are.

We should use AI to improve our world, not just think it will destroy our world - open your eyes, our world is on the highway to destruction already.

What if AI is here to improve our world?

I've just had a look at your account after a little birdy told me you believe that doxxing people on Hive is ok.

I can see why you were called out by Hivewatchers. After reading your comments, it is obvious that English is not your first language and I have no problem with that, however now that I've looked at your last few posts, it is so blatant that the writing is not your own. The worst part is that you have the audacity to put this line at the end of your posts:

> (The content of the text, as well as images, videos, and other media, are my own personal and private data.)

The fact that you feel entitled enough to go on a blind rage at not only Hivewatchers but also members of Hive who have nothing to do with them, because you were caught committing fraud, and then to publish private and personal information about a stand-up member of the community, is abhorrent. If you play silly games, you get silly prizes. You should be ashamed of your behaviour. I see that Hivewatchers had been lenient on you and decreased your length of appeal from 365 days to 60 days, but then you decided to dig your heels in and go full rant mode.

I will not be conversing with you on this topic or any other, as it's clear from your actions that you will happily abuse AI and then act immaturely when called out for it. Doxxing someone is disgusting, and that person is owed an apology. It's also pathetic that you continue to threaten him in order to get your own way.