Violent Killer Robots Aren't the Real Danger of Artificial Intelligence - Here's What Is
A lot of content about artificial intelligence, unless it’s made specifically for technologically literate people, starts off along the lines of “when you hear ‘artificial intelligence’ you probably think of things like Skynet from the Terminator movies”. If it’s video content, it’ll have footage from the movies – most likely a few seconds from the opening of Terminator 2. This is, at best, pretty freakin’ insulting. After a decade of news about advances in A.I., most of us are at least versed in what artificial intelligence actually is. It’s almost as insulting as being told “now you might be thinking about little green men” in content about life on Mars.
All of that is a mixture of hyped-up mainstream media bullshit and journalists not knowing how to start a story on artificial intelligence advancements without being talentless hacks. Thing is, there’s no reason to think that an artificial intelligence as advanced as Skynet, given the vague order to “protect humans”, would instantly begin an attack that kills billions. It makes no sense. You might say that an artificial intelligence would fight to ensure its survival, but why would a piece of software care whether it existed or not?
Misconceptions about Artificial Intelligence
Some technophobes claim that machines could get “angry” with humans and turn violent for that reason, but that logic also comes from ignorance. The reasoning here is that any advanced artificial intelligence is just like a human, only smarter. This ignores the fact that artificial intelligence is not just a “smarter” human brain. Our emotions, preconceptions and biases don’t exist simply because we’re alive – and they won’t be part of a piece of software simply because it’s intelligent. Human brains are very complex organs that we have barely begun to understand, and to claim that an artificially intelligent entity will act the way a human acts, without even knowing why a human acts the way it does, is ignorant, egotistical, and patronising in the extreme.
You might say, “you’re right Stoski, we don’t know how an artificial intelligence would act, so it’s perfectly valid to claim that it could act in an emotional way”. But a claim based specifically on a lack of information is not an argument.
What’s happening here is that the individual is projecting human traits onto a hypothetical artificial intelligence. They think, “well, since humans are violent and vindictive and cruel, an artificial mind must be those things too”, but that isn’t how this works. An artificial intelligence doesn’t need to be anything close to an organic intelligence (i.e. a human mind).
The Real Threat of Artificial Intelligence
The biggest threat of artificial intelligence is not that it might decide to kill us, like in Terminator, but that it could be so hyper-focused on the goals set for it that the steps it takes to achieve them are not only detrimental to its creators, but also something the creators never intended or even imagined as a possibility.
Imagine ordering an A.I. to make humans happy. We might intend that it go about solving world hunger or curing a deadly disease, but it’s entirely possible that the A.I. would deduce that the most efficient way to do that would be to drug us without our knowledge. Clearly not what the designer had in mind.
Unlike a human, with contextual awareness and basic reasoning skills, an A.I. might seek the most efficient means to an end while ignoring the secondary consequences of its actions. Take a robot given the simple and relatively innocuous goal of walking across a room. With no furniture in the way, a straight line is the most efficient route – one a human and an A.I. would both likely take. Put a baby between the robot and its goal, however, and a human would take steps to avoid harming it, stepping over or around it. An A.I. with only one instruction (walk across the room) would take no steps to avoid the baby, and might step on or otherwise hurt it. Since “don’t harm the baby” was not part of its instructions, the A.I. doesn’t “care” about the baby’s welfare, only about following its instructions.
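To make that concrete, here’s a minimal sketch in Python. The grid, the `plan` helper and the cost values are all made up for illustration – this isn’t code from any real robot. The planner minimises nothing but the number of steps, the baby happens to sit on the most efficient route, and because the cost function never mentions the baby, the planner walks straight over it.

```python
from heapq import heappush, heappop

# A toy grid world: the robot starts on the left edge and must reach the
# right edge. One cell on the straight-line route is occupied by a baby.
WIDTH, HEIGHT = 7, 3
START, GOAL = (0, 1), (6, 1)
BABY = (3, 1)  # sits directly on the most efficient path

def plan(step_cost):
    """Cheapest-path search that only 'cares' about whatever step_cost counts."""
    frontier = [(0, START, [START])]
    visited = set()
    while frontier:
        cost, pos, path = heappop(frontier)
        if pos == GOAL:
            return path
        if pos in visited:
            continue
        visited.add(pos)
        x, y = pos
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < WIDTH and 0 <= nxt[1] < HEIGHT:
                heappush(frontier, (cost + step_cost(nxt), nxt, path + [nxt]))

# Naive objective: every step costs 1 and nothing else matters.
naive_path = plan(lambda cell: 1)
print(naive_path)          # the straight line across the room
print(BABY in naive_path)  # True - the baby was never part of the objective
```

The algorithm itself isn’t the point; the point is that the only things this planner will ever avoid are the things that show up in its cost function.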
In that example, the solution is relatively simple: instruct the robot not to hurt the baby. But what about more complex situations? How do we make sure an A.I. doesn’t take unforeseen actions when we tell it to, say, deliver water to drought-ravaged parts of a country? Like a misbehaving genie, the big danger is that the A.I. misinterprets our intentions and harms humans with the actions that, by its calculation, most effectively carry out whatever instructions it was given.
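Continuing the same sketch (reusing the `plan` helper and constants above), the “simple” fix is to add the baby to the cost function. It works – but only for hazards we remembered to list, which is exactly why open-ended instructions like the water-delivery example are so hard to get right.

```python
# The obvious patch: make stepping on the baby expensive.
HAZARDS = {BABY}
patched_path = plan(lambda cell: 1000 if cell in HAZARDS else 1)
print(BABY in patched_path)  # False - the planner now detours around the baby

# The catch: only hazards we explicitly enumerated carry a penalty. Anything
# we forgot to list still costs 1 to step on, so the planner will trample it
# just as readily as it trampled the baby in the first run.
```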
This is the real danger of artificial intelligence that the media doesn’t talk about - the real but unsexy side that doesn’t get the attention it deserves, while the “killer robots” fantasy has hogged the spotlight for decades.