Are The Dangers Of AI More Hazardous Than We Think?

First Published: December 28, 2018
Estimated Reading Time: 6 minutes
Most of us are familiar with the blockbuster movie concept, perhaps best expressed by the early Terminator films, of artificial intelligence (AI) "taking over" and ultimately wiping out humanity. While such movies are complete fiction, the dangers of AI are very real. And, if we listen to people such as Elon Musk and even Julian Assange, that danger is more urgent than even the most paranoid of us suspect.
It is perhaps telling that many who are critical of calls to regulate the spread of these technologies usually have connections of one kind or another to the very big businesses that make untold millions from the sale, implementation, and eventual rollout of such intelligence-based technologies. This suggests, does it not, that the collective interests of humanity are perhaps not at the top of such corporations' lists of priorities. And given the high cost to us all if the "doomsday scenario" of people such as Elon Musk proves right, that should concern everybody greatly.
We will look at some of the potential dangers of the continuing intertwining of intelligence and machines in a moment. There are countless ways this intelligence is working its way into our lives, far too many to examine individually in one article. What we are concerned with here is the risk of not gaining control of such advancements. With that in mind, let's remind ourselves of some of what Elon Musk has had to say. While there is no doubt as to the controversy he sometimes courts, his mind, contacts, and genuine expertise in his field surely lend his words more than adequate weight. In short, we should not dismiss his thoughts without at least serious consideration.
More Dangerous Than Nukes?
At the start of 2018, Elon Musk made the statement above regarding AI. In full, Musk stated, "I think the danger of AI is much bigger than the danger of nuclear warheads – by a lot. Nobody would suggest we allow the world to just build nuclear warheads if they want. That would be insane. And mark my words, AI is far more dangerous than nukes."
One might think that such a statement is a rather sensible point for debate. After all, more and more aspects of our daily lives are carried out by machine. And more and more aspects of our lives feature "smart technology" – thinking computers – artificial intelligence. Perhaps it is surprising, then, that Musk faced an absolute backlash for even suggesting such a scenario.
Musk also highlighted how there is very little "regulatory oversight" of AI, meaning that anyone can develop and then sell almost anything. If a problem were to arise, then given the lightning speed with which computers can process and act on information, by the time we became aware of it, it really could be too late to prevent considerable damage, if it could be prevented at all.
Musk, like several other researchers and social commentators, has spoken many times on this subject – not only of the need to regulate the AI industry, but of the need to essentially slow the mass rollout of such technologies. There is no desire to remove AI from our lives; in many, many ways the improvements are exemplary. The aim is simply to take control of AI before it, with its cold, calculating, emotionless mind, takes control of us.
It’s Up To All Of Us To Remain Vigilant Of Personal Agendas!
It is something we highlight regularly, but it is worth reminding ourselves of it again at this point: the notion that black-budget programs and technology are decades ahead of what is available in the public arena, or ahead of what we even know about. When we think of how "futuristic" many of our lives are right now, it is mind-blowing to think what is available behind closed doors. And AI is something that shadow governments and intelligence agencies around the world are greatly interested in. For example, imagine a machine, disguised as something completely organic, perhaps even a person, that can not only record what it sees and hears in real time but can learn and adapt to its surroundings and situations. Espionage would move to an entirely different level. Perhaps it already has?
Like many other things, even life itself, it isn't artificial intelligence itself that is a bad thing; it is how it might be used once it is fully available. Again, that isn't to say such technologies should be hidden away lest someone abuse them. Such thinking, for example, would stop a great many things in their tracks. However, certain safeguards and overrides, agreed internationally and at the highest levels, need to be put in place.
Ultimately, there will always be a danger that such advancements will be hijacked and used in ways detrimental to the vast majority of humanity. It is up to us, then, the "vast majority", to remain ever watchful, and ever vigilant for those who sneak into positions of power and influence, be it political, corporate, or in any other field, looking to implement their own tunnel-vision agendas. Remember the old saying about the killer who comes with a smile and promises you everything you could want!
Dick, The “Learning” Robot
Maybe it is worth looking at the "learning robot" named "Dick", whose appearance strongly resembles that of the late science-fiction author Philip K. Dick. What's more, the "brain" of this advanced robot is built from Dick's own work, as well as from his conversations with other writers.
What is fascinating is that, should you ask Dick (the robot) a question, he will answer in the same manner and with the same thought process as the late author would. Perhaps even more remarkable, though, is that should you ask about something Dick has no information on, he can still provide a legitimate answer simply by working it out, using a mathematical technique called "latent semantic analysis". This allows the robot to "think" and essentially learn from the information it already has. For all intents and purposes, you might argue the robot is "alive".
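For readers curious what latent semantic analysis actually involves, the core idea can be sketched in a few lines of code: a term-document count matrix is compressed with a truncated singular value decomposition so that documents and queries can be compared in a low-dimensional "concept" space rather than by exact word overlap. The documents, query, and dimensions below are invented purely for illustration; this is a minimal sketch of the general technique, not the robot's actual system.

```python
# Minimal sketch of latent semantic analysis (LSA).
# The corpus and query here are hypothetical examples.
import numpy as np

docs = [
    "androids dream of electric sheep",
    "do androids dream",
    "humans dream of sheep",
    "robots and androids think",
]

# Build a term-document count matrix (rows = terms, columns = documents).
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Truncated SVD projects documents into a low-rank "concept" space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2  # number of latent dimensions kept
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # each row: one document in latent space

def query_vec(text):
    """Fold a query's word counts into the same latent space."""
    q = np.array([text.split().count(w) for w in vocab], dtype=float)
    return q @ U[:, :k]

def most_similar(text):
    """Index of the document most similar (by cosine) to the query."""
    qv = query_vec(text)
    sims = doc_vecs @ qv / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(qv) + 1e-12
    )
    return int(np.argmax(sims))

print(most_similar("androids dream"))
```

Because similarity is measured in the latent space, a query can match a document even when they share few exact words, which is roughly the sense in which such a system can "work out" an answer it was never explicitly given.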
Just to highlight the complexity of Dick's processing (thinking!), his response to a question asking "if he thinks" is interesting. He stated, "A lot of humans ask me if I can make choices. Or is everything I do and say programmed". So, from this statement, we know he is aware he is different from humans, and that he is aware that humans also think. Furthermore, rather than simply answering the question as a robot would, he takes the time to elaborate and set up his answer. He then continues, "The best way that I can respond to that is to say that everything humans, animals, and robots do is programmed to a degree". So, from this, he has thought about the question, and has even likened the behavior of humans and animals (another life-form he is aware is different from both himself and humans) to his own.
Killer Robots – The Third Revolution In Warfare After Gunpowder And Nuclear Arms!
With the above in mind, then, what should we make of the news and predictions of the merging of humans and machines creating real-life cyborgs? Or, even more chilling, the creation of thinking, intelligent, and self-learning "killer robots", referred to by some as the "third revolution in warfare, after gunpowder and nuclear arms"? Indeed, for most of us, the first image that springs to mind is likely the Terminator robot killing machine.
Perhaps the Lethal Autonomous Weapons System (LAWS) should concern us most, not least because it determines, with no human intervention, which targets to engage. And those targets could, one day, include humans. As we have already seen with intelligent "learning robots", should the framework exist, once one robot "learns" something of its own accord, that knowledge can transfer to every other machine on the network. Given the lack of human intervention, what are the risks of such programs "learning" to select new targets, ones previously off-limits? Surely that is not as unrealistic as many insist.
What's more, this technology is likely to be available "within years, not decades". The individual "robotics components" already exist; it is now a case of combining them. And once more, if this is the technology available in the public domain, what is going on in black-budget projects? Can we really, with no regulatory body to oversee such developments, trust that such technologies won't fall into the hands of the machines themselves?
Perhaps as much as the rapid development of AI causes us to marvel at the advanced "intelligence" of such machines, it also forces us to ask exactly what it means to be human. What makes us human? What is the essence of humanity? And can that ever be replicated artificially?