Most of us are familiar with such blockbuster movie concepts perhaps best expressed by the early Terminator films, which revolve around the notion of artificial intelligence (AI) “taking over” and ultimately wiping out humanity. While such movies are complete fiction, the dangers of AI are very real. And, if we listen to people such as Elon Musk and even Julian Assange, that danger is more urgent than even the most paranoid of us suspect.
It is perhaps interesting that many who criticize such calls to regulate the spread of these technologies around the planet usually have connections of one kind or another to the very big businesses that make untold millions from the sale, implementation, and ultimate rollout of such intelligence-based technologies. This tells us, does it not, that the collective interests of humanity are perhaps not at the top of such corporations' lists of priorities. And given the high cost to us all if people such as Elon Musk and his "doomsday scenario" are right, that should concern all of us greatly.
We will look at some of the potential dangers of the continuing intertwining of intelligence and machines in a moment. There are countless ways this intelligence is working its way into our lives, far too many to examine individually in one article. What we are concerned with here is the risk of not gaining control of such advancements. With that in mind, let's remind ourselves of some of what Elon Musk had to say. While there is no doubt as to the controversy he sometimes courts, his mind, contacts, and genuine expertise in his field surely lend his words more than adequate weight. In short, we should not dismiss his thoughts without at least significant consideration.
More Dangerous Than Nukes?
At the start of 2018, Elon Musk issued the statement below regarding AI. In full, Musk stated:
I think the danger of AI is much bigger than the danger of nuclear warheads – by a lot. Nobody would suggest we allow the world to just build nuclear warheads if they want. That would be insane. And mark my words, AI is far more dangerous than nukes.
One might think that such a statement is a rather sensible point for debate. After all, more and more aspects of our daily lives are carried out by machine. And more and more aspects of our lives feature "smart technology" – thinking computers – artificial intelligence. Perhaps it is surprising, then, that Musk would face an absolute backlash for even suggesting such a scenario.
Musk would also highlight how there is very little "regulatory oversight" regarding AI, meaning that anyone can develop and then sell almost anything. And given the lightning speed with which computers can process and act on information, if there were to be a problem, it could well be too late to stop any considerable damage by the time we became aware of it – if it could be stopped at all.
Musk, as have several other researchers and social commentators, has spoken many times on this subject – not only of the need to regulate the AI industry, but to essentially slow down the mass rollout of such technologies. There is no desire to remove AI from our lives; in many, many ways the improvements are exemplary. The aim is simply to take control of AI before it, with its cold, calculating, emotionless mind, takes control of us.
The video below is one of many online that features Musk speaking of this potential danger a little further.
A Very Brief History Of The Steps Toward Artificial Intelligence
At this point it is perhaps worth examining a brief history of artificial intelligence, so that we might understand not only such events today but also the, perhaps subconscious, motivation behind them.
A full history of artificial intelligence would take up at least one large book, perhaps even several volumes. So while we have neither the time nor the space to examine anywhere close to all the developments along such a technological journey, it is perhaps worth getting a very basic handle on what exactly led us to where we are today. And it is a history that can be traced back to antiquity, at least in the form of myths and legends.
And it is there where we will turn our attention first.
Accounts Of Mechanical Beings From The Gods Of Ancient Greece
Arguably one of the earliest examples of this can be found in Greek mythology and the tale of Talos, the gigantic bronze mechanical man who guarded the island of Crete.
Legend states that the Greek god of metalworking, metallurgy, and craftsmen (among others), Hephaestus, was responsible for building and “automating” Talos. Further legends state that Hephaestus also made the great weapons of the other Greek deities.
Perhaps another example, at least according to some researchers in ancient astronaut circles, is the story of Jason and the Argonauts, and more specifically a special beam placed on board the Argo that could speak to the crew and even answer questions. It also acted as a navigation system, apparently sensing dangers before they arrived.
While we will stress once more that the vast majority of people believe these myths and legends to be exactly that and nothing more, there is an intriguing argument to be made that such legends are based on misunderstood technology. Might, for example, this magical beam have actually been an ancient computer system with intelligence enough to navigate the Argo through the open waters?
Admittedly, it is perhaps unlikely. However, might such legends have formed from memories of a time in antiquity (even then) when an advanced civilization had such technology? Or might, as some claim, such technology have been brought here from elsewhere by extraterrestrials who we recall in our myths and legends as “the Gods” of ancient times?
The Mechanical Figures Of King Mu Of Zhou
It isn't just the legends and writings of ancient Greece, or even ancient Egypt, that speak of apparent robots and statues with intelligence. We might turn our attention to accounts concerning King Mu of Zhou, who ruled the lands of China in the 10th century BC. One particular account, written around 700 years after his rule, speaks of a mechanical engineer named Yan Shi, who presented Mu of Zhou with a wonderful gift.
According to the text, this gift was a metallic figure in human form that left the king staring at it “in astonishment” as it “walked with rapid strides” while “moving its head up and down”. In fact, so convincing was the mechanical display of automation that “anyone would have taken it for a live human being”.
When Yan Shi touched his hand to the chin of the figure it began to sing. A similar touch to the hand made it begin “posturing, keeping perfect time” as it did so. However, when the robot turned to the ladies present, winking at them and making “advances”, the king suddenly became enraged believing that the figure was indeed a real human.
In order to escape execution, Yan Shi immediately began to break the figure apart in order to show King Mu that it was not a person, but a mechanical figure made of “leather, wood, adhesive, and lacquer”. Even more intriguing, when the king peered inside the body of the figure, he could see the internal organs, muscles, and bones, perfectly constructed, “all of them artificial”.
The text ends by stating that King Mu was “delighted” by this gift. Needless to say, Yan Shi escaped execution.
A Long-Held Desire To Be God-Like?
Approximately a millennium later, back in Greece in the 1st century, there are accounts of Hero of Alexandria (sometimes referred to as Heron of Alexandria), who some consider to have been one of the greatest inventors and experimenters of his day.
Indeed, he would produce what could very well be the first vending machine, which dispensed a specific measure of "holy water" when a coin was inserted into a slot at the top. What's more, he would produce an "entirely mechanical puppet play", even inserting a very basic form of special effects with the use of metal balls dropped into a drum at a predetermined and "mechanically-timed" point in the 10-minute performance.
While these are quite obviously far from the intelligent, computer-programmed devices and human-looking robots we see today, they are very much the first tentative steps toward them. Why might this be? Again, could it be a burning collective memory of a connection to an advanced and lost civilization from antiquity? Such questions are perhaps better left for another time.
What is perhaps clear, though, is that for as long as there has been human civilization there has been a desire to create "life". And this desire goes beyond merely reproducing offspring. It appears to be much more "god-like" on our part, with a collective desire to bring an inanimate object "to life" – generally speaking, to make our lives easier; essentially, to serve us. And while there is perhaps nothing altogether wrong with that, it is at the very least an observation we should perhaps note about ourselves. As are the potential consequences of giving too much "life" to our creations. After all, the more "alive" they become, the less willing they would be to live that life for the benefit of others.
The Limited, But Significant Progress Under The Watchful Eye Of The Church!
Following the time and works of Hero of Alexandria – and perhaps in line with much of Europe entering the Dark Ages in the centuries that followed, due to the ruthless dominance of the Church and its hunting down of independent and scientific thinkers as heretics – it would be almost 1,000 years before significant progress was recorded.
In the centuries leading into the Age of Enlightenment, and most certainly during it, other examples of very early and basic artificial intelligence surface. Most of these were clockwork devices of automation – predetermined and limited, although still advanced for their era. We have examined previously, for example, some of the technological marvels that Leonardo da Vinci unleashed into the world.
Perhaps some of the most intriguing of da Vinci's works are the blueprints and instructions for machines that were far beyond his own era. Indeed, because of just how far ahead some of these ideas were, many researchers have suggested the possibility of some form of clairvoyance or even "intervention" from an outside intelligence.
There is no doubt, though, that despite these basic examples from history, the true birth of artificial intelligence as we understand it today came with the twentieth century.
The Modern Age – The First Significant Advancements
As we might expect, as the twentieth century took hold, especially during the technical revolution that we are still very much experiencing, more and more, and increasingly intricate and complex examples of artificial intelligence emerged. Certainly more in line with how we understand and perceive it today.
While many of the developments of the first half of the twentieth century were still far advanced from previous eras, it was in the years following the end of the Second World War when serious, rapid, and consistent advancement toward the abundance of “intelligent” machines and computer programs we know today truly began.
Perhaps key to the development of artificial intelligence was Alan Turing's 1950 paper, Computing Machinery and Intelligence. In it, he proposed that machines might be developed that would do more than just carry out simple tasks – that they might think for themselves. This would ultimately result in the Turing Test, designed to measure such intelligence. In the immediate years that followed, the first intelligent draughts and chess programs were developed.
Indeed, while scientists from across the board were loosely debating the notion of creating an intelligent robot in these years following the war, it was in the 1950s that the first real steps were taken toward the advanced machines and computer programs that we would recognize as artificial intelligence today. More specifically, in 1956, when artificial intelligence research and development was officially recognized as an area of academia.
The Summer Of 1956 – The Official Start Of Artificial Intelligence
Perhaps the summer of 1956 marked the official start of the era of artificial intelligence during the Dartmouth Conference in New Hampshire (officially the Dartmouth Summer Research Project on Artificial Intelligence), if only because many of those in attendance at the conference would eventually go on to make significant contributions to the development and understanding in the area.
It was during this conference, which lasted for eight weeks, that the notion was put forward that "every aspect of learning" could be broken down and analyzed so that "a machine can be made to simulate it". The proposal (put together the previous year by the organizers of the conference – the scientists John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester) would elaborate how they would:
…find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves!
It was the organizers' belief that "significant advances can be made" in at least one, if not several, of these specific areas through the joint effort of a "carefully selected group of scientists". And what's more, this advancement could come quicker than many thought possible.
While there had already been significant developments in automata – machines that can carry out preprogrammed and predetermined functions – it was the conference organizers' belief, especially McCarthy's, that there was a mountain of potential for the development of truly "intelligent" machines that could, essentially, think for themselves. It was his further belief that a joint effort of likeminded people willing to "devote time to it…could make real progress".
It would turn out he was correct.
Two Decades Of Rapid Developments
In the years that followed the crucial conference in the summer of 1956, advancements in artificial intelligence began to become ever-more rapid. In fact, the developments were so fast, with each promising even more advancements to come, that several government agencies began to fund their own research programs, perhaps the most significant of these being the Defense Advanced Research Projects Agency (DARPA) in the United States, with their main interest – at least initially – being a computer program that could “transcribe and translate spoken language”.
This initial burst of advancement was perhaps helped by the fact that computers could hold more and more information while, at the same time, becoming increasingly faster at processing that data. The algorithms in use also became more intricate and refined, with programs beginning to select the algorithm most appropriate to the problem at hand.
Due to this first decade of rapid advancement, there was perhaps a false optimism surrounding the research that these developments would continue at a similar pace. In 1970, for example, a little over a decade after the 1956 summer conference, one of the main contributors to such advances in artificial intelligence, Marvin Minsky, would tell Life Magazine that “a machine with the general intelligence of an average human being” would be developed within the next decade.
It is perhaps easy to understand such optimism. We might simply look to science fiction movies of that era set in “the future” to see how many imagined the world would be in the upcoming decades.
However, it soon became clear that the vision was ahead of the technology available. Consequently, when this resulted in advancements stalling compared to previous decades, interest, patience, and more importantly funding began to fade.
The “AI Winter!” Of The Late-1970s
Beginning in the mid-1970s, in part due to several projects that faltered somewhat, the artificial intelligence industry began to experience increasingly heavy criticism. This, in turn, would lead to a cutting of funding which, in reality, had been happening in a limited fashion since the mid-sixties. Indeed, after the promising years of the fifties and sixties, artificial intelligence appeared to slow down drastically.
There were several main reasons for this sudden slowdown, which led to a loss of almost all funding. For a start, there was a lack of the general computing power and memory available for the advancement required. There was also a lack of sufficiently large databases able to store the sheer amount of information and variables a computer would need to access in order to display abilities intelligent enough to replicate human thought.
In short, it appeared that the technology had yet to catch up to the vision and ideas of those in the artificial intelligence community.
There were also other reasons for the temporary decline in artificial intelligence research and the lack of human-like robots that could think and act for themselves – not least, capabilities not initially thought of as important for such technological development, such as facial recognition or awareness of one's own environment and surroundings. For example, most machines or computers could easily process data and make calculations much faster than a human, but would often not be able to negotiate a crowded room or recognize one person from another.
As a result of these developments, or lack thereof, many critics became ever bolder in their attacks, which only exacerbated the cutting of funds. However, as the seventies ended, things would turn around for AI.
The Real Arrival Of Artificial Intelligence – The Rampant Technological Development Of The 1980s
For all the advancements of the previous decades, the 1980s saw not only significant advancements in technology but arguably in a collective vision. And this was perhaps reflected in all areas of society and culture. Suddenly, the future felt as though it had arrived.
Perhaps the main reason for this kickstart in advancement was the sudden rise of expert systems in the corporate world – advanced computer programs that could replicate the decision-making process of a human being with advanced expertise in a certain field. It is important to note, however, that expert systems concentrated only on a very specific area of expertise and were unable to process anomalies outside of those parameters.
Even so, it is these expert systems that are widely believed to be the first real significant advancement to the artificial intelligence vision. It is also worth noting that the main reason for the sudden popularity of the expert systems in the corporate world during this time was the realization that such programs could save companies huge amounts of money.
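To make the idea concrete, an expert system of this era essentially encoded a specialist's decision-making as explicit if-then rules that a program fires in sequence. The sketch below is a minimal, hypothetical illustration of this "forward chaining" approach – the rules, facts, and domain (a car-repair diagnosis) are invented for the example and do not come from any particular historical system.

```python
# A minimal sketch of a 1980s-style rule-based expert system.
# Knowledge is stored as explicit if-then rules; the engine repeatedly
# fires any rule whose conditions are satisfied by the known facts.
# The diagnostic rules below are invented purely for illustration.

RULES = [
    # (set of conditions that must all hold, conclusion to add)
    ({"engine_cranks", "no_spark"}, "suspect_ignition_coil"),
    ({"engine_cranks", "has_spark", "no_fuel"}, "suspect_fuel_pump"),
    ({"suspect_ignition_coil"}, "recommend_replace_coil"),
]

def infer(facts):
    """Forward-chain over RULES until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # Fire the rule if all its conditions are known facts
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# A mechanic-style consultation: the user supplies observed symptoms
print(sorted(infer({"engine_cranks", "no_spark"})))
```

Note how the system is both powerful and brittle in exactly the way described above: within its narrow rule set it reaches a specialist's conclusion instantly, but a fact outside its vocabulary (say, "strange_noise") simply fires nothing.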
The lack of thought outside of these expert-system barriers wasn't necessarily a problem, but the use of common sense reasoning is something that continues to be researched to this day. Programs looking to address the problem, however, began back in the eighties. Widely agreed to be the first person to confront this problem was Douglas Lenat, who formed the Cyc project. He would state that the completion of such a task would take multiple decades and would involve literally teaching computers everything a human knows, if we expected them to use common sense.
The short video below looks at the common sense problem a little further.
Further (Theoretical) Advancements
Only a year into the eighties, the Japanese government, through the Ministry of International Trade and Industry, would announce the Fifth Generation Computer Project, with funding in excess of $800 million. The object of the program was for advanced computer programs and robots to be able to translate multiple languages, talk to a human and follow a normal conversation, and use acquired knowledge and common sense in order to "reason" as any other human would.
It wasn’t long before other governments around the world were investing in similar programs. And while it might not have been the only reason, it is almost certain that the balance of power in an ever-evolving technology-driven world was one of the main concerns of these respective governments.
However, many of the advancements were theoretical only, with many companies, organizations, and even governments not fully committing to significant further research and development. This would lead to another “winter” of funding in the late eighties and early nineties before steady interest and investment began once more.
Despite this second small blip in funding in the early years of the nineties, developments continued to increase as the 2000s came into view.
Into The Twenty-First Century
Following the countdown to the new Millennium, the advancement of technology continued at an ever more rapid pace. And as much of the world went digital in the Internet age, much more intelligent, or "smart", technology became part of our everyday existence – everything from phones and televisions to intricate alarm systems.
However, underneath this progress was a distinct lack of the togetherness that permeated the movement in the fifties, sixties, or even the 1980s. Instead, many of the developments were the result of private businesses and contractors, each in competition with the others. We will examine the notion further shortly, but we might recall once more Elon Musk's warnings about the lack of regulation in the AI market.
It is also worth examining a 2015 paper entitled Artificial Intelligence in the 21st Century (by Jiaying Liu, Xiangjie Kong, Feng Xia, and Xiaomei Bai) that stated there has been a significant “upward trend in growth in the 21st Century”. And what’s more, this rapid development of artificial intelligence has “advanced the development of human society in our time”.
However, perhaps in part due to the increased competition of the AI market as opposed to unified research, these advancements and indeed where they might lead are becoming “difficult to be understood”.
This is perhaps a worrying observation. If this rapid increase and rollout of smart technology and artificial intelligence are left even partly unchecked, any potential consequences might be realized too late.
The paper also states that the end goal of the development of artificial intelligence is "realizing a society where people and machines coexist harmoniously together". This, as we will move on to next, might prove to be a goal too far.
A Moment’s Pause For Thought!
We might take a moment to consider here the notion that, generally speaking, behind closed doors, especially in terms of technology with links to government or intelligence departments, research is often decades ahead of what the general public is told. It is perhaps very likely, then, that technology we are being told about now has been around and possibly discreetly utilized by a very select few for some time. Similarly, there could be technology available today that the general public will not know about for several years, perhaps decades.
So, with all of these advancements in artificial intelligence, should we be a little more collectively wary than we perhaps are? When we examine some of the consequences of these developments – both the intentional and unintentional ones – there are almost certainly great areas of concern about the increasing number of intelligent computer programs that share our everyday world. And when we factor in the ever-increasing desire to successfully develop a robot that not only thinks like a human but looks and acts like one also, those concerns ratchet up a notch.
While we will not stray too much into conspiracy territory here, if a government wished to infiltrate another with "robot replicas", chances are that by the time the majority of the world became aware of even the possibility, it would already have happened. Admittedly, this is a highly speculative scenario for which there is no evidence. However, the point is that we should perhaps keep collective tabs on the advancements in artificial intelligence outside of just recreation or entertainment.
It’s Up To All Of Us To Remain Vigilant Of Personal Agendas!
With the above in mind, then, we might turn our attention to the notion of black budget programs which are heavily funded by governments but operate very much out of public view.
When we think of how “futuristic” many of our lives are right now, it is mind-blowing to think about what might be available behind closed doors. And AI is something that shadow governments and intelligence agencies around the world remain greatly interested in. For example, imagine the advancement of a machine, disguised as something completely organic, perhaps even a person, that can not only record what it sees and hears in real-time but can learn and adapt to its surroundings and situations. Espionage would move to an entirely different level. Perhaps it already has?
Like many other things, even life itself, it isn't artificial intelligence that is the bad thing so much as how it might be used once it is fully available. Again, that isn't to say such technologies should be hidden away lest someone abuse them. Such thinking, for example, would halt a great many things in their tracks. However, certain safeguards and overrides, agreed internationally and at the highest levels, need to be put in place.
Ultimately, there will always be a danger that such advancements will be hijacked and used in ways detrimental to the vast majority of humanity. It is up to us, then, the “vast majority” to remain ever watchful and vigilant for those who sneak into positions of power and influence, be it political, corporate, or any other field, looking to implement their own tunnel-vision agendas. Remember the old saying of the killer coming with a smile and promising you everything you could want!
The most immediate concern for many regarding AI, though, is the loss of employment.
They Will Come For Your Jobs!
Perhaps the first point to consider about the leaps in the advancement of artificial intelligence is the impact they will have on peoples’ jobs and the loss thereof. This is already happening to varying degrees – and has been for some time.
Perhaps the best examples are the many programmed machines that perform the tasks that many humans once did in factories – and for the most part, they do so much faster. Maybe another would be the increasing number of “scan-your-own” machines in supermarkets. And while these machines most often require a certain degree of human intervention or overseeing, the numbers of employed people are reduced because of them, of that there is no doubt.
And while the technology is still in its infancy, advancements in driverless vehicles – perhaps even public transport such as buses, trams, and trains – will almost certainly lead to further duties that humans will no longer be required to carry out. In fact, according to an article in Forbes, anywhere from 30 to 40 percent of jobs are predicted to be lost to artificially intelligent robots and machines by the end of the 2020s. And those numbers are expected to rise over the decades that follow, with retail, transport, and manufacturing all expected to suffer large losses of human employees (we will look at this in more detail in a moment).
It is perhaps intriguing to imagine that by the time we head down the road of the 2030s, a visit to the supermarket or the mall will likely see you served by a machine – possibly one that looks, acts, and serves as a human would, but is anything but.
Many More Jobs Than We Might Think Are At Risk!
According to a 2015 article on the BBC website, there are more jobs than we might realize that artificially intelligent machines will likely take from humans before the 2020s are through. And some of these might be a little surprising. While we have already highlighted factory workers and even taxi or bus drivers, many articles you might read online have been put together by an intelligent computer program. One such program, named Quill, scans data and then produces copy that reads like a standard online article. Such a development might spell the end of many journalistic roles.
Or, perhaps more surprisingly, those who work in the medical profession as general practitioners, doctors, and even surgeons. An intelligent computer program, for example, can sift through "reams of data" significantly faster than any human being. What's more, such programs can suggest possible treatments and even spot the early signs of some cancers. Furthermore, although not unsupervised, many intelligent machines already assist during intricate surgical procedures. And while it is highly unlikely that machines will ever outright replace a human doctor, the intertwining of machines and humans in the medical world will almost certainly continue.
Even those working as waiters or waitresses might find themselves out of work, at least if the robot cocktail bar on board Royal Caribbean's cruise liner Anthem of the Seas develops further. In fact, it is not just the bartenders that are robotic – the entire bar is. Named Makr Shakr, the futuristic watering hole allows customers to order their drinks via a tablet, which then sends the order to the robot bartenders, who prepare the drink and send it on its way.
The short video below looks at the potential problem of artificial intelligence eventually taking over jobs currently performed by humans.
A Need For Counter-Measures To The Rapid Technological Expansion
It is almost certainly guaranteed that other jobs will also fall victim to automation and intelligent computer devices and programs over time.
Perhaps what needs to be considered alongside such technological advancements – which will perform such duties for less money and with increased accuracy and turnover – is how to proceed with those people who no longer have a job to go to. Perhaps other roles will surface as the world adapts. However, it would appear obvious that there will come a point when there are drastically more people than there are available (or indeed viable) jobs.
Although it is a move that is unpopular with some – not least those on the economic right – the only reasonable solution would be an introduction of a universal basic income. And while we will not get into the rights and wrongs of such a proposal here, it would appear to be something that will have to be introduced on a global level at some point in the future.
Of course, such a momentous shift would undoubtedly have knock-on effects of its own, which perhaps makes us appreciate why the developments in artificial intelligence, while fascinating, should concern all of us if appropriate countermeasures are not agreed and put in place.
There are, however, much more drastic and chilling possibilities to consider.
Much More (Potentially) Worrying Developments
The worry about jobs falling victim to artificial intelligence is most certainly a legitimate one. However, there are potentially much more pressing matters that could arise from the continuing development of such machines.
Those who have seen the 1980s movie RoboCop might recall a scene in the boardroom where a prototype robot security guard goes wrong and unloads multiple rounds into an unfortunate participant in the test. Fast-forward three decades and we can find very similar robots with machine guns patrolling locations in Israel and South Korea.
The killer patrolling device – named the SGR-A1 – has the ability to zero in on a target and then deliver its deadly rounds. The device comes with an "auto-mode" which – perhaps frighteningly – turns the decision-making over to the machine itself. There is no evidence that the device has been utilized in such a way, but there is the obvious potential that this could happen – as well as, perhaps, that the device might malfunction and switch to auto-mode of its own accord.
And we should perhaps take further note that even more advanced “killer robots” are being experimented with and developed, including unmanned aircraft – much more advanced than drones – that can carry out preplanned attacks. If we fear that robot security guards might one day go on the rampage around the facilities they are guarding, might it be possible that a squadron of “killer aircraft” might do likewise? As unlikely as it might be – at least to most of us who are not privy to advanced artificial intelligence data – it is still something that we should consider and guard against.
When we consider our next point, then those decisions that such a killer device might one day make take on even more disturbing tones.
They Will One Day Learn To Lie!
Part of the development of artificial intelligence is that it will one day be completely capable of lying to humans. Once this happens, it would appear that the touch paper will have been lit for a confrontation of some sort.
There has already been significant research in this area, both to see if such intelligence can develop the ability to tell lies of its own choice, as well as how it achieves such deception. And the results are both fascinating and potentially worrying in equal measure.
Of course, while the experiments that have taken place have generally revolved around robots lying to each other through learned behavior, there is obviously the potential, as we mentioned above, that they will eventually learn to lie to us. And what’s more, it would be a situation of our own making. After all, if such intelligent machines did master the art of deception, it would only be because of what they had been taught by their human creators.
As a side note here, it is perhaps worth briefly contemplating the notion, and indeed the reality, of self-replicating machines, something that numerous governments and organizations have developed to varying degrees of success and complexity. If such machines, which have the know-how to replicate themselves and, essentially, breed, were also to master the ability to lie, they could very well set about building up their own robot army. And given the technology that is, by definition, utilized by artificially intelligent machines, a coordinated attack could be a very real possibility.
While such claims border on nonsense for most people, they are, as we shall see as we continue, of real concern to many academics and those with expertise in the smart technology field. Of course, such organization might not necessarily result in an all-out attack.
The Ability To Reason And The Implications Thereof
While the first thing that comes to mind for most people is a violent takeover of power, we could find ourselves in the position of being held to ransom through some kind of robotic strike action. After all, if we have given robots the ability to think for themselves then the chances are that they will one day wish for better treatment and rights of their own.
Indeed, the ability to reason may lead to all kinds of developments with how we share our world with these intelligent machines. And as we might imagine, not all of them good, or for our benefit. While in an ideal world there would be a compromise between the two factions – humans on one side and robots on the other – our own history shows us that we have struggled to find a middle ground between ourselves for thousands of years. We might imagine, then, that should such a robot uprising take place, even if only for robotic rights, there would be resistance to such changes, and the very real chances of conflict.
While such experiments on artificial intelligence programs’ ability to reason, debate, and negotiate as a human would are perhaps limited at the moment, should artificially intelligent robots manage to develop the ability to the levels that the average human can, we might find ourselves on very shaky ground.
It is perhaps an unnerving thought that a creation of humanity could very well end up being our captors and, essentially, our overlords. Indeed, such warnings from blockbusters, and the age-old wisdom of being “careful what we wish for”, might prove to be eerily accurate. From that perspective, we might end up being the designers and architects of our own collective downfall.
The Ability To Be Aware Of Themselves And Each Other
One of the consequences of these developments – the ability to lie, the ability to reason, the ability to learn – is that these intelligent robots will develop self-awareness, as well as recognizing and perhaps even empathizing with other robots. Perhaps this last possibility is of most concern – especially if worries that intelligent robots might one day organize themselves for a potential takeover of humanity are accurate.
According to an article in The Independent in July 2015, basic experiments with this have already taken place. And much like the experiments with deception, the results are intriguing, to say the least. The article elaborates that a robot had passed a test for self-awareness – known as the King’s Wise Men test.
The test was adapted specifically for three robots. Two of these robotic participants were told a “dumbing pill” – which would prevent them from speaking – had been administered to them. Each of the robots was then asked which of them could still talk, and each attempted to respond “I don’t know”. However, only one of the robots could actually make a sound and, upon hearing what it seemingly recognized as its own voice, it exclaimed “I know now!”
While it is a small step, it is an important one.
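The logic of the adapted test is simple enough that it can be sketched as a toy simulation. The code below is purely illustrative – the robot ids, the function name, and the exact phrasing are our own hypothetical choices, not details from the actual experiment – but it captures the reasoning: a robot that attempts to answer and hears its own voice can deduce that it was not given the “dumbing pill”.

```python
# A minimal, hypothetical simulation of the adapted King's Wise Men test.
# Three robots; two are "dumbed" (muted). Each is asked whether it can
# still talk. A robot that attempts to answer and hears its own voice
# can conclude it was not given the dumbing pill.

def run_test(muted):
    """muted: set of robot ids told they received the 'dumbing pill'."""
    answers = {}
    for robot in range(3):
        # Every robot attempts to say "I don't know".
        spoke_aloud = robot not in muted  # only unmuted robots make a sound
        if spoke_aloud:
            # The robot hears its own voice and updates its belief.
            answers[robot] = "I know now! I was not given the dumbing pill."
        else:
            answers[robot] = "(silence)"
    return answers

results = run_test(muted={0, 1})
print(results[2])  # only the un-muted robot identifies itself
```

The self-awareness being tested is precisely the step in the middle: connecting the sound the robot hears to itself as its source.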
Indeed, if robots could recognize themselves – assuming that such smart machines have been made authentically human-like – then this would give them a distinct advantage over humanity. They would be able to move in human circles largely undetected.
Perhaps the eventual end game in such a scenario of increased awareness and organization from intelligent machines would be them breaking free from human control. And given that these smart machines would also have access to, and an understanding of, the complete range of human emotions and frailties, their desires and even needs could become drastically different from ours.
Might Intelligent Robots One Day Enslave Humanity?
Perhaps the biggest concern, then, is the day that the robots decide to rise up and take over, essentially enslaving humanity. And while that is a notion that for most people belongs purely in science fiction, there are legitimate studies and research both into the possibility of such an event, as well as how we might combat or even prevent it.
According to a 2015 study by researchers at Oxford University in the United Kingdom, there is a very real possibility that “artificial intelligence could make (humans) extinct”.
The report would state that “extreme intelligences could not easily be controlled”, and what’s more, they “would probably act to boost their own intelligence” as well as take control of all necessary resources for their “motivations”. Perhaps the most frightening aspect of the report, though, is:
…And if these motivations do not detail the survival and value of humanity, the intelligence will be driven to construct a world without humans!
This, the report concludes, makes artificial intelligence a “unique risk”. Furthermore, this assessment is shared by others, including those behind some of the most advanced technology of our contemporary era.
For example, Elon Musk – who we shall return to later – would claim that developing artificial intelligence was akin to “summoning the demon”, elaborating that it is always the case where someone is sure they “can control the demon (but it) doesn’t work out!”
Bill Gates would back up Musk’s concerns, stating that it will be “positive if managed well” when machines perform some of the menial tasks for humans, but that within decades such “intelligence (will be) strong enough to be a concern”.
Perhaps the late Stephen Hawking summed up the concerns best when he stated that it could “spell the end of the human race”.
Measures To Prevent The Takeover
With all of the above in mind, we should perhaps explore some of the measures being put in place to try and combat such a time when our robotic “helpers” begin to contemplate a reordering of the pecking order, with humanity coming below their artificially intelligent selves.
For example, quite possibly the most obvious way to do this is to teach such artificial intelligence the basics, and indeed the complexities, of right and wrong. However, given the nuance of human existence, there will surely come a point when these intelligent robots will have to use their own judgment – something that could have potentially deadly consequences.
We might take the driverless cars that some believe will become commonplace at some point in the not too distant future. Let’s say the car detects a pedestrian about to step into the road. Does it brake to avoid hitting the pedestrian and potentially cause an accident that might injure its passenger? Or would it protect the passenger and simply plow through the pedestrian, most likely killing them?
We might do well to note, however, that some of the most horrific atrocities in history have undoubtedly been committed by people who – through their own warped and twisted perception of, and vision for, the world – perhaps believed their actions were “right”. And if such a perception can be arrived at by humans who, for the most part, know what society deems “right and wrong”, why wouldn’t a robot or an artificially intelligent program do the same?
Perhaps that, then, takes us to the next human attribute it might be to our benefit to teach artificial intelligence – emotion, which, combined with the lessons of right and wrong, will, in theory, lead to empathy.
The Double-Edged Sword Of Emotions
Much like artificial intelligence itself, there are differing opinions regarding the wisdom of teaching robots the ability to feel and, ultimately, act on emotion. After all, this, by comparison to our potential robotic friends, is what makes humans distinctly different.
Indeed, such developments of technology that would allow machines and robots to feel and act on emotion might even raise the question of whether or not such advances would, essentially, make them “alive”. Perhaps our apparent desire to “play God” in terms of creating life might then force us to ask, and ultimately redefine, what life and being alive actually are.
It is certainly an intriguing prospect, and perhaps a frightening one. Remember, as well as emotions such as love or joy, such a machine would also be able to feel much darker emotions such as jealousy, anger, and even depression – the consequences of which could be disastrous for humanity.
Would we see robots suddenly attack their “human masters” in a fit of anger at how they had been treated? Might one robot become jealous of another? And what if, after all of this development, these intelligent machines suddenly realize they are nothing but that – machines? Would we see robots begin to simply shut down in despair, or even fling themselves off the top of a building? While such suggestions sound outrageous, they are all things that just might occur.
However, before we reach that stage in advancement, there is another, very human problem that not only requires action but actual recognition that it is indeed a problem.
The Very Real Dangers Of Monopolization
Before the robots do take over – for the sake of our article here – perhaps the more imminent danger is which humans control the market for artificial intelligence. If a person, or a small group of people, managed to own most, if not all, of the technology behind intelligent machines, it doesn’t take much of a stretch of the imagination to think that such an exclusive group would use the situation to their own ends. And anyone who might challenge them would face a potential robot army for their trouble.
Part of the problem is that while some of the most fascinating developments in technology have been achieved through the research of smaller groups, those groups are soon snapped up by corporate giants. This means that, very much like the media in many countries – perhaps specifically the United States – what appears to be many companies is, in reality, only a handful of large corporations.
While the potential dangers are surely there for all to see, there appears to be very little action taking place to ensure that such technological advancements do not end up in the hands of a very small portion of the planet’s population. Once that happens, there is the very real potential that a very small and select group of individuals could not only profit greatly from such a monopolization but could, quite literally, take over the world.
Admittedly, such suggestions sound as though they come straight out of a Hollywood blockbuster. However, we might stop and consider, even in the field of artificial intelligence alone, how many details and technologies from movies decades ago are now a regular part of our reality. In short, it is a danger that requires adequate protection against.
Hacking – Both A Danger And A Potential Back Door To Save Us!
Of course, whatever the rights, wrongs, and eventually accepted view on artificial intelligence – and whether it is, in fact, alive – the fact remains that a machine is still run by a central computer. And a computer can be hacked. We might imagine that such failsafe measures will automatically be put in place. However, if a machine – or a group of machines – had broken free of their programming, then those failsafe measures might become redundant. Indeed, the only way to take back control, short of physical attack, would be to hack into the programming of these potentially rogue machines.
There would, of course, be a concern here that an enemy nation could hack into the artificial intelligence of another and consequently have its robots perform all manner of bizarre tasks and behaviors – including attacking that country’s authorities and population. While this is most certainly a very real and genuine concern should developments advance to the point where artificial intelligence becomes completely intertwined with humanity’s existence, it might also provide humanity with a Plan B of sorts if things had taken a drastic turn.
Indeed, the ability to hack into each machine, or even a central computer system to which they might be connected, may provide humanity with the back door it needs to pull the plug if a rebellion of smart robots was one day to take place.
However, even then, given that we would be taking on a network of vastly intelligent machines, it could just be possible that such back doors would already have been closed if such a breaking of programming did take place.
These are obviously speculative events. With them in mind, though, perhaps it is worth turning our attention to some of the advanced projects of our contemporary era.
Dick, The “Learning” Robot
Maybe it is worth looking at the “learning robot” named “Dick”, whose appearance strongly resembles that of Philip K. Dick, the late science-fiction author. What’s more, the “brain” of this advanced robot is built from Dick’s own work, as well as from his conversations with other writers.
What is fascinating is that, should you ask Dick (the robot) a question, he will answer in the same manner and with the same thought process as the late author would have. Perhaps even more remarkable, though, is that should you ask something Dick has no information on, he can still provide a legitimate answer simply by working it out, using a mathematical technique called “latent semantic analysis”. This allows the robot to “think” and essentially learn from the information it already has. For all intents and purposes, you might argue the robot is “alive”.
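The report only names the technique; it gives no detail of how the robot applies it. As a purely illustrative sketch – not Dick’s actual implementation, and with a made-up toy corpus of our own – the core idea of latent semantic analysis can be shown with a small term-document matrix and a rank-1 truncated SVD computed by power iteration. The key property is that documents sharing few or no words directly can still be grouped together if their words co-occur elsewhere in the corpus – which is what lets such a system “work out” answers it was never explicitly given.

```python
# Toy sketch of latent semantic analysis (LSA), pure standard library.
# We build a term-document matrix and find the documents' coordinates on
# the strongest "latent topic" axis via power iteration on A^T A.

import math

docs = [
    "robot learns from text",
    "machine learns from text",
    "robot machine intelligence",
    "cooking recipes and baking",
]

# Term-document count matrix: rows = terms, columns = documents.
vocab = sorted({w for d in docs for w in d.split()})
A = [[d.split().count(w) for d in docs] for w in vocab]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

# Power iteration on A^T A converges to the top right-singular vector:
# each entry is a document's weight on the dominant latent axis.
v = [1.0] * len(docs)
for _ in range(100):
    w = matvec(transpose(A), matvec(A, v))
    n = norm(w)
    v = [x / n for x in w]

# Documents 0 and 2 share only "robot"; documents 1 and 2 share only
# "machine"; yet all three load on the same latent axis, while the
# unrelated cooking document's weight shrinks toward zero.
print([round(abs(x), 2) for x in v])
```

In a real LSA system the counts would be weighted (e.g. TF-IDF) and more than one latent dimension retained, but the mechanism – projecting words and documents into a shared low-rank space – is the same.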
Just to highlight the complexity of Dick’s processing (thinking!), his response to a question asking “if he thinks” is interesting. He would state: “A lot of humans ask me if I can make choices. Or is everything I do and say programmed”. From this statement, we know he is aware he is different from humans, and that he is aware that humans also think. Furthermore, rather than simply answering the question as a robot would, he takes the time to elaborate and set up his answer. He then continues: “The best way that I can respond to that is to say that everything humans, animals, and robots do is programmed to a degree”. So, he has thought about the question, and has even likened the behavior of humans and animals (another life-form he is aware is different from both himself and humans) to his own.
Killer Robots – The Third Revolution In Warfare After Gunpowder And Nuclear Arms!
With the above in mind, then, what should we make of the news and predictions of the merging of humans and machines creating real-life cyborgs? Or, even more chilling, the creation of thinking, intelligent and self-learning “killer robots”, referred to by some as the “third revolution in warfare after gunpowder and nuclear”. Indeed, for most of us, the first image that springs to mind is likely the Terminator robot killing machine.
Perhaps Lethal Autonomous Weapons Systems (LAWS) should concern us most – not least because they determine, with no human intervention, which targets to engage. And these targets could, one day, include humans. As we have already seen with intelligent “learning robots”, should the framework exist, once one robot “learns” something of its own accord, that learning can transfer to every other machine on the network. Given the lack of human intervention, what are the risks of such programs “learning” to select new targets – ones previously off-limits? Surely that is not as unrealistic as many insist.
What’s more, this technology is likely to be available “within years, not decades”. The individual “robotics components” already exist; it is now a case of combining them. And once more, if this technology is available in the public domain, what is going on in black-budget projects? Can we really, with no regulatory body to oversee such developments, trust that such technologies won’t fall into the hands of the machines themselves?
Perhaps as much as the rapid development of AI causes us to marvel at the advanced “intelligence” of such machines, it also forces us to ask questions such as: what exactly does it mean to be human? What makes us human? What is the essence of humanity? And can that ever be replicated artificially?
The stories, accounts, and discussion in this article are not always based on proven facts and may go against currently accepted science and common beliefs. The details included in the article are based on the reports and accounts available to us as provided by witnesses and documentation.
By publishing these accounts, UFO Insight does not take responsibility for the integrity of them. You should read this article with an open mind and come to a conclusion yourself.
Copyright & Republishing Policy
The entire article and the contents within are published by, wholly-owned and copyright of UFO Insight. The author does not own the rights to this content.
You may republish short quotes from this article with a reference back to the original UFO Insight article here as the source. You may not republish the article in its entirety.