Superintelligent Machines with Human Morals Would Be a Disaster
Nothing is more frightening to a human being than a spaceship touching down from a distant galaxy filled with superintelligent beings capable of destroying humanity outright. Humans have been at the top of the food chain for millennia; suddenly being subject to the whims of more advanced creatures would be hard to take. Our intelligence might be so feeble compared to that of our visitors that we would seem like unicellular organisms to them.
But humans may be setting up a similar scenario by creating machines and robots so smart and so capable that one day they could begin to use the earth and everything on it for their own ends, putting an end to humanity. Consider that superintelligent machines could create offspring vastly smarter than anything we ourselves have built. It is a slippery slope, and things could quickly get out of hand.
Of course, science fiction writers have been foretelling this type of scenario for well over a century, but today some of the smartest scientists on the planet are decrying human complacency in the face of this threat. It is no longer a figment of the imagination: scientists now believe there are no limits to how smart robots and machines can become.
What Goes Around, Comes Around
Last July we looked at “Visions of the Los Angeles That Could Have Been” and how shortsightedness produced a sprawling, smog-filled city instead of a subway and monorail transportation system and more parks, playgrounds and beaches for LA residents. We also wrote “What Will The Job Market Look Like In 2045 When Machines Do All The Work?” and concluded that really smart machines and robots are likely to be carrying much of the load.
Since those articles were written, it has become clear that superintelligent machines and robots are on the way, prompting leading scientists to raise concerns about the potential threat to the survival of humanity.
Heavyweights Weigh In On Threat of Superintelligent Machines
On July 3, 2014, Stephen Hawking, Director of Research at the Centre for Theoretical Cosmology, Cambridge; Max Tegmark, MIT physicist; Stuart Russell, Professor of Computer Science at Berkeley; and Frank Wilczek, Nobel Laureate and professor of physics at MIT, contributed a piece to the Huffington Post decrying the complacency of people in general, and technologists and scientists in particular, about the very real threat of supersmart robots “going rogue.”
Hawking and his colleagues describe a situation similar to what occurred in Los Angeles, where a blind hunger for progress, combined with economic incentives, led policymakers to adopt technologies that in the end created much human suffering in the form of pollution and time wasted in traffic jams. Are humans doing enough thinking about the future?
Where Is the “Hope for the Best, Plan for the Worst” Thinking?
The “carrot” being held out before humans is the possibility that civilization could see the end of military conflict, tools and techniques to rein in infectious diseases, and an era of plenty in which poverty, water shortages and pollution are largely solved. But while many humans are excited about advanced technology, not enough are evaluating its unintended consequences.
Not If, But When: Studying The Possibilities
The aforementioned article by Hawking et al. mentions four organizations that are looking at these issues:
- Cambridge Center for Existential Risk
- Machine Intelligence Research Institute
- Future of Humanity Institute
- Future of Life Institute
In a future article we will look into these organizations and discuss whether they provide a sufficient level of research to help humans cope with an increasingly complex and risky robot future.