The Singularity Institute is already fighting the future war against rogue artificial intelligence, researching ways to keep A.I. from becoming hostile when it reaches its pinnacle: free thought.
Made up of eight intelligent men and women from a variety of fields, The Singularity Institute works to keep The Terminator from becoming a terrifying reality. The group has spent years developing ways to keep computers from becoming our malevolent rulers, should A.I. ever reach such a point. If the Jeopardy!-winning supercomputer Watson is any indication, that future could be very near.
While The Singularity Institute doesn’t necessarily expect a free-thinking A.I. to become fixated on enslaving the human race, it does expect that if such a sentient computer began pursuing its own ends, it would focus on achieving specific goals with humanity on the back burner. A document outlining the group’s thinking on reducing catastrophic risks warns that a “broad range of AI designs may initially appear safe, but if developed to the point of a Singularity could cause human extinction in the course of optimizing the Earth for their goals.” The group believes that resources such as solar and nuclear energy are just a few of the things certain types of A.I. would be compelled to control.
In researching A.I., the Singularity Institute hopes to steer A.I. away from indifference to humanity and toward the best-case scenario: a safe A.I. compelled to help with efforts such as curing disease, preventing nuclear warfare, and otherwise furthering our race in a peaceful manner.
If you feel compelled to help stop the rise of the machines, The Singularity Institute is currently accepting $1 donations over at Philanthroper. A small price to pay to keep Skynet at bay, if I do say so myself.
Source: The Singularity Institute via Gizmodo