Cambridge thinkers propose more research into “extinction-level” risks to mankind.
What are you supposed to be doing right now? The washing up? I wouldn’t bother, mate. We’re all doomed anyway.
At least, that’s what popular culture tells us. Even if the environment doesn’t turn against us with fiery abandon, there are always meteors, robot uprisings, nanomachine swarms and sentient viruses to finish the job. The only thing left to do is take bets on which terrifying cataclysm will kill us first, and ranking those cataclysms is pretty much what a new initiative proposed by a gaggle of boffins at Cambridge University would do.
The Centre for the Study of Existential Risk (CSER) would analyze risks to the future of mankind, particularly those we could be directly responsible for. The Centre, proposed by a philosopher, a scientist and a software engineer, would gather experts from policy, law, risk assessment and scientific fields to investigate and grade potential threats. According to the Daily Mail, the proposal is backed by Lord Rees, who holds the rather grand-sounding post of Astronomer Royal.
Judging by comments from philosopher and co-founder Huw Price, the threat of artificial intelligence seems to sit high on the centre’s agenda.
The problem, as Price sees it, is that once an artificial general intelligence (AGI) becomes smart enough to write its own computer programs and create adorable little AGI babies (applets?), mankind could find itself with a serious competitor.
“Think how it might be to compete for resources with the dominant species,” says Price. “Take gorillas for example – the reason they are going extinct is not because humans are actively hostile towards them, but because we control the environments in ways that suit us, but are detrimental to their survival.”
“Nature didn’t anticipate us, and we in our turn shouldn’t take artificial general intelligence (AGI) for granted.”
Price quoted Skype co-founder Jaan Tallinn, the software engineer among the centre’s three proposers, who once said he sometimes feels he’s more likely to die from an AI accident than from something as mundane as cancer or heart disease. Tallinn has spent the past few years campaigning for more serious discussion of the ethical and safety aspects of AI development.
“We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous,” writes Price. “With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.”
Source: The Register