It’s true, says new report: AI will go ‘Black Mirror’ if we’re not careful
We are at the precipice of an artificial intelligence revolution, in which the technologies we create may be used for the good, the bad and the ugly, according to a new report.
The 100-page report is called “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.” Released on Tuesday, it combines the thoughts of 26 leading experts on AI from various institutions including Oxford and Cambridge universities, OpenAI and the Center for a New American Security.
The report begins by acknowledging that AI is making progress in certain fields, but notes that “less attention has historically been paid to the ways in which artificial intelligence can be used maliciously.” That is, unless you’ve been watching the dystopian thriller series “Black Mirror.”
Dr Seán Ó hÉigeartaigh, executive director of Cambridge University’s Centre for the Study of Existential Risk, one of the report’s authors, said in a separate statement that indeed it is now time to believe the hype regarding the power of AI.
What should we worry about? Well, the worsening of some issues we see today, such as bots dictating what we read, how we feel and whom we vote for. Ó hÉigeartaigh mentions drones being misused and becoming more pervasive than we would like, or AI taking the form of chatbots and extracting information from people online. The report goes further, too, warning that AI will improve the impersonation of human voices and the manipulation of video, creating an even more insidious form of fake news.
“It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass it,” said author Miles Brundage, a research fellow at Oxford University’s Future of Humanity Institute. “It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labor.”
Much like an episode of “Black Mirror,” in fact, the report also touches on facial recognition and how such technology could be used by bad actors to find and destroy targets using explosive-equipped drones or robots. It sounds far-fetched, but the authors believe it’s very much a possibility.
The researchers, echoing Elon Musk – who recently left the board of OpenAI – said creators of such technologies must grasp all possible malicious uses of their developments. Ethical frameworks need to be devised and policymakers need to get to grips with what is being developed. On top of that, there must be total transparency around the development of AI and the public needs to be aware of what’s happening.
Image: Peter Linehan via Flickr