UPDATED 01:06 EDT / SEPTEMBER 02 2016

It’s the end of the world as we know it and AI feels fine

“It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop …”

For many of those born before the 1990s, those lines from the movie The Terminator were their introduction to artificial intelligence (AI). And like the film’s much wiser older brother, Ridley Scott’s Blade Runner, they show us that once we humans have been exponentially outgrown by our machines of loving grace, the future looks like a grim place to be.

According to some futurists, we are mere decades away from AI far brighter than anything we can imagine right now: Artificial General Intelligence (AGI) and, beyond it, Artificial Superintelligence (ASI) are just around the corner, we are told. If that prediction comes true, the world will change profoundly, and life will no longer be as we know it once we reach the Singularity.

Oxford University philosopher and AI expert Nick Bostrom says this will happen within the next three decades. He also believes that if AI creators and policymakers are not fastidious in how they manage AI, our machines may well wipe out the human species.

Killing us softly

Serious business, but we shouldn’t get too carried away. A group of tech giants is reportedly working out how to create AI responsibly, with social progress in mind, rather than making dystopian parables come true. Google, Microsoft, Amazon, Facebook and IBM are addressing concerns more prosaic than the obliteration of the human race, such as mass unemployment or intelligent computers controlling horrible weapons.

The New York Times reports that this industry group dealing with AI ethics has yet to be named and is rather hush-hush, but its objective will be to ensure that “AI research is focused on benefiting people, not hurting them.” The effort follows a new Stanford University report on the future of AI and the many people it will affect in the near term.

The worry, of course, is that tech companies, like most businesses, focus on the “bottom line,” and that the race to create the best AI might take place without an equal focus on the consequences of their creations. The Stanford report, titled Artificial Intelligence and Life in 2030, says that it won’t be possible to regulate AI, since governments are often too slow to catch up with advanced technologies.

This is one of the reasons for the industry group: the tech companies want to create a “self-policing organization,” according to the Times article. It would police not superintelligent AI, but machines intelligent enough to have a great impact on “health care, education, entertainment and employment,” as well as on the military, which the Stanford report believes will soon change significantly thanks to technological advances. As for the Singularity, the report doesn’t go that far.

Photo credit: KOMUnews via Flickr
