UPDATED 15:00 EST / MARCH 07 2019

AI

Q&A: How AI is cultivating a responsible community to better mankind

Artificial intelligence initiatives powered by big data are propelling businesses beyond the capacity of human labor. While AI tech offers an undeniable opportunity for innovation, it has also sparked a debate around potential misuse through the vast reach of programmed biases and other problematic behaviors.

The power of AI can be comprehensively harnessed for good by fostering diverse teams focused on ethical solutions and working in tandem with policymakers to ensure responsible scale, according to Janet George (pictured), fellow and chief data officer at WD, a Western Digital Company.

George spoke with Lisa Martin (@LisaMartinTV), host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the Stanford Women in Data Science event in Stanford, California. They discussed the range of possibilities in AI and how WD is leveraging the technology toward sustainability.

[Editor’s note: The following answers have been condensed for clarity.]

Tell us about Western Digital’s continued sponsorship and what makes this important to you.

George: Western Digital has recently transformed itself … and we are a data-driven … data-infrastructure company. This momentum of AI is a foundational shift in the way we do business. Businesses are realizing that they’re going to be in two categories, the ‘haves’ and the ‘have-nots.’ In order to be in the ‘have’ category, you have to embrace AI … data … [and] scale. You have to transform yourself to put yourself in a competitive position. That’s why Western Digital is here.

How has Western Digital transformed to harness AI for good?

George: We are not just a company that focuses on business for AI. One of the initiatives we are doing is AI for Good and … Data for Good … working with the UN. We’ve been focusing on trying to figure out the data that impacts climate change. We’re collecting data and providing the infrastructure to store massive amounts of species data from the environment that we’ve never actually collected before. Climate change is a huge area for us, [as are] education … [and] diversity. We’re using all of these areas as a launching pad for Data for Good and trying to use data … and AI to better mankind.

Now we have the data to put out massively predictive models that can help us understand what the change would look like 25 years from now and take corrective action. We know carbon emissions are causing very significant damage to our environment, and there’s something we can do about it. Data is helping us do that. We have the infrastructure and economies of scale. We can build massive platforms that can store this data, and then we can analyze this data at scale. We have enough technology now to adapt to our ecosystem … and be better in the next 10 years.

What are your thoughts on data scientists taking something like a Hippocratic Oath to start owning accountability for the data that they’re working with?

George: We need a diversity of data scientists to have multiple models that are completely diverse, and we have to be very responsible when we start to create. Creators have to be responsible for their creation. Where we get into tricky areas is when you are the human creator of an AI model, and now the AI model has self-created because it has self-learned. Who owns the copyright when AI becomes the creator? For the group of people responsible for creating the environment and creating the models, the question becomes: How do we protect the authors, the users, the producers, and the new creators of the original piece of art?

You can use the creation for good or bad. The creation recreates itself, like AI learning on its own with massive amounts of data after an original data scientist has created the model. Laws have to change; policies have to change. Innovation has to go on, and at the same time, we have to be responsible about what we innovate.

Where are we as a society in starting to understand the different principles and practices that have to be implemented in order for proper management of data to enable innovation?

George: We’re debating the issues. We’re coming together as a community. We’re having discussions with experts. What are we seeing as the longevity of that AI model in a business setting, in a non-business setting? How does the AI perform? We are now able to see the sustained performance of the AI model.

Policymakers are actively participating. We don’t want innovators to innovate without the participation of policymakers. We want the policymakers hand-in-hand with the innovators so we have the checks and balances in place and we feel safe. We need psychological safety for anything we do. Imagine having AI systems run our lives without having that psychological safety. This knowledge has to come back and be part of discussions … so we can change the regulations and be prepared for where this is going.

Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the Stanford Women in Data Science event.

Photo: SiliconANGLE
