At VMware Explore, experts debate questions around AI bias and the future of data privacy
At the VMware Explore conference in Las Vegas on Tuesday, the focus was on what artificial intelligence can do. Today, the company turned its attention to what AI should do.
Interest in the ethics of artificial intelligence is rising with the technology’s increased adoption, thanks to rapid growth in generative tools such as OpenAI’s ChatGPT. In a briefing session for the media today, VMware Inc. provided an opportunity for industry experts to discuss the potential pitfalls facing organizations that are rushing to adopt large language models trained on data that could be biased, private or just flat wrong.
“There’s a lot all of us collectively don’t know,” Chris Wolf (pictured, middle), vice president of VMware AI Labs, said during the media session. “There’s a lack of industry standards. We want to be able to help customers understand correctness of AI as well as explainability of results.”
Complicated math
Part of the dilemma confronting many organizations interested in deploying AI is a basic lack of understanding around what the technology is and how it works.
“People are still a little confused about what AI is and isn’t,” said Meredith Broussard (left), associate professor at the Arthur L. Carter Journalism Institute of New York University. “AI is just math, very complicated, beautiful math. All of the biases of the real world are reflected in the data used to train the model.”
Those biases are what have some industries worried about the long-term implications of using models that may disproportionately discriminate against segments of society. Broussard cited one analysis of mortgage-approval algorithms that found lenders were 80% more likely to reject Black applicants and 70% more likely to turn down Native American applicants.
“All of that bias was being fed into the model and it’s not surprising that it was making biased decisions,” Broussard said. “We are going to have hard conversations about race, gender and disability in the office. We can tweak the models’ outputs mathematically to make it more fair.”
Data protection
In addition to the potential for bias, there is also the risk that AI models will inadvertently expose proprietary company or customer data. That has led VMware to move cautiously in how it uses generative AI internally, according to Wolf.
During the Explore gathering this week, AI startup Hugging Face Inc. unveiled SafeCoder, a code-generation assistant designed to ensure that code remains within a virtual private cloud during the training and inference stages. VMware is offering SafeCoder on its platform, in addition to using it within the company.
“It had really good contextual awareness of how to assist our software engineers,” Wolf said. “For VMware, our source code is our business. It’s very important that we maintain privacy and control of that data.”
There is also the possibility that generative AI will spawn a cottage industry of users who leverage the technology simply to make things up. A noteworthy example discussed during the media briefing occurred earlier this year in New York, where an attorney was found to have used ChatGPT to cite nonexistent court cases in a legal filing on behalf of his client.
“If AI is assisting us, the human job is to make sure that AI is correct,” Wolf said. “Humans are still going to have to supervise AI.”
The human element will be central to how generative AI is used going forward, not just in supervising the models themselves but in accepting the inevitability that the technology will become part of daily life.
“People are really scared of artificial intelligence,” Broussard said. “AI is not coming to take your job. We are going to see people using AI, but there is no coming robot apocalypse. It’s not reasonable to expect that’s what the world is going to turn into.”
What the world will become is a place where humans and machines evolve together, a prospect that has already raised stress levels in many sectors of society. That will require a renewed focus on practical solutions, and part of the message from the discussion at VMware Explore today could be summarized in two words: Chill out.
“The first thing we need to do is take a deep breath, honestly,” said Karen Silverman (right), founder and CEO of The Cantellus Group. “These tools are also going to require a lot more cognitive work from all of us. Our role is going to be very integrated in terms of how these tools are used.”