Unlocking the power of AI while safeguarding personal data sovereignty
Generative artificial intelligence is one of the biggest trending topics in tech right now, and much of that attention stems from the controversy surrounding it, chiefly the ethical questions raised by its use.
Gen AI requires large datasets to learn, which often include original creative works used without authorization from their creators, and the technology can be used to produce questionable content, including deepfakes. These issues call for consistent governance to ensure that decisions about furthering AI development are made with societal principles in mind. To help address this, SAS Institute Inc. put a governance model in place to focus on matters associated with AI oversight.
“My job is to sit at this intersection of policy governance, the actual technology capability itself and how it intersects with people,” said Reggie Townsend (pictured), vice president of data ethics at SAS. “Governance does slow us down. I would argue that taking a beat from time to time is a good idea. As an example, how many of us get to drive as fast as our cars will allow us? We have speed limits for a reason.”
Townsend spoke with theCUBE industry analyst John Furrier at Supercloud 4, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the evolving AI landscape, the importance of governance and ethical inquiry in AI development and the legal and technical challenges associated with gen AI.
Navigating AI ethically
Two highly active areas of conversation surrounding AI are the open-source aspect of the technology and the increase in global access, both of which bring government involvement into question. AI can act as a competitive advantage for a company, but it can also give an edge to governments and their adversaries, who can use the technology to mount more sophisticated cyberattacks at a faster rate.
“Whether it be business, military, healthcare, you name it, we have to have humans involved who are determining process and procedure early on, well before the first line of code is written so that we have an understanding of how we want to deploy the AI,” Townsend said. “It would be a horribly bad idea to say, ‘we’re just going to create a bunch of AI drones to go fight wars for us.’ However, it may be a perfectly fine idea to say, ‘I want AI to turn on my coffee maker in the morning.’”
As AI development continues and implementation increases across the globe, companies are gaining confidence in the technology and becoming bolder in their use of it. Many organizations, however, remain more cautious and less confident when it comes to long-term deployment and the customers who will be affected.
“We’re slowing down when it comes to products that are going to impact our people, whether they be chatbots, or triggering decisions for loan decisions and all those sorts of things and understandably so,” Townsend said. “There is a bit of a wait-and-see around generative largely because of some of the legal dynamics there, as well as the technical dynamic and accuracies. You don’t want to build on a foundation just to see that foundation crumble in [the] next year.”
Differing AI approaches
Approaches to AI also vary by country, with the U.S. and the E.U. each tackling gen AI in their own way.
“The U.S.-centric approach has been, ‘Let’s go slow on the regs, let’s appreciate where the tech is headed, and then let’s use common and case law to build the necessary precedents to regulate that legally,’” Townsend said. “On the opposite side, you’ve got the European view of ‘Let’s put a regulatory regime in place based on what we expect out of the technology in the next 10, 15 or 20 years.’”
The conversation ended with the pair stressing the importance of personal data sovereignty in an AI-optimized world. AI has the potential to transform various aspects of society, from work patterns to creativity to technological infrastructure and data management. Embracing AI is one of the keys to surviving in the coming decades, Townsend explained.
“Those who are at greatest harm, those who are most vulnerable today are those who aren’t participating on the front end of this technology,” Townsend said. “What I’m much more concerned about in the short term, is one’s ability to participate in the economy in the next five to 10 years.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of Supercloud 4:
Photo: SiliconANGLE