UPDATED 11:50 EDT / MARCH 28 2025

Reggie Townsend, vice president of data ethics practice at SAS Institute Inc., talks with theCUBE about responsible AI during the Tech Innovation CUBEd Awards 2025 interview series.

How SAS champions responsible AI with ethical, transparent innovation

For artificial intelligence to serve humanity positively, responsible AI must be a priority. SAS Institute Inc. has helped set the ball rolling by operationalizing ethical AI practices that foster accuracy, transparency, fairness and accountability, according to Reggie Townsend (pictured), vice president of data ethics practice at SAS. 

“My job is to make sure wherever our software shows up that we don’t hurt people,” Townsend said. “I might say that oftentimes when harm is experienced, it’s a result of unintended consequence, and so we want to try to anticipate some of the unintended consequences and get out in front of those as best we can. I help to shepherd our AI oversight activities … everything from what we buy to what we sell as it relates to AI, making sure that we’ve got the adequate structures in place internally to ensure that we’re … doing our level best to keep out of harm’s way.”

Townsend spoke with theCUBE’s Rebecca Knight for the Tech Innovation CUBEd Awards 2025 interview series, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed why responsible AI should take center stage in the present digital landscape.

Through the responsible AI lens

In recognition of its commitment to responsible AI, SAS received a CUBEd “Innovative Responsible AI Initiative” award. The company remains committed to ensuring that AI systems are designed, developed and deployed with human needs, values and well-being at the core, according to Townsend.

“We have what we call data ethics principles,” he said. “The first one that we really focus on is human-centricity. We talk about human agency, well-being, and equity … with an idea of making sure that when we are in a situation, we have to make tough calls about whether we are going to deploy a certain capability into a certain part of the world … or whether we’re going to sell to a certain kind of customer for a certain type of purpose, we really want to examine really closely how are the humans centered in all of this?”

By prioritizing human principles, responsible AI development helps create systems that are ethical, inclusive and beneficial to society as a whole, according to Townsend. This principle explains why the AI sector needs a human-centered approach. 

“We like to examine, in a banking scenario, ‘Who’s the potential[ly] most vulnerable when we’re making loan decisions?’” he said. “In a fraud example, ‘Who’s the potential[ly] most vulnerable when we’re declining credit to individuals?’ Now, oftentimes, those are decisions that our customers ultimately have to make, but as a platform provider of these capabilities, we want to help counsel them. We want to help them get to points at which they are also proving themselves trustworthy. We just see that as a part of our obligation.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of the Tech Innovation CUBEd Awards 2025 interview series:

Photo: SiliconANGLE
