The data that fuels artificial intelligence raises ethical questions. Who owns the data collected from healthcare devices or the information gathered from a Fitbit? The technology industry is now grappling with the principles of data collection and data democratization.
“There’s a real big gap, and I think probably part of what the industry has to do is not just build great new technologies, but sort of start to fill that gap with data education and literacy,” said Dawn Nafus (pictured), anthropologist at Intel and author of “Self-Tracking” and “Quantified: Biosensing Technologies in Everyday Life.”
While at the AI Intel Lounge during the South by Southwest event in Austin, Nafus spoke with John Furrier (@furrier), host of theCUBE, SiliconANGLE Media’s mobile live streaming studio, about establishing ethical guidelines within the parameters of AI used with personal data. (*Disclosure below.)
By building data analysis tools for people without technical skills, Nafus is on a mission to help everyday people understand their own data. In a recent book she co-authored with Gina Neff, titled “Self-Tracking,” the pair set out to empower and educate individuals about the devices that track data about their personal lives.
People are still struggling to understand what all the personal data collected means, Nafus explained.
“With wearables, we are in the classic trough of disillusionment,” she said. The trend for most people is to use wearables for three to four weeks to gather information. The problem is that the data — depending on a person’s access to it and level of understanding of it — can either help or be useless.
Nafus believes that AI will add more complexity to the average person’s comprehension of data. She envisions a moment when people will have to decide what is necessary for them to track and what they want the data to do for them.
Democratization, as Nafus defines it, means that non-business data must be accessible to individuals. “Democratization to me is also being able to ask questions … [and] building mechanisms for transparency,” she said.
According to Nafus, “The AI Now Report,” produced by the Obama administration and the NYU School of Law, proposed a couple of pathways to oversight. The findings revealed that using AI responsibly means increasing the diversity of AI researchers, as well as modifying the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act to clarify that neither stands in the way of independent auditing of AI systems.
The industry is also developing guidelines for the ethical use of AI data. An engineering professional association released a set of guidelines last year, “The Global Initiative for Ethical Considerations in the Design of Autonomous Systems,” which outlines standards for AI developers.
“It’s still early days for the industry. … What’s next is we are going to get real about how to make ethical principles actually at an engineering level,” Nafus said.
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the South by Southwest event. (*Disclosure: Intel sponsors some SXSW segments on SiliconANGLE Media’s theCUBE. Neither Intel nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)