LinkedIn, eBay founders form $27M fund for public-interest AI


A $27 million fund to support artificial intelligence research in the public interest was announced today by two high-profile Silicon Valley founders.

LinkedIn Corp. co-founder Reid Hoffman, eBay Inc. founder Pierre Omidyar’s philanthropic investment firm Omidyar Network and the John S. and James L. Knight Foundation said they have created the Ethics and Governance of Artificial Intelligence Fund.

In a press release, the fund’s creators explained that the ongoing advancements in AI are fundamentally changing our lives, which is why it is important that AI research includes input from more than just engineers and computer scientists. The new fund will encourage AI research that includes a wide range of voices, from economists and social scientists to policymakers and more.

The fund is another indication that technology leaders want to address the potential societal and cultural impacts arising from the rapid progress of artificial intelligence. In November, Carnegie Mellon University announced plans to form a research center to look at the ethics of AI. And tech leaders such as Tesla Motors Inc. Chief Executive Elon Musk have repeated longstanding concerns about the dangers of runaway AI.

The MIT Media Lab and the Berkman Klein Center for Internet & Society at Harvard University will serve as the fund’s two founding academic institutions, and they will join the fund’s creators on a board that will decide which research projects are accepted into the program.

Here are some examples of issues the fund’s creators hope to address:

  • Communicating complexity: How do we best communicate, through words and processes, the nuances of a complex field like AI?
  • Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
  • Advancing accountable and fair AI: What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits?
  • Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?
  • Expanding the table: How do we grow the field to ensure that a range of constituencies are involved with building the tools and analyzing social impact?

“There’s an urgency to ensure that AI benefits society and minimizes harm,” said Reid Hoffman. “AI decision-making can influence many aspects of our world – education, transportation, health care, criminal justice, and the economy – yet data and code behind those decisions can be largely invisible.”

Joi Ito, director of the MIT Media Lab, added that AI’s rapid development means tough challenges must be confronted. “For example, one of the most critical challenges is how do we make sure that the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society,” he said. “How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only ‘smart,’ but also socially responsible?”

The fund’s creators say that they will collaborate with existing efforts to promote ethical AI research, including the upcoming AI Now symposium, which will take place at the Skirball Center for the Performing Arts in New York City on July 7.

Photo credit: Saad Faruque via photopin cc