

Prolific, a company that was founded to source verified data from human participants, said today that it has raised $32 million in a Series A funding round co-led by Partech and Oxford Science Enterprises to broaden its services to use human insights for training and improving artificial intelligence models.
With the vast popularity of generative AI chatbots, such as OpenAI LP’s ChatGPT and Google LLC’s Bard, which can understand natural speech and respond conversationally, there has been an ever-increasing need for access to verifiable data from human participants. At the same time, AI models need to be tested to keep them from going off the rails and to ensure they aren’t behaving in toxic or harmful ways.
Prolific was founded in 2014 in the United Kingdom out of a desire to provide genuine crowdsourced participants for online academic research, at a time when existing tools were expensive and cumbersome to access. Another problem was that not everyone was who they said they were: Too many were bots or scripts pretending to be people in order to reap rewards.
Now the company boasts a network of more than 120,000 prescreened and vetted active participants across 38 countries who, drawing on a diverse set of backgrounds, can provide insights for training and testing AI models. The human participants are paid a minimum of $8 per hour for their work.
“AI represents one of the biggest leaps forward in technology in recent years, and our unique approach to data sourcing from humans positions us to make these systems more accountable and less biased,” said Prolific co-founder and Chief Executive Phelim Bradley.
Using Prolific’s platform, AI model builders can employ its network of human participants to enable a process known as reinforcement learning from human feedback, or RLHF. Using this method, humans review the outputs of AI models, and their feedback is used to train the model to be more accurate and less harmful. This process is important for helping reduce “hallucinations,” or instances when AI chatbots confidently present completely false information.
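The feedback loop described above can be sketched in a few lines. This is a minimal illustration of the general RLHF pattern, not Prolific’s actual platform or API: human raters rank candidate model outputs, and the rankings are converted into scalar rewards that a training loop could optimize against. All function names here are hypothetical.

```python
# Minimal sketch of a human-feedback loop (illustrative only, not
# Prolific's API): raters rank candidate outputs, rankings become rewards.

def collect_preferences(prompt, outputs, rater):
    """Ask a rater to rank candidate model outputs, best first."""
    return rater(prompt, outputs)

def reward_from_ranking(ranked_outputs):
    """Turn a ranking into simple scalar rewards: best gets the highest."""
    n = len(ranked_outputs)
    return {out: n - i for i, out in enumerate(ranked_outputs)}

# Stand-in for a human rater: prefers the factually correct answer.
def toy_rater(prompt, outputs):
    return sorted(outputs, key=lambda o: "Canberra" not in o)

candidates = ["The capital of Australia is Sydney.",
              "Canberra is the capital of Australia."]
ranked = collect_preferences("What is the capital of Australia?",
                             candidates, toy_rater)
rewards = reward_from_ranking(ranked)
```

In a real pipeline, the reward signal would typically train a separate reward model, which then guides fine-tuning of the chatbot itself.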
With access to native speakers of many languages, AI models can also produce more natural and authentic conversational outputs. Having a larger pool of people from diverse backgrounds and demographics to annotate and categorize the data that goes into AI models also helps reduce the potential bias and harmfulness of AI responses. For example, an AI model is less likely to produce racially or culturally insensitive responses if it is trained on truly representative population samples.
As governments and companies seek to bring AI into compliance with copyright law, Prolific’s platform can also help ensure straightforward auditing and transparent sourcing of training data. That has become increasingly important with the European Parliament drafting laws that would require transparency about copyrighted works used in AI training data.
“The funding we have secured will fuel our growth in the AI space, especially in the U.S., bolstering our commitment to human-guided AI development during this pivotal moment in the technology’s progression,” said Bradley.