

As the use of artificial intelligence has expanded into areas that increasingly affect people’s health, privacy and civil rights, the need for governance standards specific to AI use cases has grown.
That’s a gap that Monitaur Inc. is attempting to fill starting today with a platform it says helps organizations create and enforce AI governance practices. It’s addressing a need highlighted by a recent survey of chief analytics, AI and data officers: 65% said their companies can’t explain how specific AI model decisions or predictions are made, 73% have struggled to get executive support for making AI ethics and responsible practices a priority, and only 20% actively monitor their models in production for fairness and ethics.
GovernML is the latest addition to Monitaur’s ML Assurance Platform, which also has modules covering record-keeping, performance monitoring and auditing. Delivered as a service, GovernML enables enterprises to establish and maintain a system of record of model governance policies, ethical practices and model risks across their entire AI portfolio, the company said.
“The way to realize AI’s potential is by trust and transparency,” said Chief Executive Anthony Habayeb. “Our ability to enable risk departments to do their job is critical to building governance and oversight of AI systems.”
Most organizations administer governance policies manually, with little ability to repeat and scale successful models, Habayeb said. Standards are also lacking for what constitutes good training data and for fairness. “People at $70 billion companies are still using a piece of paper to document stuff,” he said.
Governance is expected to grow in importance as regulators factor it into their assessments of business practices and legal challenges grow from people and groups that believe they have been unfairly treated by AI algorithms.
“The stakeholder who has incentives to perform reviews needs to have the information they need to confidently assess an application, inspect that system and provide their objective opinion that it’s fair and safe,” Habayeb said.
“AI governance is massively important given the level of power and influence artificial intelligence holds on the average person,” said Sergio Suarez Jr., CEO of TackleAI LLC, a company that mines useful data from unstructured documents. “‘Regulation’ is often considered a bad word, but most people involved in artificial intelligence recognize the risk potential of the programs they’re creating and support some level of restriction and governance.”
The importance of fairness and high-quality training data was highlighted four years ago when Amazon.com Inc. abandoned an AI-driven recruiting program after discovering it unfairly favored male candidates because its training data was biased by a preponderance of applications from men. Two years earlier, Microsoft Corp. pulled an AI chatbot from Twitter after pranksters taught it to spew hateful and profane remarks.
Monitaur said its platform centralizes policies, controls and evidence across all advanced models in the enterprise. Its approach is grounded in risk management, a discipline that helps enterprises make decisions based on the levels of risk to the business. AI governance discussions tend to focus too narrowly on technical concepts such as explanations of how models work, monitoring and bias testing, while ignoring lifecycle governance and human oversight, the company said.
GovernML is integrated into the Monitaur ML Assurance Platform to provide a lifecycle AI governance approach that covers actions ranging from policy management to technical monitoring, testing and human oversight.
“We offer a lifecycle of good auditing policies that starts with why you would use a model for a problem like hiring,” Habayeb said. “If you discover in pre-deployment that your data is biased, we ask where you got that data. We create a risk methodology that spans the whole lifecycle.”
Monitaur came out of stealth mode in January 2020 and has raised $3 million, the CEO said.