![](https://d15shllkswkct0.cloudfront.net/wp-content/blogs.dir/1/files/2024/05/aporia.png)
![](https://d15shllkswkct0.cloudfront.net/wp-content/blogs.dir/1/files/2024/05/aporia.png)
Machine learning observability startup Aporia Technologies Ltd. today launched Guardrails for Multimodal AI Applications, a new service that extends its existing artificial intelligence guardrails solution to include guardrails for vision and audio.
Claimed to be a first-of-its-kind solution, Guardrails for Multimodal AI Applications is designed to mitigate issues in video and audio-based AI applications, such as hallucinations, wrong responses, compliance violations and jailbreak attempts. Multimodal AI is an AI system that can simultaneously process and interpret multiple types of data inputs, such as text, images, audio and video.
The new service’s release follows OpenAI’s launch of its flagship multimodal GPT-4o model on May 13. Aporia argues that though GPT-4o provides unprecedented productivity with the richest, most human-like AI experience to date, it also raises a major issue of accountability.
The issues come down to both what is put into an AI model and what comes back. Aporia notes that a misstep in misinformation spoken to users could have serious implications — such as someone seeking ways to deal with depression and AI advising drug and alcohol abuse, or a banking customer asking to see their financial history, only to receive someone else’s data.
Guardrails for Multimodal AI allows engineers to add a layer of security and control between the app and the user. The guardrails operate with a defined, customizable set of behavioral rules that work at subsecond latency, going beyond what common prompt engineering can do.
“Multimodal AI is a game-changer for the world we live in, but one that requires guardrails to ensure its safety, success and ultimate adoption,” said Liran Hason, chief executive officer and co-founder of Aporia. “Industries across the globe are coming to rely on AI, yet as many engineers are discovering, AI by itself is inherently unreliable.”
The company says the new service detects and mitigates 94% of hallucinations before they reach users in real time and, in doing so, offers a powerful layer between large language models and AI applications. The solution also prevents the misuse of applications for malicious purposes, such as prompt injections or prompt leakage and can block explicit and offensive language in user interactions, identifying inappropriate wording and phrasing to block it immediately.
“As we have seen before, disastrous accidents can occur quickly,” Hason added. “Aporia Guardrails are the first solution to actively mitigate spoken and written responses in real time and support the human in the loop.”
Aporia is a venture capital-backed startup that has raised $30 million in funding, according to Tracxn, including a round of $25 million in February. Investors in the company include Tiger Global Management LLC, TLV Partners LP, Samsung NEXT LLC and Vertex Ventures.
THANK YOU