UPDATED 17:55 EDT / OCTOBER 16 2024

LatticeFlow releases framework for checking LLMs’ compliance with the EU AI Act

Startup LatticeFlow AG today released COMPL-AI, a framework that can help companies check whether their large language models comply with the EU AI Act.

Zurich-based LatticeFlow, which is backed by more than $14 million in venture funding, provides a platform for finding technical issues in artificial intelligence training datasets. The company also helps organizations ensure that their neural networks meet safety requirements.

LatticeFlow created COMPL-AI in response to the rollout of the EU AI Act earlier this year. The legislation introduces a set of new rules for companies that offer advanced AI models in the bloc. Notably, AI applications that are deemed high-risk by regulators must follow stringent safety and transparency requirements.

Some of the rules rolled out with the AI Act are defined only in relatively high-level terms, which means developers must interpret how they apply to their projects. That can complicate regulatory compliance efforts. According to LatticeFlow, its new COMPL-AI framework translates the AI Act's high-level requirements into concrete steps that developers can take to ensure regulatory compliance.

COMPL-AI includes a list of technical requirements that must be met to ensure an LLM adheres to the legislation. Moreover, the framework provides an open-source compliance evaluation tool. The software can analyze an LLM to determine how thoroughly it implements AI Act rules.

LatticeFlow says that its evaluation tool measures LLMs’ regulatory compliance using 27 different benchmarks. Those benchmarks assess a model’s reasoning capabilities, the frequency with which it generates harmful output and various other factors.
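The broad pattern of such an evaluation is straightforward even if the individual benchmarks are not: run each benchmark against the model and roll the scores up by regulatory category. The Python sketch below illustrates that pattern only; the names Benchmark and evaluate_model are hypothetical stand-ins for demonstration purposes and are not taken from COMPL-AI's actual codebase.

```python
# Illustrative sketch only: these names (Benchmark, evaluate_model) are
# hypothetical and do not reflect COMPL-AI's actual API.
from dataclasses import dataclass
from typing import Callable

# A model is represented as a plain prompt-to-text function.
Model = Callable[[str], str]

@dataclass
class Benchmark:
    name: str                      # e.g. "harmful-output frequency"
    category: str                  # the AI Act principle it maps to
    run: Callable[[Model], float]  # returns a score in [0, 1]

def evaluate_model(model: Model, benchmarks: list[Benchmark]) -> dict[str, float]:
    """Run every benchmark and average the scores per regulatory category."""
    by_category: dict[str, list[float]] = {}
    for bench in benchmarks:
        by_category.setdefault(bench.category, []).append(bench.run(model))
    return {cat: sum(s) / len(s) for cat, s in by_category.items()}

# Example with a trivial model and a placeholder benchmark.
if __name__ == "__main__":
    echo_model: Model = lambda prompt: prompt
    dummy = Benchmark("toy check", "robustness", run=lambda m: 1.0)
    print(evaluate_model(echo_model, [dummy]))  # {'robustness': 1.0}
```

In practice, each of the 27 benchmarks would implement its own scoring logic, and the per-category averages would be compared against whatever threshold an organization treats as acceptable compliance.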

“With this framework, any company — whether working with public, custom, or private models — can now evaluate their AI systems against the EU AI Act technical interpretation,” said LatticeFlow co-founder and Chief Executive Officer Petar Tsankov.

LatticeFlow put its open-source evaluation tool to the test by using it to analyze LLMs from several major AI providers. The companies on the list included OpenAI, Meta Platforms Inc., Google LLC, Anthropic PBC and Alibaba Group Holding Ltd. LatticeFlow determined that most of the evaluated AI models include effective guardrails against harmful output, but many fall short when it comes to cybersecurity and fairness. 

According to the company, the results of the analysis also suggest that there are opportunities to refine some AI Act provisions. With the current rules as a reference, the company found it challenging to use its open-source evaluation tool to measure how well LLMs protect user privacy. Assessing how well AI models address copyright considerations also proved difficult.

European Commission spokesperson Thomas Regnier said that “the European Commission welcomes this study and AI model evaluation platform as a first step in translating the EU AI Act into technical requirements, helping AI model providers implement the AI Act.”

Photo: Unsplash
