Dataiku Launches LLM Guard Services to Control Generative AI Rollouts From Proof-of-Concept to Production in the Enterprise

Florian Douetteau, Dataiku CEO

Dataiku has announced the launch of its LLM Guard Services suite, designed to advance enterprise GenAI deployments at scale from proof-of-concept to full production without compromising cost, quality, or safety. Dataiku LLM Guard Services includes three solutions: Cost Guard, Safe Guard, and the newest addition, Quality Guard. These components are integrated within the Dataiku LLM Mesh, the market’s most comprehensive and agnostic LLM gateway, for building and managing enterprise-grade GenAI applications that remain effective and relevant over time. LLM Guard Services provides a scalable, no-code framework intended to foster greater transparency, inclusive collaboration, and trust in GenAI projects between teams across companies.

Today’s enterprise leaders want to use fewer tools to reduce the burden of scaling projects with siloed systems, but 88% do not have specific applications or processes for managing LLMs, according to a recent Dataiku survey. Available as a fully integrated suite within the Dataiku Universal AI Platform, LLM Guard Services is designed to address this challenge and mitigate common risks when building, deploying, and managing GenAI in the enterprise.

“As the AI hype cycle follows its course, the excitement of two years ago has given way to frustration bordering on disillusionment today. However, the issue is not the abilities of GenAI, but its reliability,” said Florian Douetteau, Dataiku CEO. “Ensuring that GenAI applications deliver consistent performance in terms of cost, quality, and safety is essential for the technology to deliver its full potential in the enterprise.”

Dataiku LLM Guard Services provides oversight and assurance for LLM selection and usage in the enterprise, consisting of three primary pillars:

  • Cost Guard: A dedicated cost-monitoring solution that traces and monitors enterprise LLM usage so teams can better anticipate and manage GenAI spend against budget.
  • Safe Guard: A solution that screens requests and responses for sensitive information and secures LLM usage with customizable tooling to prevent data abuse and leakage.
  • Quality Guard: The newest addition to the suite, providing quality assurance through automatic, standardized, code-free evaluation of LLMs for each use case to maximize response quality and bring both objectivity and scalability to the evaluation cycle.
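To make the guard pattern concrete, here is a minimal, generic sketch of a gateway wrapper that tracks usage in the spirit of Cost Guard and screens sensitive content in the spirit of Safe Guard. All names here (GuardedLLM, UsageLog, the regex patterns) are illustrative assumptions, not the Dataiku LLM Mesh API.

```python
import re
from dataclasses import dataclass

# Illustrative only: a generic "guard" layer in front of an LLM client.
# None of these names come from the Dataiku API.

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like identifiers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

@dataclass
class UsageLog:
    calls: int = 0
    prompt_chars: int = 0
    completion_chars: int = 0

class GuardedLLM:
    """Wraps any callable LLM client with usage accounting and redaction."""

    def __init__(self, llm_call, usage=None):
        self.llm_call = llm_call          # any function str -> str
        self.usage = usage or UsageLog()

    def _redact(self, text):
        # Safe Guard-style screening: mask sensitive tokens before they
        # reach the model or the caller.
        for pattern in SENSITIVE_PATTERNS:
            text = pattern.sub("[REDACTED]", text)
        return text

    def __call__(self, prompt):
        safe_prompt = self._redact(prompt)
        response = self.llm_call(safe_prompt)
        # Cost Guard-style accounting: track volume per call for budgeting.
        self.usage.calls += 1
        self.usage.prompt_chars += len(safe_prompt)
        self.usage.completion_chars += len(response)
        return self._redact(response)

if __name__ == "__main__":
    guarded = GuardedLLM(lambda p: f"Echo: {p}")
    print(guarded("Email jane.doe@example.com about case 123-45-6789"))
    print(guarded.usage)
```

In LLM Guard Services, equivalent controls are configured through the no-code framework and the LLM Mesh gateway rather than hand-written for each application.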

Previously, companies deploying GenAI were forced to rely on custom code-based approaches to LLM evaluation or on separate, pure-play point solutions. Now, within the Dataiku Universal AI Platform, enterprises can quickly and easily assess GenAI quality and integrate this critical step into the GenAI use-case building cycle. With LLM Quality Guard, customers can automatically compute standard LLM evaluation metrics, including LLM-as-a-judge techniques such as answer relevancy, answer correctness, and context precision, as well as statistical techniques such as BERTScore, ROUGE, and BLEU, to ensure they select the most relevant LLM and approach and sustain GenAI reliability over time with greater predictability. Further, Quality Guard democratizes GenAI applications so any stakeholder can follow the move from proof-of-concept experiments to enterprise-grade applications with a consistent methodology for evaluating quality.
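As a rough illustration of the kinds of metrics involved, the sketch below hand-rolls a ROUGE-1-style unigram F1 and a simple LLM-as-a-judge scoring prompt. The function names, prompt wording, and the llm_call parameter are assumptions for illustration; Quality Guard computes the standard metrics named above within the platform rather than through code like this.

```python
from collections import Counter

# Illustrative only: toy versions of two common LLM evaluation approaches.
# Production pipelines use dedicated libraries (rouge-score, sacrebleu,
# bert-score) or a platform feature rather than hand-rolled code.

def unigram_f1(prediction: str, reference: str) -> float:
    """ROUGE-1-style F1: unigram overlap between prediction and reference."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

JUDGE_PROMPT = """You are grading an AI answer.
Question: {question}
Reference answer: {reference}
Candidate answer: {candidate}
Rate the candidate's correctness from 1 (wrong) to 5 (fully correct).
Reply with the number only."""

def judge_answer(llm_call, question: str, reference: str, candidate: str) -> int:
    """LLM-as-a-judge: ask a judge model (any str -> str callable) for a score."""
    reply = llm_call(JUDGE_PROMPT.format(
        question=question, reference=reference, candidate=candidate))
    return int(reply.strip())

if __name__ == "__main__":
    print(unigram_f1("Paris is the capital of France",
                     "The capital of France is Paris"))  # identical tokens -> 1.0
```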
