
Patronus AI debuts an API to equip AI workloads with accuracy guardrails

Patronus AI Inc. today introduced a new tool designed to help developers ensure that their AI applications generate accurate results.

The Patronus API, as the offering is called, is rolling out a few months after the company closed a $17 million Series A funding round. The investment included participation from the venture capital arm of Datadog Inc. and several other institutional backers.

San Francisco-based Patronus AI offers a software platform that promises to simplify the development of AI applications. Developers can use it to compare a set of large language models and determine which one is most suitable for a particular software project. The platform also promises to simplify several related tasks, such as identifying technical issues in AI applications after they are deployed in production.

The Patronus API, the company’s new offering, is an application programming interface that enterprises can integrate into their AI workloads. It is designed to help developers detect when an application generates inaccurate prompt responses and filter them out.

Several types of problems can arise in AI applications. Some user queries lead to hallucinations: neural network responses that contain false information. In other cases, the LLM built into an application may generate answers that are too short or inconsistent with a company’s style guidelines.

Repelling cyberattacks is another challenge. Hackers sometimes use malicious prompts to try to trick an LLM into performing a task it is not designed to do, such as revealing sensitive information from its training dataset.

The Patronus API detects such issues by running an LLM’s prompt responses through a second language model. That second model checks each response for problems such as hallucinations and flags any issues it finds for developers. Several tools on the market already use LLMs to find problems in AI applications, but Patronus AI says they have limited accuracy.
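Patronus has not published its evaluator’s internals, but the pattern described above — one model judging another model’s output — can be sketched roughly as follows. Everything here is illustrative: call_judge_model is a hypothetical stand-in for a real call to an evaluation model such as Lynx, and its toy heuristic merely checks whether the response appears verbatim in the supplied context.

```python
import json


def call_judge_model(prompt: str) -> str:
    """Stand-in for the second 'judge' LLM call (hypothetical helper).

    A real deployment would send this prompt to an evaluation model such
    as Lynx. Here, a toy heuristic flags the response as a hallucination
    if its claim does not appear in the supplied context.
    """
    payload = json.loads(prompt)
    hallucinated = payload["response"] not in payload["context"]
    return json.dumps({"hallucination": hallucinated})


def check_response(context: str, response: str) -> dict:
    """Run an application's response through the judge model and return
    a verdict developers can act on (filter, log, or retry)."""
    judge_prompt = json.dumps({"context": context, "response": response})
    return json.loads(call_judge_model(judge_prompt))


print(check_response("Paris is the capital of France.",
                     "Paris is the capital of France."))  # not flagged
print(check_response("Paris is the capital of France.",
                     "Lyon is the capital of France."))   # flagged
```

The key design point is the separation: the judge never generates content itself, it only renders a verdict on another model’s output, which is what lets the same check run against any application.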

The Patronus API offers several LLM scoring algorithms to choose from. One of them is Lynx, an open-source language model Patronus released in July. It is a customized version of Meta Platforms Inc.’s Llama-3-70B-Instruct model that has been optimized to detect incorrect AI output.

According to Patronus, Lynx is better than GPT-4o at detecting problems in AI applications with RAG features. RAG, or retrieval-augmented generation, is a machine learning technique that allows an LLM to incorporate data from external sources into its prompt responses. Patronus says Lynx’s accuracy is due in part to its use of CoT, or chain-of-thought, a processing approach that allows an LLM to break down a complex task into simpler steps.
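As a rough illustration of how RAG and chain-of-thought fit together: relevant passages are retrieved from an external source and folded into the prompt, along with an instruction to reason step by step. The function names are made up for this sketch, and the keyword-overlap retrieval is a simplification — production systems typically use vector search and a real model call.

```python
import re


def tokens(text: str) -> set[str]:
    """Lowercase word tokens; hyphens kept so model names stay intact."""
    return set(re.findall(r"[a-z0-9-]+", text.lower()))


def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Naive keyword-overlap retrieval: rank passages by how many query
    words they share, and keep the top k."""
    scored = sorted(corpus,
                    key=lambda doc: len(tokens(query) & tokens(doc)),
                    reverse=True)
    return scored[:k]


def build_rag_prompt(query: str, corpus: list[str]) -> str:
    """Fold the retrieved passages into the prompt, with a chain-of-thought
    instruction asking the model to reason step by step before answering."""
    context = "\n".join(retrieve(query, corpus))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            "Think step by step, then answer using only the context.")


docs = ["Lynx is a fine-tuned Llama-3-70B-Instruct model.",
        "Patronus AI is based in San Francisco."]
print(build_rag_prompt("What model is Lynx based on?", docs))
```

Grounding the prompt in retrieved text is also why hallucination checks pair naturally with RAG: the evaluator can compare the answer against the context that was actually provided.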

The Patronus API can also scan the output of AI applications using other scoring algorithms. Some of them require less hardware than Lynx, which makes them more economical to run. In addition, developers can upload their own scoring models to analyze AI application output using metrics that are not supported out of the box.

Patronus offers API access under a usage-based pricing model. Customers receive a Python software development kit that makes it easy to integrate the service into their applications.

Photo: Unsplash
