Contextual AI's new model crushes GPT-4o in accuracy: here's why it matters




Contextual AI unveiled its grounded language model (GLM) today, claiming it delivers the highest factual accuracy in the industry, outperforming leading AI systems from Google, Anthropic and OpenAI on a key benchmark for truthfulness.

The startup, founded by pioneers of retrieval-augmented generation (RAG) technology, reports that its GLM achieved an 88% score on the FACTS grounding benchmark, compared to 84.6% for Google's Gemini 2.0 Flash, 79.4% for Anthropic's Claude 3.5 Sonnet and 78.8% for OpenAI's GPT-4o.

While large language models have transformed enterprise software, factual inaccuracies, often called hallucinations, remain a critical barrier to business adoption. Contextual AI aims to solve this by building a model specifically optimized for enterprise RAG applications, where accuracy is paramount.

“We knew that part of the solution would be a technique called RAG, retrieval-augmented generation,” said Douwe Kiela, CEO and co-founder of Contextual AI, in an exclusive interview with VentureBeat. “And we knew that because RAG was originally my idea. What this company is really about is doing RAG the right way, taking RAG to the next level.”

The company's focus differs sharply from general-purpose models like ChatGPT or Claude, which are designed to handle everything from creative writing to technical documentation. Contextual AI instead targets high-stakes enterprise environments where factual precision outweighs creative flexibility.

“If you have a RAG problem and you're in an enterprise setting in a highly regulated industry, you have no tolerance for hallucination,” Kiela explained. “The same general-purpose language model that is useful for the marketing department is not what you want in an enterprise setting where you are much more sensitive to errors.”

A benchmark comparison showing Contextual AI's new grounded language model (GLM) outperforming competitors from Google, Anthropic and OpenAI on factual grounding tests. The company claims its specialized approach reduces AI hallucinations in enterprise settings. (Credit: Contextual AI)

How Contextual AI makes “grounding” the new gold standard for enterprise models

The concept of “grounding”, ensuring that AI responses stick strictly to information explicitly provided in the context, has emerged as a critical requirement for enterprise AI systems. In regulated industries such as finance, healthcare and telecommunications, companies need AI that either provides accurate information or explicitly acknowledges when it does not know something.

Kiela offered an example of how this strict grounding works: “If you give a recipe or a formula to a standard language model, and somewhere in it you say, ‘but this is only true in most cases,’ most language models will still just give you the recipe and assume it's true. But our language model says, ‘Actually, it only says that this is true in most cases.’ It captures that extra bit of nuance.”

Just as important for enterprises is the model's ability to say “I don't know.” “Which is really a very powerful feature if you think about it in an enterprise setting,” Kiela added.
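To make the behavior concrete, here is a minimal sketch of how a strictly grounded answer flow can be enforced with a generic chat-style model. This is an illustration of the pattern described above, not Contextual AI's API; the `call_model` function is a hypothetical placeholder for whichever LLM client is in use.

```python
# Minimal sketch of strict grounding with a generic chat model.
# `call_model` is a hypothetical stand-in for any LLM client.

GROUNDING_SYSTEM_PROMPT = (
    "Answer using ONLY the facts in the provided context. "
    "If the context does not contain the answer, reply exactly: I don't know. "
    "If the context qualifies a claim (e.g. 'true in most cases'), repeat that qualifier."
)

def grounded_answer(question: str, context_passages: list[str], call_model) -> str:
    """Build a context-restricted prompt and return the model's grounded reply."""
    context = "\n\n".join(context_passages)
    messages = [
        {"role": "system", "content": GROUNDING_SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
    return call_model(messages)
```

The key point is that the model is told to prefer an explicit “I don't know” over a plausible-sounding guess, which is the behavior Kiela highlights for regulated settings.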

Contextual AI's RAG 2.0: A more integrated way to process company information

Contextual AI's platform is built on what it calls “RAG 2.0,” an approach that moves beyond simply stitching together off-the-shelf components.

“A typical RAG system uses a frozen off-the-shelf model for embeddings, a vector database for retrieval, and a black-box language model for generation, stitched together through prompting or an orchestration framework,” the company said. “This leads to a ‘Frankenstein's monster’ of generative AI: the individual components technically work, but the whole is far from optimal.”

Instead, Contextual AI jointly optimizes all components of the system. “We have this mixture-of-retrievers component, which is really a way to do intelligent retrieval,” Kiela explained. “It looks at the question and then it thinks, essentially, like most of the latest generation of models, and first plans a strategy for how to do the retrieval.”

This entire system works in coordination with what Kiela calls “the best re-ranker in the world,” which helps prioritize the most relevant information before it is sent to the grounded language model.
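The pipeline shape described here, retrieval planning across multiple retrievers, re-ranking, then grounded generation, can be outlined roughly as below. All component classes are hypothetical placeholders used for illustration; this is not Contextual AI's implementation.

```python
# Illustrative outline of the pipeline described above: query a mixture of
# retrievers, re-rank the candidates, then generate a grounded answer.
# Every class here is a hypothetical placeholder, not Contextual AI's code.

from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    score: float = 0.0

class Rag2Pipeline:
    def __init__(self, retrievers, reranker, glm):
        self.retrievers = retrievers  # e.g. dense, sparse, structured-data retrievers
        self.reranker = reranker      # prioritizes the most relevant passages
        self.glm = glm                # grounded language model for generation

    def answer(self, question: str, top_k: int = 8) -> str:
        # 1. "Mixture of retrievers": gather candidates from every retriever.
        candidates: list[Passage] = []
        for retriever in self.retrievers:
            candidates.extend(retriever.search(question))
        # 2. Re-rank so only the most relevant context is kept.
        ranked = self.reranker.rank(question, candidates)[:top_k]
        # 3. Generate an answer constrained to the retrieved context.
        return self.glm.generate(question, context=[p.text for p in ranked])
```

The contrast with the “Frankenstein” setup is that in RAG 2.0 these stages are trained and tuned together rather than bolted on independently.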

Beyond plain text: Contextual AI now reads charts and connects to databases

While the newly announced GLM focuses on text generation, Contextual AI's platform has recently added support for multimodal content, including charts and diagrams, as well as structured data from popular platforms such as BigQuery, Snowflake, Redshift and Postgres.

“The most challenging problems in enterprises are at the intersection of unstructured and structured data,” Kiela noted. “What I'm most excited about is really this intersection of structured and unstructured data. Most of the really exciting problems in large enterprises are smack bang at the intersection of structured and unstructured, where you have some database records, some transactions, maybe some policy documents, maybe a bunch of other things.”
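A rough sketch of the pattern Kiela describes, combining warehouse rows with retrieved policy text into a single grounded context, might look like the following. The table, column names, helper objects and the use of SQLite as a stand-in for BigQuery/Snowflake/Redshift/Postgres clients are all assumptions made for illustration.

```python
# Hedged sketch of mixing structured and unstructured data for a RAG query.
# SQLite stands in for a real data warehouse; `doc_retriever` is hypothetical.

import sqlite3

def build_mixed_context(customer_id: str, doc_retriever, db_path: str = "warehouse.db") -> str:
    """Combine transaction rows with retrieved policy passages into one context."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT date, amount, status FROM transactions WHERE customer_id = ?",
            (customer_id,),
        ).fetchall()
    structured = "\n".join(f"{date} | {amount} | {status}" for date, amount, status in rows)
    unstructured = "\n\n".join(doc_retriever.search(f"refund policy for {customer_id}"))
    return f"Transactions:\n{structured}\n\nPolicy documents:\n{unstructured}"
```

The resulting context string can then be handed to a grounded model in the same way as purely textual passages.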

The platform already supports a variety of complex visualizations, including circuit diagrams from the semiconductor industry, according to Kiela.

Contextual AI's future plans: Building more reliable tools for everyday business

Contextual AI plans to release its specialized re-ranker component shortly after the GLM launch, followed by expanded document-understanding capabilities. The company is also developing experimental features for more agentic capabilities.

Founded in 2023 by Kiela and Amanpreet Singh, who previously worked on Meta's Fundamental AI Research (FAIR) team and at Hugging Face, Contextual AI counts HSBC, Qualcomm and The Economist among its customers. The company positions itself as helping enterprises finally realize a concrete return on their AI investments.

“This is really an opportunity for companies that may be under pressure to start delivering ROI on AI to start looking at more specialized solutions that actually solve their problems,” Kiela said. “And part of that is really having a grounded language model that may be a little worse than a standard language model at some things, but it's really good at making sure it's grounded in the context and that you can really trust it to do its job.”


 