Qodo’s open code embedding model sets a new enterprise standard, beating OpenAI and Salesforce
Qodo, the AI-powered code quality platform formerly known as Codium, has announced the release of Qodo-Embed-1-1.5B, a new open-source code embedding model that delivers state-of-the-art performance while being significantly smaller and more efficient than competing solutions.
Designed to improve code search, retrieval and understanding, the 1.5-billion-parameter model achieves top scores on industry benchmarks, outperforming larger models from OpenAI and Salesforce.
For enterprise development teams managing vast and complex codebases, Qodo’s release is a leap forward for AI-driven software engineering workflows. By enabling more accurate and efficient code retrieval, Qodo-Embed-1-1.5B addresses a critical challenge in AI development: context awareness in large-scale software systems.
Why code embedding models matter for enterprise AI
AI-powered coding solutions have traditionally focused on code generation, with large language models (LLMs) drawing attention for their ability to write new code.
However, as Itamar Friedman, CEO and co-founder of Qodo, explained in an interview with VentureBeat earlier this week: “Enterprise software can have tens of millions, if not hundreds of millions, of lines of code. Generating code by itself is not enough; you have to ensure the code is high quality, works correctly and integrates with the rest of the system.”
Code embedding models play a crucial role in AI development, allowing systems to efficiently search and retrieve relevant code snippets. This is especially important for large organizations whose software projects span millions of lines of code across multiple teams, repositories and programming languages.
“Context is king for everything related to building software with models now,” Friedman said. “And to pull the right context out of a really large codebase, you have to go through some retrieval mechanism.”
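To make that retrieval mechanism concrete, here is a minimal sketch of embedding-based code search. It assumes the model is published on Hugging Face as Qodo/Qodo-Embed-1-1.5B and can be loaded with the sentence-transformers library; the model ID, the toy snippet corpus and the query are illustrative assumptions, not Qodo’s own tooling.

```python
# Minimal sketch of embedding-based code retrieval (illustrative only).
# Assumes the Hugging Face model ID "Qodo/Qodo-Embed-1-1.5B" works with sentence-transformers.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qodo/Qodo-Embed-1-1.5B")

# A toy "codebase": in practice these would be chunks pulled from many repositories.
snippets = [
    "def connect_db(url):\n    return psycopg2.connect(url)",
    "def retry(fn, attempts=3):\n    for _ in range(attempts):\n        ...",
    "function debounce(fn, wait) { /* ... */ }",
]

# Index step: embed every snippet once; normalized vectors make dot product = cosine similarity.
index = model.encode(snippets, normalize_embeddings=True)

def search(query: str, top_k: int = 2):
    """Return the top_k snippets most similar to a natural-language query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = index @ q                      # cosine similarities against the whole index
    best = np.argsort(-scores)[:top_k]
    return [(snippets[i], float(scores[i])) for i in best]

print(search("open a postgres connection"))
```

In a real enterprise deployment the index would cover millions of chunks and live in a vector database, but the mechanics are the same: embed the code once, then compare incoming queries against the stored vectors.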
Qodo-Embed-1-1.5B delivers performance and efficiency
Qodo-Embed-1-1.5B stands out for its balance of efficiency and accuracy. While many state-of-the-art models rely on far larger parameter counts, such as OpenAI’s text-embedding-3-large, Qodo’s model achieves superior results with only 1.5 billion parameters.
On the Code Information Retrieval Benchmark (CoIR), an industry-standard test for code retrieval across multiple languages and tasks, Qodo-Embed-1-1.5B outperformed Salesforce’s SFR-Embedding-2_R (67.41) and OpenAI’s text-embedding-3-large (65.17).

This level of efficiency is crucial for businesses seeking cost-effective AI solutions. Because the model can run on inexpensive GPUs, it puts advanced code retrieval within reach of a wider range of development teams, reducing infrastructure costs while improving software quality and performance.
Handling the complexity, nuance and specificity of different code snippets
One of the biggest challenges in AI-assisted software development is that similar-looking snippets of code can have very different characteristics. Friedman illustrates this with a simple but impactful example:
“One of the biggest challenges in code embedding is that two nearly identical functions, such as ‘withdraw’ and ‘deposit’, can differ only by a plus or minus sign. They need to be close in the vector space, yet clearly distinguished.”
A key problem for embedding models is ensuring that functionally different code is not incorrectly grouped together, which can lead to serious software errors. “You need an embedding model that understands code well enough to retrieve the right context without pulling in similar-looking but incorrect functions that could cause serious problems.”
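A rough way to picture the problem is the withdraw/deposit pair itself: the two functions below differ by a single character, so an embedding model has to keep them close in vector space while still ranking the right one first for a given query. The model ID and the query text are assumptions used purely for illustration.

```python
# Sketch: checking whether an embedding model separates near-identical functions.
# Model ID and query are illustrative assumptions, not Qodo's published evaluation.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qodo/Qodo-Embed-1-1.5B")

withdraw = "def withdraw(balance, amount):\n    return balance - amount"
deposit  = "def deposit(balance, amount):\n    return balance + amount"
query    = "function that removes money from an account"

# Normalized embeddings: dot product equals cosine similarity.
emb = model.encode([withdraw, deposit, query], normalize_embeddings=True)

print("withdraw vs deposit similarity:", float(emb[0] @ emb[1]))
# A useful code embedding model should rank 'withdraw' above 'deposit' for this query,
# even though the two snippets differ by a single sign.
print("query -> withdraw:", float(emb[2] @ emb[0]))
print("query -> deposit: ", float(emb[2] @ emb[1]))
```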
To solve this, Qodo developed a unique training approach that combines high-quality synthetic data with real code samples. The model was trained to recognize nuanced differences between functionally similar pieces of code, ensuring that when a developer searches for relevant code, the system retrieves the right results, not just similar-looking ones.
Friedman notes that this training process was refined in collaboration with Nvidia and AWS, both of which are writing technical blogs about Qodo’s methodology. “We collected a unique dataset that simulates the delicate properties of software development and fine-tuned the model to recognize these nuances. That is why our model outperforms general-purpose code embedding models.”
Multi-language support and plans for future expansion
Qodo-Embed-1-1.5B is optimized for the 10 most commonly used programming languages, including Python, JavaScript and Java, with additional long-tail support for other languages and frameworks.
Future iterations of the model will expand on this foundation, offering deeper integration with enterprise development tools and support for additional languages.
“Many embedding models struggle to distinguish between programming languages, sometimes mixing snippets from different languages,” Friedman said. “We specifically trained our model to prevent this, focusing on the top 10 languages used in enterprise development.”
Enterprise deployment options and availability
Qodo is making its new model widely accessible through multiple channels.
The 1.5B-parameter version is available on Hugging Face under the OpenRAIL++-M license, which allows developers to integrate it freely into their workflows. Enterprises that need additional capabilities can access larger versions under commercial licensing.
For companies seeking a fully managed solution, Qodo offers an enterprise-grade platform that automatically updates embeddings as codebases evolve. This addresses a key challenge in AI-driven development: ensuring that search and retrieval models stay accurate as code changes over time.
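The general idea of keeping embeddings in sync with a changing codebase can be pictured as a simple incremental refresh loop: hash each file and re-embed only the ones whose contents changed since the last pass. This is a generic sketch under assumed file paths and model ID, not a description of Qodo’s managed platform.

```python
# Generic sketch of keeping a code embedding index fresh as files change.
# Not Qodo's implementation; the repo path and model ID are illustrative assumptions.
import hashlib
from pathlib import Path
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qodo/Qodo-Embed-1-1.5B")
index = {}  # path -> {"hash": content digest, "embedding": vector}

def refresh(repo_root: str):
    """Re-embed only files whose contents changed since the last refresh."""
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        digest = hashlib.sha256(text.encode()).hexdigest()
        entry = index.get(str(path))
        if entry and entry["hash"] == digest:
            continue  # unchanged file: keep the cached embedding
        index[str(path)] = {
            "hash": digest,
            "embedding": model.encode(text, normalize_embeddings=True),
        }

refresh("./my_repo")  # hypothetical repository path
print(f"{len(index)} files embedded")
```

Production systems layer chunking, deletion handling and vector-database writes on top of this, but the core idea of skipping unchanged content is the same.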
Friedman sees this as a natural step in Qodo’s mission. “We are releasing Qodo-Embed-1 as a first step. Our goal is to keep improving along three dimensions: accuracy, support for more languages, and better handling of specific frameworks and libraries.”
Beyond Hugging Face, the model will also be accessible through Nvidia’s NIM platform and AWS SageMaker JumpStart, making it even easier for businesses to deploy and integrate it into their existing development environments.
The future of AI in enterprise software development
AI-powered coding tools are evolving rapidly, but the focus is shifting beyond code generation toward code understanding, retrieval and quality assurance. As businesses integrate AI more deeply into their software engineering processes, tools such as Qodo-Embed-1-1.5B will play a crucial role in making AI systems more reliable, efficient and cost-effective.
“If you are a developer at a Fortune 15,000 company, you are not just using Copilot or Cursor. You have workflows and internal initiatives that require a deep understanding of large codebases. That is where a high-quality code embedding model is essential.”
Qodo’s latest model is a step toward a future where AI not only helps developers write code, but also helps them understand, manage and optimize it within complex, large-scale software ecosystems.
For enterprise teams that want to use AI for smarter code search, retrieval and quality control, the new Qodo embedding model offers a compelling, highly efficient alternative to larger, more resource-intensive solutions.