Beyond LLMs: How SandboxAQ’s Large Quantitative Models Can Optimize Enterprise AI
While large language models (LLMs) and generative AI have dominated enterprise conversations about AI over the past year, there are other ways businesses can benefit from AI.
One alternative is Large Quantitative Models (LQMs). These models are trained to optimize for specific goals and parameters relevant to an industry or application, such as material properties or financial risk metrics, in contrast to the more general language understanding and generation tasks of LLMs. Among the leading advocates and commercial suppliers of LQMs is SandboxAQ, which today announced that it has raised $300 million in a new round of funding. The company was originally part of Alphabet and was spun off as a separate business in 2022.
The funding is a testament to the company’s success so far and, more importantly, its future growth prospects as it looks to address enterprise AI use cases. SandboxAQ has established partnerships with major consulting firms, including Accenture, Deloitte and EY, to distribute its enterprise solutions. The key advantage of LQMs is their ability to address complex, domain-specific problems in industries where the underlying physics and quantitative relationships are critical.
“It’s all about creating a core product at the companies that use our AI,” SandboxAQ CEO Jack Hidary told VentureBeat. “So if you want to create a drug, a diagnostic, a new material, or you want to manage risk at a big bank, that’s where quantitative models shine.”
Why LQMs Matter for Enterprise AI
LQMs have different objectives and work differently than LLMs. Unlike LLMs, which process textual data from the internet, LQMs generate their own data from mathematical equations and physical principles. The goal is to address the quantitative challenges that an enterprise may face.
“We generate data and get data from quantitative sources,” Hidary explained.
This approach enables breakthroughs in areas where traditional methods have stalled. For example, in battery development, where lithium-ion technology has dominated for 45 years, LQMs can simulate millions of possible chemical combinations without physical prototypes.
Likewise, in pharmaceutical development, where traditional approaches face a high failure rate in clinical trials, LQMs can analyze molecular structures and interactions at the electron level. Meanwhile, in financial services, LQMs address the limitations of traditional modeling approaches.
“Monte Carlo simulation is no longer sufficient to handle the complexity of structured instruments,” Hidary said.
Monte Carlo simulation is a classical computational technique that uses repeated random sampling to estimate numerical results. With the SandboxAQ LQM approach, a financial services firm can scale in a way that Monte Carlo simulation cannot. Hidary noted that some financial portfolios can be extremely complex, with all kinds of structured instruments and options.
“If I have a portfolio and I want to know what the tail risk is, given the changes in that portfolio,” Hidary said. “What I’d like to do is create 300 to 500 million versions of this portfolio with slight changes to it, and then I want to look at tail risk.”
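SandboxAQ has not published its methodology, but the classical baseline Hidary is comparing against can be pictured with a minimal sketch: a Monte Carlo estimate of tail risk (Value-at-Risk and expected shortfall) by random sampling. The portfolio value, return distribution and confidence level below are hypothetical, chosen only to illustrate the technique.

```python
import numpy as np

def monte_carlo_tail_risk(portfolio_value, mu, sigma, horizon_days,
                          n_scenarios, alpha=0.99):
    """Estimate Value-at-Risk and expected shortfall by random sampling.

    Hypothetically assumes portfolio returns over the horizon are normally
    distributed -- real structured instruments are far messier, which is
    the limitation Hidary points to.
    """
    rng = np.random.default_rng(seed=42)
    # Simulate many possible portfolio returns over the horizon.
    returns = rng.normal(mu * horizon_days, sigma * np.sqrt(horizon_days),
                         n_scenarios)
    losses = -portfolio_value * returns               # positive values are losses
    var = np.quantile(losses, alpha)                  # loss exceeded (1 - alpha) of the time
    expected_shortfall = losses[losses >= var].mean() # average loss inside the tail
    return var, expected_shortfall

var_99, es_99 = monte_carlo_tail_risk(
    portfolio_value=100_000_000,   # hypothetical $100M portfolio
    mu=0.0002, sigma=0.01,         # hypothetical daily drift and volatility
    horizon_days=10, n_scenarios=1_000_000)
print(f"99% VaR: ${var_99:,.0f}   Expected shortfall: ${es_99:,.0f}")
```

The sketch draws samples from a single, simple return distribution; Hidary’s point is that scaling from that to hundreds of millions of structurally varied portfolios full of complex instruments is where classical Monte Carlo runs out of road.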
How SandboxAQ uses LQMs to improve cybersecurity
SandboxAQ’s LQM technology is focused on enabling enterprises to create new products, materials and solutions, rather than simply optimizing existing processes.
Among the corporate verticals in which the company is innovating is cybersecurity. In 2023, the company first released its Sandwich cryptography management technology. This has since been extended with the company’s AQtive Guard enterprise solution.
The software can analyze an enterprise’s files, applications and network traffic to identify the cryptographic algorithms in use. This includes detecting outdated or broken algorithms such as MD5 and SHA-1. SandboxAQ feeds this information into a governance model that can alert the Chief Information Security Officer (CISO) and compliance teams to potential vulnerabilities.
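AQtive Guard’s internals are not public, but the detection step described above can be sketched with a simple scanner that flags files referencing deprecated algorithms such as MD5 and SHA-1. The pattern list, directory path and reporting format are illustrative assumptions, not SandboxAQ’s implementation.

```python
import re
from pathlib import Path

# Hypothetical deny-list of broken or outdated primitives to flag.
WEAK_PATTERNS = {
    "MD5": re.compile(r"\bmd5\b", re.IGNORECASE),
    "SHA-1": re.compile(r"\bsha-?1\b", re.IGNORECASE),
}

def scan_for_weak_crypto(root: str):
    """Walk a directory tree and report files that mention weak algorithms."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable files
        for name, pattern in WEAK_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), name))
    return findings

if __name__ == "__main__":
    # Hypothetical source tree to inspect.
    for file_path, algorithm in scan_for_weak_crypto("./src"):
        print(f"ALERT: {file_path} references {algorithm}")
```

A production tool would inspect binaries, certificates, libraries and live network traffic rather than plain-text matches, but the governance idea is the same: inventory what cryptography is actually in use, then flag the weak spots.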
While an LLM could be used for the same purpose, LQMs provide a different approach. LLMs are trained on broad, unstructured internet data, which may include information about encryption algorithms and vulnerabilities. In contrast, SandboxAQ’s LQMs are built using targeted, quantitative data about encryption algorithms, their properties and known vulnerabilities. LQMs use this structured data to build models and knowledge graphs specifically for cryptographic analysis, rather than relying on general language understanding.
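How SandboxAQ structures its cryptographic knowledge graphs is not disclosed. The toy example below, using the open-source networkx library with a handful of hypothetical nodes and relations, is only meant to show what “structured data about algorithms and their vulnerabilities” can look like in graph form.

```python
import networkx as nx

# Hypothetical knowledge graph: algorithms, known weaknesses, suggested successors.
graph = nx.DiGraph()
graph.add_edge("MD5", "collision attacks", relation="vulnerable_to")
graph.add_edge("SHA-1", "collision attacks", relation="vulnerable_to")
graph.add_edge("MD5", "SHA-256", relation="replace_with")
graph.add_edge("SHA-1", "SHA-256", relation="replace_with")
graph.add_edge("RSA-2048", "quantum attacks (Shor)", relation="vulnerable_to")
graph.add_edge("RSA-2048", "ML-KEM", relation="replace_with")

def report(algorithm: str):
    """Print known weaknesses and recommended replacements for an algorithm."""
    for _, target, data in graph.out_edges(algorithm, data=True):
        print(f"{algorithm} --{data['relation']}--> {target}")

report("MD5")
report("RSA-2048")
```

Unlike free-text knowledge buried in an LLM’s training data, a structure like this can be queried deterministically, which is what makes it useful for compliance reporting and automated remediation.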
SandboxAQ is also working on a future remediation module that can automatically suggest and implement updates to the encryption in use.
Quantum dimensions without a quantum computer or transformers
The original idea behind SandboxAQ was to combine AI techniques with quantum computing.
Hidary and his team realized early on that true quantum computers would not be widely available or powerful enough in the short term. Instead, SandboxAQ uses quantum principles implemented on enhanced GPU infrastructure. Through a partnership, SandboxAQ has extended the capabilities of Nvidia’s CUDA to work with quantum techniques.
SandboxAQ also does not use transformers, which are the basis of almost all LLMs.
“The models we train are neural network and knowledge graph models, but they are not transformers,” Hidary said. “You can generate from equations, but you can also have quantitative data coming from sensors or other types of sources and networks.”
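SandboxAQ has not published its architectures, but the idea of generating training data from an equation and fitting a non-transformer neural network to it can be illustrated with a small multilayer perceptron. The damped-oscillator equation, library choice and hyperparameters below are purely illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Generate training data from a known physical equation (a damped oscillator),
# rather than scraping text from the internet.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 10.0, size=(5000, 1))
amplitude = np.exp(-0.3 * t) * np.cos(2.0 * np.pi * t)   # x(t) = e^(-0.3t) cos(2*pi*t)

# A small feed-forward network (no attention layers, no transformer)
# learns the mapping from time to amplitude.
model = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                     max_iter=2000, random_state=0)
model.fit(t, amplitude.ravel())

# Query the learned surrogate at points it never saw during training.
t_test = np.array([[0.5], [2.5], [7.5]])
print("prediction:", model.predict(t_test))
print("equation:  ", (np.exp(-0.3 * t_test) * np.cos(2.0 * np.pi * t_test)).ravel())
```

The same pattern applies when the inputs come from sensors instead of equations: structured numerical data in, a quantitative surrogate model out, with no language modeling involved.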
Although LQMs are different from LLMs, Hidary doesn’t see it as an either-or situation for businesses.
“Use LLMs for what they’re good at, then bring in LQMs for what they’re good at,” he said.