Our Technology

The majority of the world’s SQL is already machine-generated, and enterprise AI is poised to accelerate AI-generated SQL by at least 1,000x. This wave will be unprecedented in its volume, concurrency, and complexity, and today’s compute platforms are inadequate for it.

Compute.AI is purpose-built to solve this challenge.

Correcting Inefficiencies

How is Compute Inefficient?

Today’s compute platforms make highly inefficient use of both CPU and memory.

7 in 10 processor cores sit idle, and memory is massively over-provisioned for worst-case scenarios. The result is catastrophic performance degradation and exponential cost increases as SQL becomes more complex and concurrency needs grow.

Memory Inefficiencies

In-memory database architectures run fast when supplied with an adequate amount of memory. However, if a workload requires more memory than provisioned, the job fails with an Out-of-Memory (OOM) error. This forces users to provision memory for the worst case, often through trial and error, a practice referred to as memory over-provisioning.

For example, if a job typically needs 10GB of memory for all its SQL operations but has one operation (say, an explosive JOIN) that requires 1TB of memory, the system must be provisioned with 1TB at all times (or else the job will fail). The tradeoffs are harsh: you can either run with roughly 100x more memory all the time, even though it is needed for only a few seconds, or you can risk a job failure. And for production workloads, failure is not an option.
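
As a back-of-the-envelope illustration of that tradeoff, here is a minimal sketch. The memory sizes come from the example above; the hourly price is a hypothetical placeholder, not a quoted cloud rate.

    # Cost of provisioning for the rare worst case vs. the typical case.
    # All numbers are illustrative; price_per_gb_hour is a made-up placeholder.
    typical_gb = 10            # memory the job needs most of the time
    peak_gb = 1000             # memory the explosive JOIN needs briefly (~1TB)
    hours_per_month = 730
    price_per_gb_hour = 0.005  # hypothetical $/GB-hour

    factor = peak_gb / typical_gb
    typical_cost = typical_gb * hours_per_month * price_per_gb_hour
    worst_case_cost = peak_gb * hours_per_month * price_per_gb_hour

    print(f"over-provisioning factor: {factor:.0f}x")       # 100x
    print(f"monthly cost at 10GB: ${typical_cost:,.2f}")    # $36.50
    print(f"monthly cost at 1TB:  ${worst_case_cost:,.2f}") # $3,650.00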

Complex SQL Inefficiencies

Complex SQL is characterized primarily by the number of JOIN operations in a statement, and secondarily by the number of GROUP BY operations. Running complex SQL on big data has a non-linear impact on CPU and memory: resource needs grow far faster than the query itself.
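
For illustration, a minimal sketch of what “complex” means in practice, written against Apache Spark’s SQL API (the tables and columns are hypothetical; this is not necessarily how Compute.AI is invoked):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("complex-sql-sketch").getOrCreate()

    # Each additional JOIN multiplies the intermediate state the engine must
    # build and hold (hash tables, shuffled partitions), which is what drives
    # the non-linear growth in CPU and memory described above.
    spark.sql("""
        SELECT c.region, p.category, SUM(o.amount) AS revenue
        FROM orders o
        JOIN customers c ON o.customer_id = c.id
        JOIN products  p ON o.product_id  = p.id
        JOIN shipments s ON o.id          = s.order_id
        GROUP BY c.region, p.category
    """).show()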

Complex SQL requires more cores and more memory. Conventionally, these resources are provisioned by dynamically adding nodes to elastic clusters, and new nodes may take several minutes to come up.

Complex operations on elastic clusters tend to involve more data movement across nodes (shuffles) and more local spill-to-disk during relational compute. The side effect is long network and disk I/O waits, which show up as idle CPU. So even though the cluster is enlarged to add compute capacity for complex SQL, it is primarily the added memory that gets used, while the additional cores sit increasingly idle.
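
One way to observe this on a Spark cluster (a sketch using the standard DataFrame API; the table names are hypothetical): Exchange operators in the physical plan mark the points where rows are shuffled across the network.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Joining two tables that are partitioned differently forces a shuffle.
    orders = spark.table("orders")
    customers = spark.table("customers")

    joined = orders.join(customers, orders["customer_id"] == customers["id"])
    joined.explain()  # "Exchange" nodes in the printed plan are shuffles; the
                      # network/disk waits they cause show up as idle cores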

Today’s in-memory Distributed Shared Memory architectures carry these harsh tradeoffs: elastic nodes are added for both CPU and memory, but it is mainly the memory that gets used, while utilization drops across an ever-larger pool of CPUs, hurting overall infrastructure efficiency.

Concurrency Inefficiencies

With stalled CPUs and outsized memory profiles, especially as elastic clusters grow larger and larger to provision memory, performance degrades dramatically as concurrency increases. Running anything more than a handful of concurrent jobs quickly becomes an untenable infrastructure requirement.

Adding complex SQL to the equation dramatically worsens the problem: even moderate levels of concurrency may be hard to achieve, even after provisioning an outrageous amount of memory and growing the cluster to many more nodes. Needless to say, the OOM-Kill problem is exacerbated as well.

Looking ahead to the fast-arriving future of AI-generated SQL, where the volume, complexity, and concurrency of SQL are expected to be ~1,000x those of today’s workloads, current systems are either technically incapable of servicing these use cases, or would require such massive infrastructure over-provisioning that the business viability of the setup is questionable.

Compute.AI makes compute abundant and infinitely scalable.

Total Impact of Inefficient Compute

Because inefficiencies are deeply embedded in the core building blocks of our infrastructure (primarily the CPU-to-memory relationship), everything built on top of that foundation multiplies them: larger-scale clusters, the economics of cloud infrastructure (cloud products bill by the second or minute, not by the amount of CPU actually used), and Data Warehouse and Lakehouse stacks.

With the integration of AI into enterprise workloads, the amount of machine-generated complex SQL is exploding, increasing the complexity and concurrency demands on compute platforms and widening the inefficiency gap.

This is an enormous problem hiding in plain sight.

Opportunity for Change

Data management comprises three main components: DDL (Data Definition Language, which describes data tables and their schema), DML (Data Manipulation Language, typically IMD operations: insert/modify/delete), and DQL (Data Query Language, the queries that read the data).

When using a Lakehouse, DDL and DML are taken care of at the storage layer when data is written as Parquet with Iceberg table metadata, leaving the query side (DQL) to the engine. This allows Compute.AI to build a relatively simple but incredibly fast compute engine with features that address complex SQL and high concurrency.
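
A minimal sketch of how that DDL/DML/DQL split looks on a lakehouse table, assuming a Spark session already configured with an Iceberg catalog (the catalog, table, and column names are hypothetical):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # DDL: define the table and its schema. Iceberg tracks the table
    # metadata; the rows themselves are stored as Parquet files.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS lake.db.sales (
            id BIGINT, region STRING, amount DOUBLE
        ) USING iceberg
    """)

    # DML: the insert/modify/delete (IMD) side.
    spark.sql("INSERT INTO lake.db.sales VALUES (1, 'west', 42.0)")

    # DQL: the read side, which is where the compute engine does its work.
    spark.sql("""
        SELECT region, SUM(amount) AS total FROM lake.db.sales GROUP BY region
    """).show()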

Compute.AI has both patented and patent-pending technology behind the most CPU- and memory-efficient compute platform in the world. This enables a host of new use cases that could not be tackled before, at a fraction of the cost.

What is Compute.AI?

Compute.AI has created an ultra-efficient, open-source compute engine based on Spark SQL that yields 100% CPU utilization, zero over-provisioning of memory, and unlimited concurrency.

We integrate seamlessly with Data Warehouse and Lakehouse environments through a single JDBC endpoint, with no tuning or DevOps required (much like an operating system).
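
For illustration, connecting to a single JDBC endpoint from Python might look like the following sketch using the jaydebeapi library. The driver class, URL, credentials, and jar path are hypothetical placeholders, not Compute.AI’s published values.

    import jaydebeapi

    # Connect once to the JDBC endpoint; no cluster tuning or DevOps steps.
    # Driver class, URL, credentials, and jar path below are placeholders.
    conn = jaydebeapi.connect(
        "com.example.jdbc.Driver",
        "jdbc:example://compute-endpoint:10000/default",
        ["user", "password"],
        "/path/to/jdbc-driver.jar",
    )
    cursor = conn.cursor()
    cursor.execute("SELECT 1")
    print(cursor.fetchall())
    conn.close()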

Compute.AI delivers high performance, unlimited concurrency, and scalability with zero waste (in dollars or energy consumed): the world’s first compute platform for AI-generated SQL and other compute-hungry applications.