Our Technology

The majority of the world’s SQL is already machine-generated. Enterprise AI is poised to multiply the volume of AI-generated SQL by at least 1,000x. This wave will be unprecedented in its volume, concurrency, and complexity, and today’s compute platforms are inadequate for it.

Compute.AI is purpose-built to solve this challenge.

Correcting Inefficiencies

Understanding the Need

Inefficient Platforms

How is Compute Inefficient?

Today’s compute platforms make highly inefficient use of both CPU and memory.

7 in 10 processor cores are sitting idle, and memory is massively over-provisioned for worst-case scenarios, resulting in catastrophic performance degradation and exponential cost increases as SQL becomes more complex and concurrency needs grow.

THE CPU

CPU Inefficiencies

Processor cores sit stalled waiting on memory, due to memory-bus latency and worsening core-to-memory ratios, resulting in extremely low processor utilization.

The efficiency gap between software and hardware, counting CPU utilization alone, wastes at least ~70% of cloud CPU resources.

Analogy

Imagine renting a car by the hour that has a 10-cylinder engine but can only ever fire 3 cylinders at a time. Not only are you underpowered while driving on the highway, the car also runs about 70% below its full capacity.

The wasted cost is more than simply paying for 10 cylinders and only getting 3; it is also the cost of renting the car roughly three times longer than necessary, when you could have reached your destination far sooner had it been firing on all 10 cylinders.

This is the way data platforms consume cloud CPU today.

THE MEMORY

Memory Inefficiencies

In-memory database architectures run fast when supplied with an adequate amount of memory. However, if a workload requires more memory than provisioned, the job dies with an Out-of-Memory (OOM) failure. This forces users to provision memory for the worst case, often through trial and error, a practice referred to as memory over-provisioning.

For example, if a job typically needs 10GB of memory for all its SQL operations but has one operation (say, an explosive JOIN) that requires 1TB of memory, the system must run with 1TB of memory all the time (or else the job will fail). The tradeoffs are harsh: you either run with 100x more memory around the clock, even though you need it for just a few seconds, or you risk a job failure. And for production workloads, failure is not an option.
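
To make the explosive JOIN concrete, here is a minimal PySpark sketch; the tables and the skewed key distribution are invented for illustration. Two modest inputs joined on a low-cardinality key produce intermediate state orders of magnitude larger than either input.

```python
# Minimal PySpark sketch of an "explosive" JOIN. The tables and key
# distribution below are hypothetical, purely for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("explosive-join").getOrCreate()

# Two 100,000-row inputs, but only 10 distinct join keys,
# so each key matches ~10,000 rows on each side.
left = spark.range(100_000).selectExpr("id", "id % 10 AS k")
right = spark.range(100_000).selectExpr("id AS rid", "id % 10 AS k")

# Output cardinality: 10 keys * 10,000 * 10,000 = 1 billion rows,
# a memory profile wildly out of proportion to the ~100K-row inputs.
exploded = left.join(right, "k")
print(exploded.count())  # on a small cluster, this is where OOM bites
```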

Analogy

Now, imagine you pre-pay the rental car company for 100 to 1000x more gas than you need to get to your destination, just in case you have to take a long detour. 99% of the time, you won’t need it, but you can’t ever risk running out of gas.

When you return the car, the gas money is non-refundable. This is the way data platforms consume memory today.

THE SQL

Complex SQL Inefficiencies

Complex SQL is characterized primarily by the number of JOIN operations in a SQL statement, and secondarily by the number of GROUP BY operations. Running complex SQL on big data often has a non-linear impact on CPU and memory: far more of these resources are needed than the growth in query size alone would suggest.
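
As a point of reference, here is the shape of a moderately complex query, sketched in Spark SQL over a hypothetical star schema (sales, customers, products, and stores are invented names, not from any specific workload):

```python
# Illustrative only: a moderately complex query (three JOINs plus a
# GROUP BY) over a hypothetical star schema assumed to be registered
# with the active SparkSession.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

query = """
    SELECT c.region,
           p.category,
           COUNT(*)      AS orders,
           SUM(s.amount) AS revenue
    FROM   sales s
    JOIN   customers c ON s.customer_id = c.id
    JOIN   products  p ON s.product_id  = p.id
    JOIN   stores    t ON s.store_id    = t.id
    GROUP BY c.region, p.category
"""
# Each JOIN adds its own hash tables and shuffled partitions to the
# intermediate state, which is why resource demand grows non-linearly
# with the number of JOINs rather than with input size.
result = spark.sql(query)
```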

Complex SQL requires more cores and more memory. Today, these resources are typically provisioned by dynamically adding nodes to elastic clusters, and new nodes may take several minutes to come up.

Complex operations on elastic clusters tend to involve more data movement (shuffles) across nodes and more local spill-to-disk for relational compute. The side effect is long network and disk I/O waits, which show up as idle CPU. So even though the cluster is enlarged to add compute capacity for complex SQL, it is primarily the added memory that gets used, while the additional cores sit increasingly idle.
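
These shuffles are easy to see for yourself. In the PySpark sketch below (synthetic data, with broadcast joins disabled so the query actually takes the shuffle path), each "Exchange" operator in the physical plan is cross-node data movement of exactly the kind described above:

```python
# Make the shuffles visible: Spark marks cross-node data movement as
# "Exchange" operators in the physical plan. Data here is synthetic.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# Disable broadcast joins so this small example takes the shuffle path.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1)

a = spark.range(1_000_000).selectExpr("id", "id % 100 AS k")
b = spark.range(1_000_000).selectExpr("id AS rid", "id % 100 AS k")

joined = a.join(b, "k").groupBy("k").count()
joined.explain()  # every "Exchange hashpartitioning(...)" node is a
                  # shuffle: network and disk I/O while cores sit idle
```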

Today’s in-memory, distributed-shared-memory architectures come with these harsh tradeoffs (more elastic nodes added for CPU and memory, but mainly the memory gets used while aggregate CPU utilization drops), and they drag down overall infrastructure efficiency.

Analogy

Now imagine you are driving your rental car uphill (complexity increases). The already inefficient engine has to work harder to make the climb, but the only things you can do are drop into a lower gear (slow down) and give it more gas (memory). And if even fewer cylinders fire as you give it more gas, the situation only worsens.

This is the way today’s data platforms handle complexity.

THE CONCURRENCY

Concurrency Inefficiencies

With stalled CPUs and outsized memory profiles, especially as we move to larger and larger elastic clusters just to provision memory, performance degrades dramatically as concurrency increases. Anything more than a handful of concurrent jobs quickly becomes an untenable infrastructure requirement.

Adding complex SQL to the equation dramatically worsens the problem: even moderate levels of concurrency may remain out of reach after provisioning an outrageous amount of memory and growing the cluster to many more nodes. Needless to say, the OOM-Kill problem is exacerbated as well.
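
A quick way to observe this degradation on any SQL platform is a load-test sketch like the one below (run_query and some_table are stand-ins, not a Compute.AI API): submit n identical queries at once and watch mean per-query latency climb as n grows.

```python
# Hedged load-test sketch: fire n identical queries concurrently and
# measure mean per-query latency. "some_table" is a hypothetical
# registered table; substitute any complex query of your own.
import time
from concurrent.futures import ThreadPoolExecutor

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def run_query(_):
    start = time.time()
    spark.sql("SELECT k, COUNT(*) FROM some_table GROUP BY k").collect()
    return time.time() - start

for n in (1, 4, 16, 64):
    with ThreadPoolExecutor(max_workers=n) as pool:
        latencies = list(pool.map(run_query, range(n)))
    print(f"{n:>3} concurrent queries: mean latency {sum(latencies) / n:.2f}s")
```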

Looking ahead to the fast-arriving future of AI-generated SQL, where the volume, complexity, and concurrency of SQL are expected to be ~1,000x today’s workloads, current systems are either technically incapable of servicing these use cases, or customers would require such massive infrastructure over-provisioning that the business viability of the setup becomes questionable.

Compute.AI makes compute abundant and infinitely scalable.

Analogy

Now you have your rental car running on three of ten cylinders, with 1,000x more gas than needed, chugging its way uphill, and you want to add your family to the car. For every extra person, the car slows down by 20-50% and burns exponentially more gas. All of a sudden, you are having a family reunion and need to add 100 more family members to the car.

This is the way today’s data platforms respond to higher-concurrency workloads (such as business intelligence and AI-generated SQL).

The Impact

Total Impact of Inefficient Compute

Because inefficiencies are deeply embedded in the core building blocks of our infrastructure (primarily the CPU-memory relationship), everything built on top of this foundation multiplies them: larger-scale clusters, the economics of cloud infrastructure (cloud products bill by the second or minute, not by the amount of CPU actually used), and Data Warehouse and Lakehouse platforms.

With the integration of AI into enterprise workloads, the amount of machine-generated complex SQL is exploding, increasing the complexity and concurrency demands on compute platforms and driving the inefficiency gap even higher.

This is an enormous problem hiding in plain sight.

Analogy

Rental car engines are already very powerful, but they simply aren’t able to deliver their full power to the driver, and they are guzzling way too much gas.

The hill is getting steeper, additional passengers are piling into the car at a rate never before seen, and the car is moving slower and slower and slower while the rental costs are getting higher by the minute.

These compounding inefficiencies lead to many problems.

The Change

Opportunity for Change

Data management comprises three main components: DDL (Data Definition Language, which defines data tables and their schemas), DML (Data Manipulation Language, typically insert/modify/delete operations), and DQL (Data Query Language).

When using a Lakehouse, DDL and DML are taken care of at the storage layer by Parquet + Iceberg. This frees Compute.AI to build a relatively simple but incredibly fast compute engine focused on DQL, with features that address complex SQL and high concurrency.
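
The division of labor looks roughly like this in Spark SQL with an Iceberg catalog. This is a sketch with an invented catalog name, warehouse path, and table name, and it assumes the Iceberg Spark runtime jar is on the classpath:

```python
# Sketch of the Lakehouse division of labor: DDL and DML land in
# Iceberg + Parquet; DQL is the compute engine's job. The catalog
# ("demo"), warehouse path, and table names are invented.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

# DDL: the table definition and schema live in Iceberg metadata.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.db.events (id BIGINT, kind STRING)
    USING iceberg
""")

# DML: inserts/modifies/deletes are handled by Iceberg + Parquet as well.
spark.sql("INSERT INTO demo.db.events VALUES (1, 'click'), (2, 'view')")

# DQL: the only layer the compute engine itself must excel at.
spark.sql("SELECT kind, COUNT(*) AS n FROM demo.db.events GROUP BY kind").show()
```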

Compute.AI has both patented and patent-pending technology that provides the most CPU- and memory-efficient compute platform in the world. All of this enables a host of new use cases that could not be tackled before, at a fraction of the cost.

Analogy

Historically, rental cars all used their own proprietary engines. Recently, however, all the rental car companies have standardized their components, making it possible and easy for a third party to introduce engine upgrades to the entire fleet.

Compute.AI provides the most CPU and Memory efficient engine upgrade in the world.

The Solution

What is Compute.AI?

Compute.AI has created an ultra-efficient compute engine, based on open-source Spark SQL, that yields 100% CPU utilization with zero over-provisioning of memory and unlimited concurrency.

We integrate seamlessly with Data Warehouse and Lakehouse environments through a single JDBC endpoint, with no tuning or DevOps required, much like an operating system.
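
From a client's perspective, a single JDBC endpoint means any existing SQL tooling can point at the engine unchanged. A minimal Python sketch using the jaydebeapi package follows; the driver class, endpoint URL, credentials, and jar path are hypothetical placeholders, not documented Compute.AI values.

```python
# Sketch of a single-JDBC-endpoint integration from Python via
# jaydebeapi. Every connection detail below is a hypothetical
# placeholder, not a documented Compute.AI value.
import jaydebeapi

conn = jaydebeapi.connect(
    "com.example.jdbc.Driver",            # hypothetical driver class
    "jdbc:example://host:10000/default",  # hypothetical endpoint URL
    ["user", "password"],                 # credentials
    "/path/to/jdbc-driver.jar",           # driver jar
)
cur = conn.cursor()
cur.execute("SELECT 1")  # any SQL your BI or AI tooling already emits
print(cur.fetchall())
conn.close()
```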

Compute.AI provides high performance, unlimited concurrency, and scalability with zero waste (in dollars or energy consumed), making it the world’s first compute platform for AI-generated SQL (and other compute-hungry applications).

Analogy

Compute.AI has developed a turbocharger for all rental car engines that unlocks the other seven cylinders, allowing your rental car to go 10x faster, which in turn lets you rent it for one-tenth of the time.

The engine now consumes only the gas needed for the trip, and you don’t have to prepay. You notice that driving uphill doesn’t slow you down or consume more gas, and that you can add as many passengers as you want without penalty.

Last but not least, you can install the turbocharger yourself; you don’t have to wait for the rental car company. It is easily available everywhere you go, works with the standardized components of every engine on the market, and takes less time to install than filling up the car with gas.