Compute Needs for AI and the Semantic Layer

Video interview with Joel Inman, CEO of Compute.AI

Joel Inman, CEO at Compute.AI, joins John Furrier and Dave Vellante, Co-Founders and Co-CEOs of SiliconANGLE Media, for a discussion of AI and the Semantic Layer at Supercloud 3.

Aug 18, 2023

(John Furrier) Welcome back to Supercloud 3. I’m John Furrier with Dave Vellante, and we’re live in Palo Alto Studios. It’s the second day of two great days of security plus AI. That’s the theme for Supercloud 3. Supercloud 4 is coming up in October, so mark your calendar. That’s going to be all about AI. This session actually is a lot about AI and security. We’re going to find the truth in the data. We’re joined today by Joel Inman, who’s the CEO of Compute.ai – I love the domain name. Joel, great to see you. You’re a CUBE alum. It’s been a while. How are things?

(Joel Inman) Thanks for having me, things are great. I just got started coming back into the space after time away, and I was able to be reintroduced to Vikram Joshi who’s the founder of the company and really excited to partner with him.

(John Furrier) Vikram was supposed to be here today but he couldn’t make it. He had something going on. I really appreciate you coming on. We’ve been unpacking Supercloud 3 episodically every quarter. This one’s about security and AI, obviously the data has been a big conversation.

If you look at the forcing function of this next wave that’s coming, the AI is certainly over the top. It’s powering a lot of activity in the developer community as well as companies. So, security is an operational thing. So, data now has continued to inject a forcing function in how people are organizing how they do cloud and how they do on-premise hybrids, and certainly with the Edge as well.

What’s your take on supercloud, as this evolves into the next layer of IT coming? It’s the same game, new variables. What’s your view of supercloud?

(Joel Inman) Yeah, I think it’s really driven by people needing to get more out of their data, to get those deep, meaningful insights out of their data. And the demand for analytics is three or four orders of magnitude greater than it has ever been, and it’s just growing. I mean, AIML (artificial intelligence and machine learning) is kind of the accelerator on top of that, but if you look at what’s happening right now with cloud infrastructure and the need to scale out and support those massive, complex workloads, that’s where we are today. And we’re at an inflection point where that demand is only going to get stronger.

(Dave Vellante) So, there may be a security angle here. John and I were talking off camera about the semantic layer, and we talk all the time about the single version of the truth. It’s been like the holy grail that tech companies have been trying to attain forever. Data warehouses didn’t do it, MDM didn’t do it, lakehouses haven’t done it. So, what is your story around the semantic layer?

(Joel Inman) That’s a great question, and that kind of goes to the heart of the founding of Compute.AI. Data warehouses and data lakehouses have been a great step forward in building these repositories where we’re trying to analyze the data, right?

But you need to connect that data. You need to have that connective tissue across your entire organization in order to go a level deeper and understand what to do with your business. Compute.ai and what we’re doing, our vision for the future, is really separating compute from data management. The reason why we’ve had compute as part of these relational database architectures is that it was convenient, okay?

But now the demand for that compute is skyrocketing, and having that compute trapped in data silos is no better than having data trapped in data silos. Because it starts to break down. There’s no concurrency, and the costs kind of get out of control as you start to try to feed the supercloud with that kind of infrastructure.

(Dave Vellante) So, contextualize this for us, because John and I have been talking about this over the last couple of months, and the last couple of decades really. So, take Snowflake and what it does.  It separates compute from storage, but it doesn’t do what you’re saying because you’re still putting it all inside of Snowflake (I think – I’m inferring). And now look at what Databricks did at its recent show, kind of basically building out a data mesh with its data lakehouse, being able to connect to even any data, Snowflake or whatever. Now, of course Snowflake’s also connecting different data types. What’s different about your vision?

(Joel Inman) So, the difference is separating compute, and our product is called pure compute, right? Because you really want to get that compute set aside. It’s building compute into the fabric of your server infrastructure. So, we envision a compute engine on every server, interacting with the data in a very open and scalable fashion, running SQL directly on files. And taking that relational component and then plugging in a compute infrastructure that is inherently reliable, that is highly memory efficient, and that is able to scale from a tiny server, from one node up to hundreds of nodes and thousands of users.

You can hit our Compute.AI engine with different workloads, you can hit it with ad-hoc queries, you can hit it with batch processing. It doesn’t go down, it doesn’t fail, and the costs are linear as they scale. You’re just simply benefiting from the elasticity of the cloud that you’re on.
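Compute.AI’s engine itself isn’t public, but the “SQL directly on files” pattern Inman describes can be sketched with DuckDB, an open-source in-process engine, as a stand-in. The data below is fabricated so the snippet runs end to end; in practice the Parquet files would already sit in object storage.

```python
# Sketch of "SQL directly on files": an embedded engine queries Parquet in
# place, with no load step into a managed warehouse. DuckDB stands in for
# the compute engine here.
import duckdb

con = duckdb.connect()  # in-process engine, no server to stand up

# Fabricate a tiny Parquet file so the example is self-contained.
con.sql("""
    COPY (SELECT * FROM (VALUES ('SF', 12.5), ('SF', 9.0), ('NYC', 15.0))
          AS t(city, fare))
    TO 'rides.parquet' (FORMAT parquet)
""")

# Ad-hoc query straight against the file; batch jobs hit the same engine.
print(con.sql("""
    SELECT city, count(*) AS rides, avg(fare) AS avg_fare
    FROM 'rides.parquet'
    GROUP BY city
    ORDER BY rides DESC
""").fetchall())
```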

(Dave Vellante) And so you’re saying that on the Compute.AI platform that if I’m going to have data in different data types, if I’m going to have different query mechanisms, then you’re saying you basically translate it all back into SQL to make it simple?

For example, if I have a vector database, a graph database, a relational database, a structured and unstructured database, those are all different data elements that are incoherent today. Now, I can maybe bring in a DBT (data build tool) to KPI-ify the metrics in a data warehouse. I can maybe do something with that scale, but it’s still a real heavy lift. So, is your vision to change that so that all those data elements are coherent and can be joined at scale?

(Joel Inman) Yeah, that’s exactly what we’re seeing and what we’re saying. I mean, you said the key word there, DBT (data build tool), right? And people are using that authoring tool to create their workloads and their pipelines and try to knit this data together. So, when you’re doing that, when you’re creating the semantic layer, what you’re doing is actually executing thousands, and hundreds of thousands, of joins, right? You’re taking tables and formats and rows and columns from all the disparate areas of your business, and you’re putting them together into one semantic layer that is referenceable. And you said unstructured; it needs to have structure, it needs to have that kind of basic structure of a SQL engine.
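To make that join fan-out concrete, here is a minimal sketch of what a dbt-authored semantic model might compile down to, again with DuckDB standing in for the compute engine and hypothetical Parquet extracts standing in for the silos.

```python
# The semantic layer as executed joins: tables from disparate silos
# (hypothetical Parquet extracts here) knitted into one referenceable
# view. A real dbt project would generate hundreds of models like this.
import duckdb

con = duckdb.connect()
con.sql("""
    CREATE VIEW customer_360 AS
    SELECT c.customer_id, c.segment, o.order_total, t.ticket_count
    FROM 'crm/customers.parquet'        AS c
    LEFT JOIN 'erp/orders.parquet'      AS o USING (customer_id)
    LEFT JOIN 'support/tickets.parquet' AS t USING (customer_id)
""")
# BI tools and AIML pipelines then reference customer_360 directly.
```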

(John Furrier) Not to interrupt the cadence here but really quick, define the semantic layer for the folks watching. We’ve heard of Semantic Web and we see ChatGPT. But when you say semantic layer, you guys are referring to a different concept within a data plane, right? Can you just explain what the semantic layer is?

(Joel Inman) Our view of the semantic layer is simply metadata that connects the different data silos so that you can put it all together.
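That definition can be made concrete with a toy sketch: a registry of metadata that maps business entities to the physical tables and join keys that hold them, from which the connecting SQL can be generated. All names here are hypothetical.

```python
# Minimal sketch of "the semantic layer is metadata": a registry mapping
# business entities to physical tables and join keys. A real layer would
# also carry types, metrics, and lineage.
SEMANTIC_LAYER = {
    "customer": {"table": "crm.customers", "key": "customer_id", "links": {}},
    "order": {"table": "erp.orders", "key": "order_id",
              "links": {"customer": "customer_id"}},
}

def join_clause(child: str, parent: str) -> str:
    """Generate the SQL join that the metadata implies."""
    c, p = SEMANTIC_LAYER[child], SEMANTIC_LAYER[parent]
    fk = c["links"][parent]  # foreign key column on the child table
    return (f"{c['table']} JOIN {p['table']} "
            f"ON {c['table']}.{fk} = {p['table']}.{p['key']}")

print(join_clause("order", "customer"))
# erp.orders JOIN crm.customers ON erp.orders.customer_id = crm.customers.customer_id
```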

(John Furrier) And the use case for that would be? The benefits would be what?

(Joel Inman) The use case for that would be reaching into all the different aspects of your business and really being able to analyze the ephemeral data that you don’t have time to put into a data warehouse. You don’t have time to run the analytics and the statistics on it, because if you take the time to actually put it into that data warehouse, the moment is gone.

(Dave Vellante) This is the key point. It’s real time, and this is why we use Uber as an example. People, places, things, riders, drivers, ETAs, destinations, prices, transaction data, all different types of data, but to Uber the data is coherent and they make sense of it in real time. They’re not shoving it into a data warehouse, analyzing it and pushing it back out.

(John Furrier) Because the lag-to-value ratio is that by the time the data’s worth anything, it’s over. You’d never get the ride.

(Joel Inman) Yes. It’s gone, the moment has passed and you can’t ever get the data again.

(Dave Vellante) Okay, so this is the problem that you’re solving. What’s interesting here is, if the Snowflakes and the Databricks of the world don’t own that semantic layer, what’s going to happen, potentially, is like what happened to Oracle with BEA. They basically extracted that and the rest of the application world took advantage of it. So, this semantic layer, in the context of a real-time digital representation of your business, is the next wave, and now you can start to bring in AI. And there is a security angle here, to be able to do this for security in real time. There’s a narrow application in security, but the applications are endless.

(Joel Inman) Well, and there’s also an open standards implication here too, right? We’re seeing a lot of people move to the Iceberg table format on Parquet, and that’s what we support. And that kind of needs to be there for the community at large to be able to utilize these types of tools.

(Dave Vellante) I mean, you saw that at Databricks. They said, all right, we’re going to take, whether it’s an Iceberg table or Hudi or Delta, whatever it is, we’re going to translate it into Parquet at the backend and we’ll take care of everything. So, the Iceberg standard is emerging; everybody’s leaning into it.
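Because Iceberg is an open specification over Parquet files, any conforming engine can read the same table without going through a proprietary warehouse. A hedged sketch with the open-source PyIceberg client, assuming a REST catalog at a placeholder address and a hypothetical table name:

```python
# Hedged sketch of reading an open-format table with PyIceberg. The
# catalog URI and table name are hypothetical placeholders; any engine
# that speaks the Iceberg spec could scan the same Parquet files.
from pyiceberg.catalog import load_catalog

# Assumes a REST catalog is running at this placeholder address.
catalog = load_catalog("default", **{"type": "rest", "uri": "http://localhost:8181"})

table = catalog.load_table("analytics.rides")  # hypothetical namespace.table

# Scans return Arrow data that any downstream tool can consume.
arrow_table = table.scan(limit=10).to_arrow()
print(arrow_table.schema)
```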

(John Furrier) You mentioned Snowflake and Databricks, Dave. I’d love to get your perspective, guys, on this. It sounds like Compute.AI is disruptive to people who have an incumbent position.

(Dave Vellante) Yeah, I think it is. I mean, again, I think Snowflake is, they have a database mindset. They’re database guys – came out of Oracle. And I think they’re bringing in people with an application mindset because they want to be the iPhone, the App Store, for enterprise data apps. But in order to do that, if they want to serve those real-time applications, they’ve got to have at least some kind of relationship with the semantic layer. And personally, I think they need to own it to justify their valuation. I don’t know, do you have a thought on this?

(Joel Inman) Yeah, I do. Our vision of compute is that we believe compute is a new category, okay? People have never really separated it out before. In terms of being disruptive, that’s not our goal. We’re very collaborative, and we want to partner with all these companies, right? So, we love everybody, and we think we can fit right in there.

So, I think that we’re seeing the early days, we’re seeing the need, we’re seeing people have data warehouses and data lakehouses both in their environment. I think actually you published this stat, 42% of all customers have both. And so, that speaks to the need here, which is data’s everywhere. It’s spilling out, right? And how do we make use of it?

We need to make use of it with a relational structure, a semantic layer, and then the infrastructure to support that. And the infrastructure to support that is going to be the supercloud.

(Dave Vellante) So, in concept, how could you partner with a Snowflake? I don’t know if you are, or are thinking about it, but how in theory could you partner with a platform that is essentially a closed, proprietary system like Snowflake?

(Joel Inman) Well, I want to dodge that question and actually talk about Spark, Presto, and Trino, right? Because those are the areas that we play in.

(Dave Vellante) I have the same question for Databricks. I mean, with the exception of, well, there are some proprietary processes in Databricks as well.

(Joel Inman) Yeah, I mean, we’re a small piece of the system. Think of us as a coprocessor or a compute engine, right? So, we go into that system, and maybe it is a Snowflake, and you point DBT directly at our engine so you offload the compute. And you say compute is going to the compute engine, DBT is pointed at our product, and then you load the result back into your data warehouse.
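A minimal sketch of that offload pattern, with DuckDB standing in for the compute engine and the warehouse write-back left as a placeholder:

```python
# Sketch of the offload pattern: the heavy transform runs on a separate
# compute engine (DuckDB as a stand-in), and only the finished result is
# handed back to the warehouse.
import duckdb

engine = duckdb.connect()

# 1. The expensive SQL (in practice, a large multi-join dbt model)
#    executes on the compute engine instead of the warehouse.
engine.sql("""
    CREATE TABLE daily_revenue AS
    SELECT DATE '2023-08-18' AS day, 42 AS orders, 1234.50 AS revenue
""")

# 2. Materialize the result as Parquet for the warehouse to ingest.
engine.sql("COPY daily_revenue TO 'daily_revenue.parquet' (FORMAT parquet)")

# 3. Hypothetical write-back using the warehouse's own loader, e.g.
#    Snowflake: COPY INTO daily_revenue FROM @my_stage/daily_revenue.parquet
```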

(Dave Vellante) Or I could containerize your stack inside of Snowflake. I mean if you wanted to develop on top of Snowflake you could do that with what they’ve just announced in theory.

(Joel Inman) Yeah, the beautiful thing about the vision is that we believe compute needs to be everywhere. Almost like a fabric, like how VMware came out in the early days. And so, it needs to run almost like an operating system, unattended; you don’t think about it, it’s just there.

(Dave Vellante) And it’s got to be at the Edge. This is the thing, I have no doubt that the likes of Snowflake and Amazon and all the cloud guys are thinking about this. No question in my mind, it’s just unclear to me how they play where the data is, which is everywhere. And you’re saying compute has to be everywhere. And I also think the compute is going to be an ARM-based processor that’s low power, low cost, and incredibly powerful.

(John Furrier) Well, my question for you guys is, I love the name by the way, Compute.ai. Great URL, I mentioned that at the top. What’s the AI aspect of Compute? Because we’ve had a lot of conversations in theCUBE, this session and before, about how to move the compute to the data, because data egress is kind of expensive and moving data around is costly, but also about being smart. So, having an AI component, where’s the AI in the compute side of your play?

(Joel Inman) That’s a great question, and prepare yourself for a long-winded answer. I’ll try to be concise.

So, the first part is, we use AIML in our product, right? And we use it to page to disk elegantly. So, within our code we use these algorithms to garner memory efficiency that has never been seen before. By bypassing the memory bus throttle that prevents CPUs from being fully utilized, we can run CPUs at 100% utilization, even over-commit CPUs, over-commit memory, and have a spill to disk that is very elegant. And so, that’s where we have the cost-efficient piece of our product, right? Users of Compute.AI never get OOM (out-of-memory) killed by memory overloads, and they don’t need to over-provision memory anymore. So, that’s number one: we use AI, and it’s integral to the product that we built.
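Compute.AI’s actual paging algorithms aren’t public, but the general idea of graceful spill-to-disk can be sketched as an external aggregation that stays under a fixed memory budget by flushing partial results to temp files and merging them afterwards:

```python
# Toy sketch of graceful spill-to-disk. BUDGET stands in for a byte
# budget; real engines track actual memory use, not group counts.
import os
import pickle
import tempfile
from collections import defaultdict

BUDGET = 2  # max in-memory groups before spilling (deliberately tiny)

def aggregate(rows):
    groups, spills = defaultdict(float), []
    for key, value in rows:
        groups[key] += value
        if len(groups) > BUDGET:              # "memory" pressure hit
            fd, path = tempfile.mkstemp(suffix=".spill")
            with os.fdopen(fd, "wb") as f:
                pickle.dump(dict(groups), f)  # spill partials, then reset
            spills.append(path)
            groups = defaultdict(float)
    for path in spills:                       # merge phase
        with open(path, "rb") as f:
            for key, value in pickle.load(f).items():
                groups[key] += value
        os.remove(path)
    return dict(groups)

print(aggregate([("a", 1.0), ("b", 2.0), ("a", 3.0), ("c", 5.0)]))
# {'a': 4.0, 'b': 2.0, 'c': 5.0} -- no OOM, just a spill and a merge
```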

The second piece of our name has to do with providing infrastructure for the AI generation, right? And so, we’ve had a decade of data scientists building these AI models, and they’re ready to go into production. The rubber is going to hit the road. And when they do that they need SQL as the empowering relational engine to put that into practice. And we’re right there with them to support that. Because otherwise if you put an AI workload on top of the current infrastructure where compute is trapped in database silos, you’re going to get costs that go through the roof.

There was a study that I read recently from CSAT that showed the biggest LLM is going to cost $25 trillion in compute by 2026. I don’t know if that’s still relevant or not, but $25 trillion, right? So, McKinsey came out with a study that said we’re going to benefit $3 to $4 trillion per year in economic value as a global economy. Well, if you’re spending $25 trillion and you’re benefiting $3 to $4 trillion, that math doesn’t add up. So, something has to be done about the compute and the infrastructure to support AIML, and that’s where we play.

(Dave Vellante) So, to John’s point, you’re not only bringing the compute to the data, but you’re bringing the compute along with the AI and the ML to the data. And there’s another piece as well, which is running somebody else’s AIML?

(Joel Inman) I was talking to a customer yesterday who said, “We have these models that we’ve been building, and they’re proprietary to us, and we need to be able to run them within our platform.” And they were using BigQuery from Google. Perfect example of generating and building these models and then putting them on our infrastructure, and they had a serverless infrastructure and it just works. And so, that’s the type of example, people are going to have to figure out how do we put our AIML into production?

(John Furrier) And then apps are coming. What’s your vision of how the application market develops? Because with supercomputing, super compute, smart compute, AI compute, which you guys have, and the supercloud layer, now you’ve got super applications that are going to be data native, natively managing the data, and a lot of this ephemeral data will live in the app. Dave mentioned Uber, and we’ll see more of those apps being coded. So, a whole developer tsunami is coming as well; we’re seeing a Cambrian explosion of developers getting their hands on these open source tools. So, yeah, the rubber’s hitting the road for the gen-one data scientists with their LLMs and foundation models. Now you’re going to have coders coding on top of data natively.

(Joel Inman) Yeah, I mean, I think there are two parts to your question. One, how are applications evolving and developing? I think they need to be developed with the semantic layer in mind. We’re really moving towards more of a data-centric ecosystem where applications need to stop being so grabby; they need to share data with everybody. And the architecture needs to change a little bit to reference that semantic layer. That also corresponds with AIML, because what greater way to feed your production AIML workload than with a semantic layer that reaches into every piece of your business, right? So, we’re setting the stage. Supercloud is setting the stage not only to meet the demands of increasingly complex workloads from BI (business intelligence) tools, per se, but also to build the data center that’s going to provide support for AIML.

(Dave Vellante) This is why supercloud is not just hybrid and on-premise to cloud and across cloud. It has to stretch out everywhere. And that’s why it is a metaphor for the future of what we call cloud, but the way we think about cloud as a remote set of services is changing.

(John Furrier) Joel, thank you so much for coming in for Supercloud 3. Supercloud 4 is going to be all about AI, which is right up your wheelhouse, compute and AI. I don’t think that those two things are going away anytime soon. I’m glad you flew in.

(Joel Inman) Thank you. I appreciate you guys having me, and hopefully next time Vikram can make an appearance.

(John Furrier) Yeah, that’d be great. We’ve been speaking with Compute.AI. I love the name, love the domain name and those are two things that are going to be more and more abundant and important. And of course, Supercloud 4 is coming and is all about AI. I’m John Furrier with Dave Vellante and we’ll be right back after this with more from Supercloud 3.