Generative AI & the Role of Compute

Nov 29, 2023 | Video

A discussion on generative AI and the role of compute with John Furrier at Supercloud 4

Joel Inman and Vikram Joshi, CEO and CTO of Compute.AI, discuss generative AI and the role of compute with John Furrier, CEO of Silicon Angle Media

John Furrier: Hello, welcome back to Supercloud 4. Generative AI, this is the focus. I’m John Furrier, host of theCUBE. We’re here for our studio event, and we’ve got two great guests to unpack generative AI, the role of compute, and how it all works together. Okay, Vikram Joshi, founder, president, and CTO. Thanks for coming on theCUBE, appreciate it. Joel Inman, CEO, Compute.ai. First of all, love the domain name, got to love that right out of the gate. Good to see you, guys.

Joel Inman: Thank you. Thanks for having us.

John Furrier: Joel, you’ve been on theCUBE before, at Supercloud 3, I believe. I think it was 3. Was it 3?

Joel Inman: Yep. Three.

John Furrier: About cloud scale, the security. Just a few months ago.

The focus is generative AI. The hype is off the charts, and you’re seeing kind of two schools of thought. You’ve got the old-school systems entrepreneurs and thinkers who are all over this; they see a whole other complete paradigm shift around how computing, distributed computing, is going to be implemented with the AI angle. And then there’s the next generation, kind of the young guns coming up.

Joel Inman: Yeah.

John Furrier: Kind of like a generational shift, clearly a major inflection point. Some are saying it’s as big as the PC revolution, the web revolution, and then mobile. I think it’s all three combined, in my opinion. We see this as a big wave. It’s a generational thing, but it has the infrastructure side. Super important. Everyone’s talking about the apps, but under the covers, whether it’s platform engineering or cloud, the generative AI is going to be a big part of their narrative going forward. You guys are in the middle of it. We’re going to get into the compute relationship to all this, but how do you see that app and infrastructure piece?

Joel Inman: Well, first of all, I’ll talk to the hype cycle question that you posed. I think we’re at the peak of the hype cycle, maybe even a little bit beyond the peak. And we’re going to go into the trough of disillusionment.

But for those who have a vision that is focused and understand the impact, it’s not there for no reason. We’re going to have a wave for the next 10 years of implementing, adopting enterprise AI in ways that improve our productivity. And I think the McKinsey study proves that we’re going to have $4 trillion a year in extra economic productivity due to AI as we implement it, as we figure out what it even means.

John Furrier: Vikram as the founder, real quick, what was the motivation? What was the vision around this? Because your background… First, explain your background and then get into the founding.

Vikram Joshi: Yeah, so I’m a system software engineer. I write code, and I’m an entrepreneur. This is my fourth startup. In my past lives, prior to being an entrepreneur, a good part of my life has been growing companies and chasing big ideas and dreams. I started my career at Sun Microsystems, where I was fortunate to have worked with some of the founders and brought some early Sun machines to life. Later, at Silicon Graphics, I played with 3D graphics before GPUs were born. So, I changed domains rapidly. And then later Oracle, which is, I think, more pertinent and relevant to this conversation, having to do with relational databases and data. So that’s pretty much been my background.

John Furrier: So, we’ve seen the movie, you’ve seen the wave. You saw the telecom connectivity, the compute power, minicomputers, then that whole interconnect, the networking, the distributed computing revolution. I mean, many waves of innovation. Joel, you guys have talked about the liberation of compute from data warehouses. We’ve been covering big data going back to the early days of theCUBE 13 years ago, the Hadoop days. And the vision was great, but it just never happened.

But yet data warehouses now moved to the cloud and now everyone says move the compute to the data and you get the edge. But with AI, we see a whole other power dynamic. It’s almost as if AI is a gift that dropped into the market at this time that’s going to change the landscape and liberate the market, and specifically the role of compute from data and data management systems because all we hear about today is, I want to do more AI, but I need more GPUs. And oh my God, the compute costs are off the charts. What’s your thoughts on this liberation of separating compute from the database data warehouse and the database management systems of old?

Joel Inman: Great question. So first of all, when we think about AI and the adoption in the enterprise, it is going to drive a thousand times more demand for complex compute. And that complex compute is going to be in the form of machine-generated SQL. I mean, that’s the lingua franca of enterprise AI. That’s what’s proliferating everywhere. You can see it in business intelligence applications today, generating more and more complex SQL for compute.

So, what that means is we need to get our data story together. We need to come together and figure out how do we shore up our infrastructure? How do we drive a lot more efficiency out of our compute? And the first thing that needs to happen is we need to liberate it from our data warehouses. So just as storage was liberated from compute, we need compute to be liberated from our relational database applications. And once we do that, we’re in a place where we can spread compute everywhere. And we’re also in a position to go open.
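To make the liberation idea concrete, here is a minimal sketch of the pattern Joel describes, using DuckDB purely as one example of an embeddable open SQL engine. The file name and schema are invented for illustration; this is the general open-files pattern, not Compute.ai’s product.

```python
# Compute over open files: any SQL engine can query the same Parquet data
# directly, with no warehouse silo in the middle. DuckDB stands in here as
# one example engine; the schema below is hypothetical.
import duckdb

# Write a tiny Parquet file to stand in for lakehouse data.
duckdb.sql("""
    COPY (SELECT range AS user_id, range % 5 AS region, random() AS spend
          FROM range(1000))
    TO 'events.parquet' (FORMAT PARQUET)
""")

# Compute runs wherever you put the engine; the data stays in an open format.
duckdb.sql("""
    SELECT region, count(*) AS users, round(sum(spend), 2) AS total_spend
    FROM 'events.parquet'
    GROUP BY region
    ORDER BY region
""").show()
```

The Parquet file is the shared substrate; any number of engines, on any machines, can bring compute to it without first loading it into a proprietary warehouse.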

John Furrier: You know, the hyperscaler market boomed because of the cloud architecture. This is interesting with data. It’s almost as if data has hit this new inflection point where generations of thinking might have to be thrown out the window, because AI works great with more data, but data’s also what everyone wants to steal. That’s the security challenge. Then you’ve got the distributed computing challenge, plus latency. And database theory doesn’t make sense if you start to think that way. So, what does it mean for people to think about this? This is a huge concept, separating the compute from data-

Vikram Joshi: Compute from data management?

John Furrier: What is it? What does it mean?

Vikram Joshi: Yeah, yeah. I’m so glad that you put the spotlight on that. And just so we get the lingo and the terminology right: it’s not about separation of storage from compute. That problem was solved, whether it was MapReduce or the Oracles of the world; whether it was SAN or NAS independently scaling storage from compute doesn’t matter.

This is about pulling compute out of databases, out of data warehouses, liberating it, making it available like oxygen, like the ether, ever-present, omnipresent out there, because data is everywhere. And if you look at the clouds, take AWS: EC2, for example, is the entire compute layer, and then they have the storage layer. Every cloud has its own equivalent.

The ability to recruit large numbers of cores and compute without having to think in terms of a database silo, this idea that I need to put my data into a table, into a database and a data warehouse, to be able to type SQL, I think that’s passé. That day has gone. So we are upon a new future that looks very different. Even if you look at what’s going on with the BI applications today, Tableau, Power BI, Looker, they generate at least 10x more SQL than all humans combined. What’s the future? The future is more autonomous sources of SQL generation, more AI/ML-driven dashboards.

No one’s going to sit out there with their editing tools and say, “Let me type, do a little bit of ELT.” You’re going to throw 50,000 joins. Was it Dave Vellante who talked about the compute not being here for doing those 50,000 joins? Well, let’s talk about a hundred thousand or a million, and make that cost effective.

So semantic-layer compute is not going to go out to humans; it’s going to be AI/ML-driven, along with the low-code and no-code applications. So when we talk about AI/ML, often the first thought that comes to mind is, let me use those GPUs and run some generative AI and LLM models. This is a bit different than that. This is the impact of AI at an application level that is now going to go out and tap into these data stores, this semantic knowledge, and this universal index of information, to feed these applications, all done in an autonomous, AI/ML fashion.

And the compute for that is going to be what? A thousand times, 10,000 times more than the SQL we have today, is anyone’s guess. And leave aside that today’s machinery is not even efficient at what we already do. People are talking about data warehousing costs. So that’s sort of-
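As a rough illustration of the scale Vikram is describing, consider how even one small BI dashboard emits SQL. The template, table, and column names below are hypothetical, not taken from any real BI tool:

```python
# Each dashboard tile re-issues a generated query per filter per refresh,
# which is how machines come to emit far more SQL than humans ever type.
TILE_METRICS = {"revenue": "sum(amount)", "orders": "count(*)",
                "avg_basket": "avg(amount)"}
REGIONS = ["NA", "EMEA", "APAC"]

def tile_query(metric_expr: str, region: str) -> str:
    return (f"SELECT order_date, {metric_expr} AS value "
            f"FROM sales WHERE region = '{region}' "
            f"GROUP BY order_date ORDER BY order_date")

queries = [tile_query(m, r) for m in TILE_METRICS.values() for r in REGIONS]
refreshes_per_day = 24 * 60  # refresh every minute, all day
print(f"{len(queries)} queries per refresh")
print(f"{len(queries) * refreshes_per_day} queries/day from one small dashboard")
```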

John Furrier: It’s like the caveman invented the wheel. Now we’ve got to move into the modern era, is what you’re saying. And what you’re saying is interesting: you mentioned the lingua franca is SQL, and large language models have the word language right in them. So SQL is the language; it’s machine-to-machine talking, if you will.

Vikram Joshi: Exactly.

John Furrier: So this feels like a neural net meets large language models. Are we looking at a different system? I mean almost like an operating system for AI. It’s like if you take that forward, you have all this compute. What happens next? ‘Cause you can’t run it on the old infrastructure or maybe you have to modify, abstract away the infrastructure. How should people think about this unlimited compute or dynamic compute or elastic compute? I mean, what do you call it? I mean, ’cause if you have compute everywhere, it’s oxygen.

Joel Inman: Yeah. Well, we like to call it abundant compute. So, the concept is you should be able to breathe it; your application should just have it available wherever it is needed. When we break down data silos, we also have to think about breaking down compute silos. It does us no good if we move the data everywhere and have a data-centric enterprise, but our compute is still stuck in silos here, there, and everywhere, dictated by the applications. We need to spread the love all over the data center. How we do that is the technical question. And you’re right to point it out: it requires reinventing the vertical stack, from the very top down to the firmware we’re operating our hardware with.

John Furrier: It sounds complicated. Simplify it for me. What is the bottom line? How would you describe it? Because it sounds too good to be true because you’re setting the table for data being addressable and secure.

Vikram Joshi: Well, I think this is not something new. I’m just going to roll back a little bit here. The concept of separating, pulling compute out of data management or databases, has been around for quite some time. I mean, let’s go back to those days. What did they do? They took compute and pushed it towards data, which was better for certain workloads. Rather than move terabytes of data towards compute, you take compute and move it towards data. Later, Exadata followed, inspired by Netezza and others.

So, playing with storage and compute separation, independently scaling them, the ability to take compute and move it around: these concepts have been around, and credit where it’s due, the work done by the Spark and especially the Presto community has made a huge dent in the separation of compute from the data management and data layer.

For example, if you look at the lakehouse, what is the lakehouse? It’s just a bunch of Parquet files, and for compute you just type in SQL. You don’t think about it. You’re not confined to a data warehouse. You’re not confined to one of those silos. So, I believe the precedent is here; the problems that need to be solved have to do with compute efficiency and making it cost effective. And the final frontier, I think, for data warehouses and databases is concurrency. Data warehouses and concurrency don’t go together.

So, when you start to look at the new kinds of workloads and applications that are going to be coming out and hitting these databases, we are talking about this machine-generated SQL out here, it’s going to be so much in quantity, right?

We talked about many orders of magnitude more, and complexity too. A machine generates pretty complex SQL, right? And complexity along with concurrency exacerbates the whole problem. You want to make the compute for that efficient. And to set the stage, what’s the state of the union today? If you look at the compute efficiency of data warehouses, databases in general, especially for elastic compute, we are using three out of 10 cores, right? That’s 30% CPU utilization. And maybe even that’s a generous number, especially if you start to look at elastic clusters. So, we are leaving 70 cents on the dollar on the table.

And obviously, you hear about cloud data warehouse compute costs. I talk to customers all the time who say, “Help me here. We love what we have. We like having 2,000 connectors. The cloud data problem has been solved, but my compute costs are very high.”
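The 70-cents-on-the-dollar point reduces to simple arithmetic. A sketch with an illustrative price (the dollar figure is hypothetical; the 30% utilization is the number Vikram cites):

```python
# At 30% average CPU utilization, the effective price of the compute you
# actually use is more than 3x the sticker price.
sticker_per_core_hour = 0.05   # hypothetical $/core-hour
utilization = 0.30             # three out of ten cores doing useful work

effective_price = sticker_per_core_hour / utilization
print(f"effective $/useful core-hour: {effective_price:.3f}")      # ~0.167
print(f"cents left on the table per dollar: {(1 - utilization) * 100:.0f}")
```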

John Furrier: By the way, that’s a validation of what we’re hearing in the marketplace as well. Cost is the bottleneck. Joel, that’s a great point about the lakehouse. I want to get your thoughts on this because it’s still a relatively new concept. It’s clear that data warehouses are on their last legs. It’s an old way, not the new way, but it’s still installed everywhere. What does a customer need to be aware of when considering migrating to, say, a lakehouse, where you can start setting up an architecture for this kind of new compute separation?

Joel Inman: Yeah. Well, I think that the warehouse market has been given a life extension by some of the leading vendors. And that life extension is from… they made it very easy to use. There’s a single JDBC endpoint, it’s in the cloud, it’s easy to access, you don’t need to hire a team of DevOps engineers, and it’s absolutely reliable, a hundred percent. Those are really compelling things for people in the enterprise who don’t want to dip their toe in the lake, so to speak, or the lakehouse, right? And when we’re talking to customers, many of them are saying, “I want to build a lakehouse. I know I have to; the data’s ephemeral, it’s open format. I need to go there to solidify my infrastructure, to future-proof my infrastructure. But do I need to hire a team of DevOps engineers? Is it going to fail on me if I run out of memory?” So, these are problems that still haven’t been solved. And the giants in the industry are solving these problems right now. It’s a really exciting thing to see.

Vikram Joshi: Yeah, if I were to jump in and add to that, that’s right on. The spirit is there. People want to go open. They want to spill their tables and their data warehouses out onto open Parquet, Iceberg, or Delta, whatever format. And the general thinking is, I do want to go to an open standard, an open dialect of SQL such as, say, Spark or Presto, right? And I don’t want to have to throw DevOps engineers and other stuff at it. It’s like buying my Tesla along with a whole shop of technicians; I don’t want to have to do that. And I think credit goes to the cloud data warehouse companies out there, the pioneers who made it super easy with that single JDBC or SQL endpoint.

I just look at that, I don’t have to deal with it, and stuff just happens. When’s that going to happen for the lake? And the other issue you touched upon, at the risk of repetition here, John and Joel, is that I have to provision for my worst-case memory. Every once in a while, the compressions and rarefactions of my workload require massive amounts of memory. Provisioning for the worst case means… of course, I love in-memory systems and nothing’s wrong with that; stuff is supposed to run in memory. But there’s more data, more complexity, more concurrency, and you’re not going to be able to throw large amounts of memory at it. And the tradeoffs are harsh: you start to page that memory to disk, right? You spill to disk, and the performance is going to be slow. So, provisioning for the worst case means lack of efficiency. These are the kinds of issues that the customers we talk to face.

Joel Inman: I want to just complete the thought there for you, or maybe add another point. The data lake market is growing at twice the speed of the warehouse market: 20% CAGR through 2028, by one estimate I saw, versus 10% CAGR. So lakes, lakehouses I should say, are already growing at twice the pace. Can you imagine the rate of adoption we’d see if it were as easy to use as a cloud data warehouse, as simple as connecting to a single endpoint, and just as reliable? If it were enterprise-grade, ready to go off the shelf, it would be massive.

Vikram Joshi: No out-of-memory failures, right? No OOM kills.
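A small sketch of the worst-case-provisioning problem just described, with made-up workload numbers: demand spikes rarely, but an in-memory engine must be sized for the spike or it OOMs or spills to disk.

```python
# Hourly peak memory demand for a hypothetical workload: steady ~40 GB with
# one rare 300 GB spike (the "compressions and rarefactions" Vikram mentions).
hourly_peak_gb = [40, 42, 38, 45, 41, 300, 39, 44]

typical = sorted(hourly_peak_gb)[len(hourly_peak_gb) // 2]  # median demand
worst_case = max(hourly_peak_gb)

print(f"median demand: {typical} GB, worst case: {worst_case} GB")
print(f"overprovision factor if sized for the spike: {worst_case / typical:.1f}x")
```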

John Furrier: Well, that’s a good point. First of all, the word memory is interesting now because it comes up in two senses: memory as in physical memory for storing stuff, and then memory from a recall perspective. Retrieval is a big hot topic in generative AI.

So, I want to connect that to the Supercloud theme, which is across environments, whether it’s two public clouds or on-premises and edge, ’cause you’re talking about compute, moving compute, right? The edge was really where the conversation first started around moving compute around. We all know moving data’s expensive. So, when you look at data and its addressability, how do you look at the multi-environment piece? Is it a semantic layer? Is it just one big data pool where you have the intelligence built in with AI, with compute kind of programmed into it?

I mean, I’m just kind of riffing on this, but how do you see it? Because I can see the benefits of data lakes over data warehouses, check, but the data warehouses in a public cloud are also constrained to their cloud. So, the easy button here would be one big pool and managing multiple environments. Am I over the top here, or am I fantasizing too much?

Vikram Joshi: No, I’m going to let Joel address this.

John Furrier: What decade am I in? 2055?

Joel Inman: That might be beyond my pay grade as well.

Vikram Joshi: Can’t we all just be that one big company?

John Furrier: I’ll be dead by-

Joel Inman: I’ll offer a thought. So, the big Fortune 500 companies that we’ve been in conversations with want to go hybrid. They want to go on-prem, and cloud, and multi-cloud. They want to have their cake and eat it, too. When you start talking about federal contractors and military applications, they want edge devices. They don’t want to wait for their data to be centralized to some repository; there are security issues around that. So fast-forward here: as we see the evolution of the Supercloud, it’s going to exist at the edge, it’s going to exist in the fog, it’s going to exist in the data center and in the cloud itself. And our small piece of this party at Compute.ai is driving compute efficiency in all of those places. It’s really the interoperability between our pieces of hardware, making sure we get the most out of the CPU, the most out of our memory, and the most out of our applications, all the way up the vertical stack.

Vikram Joshi: Yeah, absolutely. Yeah, yeah, absolutely. Push the limits of CPU and memory utilization, dissipate fewer watts. What would the world look like if we could dissipate 5x fewer watts? You talked about the edge. We do talk to the government and DoD and so on, and on the edge, battery power is a huge issue. So, you want to do federated compute, right? I mean, that’s the model that even Google-

John Furrier: And real time tactical edge, for example, in military is all about real time, having the data, low latency-

Vikram Joshi: Having the data.

John Furrier: Battery.

Vikram Joshi: Someone’s walking out there with a battery pack, or there’s a robot doing that. So, compute efficiency is going to be very critical. And of course, GPUs are important. You’re going to have to make those decisions out in the field without being able to tap into, or even have a connection back to, the center.

John Furrier: I mean, basically this comes down to the use case. Your point about the Tesla: you have to be optimized for the use case or the application. So here we come to vertical versus horizontal. You want the scalability, but if you’re on the edge, say military, you need to also talk back to the central data, but also maybe replicate data. There are all kinds of use case scenarios for that application, and it may not be used by anybody else.

So you need to have the compute. Is this an advantage, where separating the compute makes that better? Is that an example? Am I thinking about it the right way?

Vikram Joshi: Oh, oh, absolutely. Though I’d say that while there are, John, as you point out, so many problems there to be solved, the one that we think is quite fundamental, and maybe has the potential of being the tide that lifts all boats, is compute efficiency. And when you look at compute efficiency, it’s really compute and memory. They’re two sides of the same coin, right? The core-to-memory ratios haven’t gotten any better; they’re actually getting worse. As we know, processor speeds and memory speeds have stagnated for the last 15 years or so, well over a decade, right? So, things haven’t gotten much faster. We are packing more transistors into processors, and what do you do with those? Either you put more cores in there or you put more caches. Yeah.

And now with in-memory systems holding large amounts of transient, ephemeral data, data that’s just passing through, I need to hit that data really hard while it’s under my cores and while it’s in memory, and all of that is causing massive CPU stalls. So, when Joel and I look at these systems, we ask: what is the root cause of the lack of CPU utilization? Why is it that we’re seeing only 30% CPU utilization? And why is it that when you look at elastic clusters and distributed systems, the CPU utilization is even lower? Then you put the economics back into the picture, for example, cloud costs. We’re talking about the edge here too, and the same thing almost applies at the edge. If you look at cloud costs, they’re directly a function of the amount of memory, which is the most expensive component of the whole thing.

And memory and CPU are literally two sides of the same coin. They mean money. If your CPUs are stalled, you’re not doing work. And if your memory is not sufficient, you’re paging to disk, and you’re going to get IO waits. And that means less utilization. Whether you’re paging…

Joel Inman: So, yeah, go ahead.

Vikram Joshi: Just to complete the thought, sorry… don’t worry about interrupting me, though. I just want to get the whole thought out here. Whether you’re getting the data from a lower tier of storage, which is a spill to disk, right? In the past, you had to take a coffee break when you started to page. Remember the Linux operating system? Or you’re getting that data over the network, which is going to cause IO waits.

These are some of the issues having to do with efficiency of compute, aside from the liberation of compute.

John Furrier: So, this is where the action is, because Dave and I were interviewing one of the head guys at Dell, Jeff Clarke, and the marketing people were like, “Only talk about solutions, not about speeds and feeds.” But we’re in a speeds-and-feeds market right now. All the conversation is how fast can the silicon go. I need more chips. I need more power. We’re in a renaissance of systems architecture on a globally distributed scale-

Vikram Joshi: Without doubt.

John Furrier: And now with AI as the gift, that’s going to give the hyperscalers more power. It’s really a perfect-storm opportunity. I mean, it’s pre-game, not even first inning. This is where the opportunity is heading. Where does Compute.ai fit in? What’s the vision? Where are you taking the company? ‘Cause right now, all the future scenarios put aside, people just want generative AI working. They want to be positioned to leverage the current situation with headroom so they don’t foreclose the opportunities ahead of them. They don’t want to misfire. So, I won’t say baby steps. I’ll call it maybe kindergarten: play with some blocks here, get on the rug, play with generative AI. People are experimenting. They’ve just got to get going. Where are you taking Compute.ai?

Joel Inman: Well, Compute.ai is a fundamental building block for next-generation architectures, and it really is providing the scale that’s needed to address that extra workload, the 1000x workload, that AI is going to bring. If we don’t fix our data infrastructure now, and by fix I mean making it compute-efficient, making it infinitely concurrent and scalable, making it fit to feed a machine that is just hungry and eating and eating all this processing power, if we don’t fix that down to the very lowest level of CPU utilization, then the costs are going to be out of control. Let’s bring it back down to earth for a second. Take a leading cloud data warehouse company, just as one example, right? If you add a ninth user, your cloud cost doubles; your cluster size doubles on the backend in the cloud.

Vikram Joshi: Concurrency.

Joel Inman: Every ninth user. So that’s a concurrency of eight per cluster. And we’re talking about the need for a concurrency of thousands, or the hundreds of thousands of joins Vikram mentioned before.
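Taking Joel’s description at face value, the concurrency economics look like a staircase (the per-cluster price below is hypothetical; the eight-users-per-cluster step is the behavior he describes):

```python
# Cost staircase for a warehouse that adds a whole cluster for every eight
# concurrent users.
import math

def clusters_needed(concurrent_users: int, users_per_cluster: int = 8) -> int:
    return math.ceil(concurrent_users / users_per_cluster)

cost_per_cluster_hour = 16.0  # hypothetical $/cluster-hour
for users in (8, 9, 16, 17, 1000):
    n = clusters_needed(users)
    print(f"{users:>5} users -> {n:>4} clusters -> ${n * cost_per_cluster_hour:,.0f}/hr")
```

At a concurrency of thousands, the cluster count, and therefore the bill, scales linearly with users rather than with useful work.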

John Furrier: Concurrency is huge. And this is, again, why I said this is so disruptive to the database world, because the theory of databases was constrained by the state of the art at that time.

Vikram Joshi: Yeah. Yeah, exactly. And that’s the reason why we’ve had OLTP-style workloads, which is ATM-style bank transactions, a million per second or whatever. And then you had warehousing, which was more data mining, DSS-style workloads. And traditionally that has been: let me just run through billions of rows in a column, shred through them rapidly. That’s what data mining was. Let me do an aggregate and give you the mean, median, or mode, whatever, right?

From there, what has happened, and I think it’s important that we talk about this, especially when we talk about efficiency of compute and what that really means: when you look at modern-day workloads and modern-day complex compute, it’s not exercising our columnar stores the old way, even though the data is in Parquet on disk. If you’re talking in open-source terms, it comes into memory as Arrow format. That’s still columnar. But the days of columnar databases, especially if you subject them to modern-day workloads, are gone, because it’s no longer columnar work. So let me give you an example.

John Furrier: It’s postmodern, it’s old modern. Now, we got to remodernize it, basically.

Vikram Joshi: Right, right, right. There’s a better-

John Furrier: Supercloud them.

Vikram Joshi: There you go. So the pattern we’re seeing for some of the more modern compute, especially when it’s autogenerated by one of these autonomous AI-driven SQL sources, is row-column, row-column. You’re going through large numbers of columns, doing aggregates. And now watch this: you take these aggregates and use them as join keys against other columns, in the same or different tables. That’s super complex. You no longer get the benefit of that columnar compression you get in memory. You have to go row-column, row-column. And that’s where you have to overprovision memory for the worst case. That’s where OOM kills happen; that’s where you run into out-of-memory failures. So, this row-column paradigm puts a huge burden on the memory infrastructure and, as a result, on the compute. And now you see the same problem.
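Here is a small concrete instance of the row-column pattern Vikram describes, again with DuckDB standing in as an example engine and an invented schema: compute per-group aggregates, then use them as join keys back against the data.

```python
# Aggregates used as join keys: every row must be compared against a computed
# value, so the engine cannot lean on simple linear columnar scans alone.
import duckdb

duckdb.sql("""
    CREATE TABLE sales AS
    SELECT range AS id, range % 7 AS store, (range % 13) * 1.5 AS amount
    FROM range(10000)
""")

duckdb.sql("""
    WITH store_avg AS (
        SELECT store, avg(amount) AS avg_amount FROM sales GROUP BY store
    )
    SELECT s.store, count(*) AS above_avg_rows
    FROM sales s
    JOIN store_avg a
      ON s.store = a.store AND s.amount > a.avg_amount
    GROUP BY s.store
    ORDER BY s.store
""").show()
```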

John Furrier: So AI as a gift on one hand is also a challenge, and this is where inflection points really show. There’s always going to be some new friction that you have to fix.

Vikram Joshi: And massive levels of complexity are going to come in. I mean, if you look at the SQL generated by Tableau, unless it’s stupidly simple select statements because my table was denormalized… and you cannot keep denormalized summary tables forever. Maybe I’m taking you guys too deep into it, but generating simple-

John Furrier: We don’t mind.

Vikram Joshi: Select statements is not always possible, because data changes, new tables come in, more joins are needed, more GROUP BYs are needed.

John Furrier: SQL stands for Structured Query Language; LLM stands for large language model. Say the word language: it could be the lingua franca. Final question, in conclusion. First of all, great masterclass. I love the deep dives. It’s really good to get into, ’cause it shows where the action is.

Vikram Joshi: That’s where it’s happening.

John Furrier: You’ve got to go deep into the stack and say, okay, that’s the problem. It will be fixed; it has to be, and it will be, because the opportunities are so great.

Final question for you, guys. What is the future vision of compute? ‘Cause I like this idea of separation, and that’s a concept we’ll continue to talk about because it makes sense. But what’s the future vision of compute?

Vikram Joshi: So I’m sure Joel, as the skipper, has something to say too. But I didn’t want to do another database company here. I don’t want to be the 10th search engine that has nothing significant to offer beyond Google. So, the whole challenge ahead of us is lack of infrastructure efficiency. We’re throwing massive amounts of infrastructure at this; CPU and memory are the problem, and costs are higher as a result. There’s an opportunity there, given the problem of AI/ML and autonomous sources of SQL generation: huge amounts of SQL coming in with high complexity and high concurrency.

So, solving those problems is super exciting from a technical perspective. And we can see there is ample business opportunity and value here. There are today’s problems that need to be solved: “Help me, my data warehouse costs are high. What can I do to offload my compute? I’d love to go to the lake.” So there’s an opportunity here for us to address some of these customer pain points today and future-proof these customers’ needs for the future, which I think is going to-

John Furrier: Joel, it’s a new paradigm. What are your thoughts? We’ll close it out.

Joel Inman: So our mission is to make compute abundant and infinitely scalable. I kind of said that a couple times. Like oxygen, like breathing, right?

John Furrier: It’s a dream scenario.

Joel Inman: Well, the future’s closer than you think. What we’re seeing in real workloads and situations is a 50x price-performance improvement in early use cases. And we’re building on that. I kind of alluded to this earlier, but if you can stick something into the environment that is very open, that has a single endpoint, that is very easy to use with no DevOps requirements, that’s where we’re entering the market. So we’re taking these data lake infrastructures and building upon them. We’re standing on the shoulders of giants and saying, “Maybe we can add a little bit here. We can make it easy to use, open, infinitely scalable, and reinvent the wheel without even reinventing the wheel, so that people can simply consume it.”

John Furrier: Well, if you can enable companies to build better, faster generative AI apps, which are data-driven, where data’s the enabler, whether it’s native, sometimes a bolt-on, sometimes an abstraction, however you look at it, that will be critical.

Joel Inman: Yeah. I mean, early use cases are data transformations and pain points where the costs are high. But I can’t tell you how many conversations we’re having about AI: how do I operationalize this, how do I get it out after I’ve trained it, how do I deploy it?

John Furrier: Vikram, great to see you on this new venture, and big idea. Joel, great company. Love the URL, Compute.ai. We’ll have to get the backstory on how you got that amazing URL. We don’t have theCUBE.ai yet; someone else got it, beat us to the punch. But thanks for coming on.

Vikram Joshi: John, great meeting today, and thank you for having us over. Thank you.

John Furrier: All right, bringing you all the action from Supercloud 4: generative AI, the infrastructure, and what it takes to make it happen, to enable developers and applications to be AI-native with generative AI. That’s what this focus is all about at Supercloud 4. Thanks for watching.