Rodrigo Liang: The World of Artificial Intelligence (AI) is here

February 14, 2022 00:35:15
Ayna Insights

Show Notes

Rodrigo Liang, CEO and co-founder of one of the best-funded AI start-ups – SambaNova Systems – talks about the rapid transition to the AI world, the parallels between this transition and the internet era, and the industries that are already witnessing disruption through AI. Rodrigo discusses the fundamental shift required in hardware architecture and the merits of a software-first approach optimized for data workflows.

With their recent success and limelight in the AI world, Rodrigo and the SambaNova team are continuing this exciting journey as a trusted advisor to enterprises looking to accelerate their AI deployments. 

Where is AI in its development curve right now? Why has data overtaken operations as the driver of AI software and hardware? How do enterprises think about and approach at-scale AI adoption? Tune in for an insightful conversation with a leader in this exploding field.

Discussion Points

Ayna Insights is brought to you by Ayna, the premier advisory firm in the industrial technology space that provides transformation and consulting services to its clients. The host of this episode, Paolo Baldesi, is a former Senior Vice President at Fernweh.

This episode is part of the Disruption 2.0 series where the focus is how a new wave of technology is disrupting multiple sectors.

For More Information

Rodrigo Liang at SambaNova

Rodrigo Liang on LinkedIn

Ayna

Paolo Baldesi on LinkedIn


Episode Transcript

[00:00:03] Speaker A: Welcome to Fernweh Insights, where prominent leaders and influencers shaping the industrial and industrial tech sector discuss topics that are critical for executives, boards and investors. Fernweh Insights is brought to you by Fernweh Group, a firm focused on working with industrial companies to make them unrivaled "Segment of One" leaders. To learn more about Fernweh Group, please visit our website. [00:00:38] Speaker B: Hi, this is Paolo Baldesi, Senior Vice President at Fernweh Group. Welcome to a new episode of the Fernweh Insights podcast, as we continue with the theme of Disruption 2.0. Our guest today is Rodrigo Liang, co-founder and CEO of Palo Alto-based artificial intelligence startup SambaNova Systems. Last summer, the startup raised $676 million in Series D funding, bringing its total funding to more than $1 billion and raising its valuation to more than $5 billion. Founded in 2017, SambaNova came out of stealth mode just over a year ago, and it is shaking up the AI space by equipping its customers to join the AI revolution and transform their businesses in weeks rather than years with Dataflow-as-a-Service. Prior to co-founding SambaNova, Rodrigo was a Senior Vice President at Oracle responsible for SPARC processor and ASIC development. He led one of the industry's largest engineering organizations, responsible for the design of state-of-the-art processors and ASICs for enterprise servers. Rodrigo has Master of Science and Bachelor of Science degrees in electrical engineering from Stanford University. Rodrigo, welcome to our podcast. I'm very excited to have you here today. [00:02:00] Speaker C: Thank you so much. [00:02:01] Speaker B: How are you? [00:02:02] Speaker C: I'm doing well. [00:02:03] Speaker B: Very good, thank you. So we'll start with a very basic question, getting your perspective on the state of artificial intelligence. You have talked a lot about the shift from a pre-AI to a post-AI world.
For the benefit of our listeners, could you describe what this transition entails for the various industries and customers? What magnitude of change are we talking about here? [00:02:28] Speaker C: Yes, the pre-AI to post-AI world is what I was talking about last year. What you've seen over the last twelve months or so is that many of the largest companies have now accelerated into it. The level of transition this represents mimics what we saw about 20 years ago with the Internet, where initially the Internet seemed like basically email, maybe a webpage, maybe a little bit of light browsing and commerce. Today it is at the center of everything we do. Every business, every person touches the Internet every single day. Companies that did not embrace it at the time, 15-20 years ago, are now facing a significant disadvantage in the market, or are potentially gone. AI has this parallel today. We've seen the capabilities that AI is able to produce for individuals, for companies, for enterprises; it's just a matter of how quickly people adopt it. And so we are in the early, early days, but already accelerating into production AI. And I think what we want to talk about here is: where do people start? How do people get going? Because it will be the fundamental technology that powers businesses, powers everything we do every single day. And you're going to see a lot of these things come in in a very rapid fashion, transforming every part of every company in the world. [00:04:06] Speaker B: Great, I think this is a good segue to the next question. What are the use cases and applications that you're seeing accelerating, and that will become mainstream? [00:04:15] Speaker C: Yeah, if you look at most industries, you see people already doing robotic process automation, RPA.
These are low-level tasks that humans have had to do for years, and that we're now able to put systems in for — letting the machine, using text recognition, voice recognition, image recognition, process them very efficiently, very effectively, probably more accurately, because humans are sometimes error-prone with those types of tasks — and deliver significant cost benefits, significant velocity benefits, many good things. You see it, for example, in banking and financials; people are doing this everywhere with text extraction for a variety of tasks the banks have to do, very efficiently. Look at banks using it for compliance: looking at contracts, scanning contracts, extracting data out of the contracts and inserting it into different places in their systems so that they can make use of the information. So lots and lots of use cases around text and language. Of course, you hear about language translation, where people use it for customer service. And some of them today are pretty amazing — you make a phone call and you're not quite sure if the person on the other side is live or not. It's pretty amazing what these translated and generated languages are able to do these days. So tremendous use cases around language, tremendous use cases around image, tremendous use cases around recommendations and systems like that that we can talk about. But it's a super exciting time for the world to really embrace these types of use cases and bring them into their businesses. [00:05:57] Speaker B: Great. You mentioned the financial industry, and there are probably many other industries that deal with customers, for which customer service is central. What are the industries that are going to be among the first to go mainstream with AI use cases and applications? [00:06:15] Speaker C: Yeah, healthcare is going to be another one that's really exciting.
We've done some really great work with the US government — Argonne National Laboratory, Lawrence Livermore National Laboratory. We've done some tremendous work there where these models are incredibly large, and you're trying to think about imaging and detecting cancer in medical images and things like that, where the image recognition has to be very sophisticated, because the resolution of the image has to be very high to find the very, very fine artifacts embedded in these very complex images. So you're training on images and identifying — whether that's classifying certain images for cancer or not, or segmenting them, which is drawing boundaries around where the artifact is. These are really sophisticated use cases that are very, very useful for radiologists and folks in the medical imaging world, allowing them to be more productive and make their services more extensive. Certainly, we did a lot of work with Lawrence Livermore on COVID and some of the work around there. You can think about microscopy and a lot of the similar challenges of training on images that are very high resolution and then looking for insights in those types of models — very difficult to do. And you're going to see people continue to use machines to accelerate that type of work. So drug discovery is already very pervasive; you hear that a lot. You hear about patient data now — people combining images and text from doctors' reports and using that data to actually help with diagnostics, with preventive care. All of those things you hear about on the healthcare side of the world are very, very active, with incredible progress using AI. You see it with retail. Retail is going to be another area, where recommendation systems have already been in place for many, many years. But for the most part, they're correlational — meaning big data: if, Paolo, you bought this, other people that bought that also bought this thing.
So the correlation, as good as it can be today, is not nearly as good as if you allow the machine to look at all the possible data and raise the likelihood that that recommendation is going to be pertinent to you. And so there are a number of large companies — Facebook, among other big companies — that have invested a lot of energy really trying to figure out how to advance this field. How do we bring even greater accuracy? Because it's good for the vendor, but equally it's good for the customer, the consumer. None of us want to be presented with stuff that doesn't apply, right? We don't want that; that's noise. Think about how much noise is in our inboxes every day, how much noise is on our phones every day, because they're suggesting things. The world wants to be more efficient on both the seller side and the buyer side; you want these systems to be very pertinent to what we care about. There are a lot of large companies really making huge strides in it. And this is one where you can really see the competitive nature of it: those who are able to advance this to a certain level will have an undeniable advantage over those who do not, because there are only so many dollars in the market, so many eyeballs in the market. Whoever gets consumers first will be the ones that have an advantage. And I think companies are going to have this race toward building these systems, because it's a competitive world — trying to capture eyeballs, capture customers, capture dollars. So retail is going to be one of those really interesting areas. And I use retail as a very general term: it could be retail, it could be advertising, it could be a variety of things that we do today on our phones and media devices. [00:09:59] Speaker B: Fantastic. Now talk us through the role of hardware development. What is the magnitude of change that we need to see when it comes to hardware?
What are the building blocks of making AI a reality for all businesses? [00:10:17] Speaker C: Let's take a step back and think about the magnitude of why things are changing. I've been building high-performance systems for enterprise for almost 30 years, and we've gone from a generation where we were building these large systems — almost mainframe-like, at the time — large-scale systems tailored to run software programs that were written by computer scientists. So if you think about the interaction of these systems with the application, it was largely with a small number of computer scientists who were writing code specifically for that machine. Come the mid-to-late nineties, what happened? You had an entire ecosystem of hardware — computers, servers, networking, storage — all tuned for that handful of computer scientists and serving their needs. And then the Internet just broke it open, and all 7 billion people in the world were initiating traffic and touching a server every day. So think about that: why did that change happen? People think, oh, it's because we created multicore, because we created this type of stack, that type of software. No. What fundamentally changed is that we went from a world with a small number of interactors — primarily computer scientists writing specific programs — to all 7 billion people in the world clicking, interacting, and initiating traffic to an infrastructure that was grossly, grossly under-provisioned for it. And it's not just the computing side and all the TCP/IP connections you have to finish; it's every network, every storage system, every data center — all the things we had to put together. And that kicked off 20 years of technology innovation: 20 years of new CPU designs, new storage, new networking, new network processors — all of these things.
All the storage systems, all the software that was put on top of them — VMs and LDoms, data lakes, all of those things — all of it got kicked off because of that changing model. And here we are again. People say, well, why is this going to be so fundamental? Why do you need to change so many things? Well, here's what's going to happen — we already see it today. For the next 20 years, the main interactors with systems won't be human at all. It's going to be every car, every cell phone, every doorknob, every traffic light — every device out there is going to be smart. And every smart device is going to send something to some server that then has to respond back with some intelligence. That's just how the world is. And it doesn't take a lot for us to imagine that, because you already see it today. Your phone is producing all sorts of new things to be calculated wherever you move. As soon as you move, it's trying to do other things — you didn't push any button; it just started. So in the world of AI, what you're seeing is that the large majority of interactors will become non-human. And worse yet, consider each transaction. When we humans were doing things: I want to buy something, click one button, I buy, you sell, done. In the world of AI, there's a camera. A camera in a car detects something. It now has to iterate many, many times to figure out: is this a cat? Is it a dog? Is it a human? The way AI works is that through a lot of different transactions, we hone in on what that particular item we saw is. So now you have the multiplying effect of many, many devices. It's not 7 billion people in the world interacting; it's probably 700 billion devices talking to this computing ecosystem. And each of those is no longer a singular transaction each time; each is a recurring, converging transaction — machine learning that allows you to figure out what it is.
And so now you've got this future that is explosive, and the world, again, is grossly, grossly under-provisioned to handle it. So now you tell me — what do we need to do? It's going to kick off 20 years of work that we've got to go through. [00:14:27] Speaker B: That's so interesting, Rodrigo. Let us now shift gears, though. I would love to hear a little bit more about SambaNova. What really impressed me is that SambaNova began as a research project at Stanford University. Did you imagine at that time that the company would become one of the best-funded AI startups? Bring us back to those times. What was your mindset in the lab? How was working with your co-founders? What was your vision at that time, and how has it evolved? [00:14:58] Speaker C: Yeah, I've got two of the most brilliant minds in the world as co-founders. Kunle Olukotun has been a professor of computer science and electrical engineering at Stanford for 30 years. And Chris Ré is just one of the most brilliant minds in computer science — a professor at Stanford, a genius, focused on databases, machine learning, and just a brilliant mind when it comes to that type of technology. When we got together, we started thinking: look, if you identify the shift that I just described, there is no way you solve this problem by band-aiding old architectures forward. There's just no way to do it. It's such a fundamental shift in the way the world is going to operate. It's really difficult to take hardware architectures that have been around for 25-30 years — and think about it, almost every single hardware architecture we're running today has been around for 25-30 years — and evolve them to fit such a discontinuous shift in the way use cases are going to happen. So it started with an idea that says: look, we've got to get back to first principles. We've got to get back to: what are we trying to do?
And what's different between 25 years ago and now is that 25 years ago, when I was building processors at HP and at Sun, we focused mostly on the way instructions had to operate. A computer scientist wants to do an add, one plus one — we focused on the addition. In the world we have today, data is key. It's not the operations that are key; data is key. What you have to do is turn the whole thing on its head and say: no, instead of moving data around and copying it everywhere to optimize for where the best operator is, what you really need to do is focus on the data — where is the data — flow the data to where it needs to be, and let the operation wrap itself around it. And that's what we did at SambaNova. Based on the research out of Stanford, we said: look, if we create a dataflow machine optimized for data efficiency — optimized for the least amount of data movement, making sure the data shows up when and where it needs to be, no more, no less — you're going to get a significant performance advantage, a significant efficiency advantage, a significant flexibility advantage, which you cannot get today, because the legacy architectures force you to chop up the jobs exactly the way the architecture requires. You copy the data into all the different places you need it, then you've got to reassemble and do all the things you have to do today. And we've accepted as a community that, hey, that's just how it is: the data center has got to explode, and you've got to buy more storage, you've got to buy more memory — I've just got to buy all this stuff. And then you've got global warming because of data centers and all the things we've got out there. It does not have to be that way. It does not have to be.
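[Editor's note: the contrast Rodrigo draws — optimizing where the operations run versus letting operations wrap around the data as it flows — can be illustrated with a toy Python sketch. This is purely illustrative; the function names and structure are the editor's assumptions, not SambaNova's architecture.]

```python
# Toy contrast: an "operation-centric" pipeline materializes a full
# intermediate copy of the data at every stage, while a "dataflow" style
# streams each element through the fused chain of operations, so the
# data moves once and no intermediate buffers are created.

def op_centric(data, ops):
    """Run each op over the whole dataset, copying a new buffer per stage."""
    for op in ops:
        data = [op(x) for x in data]  # one full pass + one buffer per op
    return data

def dataflow(data, ops):
    """Stream each element through all ops; the ops wrap around the data."""
    out = []
    for x in data:
        for op in ops:
            x = op(x)  # data stays put; operations are applied in sequence
        out.append(x)
    return out

ops = [lambda x: x * 2, lambda x: x + 1]
# Both styles compute the same result; only the data movement differs.
assert op_centric([1, 2, 3], ops) == dataflow([1, 2, 3], ops) == [3, 5, 7]
```

The results are identical; the point is that the second style touches each datum once, which is the efficiency argument Rodrigo makes for a dataflow machine.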
That's where we started, really — from some amazing research and work that was done by Chris Ré and Kunle, and then bringing in a team that had been doing a lot of this work together for enterprise, for cloud environments, for production environments, in order to commercialize it. That really was the marriage that allowed us to make this much progress this quickly and tackle such a big problem — which otherwise you can't really nibble at, right? You can't come in and just take a small piece. You've got to look at the whole thing and say: we've got to revamp this whole thing, otherwise the entire infrastructure is going to get crushed under the weight of the 700 billion, or whatever the number is, of devices that want to talk to it. [00:18:27] Speaker B: You decided to take a software-first approach at SambaNova, initially designing a system optimized for managing data and then creating the hardware needed to optimally run that application. Were there any opponents of that approach when you started? And how much of SambaNova's success to date would you attribute to that approach? [00:18:48] Speaker C: You know, this is coming from somebody who studied hardware design at Stanford — we built chips from the beginning. That's what we did in the nineties: we built the chips, and software would then come and be written on top of them. It's the way a lot of hardware folks in the industry sometimes think: I'll just build a chip and they will come, right? Build it and they will come. Here's the problem — and this is something that was very visible to me when I was part of Sun, and Oracle acquired Sun. One of the most amazing things that came out of that transition involved all the folks who were building these very complex systems inside Sun. As you recall, Sun in the Internet era was "the dot in dot-com," right? When you built a website, what you wanted was a Sun server. That's what you wanted in the nineties.
And coming from a world that was so hardware-centric, people would just build the hardware, build a chip, and then figure out what to do with it. There's still a culture in the world where people just build hardware and then figure out what to do with it. Well, that's not okay anymore. Here's why: you have to understand what the software is trying to do, because the ecosystem of software is already predefined. The ecosystem of software has requirements — you have data lakes, you have APIs that talk to certain systems, and you're not going to change every application that's out there. So what you need to do is think about: what are the workflows that exist? Where do you fit into those workflows? What are the APIs you have to adhere to? Because you're not going to change the world. Then come in and build a system that allows customers to quickly take advantage of the technology. It's a different way of thinking about it. But if you take that approach — saying, I'm not only going to address somebody who's willing to throw all their old things away and start fresh, but instead recognize that people have workflows that exist, people have businesses they're running, and we're going to come in, take a chunk of it, revamp that piece, and give them AI capabilities they could never imagine before, but do it in a seamless way — if you start with that, the answer you come up with is a very different answer, I assure you. It's a very different answer. We started from an idea: the data has to flow this way, this is how it's structured, it has to use open-source interfaces like PyTorch and TensorFlow. We're not going to go and do our own custom whatever — many companies go and force people to learn a new language. No, none of that. Use open standards; they already exist. The world has normalized around those learning systems.
Now from that, derive what has to be. If you go and look at the data, look at the applications and what they want, you eventually arrive at a conclusion about what's ultimately going to make a difference. And that's ultimately what the difference is — there's no doubt about it. Most people say, hey, we're software-first. No — really, be honest about it. What was the first thing that company did? Did you put a chip down first, or did you put software down first? Most companies will say, oh yeah, I'm software-first, whatever. Okay — no, really, look back at it. Did you tape out the chip first, or did you actually write the software stack? You can't change history. You can't change history, right? And that's what we did at SambaNova: we really started with the software first. Everything was modeled first — the software stack was modeled first on a substrate that was not the chip. And eventually it became clear: this is what the software is trying to do, this is what machine learning really wants, this is what these types of kernels are trying to do. Now let's create a platform from the ground up, not anchored by legacy cores and legacy ISAs and legacy routines that you have to support. We're not anchored by any of those; let's do what the software wants. And out comes what we call a reconfigurable dataflow unit — something that adjusts itself to the software stack, to the application, exactly the way they want it, when they want it. And you see it with the customers. We have the US government standing up and saying, look, here are two things that are really important. They bought it and installed it within 45 minutes — within 45 minutes of plugging in power and plugging in the Ethernet, they were training models. So that's the first thesis we had: use the existing workflow they have. Do not force a customer to throw everything away and start over. Use the workflow they have.
But then, within that, give them a significant advantage. And from day one, they started seeing the applications they were training run ten times faster. Ten times faster. So not only do you get a significant advantage in performance, you get a significant advantage in the use case; it also gives you the ability to take that and do new things — which we can talk about as we get into Dataflow-as-a-Service and why we ended up doing that. [00:23:42] Speaker B: That's great. I think this is a great segue to the business model. Let's talk a little bit about that. We continue to see that as-a-service business models are becoming increasingly popular in the technology world, especially in software. At SambaNova, too, you offer Dataflow-as-a-Service, which is a subscription model where customers get a complete AI solution, including next-generation hardware, software and pre-trained models. How important do you think this type of business model is to drive adoption at scale? [00:24:16] Speaker C: Yeah. So this is a new paradigm that exists only in the world of AI; we didn't have it before. Think about what you used to have. You used to have hardware — hardware with the operators: one plus one always equals two. You buy the hardware, then you have the application: here's my CRM, my customer relationship management software; here's my supply chain management software — whatever the application is that I want to use, I want it to interface with the hardware. My data flows to the software, the application runs on the hardware, one plus one equals two, and you get the same result. It's just a matter of who runs faster. Okay? In the world of AI, there's a new layer called models. Models — which is why we ended up thinking about this differently from everybody else. There's the hardware you need — no doubt about it — and there are applications: I'm doing supply chain management.
I want to predict yields. I'm doing customer relationship management; I want to predict what the customer might want next; I want to recommend the right set of things. All of those things get baked into the application. But the problem today is that there's a layer called models, and depending on the quality of the model and how well you train it, one plus one doesn't always equal two. A good model might give you 1.99; a bad model could give you 1.4. Here's the problem: some of the largest companies have invested in hundreds and hundreds of data scientists to get good at models — GPT this, GPT that. You look at these models, and the accuracy has made tremendous leaps and bounds. If you're a large company that can afford 500, 600, maybe thousands of data scientists and thousands of machine learning experts, you can do it yourself. But think of the thousands of enterprises today that are trying to hire data scientists: one, they're hard to find; two, they're really expensive; and three, they can't keep them. So here's the challenge: you're a Fortune 1000 company, you've done a lot of research, you need to transition to a post-AI world — and you can't find the data scientists. And so what we come in for is: let us do it for you. Let us cover that gap, which is the model, and train it to a certain level of result — because without it, you can't actually operate AI. What we're trying to do here is serve all these companies that don't have the expertise and, frankly, don't want it. Their business is something else: they're a bank, they're a retailer, whatever — their business is not AI. Why invest so much energy trying to keep track of: is it ResNet, is it RetinaNet, is it U-Net, is it GPT-3? It's really hard for these businesses to keep track of all the changes that are happening. And it happens to be something that we at SambaNova do every single day.
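[Editor's note: Rodrigo's "models layer" point — that unlike classic software, one plus one doesn't always equal two once a learned model sits in the stack — can be mirrored in a toy sketch. The function names and accuracy figures below are illustrative assumptions chosen to match his 1.99-versus-1.4 example.]

```python
# In classic software the operator layer is exact and deterministic;
# in an AI stack there is an extra layer -- the model -- whose output
# quality depends on how well it was trained.

def exact_add(a, b):
    """Classic operator: same inputs, same answer, on any hardware."""
    return a + b

def model_add(a, b, accuracy):
    """Stand-in for a learned model: its answer is only as good as its training."""
    return (a + b) * accuracy

assert exact_add(1, 1) == 2
assert model_add(1, 1, accuracy=0.995) == 1.99  # well-trained model: near the truth
assert model_add(1, 1, accuracy=0.70) == 1.4    # poorly trained model: far off
```

The business point follows directly: closing the gap between 1.4 and 1.99 is exactly the model-training expertise most enterprises lack.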
We do it every day, all the time, for all of our customers. That's what Dataflow-as-a-Service is: giving enterprises a choice. Do you want to do it yourself, or do you want SambaNova to bring the expertise in through a subscription? Those are the choices. Some people have their own models and the talent to do it themselves; we have an offering for that called DataScale. But many people are now convinced: you know what, you're right — what I need is a prediction. I don't need to understand GPT-3 or GPT-2. I need a prediction; I need that service; I need that workflow. And for that, there's Dataflow-as-a-Service. [00:27:45] Speaker B: It's a brilliant solution. [00:27:46] Speaker C: Indeed. [00:27:46] Speaker B: AI is accelerating, and this is definitely a disruptive technology that poses a real threat to incumbents. Thinking about incumbent technologies, what do those incumbent players need to do to adapt, to avoid the risk of being the next Kodaks, BlackBerrys, Nokias — whoever and whatever? What do they need to do? [00:28:10] Speaker C: Well, I think you have to start by figuring out where it's going to come in. Just like the Internet — it started with web pages, or maybe even email. But then, when the web came in, everybody started putting up a web page: here's my whatever.com — you've got to go buy your .com, right? So people started with web pages, and for some people, that was the end of it. But think of the sophistication today of what you can do with services like DoorDash and Uber, or all the services that get streamed through YouTube and Netflix, and all the different things we're able to do today — well, well beyond just putting some advertising on a front page.
And so this is the kind of transition that companies have to make with AI. Oh, is it just for recognizing some photos? You can say, hey, I can log into my phone — it recognizes my face as a security feature. Great. Well, that's level one, out of maybe hundreds of different things you can do with AI. For a lot of businesses, what you really have to do is come in and understand: what is my strategy? Where do I start? How do I progress crisply? Because here's the other thing. The web page, when the Internet started, was easy, because you could say: oh, hey, this store I used to go to has a web page now. It was visible. With AI, what's happening is that most of the improvement, most of the investment, most of the advancement — the competitive advantage — is happening behind the scenes. You and I see it, right? You see the home assistants, the Alexas of the world — they're getting smarter every day. They say a little something new. Your Tesla, your smart car — they're getting smarter every day. All of these services and tools behind the scenes are getting better. And so it's not as visible to the average company how much progress, how much investment, how far ahead the competition is getting, because you don't get to see it. It's not as visible as it used to be. It used to be: oh, here's the web page — I can buy online now, I can click, I can do these things online. That was very visible, and companies could see what each other was doing. With AI, it's not. It's silent. And if I'm way ahead, why would I tell you what I'm doing? A lot of these businesses are in very competitive environments; you're all trying to find an advantage.
And so if you have something good, some way to capture those eyeballs, capture those dollars, capture that attention, why would you tell anybody else, right? And so that's what's happening right now in the AI world. Why are people investing so much in AI? It's because of that. Because they see a win, they see an advantage, and they're actually deploying it very aggressively. And so for the folks who haven't done that and think we're in the early stages: well, that was last year and the year before, right? Because we're actually now in full production on a lot of these different things with customers. And so people need to pay attention and figure out: what is your strategy? Where do you start? If you don't know, we'll help you. We'll help you figure out where to start and get you on a plan so that every single quarter you're making improvements on AI and you're transitioning your business, because it's going to take you some amount of time, and it's not going to happen overnight. Right? And you need to start.

[00:31:48] Speaker B: Fantastic. In closing, Rodrigo, the AI market is expected to be huge. As you said, it's going to touch every piece of the industrial landscape, and the non-industrial landscape, in this big market. What are your aspirations for SambaNova?

[00:32:03] Speaker C: Well, I mean, we're here again to be a player in helping companies transform their businesses. That's really very clear. Our lane really is around enterprise. We focus on the Global 2000, the large companies. The AI market will segment just like the internet did. There will be folks that focus on your cell phone devices. There will be people who focus on IoT. And we are very much in the cloud and the enterprise environment. What we're focusing on are business solutions for large businesses in production. That's what we think about. You think about your mission-critical workflows that require security, that require high availability, all these services that allow businesses to run at scale. That's what we focus on, and that's where our expertise has been for the last 20 years, as well as what SambaNova itself is able to do in terms of the products we're building for this segment. And so for us, it's really about helping enterprises transition, helping companies figure out how to go from pre-AI to post-AI and accelerate into it, and building services that give you the advantages and benefits of AI. We do think that most companies will need expert partners to do this. It's not easy. You look at a model like GPT, and you've got to aggregate thousands of GPUs and hire hundreds of data scientists. Then after you do all that, you've got to spend years building up the expertise to train these things correctly. And then by the time you do it, the next model has come in, and now you've got to go and learn that one. Models are changing every month, every quarter, just to keep up with all of that. Why do it yourself when the experts can come in and help you? Then you can shift your attention to what most people are talking about, which is data-centric AI: my secret sauce, my information, my data. Let me cultivate and curate my data, because I know my customers and my services better than anybody else. Let's do that, and let me partner with somebody else to help with the model and compute side. And that's really what SambaNova aspires to be: that trusted advisor that helps companies transform their businesses from pre-AI to post-AI and really use their data as the secret sauce that gives them a competitive advantage in the market.

[00:34:28] Speaker B: That's fantastic, Rodrigo, and I think that was a great way to conclude our podcast. It sounds like you and SambaNova have been on a fantastic journey, but the next chapter sounds even more exciting. So best of luck to you and SambaNova.
[00:34:44] Speaker C: Thank you so much, and thanks for having us.

[00:34:46] Speaker B: Thanks to you.

[00:34:47] Speaker C: Take care.

[00:34:54] Speaker A: Thanks for listening to Fernweh Insights. Please visit fernweh.com for more podcasts, publications, and events on developments shaping the industrial and industrial tech sector.
