Chetan Dube: The Spring of AI - ChatGPT

Ayna Insights

February 15, 2023 | 00:35:46

Show Notes

Amelia.AI Founder and CEO Chetan Dube joins us again on the podcast, this time for a discussion of the meteoric rise of the generative AI tool ChatGPT. Mr. Dube explains what ChatGPT can do well and the scenarios in which you would not want to use it. He explains that even in the last 30 days this technology has improved dramatically, with its open and free availability allowing its data and user base to grow at lightning speed.

Chetan Dube is the founder of Amelia, a New York City-based AI company whose deterministic conversational AI technology provides customer service for about 500 corporate clients, including Telefonica, BNP Paribas, and Nordic Bank.

 


Ayna Insights is brought to you by Ayna, the premier advisory firm in the industrial technology space that provides transformation and consulting services to its clients. The host of this episode, Nidhi Arora, is Vice President at Ayna.

For More Information

Chetan Dube LinkedIn

OpenAI - ChatGPT

Amelia.AI

Ayna Website

Fernweh Group Website


Episode Transcript

[00:00:03] Speaker A: Welcome to Ayna Insights, where prominent leaders and influencers shaping the industrial and industrial technology sector discuss topics that are critical for executives, boards and investors. Ayna Insights is brought to you by Ayna.AI, a firm focused on working with industrial companies to make them unrivaled, segment-of-one leaders. To learn more about Ayna.AI, please visit our website at www.ayna.ai.

[00:00:40] Speaker B: Hi everyone. This is Nidhi Arora, Vice President at Ayna.AI. Welcome to another episode of the Ayna Insights podcast. Our guest today is Chetan Dube, founder of the New York City-based artificial intelligence company Amelia. Indian-born Dube moved to the United States after college to become an applied mathematics professor at New York University in 1992. He left academia and started Amelia as an information technology firm, IPsoft, in 1998, growing it to about $100 million in revenue by 2020. He is a widely recognized speaker on autonomics, cognitive computing, and the future impact of a digital workforce. In 2018, he was accepted into the Forbes Technology Council, an invitation-only community for world-class CIOs, CTOs and technology executives. Chetan, welcome again to our podcast. You've been with us before, last year, when we talked in depth about the rise of artificial intelligence. And it's very exciting that you're back here with us, especially now in 2023, which is expected to be a big year for generative AI. Right?

[00:01:55] Speaker C: Very glad to be here, Nidhi. Always good to spend time with you.

[00:01:59] Speaker B: Perfect. So let's dive right into it, Chetan. Before we get into the specifics of terms like generative AI, conversational AI, chatbots, ChatGPT and whatnot, it would be great to step back a little first and give a lay of the land for the benefit of our listeners. When we talk about technologies like conversational AI or generative AI, how do we think about them? In the world of AI, what do they mean? How do they work? And there are products out there that people are aware of, like ChatGPT and Amelia. How do we think about them?

[00:02:36] Speaker C: Well, foundationally, it's a very interesting question. What is artificial intelligence? It is defined as the ability to mimic human intelligence. That's artificial intelligence. Now, what do humans do? Humans either do transactional tasks or they do creative tasks. Transactional tasks are mimicked by conversational intelligence technologies like Amelia, and creative tasks are mimicked best by generative AI technologies like, currently, ChatGPT. So that would be a broad distinction between the two, for us to understand what kind of technologies are able to address what kind of tasks.

[00:03:31] Speaker B: Got it. So let's talk about ChatGPT, the OpenAI product, which has taken the Internet by storm and is catching all the limelight these days. What is all the buzz about? What is so unique about ChatGPT versus other chatbots, or versus, let's say, other generative AI products in the market?

[00:03:52] Speaker C: Nidhi, it is better. It's a better generative AI product, if not the best generative AI product, in the marketplace. That's why it is taking the market by storm.
And that's why companies like Simplified AI and others are trying to play catch-up with it, because if you look at the results, it has clearly established itself as the market leader almost overnight. So how is it better, that is the important thing, because generative AI has started to come of age and a lot of these technologies are starting to shape up. You have to drill down underneath ChatGPT to really understand what their models are based on. GPT is, of course, the Generative Pre-trained Transformer family of models. ChatGPT is based on the GPT-3.5 language model, but it uses InstructGPT, which was a precursor to it. Altman's team had been working on InstructGPT, which had elaborate language models that understood quite well the intent of what was being asked. Then they used the GPT-3.5 models to start furnishing creative answers to the intents that were understood. And thereafter, on the other side, came in reinforcement learning from human feedback, RLHF as it is called, and then, of course, supervised learning models to make sure you can continually improve these. That's what has led to the evolution. One has to really understand that they have taken the science of generative technologies to a different level, by combining these macro modules of InstructGPT and GPT-3.5 with large datasets, and combining the reinforcement learning and supervised learning mechanisms, to produce something which is, as I said, in a nutshell, better and generative.

[00:05:56] Speaker B: Right. And Chetan, with that background, knowing that it's a better product out there, OpenAI made this interesting call of making it available openly and free. Now, it's a common practice these days. We know it's happening in the quantum computing world, where so many hardware solutions are available for free on the cloud for people to experiment with. It's great from the point of view of democratizing the offering and getting feedback, but there are risks involved as well. So what are your thoughts on this decision by OpenAI to make it free and open for users to test?

[00:06:39] Speaker C: Look, the reinforcement learning, and even some aspects of the supervised learning, benefit tremendously. The data itself tells you a story: how many applications register 1 million users within the first month of their unveiling?

[00:06:53] Speaker B: Yeah, not many.

[00:06:55] Speaker C: So now think about that 1 million compounding into a couple of million. With every one of these people using it, you get embedded intelligence that you can glean from what they're using, what they're finding useful and what they're not finding useful, and you can continually evolve the model. You have GPT-3.5, and I think you're soon going to see GPT-4 and NeoGPT, and you will see the models evolve rapidly. So it becomes a big self-fulfilling prophecy of involving open source communities and actually tapping into them; no enterprise segment can be as vast as getting a million users in a month.
And that allows organizations like OpenAI to tap into, and mine, the embedded intelligence coming in from the different responders, and to continually improve their product based on that. So it's a very self-fulfilling prophecy that allows the technology to grow rapidly. The downside is that it starts off with a not-for-profit philosophy, and then, of course, we all know that this is OpenAI; it's soon going to graduate into very much a for-profit partner.

[00:08:07] Speaker B: Right.

[00:08:08] Speaker C: You can clearly see that with the interest that Microsoft and others have.

[00:08:12] Speaker B: You talked about hitting more than 1 million users already, in the span of one month. And there's a lot of talk about the various applications out there, especially in original content creation: blogs, music, poetry, and even writing computer code and so on. So what are some of the strengths and shortcomings that have emerged in these early days of testing and experimentation, as users play with these different use cases?

[00:08:42] Speaker C: Yeah, that's a great question. The best part about ChatGPT, and we have benefited tremendously from it: we are in the domain of conversational intelligence and enterprise-grade transaction handling, and that has benefited tremendously because ChatGPT has brought an awareness to the market that AI is real, and so it has dispelled any of those AI winters. We are now in the spring of AI, and the way it is heating up, it is about to get into the summer of AI. For that we are grateful; for a company that does not believe in too much marketing, it really does help to have a product out there that has raised awareness of what AI can actually do. The father of artificial intelligence, John McCarthy, said it better than I could articulate: human-level AI turned out to be harder than one could anticipate. It is really hard to emulate human levels of intelligence. Now, one has to look underneath the covers of ChatGPT to answer the question: why is that, what is it good at, and what is it not good at? So what does ChatGPT do? I'm a researcher, but I'm not going to bore you with too much of the math behind it. It uses these deep convolutional generative adversarial networks. GANs are networks that have been around since the time of Radford; I think it was 2015 that he published a paper on these convolutional GANs, generative adversarial networks. So what are these generative adversarial networks? They take noise, all fake data, and pump it into a generator. The generator produces that fake output, and then there is a discriminator that comes in on the other side, which takes real data, and in this case we're talking about anywhere between 52 billion and 170 billion different data records. You compare the output coming from the generator, which is working from the fake data, with the real data the discriminator sees. You find the variance between the two, and you backpropagate, through reinforcement learning, through supervised learning, iterating back and forth and back and forth until you start to minimize the variance between the fake data and the real data.
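
For readers who want to see the generator-versus-discriminator loop described above in code, here is a minimal sketch, assuming PyTorch. It illustrates the adversarial training idea only; it is not OpenAI's implementation, and the tiny networks and the one-dimensional stand-in for "real data" are purely illustrative.

# A toy version of the loop described above: a generator learns to turn random
# noise ("fake data") into samples that a discriminator can no longer tell apart
# from real data (here, a simple 1-D Gaussian stands in for the real dataset).
import torch
import torch.nn as nn

def real_data(n):
    return torch.randn(n, 1) * 0.5 + 2.0   # stand-in for "real data"

def noise(n):
    return torch.randn(n, 8)                # random noise fed to the generator

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator to separate real samples from generated ones.
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train the generator to fool the discriminator, shrinking the gap
    # between generated and real samples a little more on every iteration.
    loss_g = bce(D(G(noise(64))), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
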
Now, I hope that's not too much jargon, because it's very important to understand this in order to understand what ChatGPT can do very, very well and what it cannot do well. The classic example is that it has been likened to producing counterfeit currency. The person producing fake currency produces these different iterations of dollars, and the police compare them with real dollars. You find a variance, and the counterfeiter takes that variance into a feedback loop and continually iterates so that the fake one looks better and better, closer and closer to the real dollar. So that is what a typical GAN is. That is the underlying mechanism, at the risk of oversimplification and hopefully without too much of the technology behind it, but it helps you understand what it can do well and what it cannot do well. So what does that mean? If you wanted to be very brazen about it, you would say, and you will forgive my directness, it's a bullshit generator, because you are taking fake data, iterating, comparing it with real data and coming closer and closer to it. But the interesting part is that it is the best at getting very close to the real data, because you can iterate over these models so much that you actually get very close to real data. It's not lying, by the way; it is generating all this output from noise. So what is it very good at? It is very good at creative work. It is very good at essay writing, it is very good at legal briefs, it is very good at any question you could ask. You could ask it to write anything about the Battle of Gettysburg and it will come up with a very compelling answer. You could ask it a riddle and it will come up with propositions, trying to minimize the variance and iterating over that rapidly with a very elaborate language model, and it comes up with a very good, realistic-sounding answer. Now, this is the important word: it brushes on reality tangentially. The object is to generate plausible-sounding, almost real-sounding answers to the questions you might have. This works fantastically well for creative work. It's brilliant for creative. I've known of cases where people are writing legal briefs. If I wanted to write a first notification of loss, a claims brief, I would actually start by saying, hey, generate me this thing, and with those 170 billion records it will come up with a very plausible-sounding brief. Now my role becomes editing the brief, replacing the entities that are not factual, because obviously it has taken them from some historical recall of data from the past, so that it more closely reflects the case at hand. These are fantastic examples, and ChatGPT is by far currently the best generative AI in the marketplace. It generates very plausible, realistic-sounding and creative pieces you couldn't think of. And as I said, what do humans do? They do creative tasks, and then they do transactional tasks. That's what we do in our life.
What you're doing right now, one would argue, may be considered a creative task, and I think I could possibly be replaced with a ChatGPT agent that would answer these questions quite well, perhaps better than I. Those are the great pluses of ChatGPT, and at this point you can almost conclusively say that, with more data records coming in, there could be a ChatGPT that starts to answer these questions. The challenge becomes: what do enterprises require? So that was one segment, what people create. The other is transactions. Enterprises require transactions. So what are transactions? Take BNP and their securities trades. Their principal customers are Total and Orange, and the average securities trade that we do for them is over €10 million. Now, if you're BNP and you're doing a securities trade of €11.3 million with, say, Total, you don't want it done for 11.2 or 11.4 or 11, or at roughly that time, or close to that time. You want it done deterministically. ChatGPT is probabilistic, with exceptional recall. Amelia is a deterministic enterprise transactions agent. ChatGPT is atomic, one-shot, with short-term memory; you get one-shot answers to some of your questions. Enterprises require multi-step business process transactions. You're not just calling in with one shot. You're saying: is this securities trade wanted at this time, and if the total amount in the holdings in this money market account is this much, then do this trade at that time; otherwise, I'd like to transfer something in from that other account of mine to be able to do this trade here. Multi-step. You're going down that tree, which is a business process. Amelia, for instance, knows that. These are the distinctions between them. And then, of course, you do process optimization in Amelia, not in ChatGPT. That's the distinction. If you want cancer care, patient care, Memorial Sloan Kettering is leveraging Amelia. When you have a patient, and a physician and a nurse who are actually thinking about that patient and what to administer, I want 162 milligrams of this to be administered to the patient for the ketones to be at this level. And you're not just saying one atomic thing; you are asking, how is the patient feeling, what is his ketone level, what is his PSA activity, and based on that, I want to administer 162 milligrams. You want precision and determinism. Determinism is the hallmark of true conversational AI like Amelia; probabilistic and creative is where ChatGPT clearly shines. So, sorry for the elaborate nature of the answer, but on the question of limitations: if you want the square root of 50,327,000, you should not ask ChatGPT, because you'll get an approximate answer, not a precise one. The other limitation is that these datasets are so elaborate today that they take, I don't know, eight and a half, nine months at times to crunch, even though they've got the entire Azure farm crunching the numbers for them. So it takes a long time for these datasets to be crunched.
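
As an illustration of the multi-step, deterministic flow Mr. Dube contrasts with one-shot generation, here is a minimal sketch in Python of the securities-trade example. It is purely hypothetical and is not Amelia's implementation; the account names, balances and rules are invented for the example.

# A toy, deterministic multi-step transaction flow in the spirit of the
# securities-trade example: every step is an explicit, checkable rule, so the
# same inputs always produce exactly the same outcome (amount, account, time).
from dataclasses import dataclass

@dataclass
class TradeRequest:
    amount_eur: float        # e.g. 11_300_000
    execute_at: str          # e.g. "2023-02-15T14:30:00Z"
    funding_account: str
    backup_account: str

def available_balance(account):
    # Stand-in for a core-banking lookup; a real system would query the ledger.
    return {"money-market": 9_000_000.0, "backup": 20_000_000.0}.get(account, 0.0)

def execute_trade(req):
    steps = []
    # Step 1: validate the request itself.
    if req.amount_eur <= 0:
        return {"status": "rejected", "reason": "invalid amount", "steps": steps}
    steps.append("validated request")

    # Step 2: check funding; if short, transfer the exact shortfall from backup.
    shortfall = req.amount_eur - available_balance(req.funding_account)
    if shortfall > 0:
        if available_balance(req.backup_account) < shortfall:
            return {"status": "rejected", "reason": "insufficient funds", "steps": steps}
        steps.append(f"transferred {shortfall:,.2f} EUR from {req.backup_account}")

    # Step 3: execute for exactly the requested amount at exactly the requested time.
    steps.append(f"booked trade of {req.amount_eur:,.2f} EUR at {req.execute_at}")
    return {"status": "executed", "steps": steps}

print(execute_trade(TradeRequest(11_300_000, "2023-02-15T14:30:00Z",
                                 "money-market", "backup")))
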
So when you ask about certain topics, you will find that the information coming back is a little outdated in places, and that's because of the time it has taken. Now, they have come up with a very good reinforcement-based way of layering on current datasets, and that will improve with time, but currently the staleness of the data is a limitation. Sorry for the very elaborate answer to the question. Hopefully your viewers find it of interest, because there is a lot of noise about what ChatGPT can do well. What it does best is creative work. What it cannot do well is enterprise-grade transactional tasks.

[00:19:42] Speaker B: Right. No, it's very helpful, Chetan, to understand this in detail, and I'm sure all our listeners will find it very helpful. Chetan, you did mention the Microsoft deal, and you've talked about how ChatGPT has made awareness of AI even more profound now, right? How do you think that Microsoft deal, the likely $10 billion investment, will impact other AI valuations?

[00:20:10] Speaker C: So one, you can see the Microsoft motivation. For the longest time, from the time of Ballmer and before, they were trying to see if they could get into the browser market, which Google has dominated, and Google still continues to dominate the classifieds and ads revenue, which is the majority of its revenue stream. Microsoft has continuously eyed that, and it doesn't take my pedigree, or the geniuses at Fernweh, to figure this out. The only way that Bing or Edge, whichever variation you have, can change the game on Google and siphon off that biggest stream of classifieds and ads revenue is if you come up with a superior search engine. Now, search, is it deterministic? No, search is probabilistic. Search is all generative: search is all "Nidhi wants to know about this," and "Fernweh has excelled in industrial and other things," and "a bunch of McKinsey brainiacs formed this company." These are all qualitative things, and qualitative is where a ChatGPT kind of product, ChatGPT in particular, is exceptional, whereas quantitative is another story. Now think about this: I was talking to the CEO of one of the largest SIs just two days ago, and he said to me, I've stopped using Google for searches. When I want to find something out, let's say I want to write a Fibonacci sequence, and I type "write me a Fibonacci sequence" into Google, guess what? I'll get ten links to different pages that will explain the Fibonacci sequence to me, one, two, three, and I'll have to parse them down to get to the Fibonacci sequence on the other side. If I say "write me a Fibonacci sequence" to ChatGPT, I get the final product itself: I get the code for generating the Fibonacci sequence. It's a very powerful differentiator in what you get out of a ChatGPT interface. Microsoft, having obviously taken a big interest of 49% in this, it's only a matter of time before they start to leverage this technology in Bing or Edge, and they start to see more of the classifieds and ads revenue shift as more search shifts towards a generative interface.
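
To make the search comparison concrete, a prompt like "write me a Fibonacci sequence" typically comes back as a finished snippet along these lines rather than a page of links. The function below is a generic illustration, not actual ChatGPT output.

def fibonacci(n):
    """Return the first n Fibonacci numbers."""
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])   # each term is the sum of the previous two
    return seq[:n]

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
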
Generative interfaces are going to be very powerful, because they will not just tell you the recipe for how the pie is to be baked, they'll bake the pie for you. So you can see why the investment. The other part of your question: what impact is it going to have on companies in the generative AI market? Absolutely. The comparables are having an inflection, a surge, right now, because of the realization in the market that AI is mainstream, because creative work is being done by ChatGPT quite well. And Altman has said that in a few years he could actually top a billion in revenue, even though it's a few tens of millions right now. So you can see the curve, and similarly you can see the impact it has on the valuation of companies like ours, where interest in the market is surging on transaction-grade, enterprise-grade, transactional AI.

[00:23:29] Speaker B: Right. And it's very interesting, Chetan, because that was going to be my next question: right now the revenue is probably a few million dollars, and they're expecting it to reach $1 billion in 2024. Right? What's interesting in this valuation, and some of the others in the generative AI world, is that the viability of that revenue model is yet to be proven.

[00:23:53] Speaker C: Right.

[00:23:54] Speaker B: We just have numbers out there for now. So from that perspective, what do you think about these valuations of generative AI products at the moment?

[00:24:04] Speaker C: I've not met Satya, but I know that he's very sharp. He would not be putting in $10 billion unless there were a clear path to revenue realization. Look, take Bing and Edge. The majority of the classifieds and ads revenue goes to Google; we just talked about it a moment ago. If you start to siphon off even a fraction of that revenue to Microsoft, you will make back that $10 billion investment, and the $1 billion of revenue in 2024 or whatever it is, in spades. That's very clear, and then some, because you have got superior search. Search is a prime example of where you would be able to use a technology like ChatGPT quite effectively, so it's absolutely a very prudent investment from Microsoft. The other part is that the Microsoft Office products, which continue to be a big staple of Microsoft, can all be made more intelligent. We all know the paperclip and Cortana; that was semi-functional, almost dysfunctional at times, but this is a very different world altogether, this is super functional. What do you use Word for? For writing. Imagine if you just had a helpful agent right there: I want to write the perfect employment offer or thank-you note, here you go; I want to write the perfect legal brief, here you go. The power of that technology coming in, what boost does it give Microsoft Office products? A tremendous boost. But we know Google's office suite also wants to come into that space, and this puts up a big barrier there. So we know that the investment Microsoft made in this is very prudent. Can it get there? Yes, of course it's going to get there, because look at the revenue footprint: a big part of the revenue footprint, in double-digit percentages, is Microsoft Office products, and those office products are under threat from Google's office suite and other things.
And that now gets reinforced by saying: if you use our product, you actually get to generate the content on the fly. PowerPoint: I'll be able to generate my PowerPoints and investor decks on the fly. Writing, as I said, a claim: I'll be able to generate that on the fly, with the generative AI that Microsoft has acquired now, or is about to acquire. And on top of all that, Sundar is another IIT guy, so you know he's not sitting on the sidelines. Why would he put Google on code red? Because he understands the material threat this poses to a search engine. You can count on the fact that there is a ChatGPT equivalent coming. Up until now it has been a bunch of smaller players with generative models that are not even close in comparison to what ChatGPT has, but very soon, I can tell you within this year itself, there is going to come out a Google counterpart that will say, no, no, we've got our own generative model. You can count on that.

[00:27:00] Speaker B: Right. Back to the use cases. We talked a lot about how it's mostly in the creative world, right? Let's stick to that. Even within that creative world, which sectors, industries or functions do you expect to get disrupted the most by this latest AI tool?

[00:27:24] Speaker C: I think the foundational premise is that human plus AI is greater than human. And I think what you will see in this case is that even if you are talking about reporters, if they want to write about why Apple stock has not had as meteoric a rise as it has had in the past, I would go ahead and just seed that into a ChatGPT interface, which can then write an article, and AI becomes my assistant that gives me a nice warm start. I can build on top of that, because it does go ahead and pull in data about all the reasons why Apple stock has not been able to surge as much as investors would have liked lately. So, as I mentioned: in reporting, in legal writing, in search, in retail, in all of these you are going to find that such an interface will be a very powerful mechanism for you to advance your wares, and all of these will benefit. And I think you will find the same even in some aspects of code development. Let me just touch upon code writing, and I can tell you this from personal experience as well: you can get snippets of code very quickly. So the building blocks for writing code have just changed. You're no longer laying bricks upon bricks upon bricks to write code. You're now using large prefabricated concrete slabs to build the building, and the tower comes up much sooner because you're bolting on these huge pieces of prefabricated concrete as opposed to laying brick by brick. None of the skyscrapers behind me, and I'm just about a block from the World Trade Center, the Freedom Tower, were built by laying bricks; they were built with prefab. So coding itself, application development itself, will get a big boost. Now you say, oh, I want this kind of thing for my time management, can you write me a driver for that? And you will get a block of code that tries to do roughly the same thing you wanted it to do.
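
A generated "time management driver" of the kind described here might look roughly like the sketch below: a prefabricated block that a developer would then tune. It is a hypothetical example; the names, durations and behaviour are placeholders, not actual generated output.

# A rough "time management driver": a prefabricated block you would then adjust.
import time
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    minutes: int

@dataclass
class TimeManager:
    tasks: list = field(default_factory=list)

    def add(self, name, minutes):
        self.tasks.append(Task(name, minutes))

    def run(self, seconds_per_minute=0.01):
        # Walk the task list in order, announcing each block of work.
        for task in self.tasks:
            print(f"Starting '{task.name}' ({task.minutes} min)")
            time.sleep(task.minutes * seconds_per_minute)  # shortened for the demo
            print(f"Finished '{task.name}'")

manager = TimeManager()
manager.add("Review claims brief", 25)
manager.add("Stand-up meeting", 15)
manager.run()
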
Then you go ahead and adjust the entities, the variables and the parameters to more accurately reflect what you wanted to accomplish. That's how you will be able to rapidly build big buildings, and rapidly build big applications, by using prefab modules that come out of generative frameworks like ChatGPT. Hopefully those are some examples of how profound the impact of a generative technology can be on the different things that we do.

[00:30:14] Speaker B: Right. And Chetan, lastly, let's use your experience, given that you are such a pioneer in this field. What do you think is the future of products like ChatGPT, or generative AI? And how do you expect some of the other tech giants to respond to this? Of course Microsoft is tied to it, but let's say Google, Apple and Meta; how do you expect these players to respond?

[00:30:43] Speaker C: It's coming. As I told you, it's no longer the winter of AI, it's the spring of AI, and it's about to become the summer of AI. We're going to see a hot race now. And that really benefits us also, because we are the little guys here who hold the number one position in transactional, enterprise conversational AI, and that is not my assertion; every analyst of distinction rates us number one in enterprise-grade conversational AI. But we benefit tremendously from having these big giants suddenly jump in, and from the realization in the market that AI is real. You could have all the edicts from the World Economic Forum, Nidhi, that say 52% of the transactions in the world are going to be done by a digital agent by 2025. I was with the chairwoman at the World Economic Forum just recently, and I was talking to her; it's only about 2.2% right now, is what I discovered. So you can have all these numbers from the World Economic Forum and all of Davos, which we are just wrapping up, and you can see the rapid ascent from 2.2 to 52 that has to happen between now and 2025. But all of that is just what gets said in Switzerland and ends up in a paper at the end. The market has undergone a step change in its thinking because ChatGPT has reached everyone; hitting the consumer market takes the level of awareness from kids trying to do their homework, to lawyers trying to write their briefs, to application programmers taking snippets of code to develop their major applications. Just in the last 30 days, we have seen awareness go right through the roof that AI is real, something that we've been advocating for about 20, 25 years now. So we're going to see a big, huge surge of investments, because everybody sees the returns you will get as more and more commerce shifts to being done by AI. If you talk to Google, Sundar will also tell you it's an AI company, an AI-based company. So you're going to see all of these players produce their generative products. Our focus is going to continue to be deterministic, and we're going to leverage this. Here's the interesting part: we can leverage ChatGPT-kind frameworks immensely, because when you're talking about a multi-step business process that you're optimizing, that's our forte. It's a transaction-grade, multi-step, deterministic business process that you're handling: ERP, CRM, customer care. It is not atomic.
You're not just asking one-shot questions; you want a multi-step process. I have this lost baggage claim, this was the flight, I was here, I don't know where my bag is, can you find it, and by the way, I want it sent to this apartment. All of those are multi-step processes, and you're engaged in process optimization to get better customer care. For the individual steps within that process, we benefit tremendously from larger language models, because we can take in the NeoGPTs and the GPT frameworks, and it benefits us immensely to use larger and larger models, with better reinforcement learning and human feedback loops, to get more precise at understanding the atomic steps within that process, so we can handle that baggage claim better for the customer. So we are finding a tremendous updraft in the enterprise-grade transactions that we handle, not only because of the awareness but also because of our ability to leverage the larger data models made possible by OpenAI, and we are thankful for that.

[00:34:42] Speaker B: Awesome. So, Chetan, thank you very much. It will be very interesting to watch how the summer of AI unfolds. Thank you very much for your time today and for sharing your viewpoints on ChatGPT, on conversational AI, and on Amelia. Thank you.

[00:34:59] Speaker C: Thank you very much, Nidhi. I hope it was informative for you and your viewers. Soon I'll be taken over by ChatGPT and generative networks, which will do a better job.

[00:35:10] Speaker B: Yeah, we'll see how that unfolds. Thank you very much.

[00:35:18] Speaker A: Thanks for listening to Ayna Insights. Please visit ayna.ai for more podcasts, publications and events on developments shaping the industrial and industrial technology sector.
