Episode Transcript
[00:00:03] Speaker A: Welcome to Ayna Insights, where prominent leaders and influencers shaping the industrial and industrial technology sector discuss topics that are critical for executives, boards, and investors. Ayna Insights is brought to you by Ayna.AI, a firm focused on working with industrial companies to make them unrivaled Segment-of-One leaders. To learn more about Ayna.AI, please visit our website at www.ayna.ai.
[00:00:40] Speaker B: Hello and welcome. Today we are joined by Gerald, who is a solutions manager and a technical fellow at Siemens, and a seasoned leader in driving digital transformations across global industrial operations.
Gerald has over 20 years at Siemens, and he has been at the forefront of applying AI, data, and digitization to unlock productivity, sustainability, and new value creation for industrial companies worldwide.
Gerald's career spans senior technical and leadership roles in which he has helped Siemens shape its digital industry strategy and bridge the gap between advanced technologies and real-world industrial adoption. His experience gives him a unique vantage point on how AI can scale in complex environments, what's working today, and where the next breakthroughs may come from. Gerald, we are very excited to have you here, and thanks a lot for joining us.
[00:01:31] Speaker C: Thanks for having me on your Ayna podcast, Vinit. I really appreciate that.
[00:01:37] Speaker B: Perfect, Gerald. We'll first start with your journey. You have spent your career at Siemens shaping their digital and AI-driven industry strategy. What first drew you to this space, and how has your focus evolved over time?
[00:01:52] Speaker C: It's always great to look back, because I have had quite a career. I have to tell you, my fascination started early, during my time at university, where I did my Master of Science in electrical engineering. I was fascinated by simulating electric circuits, combining what we today call the virtual world, on computers, with the physical world, instead of building test beds.
Once I went deeper into that, it was all about control system design and digital algorithms; we played around with fuzzy control and early neural networks, and did some research. Then, during my computer science PhD, we went into autonomic computing.
When you read that work now and compare it, it is pretty close to what we talk about today as agentic systems, and that was all prior to Siemens. That is where I caught fire, and I wanted to join a real key player. I got lucky: I could join Siemens at the headquarters in Germany, in a very strategic group of system software architects. They were deployed at global scale to help strategic projects with scalability, because Siemens knows that data, scalability, and algorithms are always the biggest problems. I learned a lot over those first deployments. To give you an idea: almost 20 years back, we got the task to create a service.
We didn't talk about cloud at that time. We were asked to replace the first-level support of an IT help desk, for typical computer errors, with a fully automatic, autonomous service.
And the stakeholders had heard about neural networks, and they came in and said: you need to do that with neural networks, that's the future.
We went in and said fantastic, and I tried to explain to them what neural networks are and what you need for them. Long story short, when we went through that process, we came up with a complex rule engine which did at least 80% of the job fully automated, and at that point in time it was the right solution.
With my background, what happened then is that I was appointed, at exactly the right point in time, as cloud computing was coming up, to a corporate cloud program manager role in order to prepare Siemens for the cloud computing trend. Just to date that a little: we were a launch partner with Microsoft when Azure launched, so we worked with Microsoft on pre-testing and pre-integration there. I mention that because when you see what is happening now with AI, we are in a similar situation. From then on, once we had understood that cloud computing would become a megatrend, I was asked to join the CTO office of Digital Industries, where our automation business really happens; it is our center of automation.
The mission was, first of all, to figure out what cloud computing would do and change: AI-driven workloads, data-driven workloads, but also what IT would do when we come down to the shop floor and to the OT environments, which we didn't understand yet. Lucky enough, I could lay out the foundation at that time for industrial edge computing, the platform and infrastructure we currently have in the marketplace, and we will maybe talk a little more about that later, to enable AI-driven workloads close to the process, leveraging the latest IT technologies.
From then on, I was always deployed in these strategic projects around AI and big data analytics. Meanwhile we had the IoT wave, and my current assignment is US-market differentiated technologies around AI, edge computing, AI MES, advanced scheduling, and digital twin virtual commissioning for US-market customers. So, combining the virtual and the real world at our customers.
[00:06:41] Speaker B: That's very impressive, Gerald. It's a pretty long tenure, with a lot of different aspects of automation and digitization that you have touched upon over your career. Very impressive. One of the next things that I would love to understand from you: Siemens has been a pioneer in digital industries.
And from your perspective, what are some of the most significant lessons learned in bringing AI and digitization into large-scale, complex operations within the industrial ecosystem?
[00:07:14] Speaker C: Great question, and I absolutely think we had many learnings. What I explain here as lessons, I would say we learned ourselves. The first one worth mentioning is definitely that wherever we went, at least from my point of view, we had to overcome a lack of AI understanding or awareness, especially when it comes to data architectures and data processing, and particularly with AI and GenAI. As I mentioned in my introduction, we worked on this 20 years ago, and I had to resolve something similar there: explain what you need to train a neural network versus working with a rule-based engine.
Then, over the last decade, we applied so much AI that is hidden in products, and we didn't talk a lot about that. But over the last years we started to talk more about applied AI, and people understood: yes, you need a data strategy, you need a data architecture, you maybe need PhDs or data scientists. And they said: yes, it takes longer, and we need to be careful. But then GenAI came, and everybody thinks: hey, now we don't need data anymore.
Because the GenAI models seem to work without data, and that's the biggest misunderstanding, I guess: focusing on something which is super hyped and then not understanding what is required for it. Which leads me to the other lesson: whatever we did with AI, IT/OT convergence is always foundational for an AI project in manufacturing. Why? Because it's not about pulling the data one time, doing some analytics, and getting some insights, as might suffice from an enterprise consulting perspective. No, you need it at runtime, under the non-functional requirements of the process, such as real time.
And that's hard. So you need access, format, semantics, and integration, processed in real time.
And that leads me to the next thing we learned, and I think everybody learns over time: when you are at the model level and you generate those models, people still think you do that once. But the reality is that industrial processes move forward, as do their variants, so you need to monitor the models' performance and then really upgrade the models with new data you collect from the process. And for that you need to build an end-to-end pipeline, and you need the infrastructure for it.
The digital twin is the other part which can help you there, when you create training data by generating synthetic data for edge cases which are very hard to capture from real-world environments.
And also, when it comes to inferencing at the edge, having some guardrails through physical simulation is really super helpful; that is what we learned. And then, from a scale perspective, responsible AI and cybersecurity are non-negotiables. The earlier you bring them in, the quicker the overall progress of your project will be.
[00:10:45] Speaker B: That's very fascinating, Gerald. You touched upon quite a few things that have come up in a lot of the conversations that I have had with leaders as well. First of all, just going in and explaining what AI is.
And I think that education part never stops, right? Partly you're trying to bring a new cohort of people to the use cases that you're trying to develop, and partly they don't have exposure to those kinds of things, so there is a little bit of upfront education needed on the art of the possible. The other thing you talked about: it's one thing to do a one-time data pull and analysis, but it is a completely different thing to have the data available in real time, set up the entire infrastructure, and handle the cybersecurity aspect of it. Those are some very fascinating points, and they resonate with the conversations I've had with quite a few folks. So, building on that, and given the challenges that you talked about: industrials often get labeled as slow adopters of new technologies.
From your experience, is that a fair characterization, or are you seeing faster adoption now, especially post-COVID and after all the supply chain shocks the industry has gone through in the last five to seven years?
[00:12:07] Speaker C: Tricky question, I would say. From my experience, the biggest challenge we have, also when we want to attract workforce, is that the industry is often not understood as it should be, which is very different from consumer industries or consumer-market products and goods. Historically, I think that is where people get their impression. But when you look into the industry, the equipment life cycles are longer, there is more regulation and there are safety requirements in manufacturing, and you always operate on legacy systems; even if you build a greenfield, every greenfield has its own legacy to capture.
And everything you do needs to be extremely cost-efficient. Bringing all those parameters together to optimize a solution takes more time. Some industries are cutting edge: you mentioned you have a lot of experience in semiconductors, or take automotive; those industries always want the latest tech. Others, maybe in certain types of process industries or mining, also adopt, but not everywhere.
COVID, since you mentioned it, further sped things up, because we got the pressure to come out with new vaccines and also to prepare the supply chain; we had to drill into what can be done. And what we learned is that you can choose resilience over cost efficiency. What we mean by that is that you don't optimize purely through offshoring and everything.
You reduce or adjust your supply chain; it costs a little bit more, but you gain more resilience. And on the other hand, the investment in zero-touch production and remote operations is something we see bringing us forward a lot when it comes to a reduced workforce. So everything through COVID was not only based on COVID; I think it will be a further accelerator for the next wave we are seeing in AI adoption.
[00:14:36] Speaker B: And I really like the trade-off that you put out there: resilience versus cost efficiency. It's an interesting way to look at it, because a lot of industrials do work on thinner margins, and resilience is the need of the hour given all the disruptions, so framing it from that point of view is an interesting view. Thinking about that and about ROI: from your experience, where are industrial companies getting the biggest ROI from AI today? Is it in the traditional use cases like predictive maintenance, which have been around for a long time, or quality control?
Or are there some really new use cases emerging that are getting people more excited?
[00:15:24] Speaker C: You are spot on with your use cases. When we talk about ROI, I always remind myself that we are talking about net savings minus implementation cost. So you need to look into, and I recommend this every time, what the net savings potential in your process is, and where you start from. If you already have a highly optimized system and you go in and apply AI, even on top of a sophisticated rule-based system, maybe you won't gain that much. We often see customers who are already extremely highly optimized on those traditional use cases, versus those who are not. But let's say we walk into non-optimized use cases: predictive maintenance is absolutely the front runner. It has accompanied me for the last decade; we built solutions for it.
We have so many solutions in the portfolio to deploy predictive maintenance intelligence exactly where it needs to be: at the edge, in the cloud, centrally, across lines, across factories. And what we typically see is that you can reduce unplanned downtime, which in automotive could cost a million per hour, by 20%.
So you see, those industries are already very highly optimized. But it's also a 25% reduction in maintenance costs, or a 30% reduction in spare part inventory costs, that type of thing. That is typically followed, as you mentioned, by energy optimization. Why? Because it is usually pushed by new regulatory requirements on sustainability, and sustainability and energy often go hand in hand, which is good overall.
Because when you look into sustainability, you also reduce your overall energy cost, and we have so many processes with heat or high energy demand. You won't get those high percentage improvements there, so the ratio is not that high, but when you look at the absolutes, the savings are massive. Quality control, absolutely.
We see that all over the place. At first, quality control sat only at the end of the manufacturing line. What's happening now, with defect detection and anomaly detection with cameras, for example, is that you deploy it earlier in the process, step by step. So you can maybe even go after the holy grail: adjusting on tolerances to avoid scrap, or checking early to avoid wasting energy.
But it doesn't stop there. As I mentioned, AI is often already hidden somewhere. When we look into our control systems, where we help manufacturers in additive manufacturing, 3D printing, they get up to 40% improvements on their parameters in faster processing time. On CNC milling machines: material waste reduction by up to 20%, and a 50% speed-up on the tool path.
We talked about supply chain. Here, especially with improved forecasting, stockouts could be reduced by 30%, and you can reduce the inventory, which gives you more manufacturing space. Those would be some examples I would mention here.
[00:19:03] Speaker B: Okay, yeah. A lot of what you talked about is AI, machine learning, and different applications of looking at the data. Are you seeing some early emerging use cases around agentic AI that can be applied in a manufacturing setup to improve performance, or is it way too early to call that out?
[00:19:25] Speaker C: Actually, when you look into what we are doing with our copilots for production and so on, those are all going towards agentic systems, and we are early; we try to be ahead here.
We see it's not easy. But look at copilots for engineering, where you start with AI across the value chain in the engineering systems, with support like what you know today for coding, except you need to code in the automation languages and domain-specific languages. Then you come over to operation and production, and afterwards predictive maintenance. You get agents or copilots deployed on every product, and then the specific ones for the process.
And what we need to sort out for the future is how those will interact as a global agentic solution. At the moment, they come in with every product.
[00:20:29] Speaker B: Very interesting. Now, evolving some of these use cases: I think there are a lot of companies, especially in the industrial ecosystem, who are interested in exploring these solutions.
But at the same time, a lot of these use cases get stuck in the pilot phase, and a lot of the time they never move beyond it. Where do you think are the most common pitfalls that keep AI projects stuck in the pilot phase, and what are the practical steps you have seen companies take to move beyond it to a successful large-scale implementation?
[00:21:05] Speaker C: Yeah, you always have the hard questions here. I would start with what I see most of the time, and which we ran through ourselves in the early phases in the R&D department: misaligned strategy and readiness for the scale of AI. Why? Because you treat AI as a tactical experiment: you go somewhere, have your front-runner teams create a prototype, and they prove, hey, we can achieve impact with AI in a certain lab scenario, and now it's time to roll it out. But besides that, nobody thought about: when I really do that across my company, what type of data do I need, and at what quality? I need a whole data and AI strategy. And how well are my processes described today, in a way that a machine could interpret them? That's where we see the biggest potential for failure: when you go from the lab out to the real field and bring in real-world data samples, which are very different, often from site to site.
And that happens as well when you work with partners. AI is something where people think it will mix up the ecosystems and change everything, so you bring in external partners with AI expertise but no domain expertise. Then you get great pilots, but the people don't bake in the experience of the domain. That is often an issue.
Or, as today everybody looks at ChatGPT, we have the hammer; let's call it over-reliance on hyped tools, where we want to use something because it's so popular. But we overlook the whole universe of AI, what's already there, what we overlooked over the last 10 years, which we could apply tomorrow and it would work perfectly.
The other one, especially when we walk into larger corporations, is that they have innovation teams, and then you have the digital thread. We explain to them the digital thread of manufacturing with all the different nuances, and then they notice that they had only focused on one single step; when they apply that methodology, they need to focus on the whole internal value chain, the vertical integration.
How can we go beyond that? That's the second part of the question.
I would say it's no different here than with any other big megatrend: we need to bring together technology, people, and process. The first recommendation is always to move away from technical proofs of concept with artificial or simplified data sets, and go into proof of value on a real-world scenario. Really go in where the real-world problem is, take it, pull out the real data you have there, and look at that. That's the first part, because it shows you that you are much closer and that you are digging into the right problems; bring in the right partner there as well. Then have clear governance and sponsorship for scaling in place. When you understand what AI means, you need to bring in the AI infrastructure, AI governance, AI training, and data creation. If you do that for your core processes, you need funding and people for it, which you usually don't have planned today.
And that brings me to the investment part: once you understand that, you create new capabilities, invest in standardized data infrastructure and data quality, and leverage industrial-grade platforms for that, at the edge and in the cloud. That would be my high-level advice here.
[00:25:17] Speaker B: No, and that's very interesting and I think a lot of it leads to the next topic that I wanted to talk about.
[00:25:23] Speaker C: Right.
[00:25:23] Speaker B: Because one of the key reasons for getting stuck in the pilot phase is that the data is messy, and the people and talent you need for the change management are sometimes not there. I think you covered a lot about setting up the right data infrastructure, but it's a real industrial problem, right? At least one of the reasons we have found the data to be messy is that a lot of times industrial companies have grown through small acquisitions here and there, or small units here and there, and everyone comes with their own ERP, MRP, and data infrastructure. It becomes a little complicated, with a variety of legacy systems where data resides, and there is no consistency.
How do you tackle situations like that? Or is that where your strong partner network comes in, to help iron out the data issues and build the right data repository for the companies that you partner with?
[00:26:21] Speaker C: You're spot on here. You really described our real world in manufacturing, where one manufacturer buys others and then inherits all the technical debt of the different systems.
So how can we help here, or get through that? I think the first requirement is always to start from AI and then work backwards: for the specific use cases where we see the biggest value, or which solve the problem, create a high-quality data foundation first.
So really stick with the use case: what data do you need? Work backwards to the data sources, wherever they sit. And then you often discover, as you mentioned, in the legacy environment: I would need those data sets, but those data sets don't exist today.
So that's the first thing: you need to more or less tap into those data, and for that we have the ecosystem, and at the moment a whole portfolio which makes it super easy. The next part, once you get to those raw data, is contextualization, which is the biggest issue in the existing data sources: you get a bunch of raw data, but you don't know what the semantics of the data are, and you need to add those semantics, at first in a manual way. Unfortunately, the IT tools often don't work here, and models trained on open data often fail, because industrial data are different. We and our ecosystem work to create the tools that help you with that step. But it's crucial in order to then empower high-level data analytics and AI on top of the data.
And that leads to the next stage, where you work more on the organizational and process side, by creating governance and self-service in your company.
So the data pools need to be created, and training data sets for AI need to be curated. That should work in a federated way, where you orchestrate it across your lines, then factories, and even at global scale.
And what we recommend, of course, once you get to that level: you can make your life easier by investing in standardized infrastructure. That is why I mentioned our industrial edge and cloud platform, which helps you so much with, first of all, collecting the data, transforming the data, and contextualizing the data, which you then need to inference a model and monitor the model at the edge. If you need to, you can do the same thing in the cloud and train models in the cloud. So this is what I would recommend.
[00:29:10] Speaker B: That's very interesting, and I think you've already answered part of this. But a lot of industry leaders ask: is it better for me to think about and implement an opportunistic use case, or is it better to adopt this more enterprise-wide? Of course there are pros and cons to both approaches. One gives you a very immediate, short-term return, but you may have to overhaul the system if you want to take it to other areas. The other sets the right foundations, but it takes a little bit longer to get to the impact you're looking for. How do you advise the companies that you work with on the right balance?
[00:29:53] Speaker C: So first, that requires consulting. For the right balance, as you said: first of all you need to be opportunistic, you need to prove the value. And beside opportunistic versus holistic, I also look at what is short term, because it seems everybody in the industry at the moment has some financial pressure and wants short-term, low-hanging-fruit output. The opportunistic part is the candidate for that, so look into that area first and go after it, because it establishes trust over time. But as I said, and this is something where the boardroom expectations often don't meet the real world, you need to go holistically and establish that infrastructure, or establish the right partner in your company to take care of it. So my recommendation is the mix: start with both. We do the same thing: take opportunistic approaches short term to show some progress, but then go on and figure out what is really core for you, your golden nuggets, what AI would mean there, and prepare that infrastructure holistically for your enterprise.
[00:31:18] Speaker B: That's very helpful, Gerald. One last question: if you step back and think about the next 5 or 10 years, do you believe that adoption of AI will differentiate leaders from laggards in the industrial ecosystem? And what would be your one piece of advice for CXOs who are looking to embark on this AI journey right now?
[00:31:47] Speaker C: When I hear that, I always come back to where I started, in academia. At that time there was a guy in the US called Mark Weiser, and he was driving something called ubiquitous computing. He made a nice statement: the most profound technologies are those that disappear.
They weave themselves into the fabric of everyday life.
That has stuck in my mind ever since. It was the same with cloud computing; think about electricity, plumbing, the Internet, and later IP and cloud, when those came up. And it will be the same with the current AI boom. The AI technology will stay, will be extremely impactful, will change everything, but it will disappear into the products we use in daily life. So my advice here, and this definitely comes to your question, is that it will separate everyone who adopts it from the laggards, because the laggards will disappear. AI will impact everything, but that doesn't mean you need to apply AI to every process.
So you need to keep your eye on the ecosystem, see which products or new tools change the game in the ecosystem, and then adopt them. But you definitely have one of your core products in-house where you need to be that transformative force.
So I think that answers that part. And before we come to the really important advice: be very transparent, because AI could also mean that people get scared and don't apply it. So empower a culture of experimentation and explain to everybody, employer and employee alike, that it's not about replacing the workforce, it's about empowering the workforce, because the empowered workforce will replace the companies not doing that. The competition happens at a different level, and that will change how things play out in the ecosystem. My final advice: get started with AI, get trained, prepare an AI strategy for your core business, focus on proof of value for real-world use cases, and try to measure that end to end. That would be my single sentence for getting started.
Perfect.
[00:34:37] Speaker B: Thanks a lot, Gerald, and I sincerely appreciate you taking the time to connect with us on this podcast series. Thanks a lot.
[00:34:44] Speaker C: Thanks for having me, Vinit. It was a pleasure talking to you; let's do it again anytime soon. Looking forward to it.
[00:34:56] Speaker A: Thanks for listening to Ayna Insights. Please visit Ayna.AI for more podcasts, publications, and events on developments shaping the industrial and industrial technology sector.