[00:00:03] Speaker A: Welcome to Fernway Insights, where prominent leaders and influencers shaping the industrial and industrial tech sector discuss topics that are critical for executives, boards, and investors. Fernway Insights is brought to you by Fernway Group, a firm focused on working with industrial companies to make them unrivaled, segment-of-one leaders. To learn more about Fernway Group, please visit our website, fernway.com.
[00:00:38] Speaker B: Hi, this is Nick Santhanam, CEO of Fernway Group. Welcome to another episode of the Fernway Insights podcast as we continue with the theme of Disruption 2.0. Our guest today is Professor Hany Farid, whom the press calls the modern-day Sherlock Holmes. He is a professor in electrical engineering and computer sciences and the School of Information at the University of California, Berkeley. Professor Farid specializes in forensic analysis of digital images and deepfakes, which he prefers to call content synthesized by artificial intelligence because it's more descriptive. He has collaborated with DARPA since 2016 on the development of technologies to combat these fakes. In addition, he's also the recipient of the Alfred P. Sloan Fellowship and the John Simon Guggenheim Fellowship, among several others. With that, let me welcome Professor Hany Farid to our podcast. Professor, welcome.
[00:01:29] Speaker C: Good to be here with you, Nick, good to see you.
[00:01:31] Speaker B: Great. Let's start off talking a bit about the breakthroughs in artificial intelligence and machine learning. They've enabled the development of technologies that nobody had ever imagined or dreamed of. For our listeners who haven't kept up with the latest developments, from your perspective, what are some of the technologies that have gotten consumers and technology companies excited, and why is that?
[00:01:53] Speaker C: What's interesting to me about this latest wave in ML, machine learning and artificial intelligence, is that it's really sort of the third wave. The first wave came in the 1960s. The second wave came in the 1980s, and this is now the third wave. And each wave made huge promises and really sort of under-delivered.
In hindsight, when we look back on it, I think the jury is still out on this wave, whether it will under- or over-deliver. I think we're still waiting to see what the next decade or so will bring. But there's no doubt that there have been tremendous advances in the use of primarily deep neural networks to do a variety of tasks, from predictive models to lots of things in computer vision and computer graphics, which is where you're seeing the bulk of the applications of these technologies. And what's been driving that, of course, is a combination of many things. It's not just one thing. Yes, it's the underlying computational networks, the deep neural networks, but it's also really, really fast GPUs being made by Nvidia and other companies.
It's massive data sets on a scale that is unprecedented in history. And of course, there are some really interesting optimization techniques for training these massive networks. The most interesting uses of the technology, I think, and maybe I have a biased view of this, have been in my field, which is computer vision, computer graphics, and robotics. But that sort of makes sense, because that's what these networks were designed to do. These convolutional neural networks are designed to analyze images. That's, at its core, what the convolution is. And I think that there have been real breakthroughs. I think the jury, as I said earlier, is still out as to whether these things are really going to be the tipping point in true artificial intelligence. I think probably not, but I think they are a step in that direction, and a really interesting and important and powerful step in that regard.
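For listeners who want a concrete picture of what "the convolution" at the core of these networks actually does, here is a minimal sketch in plain NumPy. It is purely illustrative: a single hand-crafted 3x3 edge filter applied to a random patch, whereas a real CNN learns thousands of such kernels from data and stacks them in layers.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide a small kernel over a grayscale image and sum the elementwise
    products at every position (the 'valid' region only, no padding)."""
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-crafted vertical-edge filter. A CNN learns thousands of kernels like
# this from data instead of having them designed by hand.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

patch = np.random.rand(8, 8)        # stand-in for a grayscale image patch
response = conv2d(patch, sobel_x)   # responds strongly where vertical edges occur
print(response.shape)               # (6, 6)
```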
[00:03:48] Speaker B: That's very interesting. Because now you can look at the flip side: there's a lot of risk people are talking about, which comes with the rise of AI and ML. And one of the biggest risks, or one of the major ones, is deepfakes, which is also your area of expertise. Can you talk a little bit about what deepfakes are and what the threats associated with them are?
[00:04:07] Speaker C: Let me just editorialize here for a minute. I think for the last 20 years, we have made a mistake in the technology sector, which is we develop technology, we run 100 miles an hour towards a brick wall, we slam into the wall, and then we have a hangover the next day and we wonder, how did we get here? Think post Cambridge Analytica and the privacy concerns online. I think we have to take a breath in the technology sector and start thinking about the harms that come from technology. Technology is not inherently good, it's not inherently bad. And we have to grow up and mature a little bit as a field and start saying, how do we harness the power of technology while mitigating the harms of it? And this is a really good example of that, where almost right out of the gate, very early on in this latest wave of machine learning, you saw people using machine learning and deep neural networks to create so-called deepfakes. So let me first define that for you. A deepfake is simply fully synthesized audio, image, video, or text, for that matter. So, for example, if you navigate to the website thispersondoesnotexist.com, all one word, you'll be presented with the face of, well, a person who doesn't exist. And that face was completely synthesized by a generative adversarial network, which is two deep neural networks pitted against each other; there's a minimal sketch of that setup after this answer. If you go to YouTube and search for deepfake, you will find videos where people's faces or likenesses have been inserted into a video, my favorite of which is Nicolas Cage into The Sound of Music, which is truly brilliant. If you haven't seen the Tom Cruise TikTok videos, those are also quite brilliant. We can now synthesize videos of people saying and doing things they never did. And in the audio domain, from samples of my voice, for example, or your voice, Nick, we can now synthesize you saying something that you never said. And in each of these cases, images, audio, and video, what is underneath the machinery is some type of machine learning, big data, artificial neural networks, and the ability to distort reality, whether that's an image of a person who doesn't exist, a video of somebody saying and doing something they never did, or audio of somebody saying something that they never said, fully automatically. And what's important here is not that I can manipulate images, audio, and video. We've always been able to do that; Hollywood studios have been doing it for decades. What has changed here is the democratization of access to technology that used to be in the hands of the few and now is in the hands of the many. You can go to GitHub, you can download some code, you can run it on your computer, and you can make some pretty compelling fakes. The trend we have been seeing over the last five years in this space is that the content gets more and more realistic, it gets easier and easier to generate, and it needs less and less data and less and less computing power. And that trend is going to continue. And now you can see where the harms are. I create a video of Mark Zuckerberg saying Facebook's profits are down 20%, and that goes viral in, what, 30 seconds? How long does it take me to move the market to the tune of billions of dollars? Or I create a video of a world leader saying, I've launched nuclear weapons against another nation. How long before somebody panics and pulls the trigger?
Non-consensual intimate imagery: people inserting someone's likeness into sexually explicit material and then distributing it on the Internet as a form of weaponization.
And the list of these weaponizations goes on and on and on. Here's the thing you should really worry about: the so-called liar's dividend. When you live in a world where any image, audio, or video can be manipulated, well, then nothing has to be real, and anybody can dismiss inconvenient facts by saying, well, it's fake. So now let me get back to your question of who's responsible, and I would say everybody is. There's a dark side of this technology that I don't think the field itself has fully come to grips with, in terms of how to manage or mitigate it.
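Since Professor Farid mentions generative adversarial networks only in passing, here is a rough sketch of that two-network setup, not any particular face-synthesis system: a toy PyTorch training loop on made-up 2D data, where the generator learns to fool the discriminator and the discriminator learns to catch it. All names, layer sizes, and the toy data are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Two networks pitted against each other: the generator G turns noise into
# samples meant to look like real data, and the discriminator D tries to
# tell real samples from generated ones.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))    # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Toy stand-in for "real" data: points on a circle. In a face generator,
    # this would be a batch of real photographs.
    theta = torch.rand(n, 1) * 6.2832
    return torch.cat([theta.cos(), theta.sin()], dim=1)

for step in range(2000):
    real = real_batch()
    fake = G(torch.randn(64, 16))

    # Discriminator step: push real samples toward label 1, generated toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: adjust G so the discriminator labels its fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Scale the same adversarial loop up to convolutional networks and millions of photographs and you get the kind of face synthesizer behind sites like thispersondoesnotexist.com.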
[00:08:18] Speaker B: Then let's sort of go back to a 10,000-foot view. What can be done from a technology point of view to detect these deepfakes?
[00:08:25] Speaker C: Here's the rub, and I think you know this, Nick: in the security space, playing defense is very hard, because you can be 99% good, but that 1%, that's the ballgame. So here's what you have to understand about playing defense in this space, which is that on YouTube today, there's something like 500 hours of video uploaded every minute. There are billions of uploads to TikTok and to Facebook every day. And at that scale of audio, image, and video online, you simply can't forensically analyze your way out of that. Now, when a video hits on, say, Facebook or YouTube or TikTok or Twitter, and we have hours, days, weeks to analyze it, sure, we can do lots of great things; that's the kind of work we do at my lab here at UC Berkeley. But on the day-to-day, with seconds to respond, there is no way we can forensically analyze it all. Now, you can change the story a little bit. In the version I just painted for you, I said, I am the recipient of some content, whether I'm an online provider or a consumer of material online, and I need to be able to trust what I see here, and I want, say, Facebook or YouTube or Twitter to authenticate it. Now, first of all, there is no indication that these platforms are interested in truthful or authentic content. And putting that aside for a minute, I don't think that even if they were interested in it, they'd be able to do it. But we can think about the problem in reverse, which is: I, the creator of content, want to publish something, and I want people to trust me. And so the other way to think about forensic analysis is not to try to authenticate content once it's uploaded, but to authenticate content at the point of recording. And this is so-called controlled-capture technology, where at the point of recording on my device, whether that's a mobile device or a camera or whatever it is, my device authenticates the material by grabbing the date and time, the GPS location, all of the pixels; it creates a compact digital signature of all that material, it cryptographically signs it, and then it associates that signature with the content at the point of recording. So my camera is saying, this is an authentic recording, and it happened at this date and time at this location. And then as that content makes its way through the Internet, it can be authenticated. And there's a really beautiful initiative called the C2PA, the Coalition for Content Provenance and Authenticity, that's being led by Adobe and Microsoft and the BBC and Twitter and a number of other organizations, to develop this protocol and to allow us, as the creators, to authenticate. And this is the technology that I think will work at scale. I'm excited about that technology. I think it has the hope of working. It's a big infrastructure that has to be built. You have to build the protocol. You've got to get the Facebooks of the world to start caring about truth and authenticity, which is a whole other battle that we can talk about. But there's room for both of these technologies: the forensic analysis, the kinds of things we do to analyze the underlying pixels of the video and the image, but then also authenticating at the point of recording, which I think is probably the thing that will work at scale, at Internet scale.
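To make the controlled-capture idea concrete, here is a minimal sketch of "sign at the point of recording, verify downstream." It assumes a single hypothetical Ed25519 device key and a simple JSON claim; the actual C2PA specification defines a much richer manifest and certificate chain, so treat this as a flavor of the approach, not the protocol itself.

```python
import hashlib, json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical device key; on a real camera or phone this would live in secure hardware.
device_key = Ed25519PrivateKey.generate()

def sign_capture(pixels: bytes, captured_at: str, gps: tuple) -> dict:
    """At the point of recording: hash the pixels, bundle them with metadata, sign the bundle."""
    claim = {
        "pixels_sha256": hashlib.sha256(pixels).hexdigest(),
        "captured_at": captured_at,
        "gps": gps,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": device_key.sign(payload).hex()}

def verify_capture(pixels: bytes, record: dict, public_key) -> bool:
    """Anywhere downstream: recompute the pixel hash and check the device's signature."""
    if hashlib.sha256(pixels).hexdigest() != record["claim"]["pixels_sha256"]:
        return False  # pixels were altered after capture
    payload = json.dumps(record["claim"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False

photo = b"\x00" * 1024  # stand-in for the raw image bytes
record = sign_capture(photo, "2021-06-01T12:00:00Z", (37.8716, -122.2727))
print(verify_capture(photo, record, device_key.public_key()))         # True: untouched
print(verify_capture(photo + b"!", record, device_key.public_key()))  # False: pixels changed
```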
[00:11:41] Speaker B: There's a lot to be done before we get comfortable that these deepfakes don't exist, or at least before we know the deepfakes are deepfakes. Till that happens, Hany, what entities do you think need to take the responsibility and ownership of addressing these risks of deepfakes today, as we speak, till this protocol comes online? Is it going to be the technology companies? Is it the government? Or is it, well, you're on your own, Mr. Consumer, go figure?
[00:12:06] Speaker C: Yeah, a little bit of everything. So, first of all, we, as consumers of content, have got to knock off the nonsense that is social media and online platforms: how quick we are to be outraged and angered and to share things that are demonstrably and idiotically false. That's on us as consumers. I think we're being manipulated, so the fault isn't entirely on us. I think that the algorithms that decide what shows up on your newsfeed every day are intentionally designed to outrage us and to engage us in the most negative way possible. But we have a responsibility to understand that, just like wearing a mask prevents the spread of disease, not sharing, not retweeting, not liking also prevents the nonsense from proliferating online, and that's on us. There is no doubt that the technology companies have to take this stuff more seriously. Facebook has been notoriously bad at this, Twitter has gotten ever so slightly better, and YouTube has gotten a little bit better about reining in the abuses. But this idea of, hey, it's the Internet, what happens on the Internet stays on the Internet, is over. It was never true, and it's absolutely not true. Right now, you are seeing horrific problems around the world, from truly outrageous and horrific human rights violations in places like Myanmar and Ethiopia and Sri Lanka and the Philippines, to literally death and destruction from the global pandemic, because of the lies and stupidity that are being spread online. The companies have got to take more responsibility. Now, the government also has to start reining in these companies, because the fact is that unless there is a significant financial incentive, or I would say liability, the companies are going to keep doing what they're doing. They have grown into literally trillion-dollar companies now, and it's hard for them to rein that back in without pressure from up top. And I do think that there is pressure coming this year, from the UK, from the EU, and from the US, to say, look, guys, your platforms are harmful, they are leading to death and destruction and to the disruption of societies and democracies, and this Section 230 that gives you broad liability protection doesn't work anymore. So I think there are going to be some mandates coming down from up top. And the last thing I'll add to this: there's the consumer, there's the tech companies, there's government, and there's the advertisers. I mean, the reality is that we are not the customers; we are not really the ones fueling Facebook. It's the 10,000, 50,000 advertisers that are doing that, and they can just choose to stop advertising on Facebook. They could say, look, this is a company that is doing some really bad things to our society and to our democracies, and yeah, maybe we should stop advertising. So there are pressure points to be placed to make these companies behave better. And I think that the reckoning has come, and I think that we will start to see some reining in of these companies over the next few years.
[00:15:08] Speaker B: Very depressing and interesting.
Maybe, Hany, we'll sort of switch over to your journey to date.
In 2009, you developed PhotoDNA with Microsoft. To give some context to our listeners, it's a robust hashing technology that is used to stop the spread of imagery related to sexual exploitation, especially involving children. Now, PhotoDNA is also used by Facebook, Twitter, and Google to tackle extremist content. What was your motivation and thought process behind creating this kind of technology? And, probably more personal, how does it feel to witness, in the real world, the impact of technology you spearheaded?
[00:15:47] Speaker C: Yeah, this is an interesting story.
So this work dates back to 2008, 2009, when we developed and deployed PhotoDNA. And I remember getting a call from Microsoft back in 2008 saying that for five years, they and some of the then-large tech companies had been really wrestling with this problem of child sexual abuse material.
And by the way, just by way of nomenclature, people often refer to what's called CSAM, child sexual abuse material, as child porn. I don't like that term. People in industry don't like the term, because porn is sort of a funny word. And so we call it what it really is: these are images of extremely young children being sexually abused, and that material being shared online. And we don't need to get too graphic here, but I will tell you that today the average age of a child involved in this material is eight years old. These are not 16- and 17-year-olds playing with their sexuality. These are prepubescent kids, eight years old down to, by the way, one-month and two-month-old infants and toddlers; pre-verbal is a category of abuse in this space.
And I say that because it's important to understand what we are talking about. And then it gets even worse, because that material keeps getting shared year in and year out, decade in and decade out, online. And imagine if the worst days of your life are being shared online to the tune of millions and millions of people. And by the way, last year alone, the National Center for Missing & Exploited Children received 20 million reports of child sexual abuse material from the major tech companies here in the US. This is a phenomenally large industry with shockingly young children as victims. And back in 2008, the tech companies were like, yeah, this sort of seems awful, but what are we going to do about it? And so I got this call from Microsoft because somebody had read some article about my work in forensics. I went down to DC. I knew nothing about the space, Nick. I had no idea how young the kids were. And I was really shook up when I left that meeting. And I thought, this is really bad. And so Microsoft and I decided to team up to try to develop a technology to disrupt the global distribution of child abuse material.
And if you ever wonder if a small group of dedicated people can change the world, this is an example of where it was true. It doesn't happen often, but it does happen. And it was me, a couple of lawyers at Microsoft, and a couple of researchers and engineers. And we just worked for about a year and a half. We developed this technology that goes over to the National Center for Missing & Exploited Children, which has previously identified content, and extracts a distinct digital signature from that content.
Then, on upload, every image is scanned and compared against a database of known, previously identified child sexual abuse material. And that's what allows us to catch, literally, now, tens and tens of millions of people distributing content online. That technology has been around now for over a decade. It actually wasn't that hard to develop; it was a lot harder to deploy, because you had to convince tech companies to take some responsibility. But what is shocking about it is that law enforcement can't keep up with the number of images we catch.
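PhotoDNA itself is proprietary and considerably more robust, so the sketch below is only a toy stand-in, an average hash compared by Hamming distance, meant to illustrate the pipeline he just described: extract a compact signature from previously identified content, store it, and compare every new upload against that database. The file names and the match threshold are hypothetical.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: downscale, grayscale, threshold each pixel at the mean.
    PhotoDNA is proprietary and far more robust to re-encoding, cropping, and edits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two signatures."""
    return bin(a ^ b).count("1")

# Hypothetical database of signatures extracted from previously identified material.
known_signatures = {average_hash("known_image_001.jpg")}

def flag_upload(path: str, threshold: int = 5) -> bool:
    """On upload: compare the new image's signature against every known signature."""
    h = average_hash(path)
    return any(hamming(h, k) <= threshold for k in known_signatures)

print(flag_upload("new_upload.jpg"))
```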
And so this is the technology that we developed back in '08, '09. That technology morphed a little bit when we started seeing groups like al-Qaeda and ISIS and terror groups also weaponizing the Internet. So we pivoted a little bit to the global fight on terrorism, to find and remove specifically harmful and dangerous terror-related content.
And it's been really interesting in this space because it was a good example of tech companies saying, well, there's nothing we can do about it. It's too hard of a problem. And then we came in and said, well, it turns out we can solve it. And I'm reminded of this because today we hear from the Mark Zuckerbergs of the world that, oh, disinformation and misinformation is too hard of a problem, it's too big of a problem. But the reality is that it's not so much a technology problem, it's a will problem. It's that these companies don't want to get in the business of moderating content because it's messy and it's complicated. And I agree with that. But I think the alternative is completely unacceptable.
[00:20:05] Speaker B: My understanding is you've also worked as a consultant for intelligence agencies, media outlets, and the courts seeking to authenticate the validity of images. What have been your most interesting projects to date on that front?
[00:20:17] Speaker C: I will tell you about one of the ones that I found most fun and interesting and enlightening. For years, I was getting email from conspiracy theorists who were convinced that the now-famous backyard photo of Lee Harvey Oswald, of course the accused assassin of President Kennedy, was fake.
And even Oswald himself, when shown the photo back in the 1960s, with the rifle, the same rifle that was found to have killed the president, said, that photo is fake, it's not me. Which is pretty amazing if you think about when he said that, nearly 60 years ago. And I swear to God, every month I get some email from somebody saying, the photo is fake, the photo is fake, the photo is fake, you will go down in history. And I remember one summer I got a particularly coherent email from somebody pointing out things in the image where I thought, yeah, that's sort of weird, I don't really understand that in the backyard photo, something about the lighting and the shadows. And so I spent the summer working on a forensic analysis where I did a full-blown 3D reconstruction of Oswald's backyard: him standing there, accounting for his height, his weight, the size of the gun, the weight of the gun, his balance, where the sun is, the shadows. A full-blown 3D reconstruction. And I found that everything in the photo was perfectly, physically consistent with an outdoor scene: the sizes, the weights, the balance, the shadows, the lighting, everything just sort of lined up so beautifully. You can essentially build a digital, physical reconstruction of the scene. (There's a toy sketch of one such geometric check after this answer.) So here are all these things that people pointed to, the shadows, the shape of the chin, how he's balanced with the weight of the gun, and one after the other, you could say, nope, that's physically plausible, physically plausible, physically plausible. And I thought, this is really cool, this is great, we've put this to rest, science is awesome. I wrote a paper, published it, and I thought, okay, my work is done here. And what happened, in hindsight, is 100% predictable, but I didn't predict it at the time: people were really, really mad at me. So the same people who came to me saying, hey, you are the expert in this space, you should analyze this, now started saying that I was part of the conspiracy to conceal who really killed the president, because I've worked with law enforcement agencies. Aha. Well, then why did you come to me in the first place? Doesn't matter.
This is exactly why I think the Mark Zuckerbergs of the world and the YouTubes of the world and the Twitters of the world have been responsible for the fact-free zone that many, many people live in now, because for years they've been allowing these conspiracies to fester and, in fact, even promoting them quite actively. And suddenly you have a significant number of people who don't trust their government, the media, and the experts. And when you need to trust them, now we have trouble.
And so what I loved about that work was that the science was very cool, and there were very specific claims you could address. But what was so interesting was the blowback, and the now very predictable response that once an idea or a belief is ingrained, there is no penetrating it, there is no setting the record straight. And that, to me, was a real eye-opener.
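The full analysis Professor Farid describes is a 3D physical reconstruction; the toy sketch below shows only the flavor of one 2D geometric check used in image forensics: under a single light source, the lines from each object point through its cast shadow should all meet near one common point, the projection of the light source. The point pairs here are hypothetical measurements clicked from a photo, not data from the Oswald analysis.

```python
import numpy as np

def light_source_consistency(pairs):
    """pairs: list of ((x_obj, y_obj), (x_shadow, y_shadow)) image coordinates.
    With a single light source, the lines through each object point and its cast
    shadow should all (nearly) intersect at one point, the projection of the light.
    Returns that best-fit point and the residual; a large residual flags shadows
    that are mutually inconsistent."""
    A, b = [], []
    for (ox, oy), (sx, sy) in pairs:
        # Line through the object point and its shadow: n . p = n . p_obj,
        # where n is the unit normal to the line's direction.
        nx, ny = oy - sy, sx - ox
        norm = np.hypot(nx, ny)
        nx, ny = nx / norm, ny / norm
        A.append([nx, ny])
        b.append(nx * ox + ny * oy)
    A, b = np.array(A), np.array(b)
    point, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = np.linalg.norm(A @ point - b)
    return point, residual

# Hypothetical (object point, shadow tip) pairs clicked from a photo.
pairs = [((100, 80), (70, 194)), ((220, 60), (226, 168)), ((300, 90), (330, 207))]
light_point, error = light_source_consistency(pairs)
print(light_point, error)  # tiny residual: consistent with a single light source
```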
[00:23:37] Speaker B: You've been doing a lot of work on digital forensics. Leaving aside these conspiracy theory topics, what other digital forensic efforts have you been involved in?
[00:23:46] Speaker C: What you have to understand is I started thinking about these problems back in 2000, actually closer to 1997, which was still the era of the analog. And I was primarily concerned about the courts of law. I was worried that when digital takes over, and when the ability to manipulate content takes over, what's going to happen when you try to introduce images and audio and video? And so most of my efforts in my lab, with my students and postdocs over the last two decades, have been in developing very specific forensic techniques that can ingest an image or an audio or a video and detect traces of manipulation: whether something was added, something was removed, something was altered, or something was fully synthesized. And these are not techniques where I can analyze a million images a day. These are techniques where you can analyze one or two images a day, because there's a human in the loop, and there's analysis that has to be done, and there's a series of techniques that have to be applied, as in most forensic disciplines.
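As one illustrative example of this family, and not necessarily one of the lab's actual methods, here is a toy copy-move detector: if a region has been cloned to hide or duplicate something, identical blocks appear at two far-apart locations in the image. Real detectors match robust features such as DCT coefficients or keypoints so they survive compression and small edits; the block size, stride, and shift threshold below are assumptions for illustration.

```python
import numpy as np
from collections import defaultdict

def copy_move_candidates(gray: np.ndarray, block: int = 16, stride: int = 4, min_shift: int = 24):
    """Hash every block x block patch of a grayscale image and report pairs of
    far-apart locations whose pixel content is identical (a copy-move signature)."""
    seen = defaultdict(list)
    h, w = gray.shape
    for y in range(0, h - block + 1, stride):
        for x in range(0, w - block + 1, stride):
            patch = gray[y:y + block, x:x + block]
            seen[patch.tobytes()].append((y, x))
    matches = []
    for locations in seen.values():
        for i in range(len(locations)):
            for j in range(i + 1, len(locations)):
                (y1, x1), (y2, x2) = locations[i], locations[j]
                if abs(y1 - y2) + abs(x1 - x2) >= min_shift:
                    matches.append((locations[i], locations[j]))
    return matches

# Synthetic demo: clone one region onto another, then look for the duplication.
image = (np.random.rand(128, 128) * 255).astype(np.uint8)
image[80:112, 80:112] = image[16:48, 16:48]  # the "manipulation"
print(copy_move_candidates(image)[:3])       # reports matching far-apart blocks
```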
I think those are really interesting and also very important going forward. There's a whole other avenue of research, which is now, how do you deal with the billions of uploads every day? That's what we were talking about earlier.
And then, of course, there's the spread of mis- and disinformation, and trying to understand and disrupt that. The field has been morphing. It went from a somewhat boutique, niche field, where there was a relatively small number of us dealing with relatively small problems in the courts, a few national security cases a year, to, all of a sudden, our very existence as a society and a democracy depending on our ability to get trusted information, which we're having trouble doing. And so the work we have been doing has been migrating into these three different buckets. People will say to me, wow, you really sort of saw the future, didn't you? And the answer is no, I didn't. And suddenly we woke up, and we're in this sort of mess of an information apocalypse that is having real-world consequences for our democracy and our society and our physical health. This will be a defining problem of the next generation: how do we harness the many wonderful things about technology and the Internet, of which there are many? But we also have to acknowledge and admit that there are some really bad things being done with technology. They are being weaponized against individuals and societies and democracies, and we have got to get a handle on this. And I think we have been too slow in responding. And my hope is that some of the work we're doing is one part of that equation, to try to harness some of the power and mitigate the harms.
[00:26:33] Speaker B: You know, at Fernway, we see that making a compelling business case for the use of AI and its applications is a big challenge. I mean, everybody sees it. But making a business case that allows real investments and, bluntly, makes real dollars has been a challenge.
[00:26:46] Speaker C: Right.
[00:26:48] Speaker B: There's obviously the question of what companies can do, what government can do, what policies can do. But at the end of the day, let's be brutally honest, you have to make money. What are your thoughts on this?
[00:26:57] Speaker C: Yeah, I think you're right, by the way, that despite all of the splashy headlines and all of the promises, AI and ML have not really delivered in the final run, in the actual ability to do something, as opposed to writing an academic paper that shows a cool result. Really getting that thing to work in the real world has been a huge challenge, and you have not seen that type of breakthrough from this technology. And this is why I said at the very beginning that the jury is still out on this technology. Here's what I think is going to be the future for the next ten years or so in the technology sector, and I equate this to the automotive industry. I'm old enough to remember that the first cars my family bought didn't have seat belts. They didn't have airbags, they didn't have ABS brakes, they didn't have front-wheel drive. They didn't have any of the number of safety features that we all now insist on in our cars, because we don't want to die in a fiery crash in our car. And what happened is the automotive industry fought and fought and fought against the addition of safety features, until they realized that this is a smart investment, because people want to be safe. And I think the technology sector has a similar problem: we haven't sold security and safety online, for us as individuals, for our families, for our kids.
That's not been a selling point. The selling point has been, hey, here's some free stuff, have fun, what can possibly go wrong? And I think we're starting to turn the page, where people want to feel that the content they get is trusted, that it's safe, it's secure, it's honest, there's integrity behind it. And I think that's going to become a selling point. Related to that, if you wind back 20 years to the beginning of the Internet that we now know today, nobody really understood how they were going to make money. Remember when Google came around and Facebook came around? It was like, you're just giving everything away for free, how could you possibly make money? Now, a trillion dollars here, a trillion dollars there, and suddenly we realized, oh, there is money to be made. But the reason we didn't see it is that people weren't, at that time, 20 years ago, comfortable with online commerce, the idea of spending $0.99 for an app or these renewing subscriptions that we all now have and have lost track of. But today the landscape is really different. People are willing to pay for things online in a way that they were not willing to 20 years ago. And I think that means we have to change the business model. We have to think differently about consumers online. We don't have to have privacy-invading, outrage-generating machines anymore, because that was the Internet of the past. The Internet of the future can look very different. And I'd like to see venture capitalists and technology companies come around and say, we can do better, we can be more trusted, we can be more privacy-preserving, we can be more honest, we can have more integrity behind these products, and people will pay for that. It's not just, hey, this is the morally and socially responsible thing to do; it's that we will do this because it is the smart thing to do. And I'm hoping that over the next 20 years, or maybe even the next five to ten years, we'll see a new business model for the Internet, moving away from a strictly ad-driven model, which is fundamentally at odds with the things that I've enumerated. Because when you are in the entirely ad-driven business, you are in the attention-grabbing business. And if you're in the attention-grabbing business, your motivations and individuals' and society's motivations are fully misaligned. And I'd like to see that alignment come back a little bit.
[00:30:36] Speaker B: I mean, you've been around this space since 2000, but especially in the last five years, maybe seven years, a lot of startups have been heating up in the AI space, across the whole landscape of hardware and software companies.
If you were a betting man, what kind of companies, what type of companies, do you think are going to emerge as the winners? I mean, obviously you talked about Nvidia; there are the hardware players, the obvious ones. But when you see the next wave, where would you place the bet?
[00:31:01] Speaker C: Yeah, I'm not a betting man, which is why I'm a university professor. But since I get to bet with other people's money, sure. Despite my concerns about deepfakes and their misuse,
I do think that there is going to be something interesting here about being able to fully synthesize images and videos and audios for commercial purposes, for Hollywood purposes, for entertainment purposes. Imagine if you can now rent a movie and say, I want to insert my favorite performer into the starring role. I can customize movies.
Imagine you can now take any movie in any language, and instead of that really crappy lip-syncing that is really distracting, you can make people speak any language you want. I think there's real power in that technology, and there are a few small companies that have emerged that I think are going to be leaders in that space. I think, though, they have to be very careful to make sure the technology is not going to be misused. They have to put the right safeguards in place to mitigate that harm, or they're going to end up with exactly the type of daily bad press that Facebook has been facing for the last few years. I'm less enthusiastic about the throw-machine-learning-at-pick-your-favorite-problem plays.
I find that not a very compelling story, and I think the last five years have sort of shown that that stuff's not really working.
I'm also, as I said earlier, extremely enthusiastic about anybody working on better business models for online commerce that move away from an entirely ad-driven business. I think people are ready for a different Internet, despite the fact that Facebook has billions of users and everybody is on these platforms.
Talk to people. We do surveys about this all the time. Nobody really likes these services; they're on them because they feel like they need to be. But I think as soon as something better comes around, people will leave, and they will leave gleefully. There will be no love lost for Facebook or for Instagram or for Mark Zuckerberg. They will jump ship, and they will do it quickly, if you can offer them a better product. And I think somebody's going to do that, and I think it requires a fundamental rethinking of the underlying business model.
[00:33:15] Speaker B: We'll see. Fingers crossed.
In closing, I'll leave on a high note. You've been called the father of digital forensics by NOVA scienceNOW. How does it feel to be given that title these days?
[00:33:28] Speaker C: Nick, I'm just grateful it wasn't grandfather.
You know, every once in a while in the academic world, you're at the right place at the right time, and the stars align for you to do something that nobody's done before. And please, please understand, I've had phenomenal students and postdocs and collaborators who've done really the lion's share of the work in my lab. But honestly, we were just in exactly the right place at exactly the right time, at the confluence of many things, and started thinking about this problem that, as it turned out, became a really, really critical problem in our society. For me, looking back on the last 20 years of my academic life, I find real reward in that. I think one of the great luxuries and one of the great honors of being in the academy, and having tenure and that type of job security, is that we get to go out into the world, look at the problems facing our society, and try to solve them. And that, to me, has been a tremendous effort. I take great pride in the work that my students have done over the last two decades in trying to chip away at this problem. And I think that there is a lot more to do. My only hope is that the many, many PhD students that I've graduated are starting their academic jobs now, so I can start thinking about retiring and they can sort of take over the mantle. One of the great things about being an academic is you're literally training your replacement, and there's great reward in seeing these young minds now stepping forward and carrying on that mantle. And I'm very excited to see what they'll be doing in the next ten years.
[00:35:07] Speaker B: Professor Farid, I sincerely hope you don't retire anytime soon, because your insights are unbelievably helpful. We look forward to hearing more stories like the ones you shared in the next podcast. What you shared has truly been fascinating. Thank you.
[00:35:21] Speaker C: Thanks, Nick. It was great talking with you.
[00:35:28] Speaker A: Thanks for listening to Fernway Insights. Please visit fernway.com for more podcasts, publications and events on developments shaping the industrial and industrial tech sector.