Why Lead?

0083 - The AI Playbook: How to Win with Machine Learning ft Eric Siegel

Ben Owden Season 3 Episode 83


Duration: 1:05:54

What if you could predict who will buy, who will click, or even who might lie or die? In this episode, Ben Owden sits down with Dr. Eric Siegel, founder of Machine Learning Week, a globally recognized expert on demystifying machine learning, and author of two landmark books, Predictive Analytics and The AI Playbook.

They cut through the hype of “AI” to reveal how predictive analytics (the real engine behind so many “AI” claims) is transforming day-to-day business operations—from fraud detection and targeted marketing to life-saving medical breakthroughs. Dr. Siegel explains why machine learning doesn’t grant magical crystal-ball power, yet still delivers massive value by accurately tipping the odds at scale. Together, they explore how this technology can help build a world that our kids—indeed, all of us—can thrive in, if we apply it ethically and responsibly.

Whether you’re a data scientist, a tech-savvy entrepreneur, or just curious about AI’s practical impact, this conversation will recalibrate how you see machine learning. Discover why some projects succeed while others fizzle out, how real machine learning differs from inflated notions of “intelligent machines,” and why the secret lies in collaboration, probabilities, and concrete, operational goals—not grandiose futurism.


Get a copy of The AI Playbook


Important Links
*Join Thrive in the Middle Today!
*Book WhyLead to Train Your Teams
*Explore Our Services


Social Media
*Ben Owden's LinkedIn
*Ben Owden's Twitter 

Eric Siegel

The world is unpredictable. What do I mean by that? I mean it's impossible, in general, to predict, well, is the economy going to go up or down next quarter? It's a fool's errand to put a lot of faith in any particular prediction on that made by anybody or any system. But if we're predicting over 100,000 customers who's going to purchase next quarter, I can do a great job identifying those that are five times more likely than average to buy and others that are five times less likely than average. The idea of being data-driven, of being empirically driven, the ultimate manifestation of that is machine learning. In fact, that's not an advertisement for machine learning; that's the definition of machine learning. Machine learning is the set of methods or algorithms or computer programs that try to make these predictions. And prediction is what it takes in order to improve an operational decision. Should I market to this person? Depends on whether we think they're going to buy, right? Otherwise you're wasting a dollar to contact them. Should we investigate the transaction for fraud? Should we block this credit card transaction? Again, it depends on our best prediction of whether it's going to turn out to be fraudulent. So prediction is always sort of the holy grail for improving each individual operational decision. And the best way to predict is to learn from historical cases.

Ben Owden

Being in the middle is challenging. From balancing the demands of those above with the needs of those below. Balancing between pushing the strategy forward and investing in the development of your teams. Stewarding what is presently working while being a change agent. We understand the tension that exists with being in the middle, where you're asked to be everything to everyone. Dear middle manager, we see you, we've heard you, and we're here to help you. WhyLead Consultancy is bringing you Thrive in the Middle, a 12-week cohort-based leadership program designed for those in middle management. An immersive program where leadership isn't just taught; it's honed, refined, and brought to life through a blend of expert guidance, peer collaboration, and immersive practical learning experiences. Join Thrive today and be more than just a link in your organization. Be its strongest link. To learn more and enroll, email us at yoda@whyleadothers.com 

Mambo! This is Ben Owden, the leadership Mr. Miyagi. My hope is that this conversation will help you find the clarity and conviction you need to lead a more meaningful and impactful life. I have curated some of the best thinker-practitioners from all over the world to help you get to your leadership nirvana. So sit tight and let's go on this journey together. 

Greetings to you. I hope you're at peace and are having a meaningful and productive day. Welcome to another episode of the WhyLead Podcast. I'm your host, Ben Owden. Do you ever wish you had the power to see into the future? To know who will buy, who will lie, who will click, or even who will die? What if I told you that this isn't just a work of fiction, but a reality shaped by the power of predictive analytics and artificial intelligence? Today, we venture beyond the buzzwords of AI, past the hype of automation, and deep into the very nature of knowledge itself. If AI is supposed to be intelligent, what does that actually mean? Can it truly know, explain, and create knowledge? Or is it simply a master of pattern recognition, blind to the deeper truths of the universe? And ultimately, what is the future of artificial intelligence? Can it ever develop the curiosity to ask questions like why? To guide us through this adventure, I am joined by a leading voice in predictive analytics and machine learning deployment. A speaker, author, and expert who has demystified AI for business leaders and data scientists alike. He's the author of two books: The AI Playbook: Mastering the Rare Art of Machine Learning Deployment, which provides a practical end-to-end strategy for getting AI systems off the ground and into real-world impact, and the second book, which came out first, Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, a best-selling book that made predictive modeling accessible, engaging, and indispensable for organizations seeking a competitive edge. I've read the first book, and while I haven't finished the second, I can highly, highly recommend both. And as the founder of Machine Learning Week and a former Columbia University professor, he has spent decades exploring the power and limitations of predictive models. 
His work has been cited by the Financial Times, Harvard Business Review, the Wall Street Journal, and the New York Times. Ladies and gentlemen, Dr. Eric Siegel. Dr. Eric, you're most welcome.

Eric Siegel

Hello Ben. Thank you so much for the intro. I think you've said. That's all I have to say. I think we're done.

Ben Owden

Thank you so much for making the time to have this conversation with me. Whenever we think about things like predictive analytics or machine learning or even AI, most people have come to know these terms only in the last 10 or 15 years. But you've been at this far longer. I mean, in the 90s, some people didn't even know the Internet existed, but you were already immersed in this world. So I want you to take us back in time, before the books, before the conferences, before the accolades, before the Target drama, to a moment where maybe something happened that sparked you to be on this path, where you fell in love with predictive analytics, where you thought, machine learning, this is the future. What was that moment that really got you interested in this world of analytics?

Eric Siegel

Yeah, so that was the early 90s. I kind of fell in love with the idea and the actual field of machine learning basically when I was entering grad school; I went straight from college to graduate school. And it's kind of ironic: originally I got fascinated because I did love the concept of artificial intelligence, which is a very subjective idea of what it means for a machine to have intelligence, which is really a word that describes humans. But at the time I came to the conclusion, well, the only way to head in that direction is machine learning, where the computer is automatically learning from examples, many examples in the form of data. And the irony is that now I kind of eschew the term AI. I feel that it's intrinsically a form of hype. I think that machine learning is extremely valuable, and what's come of it, including generative AI and large language models, the chatbots, and predictive AI, is extremely valuable in many different ways. But the term AI, in its general usage, routinely conjures a notion that anthropomorphizes a machine. It says, hey, look, the machines are becoming truly human-like in their capability, as if that's right around the corner. And I feel that there's a narrative there that's always going a bit too far. Some people say that one definition of AI is "what we haven't done yet." So once you get it done, it doesn't count as AI; it's not so spectacular anymore. It can win at chess and Go, these incredible games. Is that AI? It's not in the intended, original spirit of the word. Self-driving cars are extremely impressive, but they're not actually self-driving in a full-scale way, right? They're not autonomous. You can't sleep in the back seat, and in general they don't work across geographies. That's coming, I think it might be a few decades, but when it gets here, is that going to be AI? No, it's just another thing that works. It's functional. And once it works, people don't call it AI. 
So I think that irony kind of speaks to the problem with the term. There's a lot of great technology. Machine learning is well defined; AI is not. So we need to strike a balance between being excited about the value and capabilities, being awestruck by what it can do and thinking it's amazing, and at the same time being realistic and understanding the astounding general capabilities of humans, and that we're not fully replicating those on a large scale.

Ben Owden

And I'm glad you brought that up because I think I was going to ask you to set the foundation for how we use these terms in our conversation. Right. Because I think words like AI, machine learning, predictive analytics, some people use them interchangeably. And so in the way that you use them, what do these words mean and where do they overlap? Where do they diverge? And what's the most useful way to think about these words?

Eric Siegel

Yeah. So I'll define them. Machine learning and predictive analytics are well defined; AI is not. Now, AI can be used just as a synonym for one of those, and, you know, a rose by any other name, you can call things whatever you want. We use the word smartphone to describe a phone without the sort of grandiose expectations that it's truly smart, the way we refer to AI systems. So, machine learning: usually when people are talking about machine learning, they're talking about supervised machine learning, which means it learns from labeled examples. The learning process is supervised in that it's given a bunch of examples, maybe hundreds of thousands in some cases, or even more, where the answer is already known. So it's labeled: this is a picture of a cat versus a dog; this radiology image denotes a certain medical diagnosis; this customer did cancel; this other customer did buy, lie, or die. Whatever the outcome or behavior you're trying to predict, it's either from history or from human manual labeling. One way or another, you have the right answer. And when you're leveraging those reams of example data, that's called supervised machine learning. Learning from examples to predict: you're always predicting, you're determining the outcome, behavior, or category. Even for medical diagnosis, we still use the word predict. We still call it predictive analytics, also known more recently as predictive AI, to distinguish it from generative AI. And so that is one of the main categories of use cases of machine learning. The core technology is the ability to learn from the data, that is to say, derive a predictive model. That's a set of patterns, rules, or formulas that have been determined from some maybe very large but limited number of examples. And it's valuable in that the model now performs well over new cases that have never before been seen, that are not included in the examples from which it learned. 
In that sense, it's actually learned something true about the world. And it's helpful: you can make predictions in the future. We don't have a magic crystal ball as far as who's going to click, buy, lie, or die, but we can predict better than guessing. That's often more than enough predictive performance to be extremely valuable. And typically, business is a numbers game. You're trying to determine how to target marketing, which transactions to investigate for fraud, which satellite is possibly running out of battery, where to drill for oil, which train wheel to inspect, et cetera, right? All the main large-scale operations that we conduct as organizations benefit from predicting, because those predictions directly inform the action, the operational decision, made for each individual based on that prediction. So that's one category of use cases of predictive modeling, which is also known as machine learning methods. Another category of use cases is generative AI, such as large language models, which is also based on the same class of method or algorithm, machine learning. Because what it's doing there is learning what the next word should be, or more specifically the next token, in order for it to generate language; that's why it's called generative AI. Or, for an image, determining how to change this pixel in a high-resolution image while the computer iteratively renders it. So it's a completely different way to use a probably more sophisticated type of machine learning, but all these things are built on machine learning. So machine learning is well defined. When we start using the word intelligence, specifically within the term AI, that's when things get really, really fuzzy. So there's a lot of technology that's well defined, and then there are also buzzwords that can mean whatever you want, depending on the context.
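Eric's description of supervised learning, deriving a model from labeled historical examples and then scoring never-before-seen cases, can be sketched in a few lines of Python. The customer features, counts, and naive-Bayes-style counting model below are all invented for illustration, not drawn from the episode:

```python
from collections import defaultdict

# Labeled historical examples: (features, outcome). In practice there
# would be thousands of rows, e.g. past customers and whether they bought.
training = [
    ({"visited_pricing": 1, "opened_email": 1}, "buy"),
    ({"visited_pricing": 1, "opened_email": 0}, "buy"),
    ({"visited_pricing": 1, "opened_email": 1}, "buy"),
    ({"visited_pricing": 0, "opened_email": 1}, "no_buy"),
    ({"visited_pricing": 0, "opened_email": 0}, "no_buy"),
    ({"visited_pricing": 0, "opened_email": 1}, "no_buy"),
]

def fit(rows):
    """'Supervised' learning from labeled rows: count how often each
    feature value co-occurs with each outcome (a naive-Bayes-style model)."""
    label_counts = defaultdict(int)
    feature_counts = defaultdict(int)
    for features, label in rows:
        label_counts[label] += 1
        for name, value in features.items():
            feature_counts[(label, name, value)] += 1
    return label_counts, feature_counts

def predict_proba(model, features):
    """Score a never-before-seen case: return a probability per outcome."""
    label_counts, feature_counts = model
    total = sum(label_counts.values())
    scores = {}
    for label, n in label_counts.items():
        p = n / total
        for name, value in features.items():
            # Laplace smoothing so an unseen combination doesn't zero out
            p *= (feature_counts[(label, name, value)] + 1) / (n + 2)
        scores[label] = p
    norm = sum(scores.values())
    return {label: p / norm for label, p in scores.items()}

model = fit(training)
# A new case that was NOT among the training examples:
probs = predict_proba(model, {"visited_pricing": 1, "opened_email": 0})
print(probs)  # "buy" comes out around 0.8
```

Real deployments would swap this toy counting model for a library classifier, but the shape is the same: labeled history in, a probability per new case out.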

Ben Owden

And we'll come back to this, because something else I'd like to talk about is AGI, but I'll bring that up later. In your work, you push back against the hype around big data: the true power is not so much in the size, but in how we use it. And there's a line, "big data equals small math," suggesting that many people settle for surface-level insights rather than deeply predictive modeling. And there's a paradox here, and I'm going to bring in another thinker in the space, and I'll explain later why, because he was mentioned in the preface of your book: Nassim Taleb, who basically warns that having more data can lead to more noise, more false correlations, and you can miss out on things you should be aware of, like risks ahead of you. So is big data making us wiser, or is it just making us confident in our own, I wouldn't say illusions, because that's a strong word. But is big data alone going to make us wiser, considering you say big data equals small math?

Eric Siegel

I mean, data is a great resource. The big data movement, we don't really call it that anymore; that buzzword's become a bit passe. But what it was really referring to is the abundance of data, because it gets collected automatically as a side effect. It's sort of an organic effect of conducting business: all the transactions get logged and what have you. So it's not necessarily being collected in order to learn from it with machine learning, but it serves that purpose, right? It represents the collective experience of an organization: which customers did respond this way or that, or canceled, whatever the outcome or behavior that may be valuable to predict, you've got many examples of that within that data. So let's use it to learn. And the more data, the better, in general. I think what you're alluding to is confirmation bias, or other kinds of bias where the data could be misused. But if the data that you're learning from is balanced, in the sense that it's representative of the kinds of situations you're going to apply what's been learned to, that you're going to make predictions on moving forward, then it's sound, and the more the better. So the more cases you have to learn from, the better. That doesn't apply for humans, right? We can't process 100,000 or a million cases; that would be impossible. But the machine can. So once we define the way it's going to churn through this and try to find these patterns or formulas, it's quite effective. So yeah, that's the idea of being data-driven, of being empirically driven. The ultimate manifestation of that is machine learning. In fact, that's not an advertisement for machine learning; that's the definition of machine learning. 
Machine learning is the set of methods or algorithms or computer programs that try to make these predictions. And prediction is what it takes in order to improve an operational decision. Should I market to this person? Depends on whether we think they're going to buy, right? Otherwise you're wasting a dollar to contact them. Should we investigate the transaction for fraud? Should we block this credit card transaction? Again, it depends on our best prediction of whether it's going to turn out to be fraudulent. So prediction is always sort of the holy grail for improving each individual operational decision. And the best way to predict is to learn from historical cases.
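The "should I market to this person?" decision Eric keeps returning to reduces to comparing expected value against the cost of contact. A minimal sketch, with the dollar figures as invented assumptions rather than numbers from the episode:

```python
def should_contact(p_buy, profit_if_buy=40.0, contact_cost=1.0):
    """Contact a customer only when the expected profit from contacting
    them (probability of buying times profit per sale) beats the cost."""
    return p_buy * profit_if_buy > contact_cost

# With an assumed $40 profit per sale and $1 per contact, the break-even
# probability is 1/40 = 2.5%. No certainty required, just the odds:
print(should_contact(0.05))  # True: expected $2.00 > $1.00 cost
print(should_contact(0.01))  # False: expected $0.40 < $1.00 cost
```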

Ben Owden

I like that. I mentioned Nassim Taleb before, and now I'll say why I'm bringing him up: in the preface of your first book, Thomas Davenport says this book and the ideas behind it are a good counterpoint to the work of Nassim Taleb. Because in most of his books he's really anti-prediction, so to speak, saying there's no way of actually knowing what's to come. Of course, he focuses heavily on the big black swan events, not so much day-to-day human behavior, which is the focus of a lot of the examples you use in how businesses can deploy predictive analytics: more the micro level, day-to-day operations, rather than big, almost cosmic-level events that shake regions or sometimes the entire world. But Taleb basically argues that human beings should prioritize robustness over optimization, because there are so many risks ahead of us. And in the way AI is being built and talked about today, and part of the reason I was excited to have this conversation with you is that, in my estimation, you seem to have a more grounded view of what it is. Whereas if you look online at the people who have a voice and can speak into this, sometimes it can feel like they're talking about a future that, when you look deep into it, you realize, I don't think we're there yet. And I love how you grounded our conversation at the very beginning. So in the way we talk and think about AI today, this idea of optimization for short-term efficiency, that it's going to replace us, and all of those things, is that the right way to think about it? Because the optimization of AI sometimes doesn't quite function the way a human does, because, like you're saying, it looks at historical data. 
Whereas a human being, we can guess, but inherently we have the ability to look forward, to ask questions, and to fail, sometimes even to choose to fail, even in domains like science, which grows and moves forward mostly through failure. Whereas predictive analytics, which is historical, is about how we avoid as many failures as we can. Is that the right way of conceptualizing it, would you say?

Eric Siegel

Again, the difference is whether you're trying to predict over many, many cases, where you can afford to be wrong a bunch of times; you're going to win by doing better than random guessing. So that's how to resolve it. That's how I can say Nassim Taleb is right and so is the field of predictive analytics. They're both right. So Taleb wrote those books, Fooled by Randomness...

Ben Owden

Skin in the Game, Antifragile, I think The Black Swan.

Eric Siegel

Yeah, yeah, The Black Swan. So Fooled by Randomness and The Black Swan both allude to the unpredictability, right? And Antifragile is like, let's create this kind of robustness. So, yes, the world is unpredictable. What do I mean by that? I mean it's impossible, in general, to predict, well, is the economy going to go up or down next quarter? It's a fool's errand to put a lot of faith in any particular prediction on that made by anybody or any system. But if we're predicting over a hundred thousand customers who's going to purchase next quarter, I can do a great job identifying those that are five times more likely than average to buy and others that are five times less likely than average. We can put those odds, right? So, you know, Target, you mentioned predicting who's pregnant. In general, it's not a magic crystal ball; they're not going to be able to say for sure whether each individual is pregnant. There may be a handful where, based on their buying, it becomes very apparent, but for the most part, it might be like, okay, we think this female customer is three times more likely than average to be pregnant. She still may only have a 3% chance of being pregnant, but a lot more than average. And that kind of difference in the odds makes a huge difference for targeting all these types of operations. So when you're playing the numbers game of putting those odds, I can say I predicted something. It's not unpredictable in the sense that technically I can predict; even if I make a wrong prediction, it's still a prediction, I'm still predicting. And if those predictions end up being correct significantly more often than random guessing, or than any other simple baseline for comparison, then that's king. That's huge monetary value in improving all these main large-scale operations on a case-by-case basis. So we enjoy the benefit of large numbers. 
We can sleep well at night knowing that Nassim Taleb is right and the world's unpredictable. But at the same time, it's not entirely unpredictable. You can put odds on things, and those odds tend to pan out over many, many cases. And then, if you want to turn to his book Antifragile, well, this is exactly how you generate an antifragile system: by learning from historical cases, so that you're improving based on all the times you got it wrong in the past and getting it wrong less often in the future, over many cases, tipping the odds just a bit, right? I mean, predicting three times better than guessing could easily multiply the return on investment of a targeted marketing campaign by a factor of five. Five times the profit. That's quite possible. That's the way these numbers games work out. It's just back-of-the-napkin arithmetic; in fact, I show that exact arithmetic in most of my keynotes and in my first book, Predictive Analytics. So that's what's important about discerning value from the hype. For a while, the hype around predictive analytics had been: look, it's a magic crystal ball. The problem with the Target story about predicting who's pregnant, again, is the claim that it can tell who's pregnant by looking at your shopping. You can't tell, in the sense of highly confident predictions, in general. We don't have a magic crystal ball. You can't, in general, predict who's going to click, buy, lie, or die with super high confidence for most cases. But you can very confidently put odds that are a lot better than random guessing and thereby improve large-scale operations. And in fact, that speaks to limitations in general, including generative AI and these large language model chatbots, which are so incredible. 
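Eric's back-of-the-napkin arithmetic, that predicting roughly three times better than guessing can multiply a campaign's profit about fivefold, can be reproduced directly. Every figure below (list size, response rate, profit per sale, contact cost) is an illustrative assumption, not a number from the episode:

```python
def campaign_profit(contacts, response_rate, profit_per_response,
                    cost_per_contact=1.0):
    """Profit of a mail campaign: responder revenue minus contact costs."""
    responders = contacts * response_rate
    return responders * profit_per_response - contacts * cost_per_contact

customers = 1_000_000
base_rate = 0.01     # 1% would respond if everyone were contacted
value = 110.0        # assumed profit per responding customer

# Untargeted: contact the whole list.
mass = campaign_profit(customers, base_rate, value)

# Targeted: a model selects the top 25% of the list, among whom the
# response rate is 3x the average (3%): "three times better than guessing".
targeted = campaign_profit(customers * 0.25, base_rate * 3, value)

print(mass, targeted)  # 100,000 vs 575,000: about 5.75x the profit
```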
But no matter how sophisticated the underlying models and technology, you can't get around the fact that predicting who's going to click, buy, lie, or die, these types of outcomes or behavior, yields diminishing returns as the technology gets more complex. We don't have a magic crystal ball. There's going to be an upper limit on exactly how well we can predict these things. So the numbers game is: let's predict them as well as we can and act on those predictions to improve operations. That's where it gets credible. So I feel that the antidote to hype is to focus on concrete, credible value propositions rather than pie-in-the-sky promises about where everything is headed.

Ben Owden

I love what you just said there. As you were explaining it, a question I kept asking myself is: what's the right relationship we should have with predictive analytics? Because I think inherently, as human beings and even as corporations, there's a deep desire for control and certainty. And when we have a tool that can predict, maybe more successfully than a team of people, there's a tendency to bank on that certainty and treat the system as, like you're saying, a magic crystal ball, a god of some sort that's all-knowing, and not realize that it's a system, it can be wrong, and there have to be self-correcting mechanisms in place to make sure the iteration and evolution you were talking about actually take place. So what's the right way to have that relationship, understanding that maybe as human beings we are flawed, because we do crave the control that's found in being certain?

Eric Siegel

Yeah. And we want to deify technology. I mean, that's exciting: if the technology has almost godlike powers, awesome, right? If it can predict like a magic crystal ball, that's something to hang your hat on. It sells. It can be very exciting. So there are sort of two problems with that level of overpromising. One is that people may essentially misuse it in the way you were just alluding to, giving it too much authority. They trust it too much. Like, I don't know, do you guys have the Magic 8 Ball thing, where you shake the ball and turn it over and a yes-or-no answer comes out? It's just random, right? It's just a little novelty toy from like the 1980s, and it's sort of like, let's just ask it a question and believe the answer. It's almost the same psychology, deferring to the system and saying, okay, well, it predicts yes, so that's got to be the right answer. But then the other problem is that at some point decision makers get wise, and they're like, wait a minute, this thing is wrong often. And then all of a sudden, because it doesn't match the overblown expectations and overpromising, they're disillusioned and they write the whole thing off, which is also a huge mistake. So the real value, and I call it the prediction effect, is that a little prediction goes a long way. Predicting better than guessing is generally more than sufficient to drive value. We don't need magic crystal balls, and we don't have them. But what we do have is the next best thing: probability. Now, probability is kind of a deal killer, right? It's not the most exciting conversation topic at a party. When I'm trying to make friends, I don't typically bring up probability. But it's not that arcane. All it is is a number between 0 and 100 of how likely something is to happen. Now, there's rocket science involved. 
There's complexity in the math that was used to derive that probability, to say, hey, the chance that this customer is going to buy is 65%, whatever it is. The math to do that, and more specifically the math to learn from lots of data to create a model that in turn generates that probability, that's the rocket science. But once you have the probability, that's great. We don't have magic crystal balls, I hate to break it to you, but the next best thing is those numbers, on that scale of just how likely it is for each individual case. So that's the shift that's got to take place, right? I mean, if you like the pop-science type of writers, there's also Nate Silver, and he's like, we've got to think more probabilistically, which is a great message. But it falls a little bit short, because it's not just that we need to think probabilistically; we need to do probabilistically. And that's the shift for improving large-scale operations: to get not just the data scientists but the less technical business leaders and decision makers and stakeholders to say, yes, we're going to act probabilistically. We're going to systematically make decisions based on numbers. You could call them predictive scores if that sounds more friendly, but they're probabilities.
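"Doing probabilistically", as Eric puts it, just means systematically acting on each case's predictive score. A sketch with invented customer scores, showing both the lift framing ("times more likely than average") and rank-and-target selection:

```python
# Hypothetical model outputs: probability that each customer buys.
scores = {"ana": 0.65, "bo": 0.03, "chen": 0.22, "dee": 0.09, "eli": 0.41}

average = sum(scores.values()) / len(scores)  # 0.28 here

# Lift: how many times more likely than average each customer is to buy.
# No individual prediction is certain; the scores just tip the odds.
lift = {name: p / average for name, p in scores.items()}

# Act on the scores: target the half of the list with the best odds.
ranked = sorted(scores, key=scores.get, reverse=True)
target_list = ranked[: len(ranked) // 2]
print(target_list)  # ['ana', 'eli']
```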

Ben Owden

And I think that's not so easy sometimes, depending on the organization a leader is working for. Because if certainty is incentivized, if overall confidence in a course of action is incentivized, then there's a tendency to think in certainties rather than in the probabilistic way you've just explained.

Eric Siegel

My counterargument would be: if there's a high-powered executive who's got all the charm and charisma and that real confident personality, and they're saying, look, I have this Magic 8 Ball toy from the 1980s that pops up a random yes or no, and I'm telling you, this thing's right most of the time, at some point it's like, you know, get with the program, let's be credible, right? But taking a step back, if you're like me and you believe the human race is just another type of animal, there's going to be some point where not everyone's going to jibe with acting en masse. So that might be a challenge. And by the same token, my general propensity is to clarify and at the same time eschew the hype. There was a very wise statement I heard recently: basically, the people who fight against AI hype sound smart, and the people who generate the AI hype get rich. So maybe I'm on the wrong side of history.

Ben Owden

Oh, definitely. I think if you go online, you'd see where the incentives are: they're around generating the hype, talking about the certainty of where the world is going and what this technology is doing and changing and transforming. That's the conversation. And speaking of that, what I love about the, I guess, democratization of the technology is that it's making it accessible and cheaper to deploy for organizations that maybe couldn't before. But the question I have is: what sort of thinking skills are required to really harness the power of predictive analytics and machine learning? I was recently talking to a neuroanatomist about this idea of AGI, and we'll come back to that in a few minutes, who said that for there to be AGI, you have to emulate both hemispheres of the human brain: the left, which is more analytical, and the right, which tends to be more immersed in the present moment and has the curiosity to pursue the unknown, almost a childlike way of looking at the world. And we're not there yet. So people who tend to be very strong in their right hemisphere, coupled with where AI is at the moment, that's sort of a good combination of skills, and maybe that's the current version of AGI. But you started this in the 90s, and the technology has gotten cheaper to deploy over time; these tools and models and systems are accessible to most organizations now in a way they weren't in the 90s or even the early 2000s. Yet we still have to actually deploy and get the best value from the insights, and not just the insights, but the predictive insights from the data we have. This applies even in my own country: the government collects a lot of data, but I'm not seeing or hearing anything about what the use case for this data is. 
So many organizations just know that we need to collect data, we need to have the data, but many of them are not really harnessing the power and the value of this data. And so the question is always: is it that there's a lack of skill to do this? And if so, what's the skill? Is it thinking tools? What is the missing piece, so to speak?

Eric Siegel

Well, I think it's basically just that idea of being willing to act on a probability, which is just a number between 0 and 100, or 0 and 1. But let me take a step back. First of all, again, there are two very broad categories of AI: predictive AI and generative AI. If you've never used one (and I'm not speaking to you, because I'm sure it doesn't apply to you, but to listeners), if you haven't actually gone and tried one of the chatbots, you know, OpenAI's or Anthropic's, and there are a million competitors, DeepSeek now.

Ben Owden

Yeah.

Eric Siegel

Then what are you waiting for? I mean, it's free and it is amazing, and it'll open your eyes. It's very interesting, it's fascinating. You can try to trick it, you can see what it's capable of, ask it personal questions: I can't get my kid to get up in the morning. Anything work-related. Or ask it to write a letter on some tricky topic. I mean, it's absolutely fascinating and quite useful in lots of ways. There's no training required, and in fact, by definition it's extremely easy for anybody to use, because that's the point: it operates in human languages like English. This is brand new as of two years ago, right? I couldn't say this before, but you can basically interact with it like you do with a human, because it's trained over an extraordinary amount of human language. Now, I'm not saying you should expect it to do everything a human can do. Some things it can do better, and many things it can't do as well, but it will be quite impressively responsive to your request. That's just a resource available to everybody, and it's intrinsically meant for humans to use without any particular technical skill. Now, if you want to go to the other kind of AI, which is what you turn to to improve large-scale operations in marketing, fraud detection, credit risk, operations and logistics, and all that kind of stuff for an enterprise, that's predictive AI, or predictive analytics, excuse me. And in that arena, what you need to get a handle on is a semi-technical understanding, and it's very accessible. This is really the theme of my more recent book, The AI Playbook. It basically just comes down to: what's predicted, how well, and what's done about it. Now, by predicted, in all of my writing and my understanding of the field, we don't necessarily mean a hardened fact, a confident yes or no, but rather a probability, right? Sort of a sliding scale.
How likely is this particular outcome or behavior? For some cases it will be very confident: we know this is a lost cause, this customer's very unlikely to buy, this customer's going to cancel, you know, whatever. But for the majority of cases it's somewhere on that spectrum, right? Between highly confident and not confident at all, that is, between 0% probability and 100%. You don't get the zeros and hundreds very often; you're on that spectrum, and you just want to make use of that. So what is involved in making use of that? For any given project, it's what's predicted, or what are you putting those odds on, and then what do you do about it? What's the operationalization, the deployment, the way that you're putting it into action and improving operations? That pair, what's predicted and what's done about it, defines the use case, defines the project. And then of course you also care about how well it predicts. So: what's predicted, how well, and what's done about it. Those are the notions. Right. And for lots of the strategy and tactics and planning of the project, you don't necessarily need to get too much into the weeds of the rocket-science part, the learning from data and how the modeling works. You can leave that largely to the data scientists, but they need to interface with the stakeholders. They can't do the project in a vacuum, because it's a business project first, not a technical project. It's a machine learning project that's meant to improve business operations, so it's an operational improvement project. But we don't call it that, because it sounds boring. It sounds much more exciting to say it's an AI project, but you're using some core technology you may refer to as AI, and the purpose is to improve business operations; otherwise, why are you doing it? So anyone on the business operations side plays just as important a role.
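The tradeoff described here, wasting a dollar on a contact versus the chance of a sale, can be sketched as a simple expected-value rule. The numbers, threshold logic, and function name below are hypothetical, purely for illustration:

```python
# Deciding whether to contact a customer based on a predicted
# purchase probability: a minimal expected-value sketch.
# All figures here are made up for illustration only.

CONTACT_COST = 1.00    # cost of one marketing contact (the "wasted dollar")
PROFIT_IF_BUY = 40.00  # profit if the contacted customer purchases

def should_contact(p_buy: float) -> bool:
    """Contact when expected profit exceeds the contact cost."""
    expected_profit = p_buy * PROFIT_IF_BUY
    return expected_profit > CONTACT_COST

# The model never outputs a hard yes/no, just a probability on the
# 0-to-1 sliding scale; a business rule turns it into a decision.
for p in (0.01, 0.05, 0.20):
    print(p, should_contact(p))
```

Even a customer with only a 5% chance of buying is worth contacting under these hypothetical numbers, which is the point about tipping the odds at scale rather than predicting with certainty.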
And in order to get involved and collaborate deeply across the project, end to end, you need to get involved at that level of detail: what's predicted, how well, and what's done about it. So, involving those details, the business practice, the paradigm, the playbook that I present in that book, The AI Playbook, is called BizML, the business practice for running machine learning projects. And it is a six-step process that deals end to end with those three concepts: what's predicted, how well, and what's done about it.

Ben Owden

And could you very briefly share with us the six steps? And again, I highly encourage people to actually go buy the books.

Eric Siegel

Sure. So the first three steps are kind of your pre-production steps: plan the project and get everyone on the same page. Is this worth doing, and why? How's it going to help operations? Those three steps involve those three concepts, but not quite in that order. So the first step is to define the actual project, the value proposition, which is that pair: what's predicted and what's done about it. So: who's going to buy, then market to them; which transaction could be fraud, then block the transaction; whatever it is. There are a million options, so many ways. This is an extremely horizontal type of technology. The Harvard Business Review calls machine learning the most important general-purpose technology of the century, because it's so widely applicable. So step one is to determine that in a broad sense. But then you've got to get a lot more specific about the first part of those two, what's predicted. You can't just say, hey, which customer is going to buy? It's got to be: okay, which customers that have been around for three months are going to increase their purchases over the next three months by 80%? Whatever pertains to the particular targeting of marketing or operational decision, you have to be really, really specific, in an almost semi-technical way, about exactly what outcome or behavior is being predicted. So defining that prediction goal is step two of six. And the third is defining the metrics, which is: how good is it? This is a very undernourished area. People just don't talk in a concrete way, nearly as often as they should, about how good the AI or the machine learning model is, in the sense of how well it predicts and how much business value, in, let's say, monetary terms like profit and savings, it's likely to deliver once deployed, so that everyone's on the same page about the goals and the purpose and all of that. So there are those two different kinds of metrics, technical and business.
For the technical metric, people often talk about accuracy; that's almost always the wrong metric. If the thing you're trying to predict happens rarely, which is usually the case (fraud, for example, is very rare, which is a good thing), then if you predict everything's not fraud, you're correct 99.9% of the time. That's literally 99.9% accuracy, while never correctly predicting fraud, because you're always just predicting no. Right? So accuracy is usually the wrong metric. There are other metrics: precision, recall, area under the curve, F-score. They're all very technical and arcane. The data scientists love them, but they only tell you that the model performs well in the sense that it's significantly better than guessing, or better than some other simple baseline. They don't tell you anything about the potential business value, in terms of profit or savings, that the business stakeholder cares about. So when you go to those business metrics like profit and savings, that's a necessary step, but it's generally not done. And actually, that's why I co-founded the company Gooder AI less than two years ago. What we do is, we are the first full-scale platform for valuating models. Not just evaluating, which usually refers to those technical metrics, but estimating, or forecasting, the business value in monetary terms, depending on how the model is going to be deployed. Anyway, you asked for the six steps; those are the first three. The other three are the production steps, and they're the same as machine learning methods have been since back when we called it data mining, and even before that. Go back to the 60s, the first time people learned from data in order to target marketing and make credit decisions; it's always involved this. Prepare the data. Train the model: that's the rocket science, that's where you're learning from the data. And deploy the model, actually integrate it, so you're acting on those predictions.
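The accuracy trap described above can be reproduced in a few lines. The transaction counts below are made up, chosen only to match the 99.9% figure in the example:

```python
# Why raw accuracy misleads when the outcome is rare (the fraud example):
# a "model" that always predicts "not fraud" scores 99.9% accuracy yet
# catches zero fraud. Counts are hypothetical, for illustration.

transactions = 100_000
fraud_cases = 100                      # a 0.1% fraud rate

# The do-nothing baseline: predict "not fraud" for everything.
correct = transactions - fraud_cases   # right on every legitimate transaction
accuracy = correct / transactions
print(f"accuracy: {accuracy:.1%}")     # 99.9%

# Recall on the fraud class tells the real story:
fraud_caught = 0                       # it never predicts fraud
recall = fraud_caught / fraud_cases
print(f"fraud recall: {recall:.0%}")   # 0%
```

This is why class-sensitive metrics like precision and recall (and, as argued here, business-value estimates on top of them) matter more than headline accuracy.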
Those three operational steps involve the first three things that we determined, but that's where you're actually going into production, doing the number crunching and deploying it. But the bigger theme, across all six steps of BizML, is that it's not just a data science thing. It's got to involve deep collaboration with business-side stakeholders. If the business stakeholders don't get their hands dirty, their feet will get cold. They get cold feet: they hesitate and fail to deploy. They fail to authorize deployment because they don't have a handle on understanding how it's going to be valuable, even though it doesn't predict with extremely high confidence, et cetera. So they've got to be involved end to end.

Ben Owden

I love that if you don't get your feet dirty, you get cold feet.

Eric Siegel

Twice. If you don't get your hands dirty.

Ben Owden

Yeah, your hands dirty, sorry. You're going to get cold feet. That's a nice line. And I love what you've explained, because I just kept hearing human, and then collaboration, collaboration, collaboration, which is what you rarely hear. When you hear the buzz around AI, it's just like, oh my goodness, it's doomsday for human beings. So maybe now we can pivot, as we're winding down our conversation, to this concept of AGI, Artificial General Intelligence. It's been explained in many ways, but I would focus more on, I guess, the definition that it's a system that must be able to create new knowledge, not just optimize existing patterns. So do you think, and again this is going to be a prediction, that machine learning will ever cross from being a sophisticated system that matches patterns to being a true explainer of sorts, a system capable of creating new knowledge the way human beings are? And if that's something you think is possible, what do you think would be the first sign of this shift, where we're getting to a point where machines are truly becoming intelligent in the way that human beings are?

Eric Siegel

Well, intelligent often means AGI, Artificial General Intelligence, but that really is computers capable of everything a human can do. The first part of your question, though, was a true explainer capable of generating new knowledge. Again, those words are a bit subjective, but if I were just talking in a neutral context, I'd say, yeah, it already does that. I mean, I think these chatbots are absolutely amazing. You can get them to do things; one can be a thought partner, you know, for brainstorming. You're going to come up with stuff you wouldn't otherwise. Now, if you're cynical, you can say, well, that's just because it's semi-randomly playing with words, and that's helpful. There's a little truth to that, but actually there's a lot more to it. It definitely has a sense of the meaning of words and language, sentences, paragraphs. Meaning is, you know, a loaded word. Meaning the way the human mind and soul get it? I don't know about that. But to a very large, unprecedented degree, it has ascertained the meaning of language. It's amazing what it does. But that doesn't mean it's approaching AGI in the generally accepted definition of that word, which is basically an artificial human, capable of anything, or virtually anything, humans can do. Which would mean what? You can onboard it like a human employee and let it rip. It's fully autonomous, just as much as any human. Now, at some point you start asking, well, do I think that we'll ever get there? And I think that AGI is in principle quite possible. I don't know if it's going to take 1,000 years or 5,000 years, but I don't think it's coming in 10 years, like a lot of experts claim. I think that's hype. I think that, as incredible and impressive and accelerating as technological progress is, and it's almost like the acceleration is accelerating, right?
It's moving so quickly, and yet I still feel that none of it represents a concrete step toward AGI in the sense of artificial humans, as seemingly humanlike as these large language models are. Well, what did they learn from? An incredible amount of observed human behavior, in the form of writing, pictures, video, and so on. But let's just talk about writing, right? The entire Internet and all the books that have ever been written. And all the learning cases come down to: look, I've written all these words up to the middle of this sentence, in the middle of this paragraph; what's the next word right after that? It's trying to predict that, and once it's predicted that, it does the same thing again: well, what's the next word? So this amounts to as incredible a number of examples of next-word prediction as you can possibly imagine. So how much about the human mind can be reverse-engineered from that learning material? Well, apparently quite a bit, because when you play with these chatbots, there's no question, they're amazing. However, you've got to assume that there's a ceiling, because we're not going into the human mind and dissecting the neurons. It's still just this behavior that we're learning from, and it can only represent a very limited amount of human capabilities. And I believe that the difference between what these models can do and what humans can do more generally will become increasingly apparent.
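The next-word-prediction objective described above can be reduced to a toy sketch. A real LLM uses a neural network over vastly more context, but the training signal has the same shape: given the words so far, predict the word that follows. Everything here (the tiny corpus, the `predict_next` helper) is illustrative only:

```python
# A toy next-word predictor: count which word follows which in a
# training corpus, then predict the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Each adjacent word pair is one training example of next-word prediction.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it followed "the" twice, "mat" once
```

Scaling this idea from word-pair counts to a deep network trained on the whole Internet is, in rough outline, the jump from this sketch to a large language model, which is also why the learned behavior is bounded by what the observed text can reveal.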

Ben Owden

Thank you for that, and I'm glad that at some point in our conversation there was that clarity. I love the anecdote you shared about how the people on the hype side of AI get rich while the people fighting the hype sound smart. I think that speaks to the same thing: there's no way of knowing when exactly we will get there, but all signs point to a very distant future. And so there's a question that we ask most of our guests who come on the show, the 1-1-1. What is the one book that you've read at some point in your life where you said, I wish I had a time machine and could go back and read this book maybe 5, 10, 15, 20 years earlier, assuming it was published then? What is the one habit that you've learned to practice over time where you say, maybe the quality of my life would be much better if I had started practicing this earlier? And what is the one personal value that you will not compromise, no matter the cost? So I'll ask you that question in a bit. But before we get to that, there's a question that I have. I love how you wrote your books, particularly your first book, because there are so many use cases and examples and stories of people really harnessing the value of predictive analytics. From your experience working with organizations, speaking, and hosting the conferences and summits, beyond just working with clients directly, you hear stories of how predictive analytics is being deployed around the world. In the last 15 years of doing this kind of work, what are some of the best use cases that you've heard? Because you say this is the power to see who's going to lie, who's going to die. I mean, that's a big promise.
What are some of the best use cases that you've come across where you thought, wow, this is just another testament to the true power of predictive analytics, beyond just the business cases? I think it's easier to land there, but I'm sure the value and the power, like you said, is very broad in how it can be used. I don't know if there's a system out there that can predict whether or not a couple's marriage is going to last, but it would be interesting if something like that were available in the market. So what are some of the best use cases that you've come across?

Eric Siegel

Predictive analytics. My book does list a case of somebody predicting divorce, but that kind of thing you can only predict better than guessing, right? You're not going to be able to say, hey, this couple is doomed and this one's definitely going to succeed. It's just putting odds, like any other use case. But, you know, I think the healthcare applications are extremely important and effective. For now, the standard for pharmaceutical and other kinds of medical treatment evaluation is just treatment versus control, and you're saying, hey, look, on average this tends to help significantly, maybe an 18% improvement in health outcomes, or maybe more than that. But it's not individualized, it's not personalized, in the sense that it may be that when you take this pill, there's a better chance than not that it will improve your condition, but there may be individual patients where it's actually hurting. So wouldn't it be better to be able to predict, for each individual, based on the characteristics of that individual patient, whether this treatment is going to be effective? That would be the difference between the standard treatment-versus-control approach and moving to predictive analytics. And there are so many ways in which it applies in healthcare, and indeed plenty of places where it applies for social good. I wrote an article on the ways it helps with social good, and I just blurbed a book coming out on machine learning and AI for social good. That includes, actually, just fundraising, which is like targeted marketing, marketing like any other, except that instead of selling something where you have to send the product, you don't; you just send a thank-you letter, right, because they made a difference. But the number crunching is very similar, and that can be a really important way to boost the effectiveness of a nonprofit organization.
But it also applies in the actual operations, the very purpose, of these organizations. For example, we had a keynote speaker at the Machine Learning Week conference a few years ago on predicting where there's the highest risk of child neglect and abuse. I've seen this applied to child trafficking and other kinds of trafficking. Our conference series, Predictive Analytics World, had a climate-specific event, and we've had climate keynoters; there are all sorts of ways it applies for climate. So, taking a step back, this is a tool like any other, right? It's like a knife. You could stigmatize knives: I went to this person's house and they have knives just sitting around the kitchen! That's what it's like when people say, predictive? They're trying to predict me? Look, it depends on how you're using it. It's a tool, it is amoral, and it has a lot of potential for good. And using it just for the purposes of capitalism is not always bad by any means. That's what makes the world more efficient, and that does translate to lots of consumer benefits, even if capitalism in general also contributes to the increasing imbalance of power, which is a huge problem. I would say that all kinds of technology under the umbrella of AI, and all technology in general, have the potential to contribute to the increasing imbalance of power. That's a huge issue for the world. So yeah, there are all sorts of ways in which this can improve the world.

Ben Owden

Thank you for that, and especially for bringing up the ethics involved and the fact that it's amoral, which means it can be used to build, or it can be used to manipulate and exploit, and that's a whole different conversation, so to speak. But what we hope for is ethical use of it. Now, the last question that I have for you is the 1-1-1, which, as I said, is a question we ask almost all of our guests. What's the one book that you've read where you say, I wish I'd had this book earlier in my life? What's the one habit that you eventually started to practice in your life where you said, ah, maybe the quality of my life would be much higher if I'd started doing this much earlier? And what's the one personal value that you will not compromise, no matter the cost?

Eric Siegel

Wait, so the first one was the book and the last one is the personal value. And what was the second of three?

Ben Owden

The habit.

Eric Siegel

Oh, a good habit that keeps things going.

Ben Owden

Yeah, it's something that you started practicing maybe much later in life that you said, I wish I'd started this earlier.

Eric Siegel

Sure, sure. The book: I'm going to pull out a really stodgy, kind of old book. It's called Crossing the Chasm, but it's still kind of the bible for startup companies, and I'm an entrepreneur. Gooder AI is a very well-focused startup that I co-founded two years ago, but in a previous life, a couple of decades ago, I was involved with a couple of startups before I'd read this book. Basically, the book says to pick a market niche. It's about the value of having a very finely scoped focus on exactly what problem you're trying to solve and who you're trying to sell the solution to. And the benefits of being well focused are many, including the fact that your target market, your market niche, becomes a community, a small enough community that they talk to each other and you generate buzz. Like most popular books, it sort of has one simple message, but it's really nice to read the book and see a million examples of it. It starts with the example of the Palm Pilot, just to give you a sense of how old the book is, at least in its first edition; I haven't read the latest edition. That's by Geoffrey Moore. As far as habits: I used to think that the easiest thing to be lazy about was exercise. That is to say, on any given day there are half a million excuses to skip exercise on that day. Then, since August 2008, when I was coming around the corner to turning 40, I started going to the gym seven days a week, and it stuck. So I go to the gym, and I don't do a huge workout, but I go 365 days a year. Or, okay, I'll be honest, probably 362.