SaugaTalks
🎙 Welcome to SaugaTalks – where technology meets insights! We explore the latest tech trends shaping industries and driving innovation, from Artificial Intelligence and Machine Learning to Intelligent Automation and the Future of Work.
Each episode features visionary experts and trailblazing entrepreneurs, uncovering strategies and breakthroughs that set the pace for the future.
Join the conversation by liking, commenting, and sharing our episodes to spark meaningful discussions with like-minded professionals. 🤝
📩 For Tech Companies: SaugaTalks provides a platform to amplify thought leadership, engage with industry peers, and connect with an audience passionate about AI and digital transformation. Our podcast features in-depth discussions with AI pioneers and tech entrepreneurs on the cutting-edge topics transforming industries.
Why Partner with SaugaTalks?
- Expert Positioning: Feature your experts on SaugaTalks to build credibility and authority.
- Innovative Content: Showcase your AI, Automation, and Digital Transformation solutions.
- Global Reach: Connect with a worldwide audience of tech decision-makers and enthusiasts.
- High-Value Content: Collaborate on interviews, panel discussions, and sponsored episodes tailored to your goals.
Ways to Collaborate:
- Guest Appearances: Share insights on emerging tech with our audience.
- Sponsored Episodes: Align your brand with a tech-savvy audience.
- Content Collaboration: Create co-branded panels or discussions on industry trends.
- Podcast Promotion: Advertise your solutions and events on SaugaTalks.
Let’s discuss how we can partner to create impactful content and enhance your brand’s visibility!
Connect with SaugaTalks & Irene Lyakovetsky:
- Subscribe on YouTube: @SaugaTalks
- Follow on X: @lyakovet
- Join on LinkedIn: SaugaTalks on LinkedIn
Struggling to Get Execs on Board with AI? Here's How to Bet Big Without Regrets
In this episode of SaugaTalks, host Irene Lyakovetsky chats with Patrick Sullivan, VP of Strategy and Innovation at A-LIGN, TEDx Speaker, Forbes Technology Council member, AI Ethicist, and ISO/IEC JTC1/SC42 contributor. They dive into the high-stakes "Boardroom AI Battles"—how to convince skeptical executives to invest boldly in AI without regrets.
Key discussion points:
- AI governance as a value creator: Evaluate, direct, and monitor for optimized risk and cost (per ISACA standards).
- Informed vs. uninformed bets: recent research (MIT/Harvard) suggests only ~3% of AI initiatives deliver real value, largely because they launch without solid governance plans.
- Success stories: A German synthetic media startup hit $4B+ valuation via ISO 42001 certification and trust-building.
- Pitfalls: A major US HRIS provider faces lawsuits over unchecked AI bias, despite NIST frameworks.
- Emerging risks: Deepfakes (e.g., $25M bank fraud) and bias in sociotechnical systems.
- Executive tips: Ask "Can I do this without AI?" Balance FOMO with reputational risks through use cases and context-specific strategies.
- Future outlook: 2026 as the era of formal AI governance, with market demands for assurance in sensitive data handling.
Perfect for C-suite leaders, AI strategists, and innovators navigating digital transformation. Whether you're worried about moving too fast or being left behind, this episode offers actionable advice on ethical AI adoption, bias mitigation, and scaling from pilots to production.
Subscribe for more on AI, automation, and tech leadership!
- YouTube: https://www.youtube.com/channel/UCCcHTwZGRuMaWTZTechKAtw
- X (Twitter): https://twitter.com/lyakovet
- LinkedIn: https://www.linkedin.com/company/saugatuckworldwide/
- Support: https://www.patreon.com/SaugaTalks
- Connect with Patrick: www.align.com | LinkedIn (daily AI posts)
#AILeadership #AIGovernance #EnterpriseAI #DigitalTransformation #TechInnovation
00:00 – Introduction
Welcome to SaugaTalks and the introduction of Patrick Sullivan.
01:05 – Why AI Is Creating Boardroom Tension
Why executives feel pressure to move fast with AI while managing uncertainty.
03:10 – The Real Stakes of AI Investment
How AI decisions today can define long-term competitive advantage.
05:20 – Convincing Executives to Bet Big on AI
Strategies for making a compelling case for AI adoption.
07:35 – Risk vs. Opportunity in AI Strategy
Balancing innovation with governance and responsible AI use.
10:05 – AI Bias and Ethical Concerns
Why bias and fairness remain major challenges in enterprise AI.
12:20 – The Growing Threat of Deepfakes
How synthetic media and misinformation create new security risks.
15:10 – Moving AI from Pilot to Production
Why many companies struggle to scale AI beyond experimentation.
18:00 – Building Trust in AI Across the Organization
How leaders can gain buy-in from stakeholders and teams.
20:45 – AI Governance and Security
Frameworks companies should consider to manage AI responsibly.
23:30 – The Future of AI in the Enterprise
What executives should be preparing for in the next wave of AI.
26:00 – Final Thoughts
Key takeaways for leaders deciding how aggressively to invest in AI.
Irene: Hello, hello. This is Irene with SaugaTalks. Thank you to everyone who is listening to us, or maybe watching our longer videos or shorts, on YouTube, on LinkedIn, or even on X. All right. I'm Irene Lyakovetsky, and I speak with fascinating people in tech to shed some light on what they're seeing. Today with me is Patrick Sullivan. Patrick, how are you?
Patrick: Irene, I am so well. Very, very honored to be here. Thank you for the invitation and the opportunity to speak with not only you but also your audience.
Irene: You know, the honor is mine, because we're going to talk about some of the concerns about AI and some of the promise of AI today. The reason I put "boardroom AI battles" in the title is because it concerns everyone—from practitioners all the way to executives and stakeholders. We are in this fascinating world. Patrick, can I start with something? What do you think are the pressing issues people are discussing at all levels, especially at the decision-making level, when it comes to AI and all its promise?
Patrick: Oh my goodness. So it's interesting, Irene. You know, as we did the setup for this call, we talked about a lot of things that are pressing for both of us. I think from a boardroom perspective, which trickles down through the organization—at least in the US, I imagine it's the same globally—but corporate directors really are charged with three major things: ensuring that the organization stays on mission, ensuring financials are in order, and ensuring that there are mitigations to protect against enterprise risk. So if you're a corporate director, some way, shape, or form, that's your focus. Irene, it turns out as we think about governance, particularly AI governance, we have very similar influences and levers to pull. ISACA defines governance as the process of creating desired outcomes at optimized risk and cost. Desired outcomes align very nicely with the director focus on enterprise mission. Risk and cost—financials, risk—those things go very neatly together. And so in many ways, what I would like your listeners to understand is that the pressures on your corporate governors, on your board, are no different than the pressures on your governance teams. It just so happens we're looking out slightly different windows with slightly different views. That said, governance—or lack thereof, absence of—in my mind is the pressing issue that our field faces today. That was a very short answer to a very open-ended question.
Irene: Okay, let's talk a little bit. Okay, let's go. Let's go. What is happening as leaders worry about moving too fast with AI or being left behind? Where do you think we are at this moment?
Patrick: Well, right now I think we're stuck somewhere in the middle. Quite frankly, this idea of moving too fast—generally, things are classified as too fast only when we look at them in the rearview mirror. When you're on the ship, you don't know. Everything's the same relative speed. So when we say something has gone too quickly, generally that's because an issue has arisen, and those issues typically arise again because of lack of proper governance. The flip side of this: we do see some very few organizations taking the position that our governance is essentially a prohibition on AI use. Frankly, Irene, we live in a world—and you see this probably more so than I do—we live in a world that is not prepared to be successful without the use of these emerging technologies. Society has migrated in a direction in which AI truly has become a sociotechnical system. And so therefore, at least in my opinion, simply being a Luddite and saying "no, this is not for me" really isn't an option. Which brings us right back to where we started: as an organization, as a leader, as a leader anywhere at any level in your organization, governance has to be your number one priority.
Irene: I'm not surprised you're mentioning governance, Patrick. I'm not surprised at all. Yeah. Could you please then describe to us, because not everyone appreciates the importance of governance still, right? I mean, I'm amazed, but yeah, I just had a fantastic conversation not long ago speaking of again leaders where people say "I don't need a taxonomy, I don't need an ontology, right? I have Google, I have so many sources." You know, can you talk to us what governance really means today?
Patrick: Yeah, right, yeah, right. And so again, governance can be thought of in a couple of different ways. So governance is a value creation process first and foremost—one where, as we just went through, we create desired outcomes at an optimized risk and cost. More practically, however, governance can be thought of as a mechanism through which we evaluate, we direct, and we monitor. That is, we evaluate our system or systems against known good standards: Are we doing what is right and appropriate for our system in our context? We direct—that is, we set priority: This is where we are. What must we do next in order to create the outcomes that we truly desire? And then we monitor: We regularly check back in to see if feedback—either balancing or reinforcing—is required to keep us on track to create those outcomes again that offline we've decided are most important to us. So in short, governance is about value creation, Irene. Plain and simple. There are a few different ways that we can look at how we create value. But it all starts with identifying what it is we really hope to create, what the product-market fit is, where the real value in our process is, aligning risk at an optimized level and cost at an optimized level—not reducing, not pulling risk to a position that it doesn't exist because that's not practical or possible, but defining what is optimized for us in our context and operating accordingly.
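For readers who think in code, a minimal sketch of that evaluate–direct–monitor loop might look like the following. The objective names, targets, and numbers are hypothetical illustrations only, not ISACA or A-LIGN tooling.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One governance objective: a target agreed offline and an observed result."""
    name: str
    target: float
    observed: float

def evaluate(outcomes):
    """Evaluate: find objectives where observed results fall short of the target."""
    return [o for o in outcomes if o.observed < o.target]

def direct(gaps):
    """Direct: turn the largest gaps into prioritized actions."""
    ordered = sorted(gaps, key=lambda o: o.target - o.observed, reverse=True)
    return [f"Close gap on '{o.name}': {o.observed:.2f} vs target {o.target:.2f}"
            for o in ordered]

def monitor(outcomes):
    """Monitor: re-run the loop on a schedule and surface balancing feedback."""
    for action in direct(evaluate(outcomes)):
        print(action)

# Illustrative run with made-up numbers
monitor([
    Outcome("pilot ROI multiple", target=1.5, observed=0.9),
    Outcome("bias audit coverage", target=1.0, observed=0.6),
    Outcome("model accuracy", target=0.90, observed=0.93),
])
```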
Irene: Yes, sure. Let us take this conversation into a more provocative arena, meaning betting on AI. All right. No one likes to bet unless you're a really high-risk kind of person. Right.
Patrick: Right. Right.
Irene: Typical executives got where they are because they can balance—they can innovate while also carrying responsibilities: a mission, people, a budget they're responsible for. So can you please talk to us a little bit about betting on AI in a good way, meaning that yes—oh, and I'm sorry, I interrupted.
Patrick: Irene, you know, I do see a couple of different bets happening on AI today from the board perspective, and they really fall into two camps: there are uninformed bets and there are well-informed bets. Those organizations that are making well-informed bets on AI—that is, bets with an understanding of what the tool, the system is that's being deployed and the real business value to be created—those organizations by and large seem to be faring really well today. Conversely, Irene, the uninformed bets—we read new research about it every day. Maybe it was MIT, maybe it was Harvard—released a report at the end of last year saying only about 3% of AI initiatives are really creating value in organizations, and that percentage might be a little higher, a little lower. Ultimately, the take-home was organizations are deploying AI without solid governance, without a solid plan, and therefore are not seeing the return on the investment that they had hoped to, which obviously is a terrible bet to make. No one wants to bring resources to the table in hopes of amplifying them only to find out that those resources have been lost.
Irene: What happens with trust? Okay, I've been in enterprise all my life, and the one project that goes wrong gets such huge publicity, okay, especially if it's somewhat important. All right, and when you introduce a shiny new AI toy, of course, right? And if it doesn't go well—and we know in many cases it doesn't—what kind of barrier does it create for the next innovators?
Patrick: And the barrier is one centered on risk. I mean, no matter where you sit in the organization, most of us in business are primed to think in terms of risk. With risk stated cleanly, we can make informed decisions about what to do next. Once we've had that first failure, suddenly that risk ratio becomes upside down, and no matter what we say, we're presenting to an audience that is expecting the initiative to fail. So Irene, in so many ways, we lead off on the wrong foot, and we're never really able to recreate that first impression. Unfortunately, it's hard to recover, for sure. And yet—at least my mission—I am really optimistic looking into the future: hey, we are learning, we're all evolving. Okay, there is risk, and when you're playing with company resources and people's lives, you have to be extra responsible. I'm not saying careful, I'm saying responsible.
Irene: Right, yes, responsible. Right. And at the same time, could you talk to us about successes you saw—maybe not within your organization, maybe client organizations. We don't need the names, but you know, something where yeah, this all kind of came together for great.
Patrick: Oh my goodness. I think one of the biggest successes that I've seen was an organization working with synthetic media—an organization out of Germany—creating avatars, creating music, things like that. And while in business we don't necessarily think of that as an area that should be regulated, this particular organization didn't treat compliance as a response to regulation; it treated compliance as a response to future market demand for assurance. And so this organization went above and beyond and earned ISO 42001 certification for their internal processes and practices. What we've seen with that particular organization since—this was about 18 to 24 months ago—are several rounds of investment, their latest valuation being north of $4 billion. This from a company of a size that most other organizations would consider to be a startup. And Irene, it's all because they recognized the value of building trust in the market before deploying tools so that people could play with the nice new shiny things.
Irene: Incredible. No, incredible. What do you think were the key elements that played to that success? We all want to know.
Patrick: Maturity. Maturity. Again, I think very much of governance in the various processes that we described before. Governance more than anything is a framework that requires us to think before we act. Now, that in a lot of ways is the reason that people have this misunderstanding that governance can stifle innovation—that is not the point at all. What governance does do, however, is make us think before we take those risks that we've described previously. So with this organization in particular, they positioned themselves to be so successful because not only do they have a solid product that performs, that delivers what the market needs when the market asks for it, but they're able to do so in such a way that they can offer that additional layer of trust and assurance on top, ensuring that they're the go-to. They are trusted by organizations a hundred times their size because they've shown that they're trustworthy.
Irene: Hey, you made my day already. Okay. Something, you know, to be inspired by for sure. All right. Can we turn into some what I call disasters? Okay. People remember some negative experiences, but not to point fingers, but rather learn—and most importantly, share for other people to avoid this kind of mistakes. So anything that comes to your mind.
Patrick: Yes. You know, and I think right now—not to name names—this is very, very visible, at least here in the United States, in human resources. We know that one of the key knocks against AI is the potential for bias. It's clear that we've seen so many alignment issues with AI through the years that it's just natural to assume that biased outcomes will be produced based on the input data the systems have been given. And so we have to constantly be working to squeeze bias out of our systems, out of our results, so that we're not furthering an issue that was in many ways a vestige of past decisions. Well, one of the largest HRIS providers in the US today is in the middle of a significant class action lawsuit—and not because they hadn't implemented good governance practices. In fact, this particular organization worked with NIST. They created handouts; they created real documentation around the most effective ways to implement the NIST AI Risk Management Framework. This particular organization is ISO 42001 certified, so independent auditors evaluated their AI management system and said yes, you have in place all the appropriate things to ensure you're operating in conformity with this particular standard, this particular framework. What this organization hasn't done well, however, is validate outcomes—validate the outputs of their models to ensure they're operating within the thresholds for bias that they had set for themselves—which has opened the door to the significant class action lawsuit that, while it might not permanently damage the reputation of the HRIS platform, is absolutely a headache for all the leaders in that organization today. There's just simply no way around that.
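One concrete way to do the output validation Patrick says was missing is to check model decisions against a disparate-impact threshold, such as the "four-fifths rule" used in US employment analysis. The sketch below is purely illustrative: the group labels, sample data, and 0.8 threshold are hypothetical, and a real program would use whatever thresholds the organization has set for itself.

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, e.g. ("group_a", True)."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    """Flag disparate impact: a group fails if its selection rate is below
    `threshold` times the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values()) or 1.0  # guard against divide-by-zero
    return {g: (r / best) >= threshold for g, r in rates.items()}

# Made-up example: group_b is selected at half group_a's rate, so it is flagged
sample = ([("group_a", True)] * 8 + [("group_a", False)] * 2 +
          [("group_b", True)] * 4 + [("group_b", False)] * 6)
print(passes_four_fifths(sample))  # {'group_a': True, 'group_b': False}
```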
Irene: Wow. Yeah. Something to think about. Yes, please, audience members out there—evaluate what's happening around you. It's easy to point fingers, but that's not what we're here for. We'd rather have those pains and lessons learned be known, right? We're all collectively learning. There's no one guru who can promise you the world nowadays, right?
Patrick: Absolutely. And if someone tells you they have all the answers, be very, very wary of that person.
Irene: Yes. Totally. Totally. Patrick, on that notion, okay, what usually goes wrong, you think, when executives greenlight AI initiatives?
Patrick: Lack of understanding. Lack of understanding, Irene—and based on your experience, I'm sure you've seen this for years. Executives see the new shiny thing and decide that's the thing we need to make ourselves better; in effect, we have to keep up with the Joneses if we're going to retain market share. Where this becomes really problematic is that deciding to roll AI into production, into your systems, without deciding what it is that you really hope to create with those tools necessarily creates a position where you'll never know if you were successful. And so by simply having a mandate from the board level that your organization will deploy AI—without appropriate context, without appropriate boundaries—we're creating a situation where you can't ever actually be successful, no matter how successful you are. On the flip side of that, as we think about balancing feedback and continually measuring our results: without clear expectations for outcomes, as an organization we won't know when it's time to pull back or pull the plug altogether on a particular system that's been deployed.
Irene: Huh. Do you think strong guardrails, right? Because everybody is talking about guardrails nowadays, left and right, right? Do you think those help executives move faster or whether they make them more cautious?
Patrick: Um, you know, I think there's a little bit of both here. And from an executive perspective, you know, as I think about guardrails, most of the dialogue today centers around technical guardrails. As an example, we might have, with our LLM, a prohibition on creating pornography or hate speech, things like that. Those are what we traditionally think of as guardrails for the tools that are being deployed. From an executive level, that may be a little bit too in the weeds. Really, the issue for executives is having an understanding of the business value you hope to create through the deployment of a particular system. That absolutely has to be job number one: What is it we're really trying to do here? It's only then that we can think about potential harms, potential outcomes, and context of optimizing risk with a real understanding of what costs are appropriate to bear to actually push this system through to production.
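To make the distinction concrete, a technical guardrail of the kind Patrick mentions might look like the minimal sketch below; the category names and keyword check are hypothetical stand-ins for a real content classifier, and the business-level questions he raises sit well above this layer.

```python
BLOCKED_CATEGORIES = {"hate_speech", "sexual_content", "self_harm"}

def classify(text: str) -> set:
    """Stand-in for a real content classifier; here, naive keyword matching."""
    flags = set()
    if "hate" in text.lower():
        flags.add("hate_speech")
    return flags

def guarded_response(model_output: str) -> str:
    """Technical guardrail: withhold model output that falls in a blocked category."""
    if classify(model_output) & BLOCKED_CATEGORIES:
        return "This response was withheld by policy."
    return model_output

print(guarded_response("Here is a summary of your quarterly report."))
print(guarded_response("I hate this group and ..."))  # withheld by the guardrail
```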
Irene: Yay. This brings us to kind of the next angle to the conversation, right? What about AI risks beyond just security and compliance? Because we hear a lot about security, right? And companies seem to be prone to certain attacks and certain huge risk factors, but what's beyond that? I mean, anything you've seen?
Patrick: Yes, Irene, to your point, the AI risks are all over the place. You know, we have bias; we have the risk of disparate outcomes—so many things associated with treating people differently based on their background, based on their demographics—and that's always a concern. Quite frankly, one of the bigger risks I see today—and this is becoming more and more common, more and more visible; in fact, people are doing this just as a way to have fun—is deepfakes and the use of AI tools to create new and novel synthetic media. In my mind, that is absolutely something we are not yet prepared to handle and something we had better be working really, really hard to figure out. I forget which bank it was, but at the beginning of 2024—so a little over two years ago now—bad actors executed a deepfake in which they got someone with access to transfer funds onto a conference call where the other members of the ELT, the executive leadership team, were all deepfake personas. And so this person was duped into releasing—I think it was $25 million—to bad actors, all because the deepfake technology at that time was good enough to fool this person into making that decision. And Irene, as with everything else, we've come leaps and bounds over the past 24 months with what diffusion and other tools can do. So it's a very, very scary thing to me for the market today and something of which leaders have to be acutely aware.
Irene: Absolutely, a very critical point. Thank you, Patrick, for that one. So, speaking of AI regrets: do you think they are more often caused by bad technology—we know some vendors are just really not up to the standards they claim to be—or by poor leadership decisions? Because we're still in control; humans still have the power of will, right? To implement or not implement the initiative.
Patrick: Such a good question, and this is a question that really puts the people and the technology at cross-purposes, but I don't think it has to be that way. You know, we say AI is a sociotechnical system—it's this confluence of people and society, how we think about things, and the emerging technologies themselves. And so somewhere in the middle, we've got this system from which new properties emerge on their own and create really, really cool things. So when those emergent properties are things that we don't want—or we have a failure, we have a harm, a negative impact—I don't know that we can squarely put our finger on the technology, but I don't know that we can squarely put our finger on the people that are operating the technology as well. You know, that's why I think governance is so critical. Governance, when done well, doesn't say "we expect this failure to happen and this is the action that we'll take when it presents itself." Governance says "we're going to follow best practice. We're going to follow known good standards, and we're going to continually evaluate to see if we're staying in line with what we said we wanted to create, operating within the thresholds that we've set for ourselves." And so the continuous monitoring, the continuously checking back in, for me is the most critical aspect. I think when we have seen our biggest failures, it's because governance—not people or technology—has failed specifically around the monitoring of the system and the outputs that those systems are creating.
Irene: Um, how do executives balance competitive pressure—you know, again, the fear of missing out, right? Competitors being ahead of you—with reputational risk? Because again, this is something always on their mind—it's huge—and they have to maintain that balance and make good decisions.
Patrick: Irene, this is going to sound so direct, and I don't mean for it to be off-putting, but executives have to do their job. You know, our job is to lead. Our job is to understand our context and how we deliver value to our communities, to the markets. Absent that, it's really, really easy to make poor decisions about implementing AI technologies. But in the presence of real leadership—truly understanding the business and how the business creates value—what I tend to see is that we apply a little bit more discernment to the tools that we intend to deploy, and we tend to focus a little bit more on mitigating those risks so that we don't have to deal with the reputational harm. We don't have to deal with the financial loss. We don't have to deal with all those negative consequences of making rash and rushed decisions.
Irene: Yep. Yep. Yeah, makes me think. Yeah, makes me think. Uh, what's one question, okay, every executive should ask before approving a major AI investment?
Patrick: "Can I do this—whatever this is—just as effectively without AI?" If you can answer that question in the affirmative, do not deploy an AI tool. You're absolutely wasting your time, and you're absolutely strapping yourself to undue risk and burden.
Irene: Hm. If only it were that easy, right? But that's why they have the job.
Patrick: Exactly. That's the job. Exactly. Yeah. Well, it's easy for me to say this stuff; I fully recognize that making these decisions is in no way, shape, or form easy. So for your listeners, Irene, please don't think that's what I'm saying—no judgment. I recognize the difficulty of your position. I also want you to recognize that things can be simpler than they initially seem.
Irene: You know, let's provide a helping hand. All right, we reminded people to do their job—okay, you're there for a reason. Now, from your experience—and maybe a little from my own—what are the good principles to follow? We're not going to be comprehensive here, but what can help executives do their job better? How about that?
Patrick: Yes. And Irene, I think for me, it always starts with use cases. What is the proposal? What are we proposing we're deploying AI to do? So let's understand that use case in our context—and that context-specific view is so critical because what works for me as an organization might not necessarily work for you as an organization. So when evaluating the same tool, we're going to see utility in very, very different things. It will always be that way. So understanding use cases and context has to be number one. From there, we need to understand the risk associated with deploying this tool. And that risk again will be context-specific. But understanding our risk, understanding what it is that we hope to create, can then help us understand if this is really a business case that's justified for pushing out into production.
Irene: Looking ahead, right? What will separate those mighty responsible executives who win with AI from those who hesitate too long?
Patrick: Yes. Yes. And Irene, and this sounds self-serving because I do work in the compliance space, but what we're seeing more and more is that the market is not waiting for regulation. The market absolutely has expectations that organizations are developing and using AI responsibly, especially when those organizations are doing so while caring for, while processing, storing, or transmitting sensitive information. And so more and more, we're seeing third-party risk become a critical function within organizations—not that it hasn't always been, but specifically as it relates to AI and AI-enabled services being delivered by vendors. That said, in the presence of market pressure, assurance is absolutely necessary. So those executives that are listening, you need to shore up your compliance strategy, your assurance strategy, and ensure that the assurance that you're offering your market is appropriate for the market itself—not just something that is easy for you, but something that actually addresses the assurance needs of those that are choosing to do business with you or of those that you're hoping to supply services to.
Irene: Patrick, this will be a great place in this episode to point people where to contact you and your team for compliance needs, right? So, what's the best way to reach out?
Patrick: www.align.com. For those of you that are simply hoping to connect to talk about things related to AI, you can find me on LinkedIn posting daily—really, really hoping to help people like you make well-informed decisions, to make well-informed strategic decisions about how to move forward with your organization with respect to AI use.
Irene: Perfect. Perfect. So, what's in the future, Patrick? What are you excited about? I mean, there is that never-ending complexity, level upon level, right? And that's what makes good business—it's the opportunity, right? So the question is: what makes you go to work every day and, hopefully with that smile, look forward to the next day that comes?
Patrick: Irene, it's our people—and again, you very likely see this, but our community today is very fractured. We see some people focusing on AI ethics. We see some people focusing on AI security, privacy, legislation, standards, regulation—whatever the case may be. So we've got this community of very, very highly educated and passionate professionals that are slowly coming together. My hope, my goal, is that we can coalesce around a common key theme and idea that helps drive this practice forward. I do see us taking steps in that direction, but it's very young. It's very, very young. But in my mind, 2026 and beyond will be the years of AI governance proper as we take many of the governance initiatives and activities that we're all espousing today and form a formal practice.
Irene: I encourage everyone—okay, I'm connected with Patrick now on LinkedIn. I encourage everyone to follow. Fantastic content and amazing energetic delivery. So from my hosting perspective, yes, absolutely delightful conversation today, Patrick. I so appreciate it. Thank you so very much. Before I let you go, could you please share with us a few takeaways because we touched on a lot of different angles to the AI battle conversation in the boardroom?
Patrick: Yes. Yes, ma'am. So maybe we'll wrap this up the way we started it, so that we can come full circle. For those of you charged with governance, remember: it's not that complicated. At the end of the day, we have to decide what it is we really hope to create. We have to understand the risk we're going to take on by choosing to move in that direction, and then understand the cost that we're prepared to bear—both financially and from an internal resource perspective—in order to birth that baby. So simply step back before making critical decisions, before making strategic decisions, and ensure that you're clear in your thinking on those few things, those three variables. In doing so, I promise your outcomes will be more aligned with what you hope to be creating.
Irene: Thank you. Thank you, Patrick, so very much for your time. Appreciate it.
Patrick: Thank you. Irene, thank you. An honor to be here.