<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="https://feeds.captivate.fm/style.xsl" type="text/xsl"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><atom:link href="https://feeds.captivate.fm/ai-risk-reward/" rel="self" type="application/rss+xml"/><title><![CDATA[AI Risk Reward]]></title><podcast:guid>7de574f2-3672-50c9-b709-69cccdfb342a</podcast:guid><lastBuildDate>Tue, 21 Apr 2026 14:47:11 +0000</lastBuildDate><generator>Captivate.fm</generator><language><![CDATA[en]]></language><copyright><![CDATA[Copyright 2026 Alec Crawford]]></copyright><managingEditor>Alec Crawford</managingEditor><itunes:summary><![CDATA[I am your host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. and this is AI Risk-Reward, a podcast about balancing the risk and reward of using AI personally, professionally, and as a large organization!

We will discuss hot topics such as: Will AI take my job or make it better? When I ask ChatGPT work questions, is that even safe? From an ethical perspective, is it enough for big companies to anonymize private data before using it? (Probably not.)

I am discussing these issues with AI experts to answer burning questions and stay ahead of the curve on AI.  I’d also like to give a shoutout to our podcast producer and audio engineering team at Troutman Street Audio. You can check them out on LinkedIn.]]></itunes:summary><image><url>https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png</url><title>AI Risk Reward</title><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link></image><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><itunes:owner><itunes:name>Alec Crawford</itunes:name></itunes:owner><itunes:author>Alec Crawford</itunes:author><description>I am your host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. and this is AI Risk-Reward, a podcast about balancing the risk and reward of using AI personally, professionally, and as a large organization!

We will discuss hot topics such as: Will AI take my job or make it better? When I ask ChatGPT work questions, is that even safe? From an ethical perspective, is it enough for big companies to anonymize private data before using it? (Probably not.)

I am discussing these issues with AI experts to answer burning questions and stay ahead of the curve on AI. I’d also like to give a shoutout to our podcast producer and audio engineering team at Troutman Street Audio. You can check them out on LinkedIn.</description><link>https://ai-risk-reward.captivate.fm</link><atom:link href="https://pubsubhubbub.appspot.com" rel="hub"/><itunes:subtitle><![CDATA[Balancing the risk and reward of using AI personally & as a business]]></itunes:subtitle><itunes:explicit>false</itunes:explicit><itunes:type>episodic</itunes:type><itunes:category text="Technology"></itunes:category><itunes:category text="Business"><itunes:category text="Entrepreneurship"/></itunes:category><itunes:category text="News"><itunes:category text="Tech News"/></itunes:category><podcast:locked>no</podcast:locked><podcast:medium>podcast</podcast:medium><item><title>Matthew Rosenquist on AI, Cyber Risk, and the Future of Defense</title><itunes:title>Matthew Rosenquist on AI, Cyber Risk, and the Future of Defense</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this deep dive episode, Alec speaks with Matthew Rosenquist, cybersecurity strategist and CISO, about how AI is rapidly reshaping both cyber defense and cyber offense. Matthew explains how new AI models are dramatically accelerating vulnerability discovery and exploit creation, putting pressure on traditional patching, risk management, and incident response processes. He also shares practical guidance for consumers and businesses on defending against AI-powered phishing, deepfakes, account compromise, and unsafe use of public AI tools. 
The conversation highlights why strong fundamentals like multi-factor authentication, least-privilege access, segmented data practices, and careful verification matter more than ever in an AI-driven threat landscape. Alec and Matthew close by exploring the emerging risks of agentic AI and MCP-connected systems, emphasizing that companies must adopt AI security controls with urgency, discipline, and realistic expectations.</p><p><strong>Summary:</strong></p><ul><li><strong>AI-Driven Vulnerabilities:</strong> Matthew discusses how advanced AI models can find and exploit software flaws far faster than traditional security processes can handle.</li><li><strong>Consumer Cyber Hygiene:</strong> The episode stresses multi-factor authentication, account alerts, password discipline, and skepticism toward emails, texts, calls, and social media interactions.</li><li><strong>Deepfakes and Social Engineering:</strong> AI is making scams more personalized, scalable, and convincing, which means users must verify before trusting.</li><li><strong>Enterprise AI Risk:</strong> Companies need to be cautious with sensitive data in public AI tools and apply strong governance to internal AI deployments.</li><li><strong>Agentic AI Security:</strong> Granting broad permissions to AI agents creates major new attack surfaces, making least-privilege design and access controls essential.</li></ul><br/><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations: </strong></p><ul><li><a href="https://verapath.com/" rel="noopener noreferrer" target="_blank">Verapath</a></li><li>Anthropic</li><li>Google</li><li>Western Union</li><li>Salesforce</li></ul><br/><p>Copyright © 2026 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. 
aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this deep dive episode, Alec speaks with Matthew Rosenquist, cybersecurity strategist and CISO, about how AI is rapidly reshaping both cyber defense and cyber offense. Matthew explains how new AI models are dramatically accelerating vulnerability discovery and exploit creation, putting pressure on traditional patching, risk management, and incident response processes. He also shares practical guidance for consumers and businesses on defending against AI-powered phishing, deepfakes, account compromise, and unsafe use of public AI tools. The conversation highlights why strong fundamentals like multi-factor authentication, least-privilege access, segmented data practices, and careful verification matter more than ever in an AI-driven threat landscape. 
Alec and Matthew close by exploring the emerging risks of agentic AI and MCP-connected systems, emphasizing that companies must adopt AI security controls with urgency, discipline, and realistic expectations.</p><p><strong>Summary:</strong></p><ul><li><strong>AI-Driven Vulnerabilities:</strong> Matthew discusses how advanced AI models can find and exploit software flaws far faster than traditional security processes can handle.</li><li><strong>Consumer Cyber Hygiene:</strong> The episode stresses multi-factor authentication, account alerts, password discipline, and skepticism toward emails, texts, calls, and social media interactions.</li><li><strong>Deepfakes and Social Engineering:</strong> AI is making scams more personalized, scalable, and convincing, which means users must verify before trusting.</li><li><strong>Enterprise AI Risk:</strong> Companies need to be cautious with sensitive data in public AI tools and apply strong governance to internal AI deployments.</li><li><strong>Agentic AI Security:</strong> Granting broad permissions to AI agents creates major new attack surfaces, making least-privilege design and access controls essential.</li></ul><br/><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations: </strong></p><ul><li><a href="https://verapath.com/" rel="noopener noreferrer" target="_blank">Verapath</a></li><li>Anthropic</li><li>Google</li><li>Western Union</li><li>Salesforce</li></ul><br/><p>Copyright © 2026 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">3e5581c5-f0e1-4950-b77d-36324f4b02a7</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 21 Apr 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/3e5581c5-f0e1-4950-b77d-36324f4b02a7.mp3" length="49063084" 
type="audio/mpeg"/><itunes:duration>51:06</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>90</itunes:episode><podcast:episode>90</podcast:episode><podcast:season>1</podcast:season></item><item><title>Antony Baker, CEO and Founder of FIFTEEN Group, on Using AI to Identify the Right People for Your Company</title><itunes:title>Antony Baker, CEO and Founder of FIFTEEN Group, on Using AI to Identify the Right People for Your Company</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec speaks with Antony Baker, CEO and Founder of FIFTEEN Group, about his unconventional path from championship sports to consulting and building AI-enabled business services. Antony explains how FIFTEEN Group was created to challenge traditional consulting models by combining talent assessment, process improvement, and practical AI adoption for mid-market companies. He emphasizes that successful AI implementation depends less on hype and more on human intelligence, training, change management, and starting with simple, high-friction business tasks that employees already dislike. The conversation also explores risks around governance, model changes, and the uncertainty created when organizations rely on rapidly evolving AI tools without strong controls. 
Alec and Antony close with a discussion on leadership, instinct, culture, and why hard work, talent, and adaptability remain essential even as AI becomes more embedded in business.</p><p><strong>Summary:</strong></p><ul><li><strong>Talent First:</strong> Antony Baker argues that strong people, work ethic, and the right cultural fit are the foundation for successful AI adoption.</li><li><strong>Practical AI Adoption:</strong> Companies get better results when they begin with simple use cases like meeting notes, email workflows, and reporting automation.</li><li><strong>Human and Artificial Intelligence:</strong> The episode highlights that AI performs best when paired with trained employees who know how to guide and educate the system.</li><li><strong>Governance Risk:</strong> Rapid model changes and limited user control can create serious challenges, especially for regulated industries and large enterprises.</li><li><strong>Entrepreneurial Mindset:</strong> Antony shares that resilience, learning through failure, and trusting instinct are critical to building durable businesses in fast-moving markets.</li></ul><br/><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations: </strong></p><ul><li><a href="https://www.15.group/" rel="noopener noreferrer" target="_blank">FIFTEEN Group</a></li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Nomura</li><li>SVB</li><li>PwC</li><li>EY</li><li>Barclays</li><li>Business AI Alliance</li><li>NatWest Markets</li><li>Microsoft</li><li>OpenAI</li><li>Claude</li><li>ChatGPT</li><li>Grok</li><li>Meta</li><li>UFC</li></ul><br/><p><strong>Books: </strong></p><ul><li>Principles</li></ul><br/><p><strong>Movies: </strong></p><ul><li>The Matrix</li></ul><br/><p><strong>TV Shows: </strong></p><ul><li>The Ultimate Fighter</li></ul><br/><p>Copyright © 2026 by Artificial Intelligence Risk, 
Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec speaks with Antony Baker, CEO and Founder of FIFTEEN Group, about his unconventional path from championship sports to consulting and building AI-enabled business services. Antony explains how FIFTEEN Group was created to challenge traditional consulting models by combining talent assessment, process improvement, and practical AI adoption for mid-market companies. He emphasizes that successful AI implementation depends less on hype and more on human intelligence, training, change management, and starting with simple, high-friction business tasks that employees already dislike. The conversation also explores risks around governance, model changes, and the uncertainty created when organizations rely on rapidly evolving AI tools without strong controls. 
Alec and Antony close with a discussion on leadership, instinct, culture, and why hard work, talent, and adaptability remain essential even as AI becomes more embedded in business.</p><p><strong>Summary:</strong></p><ul><li><strong>Talent First:</strong> Antony Baker argues that strong people, work ethic, and the right cultural fit are the foundation for successful AI adoption.</li><li><strong>Practical AI Adoption:</strong> Companies get better results when they begin with simple use cases like meeting notes, email workflows, and reporting automation.</li><li><strong>Human and Artificial Intelligence:</strong> The episode highlights that AI performs best when paired with trained employees who know how to guide and educate the system.</li><li><strong>Governance Risk:</strong> Rapid model changes and limited user control can create serious challenges, especially for regulated industries and large enterprises.</li><li><strong>Entrepreneurial Mindset:</strong> Antony shares that resilience, learning through failure, and trusting instinct are critical to building durable businesses in fast-moving markets.</li></ul><br/><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations: </strong></p><ul><li><a href="https://www.15.group/" rel="noopener noreferrer" target="_blank">FIFTEEN Group</a></li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Nomura</li><li>SVB</li><li>PwC</li><li>EY</li><li>Barclays</li><li>Business AI Alliance</li><li>NatWest Markets</li><li>Microsoft</li><li>OpenAI</li><li>Claude</li><li>ChatGPT</li><li>Grok</li><li>Meta</li><li>UFC</li></ul><br/><p><strong>Books: </strong></p><ul><li>Principles</li></ul><br/><p><strong>Movies: </strong></p><ul><li>The Matrix</li></ul><br/><p><strong>TV Shows: </strong></p><ul><li>The Ultimate Fighter</li></ul><br/><p>Copyright © 2026 by Artificial Intelligence Risk, 
Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">f1facba9-b177-4c21-9725-32ef9dbaae52</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 14 Apr 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/f1facba9-b177-4c21-9725-32ef9dbaae52.mp3" length="52390869" type="audio/mpeg"/><itunes:duration>54:34</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>89</itunes:episode><podcast:episode>89</podcast:episode><podcast:season>1</podcast:season></item><item><title>Aleks Jakulin of Data.Flowers on Governing AI Through Accountability and Resilience, Not Output Control</title><itunes:title>Aleks Jakulin of Data.Flowers on Governing AI Through Accountability and Resilience, Not Output Control</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec speaks with Aleks Jakulin, Founder and President of Data.Flowers, about why current AI governance approaches often focus too heavily on policing model outputs instead of building accountability around real-world actions and system resilience. Aleks argues that AI should be governed more like fire or other critical infrastructure, with strong safeguards, reporting mechanisms, and downstream institutional redesign rather than unrealistic attempts to fully control the technology itself. 
He also reflects on his early work in deep learning and computational conceptualization, explaining how machines can discover new concepts through interactions in data and why better data infrastructure will be essential for reliable AI systems. The conversation explores how AI is already breaking workflows in hiring, finance, education, and cybersecurity, and why organizations should prioritize resilience, accountability loops, and high-quality input data over superficial ethics frameworks. Alec and Aleks close by discussing the decentralized promise of open models, the need for incident reporting similar to aviation safety, and the long-term potential for AI to improve human flourishing through better communication, faster learning, and broader intelligence augmentation.</p><p>Summary:</p><ul><li><strong>AI Governance:</strong> Aleks argues that AI oversight should focus on accountability, resilience, and managing real-world consequences rather than policing every generated output.</li><li><strong>Data Infrastructure:</strong> High-quality, controllable data infrastructure is presented as the missing foundation for safer and more reliable AI adoption.</li><li><strong>System Resilience:</strong> Organizations need to redesign vulnerable processes in hiring, finance, education, and operations so they can withstand widespread AI use.</li><li><strong>Open Models:</strong> Aleks suggests AI is ultimately a decentralizing force, with open and local models expanding access and reducing dependence on centralized providers.</li><li><strong>Human Flourishing:</strong> The episode highlights AI’s potential to accelerate learning, improve visual communication, and support a more capable and intelligent society.</li></ul><br/><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ul><li><a href="https://data.flowers/" rel="noopener noreferrer" target="_blank">Data.Flowers</a></li><li><u><a href="https://www.aicrisk.com/" 
rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></u></li><li>Columbia</li><li>Nvidia</li><li>NIST</li><li>OpenAI</li><li>Microsoft</li><li>OECD</li><li>MIT</li><li>FAA</li><li>NTSB</li><li>NASA</li><li>IRS</li><li>Google</li></ul><br/><p>Copyright © 2026 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec speaks with Aleks Jakulin, Founder and President of Data.Flowers, about why current AI governance approaches often focus too heavily on policing model outputs instead of building accountability around real-world actions and system resilience. Aleks argues that AI should be governed more like fire or other critical infrastructure, with strong safeguards, reporting mechanisms, and downstream institutional redesign rather than unrealistic attempts to fully control the technology itself. He also reflects on his early work in deep learning and computational conceptualization, explaining how machines can discover new concepts through interactions in data and why better data infrastructure will be essential for reliable AI systems. The conversation explores how AI is already breaking workflows in hiring, finance, education, and cybersecurity, and why organizations should prioritize resilience, accountability loops, and high-quality input data over superficial ethics frameworks. 
Alec and Aleks close by discussing the decentralized promise of open models, the need for incident reporting similar to aviation safety, and the long-term potential for AI to improve human flourishing through better communication, faster learning, and broader intelligence augmentation.</p><p>Summary:</p><ul><li><strong>AI Governance:</strong> Aleks argues that AI oversight should focus on accountability, resilience, and managing real-world consequences rather than policing every generated output.</li><li><strong>Data Infrastructure:</strong> High-quality, controllable data infrastructure is presented as the missing foundation for safer and more reliable AI adoption.</li><li><strong>System Resilience:</strong> Organizations need to redesign vulnerable processes in hiring, finance, education, and operations so they can withstand widespread AI use.</li><li><strong>Open Models:</strong> Aleks suggests AI is ultimately a decentralizing force, with open and local models expanding access and reducing dependence on centralized providers.</li><li><strong>Human Flourishing:</strong> The episode highlights AI’s potential to accelerate learning, improve visual communication, and support a more capable and intelligent society.</li></ul><br/><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ul><li><a href="https://data.flowers/" rel="noopener noreferrer" target="_blank">Data.Flowers</a></li><li><u><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></u></li><li>Columbia</li><li>Nvidia</li><li>NIST</li><li>OpenAI</li><li>Microsoft</li><li>OECD</li><li>MIT</li><li>FAA</li><li>NTSB</li><li>NASA</li><li>IRS</li><li>Google</li></ul><br/><p>Copyright © 2026 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">84423425-5af5-469d-afd1-f53f7968f28e</guid><itunes:image 
href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 07 Apr 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/84423425-5af5-469d-afd1-f53f7968f28e.mp3" length="66199824" type="audio/mpeg"/><itunes:duration>01:08:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>88</itunes:episode><podcast:episode>88</podcast:episode><podcast:season>1</podcast:season></item><item><title>Is AI Making Us Stupid? Michael Erlihson, PhD, Head of AI at DriveNets</title><itunes:title>Is AI Making Us Stupid? Michael Erlihson, PhD, Head of AI at DriveNets</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes Dr. Michael Erlihson, Math PhD, AI influencer, and Head of AI at DriveNets, for an insightful conversation on the evolving risks and opportunities in artificial intelligence. Dr. Erlihson shares his journey from a science-focused family in Russia to leading AI initiatives in Israel, emphasizing the foundational role of mathematics in modern AI. The discussion explores the theme "AI is making us stupid," drawing parallels to historical debates about technology’s impact on cognition, and offering strategies to ensure ongoing learning and critical thinking in an AI-driven world. Dr. Erlihson discusses his approach to reviewing scientific literature without AI tools, the importance of connecting historical math papers to today’s AI, and his work optimizing LLM inference costs. 
The episode closes with a practical lightning round covering AI’s impact on education, employment, data privacy, and the democratization of AI knowledge.</p><p><strong>Summary:</strong></p><ul><li><strong>AI’s Cognitive Impact:</strong> Dr. Erlihson argues that while AI will make most people less knowledgeable, it can make a select few even smarter if used to augment ongoing learning.</li><li><strong>Mathematics in AI:</strong> Emphasizes the enduring importance of math in AI development, connecting historical mathematical insights to contemporary machine learning advances.</li><li><strong>Optimizing AI Infrastructure:</strong> Details DriveNets’ focus on reducing LLM inference costs to ensure the economic sustainability of AI deployment.</li><li><strong>Education &amp; Employment:</strong> Raises critical concerns about the future of traditional education and white-collar employment as AI accelerates automation and self-learning.</li><li><strong>Data Privacy Risks:</strong> Highlights the underestimated risks of personalizing AI with private data and advocates for stronger safeguards and user control.</li></ul><br/><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ul><li>DriveNets</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>RCOM</li><li>NVIDIA</li><li>AMD</li><li>Google</li><li>AWS</li><li>Intel</li><li>DarwinAI</li><li>Apple</li></ul><br/><p><strong>Podcasts: </strong></p><ul><li><a href="https://podcasts.apple.com/us/podcast/data-science-decoded/id1755975308" rel="noopener noreferrer" target="_blank">Data Science Decoded</a></li><li><a href="https://explainablepodcast.com/" rel="noopener noreferrer" target="_blank">ExplAInable</a></li></ul><br/><p><strong>Movies:</strong></p><ul><li>Snow White and the Seven Dwarfs</li><li>Terminator</li></ul><br/><p>Copyright © 2026 by Artificial Intelligence Risk, 
Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes Dr. Michael Erlihson, Math PhD, AI influencer, and Head of AI at DriveNets, for an insightful conversation on the evolving risks and opportunities in artificial intelligence. Dr. Erlihson shares his journey from a science-focused family in Russia to leading AI initiatives in Israel, emphasizing the foundational role of mathematics in modern AI. The discussion explores the theme "AI is making us stupid," drawing parallels to historical debates about technology’s impact on cognition, and offering strategies to ensure ongoing learning and critical thinking in an AI-driven world. Dr. Erlihson discusses his approach to reviewing scientific literature without AI tools, the importance of connecting historical math papers to today’s AI, and his work optimizing LLM inference costs. The episode closes with a practical lightning round covering AI’s impact on education, employment, data privacy, and the democratization of AI knowledge.</p><p><strong>Summary:</strong></p><ul><li><strong>AI’s Cognitive Impact:</strong> Dr. 
Erlihson argues that while AI will make most people less knowledgeable, it can make a select few even smarter if used to augment ongoing learning.</li><li><strong>Mathematics in AI:</strong> Emphasizes the enduring importance of math in AI development, connecting historical mathematical insights to contemporary machine learning advances.</li><li><strong>Optimizing AI Infrastructure:</strong> Details DriveNets’ focus on reducing LLM inference costs to ensure the economic sustainability of AI deployment.</li><li><strong>Education &amp; Employment:</strong> Raises critical concerns about the future of traditional education and white-collar employment as AI accelerates automation and self-learning.</li><li><strong>Data Privacy Risks:</strong> Highlights the underestimated risks of personalizing AI with private data and advocates for stronger safeguards and user control.</li></ul><br/><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ul><li>DriveNets</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>RCOM</li><li>NVIDIA</li><li>AMD</li><li>Google</li><li>AWS</li><li>Intel</li><li>DarwinAI</li><li>Apple</li></ul><br/><p><strong>Podcasts: </strong></p><ul><li><a href="https://podcasts.apple.com/us/podcast/data-science-decoded/id1755975308" rel="noopener noreferrer" target="_blank">Data Science Decoded</a></li><li><a href="https://explainablepodcast.com/" rel="noopener noreferrer" target="_blank">ExplAInable</a></li></ul><br/><p><strong>Movies:</strong></p><ul><li>Snow White and the Seven Dwarfs</li><li>Terminator</li></ul><br/><p>Copyright © 2026 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">b2959648-253d-4b85-8320-27dfdc4e7241</guid><itunes:image 
href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 31 Mar 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/b2959648-253d-4b85-8320-27dfdc4e7241.mp3" length="42186818" type="audio/mpeg"/><itunes:duration>43:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>87</itunes:episode><podcast:episode>87</podcast:episode><podcast:season>1</podcast:season></item><item><title>Deep Dive: Trust, Quantum Computing, and the Future of AI Risk with Peter Mancini, Founder of A8A8</title><itunes:title>Deep Dive: Trust, Quantum Computing, and the Future of AI Risk with Peter Mancini, Founder of A8A8</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec sits down with Peter Mancini, founder of A8A8, and a seasoned data science expert who has leveraged AI since 2005. Peter shares his unconventional entry into artificial intelligence and reflects on key lessons learned from years of deploying AI and quantum computing in high-stakes environments, including work for the US Army and financial institutions. The conversation explores the critical importance of trust, metacognition, and continuous risk assessment throughout the AI lifecycle, with practical anecdotes ranging from model uncertainty in banking to emergent cybersecurity vulnerabilities. 
Peter discusses the profound implications of AI’s collaborative nature, the ethical dilemmas posed by AI-generated content, and the evolving intersection of AI, quantum computing, and blockchain. The episode concludes with concrete recommendations for transparency, explainability, and incident response, emphasizing the need for vigilance against both known and unforeseen risks, including elusive black swan events.</p><p><strong>Summary:</strong></p><p><strong>Trust and Verification:</strong> Peter emphasizes that over-trusting AI models without robust verification is a primary and often overlooked risk.</p><p><strong>Metacognition in Risk Management:</strong> He advocates for ongoing critical thinking, group validation, and policy over rigid frameworks to manage AI risks.</p><p><strong>AI-Driven Cybersecurity Threats:</strong> Real-world examples illustrate how AI can inadvertently expose sensitive associations and aid adversaries, highlighting the need for advanced guardrails.</p><p><strong>Quantum Computing Integration:</strong> Peter discusses how quantum computing accelerates probabilistic analysis but may also expose encryption vulnerabilities and new risk vectors.</p><p><strong>Ethical and Societal Impacts:</strong> The episode covers manipulation risks, deepfake challenges, and the essential role of transparency and explainability for both users and developers.</p><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>A8A8</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><u><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></u></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>US Army</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Fidelity Investments</li><li data-list="bullet"><span 
class="ql-ui" contenteditable="false"></span>Rocket Mortgage</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Meta</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Microsoft</li></ol><br/><p><strong>Movies:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Blade Runner</li></ol><br/><p>Copyright © 2026 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec sits down with Peter Mancini, founder of A8A8, and a seasoned data science expert who has leveraged AI since 2005. Peter shares his unconventional entry into artificial intelligence and reflects on key lessons learned from years of deploying AI and quantum computing in high-stakes environments, including work for the US Army and financial institutions. The conversation explores the critical importance of trust, metacognition, and continuous risk assessment throughout the AI lifecycle, with practical anecdotes ranging from model uncertainty in banking to emergent cybersecurity vulnerabilities. Peter discusses the profound implications of AI’s collaborative nature, the ethical dilemmas posed by AI-generated content, and the evolving intersection of AI, quantum computing, and blockchain. 
The episode concludes with concrete recommendations for transparency, explainability, and incident response, emphasizing the need for vigilance against both known and unforeseen risks, including elusive black swan events.</p><p><strong>Summary:</strong></p><p><strong>Trust and Verification:</strong> Peter emphasizes that over-trusting AI models without robust verification is a primary and often overlooked risk.</p><p><strong>Metacognition in Risk Management:</strong> He advocates for ongoing critical thinking, group validation, and policy over rigid frameworks to manage AI risks.</p><p><strong>AI-Driven Cybersecurity Threats:</strong> Real-world examples illustrate how AI can inadvertently expose sensitive associations and aid adversaries, highlighting the need for advanced guardrails.</p><p><strong>Quantum Computing Integration:</strong> Peter discusses how quantum computing accelerates probabilistic analysis but may also expose encryption vulnerabilities and new risk vectors.</p><p><strong>Ethical and Societal Impacts:</strong> The episode covers manipulation risks, deepfake challenges, and the essential role of transparency and explainability for both users and developers.</p><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>A8A8</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><u><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></u></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>US Army</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Fidelity Investments</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Rocket Mortgage</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" 
contenteditable="false"></span>Google</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Meta</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Microsoft</li></ol><br/><p><strong>Movies:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Blade Runner</li></ol><br/><p>Copyright © 2026 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">9920c387-1344-4601-b507-41677c160bbb</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 24 Mar 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/9920c387-1344-4601-b507-41677c160bbb.mp3" length="69300663" type="audio/mpeg"/><itunes:duration>01:12:11</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>86</itunes:episode><podcast:episode>86</podcast:episode><podcast:season>1</podcast:season></item><item><title>What’s Working in AI Use Cases Now: Lucas Erb, LinkedIn Top Voice &amp; AIexperts.com Founder</title><itunes:title>What’s Working in AI Use Cases Now: Lucas Erb, LinkedIn Top Voice &amp; AIexperts.com Founder</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn.</p><p>In this episode, Alec welcomes Lucas Erb, Founder of AIexperts.com and seasoned advisor on AI strategy, who shares his journey from early computer science interests to consulting at Deloitte, and ultimately founding his own firm. Lucas discusses the evolution of AI adoption, emphasizing the critical gap in mid-market business AI enablement and describing how his company demystifies automation and agent-based solutions for this segment. Key practical examples are explored, focusing on AI’s real-world impact—particularly in sales automation and productivity—rather than generic tool adoption. The conversation also dives deep into the ethical and social challenges of AI, highlighting the ongoing risks of bias and the necessity for thoughtful, transparent implementation. Alec and Lucas conclude with insights into future workforce implications, AI for good initiatives, and advice for young professionals navigating the rapidly changing technology landscape.</p><p><strong>Summary:</strong></p><p><strong>AI Journey:</strong> Lucas Erb recounts his path from early technical curiosity to founding AIexperts.com, highlighting his time at HP and Deloitte. </p><p><strong>Mid-Market Enablement:</strong> He identifies a critical gap in AI adoption for midsize businesses and shares how his firm provides practical, ROI-driven automation. </p><p><strong>Ethical Challenges:</strong> The episode addresses pressing issues around model bias, data selection, and the importance of ongoing evaluation to ensure fairness. </p><p><strong>Future of Work:</strong> Discussion centers on the shifting landscape for new graduates and the need for leaders to shape a responsible AI-driven workforce. 
</p><p><strong>AI for Good:</strong> Lucas underscores the importance of broad participation in AI ethics and safety, stressing that collective action is necessary to keep pace with innovation.</p><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aiexperts.com/about" rel="noopener noreferrer" target="_blank">AIexperts.com</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Deloitte</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>HP </li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Anthropic</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Accenture</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>McKinsey</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Harvard University (AI for Human Flourishing Program)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>NASDAQ</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>MIT</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Global AI Ethics Institute</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Xerox PARC</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Apple</li></ol><br/><p><strong>Movies:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Inception</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Jurassic Park</li></ol><br/><p>Copyright © 2026 by Artificial Intelligence Risk, 
Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes Lucas Erb, Founder of AIexperts.com and seasoned advisor on AI strategy, who shares his journey from early computer science interests to consulting at Deloitte, and ultimately founding his own firm. Lucas discusses the evolution of AI adoption, emphasizing the critical gap in mid-market business AI enablement and describing how his company demystifies automation and agent-based solutions for this segment. Key practical examples are explored, focusing on AI’s real-world impact—particularly in sales automation and productivity—rather than generic tool adoption. The conversation also dives deep into the ethical and social challenges of AI, highlighting the ongoing risks of bias and the necessity for thoughtful, transparent implementation. Alec and Lucas conclude with insights into future workforce implications, AI for good initiatives, and advice for young professionals navigating the rapidly changing technology landscape.</p><p><strong>Summary:</strong></p><p><strong>AI Journey:</strong> Lucas Erb recounts his path from early technical curiosity to founding AIexperts.com, highlighting his time at HP and Deloitte. </p><p><strong>Mid-Market Enablement:</strong> He identifies a critical gap in AI adoption for midsize businesses and shares how his firm provides practical, ROI-driven automation. </p><p><strong>Ethical Challenges:</strong> The episode addresses pressing issues around model bias, data selection, and the importance of ongoing evaluation to ensure fairness. 
</p><p><strong>Future of Work:</strong> Discussion centers on the shifting landscape for new graduates and the need for leaders to shape a responsible AI-driven workforce. </p><p><strong>AI for Good:</strong> Lucas underscores the importance of broad participation in AI ethics and safety, stressing that collective action is necessary to keep pace with innovation.</p><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aiexperts.com/about" rel="noopener noreferrer" target="_blank">AIexperts.com</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Deloitte</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>HP </li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Anthropic</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Accenture</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>McKinsey</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Harvard University (AI for Human Flourishing Program)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>NASDAQ</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>MIT</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Global AI Ethics Institute</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Xerox PARC</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Apple</li></ol><br/><p><strong>Movies:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Inception</li><li 
data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Jurassic Park</li></ol><br/><p>Copyright © 2026 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">d5447172-1cab-4ea7-b6f3-3f5b1738e885</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 17 Mar 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/d5447172-1cab-4ea7-b6f3-3f5b1738e885.mp3" length="38556417" type="audio/mpeg"/><itunes:duration>40:10</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>85</itunes:episode><podcast:episode>85</podcast:episode><podcast:season>1</podcast:season></item><item><title>Deep Dive: AI Policy and Risk Governance with Asad Ramzanali, Director of AI and Tech Policy</title><itunes:title>Deep Dive: AI Policy and Risk Governance with Asad Ramzanali, Director of AI and Tech Policy</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this deep dive episode, Alec welcomes Asad Ramzanali, Director of AI and Tech Policy at the Vanderbilt Policy Accelerator, for a comprehensive discussion on the current landscape of AI policy and risk governance. Asad explains how AI’s broad and general-purpose nature requires sector-specific regulatory strategies, emphasizing that existing frameworks must adapt to both new and exacerbated risks. 
The conversation covers the challenges of benchmarking and evaluating large models, the balance between federal and state governance, and the ongoing debate over regulation versus innovation. Asad highlights the importance of direct regulatory interventions, robust enforcement mechanisms, and maintaining public trust, particularly as AI adoption accelerates across public and private sectors. The episode closes with reflections on economic disruption, business model risks, and future research priorities in AI policy.</p><p><strong>Summary:</strong></p><p><strong>Defining AI Risk:</strong> Asad stresses the need for adaptable, use-case-driven frameworks due to AI’s general-purpose scope.</p><p><strong>Sectoral Regulation:</strong> Different regulators must address AI risks where they specifically arise, especially in finance, health, and national security.</p><p><strong>Benchmarking Challenges:</strong> Evaluating AI models requires independent, evolving methodologies, not just self-reported metrics from companies.</p><p><strong>Regulation vs. Innovation:</strong> The current regulatory environment is far from overreaching, and well-crafted policies can actually foster safer innovation.</p><p><strong>Accountability and Public Trust:</strong> Clear liability, enforcement, and transparency are critical for democratic legitimacy and effective AI risk management.</p><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Vanderbilt Policy Accelerator</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Vanderbilt University</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>FDA (U.S. 
Food and Drug Administration)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>FCC (Federal Communications Commission)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>NIST (National Institute of Standards and Technology)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Anthropic</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>NOAA (National Oceanic and Atmospheric Administration)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Hamilton Project (Brookings Institution)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Global AI Ethics Institute</li></ol><br/><p><strong>Movies:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Terminator</li></ol><br/><p>Copyright © 2026 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this deep dive episode, Alec welcomes Asad Ramzanali, Director of AI and Tech Policy at the Vanderbilt Policy Accelerator, for a comprehensive discussion on the current landscape of AI policy and risk governance. Asad explains how AI’s broad and general-purpose nature requires sector-specific regulatory strategies, emphasizing that existing frameworks must adapt to both new and exacerbated risks. 
The conversation covers the challenges of benchmarking and evaluating large models, the balance between federal and state governance, and the ongoing debate over regulation versus innovation. Asad highlights the importance of direct regulatory interventions, robust enforcement mechanisms, and maintaining public trust, particularly as AI adoption accelerates across public and private sectors. The episode closes with reflections on economic disruption, business model risks, and future research priorities in AI policy.</p><p><strong>Summary:</strong></p><p><strong>Defining AI Risk:</strong> Asad stresses the need for adaptable, use-case-driven frameworks due to AI’s general-purpose scope.</p><p><strong>Sectoral Regulation:</strong> Different regulators must address AI risks where they specifically arise, especially in finance, health, and national security.</p><p><strong>Benchmarking Challenges:</strong> Evaluating AI models requires independent, evolving methodologies, not just self-reported metrics from companies.</p><p><strong>Regulation vs. Innovation:</strong> The current regulatory environment is far from overreaching, and well-crafted policies can actually foster safer innovation.</p><p><strong>Accountability and Public Trust:</strong> Clear liability, enforcement, and transparency are critical for democratic legitimacy and effective AI risk management.</p><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Vanderbilt Policy Accelerator</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Vanderbilt University</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>FDA (U.S. 
Food and Drug Administration)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>FCC (Federal Communications Commission)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>NIST (National Institute of Standards and Technology)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Anthropic</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>NOAA (National Oceanic and Atmospheric Administration)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Hamilton Project (Brookings Institution)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Global AI Ethics Institute</li></ol><br/><p><strong>Movies:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Terminator</li></ol><br/><p>Copyright © 2026 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">1f4c3764-456b-4193-b4b2-0b0d9402f841</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 10 Mar 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/1f4c3764-456b-4193-b4b2-0b0d9402f841.mp3" length="48221728" type="audio/mpeg"/><itunes:duration>50:14</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>84</itunes:episode><podcast:episode>84</podcast:episode><podcast:season>1</podcast:season></item><item><title>Rethinking Risk: Agentic AI, Ethical Insurance, and Tanner Hackett’s Journey with Counterpart</title><itunes:title>Rethinking Risk: Agentic AI, Ethical Insurance, 
and Tanner Hackett’s Journey with Counterpart</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes Tanner Hackett, CEO and founder of Counterpart, to discuss how AI and agentic technology are transforming the insurance industry. Tanner shares his unique entrepreneurial background, highlighting how data-driven decision-making has been the common thread across his ventures in e-commerce, marketing technology, and now, insurance. He explains how Counterpart leverages agentic AI to streamline underwriting, enhance transparency, and proactively manage risk for commercial clients, while emphasizing the importance of human expertise in high-stakes decisions. The conversation also touches on the ethical and regulatory challenges of integrating AI into insurance, including the need for change management within legacy organizations.
Tanner offers candid advice to startup founders and recommends resources for keeping pace with AI innovation, before closing with a spirited lightning round on topics ranging from startup fundraising events to Lord of the Rings.</p><p><strong>Summary:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Entrepreneurial Journey:</strong> Tanner Hackett traces his path from e-commerce to founding Counterpart, focusing on the power of data in reshaping industries.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Agentic AI in Insurance:</strong> Counterpart uses agentic AI to automate and improve insurance workflows, while maintaining a critical human-in-the-loop for complex risk assessment.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Ethics and Regulation:</strong> The episode explores the ethical complexities and regulatory lag in insurance AI, with commercial lines facing fewer immediate ethical dilemmas than personal lines.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Industry Transformation:</strong> Tanner highlights the slow but inevitable modernization of insurance, predicting both efficiency gains and significant workforce changes as AI adoption grows.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Startup Insights:</strong> Practical advice is given for founders on capital raising, rapid iteration, and the importance of sales and human psychology in entrepreneurial success.</li></ol><br/><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://yourcounterpart.com/" rel="noopener noreferrer" target="_blank">Counterpart</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a 
href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Troutman Street Audio</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Lazada</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Button</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Chubb</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Edmund Hillary Fellows (EHF)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>MIT</li></ol><br/><p><strong>Movies:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Minority Report</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Lord of the Rings</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes Tanner Hackett, CEO and founder of Counterpart, to discuss how AI and agentic technology are transforming the insurance industry. Tanner shares his unique entrepreneurial background, highlighting how data-driven decision-making has been the common thread across his ventures in e-commerce, marketing technology, and now, insurance.
He explains how Counterpart leverages agentic AI to streamline underwriting, enhance transparency, and proactively manage risk for commercial clients, while emphasizing the importance of human expertise in high-stakes decisions. The conversation also touches on the ethical and regulatory challenges of integrating AI into insurance, including the need for change management within legacy organizations. Tanner offers candid advice to startup founders and recommends resources for keeping pace with AI innovation, before closing with a spirited lightning round on topics ranging from startup fundraising events to Lord of the Rings.</p><p><strong>Summary:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Entrepreneurial Journey:</strong> Tanner Hackett traces his path from e-commerce to founding Counterpart, focusing on the power of data in reshaping industries.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Agentic AI in Insurance:</strong> Counterpart uses agentic AI to automate and improve insurance workflows, while maintaining a critical human-in-the-loop for complex risk assessment.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Ethics and Regulation:</strong> The episode explores the ethical complexities and regulatory lag in insurance AI, with commercial lines facing fewer immediate ethical dilemmas than personal lines.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Industry Transformation:</strong> Tanner highlights the slow but inevitable modernization of insurance, predicting both efficiency gains and significant workforce changes as AI adoption grows.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Startup Insights:</strong> Practical advice is given for founders on capital raising, rapid iteration, and the importance of sales and human psychology in 
entrepreneurial success.</li></ol><br/><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://yourcounterpart.com/" rel="noopener noreferrer" target="_blank">Counterpart</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Troutman Street Audio</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Lazada</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Button</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Chubb</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Edmund Hillary Fellows (EHF)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>MIT</li></ol><br/><p><strong>Movies:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Minority Report</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Lord of the Rings </li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">fff532a8-f84e-4b73-9002-1665432c0e03</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 03 Mar 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/fff532a8-f84e-4b73-9002-1665432c0e03.mp3" length="39487217" 
type="audio/mpeg"/><itunes:duration>41:08</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>83</itunes:episode><podcast:episode>83</podcast:episode><podcast:season>1</podcast:season></item><item><title>Deep Dive: Trustworthy, Multimodal, and Personalized AI Safety with Dr. Jindong Wang, Assistant Professor at William &amp; Mary</title><itunes:title>Deep Dive: Trustworthy, Multimodal, and Personalized AI Safety with Dr. Jindong Wang, Assistant Professor at William &amp; Mary</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this deep dive episode, Alec sits down with Dr. Jindong Wang, Assistant Professor at William &amp; Mary’s Data Science Department and former Microsoft researcher, to explore the nuanced landscape of trustworthy AI, multimodal safety, and personalized AI safety. Dr. Wang details his definition of trustworthy AI, focusing on privacy, robustness, transparency, and user-centric design, and explains why these are foundational for societal trust. The discussion delves into technical strategies such as differential privacy and federated learning, as well as the complex safety challenges arising from multimodal and multi-agent AI systems. Dr. Wang shares insights on emerging research, including benchmarks for risk management, adaptive and context-aware models, and the need for regulatory and ethical advances to keep pace with technological change. 
The episode concludes with an examination of the future risks of AI, the importance of AI literacy, and broad recommendations for education and governance as AI becomes more deeply woven into the fabric of society.</p><p><strong>Summary:</strong></p><p><strong>Trustworthy AI Principles:</strong> Dr. Wang articulates the critical elements of trustworthy AI, emphasizing privacy, interpretability, and ethical safeguards.</p><p><strong>Technical and Regulatory Strategies:</strong> The conversation covers advanced privacy-preserving techniques and the evolving regulatory frameworks needed for effective AI risk management.</p><p><strong>Multimodal and Multi-Agent Safety:</strong> Unique risks in systems combining text, image, audio, and agentic collaboration are discussed, alongside the need for improved benchmarks and alignment.</p><p><strong>Emergent Behaviors and Human Oversight:</strong> Dr. Wang highlights frameworks for detecting and correcting emergent behaviors, and underscores the ongoing necessity of human-in-the-loop governance and AI literacy.</p><p><strong>Future Risks and Education:</strong> The episode closes with reflections on cultural bias, open-source risks, and the urgent need for scalable, personalized AI education.</p><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://cdsp.wm.edu/data-science/people/wang-jindong.php" rel="noopener noreferrer" target="_blank">William &amp; Mary</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Microsoft</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" 
contenteditable="false"></span>Google</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Nvidia</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this deep dive episode, Alec sits down with Dr. Jindong Wang, Assistant Professor at William &amp; Mary’s Data Science Department and former Microsoft researcher, to explore the nuanced landscape of trustworthy AI, multimodal safety, and personalized AI safety. Dr. Wang details his definition of trustworthy AI, focusing on privacy, robustness, transparency, and user-centric design, and explains why these are foundational for societal trust. The discussion delves into technical strategies such as differential privacy and federated learning, as well as the complex safety challenges arising from multimodal and multi-agent AI systems. Dr. Wang shares insights on emerging research, including benchmarks for risk management, adaptive and context-aware models, and the need for regulatory and ethical advances to keep pace with technological change. The episode concludes with an examination of the future risks of AI, the importance of AI literacy, and broad recommendations for education and governance as AI becomes more deeply woven into the fabric of society.</p><p><strong>Summary:</strong></p><p><strong>Trustworthy AI Principles:</strong> Dr. 
Wang articulates the critical elements of trustworthy AI, emphasizing privacy, interpretability, and ethical safeguards.</p><p><strong>Technical and Regulatory Strategies:</strong> The conversation covers advanced privacy-preserving techniques and the evolving regulatory frameworks needed for effective AI risk management.</p><p><strong>Multimodal and Multi-Agent Safety:</strong> Unique risks in systems combining text, image, audio, and agentic collaboration are discussed, alongside the need for improved benchmarks and alignment.</p><p><strong>Emergent Behaviors and Human Oversight:</strong> Dr. Wang highlights frameworks for detecting and correcting emergent behaviors, and underscores the ongoing necessity of human-in-the-loop governance and AI literacy.</p><p><strong>Future Risks and Education:</strong> The episode closes with reflections on cultural bias, open-source risks, and the urgent need for scalable, personalized AI education.</p><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://cdsp.wm.edu/data-science/people/wang-jindong.php" rel="noopener noreferrer" target="_blank">William &amp; Mary</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Microsoft</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Nvidia</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid 
isPermaLink="false">15a837d8-56a1-4c63-b7ec-70134d66edc2</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 24 Feb 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/15a837d8-56a1-4c63-b7ec-70134d66edc2.mp3" length="48215457" type="audio/mpeg"/><itunes:duration>50:13</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>82</itunes:episode><podcast:episode>82</podcast:episode><podcast:season>1</podcast:season></item><item><title>Inside Future Proof—Reinventing Wealth Management and AI with Matt Middleton</title><itunes:title>Inside Future Proof—Reinventing Wealth Management and AI with Matt Middleton</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes Matt Middleton, Founder and CEO of Future Proof, to discuss the intersection of AI, wealth management, and events. Matt shares insights on how Future Proof is reshaping industry conferences by integrating technology and fostering real-world connections, emphasizing their upcoming Miami Beach event as a hub for AI-first wealth professionals. He reflects on his unconventional career path, the importance of mentorship, and the evolution of community-building within finance and technology. The conversation covers ethical considerations of AI, regulatory changes, and the rapid transformation of financial advice and compliance, highlighting the need for secure, enterprise-grade AI platforms. 
Matt also offers practical advice for young professionals, encouraging them to become AI experts to capitalize on emerging opportunities in wealth management.</p><p><strong>Summary:</strong></p><p><strong>Future Proof Events:</strong> Matt Middleton explains how Future Proof transforms wealth management conferences with technology and outdoor formats.</p><p><strong>AI in Finance:</strong> The discussion explores practical applications and ethical considerations of AI in wealth management and event operations.</p><p><strong>Regulatory Changes:</strong> Recent SEC guidance on AI and privacy is addressed, highlighting compliance challenges for financial advisors.</p><p><strong>Industry Consolidation:</strong> Medium-sized firms are likely to benefit most from AI adoption, while smaller firms risk falling behind.</p><p><strong>Career Advice:</strong> Young professionals are encouraged to specialize in AI to become indispensable in a rapidly evolving industry.</p><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://futureproofhq.com/citywide/" rel="noopener noreferrer" target="_blank">Future Proof</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><u><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></u></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>ETF.com</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Informa</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Bitwise Asset Management</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Money2020</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Ritholtz Wealth Management</li><li data-list="bullet"><span class="ql-ui" 
contenteditable="false"></span>SEC</li></ol><br/><p><strong>Movies: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Gladiator</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes Matt Middleton, Founder and CEO of Future Proof, to discuss the intersection of AI, wealth management, and events. Matt shares insights on how Future Proof is reshaping industry conferences by integrating technology and fostering real-world connections, emphasizing their upcoming Miami Beach event as a hub for AI-first wealth professionals. He reflects on his unconventional career path, the importance of mentorship, and the evolution of community-building within finance and technology. The conversation covers ethical considerations of AI, regulatory changes, and the rapid transformation of financial advice and compliance, highlighting the need for secure, enterprise-grade AI platforms. 
Matt also offers practical advice for young professionals, encouraging them to become AI experts to capitalize on emerging opportunities in wealth management.</p><p><strong>Summary:</strong></p><p><strong>Future Proof Events:</strong> Matt Middleton explains how Future Proof transforms wealth management conferences with technology and outdoor formats.</p><p><strong>AI in Finance:</strong> The discussion explores practical applications and ethical considerations of AI in wealth management and event operations.</p><p><strong>Regulatory Changes:</strong> Recent SEC guidance on AI and privacy is addressed, highlighting compliance challenges for financial advisors.</p><p><strong>Industry Consolidation:</strong> Medium-sized firms are likely to benefit most from AI adoption, while smaller firms risk falling behind.</p><p><strong>Career Advice:</strong> Young professionals are encouraged to specialize in AI to become indispensable in a rapidly evolving industry.</p><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://futureproofhq.com/citywide/" rel="noopener noreferrer" target="_blank">Future Proof</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><u><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></u></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>ETF.com</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Informa</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Bitwise Asset Management</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Money2020</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Ritholtz Wealth Management</li><li data-list="bullet"><span class="ql-ui" 
contenteditable="false"></span>SEC</li></ol><br/><p><strong>Movies: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Gladiator</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">4d7b19ac-484c-45e4-ba3f-3935a549b31d</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 17 Feb 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/4d7b19ac-484c-45e4-ba3f-3935a549b31d.mp3" length="48783884" type="audio/mpeg"/><itunes:duration>50:49</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>81</itunes:episode><podcast:episode>81</podcast:episode><podcast:season>1</podcast:season></item><item><title>Deep Dive into AI Governance, Risk Management, and Finance Innovation with Professor Agostino Capponi</title><itunes:title>Deep Dive into AI Governance, Risk Management, and Finance Innovation with Professor Agostino Capponi</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec sits down with Professor Agostino Capponi, Director of the Center for Digital Finance and Technologies at Columbia University, to explore the frontiers of AI governance, risk management, and explainability in finance. 
Professor Capponi outlines the Center’s mission to bridge research, education, and industry in digital finance, including AI, blockchain, and digital payments. The discussion covers emerging frameworks for AI-driven portfolio optimization, model risk, and the importance of transparency and explainability in agentic AI systems. They also address practical challenges like data privacy, regulatory compliance, cybersecurity, fairness, and the implications of delegating decision-making to AI agents within financial institutions. The episode concludes with insights on model concentration risk, convergence between AI and blockchain, and the evolving role of boards in AI governance.</p><p><strong>Summary:</strong></p><p><strong>AI in Finance:</strong> Professor Capponi discusses how AI is reshaping portfolio management, risk assessment, and the integration of digital finance technologies. </p><p><strong>Explainability &amp; Agentic AI:</strong> The conversation highlights agent-based frameworks that deliver transparency and rationales for AI-driven decisions in finance and prediction markets. </p><p><strong>Data Privacy &amp; Governance:</strong> The episode examines the challenges of data lineage, privacy-preserving techniques, and regulatory implications for financial institutions using AI. </p><p><strong>Cybersecurity &amp; Model Validation:</strong> Capponi offers perspectives on operational risk, preventing systemic threats, and the need for robust validation and benchmarking of non-deterministic AI models. 
</p><p><strong>Fairness, Regulation &amp; Blockchain:</strong> The dialogue explores frameworks for fairness in AI outcomes, the regulatory focus on models versus use cases, and the convergence of AI with blockchain-based payments and governance.</p><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.columbia.edu/" rel="noopener noreferrer" target="_blank">Columbia University</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><u><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></u></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://godigital.engineering.columbia.edu/" rel="noopener noreferrer" target="_blank">Center for Digital Finance and Technologies</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Polymarket</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>S&amp;P 500</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Bloomberg</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Thomson Reuters</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Amazon</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Walmart</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. 
Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec sits down with Professor Agostino Capponi, Director of the Center for Digital Finance and Technologies at Columbia University, to explore the frontiers of AI governance, risk management, and explainability in finance. Professor Capponi outlines the Center’s mission to bridge research, education, and industry in digital finance, including AI, blockchain, and digital payments. The discussion covers emerging frameworks for AI-driven portfolio optimization, model risk, and the importance of transparency and explainability in agentic AI systems. They also address practical challenges like data privacy, regulatory compliance, cybersecurity, fairness, and the implications of delegating decision-making to AI agents within financial institutions. The episode concludes with insights on model concentration risk, convergence between AI and blockchain, and the evolving role of boards in AI governance.</p><p><strong>Summary:</strong></p><p><strong>AI in Finance:</strong> Professor Capponi discusses how AI is reshaping portfolio management, risk assessment, and the integration of digital finance technologies. </p><p><strong>Explainability &amp; Agentic AI:</strong> The conversation highlights agent-based frameworks that deliver transparency and rationales for AI-driven decisions in finance and prediction markets. </p><p><strong>Data Privacy &amp; Governance:</strong> The episode examines the challenges of data lineage, privacy-preserving techniques, and regulatory implications for financial institutions using AI. </p><p><strong>Cybersecurity &amp; Model Validation:</strong> Capponi offers perspectives on operational risk, preventing systemic threats, and the need for robust validation and benchmarking of non-deterministic AI models. 
</p><p><strong>Fairness, Regulation &amp; Blockchain:</strong> The dialogue explores frameworks for fairness in AI outcomes, the regulatory focus on models versus use cases, and the convergence of AI with blockchain-based payments and governance.</p><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.columbia.edu/" rel="noopener noreferrer" target="_blank">Columbia University</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><u><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></u></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://godigital.engineering.columbia.edu/" rel="noopener noreferrer" target="_blank">Center for Digital Finance and Technologies</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Polymarket</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>S&amp;P 500</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Bloomberg</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Thomson Reuters</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Amazon</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Walmart</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">41b18d04-b3dc-4fcf-82d3-70dfff818e86</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 10 Feb 2026 01:00:00 -0400</pubDate><enclosure 
url="https://episodes.captivate.fm/episode/41b18d04-b3dc-4fcf-82d3-70dfff818e86.mp3" length="28271822" type="audio/mpeg"/><itunes:duration>58:54</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>80</itunes:episode><podcast:episode>80</podcast:episode><podcast:season>1</podcast:season></item><item><title>Stablecoins, AI Agents, and FinTech Innovation: A Conversation with Nik Milanović</title><itunes:title>Stablecoins, AI Agents, and FinTech Innovation: A Conversation with Nik Milanović</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes Nik Milanović, founder of This Week in FinTech, Stablecon, and General Partner at the FinTech Fund, for a candid conversation on the intersections of AI, fintech, and crypto. Nik shares his journey from aspiring lawyer and Stanford philosophy major to fintech leader, highlighting pivotal experiences at Google Pay and the value of community-driven insight in fast-moving industries. The discussion covers the rise of stablecoins, the importance of machine-readable payments for AI agents, and the ethical implications of rapid technological advancement. Nik emphasizes thoughtful adoption, regulatory caution, and the need to support those economically displaced by automation. 
The episode concludes with actionable advice for founders, investing insights, and a lighthearted lightning round on everything from Brooklyn tattoos to the impact of fintech newsletters.</p><p><strong>Summary:</strong></p><p><strong>Career Evolution:</strong> Nik describes his path from law and philosophy into fintech, driven by the potential to create impactful technology outside politics.</p><p><strong>Stablecoins &amp; FinTech:</strong> The conversation explores Stablecon’s mission and the growing relevance of stablecoins for financial services and mainstream adoption.</p><p><strong>AI &amp; Payments Convergence:</strong> Nik highlights how programmable, machine-readable payments will be essential for future AI agents, offering new possibilities but requiring careful governance.</p><p><strong>Ethical &amp; Regulatory Considerations:</strong> The importance of measured regulation and addressing economic displacement is discussed, with parallels drawn to previous tech disruptions.</p><p><strong>Founder &amp; Investor Insights:</strong> Nik shares advice for early-stage founders, focusing on solving specific customer problems and the evolving role of community in fintech innovation.</p><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.thisweekinfintech.com/" rel="noopener noreferrer" target="_blank">This Week in Fintech</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://stablecon.com/" rel="noopener noreferrer" target="_blank">Stablecon</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.thefintechfund.com/" rel="noopener 
noreferrer" target="_blank">The Fintech Fund</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Petal</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Citi</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Coastal Community Bank</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Circle</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Paxos</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Ripple</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Zerohash</li></ol><br/><p><strong>Books: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Man's Search for Meaning by Viktor Frankl</li></ol><br/><p><strong>Movies: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The Social Network</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes Nik Milanović, founder of This Week in FinTech, Stablecon, and General Partner at the FinTech Fund, for a candid conversation on the intersections of AI, fintech, and crypto. Nik shares his journey from aspiring lawyer and Stanford philosophy major to fintech leader, highlighting pivotal experiences at Google Pay and the value of community-driven insight in fast-moving industries. 
The discussion covers the rise of stablecoins, the importance of machine-readable payments for AI agents, and the ethical implications of rapid technological advancement. Nik emphasizes thoughtful adoption, regulatory caution, and the need to support those economically displaced by automation. The episode concludes with actionable advice for founders, investing insights, and a lighthearted lightning round on everything from Brooklyn tattoos to the impact of fintech newsletters.</p><p><strong>Summary:</strong></p><p><strong>Career Evolution:</strong> Nik describes his path from law and philosophy into fintech, driven by the potential to create impactful technology outside politics.</p><p><strong>Stablecoins &amp; FinTech:</strong> The conversation explores Stablecon’s mission and the growing relevance of stablecoins for financial services and mainstream adoption.</p><p><strong>AI &amp; Payments Convergence:</strong> Nik highlights how programmable, machine-readable payments will be essential for future AI agents, offering new possibilities but requiring careful governance.</p><p><strong>Ethical &amp; Regulatory Considerations:</strong> The importance of measured regulation and addressing economic displacement is discussed, with parallels drawn to previous tech disruptions.</p><p><strong>Founder &amp; Investor Insights:</strong> Nik shares advice for early-stage founders, focusing on solving specific customer problems and the evolving role of community in fintech innovation.</p><p><strong>Referenced in this episode:</strong></p><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.thisweekinfintech.com/" rel="noopener noreferrer" target="_blank">This Week in Fintech</a></li><li 
data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://stablecon.com/" rel="noopener noreferrer" target="_blank">Stablecon</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.thefintechfund.com/" rel="noopener noreferrer" target="_blank">The Fintech Fund</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Petal</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Citi</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Coastal Community Bank</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Circle</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Paxos</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Ripple</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Zerohash</li></ol><br/><p><strong>Books: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Man's Search for Meaning by Viktor Frankl</li></ol><br/><p><strong>Movies: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The Social Network</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">3c8b86a0-baf9-4871-a3f4-fb46a5d97083</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 03 Feb 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/3c8b86a0-baf9-4871-a3f4-fb46a5d97083.mp3" length="18258773" 
type="audio/mpeg"/><itunes:duration>38:02</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>79</itunes:episode><podcast:episode>79</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI Governance Deep Dive with Michael Hind, Distinguished Research Staff Member at IBM</title><itunes:title>AI Governance Deep Dive with Michael Hind, Distinguished Research Staff Member at IBM</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. (aicrisk.com), interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes back Michael Hind, Distinguished Research Staff Member at IBM, for a special deep dive focused exclusively on the evolving field of AI governance. Michael defines AI governance from both enterprise and societal perspectives, highlighting the challenges of managing risk in rapidly evolving AI systems. He shares insights from his recent research, including the development of the AI Risk Atlas and model risk evaluation tools, and discusses the complexities of testing AI models and the importance of accurate benchmarking. The conversation covers the state of regulation, the intersection of insurance and AI risk, the role of transparency and explainability, and emerging technical solutions like entity tagging in LLMs. 
Alec and Michael conclude by emphasizing the need for industry-driven governance and enhanced transparency through tools such as Granite Guardian and Benchmark Cards.</p><p><strong>Summary:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Defining AI Governance</strong>: Michael Hind explains the dual perspectives of AI governance—enterprise risk management and societal impact—and discusses the need for clear taxonomies.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Taxonomies and Risk Evaluation</strong>: IBM’s AI Risk Atlas and model risk evaluation tools help organizations identify, test, and monitor relevant AI risks for specific use cases.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Regulation and Industry Responsibility</strong>: With global regulation slowing, Michael argues for proactive enterprise governance, transparency, and industry benchmarks to fill the gap.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Testing, Explainability, and Transparency</strong>: The episode explores the limits of model evaluation, the challenge of explainability, and the need for public transparency, including the Stanford Transparency Index.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Insurance and Technical Advances</strong>: The dialogue addresses how insurance may eventually adapt to AI risk, and highlights new approaches like entity tagging and fault-tolerant generative computing.</li></ol><br/><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.ibm.com/us-en" rel="noopener noreferrer" target="_blank">IBM</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" 
target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>NIST</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>MIT</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Stanford University</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Notre Dame</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Drainpipe IO</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. (aicrisk.com), interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes back Michael Hind, Distinguished Research Staff Member at IBM, for a special deep dive focused exclusively on the evolving field of AI governance. Michael defines AI governance from both enterprise and societal perspectives, highlighting the challenges of managing risk in rapidly evolving AI systems. He shares insights from his recent research, including the development of the AI Risk Atlas and model risk evaluation tools, and discusses the complexities of testing AI models and the importance of accurate benchmarking. The conversation covers the state of regulation, the intersection of insurance and AI risk, the role of transparency and explainability, and emerging technical solutions like entity tagging in LLMs. 
Alec and Michael conclude by emphasizing the need for industry-driven governance and enhanced transparency through tools such as Granite Guardian and Benchmark Cards.</p><p><strong>Summary:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Defining AI Governance</strong>: Michael Hind explains the dual perspectives of AI governance—enterprise risk management and societal impact—and discusses the need for clear taxonomies.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Taxonomies and Risk Evaluation</strong>: IBM’s AI Risk Atlas and model risk evaluation tools help organizations identify, test, and monitor relevant AI risks for specific use cases.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Regulation and Industry Responsibility</strong>: With global regulation slowing, Michael argues for proactive enterprise governance, transparency, and industry benchmarks to fill the gap.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Testing, Explainability, and Transparency</strong>: The episode explores the limits of model evaluation, the challenge of explainability, and the need for public transparency, including the Stanford Transparency Index.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Insurance and Technical Advances</strong>: The dialogue addresses how insurance may eventually adapt to AI risk, and highlights new approaches like entity tagging and fault-tolerant generative computing.</li></ol><br/><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.ibm.com/us-en" rel="noopener noreferrer" target="_blank">IBM</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" 
target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>NIST</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>MIT</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Stanford University</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Notre Dame</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Drainpipe IO</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">8b6976b1-fdb8-49cb-875e-88206177a33c</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 27 Jan 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/8b6976b1-fdb8-49cb-875e-88206177a33c.mp3" length="24400893" type="audio/mpeg"/><itunes:duration>50:50</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>78</itunes:episode><podcast:episode>78</podcast:episode><podcast:season>1</podcast:season></item><item><title>From Banker to Digital Innovator: Eric Cook, Founder and CEO of Cook Technology Solutions and Chief Mentor at The LinkedBanker, on Harnessing AI for the Future of Community Banks</title><itunes:title>From Banker to Digital Innovator: Eric Cook, Founder and CEO of Cook Technology Solutions and Chief Mentor at The LinkedBanker, on Harnessing AI for the Future of Community Banks</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial 
Intelligence Risk, Inc. (aicrisk.com), interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec sits down with Eric Cook, founder and CEO of Cook Technology Solutions (part of WSI since 2007) and founder and chief mentor at The LinkedBanker, to discuss digital transformation and AI adoption in the banking sector. Eric shares his personal journey from being an “accidental banker” to a leader in digital innovation for community banks, emphasizing the importance of personal branding and leveraging technology for strategic advantage. The conversation covers the evolving role of AI in banking, including practical use cases, the need for robust training, and the critical importance of human oversight in AI-driven decisions. Eric offers insights into helping banks overcome fear and resistance to AI by focusing on business objectives and aligning technology with organizational strategy. 
The episode concludes with a lightning round, highlighting Eric’s perspectives on industry trends, technology, and personal passions.</p><p><strong>Summary:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Digital Transformation in Banking:</strong> Eric Cook shares his transition from traditional banking to digital consulting, underscoring the need for banks to embrace strategic online presence and personal branding.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>AI Adoption &amp; Training:</strong> The discussion highlights the gap between AI usage and formal training in banks, stressing the importance of education, policies, and software guardrails.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Risk Management &amp; Ethics:</strong> Eric advocates for a “human in the loop” approach, ensuring all AI outputs are critically reviewed for compliance, security, and fairness.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Future-Proofing with AI:</strong> Banks are encouraged to adopt diverse AI tools and workflows, moving beyond single-platform reliance to maximize innovation and efficiency.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Practical Assessment:</strong> Eric outlines his company’s approach to AI assessments, focusing on business goals and overcoming the “failure of imagination” in AI strategy development.</li></ol><br/><p><strong>Companies/Organizations: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.poweredbywsi.com/Speaking-and-Consulting/Consulting-Services" rel="noopener noreferrer" target="_blank">Cook Technology Solutions</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.poweredbywsi.com/" rel="noopener noreferrer" 
target="_blank">WSI</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.thelinkedbanker.com/" rel="noopener noreferrer" target="_blank">The LinkedBanker</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>International Franchise Association</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Michigan Bankers Association</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Marketing AI Institute</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>JP Morgan</li></ol><br/><p><strong>Books: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.amazon.com/Think-like-Brand-Not-Bank/dp/1544531230" rel="noopener noreferrer" target="_blank">Think Like a Brand, Not a Bank</a> by Allison Netzer and Liz High</li></ol><br/><p><strong>Movies: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Blade Runner</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. (aicrisk.com), interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec sits down with Eric Cook, founder and CEO of Cook Technology Solutions (part of WSI since 2007) and founder and chief mentor at The LinkedBanker, to discuss digital transformation and AI adoption in the banking sector. 
Eric shares his personal journey from being an “accidental banker” to a leader in digital innovation for community banks, emphasizing the importance of personal branding and leveraging technology for strategic advantage. The conversation covers the evolving role of AI in banking, including practical use cases, the need for robust training, and the critical importance of human oversight in AI-driven decisions. Eric offers insights into helping banks overcome fear and resistance to AI by focusing on business objectives and aligning technology with organizational strategy. The episode concludes with a lightning round, highlighting Eric’s perspectives on industry trends, technology, and personal passions.</p><p><strong>Summary:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Digital Transformation in Banking:</strong> Eric Cook shares his transition from traditional banking to digital consulting, underscoring the need for banks to embrace strategic online presence and personal branding.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>AI Adoption &amp; Training:</strong> The discussion highlights the gap between AI usage and formal training in banks, stressing the importance of education, policies, and software guardrails.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Risk Management &amp; Ethics:</strong> Eric advocates for a “human in the loop” approach, ensuring all AI outputs are critically reviewed for compliance, security, and fairness.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Future-Proofing with AI:</strong> Banks are encouraged to adopt diverse AI tools and workflows, moving beyond single-platform reliance to maximize innovation and efficiency.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Practical Assessment:</strong> Eric outlines his company’s 
approach to AI assessments, focusing on business goals and overcoming the “failure of imagination” in AI strategy development.</li></ol><br/><p><strong>Companies/Organizations: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.poweredbywsi.com/Speaking-and-Consulting/Consulting-Services" rel="noopener noreferrer" target="_blank">Cook Technology Solutions</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.poweredbywsi.com/" rel="noopener noreferrer" target="_blank">WSI</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.thelinkedbanker.com/" rel="noopener noreferrer" target="_blank">The LinkedBanker</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>International Franchise Association</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Michigan Bankers Association</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Marketing AI Institute</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>JP Morgan</li></ol><br/><p><strong>Books: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.amazon.com/Think-like-Brand-Not-Bank/dp/1544531230" rel="noopener noreferrer" target="_blank">Think Like a Brand, Not a Bank</a> by Allison Netzer and Liz High</li></ol><br/><p><strong>Movies: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Blade Runner</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid 
isPermaLink="false">0b86730e-7678-4e03-ab51-955fc268c025</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 20 Jan 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/0b86730e-7678-4e03-ab51-955fc268c025.mp3" length="21394303" type="audio/mpeg"/><itunes:duration>44:34</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>77</itunes:episode><podcast:episode>77</podcast:episode><podcast:season>1</podcast:season></item><item><title>Future of Digital Labor: How AI-Driven Automation Is Set to Disrupt the Global Labor Market with Jesse Anglen, CEO of Ruh.ai</title><itunes:title>Future of Digital Labor: How AI-Driven Automation Is Set to Disrupt the Global Labor Market with Jesse Anglen, CEO of Ruh.ai</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec sits down with Jesse Anglen, CEO of Ruh.ai, to discuss the rapid transformation of digital labor through AI agents and the profound impact on the modern workforce. Jesse shares his unique journey from the construction industry to leading AI companies, emphasizing how curiosity and self-driven learning fueled his success. The conversation delves into the rise of AI-powered automation, the potential for unprecedented labor disruption, and the ethical responsibilities business leaders face in this new era. 
Jesse also highlights practical advice for young professionals, stressing the importance of developing empathy, adaptability, and AI management skills as traditional roles evolve. The episode rounds out with a lively lightning round, touching on startup culture, open source innovation, and the ongoing need for lifelong learning in a fast-changing landscape.</p><p><strong>Summary:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Career Evolution</strong>: Jesse Anglen’s path from construction to AI leadership showcases the value of adaptability and self-education in tech.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>AI-Powered Automation</strong>: The discussion highlights how Ruh.ai and similar platforms are transforming businesses by deploying digital labor agents.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Ethical Concerns</strong>: Jesse addresses the risk of massive unemployment and urges proactive ethical considerations as AI adoption accelerates.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Future-Proof Skills</strong>: Empathy, project management, and AI literacy are key competencies for thriving in an AI-driven economy.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Continuous Learning</strong>: Lifelong learning and engagement with open source AI communities are essential for staying relevant and competitive.</li></ol><br/><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.ruh.ai/" rel="noopener noreferrer" target="_blank">Ruh.ai</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.rapidinnovation.io/" rel="noopener noreferrer" target="_blank">Rapid Innovation</a></li><li data-list="bullet"><span 
class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Anthropic</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Ethereum</li></ol><br/><p><strong>Books: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The E-Myth by Michael E. Gerber</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Principles by Ray Dalio</li></ol><br/><p><strong>Movies: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The Matrix</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Back to the Future</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec sits down with Jesse Anglen, CEO of Ruh.ai, to discuss the rapid transformation of digital labor through AI agents and the profound impact on the modern workforce. Jesse shares his unique journey from the construction industry to leading AI companies, emphasizing how curiosity and self-driven learning fueled his success. The conversation delves into the rise of AI-powered automation, the potential for unprecedented labor disruption, and the ethical responsibilities business leaders face in this new era. 
Jesse also highlights practical advice for young professionals, stressing the importance of developing empathy, adaptability, and AI management skills as traditional roles evolve. The episode rounds out with a lively lightning round, touching on startup culture, open source innovation, and the ongoing need for lifelong learning in a fast-changing landscape.</p><p><strong>Summary:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Career Evolution</strong>: Jesse Anglen’s path from construction to AI leadership showcases the value of adaptability and self-education in tech.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>AI-Powered Automation</strong>: The discussion highlights how Ruh.ai and similar platforms are transforming businesses by deploying digital labor agents.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Ethical Concerns</strong>: Jesse addresses the risk of massive unemployment and urges proactive ethical considerations as AI adoption accelerates.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Future-Proof Skills</strong>: Empathy, project management, and AI literacy are key competencies for thriving in an AI-driven economy.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Continuous Learning</strong>: Lifelong learning and engagement with open source AI communities are essential for staying relevant and competitive.</li></ol><br/><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.ruh.ai/" rel="noopener noreferrer" target="_blank">Ruh.ai</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.rapidinnovation.io/" rel="noopener noreferrer" target="_blank">Rapid Innovation</a></li><li data-list="bullet"><span 
class="ql-ui" contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Anthropic</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Ethereum</li></ol><br/><p><strong>Books: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The E-Myth by Michael E. Gerber</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Principles by Ray Dalio</li></ol><br/><p><strong>Movies: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The Matrix</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Back to the Future</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">a7c35047-29b3-48d0-bd44-994d13dcabb4</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 13 Jan 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/a7c35047-29b3-48d0-bd44-994d13dcabb4.mp3" length="36075415" type="audio/mpeg"/><itunes:duration>37:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>76</itunes:episode><podcast:episode>76</podcast:episode><podcast:season>1</podcast:season></item><item><title>How Do You Manage the Risk of Getting Run Over by Technology Change? Insights from Becky Reed, COO of BankSocial and Co-Founder &amp; Board Chair of Pure IT</title><itunes:title>How Do You Manage the Risk of Getting Run Over by Technology Change? 
Insights from Becky Reed, COO of BankSocial and Co-Founder &amp; Board Chair of Pure IT</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec Crawford sits down with Becky Reed, Chief Operating Officer of BankSocial and co-founder and board chair of Pure IT Credit Union Service Organization, to explore the intersection of innovation, technology, and risk in the credit union sector. Becky shares her career journey from operations to CEO, emphasizing the importance of continuous innovation and the adoption of emerging technologies such as cloud computing, blockchain, and stablecoins. She discusses how credit unions can leverage these advancements to remain relevant, warning of the significant risks of not adapting, including potential revenue loss and deposit outflows. The conversation covers best practices in consumer protection, regulatory compliance, and the convergence of AI and blockchain in finance. 
Becky also offers thoughtful advice for students entering the workforce and provides candid perspectives in a fun lightning round segment.</p><p><strong>Summary:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Career Innovation:</strong> Becky Reed’s journey highlights the shift from traditional operations to technology-driven leadership in credit unions.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Blockchain &amp; Stablecoins:</strong> The episode details how decentralized finance and stablecoins can disrupt traditional banking models and why credit unions must adapt.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Risk of Inaction:</strong> Becky stresses that the greatest risk is for credit unions to remain passive, risking lost revenue and deposits as technology evolves rapidly.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Regulatory Compliance:</strong> She advocates for applying existing consumer protection and privacy regulations to new technologies, ensuring ethical and secure adoption.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Humanities in the AI Era:</strong> For students and professionals, Becky advises focusing on human-centric skills that AI cannot replicate, emphasizing the importance of adaptability and lifelong learning.</li></ol><br/><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://banksocial.io/" rel="noopener noreferrer" target="_blank">BankSocial</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://pureitcuso.com/" rel="noopener noreferrer" target="_blank">Pure IT Credit Union Service Organization</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><u><a 
href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></u></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Lone Star Credit Union</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Arlington Federal Credit Union</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Circle</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Katena Labs</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Amazon (AWS)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Robinhood</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Coinbase</li></ol><br/><p><strong>Books: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Credit Unions and DeFi, a Financial Renaissance by Becky Reed</li></ol><br/><p><strong>Movies: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Barbie</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec Crawford sits down with Becky Reed, Chief Operating Officer of BankSocial and co-founder and board chair of Pure IT Credit Union Service Organization, to explore the intersection of innovation, technology, and risk in the credit union sector. 
Becky shares her career journey from operations to CEO, emphasizing the importance of continuous innovation and the adoption of emerging technologies such as cloud computing, blockchain, and stablecoins. She discusses how credit unions can leverage these advancements to remain relevant, warning of the significant risks of not adapting, including potential revenue loss and deposit outflows. The conversation covers best practices in consumer protection, regulatory compliance, and the convergence of AI and blockchain in finance. Becky also offers thoughtful advice for students entering the workforce and provides candid perspectives in a fun lightning round segment.</p><p><strong>Summary:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Career Innovation:</strong> Becky Reed’s journey highlights the shift from traditional operations to technology-driven leadership in credit unions.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Blockchain &amp; Stablecoins:</strong> The episode details how decentralized finance and stablecoins can disrupt traditional banking models and why credit unions must adapt.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Risk of Inaction:</strong> Becky stresses that the greatest risk is for credit unions to remain passive, risking lost revenue and deposits as technology evolves rapidly.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Regulatory Compliance:</strong> She advocates for applying existing consumer protection and privacy regulations to new technologies, ensuring ethical and secure adoption.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Humanities in the AI Era:</strong> For students and professionals, Becky advises focusing on human-centric skills that AI cannot replicate, emphasizing the importance of adaptability and lifelong 
learning.</li></ol><br/><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://banksocial.io/" rel="noopener noreferrer" target="_blank">BankSocial</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://pureitcuso.com/" rel="noopener noreferrer" target="_blank">Pure IT Credit Union Service Organization</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><u><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></u></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Lone Star Credit Union</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Arlington Federal Credit Union</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Circle</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Katena Labs</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Amazon (AWS)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Robinhood</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Coinbase</li></ol><br/><p><strong>Books: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Credit Unions and DeFi, a Financial Renaissance by Becky Reed</li></ol><br/><p><strong>Movies: </strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Barbie</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">63b52448-83f4-4768-9c3b-44bf21300cfc</guid><itunes:image 
href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 06 Jan 2026 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/63b52448-83f4-4768-9c3b-44bf21300cfc.mp3" length="13719946" type="audio/mpeg"/><itunes:duration>28:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>75</itunes:episode><podcast:episode>75</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI and Drug Discovery: Javier Tordable, CEO and Founder of Pauling.AI</title><itunes:title>AI and Drug Discovery: Javier Tordable, CEO and Founder of Pauling.AI</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec speaks with Javier Tordable, CEO and Founder of Pauling.AI, about the transformative potential of AI in drug discovery. Javier shares his journey from mathematics and technology roles at Microsoft and Google to launching Pauling.AI, highlighting the importance of innovation and mentorship in shaping his career. He explains how Pauling.AI aims to automate the process of moving from scientific hypothesis to molecule testing, thereby accelerating and reducing the cost of bringing new medicines to market. The discussion covers the ethical considerations of AI in life sciences, the realistic risks of misuse, and the challenges posed by today’s advanced content generation capabilities. 
Javier also provides practical advice for pharma CEOs considering AI adoption and reflects on the evolving role of regulators in healthcare and technology.</p><p><strong>Summary:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>AI for Drug Discovery:</strong> Javier Tordable outlines how Pauling.AI leverages automation and language models to speed up the process of identifying and testing new therapeutics.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Career and Mentorship:</strong> Javier’s background in mathematics, technology, and leadership at Google informs his approach to building impactful AI solutions in life sciences.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Ethical Considerations:</strong> The episode addresses the dual-use nature of AI in science, noting both its potential for positive impact and the manageable risks of misuse.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Industry Disruption:</strong> Javier advises traditional pharma leaders to partner with AI experts rather than attempt to independently build state-of-the-art capabilities.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Regulatory Perspective:</strong> The conversation highlights the complexity regulators face and suggests global best practices, particularly with reference to the U.S. 
FDA and China’s accelerated clinical trial environment.</li></ol><br/><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.pauling.ai/" rel="noopener noreferrer" target="_blank">Pauling.AI</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><u><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></u></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Microsoft</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Nintendo</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Meditech</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Schrodinger</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Future House</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Edison Scientific</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>FDA</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Netflix</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Warner Brothers</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Amazon</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Apple</li></ol><br/><p><strong>Books:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Isaac Asimov’s Foundation series</li></ol><br/><p><strong>Movies:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The Matrix Reloaded</li></ol><br/><p><strong>TV Shows: </strong></p><ol><li 
data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Foundation</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec speaks with Javier Tordable, CEO and Founder of Pauling.AI, about the transformative potential of AI in drug discovery. Javier shares his journey from mathematics and technology roles at Microsoft and Google to launching Pauling.AI, highlighting the importance of innovation and mentorship in shaping his career. He explains how Pauling.AI aims to automate the process of moving from scientific hypothesis to molecule testing, thereby accelerating and reducing the cost of bringing new medicines to market. The discussion covers the ethical considerations of AI in life sciences, the realistic risks of misuse, and the challenges posed by today’s advanced content generation capabilities. 
Javier also provides practical advice for pharma CEOs considering AI adoption and reflects on the evolving role of regulators in healthcare and technology.</p><p><strong>Summary:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>AI for Drug Discovery:</strong> Javier Tordable outlines how Pauling.AI leverages automation and language models to speed up the process of identifying and testing new therapeutics.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Career and Mentorship:</strong> Javier’s background in mathematics, technology, and leadership at Google informs his approach to building impactful AI solutions in life sciences.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Ethical Considerations:</strong> The episode addresses the dual-use nature of AI in science, noting both its potential for positive impact and the manageable risks of misuse.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Industry Disruption:</strong> Javier advises traditional pharma leaders to partner with AI experts rather than attempt to independently build state-of-the-art capabilities.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Regulatory Perspective:</strong> The conversation highlights the complexity regulators face and suggests global best practices, particularly with reference to the U.S. 
FDA and China’s accelerated clinical trial environment.</li></ol><br/><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.pauling.ai/" rel="noopener noreferrer" target="_blank">Pauling.AI</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><u><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></u></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Microsoft</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Nintendo</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Meditech</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Schrodinger</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Future House</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Edison Scientific</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>FDA</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Netflix</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Warner Brothers</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Amazon</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Apple</li></ol><br/><p><strong>Books:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Isaac Asimov’s Foundation series</li></ol><br/><p><strong>Movies:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The Matrix Reloaded</li></ol><br/><p><strong>TV Shows: </strong></p><ol><li 
data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Foundation</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">d0ef48fb-14df-46c0-9dee-28fafdccf340</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 30 Dec 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/d0ef48fb-14df-46c0-9dee-28fafdccf340.mp3" length="14610826" type="audio/mpeg"/><itunes:duration>30:26</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>74</itunes:episode><podcast:episode>74</podcast:episode><podcast:season>1</podcast:season></item><item><title>The AI Human Paradox with Bruce Randall, Enterprise and Startup Sales Growth Strategist, Reiki Master, and Meditation Practitioner</title><itunes:title>The AI Human Paradox with Bruce Randall, Enterprise and Startup Sales Growth Strategist, Reiki Master, and Meditation Practitioner</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec Crawford welcomes Bruce Randall, an enterprise and startup sales growth strategist specializing in AI, cloud, and cybersecurity, and the author of the upcoming book, The AI Human Paradox. Bruce shares insights from his diverse career journey, starting as an entrepreneur during college and progressing through pivotal roles in software and AI sales. 
He discusses the intersection of AI and quantum technologies, highlighting both the transformative potential and the associated risks, particularly around ethics and the speed of innovation. Bruce also emphasizes the irreplaceable qualities of human intuition and consciousness, drawing from his experiences as a Reiki master and meditation practitioner. The conversation rounds out with practical advice for students entering the workforce, focusing on the importance of technical fluency, sales creativity, and maintaining a human-centric approach in an AI-driven world.</p><p><strong>Summary:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Career Journey</strong>: Bruce Randall reflects on his progression from student entrepreneur to AI-focused sales strategist.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Human and AI Boundaries</strong>: The discussion explores AI’s potential, the uniqueness of human consciousness, and the ethical dilemmas at the frontier of technology.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Quantum and AI Synergy</strong>: Bruce expresses both excitement and caution about the convergence of AI and quantum computing, noting the vast unknowns and risks.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Sales and Human Skills</strong>: He underscores the ongoing value of sales roles, creativity, and technical fluency, especially for students navigating an AI-impacted job market.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Ethics and Governance</strong>: The episode addresses the importance of ethical guardrails, international competition, and the need for AI regulation to ensure technology serves humanity.</li></ol><br/><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" 
contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Johnson and Wales University</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>MIT</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The Global Ethical AI Institute</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>LinkedIn</li></ol><br/><p><strong>Books:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The AI Human Paradox by Bruce Randall</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>SPIN Selling</li></ol><br/><p><strong>Movies:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>RoboCop</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Schindler's List</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Jurassic Park</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec Crawford welcomes Bruce Randall, an enterprise and startup sales growth strategist specializing in AI, cloud, and cybersecurity, and the author of the upcoming book, The AI Human Paradox. Bruce shares insights from his diverse career journey, starting as an entrepreneur during college and progressing through pivotal roles in software and AI sales. 
He discusses the intersection of AI and quantum technologies, highlighting both the transformative potential and the associated risks, particularly around ethics and the speed of innovation. Bruce also emphasizes the irreplaceable qualities of human intuition and consciousness, drawing from his experiences as a Reiki master and meditation practitioner. The conversation rounds out with practical advice for students entering the workforce, focusing on the importance of technical fluency, sales creativity, and maintaining a human-centric approach in an AI-driven world.</p><p><strong>Summary:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Career Journey</strong>: Bruce Randall reflects on his progression from student entrepreneur to AI-focused sales strategist.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Human and AI Boundaries</strong>: The discussion explores AI’s potential, the uniqueness of human consciousness, and the ethical dilemmas at the frontier of technology.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Quantum and AI Synergy</strong>: Bruce expresses both excitement and caution about the convergence of AI and quantum computing, noting the vast unknowns and risks.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Sales and Human Skills</strong>: He underscores the ongoing value of sales roles, creativity, and technical fluency, especially for students navigating an AI-impacted job market.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Ethics and Governance</strong>: The episode addresses the importance of ethical guardrails, international competition, and the need for AI regulation to ensure technology serves humanity.</li></ol><br/><p><strong>Companies/Organizations:</strong></p><ol><li data-list="bullet"><span class="ql-ui" 
contenteditable="false"></span><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Johnson and Wales University</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>MIT</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The Global Ethical AI Institute</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>LinkedIn</li></ol><br/><p><strong>Books:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The AI Human Paradox by Bruce Randall</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>SPIN Selling</li></ol><br/><p><strong>Movies:</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>RoboCop</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Schindler's List</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Jurassic Park</li></ol><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">cbba8ae8-57f7-40fe-9799-4583dedb7647</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 23 Dec 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/cbba8ae8-57f7-40fe-9799-4583dedb7647.mp3" length="11171440" type="audio/mpeg"/><itunes:duration>23:16</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>73</itunes:episode><podcast:episode>73</podcast:episode><podcast:season>1</podcast:season></item><item><title>How to Choose the Right AI Tools for Your Business: Insights from RaiseAI’s CEO William 
Hollis</title><itunes:title>How to Choose the Right AI Tools for Your Business: Insights from RaiseAI’s CEO William Hollis</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes William Hollis, Founder and CEO of RaiseAI, to discuss the intersection of artificial intelligence and real estate investing. William shares his journey from young computer enthusiast to self-taught software engineer and entrepreneur, highlighting how his technical background informed his approach to capital raising in commercial real estate. The conversation delves into the challenges and opportunities of adopting AI tools in traditionally manual real estate processes, with William emphasizing the importance of retaining human judgment and ethical oversight. Alec and William explore the risks of bias in AI-driven decision-making and the potential for AI to both democratize and transform capital raising and due diligence. 
The episode concludes with a lively lightning round, touching on automation, investment tools, and future trends such as blockchain in real estate.</p><p><strong>Summary:</strong></p><p><strong>Career Journey:</strong>&nbsp;William Hollis shares how self-taught software skills led him to enterprise AI and, ultimately, real estate tech entrepreneurship.&nbsp;</p><p><strong>AI in Real Estate:</strong>&nbsp;Discussion on how AI can streamline capital raising, investor targeting, and due diligence while addressing legacy industry challenges.&nbsp;</p><p><strong>Ethical Concerns:</strong>&nbsp;William highlights the importance of maintaining human oversight to mitigate bias and ensure fair outcomes in AI-powered real estate decisions.&nbsp;</p><p><strong>Investor Advice:</strong>&nbsp;Practical guidance for new investors, stressing the value of personal relationships, thorough research, and aligning investment strategies with individual goals.&nbsp;</p><p><strong>Future Trends:</strong>&nbsp;Insights into blockchain, automation, and the evolving role of AI in shaping the future of real estate and job markets.</p><p><strong>Companies/Organizations:</strong> </p><ul><li><a href="https://theraiseai.com/" rel="noopener noreferrer" target="_blank">RaiseAI</a></li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Disney</li><li>Nike</li></ul><br/><p><strong>Movies: </strong></p><ul><li>Ocean's Eleven</li><li>WALL-E</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn.</p><p>In this episode, Alec welcomes William Hollis, Founder and CEO of RaiseAI, to discuss the intersection of artificial intelligence and real estate investing. William shares his journey from young computer enthusiast to self-taught software engineer and entrepreneur, highlighting how his technical background informed his approach to capital raising in commercial real estate. The conversation delves into the challenges and opportunities of adopting AI tools in traditionally manual real estate processes, with William emphasizing the importance of retaining human judgment and ethical oversight. Alec and William explore the risks of bias in AI-driven decision-making and the potential for AI to both democratize and transform capital raising and due diligence. The episode concludes with a lively lightning round, touching on automation, investment tools, and future trends such as blockchain in real estate.</p><p><strong>Summary:</strong></p><p><strong>Career Journey:</strong>&nbsp;William Hollis shares how self-taught software skills led him to enterprise AI and, ultimately, real estate tech entrepreneurship.&nbsp;</p><p><strong>AI in Real Estate:</strong>&nbsp;Discussion on how AI can streamline capital raising, investor targeting, and due diligence while addressing legacy industry challenges.&nbsp;</p><p><strong>Ethical Concerns:</strong>&nbsp;William highlights the importance of maintaining human oversight to mitigate bias and ensure fair outcomes in AI-powered real estate decisions.&nbsp;</p><p><strong>Investor Advice:</strong>&nbsp;Practical guidance for new investors, stressing the value of personal relationships, thorough research, and aligning investment strategies with individual goals.&nbsp;</p><p><strong>Future Trends:</strong>&nbsp;Insights into blockchain, automation, and the evolving role of AI in shaping the future of real estate and job markets.</p><p><strong>Companies/Organizations:</strong> </p><ul><li><a 
href="https://theraiseai.com/" rel="noopener noreferrer" target="_blank">RaiseAI</a></li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Disney</li><li>Nike</li></ul><br/><p><strong>Movies: </strong></p><ul><li>Ocean's Eleven</li><li>WALL-E</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">c8226033-39d2-4647-a14b-8120af385bca</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 16 Dec 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/c8226033-39d2-4647-a14b-8120af385bca.mp3" length="10794231" type="audio/mpeg"/><itunes:duration>22:29</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>72</itunes:episode><podcast:episode>72</podcast:episode><podcast:season>1</podcast:season></item><item><title>Building the Future with Agentic AI: A Conversation with Yaakov Sash of Casso AI</title><itunes:title>Building the Future with Agentic AI: A Conversation with Yaakov Sash of Casso AI</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes Yaakov Sash, Founder and CEO of Casso AI, for an in-depth conversation about the evolving intersection of AI, finance, and enterprise technology. 
Yaakov shares insights from his extensive background in banking and digital asset technology, recounting his journey through major financial institutions and the genesis of Casso AI. He explains the concepts of “vibe coding” and “vibe crafting,” highlighting the growing need for structured AI-driven development in enterprise settings. The discussion covers the impact of AI on traditional software roles and the shifting requirements for computer science graduates, as well as the ethical considerations and risks of advanced AI systems. Alec and Yaakov also touch on influential thinkers such as Ray Kurzweil and Eliezer Yudkowsky, and conclude with a lightning round covering everything from Brooklyn pizza to the enduring value of software design principles.</p><p><strong>Summary:</strong></p><ul><li><strong>Career Evolution:</strong> Yaakov Sash shares his trajectory from major banks to founding Casso AI, emphasizing adaptability and industry cycles.</li><li><strong>Vibe Coding vs. Crafting:</strong> The episode details the transition from rapid AI-generated app development to thoughtful, specification-driven enterprise solutions.</li><li><strong>AI’s Workforce Impact:</strong> Yaakov discusses the changing role of developers, predicting a shift toward orchestrating and reviewing AI-generated code.</li><li><strong>Ethical and Safety Concerns:</strong> The conversation explores the risks of AI deception, safety in development, and the importance of maintaining oversight.</li><li><strong>Founders’ Mindset:</strong> Building enduring value and pivoting with purpose is highlighted as key advice for entrepreneurs navigating emerging technologies.</li></ul><br/><p><strong>Companies/Organizations:</strong></p><ul><li><a href="https://casso.ai/" rel="noopener noreferrer" target="_blank">Casso AI</a></li><li><a href="https://www.aicrisk.com/"
rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Credit Suisse</li><li>UBS</li><li>Merrill Lynch</li><li>Bank of America</li><li>JPMC</li><li>Circle</li><li>Microsoft</li><li>OpenAI (Codex)</li><li>Anthropic (Claude)</li></ul><br/><p><strong>Books:</strong></p><ul><li>The Singularity Is Near by Ray Kurzweil</li><li>The Singularity Is Nearer by Ray Kurzweil</li><li>Works by Eliezer Yudkowsky</li><li>Works by Ray Bradbury</li></ul><br/><p><strong>Movies:</strong></p><ul><li>Speed</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes Yaakov Sash, Founder and CEO of Casso AI, for an in-depth conversation about the evolving intersection of AI, finance, and enterprise technology. Yaakov shares insights from his extensive background in banking and digital asset technology, recounting his journey through major financial institutions and the genesis of Casso AI. He explains the concepts of “vibe coding” and “vibe crafting,” highlighting the growing need for structured AI-driven development in enterprise settings. The discussion covers the impact of AI on traditional software roles and the shifting requirements for computer science graduates, as well as the ethical considerations and risks of advanced AI systems. 
Alec and Yaakov also touch on influential thinkers such as Ray Kurzweil and Eliezer Yudkowsky, and conclude with a lightning round covering everything from Brooklyn pizza to the enduring value of software design principles.</p><p><strong>Summary:</strong></p><ul><li><strong>Career Evolution:</strong> Yaakov Sash shares his trajectory from major banks to founding Casso AI, emphasizing adaptability and industry cycles.</li><li><strong>Vibe Coding vs. Crafting:</strong> The episode details the transition from rapid AI-generated app development to thoughtful, specification-driven enterprise solutions.</li><li><strong>AI’s Workforce Impact:</strong> Yaakov discusses the changing role of developers, predicting a shift toward orchestrating and reviewing AI-generated code.</li><li><strong>Ethical and Safety Concerns:</strong> The conversation explores the risks of AI deception, safety in development, and the importance of maintaining oversight.</li><li><strong>Founders’ Mindset:</strong> Building enduring value and pivoting with purpose is highlighted as key advice for entrepreneurs navigating emerging technologies.</li></ul><br/><p><strong>Companies/Organizations:</strong></p><ul><li><a href="https://casso.ai/" rel="noopener noreferrer" target="_blank">Casso AI</a></li><li><a href="https://www.aicrisk.com/"
rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Credit Suisse</li><li>UBS</li><li>Merrill Lynch</li><li>Bank of America</li><li>JPMC</li><li>Circle</li><li>Microsoft</li><li>OpenAI (Codex)</li><li>Anthropic (Claude)</li></ul><br/><p><strong>Books:</strong></p><ul><li>The Singularity Is Near by Ray Kurzweil</li><li>The Singularity Is Nearer by Ray Kurzweil</li><li>Works by Eliezer Yudkowsky</li><li>Works by Ray Bradbury</li></ul><br/><p><strong>Movies:</strong></p><ul><li>Speed</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">65a504c7-9ea4-46c6-909f-0894f7232c3b</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 09 Dec 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/65a504c7-9ea4-46c6-909f-0894f7232c3b.mp3" length="18281552" type="audio/mpeg"/><itunes:duration>38:05</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>71</itunes:episode><podcast:episode>71</podcast:episode><podcast:season>1</podcast:season></item><item><title>Exploring the Future of Wealth Management with Yvonne Kanner, Managing Partner and Co-founder of TRIA Capital Partners</title><itunes:title>Exploring the Future of Wealth Management with Yvonne Kanner, Managing Partner and Co-founder of TRIA Capital Partners</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn.</p><p>In this episode, Alec Crawford interviews Yvonne Kanner, Managing Partner and Co-founder of TRIA Capital Partners, about the future of wealth management and the impact of AI. Yvonne shares her journey from Mexico City to co-founding a venture capital firm, highlighting her experience with entrepreneurial ventures within Merrill Lynch. The discussion delves into the challenges and opportunities in wealth management, emphasizing the role of innovation and leadership in navigating industry changes. Yvonne expresses concerns about AI's ethical implications and the potential for certain populations to be left behind. The conversation concludes with advice for financial advisors on strategic AI adoption in wealth management.</p><p><strong>Summary:</strong></p><p><strong>Yvonne's Journey:</strong>&nbsp;Yvonne Kanner shares her path from Mexico City to co-founding TRIA Capital Partners.</p><p><strong>Wealth Management Evolution:</strong>&nbsp;The episode explores the changing landscape of wealth management and AI's role.</p><p><strong>Leadership and Innovation:</strong>&nbsp;Emphasis on the importance of leadership and innovation in adapting to industry shifts.</p><p><strong>AI Ethical Concerns:</strong>&nbsp;Yvonne discusses potential ethical issues and societal impacts of AI in finance.</p><p><strong>Strategic AI Adoption:</strong>&nbsp;Advice for advisors on the importance of strategic planning and data management in AI implementation.</p><p><strong>Companies/Organizations: </strong></p><ul><li><a href="https://triacapitalpartners.com/" rel="noopener noreferrer" target="_blank">TRIA Capital Partners</a></li><li>Merrill Lynch</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Movies: </strong></p><ul><li>Snow White and the Seven Dwarfs</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, 
Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec Crawford interviews Yvonne Kanner, Managing Partner and Co-founder of TRIA Capital Partners, about the future of wealth management and the impact of AI. Yvonne shares her journey from Mexico City to co-founding a venture capital firm, highlighting her experience with entrepreneurial ventures within Merrill Lynch. The discussion delves into the challenges and opportunities in wealth management, emphasizing the role of innovation and leadership in navigating industry changes. Yvonne expresses concerns about AI's ethical implications and the potential for certain populations to be left behind. 
The conversation concludes with advice for financial advisors on strategic AI adoption in wealth management.</p><p><strong>Summary:</strong></p><p><strong>Yvonne's Journey:</strong>&nbsp;Yvonne Kanner shares her path from Mexico City to co-founding TRIA Capital Partners.</p><p><strong>Wealth Management Evolution:</strong>&nbsp;The episode explores the changing landscape of wealth management and AI's role.</p><p><strong>Leadership and Innovation:</strong>&nbsp;Emphasis on the importance of leadership and innovation in adapting to industry shifts.</p><p><strong>AI Ethical Concerns:</strong>&nbsp;Yvonne discusses potential ethical issues and societal impacts of AI in finance.</p><p><strong>Strategic AI Adoption:</strong>&nbsp;Advice for advisors on the importance of strategic planning and data management in AI implementation.</p><p><strong>Companies/Organizations: </strong></p><ul><li><a href="https://triacapitalpartners.com/" rel="noopener noreferrer" target="_blank">TRIA Capital Partners</a></li><li>Merrill Lynch</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Movies: </strong></p><ul><li>Snow White and the Seven Dwarfs</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">aef52046-a0b9-4f92-a98e-58e3d5ab6fbf</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 02 Dec 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/aef52046-a0b9-4f92-a98e-58e3d5ab6fbf.mp3" length="35222779" 
type="audio/mpeg"/><itunes:duration>36:41</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>70</itunes:episode><podcast:episode>70</podcast:episode><podcast:season>1</podcast:season></item><item><title>Pioneering AI Safety: Founding Insights from Geordie AI&apos;s Hannah Darley</title><itunes:title>Pioneering AI Safety: Founding Insights from Geordie AI&apos;s Hannah Darley</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec welcomes Hannah Darley, Co-Founder and Chief AI Officer at Geordie AI, to discuss the intricacies of AI agent risk management. Hannah shares her unique journey from intelligence operations to co-founding Geordie AI, highlighting the importance of structured analytics and vulnerability in leadership. She elaborates on how Geordie AI aims to mitigate risks associated with AI agents while unlocking their innovative potential. The conversation delves into ethical AI frameworks, emphasizing the need for multiple judgment layers in decision-making processes. 
Hannah also shares insights into her transition from employee to founder, underscoring the importance of time management and team impact.</p><p><strong>Summary:</strong></p><p><strong>AI Agent Risk Management:</strong>&nbsp;Hannah Darley explains how Geordie AI helps businesses manage AI agent risks while fostering innovation.</p><p><strong>Leadership and Vulnerability:</strong>&nbsp;Discusses the role of vulnerability in leadership and its impact on effective decision-making.</p><p><strong>Ethical AI Frameworks:</strong>&nbsp;Emphasizes the necessity of multiple judgment layers in creating ethical AI systems.</p><p><strong>Human Behavior in AI Governance:</strong>&nbsp;Highlights the influence of human behavior on AI system effectiveness and governance.</p><p><strong>Founder's Journey:</strong>&nbsp;Insights into the challenges and adjustments when transitioning from an employee to a founder.</p><p><strong>Companies/Organizations: </strong></p><ul><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li><a href="https://www.geordie.ai/" rel="noopener noreferrer" target="_blank">Geordie AI</a></li><li>Darktrace</li><li>Microsoft</li><li>Salesforce</li><li>Zendesk</li></ul><br/><p><strong>Books: </strong></p><ul><li>Daring Greatly by Brené Brown</li></ul><br/><p><strong>Movies: </strong></p><ul><li>The Lord of the Rings Extended Edition</li></ul><br/><p><strong>TV Shows: </strong></p><ul><li>Gilmore Girls</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn.</p><p>In this episode, Alec welcomes Hannah Darley, Co-Founder and Chief AI Officer at Geordie AI, to discuss the intricacies of AI agent risk management. Hannah shares her unique journey from intelligence operations to co-founding Geordie AI, highlighting the importance of structured analytics and vulnerability in leadership. She elaborates on how Geordie AI aims to mitigate risks associated with AI agents while unlocking their innovative potential. The conversation delves into ethical AI frameworks, emphasizing the need for multiple judgment layers in decision-making processes. Hannah also shares insights into her transition from employee to founder, underscoring the importance of time management and team impact.</p><p><strong>Summary:</strong></p><p><strong>AI Agent Risk Management:</strong>&nbsp;Hannah Darley explains how Geordie AI helps businesses manage AI agent risks while fostering innovation.</p><p><strong>Leadership and Vulnerability:</strong>&nbsp;Discusses the role of vulnerability in leadership and its impact on effective decision-making.</p><p><strong>Ethical AI Frameworks:</strong>&nbsp;Emphasizes the necessity of multiple judgment layers in creating ethical AI systems.</p><p><strong>Human Behavior in AI Governance:</strong>&nbsp;Highlights the influence of human behavior on AI system effectiveness and governance.</p><p><strong>Founder's Journey:</strong>&nbsp;Insights into the challenges and adjustments when transitioning from an employee to a founder.</p><p><strong>Companies/Organizations: </strong></p><ul><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li><a href="https://www.geordie.ai/" rel="noopener noreferrer" target="_blank">Geordie AI</a></li><li>Darktrace</li><li>Microsoft</li><li>Salesforce</li><li>Zendesk</li></ul><br/><p><strong>Books: </strong></p><ul><li>Daring Greatly by Brené Brown</li></ul><br/><p><strong>Movies: 
</strong></p><ul><li>The Lord of the Rings Extended Edition</li></ul><br/><p><strong>TV Shows: </strong></p><ul><li>Gilmore Girls</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">6cfb7433-865a-4ed1-ba92-0275b215b654</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 25 Nov 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/6cfb7433-865a-4ed1-ba92-0275b215b654.mp3" length="13402297" type="audio/mpeg"/><itunes:duration>27:55</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>69</itunes:episode><podcast:episode>69</podcast:episode><podcast:season>1</podcast:season></item><item><title>Exploring AI Use Cases with Jonathan Kvarfordt, Head of GTM Growth Momentum, Founder of GTM AI Academy, and Co-founder of AI Business Network</title><itunes:title>Exploring AI Use Cases with Jonathan Kvarfordt, Head of GTM Growth Momentum, Founder of GTM AI Academy, and Co-founder of AI Business Network</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec Crawford speaks with Jonathan Kvarfordt, Head of GTM Growth Momentum, Founder of GTM AI Academy, and Co-founder of the AI Business Network. Jonathan shares his journey from sales and enablement to becoming a leader in AI strategy and growth. 
The discussion delves into the importance of aligning AI strategies with business objectives, highlighting the need for data access and ethical considerations in AI applications. They explore the challenges and opportunities of implementing AI in regulated industries, emphasizing the importance of trust and compliance. Jonathan also offers insights into practical AI use cases and the significance of educating leaders and teams in leveraging AI effectively.</p><p><strong>Summary:</strong></p><p><strong>AI Strategy Alignment:</strong>&nbsp;Jonathan emphasizes integrating AI strategy with overarching business goals.</p><p><strong>Data Access Challenges:</strong>&nbsp;AI's potential is hindered by siloed data structures in organizations.</p><p><strong>Ethical Considerations:</strong>&nbsp;The episode discusses the need for transparency and ethical guidelines in AI use.</p><p><strong>Practical Use Cases:</strong>&nbsp;Jonathan shares examples of AI's impact on CRM processes and time optimization.</p><p><strong>Trust and Compliance:</strong>&nbsp;Highlighting the importance of trust in AI tools, especially in regulated industries.</p><p><strong>Companies/Organizations: </strong></p><ul><li><a href="https://www.momentum.io/ignite" rel="noopener noreferrer" target="_blank">GTM Growth Momentum</a></li><li><a href="https://www.gtmaiacademy.com/home" rel="noopener noreferrer" target="_blank">GTM AI Academy</a></li><li>AI Business Network</li><li>JP Morgan Chase</li><li>Momentum</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Movies:</strong> </p><ul><li>Beverly Hills Cop</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. 
aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec Crawford speaks with Jonathan Kvarfordt, Head of GTM Growth Momentum, Founder of GTM AI Academy, and Co-founder of the AI Business Network. Jonathan shares his journey from sales and enablement to becoming a leader in AI strategy and growth. The discussion delves into the importance of aligning AI strategies with business objectives, highlighting the need for data access and ethical considerations in AI applications. They explore the challenges and opportunities of implementing AI in regulated industries, emphasizing the importance of trust and compliance. Jonathan also offers insights into practical AI use cases and the significance of educating leaders and teams in leveraging AI effectively.</p><p><strong>Summary:</strong></p><p><strong>AI Strategy Alignment:</strong>&nbsp;Jonathan emphasizes integrating AI strategy with overarching business goals.</p><p><strong>Data Access Challenges:</strong>&nbsp;AI's potential is hindered by siloed data structures in organizations.</p><p><strong>Ethical Considerations:</strong>&nbsp;The episode discusses the need for transparency and ethical guidelines in AI use.</p><p><strong>Practical Use Cases:</strong>&nbsp;Jonathan shares examples of AI's impact on CRM processes and time optimization.</p><p><strong>Trust and Compliance:</strong>&nbsp;Highlighting the importance of trust in AI tools, especially in regulated industries.</p><p><strong>Companies/Organizations: </strong></p><ul><li><a href="https://www.momentum.io/ignite" rel="noopener noreferrer" target="_blank">GTM Growth Momentum</a></li><li><a href="https://www.gtmaiacademy.com/home" rel="noopener noreferrer" target="_blank">GTM AI Academy</a></li><li>AI Business Network</li><li>JP Morgan 
Chase</li><li>Momentum</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Movies:</strong> </p><ul><li>Beverly Hills Cop</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">c2c53782-12ec-4591-8689-f73739c2f778</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Mon, 17 Nov 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/c2c53782-12ec-4591-8689-f73739c2f778.mp3" length="27825324" type="audio/mpeg"/><itunes:duration>28:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>68</itunes:episode><podcast:episode>68</podcast:episode><podcast:season>1</podcast:season></item><item><title>Cybersecurity and AI: A Two-Edged Sword — A Conversation with Jocelyn King, Founder of Smarter Online Safety</title><itunes:title>Cybersecurity and AI: A Two-Edged Sword — A Conversation with Jocelyn King, Founder of Smarter Online Safety</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Jocelyn King, a leading figure in cybersecurity and founder of Smarter Online Safety, shares her compelling journey from being a cybercrime victim to becoming an advocate for online safety. 
The episode underscores the importance of proactive cybersecurity measures and the impact of AI in both aiding and combating cyber threats. Jocelyn discusses her personal experiences with cyber intrusions, the creation of her comprehensive checklist for digital protection, and the role of AI in contemporary cybersecurity challenges. She emphasizes the need for both individuals and companies to adopt robust security practices and highlights the psychological tactics employed by cybercriminals. Additionally, the episode covers the ethical considerations surrounding AI use, particularly in preventing data breaches and maintaining privacy.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Jocelyn King is a renowned cybersecurity expert and the founder of Smarter Online Safety.</li><li><strong>Cybersecurity Journey</strong>: Jocelyn shares her personal experience of being targeted by cybercriminals, which led to her mission to educate others on online safety.</li><li><strong>AI and Cybercrime</strong>: The discussion highlights how AI is used by both cybercriminals and cybersecurity experts, changing the landscape of digital threats.</li><li><strong>Digital Self-Defense</strong>: Jocelyn outlines key steps individuals can take to secure their digital lives, emphasizing the importance of basic security measures.</li><li><strong>Ethical AI Use</strong>: The conversation touches on the ethical responsibilities of companies in using AI technology safely and transparently.</li></ul><br/><p><strong>Companies/Organizations</strong>:</p><ul><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Smarter Online Safety</li><li>Intel</li><li>FBI</li><li>Silicon Valley AI Crypto</li><li>Waters Technology</li><li>Ferrari</li><li>Cal Berkeley</li><li>Hong Kong University</li></ul><br/><p><strong>Books</strong>:</p><ul><li>Getting Things Done by David Allen</li><li>Raising AI by 
De Kai</li></ul><br/><p><strong>TV Shows</strong>:</p><ul><li>Ted Lasso</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Jocelyn King, a leading figure in cybersecurity and founder of Smarter Online Safety, shares her compelling journey from being a cybercrime victim to becoming an advocate for online safety. The episode underscores the importance of proactive cybersecurity measures and the impact of AI in both aiding and combating cyber threats. Jocelyn discusses her personal experiences with cyber intrusions, the creation of her comprehensive checklist for digital protection, and the role of AI in contemporary cybersecurity challenges. She emphasizes the need for both individuals and companies to adopt robust security practices and highlights the psychological tactics employed by cybercriminals. 
Additionally, the episode covers the ethical considerations surrounding AI use, particularly in preventing data breaches and maintaining privacy.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Jocelyn King is a renowned cybersecurity expert and the founder of Smarter Online Safety.</li><li><strong>Cybersecurity Journey</strong>: Jocelyn shares her personal experience of being targeted by cybercriminals, which led to her mission to educate others on online safety.</li><li><strong>AI and Cybercrime</strong>: The discussion highlights how AI is used by both cybercriminals and cybersecurity experts, changing the landscape of digital threats.</li><li><strong>Digital Self-Defense</strong>: Jocelyn outlines key steps individuals can take to secure their digital lives, emphasizing the importance of basic security measures.</li><li><strong>Ethical AI Use</strong>: The conversation touches on the ethical responsibilities of companies in using AI technology safely and transparently.</li></ul><br/><p><strong>Companies/Organizations</strong>:</p><ul><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Smarter Online Safety</li><li>Intel</li><li>FBI</li><li>Silicon Valley AI Crypto</li><li>Waters Technology</li><li>Ferrari</li><li>Cal Berkeley</li><li>Hong Kong University</li></ul><br/><p><strong>Books</strong>:</p><ul><li>Getting Things Done by David Allen</li><li>Raising AI by De Kai</li></ul><br/><p><strong>TV Shows</strong>:</p><ul><li>Ted Lasso</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">a34fd81e-a8e2-448c-871e-45bb67cf58bd</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Mon, 10 Nov 2025 01:00:00 -0400</pubDate><enclosure
url="https://episodes.captivate.fm/episode/a34fd81e-a8e2-448c-871e-45bb67cf58bd.mp3" length="37539944" type="audio/mpeg"/><itunes:duration>39:06</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>67</itunes:episode><podcast:episode>67</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI Empowering Humans, Not Replacing Them — Manuj Aggarwal, Founder &amp; Chief Innovation Officer at TetraNoodle Technologies</title><itunes:title>AI Empowering Humans, Not Replacing Them — Manuj Aggarwal, Founder &amp; Chief Innovation Officer at TetraNoodle Technologies</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec Crawford interviews Manuj Aggarwal, Founder and Chief Innovation Officer at TetraNoodle Technologies. Manuj shares his journey from a small town in India to becoming a successful entrepreneur in Canada, highlighting the challenges and opportunities he encountered. They discuss the disruptive potential of AI and how it can enhance human capabilities rather than replace them. Manuj emphasizes the importance of aligning personal and organizational goals using AI and the concept of creating digital twins to optimize decision-making. 
The conversation also touches on the future of work and the role of creativity in an AI-driven world.</p><p><strong>Summary:</strong></p><p><strong>Entrepreneurial Journey:</strong>&nbsp;Manuj Aggarwal shares his inspiring journey from India to Canada and his path to founding TetraNoodle Technologies.</p><p><strong>AI Empowerment:</strong>&nbsp;Emphasizes how AI can enhance human capabilities and align personal and organizational goals.</p><p><strong>Digital Twins:</strong>&nbsp;Discusses creating digital twins of executives to optimize decision-making in organizations.</p><p><strong>Future of Work:</strong>&nbsp;Explores the potential for AI to transform work and unleash human creativity.</p><p><strong>AI Disruption:</strong>&nbsp;Highlights concerns about AI's rapid changes and the need for responsible management to prevent social unrest.</p><p><strong>Companies/Organizations:</strong></p><ul><li>TetraNoodle Technologies</li><li>Artificial Intelligence Risk, Inc.</li><li>Microsoft</li><li>IBM</li><li>T-Mobile</li><li>Pearson Education</li><li>Bridgewater</li></ul><br/><p><strong>Books: </strong></p><ul><li>"Principles" by Ray Dalio</li></ul><br/><p><strong>Movies: </strong></p><ul><li>Arrival</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec Crawford interviews Manuj Aggarwal, Founder and Chief Innovation Officer at TetraNoodle Technologies. Manuj shares his journey from a small town in India to becoming a successful entrepreneur in Canada, highlighting the challenges and opportunities he encountered. 
They discuss the disruptive potential of AI and how it can enhance human capabilities rather than replace them. Manuj emphasizes the importance of aligning personal and organizational goals using AI and the concept of creating digital twins to optimize decision-making. The conversation also touches on the future of work and the role of creativity in an AI-driven world.</p><p><strong>Summary:</strong></p><p><strong>Entrepreneurial Journey:</strong>&nbsp;Manuj Aggarwal shares his inspiring journey from India to Canada and his path to founding TetraNoodle Technologies.</p><p><strong>AI Empowerment:</strong>&nbsp;Emphasizes how AI can enhance human capabilities and align personal and organizational goals.</p><p><strong>Digital Twins:</strong>&nbsp;Discusses creating digital twins of executives to optimize decision-making in organizations.</p><p><strong>Future of Work:</strong>&nbsp;Explores the potential for AI to transform work and unleash human creativity.</p><p><strong>AI Disruption:</strong>&nbsp;Highlights concerns about AI's rapid changes and the need for responsible management to prevent social unrest.</p><p><strong>Companies/Organizations:</strong></p><ul><li>TetraNoodle Technologies</li><li>Artificial Intelligence Risk, Inc.</li><li>Microsoft</li><li>IBM</li><li>T-Mobile</li><li>Pearson Education</li><li>Bridgewater</li></ul><br/><p><strong>Books: </strong></p><ul><li>"Principles" by Ray Dalio</li></ul><br/><p><strong>Movies: </strong></p><ul><li>Arrival</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">0b1dff2b-42e3-46d1-a80c-53c37dfc12c1</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Mon, 03 Nov 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/0b1dff2b-42e3-46d1-a80c-53c37dfc12c1.mp3" length="35389546" 
type="audio/mpeg"/><itunes:duration>36:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>66</itunes:episode><podcast:episode>66</podcast:episode><podcast:season>1</podcast:season></item><item><title>Managing Your AI Reputation: Insights from George Swetlitz, Co-Founder of RightResponse AI</title><itunes:title>Managing Your AI Reputation: Insights from George Swetlitz, Co-Founder of RightResponse AI</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>George Swetlitz, co-founder of RightResponse AI (<a href="https://www.rightresponseai.com/" rel="noopener noreferrer" target="_blank">https://www.rightresponseai.com/</a>), brings a wealth of experience from consulting and operational roles, including significant leadership positions at McKinsey and Sara Lee Corporation. This episode delves into the evolving landscape of AI in reputation management, emphasizing the transition from traditional customer feedback to AI-driven insights. Swetlitz discusses the implications of AI-generated responses and the importance of maintaining brand consistency in digital interactions. He explores the challenges of adapting to generative engine optimization, highlighting how companies can leverage AI to enhance their digital footprint. 
Additionally, the conversation touches on ethical concerns, including privacy and the balance between automation and human interaction in customer service.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;George Swetlitz is a seasoned consultant and business leader, now co-founder of RightResponse AI.</li><li><strong>AI in Reputation Management:</strong>&nbsp;RightResponse AI uses AI to enhance how businesses manage and respond to customer reviews, focusing on brand consistency.</li><li><strong>Generative Engine Optimization:</strong>&nbsp;Swetlitz highlights the shift from traditional SEO to generative engine optimization, crucial for reaching younger audiences.</li><li><strong>Ethical AI Concerns:</strong>&nbsp;The episode covers privacy and the ethical use of AI, emphasizing the need for real personalization over fake personalization.</li><li><strong>AI Implementation Advice:</strong>&nbsp;Swetlitz advises CEOs to start with simple AI projects and choose vendors carefully to manage risks effectively.</li></ul><br/><p><strong>Companies/Organizations:</strong> </p><ul><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li><a href="https://www.rightresponseai.com/" rel="noopener noreferrer" target="_blank">RightResponse AI</a></li><li>McKinsey</li><li>Sara Lee Corporation</li><li>Harvard</li><li>J.P. Morgan</li><li>Trader Joe's</li><li>Crate and Barrel</li></ul><br/><p><strong>Books:</strong> </p><ul><li>Thinking, Fast and Slow by Daniel Kahneman</li></ul><br/><p><strong>Movies: </strong></p><ul><li>Casablanca</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. 
aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>George Swetlitz, co-founder of RightResponse AI (<a href="https://www.rightresponseai.com/" rel="noopener noreferrer" target="_blank">https://www.rightresponseai.com/</a>), brings a wealth of experience from consulting and operational roles, including significant leadership positions at McKinsey and Sara Lee Corporation. This episode delves into the evolving landscape of AI in reputation management, emphasizing the transition from traditional customer feedback to AI-driven insights. Swetlitz discusses the implications of AI-generated responses and the importance of maintaining brand consistency in digital interactions. He explores the challenges of adapting to generative engine optimization, highlighting how companies can leverage AI to enhance their digital footprint. 
Additionally, the conversation touches on ethical concerns, including privacy and the balance between automation and human interaction in customer service.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;George Swetlitz is a seasoned consultant and business leader, now co-founder of RightResponse AI.</li><li><strong>AI in Reputation Management:</strong>&nbsp;RightResponse AI uses AI to enhance how businesses manage and respond to customer reviews, focusing on brand consistency.</li><li><strong>Generative Engine Optimization:</strong>&nbsp;Swetlitz highlights the shift from traditional SEO to generative engine optimization, crucial for reaching younger audiences.</li><li><strong>Ethical AI Concerns:</strong>&nbsp;The episode covers privacy and the ethical use of AI, emphasizing the need for real personalization over fake personalization.</li><li><strong>AI Implementation Advice:</strong>&nbsp;Swetlitz advises CEOs to start with simple AI projects and choose vendors carefully to manage risks effectively.</li></ul><br/><p><strong>Companies/Organizations:</strong> </p><ul><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li><a href="https://www.rightresponseai.com/" rel="noopener noreferrer" target="_blank">RightResponse AI</a></li><li>McKinsey</li><li>Sara Lee Corporation</li><li>Harvard</li><li>J.P. 
Morgan</li><li>Trader Joe's</li><li>Crate and Barrel</li></ul><br/><p><strong>Books:</strong> </p><ul><li>Thinking, Fast and Slow by Daniel Kahneman</li></ul><br/><p><strong>Movies: </strong></p><ul><li>Casablanca</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">96e10425-4051-4934-9d8c-605fb85e8e19</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 28 Oct 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/96e10425-4051-4934-9d8c-605fb85e8e19.mp3" length="34595006" type="audio/mpeg"/><itunes:duration>36:02</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>65</itunes:episode><podcast:episode>65</podcast:episode><podcast:season>1</podcast:season></item><item><title>Sean Neville — Co-Founder of Catena Labs &amp; Circle: The Convergence Trajectory of AI and Blockchain</title><itunes:title>Sean Neville — Co-Founder of Catena Labs &amp; Circle: The Convergence Trajectory of AI and Blockchain</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec Crawford interviews Sean Neville, Co-founder and CEO of Catena Labs, as well as Co-founder and Board Member at Circle. Sean shares the origin story of Circle, reflecting on its mission to democratize finance using internet technology. 
The discussion covers the transformative potential of AI and blockchain, emphasizing the need for robust AI governance frameworks to ensure safety and compliance. Sean highlights the importance of programmable money and the integration of stablecoins within financial systems. The episode concludes with Sean offering insights on various topics from blockchain security to AI's role in economic transactions.</p><p><strong>Summary:</strong></p><p><strong>Circle's Origin:</strong>&nbsp;Sean Neville discusses the founding of Circle and its goal to democratize finance using global internet standards.</p><p><strong>AI and Blockchain:</strong>&nbsp;The convergence of AI and blockchain technologies is explored, highlighting their potential to transform global financial systems.</p><p><strong>Programmable Money:</strong>&nbsp;Sean explains the concept of programmable money and its implications for financial transactions.</p><p><strong>AI Governance:</strong>&nbsp;The necessity of developing AI governance frameworks for safe and compliant economic participation is emphasized.</p><p><strong>Final Insights:</strong>&nbsp;Sean shares perspectives on topics ranging from blockchain security to AI agents in finance.</p><p><strong>Companies/Organizations:</strong> </p><ul><li><a href="https://catenalabs.com/" rel="noopener noreferrer" target="_blank">Catena Labs</a></li><li><a href="https://investor.circle.com/overview/default.aspx" rel="noopener noreferrer" target="_blank">Circle</a></li><li>Brightcove</li><li>General Catalyst</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong> </p><ul><li>Superagency by Reid Hoffman and Greg Beato</li></ul><br/><p><strong>Movies:</strong> </p><ul><li>The Social Network</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec 
Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec Crawford interviews Sean Neville, Co-founder and CEO of Catena Labs, as well as Co-founder and Board Member at Circle. Sean shares the origin story of Circle, reflecting on its mission to democratize finance using internet technology. The discussion covers the transformative potential of AI and blockchain, emphasizing the need for robust AI governance frameworks to ensure safety and compliance. Sean highlights the importance of programmable money and the integration of stablecoins within financial systems. The episode concludes with Sean offering insights on various topics from blockchain security to AI's role in economic transactions.</p><p><strong>Summary:</strong></p><p><strong>Circle's Origin:</strong>&nbsp;Sean Neville discusses the founding of Circle and its goal to democratize finance using global internet standards.</p><p><strong>AI and Blockchain:</strong>&nbsp;The convergence of AI and blockchain technologies is explored, highlighting their potential to transform global financial systems.</p><p><strong>Programmable Money:</strong>&nbsp;Sean explains the concept of programmable money and its implications for financial transactions.</p><p><strong>AI Governance:</strong>&nbsp;The necessity of developing AI governance frameworks for safe and compliant economic participation is emphasized.</p><p><strong>Final Insights:</strong>&nbsp;Sean shares perspectives on topics ranging from blockchain security to AI agents in finance.</p><p><strong>Companies/Organizations:</strong> </p><ul><li><a href="https://catenalabs.com/" rel="noopener noreferrer" target="_blank">Catena Labs</a></li><li><a 
href="https://investor.circle.com/overview/default.aspx" rel="noopener noreferrer" target="_blank">Circle</a></li><li>Brightcove</li><li>General Catalyst</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong> </p><ul><li>Superagency by Reid Hoffman and Greg Beato</li></ul><br/><p><strong>Movies:</strong> </p><ul><li>The Social Network</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">9b34a7d4-2cff-4f47-8d39-ecad8d1080c5</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 21 Oct 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/9b34a7d4-2cff-4f47-8d39-ecad8d1080c5.mp3" length="34088019" type="audio/mpeg"/><itunes:duration>35:30</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>64</itunes:episode><podcast:episode>64</podcast:episode><podcast:season>1</podcast:season></item><item><title>Andriy Burkov, PhD in AI — How Artificial Intelligence Is Changing Software Development Forever</title><itunes:title>Andriy Burkov, PhD in AI — How Artificial Intelligence Is Changing Software Development Forever</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn.</p><p>In this episode, Alec Crawford speaks with Andriy Burkov, who holds a PhD in artificial intelligence and is the author of several books including "The Hundred-Page Machine Learning Book" and "Machine Learning Engineering." The conversation dives into Andriy's academic background in multi-agent systems and computational game theory, highlighting his significant contributions to algorithms for Nash Equilibrium in multi-turn games. Andriy shares insights into his transition from academia to industry, recounting his experiences with companies like Fujitsu and innovative startups in Quebec. The discussion shifts to how AI is revolutionizing software development, with Andriy emphasizing the increased productivity for experienced engineers and the challenges faced by recent computer science graduates in a competitive job market. The episode concludes with Andriy's perspectives on the ethical implications of AI and his advice for businesses starting their AI journey.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Introduction:</strong>&nbsp;Andriy Burkov discusses his academic background and contributions to AI, particularly in multi-agent systems.</li><li><strong>Career Transition:</strong>&nbsp;Shares experiences moving from academia to industry, working with Fujitsu and innovative startups.</li><li><strong>AI in Software Development:</strong>&nbsp;Explores how AI is enhancing productivity for experienced software engineers.</li><li><strong>Challenges for Graduates:</strong>&nbsp;Addresses the competitive job market for computer science graduates and offers advice for gaining experience.</li><li><strong>Ethical Considerations:</strong>&nbsp;Discusses the importance of understanding AI's limitations and the potential risks in its deployment.</li></ul><br/><p><strong>Companies/Organizations:</strong> </p><ul><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, 
Inc.</a></li><li>Fujitsu</li></ul><br/><p><strong>Books: </strong></p><ul><li><a href="https://www.amazon.com/Hundred-Page-Machine-Learning-Book/dp/199957950X" rel="noopener noreferrer" target="_blank">The Hundred-Page Machine Learning Book</a></li><li>Artificial Intelligence: A Modern Approach</li></ul><br/><p><strong>Movies: </strong></p><ul><li>Terminator 2</li><li>Alien</li><li>Harry Potter</li><li>The Lord of the Rings</li></ul><br/><p><strong>TV Shows:</strong> </p><ul><li>Breaking Bad</li><li>Game of Thrones</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>In this episode, Alec Crawford speaks with Andriy Burkov, who holds a PhD in artificial intelligence and is the author of several books including "The Hundred-Page Machine Learning Book" and "Machine Learning Engineering." The conversation dives into Andriy's academic background in multi-agent systems and computational game theory, highlighting his significant contributions to algorithms for Nash Equilibrium in multi-turn games. Andriy shares insights into his transition from academia to industry, recounting his experiences with companies like Fujitsu and innovative startups in Quebec. The discussion shifts to how AI is revolutionizing software development, with Andriy emphasizing the increased productivity for experienced engineers and the challenges faced by recent computer science graduates in a competitive job market. 
The episode concludes with Andriy's perspectives on the ethical implications of AI and his advice for businesses starting their AI journey.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Introduction:</strong>&nbsp;Andriy Burkov discusses his academic background and contributions to AI, particularly in multi-agent systems.</li><li><strong>Career Transition:</strong>&nbsp;Shares experiences moving from academia to industry, working with Fujitsu and innovative startups.</li><li><strong>AI in Software Development:</strong>&nbsp;Explores how AI is enhancing productivity for experienced software engineers.</li><li><strong>Challenges for Graduates:</strong>&nbsp;Addresses the competitive job market for computer science graduates and offers advice for gaining experience.</li><li><strong>Ethical Considerations:</strong>&nbsp;Discusses the importance of understanding AI's limitations and the potential risks in its deployment.</li></ul><br/><p><strong>Companies/Organizations:</strong> </p><ul><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Fujitsu</li></ul><br/><p><strong>Books: </strong></p><ul><li><a href="https://www.amazon.com/Hundred-Page-Machine-Learning-Book/dp/199957950X" rel="noopener noreferrer" target="_blank">The Hundred-Page Machine Learning Book</a></li><li>Artificial Intelligence: A Modern Approach</li></ul><br/><p><strong>Movies: </strong></p><ul><li>Terminator 2</li><li>Alien</li><li>Harry Potter</li><li>The Lord of the Rings</li></ul><br/><p><strong>TV Shows:</strong> </p><ul><li>Breaking Bad</li><li>Game of Thrones</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">25c19bbb-56df-4ebd-ad5b-e75dc283773d</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 14 
Oct 2025 10:30:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/25c19bbb-56df-4ebd-ad5b-e75dc283773d.mp3" length="40747781" type="audio/mpeg"/><itunes:duration>42:27</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>63</itunes:episode><podcast:episode>63</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI vs AI: Defending Against the Bad Guys with Nicole Carignan, SVP of Security &amp; AI Strategy at Darktrace</title><itunes:title>AI vs AI: Defending Against the Bad Guys with Nicole Carignan, SVP of Security &amp; AI Strategy at Darktrace</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.</p><p>Nicole Carignan, Senior Vice President of Security and AI Strategy at Darktrace, shares her journey from a computer science background to leading AI security strategies. The episode explores the necessity of using AI to combat AI-driven cyber threats, highlighting the importance of autonomous systems in defending against sophisticated attacks like those from Scattered Spider. Nicole discusses the evolving landscape of AI agents, emphasizing the blend of human intuition and AI analytics for optimal decision-making. She also reflects on the ethical implications of AI in cybersecurity and the importance of embedding security into AI innovations. 
Additionally, Nicole offers insights into her daily role at Darktrace and her predictions for AI's future impact on security.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong> Nicole Carignan is the Senior Vice President of Security and AI Strategy at Darktrace.</li><li><strong>AI in Cybersecurity:</strong> Nicole emphasizes the critical role of AI in defending against AI-powered cyber threats, advocating for autonomous response systems.</li><li><strong>Human vs. AI Decision Making:</strong> She discusses the importance of combining human intuition with AI analytics to enhance decision-making processes.</li><li><strong>AI Agents &amp; Autonomy:</strong> Nicole explains the concept of AI agents and the need for diverse models to achieve reliable autonomous systems.</li><li><strong>Ethical Considerations:</strong> The conversation touches on the ethical challenges of AI in cybersecurity, urging responsible innovation and security by design.</li></ul><br/><p><strong>Companies/Organizations:</strong></p><ul><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Troutman Street Audio</li><li>Darktrace</li><li>NASA</li><li>Lockheed Martin</li><li>CISA</li></ul><br/><p><strong>Books:</strong></p><ul><li>AI Snake Oil</li></ul><br/><p><strong>Movies:</strong></p><ul><li>The Matrix</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn.</p><p>Nicole Carignan, Senior Vice President of Security and AI Strategy at Darktrace, shares her journey from a computer science background to leading AI security strategies. The episode explores the necessity of using AI to combat AI-driven cyber threats, highlighting the importance of autonomous systems in defending against sophisticated attacks like those from Scattered Spider. Nicole discusses the evolving landscape of AI agents, emphasizing the blend of human intuition and AI analytics for optimal decision-making. She also reflects on the ethical implications of AI in cybersecurity and the importance of embedding security into AI innovations. Additionally, Nicole offers insights into her daily role at Darktrace and her predictions for AI's future impact on security.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong> Nicole Carignan is the Senior Vice President of Security and AI Strategy at Darktrace.</li><li><strong>AI in Cybersecurity:</strong> Nicole emphasizes the critical role of AI in defending against AI-powered cyber threats, advocating for autonomous response systems.</li><li><strong>Human vs. 
AI Decision Making:</strong> She discusses the importance of combining human intuition with AI analytics to enhance decision-making processes.</li><li><strong>AI Agents &amp; Autonomy:</strong> Nicole explains the concept of AI agents and the need for diverse models to achieve reliable autonomous systems.</li><li><strong>Ethical Considerations:</strong> The conversation touches on the ethical challenges of AI in cybersecurity, urging responsible innovation and security by design.</li></ul><br/><p><strong>Companies/Organizations:</strong></p><ul><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Troutman Street Audio</li><li>Darktrace</li><li>NASA</li><li>Lockheed Martin</li><li>CISA</li></ul><br/><p><strong>Books:</strong></p><ul><li>AI Snake Oil</li></ul><br/><p><strong>Movies:</strong></p><ul><li>The Matrix</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">3bbff52d-b834-42be-98bb-38173c5b04e5</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 07 Oct 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/3bbff52d-b834-42be-98bb-38173c5b04e5.mp3" length="35389129" type="audio/mpeg"/><itunes:duration>36:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>62</itunes:episode><podcast:episode>62</podcast:episode><podcast:season>1</podcast:season></item><item><title>Creative Machines with Maya Ackerman, WaveAI Co-founder &amp; Professor at Santa Clara University</title><itunes:title>Creative Machines with Maya Ackerman, WaveAI Co-founder &amp; Professor at Santa Clara University</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, 
our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Maya Ackerman, a professor at Santa Clara University and co-founder of Wave AI, shares her journey from aspiring programmer to a pioneering figure in AI-generated creativity. The episode delves into the concept of AI as a reflection of society’s collective unconscious and explores how generative AI can enhance human creativity rather than replace it. Maya discusses her book, "Creative Machines," which envisions AI as a tool to elevate human creativity through humble collaboration. The conversation also touches on AI’s potential impact on employment and the ethical considerations of AI development. 
Maya emphasizes the importance of authentic connections and strong team dynamics in startup success and advises leaders to shape AI's future responsibly.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong> Maya Ackerman is a professor at Santa Clara University and co-founder of Wave AI, with a focus on AI and creativity.</li><li><strong>Creative Machines:</strong> Her book explores AI as a mirror reflecting societal biases and envisions AI enhancing human creativity.</li><li><strong>AI and Society:</strong> Maya discusses AI’s potential impact on employment and stresses the need for ethical development.</li><li><strong>Startup Insights:</strong> She highlights the importance of strong team dynamics and authentic connections for entrepreneurial success.</li><li><strong>Vision for AI:</strong> Maya advocates for AI as a humble partner that elevates human creativity rather than replacing it.</li></ul><br/><p><strong>Companies/Organizations</strong>:</p><ul><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li><a href="https://wave-ai.net/" rel="noopener noreferrer" target="_blank">Wave AI</a></li><li>Santa Clara University</li><li>Sound Microsystems</li><li>Nvidia</li><li>a16z</li><li>Lightspeed</li><li>GE</li><li>Morgan Stanley</li><li>Troutman Street Audio</li></ul><br/><p><strong>Books</strong>:</p><ul><li><a href="https://www.amazon.com/Creative-Machines-Future-Human-Creativity/dp/1394316267/" rel="noopener noreferrer" target="_blank">Creative Machines</a> by Maya Ackerman</li><li>Being You by Anil Seth</li><li>Jack Welch’s biography</li></ul><br/><p><strong>Movies</strong>:</p><ul><li>Before Midnight</li></ul><br/><p><strong>TV Shows</strong>:</p><ul><li>Game of Thrones</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder 
and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Maya Ackerman, a professor at Santa Clara University and co-founder of Wave AI, shares her journey from aspiring programmer to a pioneering figure in AI-generated creativity. The episode delves into the concept of AI as a reflection of society’s collective unconscious and explores how generative AI can enhance human creativity rather than replace it. Maya discusses her book, "Creative Machines," which envisions AI as a tool to elevate human creativity through humble collaboration. The conversation also touches on AI’s potential impact on employment and the ethical considerations of AI development. Maya emphasizes the importance of authentic connections and strong team dynamics in startup success and advises leaders to shape AI's future responsibly.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong> Maya Ackerman is a professor at Santa Clara University and co-founder of Wave AI, with a focus on AI and creativity.</li><li><strong>Creative Machines:</strong> Her book explores AI as a mirror reflecting societal biases and envisions AI enhancing human creativity.</li><li><strong>AI and Society:</strong> Maya discusses AI’s potential impact on employment and stresses the need for ethical development.</li><li><strong>Startup Insights:</strong> She highlights the importance of strong team dynamics and authentic connections for entrepreneurial success.</li><li><strong>Vision for AI:</strong> Maya advocates for AI as a humble partner that elevates human creativity rather than replacing it.</li></ul><br/><p><strong>Companies/Organizations</strong>:</p><ul><li><a href="https://www.aicrisk.com/" 
rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li><a href="https://wave-ai.net/" rel="noopener noreferrer" target="_blank">Wave AI</a></li><li>Santa Clara University</li><li>Sound Microsystems</li><li>Nvidia</li><li>a16z</li><li>Lightspeed</li><li>GE</li><li>Morgan Stanley</li><li>Troutman Street Audio</li></ul><br/><p><strong>Books</strong>:</p><ul><li><a href="https://www.amazon.com/Creative-Machines-Future-Human-Creativity/dp/1394316267/" rel="noopener noreferrer" target="_blank">Creative Machines</a> by Maya Ackerman</li><li>Being You by Anil Seth</li><li>Jack Welch’s biography</li></ul><br/><p><strong>Movies</strong>:</p><ul><li>Before Midnight</li></ul><br/><p><strong>TV Shows</strong>:</p><ul><li>Game of Thrones</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">e787d616-f260-41c3-a5b4-1746d2091172</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 30 Sep 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/e787d616-f260-41c3-a5b4-1746d2091172.mp3" length="27862112" type="audio/mpeg"/><itunes:duration>29:01</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>61</itunes:episode><podcast:episode>61</podcast:episode><podcast:season>1</podcast:season></item><item><title>Tristan Pelloux, Entrepreneur &amp; Educator: Why Every AI Initiative Must Start with a Business Case</title><itunes:title>Tristan Pelloux, Entrepreneur &amp; Educator: Why Every AI Initiative Must Start with a Business Case</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. 
aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Tristan Pelloux, an entrepreneur and visiting professor at SRH in Germany, shares his journey from corporate finance to strategic innovation, leveraging new technologies. The episode delves into the essence of strategy, highlighting Michael Porter's influential piece, "What is Strategy?" and its foundational impact on business decision-making. Tristan discusses the current AI landscape, emphasizing the importance of starting with a clear business case before integrating AI solutions. Key discussions include the role of AI in improving operational efficiency and governance, along with potential risks such as data privacy and regulatory compliance. Tristan also shares his insights on how students and professionals can embrace AI tools to enhance their skills and adapt to changing job landscapes.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Tristan Pelloux is an entrepreneur and visiting professor at SRH in Germany, with a background in corporate finance and strategy.</li><li><strong>Strategy Simplified:</strong>&nbsp;Discusses Michael Porter's "What is Strategy?" 
and its importance in understanding business positioning and differentiation.</li><li><strong>AI Hype and Reality:</strong>&nbsp;Emphasizes starting with a business case and avoiding the pitfall of adopting AI without a clear problem-solving focus.</li><li><strong>AI and Governance:</strong>&nbsp;Highlights the importance of considering data privacy, regulatory compliance, and cybersecurity risks when implementing AI.</li><li><strong>Adapting to AI:</strong>&nbsp;Advises students and professionals to leverage AI tools for skill enhancement and to remain adaptable to technological changes.</li></ul><br/><p><strong>Companies/Organizations:</strong>&nbsp;</p><ul><li>SRH</li><li>Harvard Business Review</li><li>OpenAI</li><li>Microsoft Copilot</li><li>Prince Charles Cinema</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong>&nbsp;</p><ul><li>"What is Strategy?" by Michael Porter</li><li>"Bulletproof Problem Solving" by Charles R. Conn and Robert McLean</li></ul><br/><p><strong>Movies:</strong>&nbsp;</p><ul><li>Commando</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Tristan Pelloux, an entrepreneur and visiting professor at SRH in Germany, shares his journey from corporate finance to strategic innovation, leveraging new technologies. The episode delves into the essence of strategy, highlighting Michael Porter's influential piece, "What is Strategy?" 
and its foundational impact on business decision-making. Tristan discusses the current AI landscape, emphasizing the importance of starting with a clear business case before integrating AI solutions. Key discussions include the role of AI in improving operational efficiency and governance, along with potential risks such as data privacy and regulatory compliance. Tristan also shares his insights on how students and professionals can embrace AI tools to enhance their skills and adapt to changing job landscapes.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Tristan Pelloux is an entrepreneur and visiting professor at SRH in Germany, with a background in corporate finance and strategy.</li><li><strong>Strategy Simplified:</strong>&nbsp;Discusses Michael Porter's "What is Strategy?" and its importance in understanding business positioning and differentiation.</li><li><strong>AI Hype and Reality:</strong>&nbsp;Emphasizes starting with a business case and avoiding the pitfall of adopting AI without a clear problem-solving focus.</li><li><strong>AI and Governance:</strong>&nbsp;Highlights the importance of considering data privacy, regulatory compliance, and cybersecurity risks when implementing AI.</li><li><strong>Adapting to AI:</strong>&nbsp;Advises students and professionals to leverage AI tools for skill enhancement and to remain adaptable to technological changes.</li></ul><br/><p><strong>Companies/Organizations:</strong>&nbsp;</p><ul><li>SRH</li><li>Harvard Business Review</li><li>OpenAI</li><li>Microsoft Copilot</li><li>Prince Charles Cinema</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong>&nbsp;</p><ul><li>"What is Strategy?" by Michael Porter</li><li>"Bulletproof Problem Solving" by Charles R. 
Conn and Robert McLean</li></ul><br/><p><strong>Movies:</strong>&nbsp;</p><ul><li>Commando</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">747b3ed7-56dc-4fe4-9575-e25628a3ad30</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 23 Sep 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/747b3ed7-56dc-4fe4-9575-e25628a3ad30.mp3" length="73559688" type="audio/mpeg"/><itunes:duration>30:39</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>60</itunes:episode><podcast:episode>60</podcast:episode><podcast:season>1</podcast:season></item><item><title>Al Kushner | LinkedVantage: What Are You Gonna Do About AI Transforming LinkedIn? How to Use AI to Your Advantage</title><itunes:title>Al Kushner | LinkedVantage: What Are You Gonna Do About AI Transforming LinkedIn? How to Use AI to Your Advantage</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="http://aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="http://troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Al Kushner, founder and CEO of LinkedVantage, joins the podcast to discuss the transformative impact of AI on LinkedIn. 
Known for his expertise in leveraging AI for professional networking, Al shares insights from his career, which began in sales, and his journey to becoming a thought leader in the AI space. The episode delves into the ever-evolving landscape of LinkedIn, discussing how AI-generated content is becoming increasingly prevalent. Al emphasizes the importance of profile optimization and personal branding, offering concrete steps for job seekers. He also addresses the challenges and opportunities presented by AI in the job market, highlighting the emerging field of prompt engineering. Finally, Al provides guidance for aspiring authors and reflects on the significance of authenticity in leveraging AI.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Al Kushner is the founder and CEO of LinkedVantage and an award-winning author of "The AI LinkedIn Advantage."</li><li><strong>Sales and Thought Leadership:</strong>&nbsp;Al emphasizes the importance of starting a career in sales and becoming a thought leader.</li><li><strong>AI's Impact on LinkedIn:</strong>&nbsp;The episode discusses the growing use of AI in generating social media content and its implications for LinkedIn.</li><li><strong>Profile Optimization:</strong>&nbsp;Al provides practical advice for job seekers on improving LinkedIn profiles and using AI tools effectively.</li><li><strong>Job Market and AI:</strong>&nbsp;The discussion includes insights on the future of the job market with AI, including the rise of prompt engineering.</li></ul><br/><p><strong>Companies/Organizations:</strong></p><ul><li><a href="https://linkedvantage.com/" rel="noopener noreferrer" target="_blank">LinkedVantage</a></li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Microsoft</li><li>Facebook</li><li>Amazon</li><li>Audible</li><li>Spotify</li></ul><br/><p><strong>Books:</strong></p><ul><li>"The AI LinkedIn Advantage" by Al 
Kushner</li><li>"Atomic Habits" by James Clear</li></ul><br/><p><strong>Movies:</strong></p><ul><li>"The Pursuit of Happyness"</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="http://aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="http://troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Al Kushner, founder and CEO of LinkedVantage, joins the podcast to discuss the transformative impact of AI on LinkedIn. Known for his expertise in leveraging AI for professional networking, Al shares insights from his career, which began in sales, and his journey to becoming a thought leader in the AI space. The episode delves into the ever-evolving landscape of LinkedIn, discussing how AI-generated content is becoming increasingly prevalent. Al emphasizes the importance of profile optimization and personal branding, offering concrete steps for job seekers. He also addresses the challenges and opportunities presented by AI in the job market, highlighting the emerging field of prompt engineering. 
Finally, Al provides guidance for aspiring authors and reflects on the significance of authenticity in leveraging AI.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Al Kushner is the founder and CEO of LinkedVantage and an award-winning author of "The AI LinkedIn Advantage."</li><li><strong>Sales and Thought Leadership:</strong>&nbsp;Al emphasizes the importance of starting a career in sales and becoming a thought leader.</li><li><strong>AI's Impact on LinkedIn:</strong>&nbsp;The episode discusses the growing use of AI in generating social media content and its implications for LinkedIn.</li><li><strong>Profile Optimization:</strong>&nbsp;Al provides practical advice for job seekers on improving LinkedIn profiles and using AI tools effectively.</li><li><strong>Job Market and AI:</strong>&nbsp;The discussion includes insights on the future of the job market with AI, including the rise of prompt engineering.</li></ul><br/><p><strong>Companies/Organizations:</strong></p><ul><li><a href="https://linkedvantage.com/" rel="noopener noreferrer" target="_blank">LinkedVantage</a></li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Microsoft</li><li>Facebook</li><li>Amazon</li><li>Audible</li><li>Spotify</li></ul><br/><p><strong>Books:</strong></p><ul><li>"The AI LinkedIn Advantage" by Al Kushner</li><li>"Atomic Habits" by James Clear</li></ul><br/><p><strong>Movies:</strong></p><ul><li>"The Pursuit of Happyness"</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">98554278-c700-4f44-9eb8-ef189a01a78a</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 16 Sep 2025 01:00:00 -0400</pubDate><enclosure 
url="https://episodes.captivate.fm/episode/98554278-c700-4f44-9eb8-ef189a01a78a.mp3" length="54986626" type="audio/mpeg"/><itunes:duration>22:55</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>59</itunes:episode><podcast:episode>59</podcast:episode><podcast:season>1</podcast:season></item><item><title>Vinay Chaudhri — AI Expert &amp; Principal at Knowledge Systems Research on Why AI Shouldn’t Mimic Human Reasoning</title><itunes:title>Vinay Chaudhri — AI Expert &amp; Principal at Knowledge Systems Research on Why AI Shouldn’t Mimic Human Reasoning</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Vinay Chaudhri, a principal at Knowledge Systems Research, LLC, shares his journey from obtaining a PhD in the 1990s to supporting a National Science Foundation initiative focused on integrating deep learning with knowledge graphs. The episode emphasizes the importance of explicitly representing knowledge and utilizing it for reasoning. Vinay discusses his transition from a significant role at JP Morgan to his current position, highlighting the differences in technological adoption between corporate and research settings. He also explores how AI can enhance human reasoning, rather than merely mimic it. 
Additionally, Vinay shares insights on the potential of AI in various fields, from education to environmental sustainability.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Vinay Chaudhri is a principal at Knowledge Systems Research, LLC, supporting a National Science Foundation AI initiative.</li><li><strong>AI History</strong>: Vinay started his AI journey in the 1990s, focusing on knowledge bases and human-like reasoning.</li><li><strong>Career Transition</strong>: He transitioned from JP Morgan to research, valuing intellectual challenge and family goals.</li><li><strong>AI Research</strong>: Emphasizes explicit knowledge representation and AI's potential to enhance human reasoning.</li><li><strong>Technological Adoption</strong>: Contrasts JP Morgan's proactive approach to AI with more conservative institutions.</li></ul><br/><p><strong>Companies/Organizations</strong></p><ul><li>Knowledge Systems Research, LLC</li><li>National Science Foundation</li><li>University of Toronto</li><li>Stanford</li><li>JP Morgan</li><li>SRI International</li><li>Santa Fe Institute of Complexity</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books</strong></p><ul><li>Yoga Sutras</li></ul><br/><p><strong>Movies</strong></p><ul><li>The Sound of Music</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. 
You can hear the difference.</p><p>Vinay Chaudhri, a principal at Knowledge Systems Research, LLC, shares his journey from obtaining a PhD in the 1990s to supporting a National Science Foundation initiative focused on integrating deep learning with knowledge graphs. The episode emphasizes the importance of explicitly representing knowledge and utilizing it for reasoning. Vinay discusses his transition from a significant role at JP Morgan to his current position, highlighting the differences in technological adoption between corporate and research settings. He also explores how AI can enhance human reasoning, rather than merely mimic it. Additionally, Vinay shares insights on the potential of AI in various fields, from education to environmental sustainability.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Vinay Chaudhri is a principal at Knowledge Systems Research, LLC, supporting a National Science Foundation AI initiative.</li><li><strong>AI History</strong>: Vinay started his AI journey in the 1990s, focusing on knowledge bases and human-like reasoning.</li><li><strong>Career Transition</strong>: He transitioned from JP Morgan to research, valuing intellectual challenge and family goals.</li><li><strong>AI Research</strong>: Emphasizes explicit knowledge representation and AI's potential to enhance human reasoning.</li><li><strong>Technological Adoption</strong>: Contrasts JP Morgan's proactive approach to AI with more conservative institutions.</li></ul><br/><p><strong>Companies/Organizations</strong></p><ul><li>Knowledge Systems Research, LLC</li><li>National Science Foundation</li><li>University of Toronto</li><li>Stanford</li><li>JP Morgan</li><li>SRI International</li><li>Santa Fe Institute of Complexity</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books</strong></p><ul><li>Yoga 
Sutras</li></ul><br/><p><strong>Movies</strong></p><ul><li>The Sound of Music</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">273ed4ec-5c8e-4f92-ad80-8bf604602fff</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 09 Sep 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/273ed4ec-5c8e-4f92-ad80-8bf604602fff.mp3" length="90257157" type="audio/mpeg"/><itunes:duration>37:36</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>58</itunes:episode><podcast:episode>58</podcast:episode><podcast:season>1</podcast:season></item><item><title>Life&apos;s Echo Co-Founder Steve Endacott: How to Build a Car Around the Gen AI Engine</title><itunes:title>Life&apos;s Echo Co-Founder Steve Endacott: How to Build a Car Around the Gen AI Engine</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>In this enlightening episode, Alec Crawford welcomes Steve Endacott, a serial entrepreneur and co-founder of Life's Echo. Steve shares his fascinating journey from disrupting the UK travel industry with dynamic packaging to pioneering AI-driven solutions. He discusses how AI is reshaping the travel sector, particularly through the innovative projects of his AI incubator, Neural River. 
Key topics include the transformative potential of AI in voice technology and the ethical considerations surrounding AI usage. Steve also introduces Life's Echo, a platform for creating personal autobiographies through AI. The conversation underscores the rapid pace of AI adoption and the need for strategic planning to address potential societal impacts.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Steve Endacott is a serial entrepreneur and co-founder of Life's Echo, with a history of innovation in the travel and tech industries.</li><li><strong>AI Incubator</strong>: Steve discusses Neural River, an AI incubator that nurtures young talent and spins out new AI-driven businesses.</li><li><strong>AI in Travel</strong>: The conversation explores how AI is poised to revolutionize the travel industry, particularly through AI Search and voice technology.</li><li><strong>Life's Echo</strong>: Steve introduces Life's Echo, a service that uses AI to create personal autobiographies and digital twins for future generations.</li><li><strong>Ethical Considerations</strong>: The discussion touches on the ethical implications of AI, including job displacement and the importance of data security.</li></ul><br/><p><strong>Companies/Organizations:</strong></p><ul><li><a href="https://lifesecho.co.uk/" rel="noopener noreferrer" target="_blank">Life's Echo</a></li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Google</li><li>TripAdvisor</li><li>Reddit</li><li>Konga</li><li>Booking.com</li><li>Thomas Cook</li><li>My Travel</li><li>Troutman Street Audio</li></ul><br/><p><strong>Movies:</strong></p><ul><li>Amadeus</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. 
aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>In this enlightening episode, Alec Crawford welcomes Steve Endacott, a serial entrepreneur and co-founder of Life's Echo. Steve shares his fascinating journey from disrupting the UK travel industry with dynamic packaging to pioneering AI-driven solutions. He discusses how AI is reshaping the travel sector, particularly through the innovative projects of his AI incubator, Neural River. Key topics include the transformative potential of AI in voice technology and the ethical considerations surrounding AI usage. Steve also introduces Life's Echo, a platform for creating personal autobiographies through AI. The conversation underscores the rapid pace of AI adoption and the need for strategic planning to address potential societal impacts.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Steve Endacott is a serial entrepreneur and co-founder of Life's Echo, with a history of innovation in the travel and tech industries.</li><li><strong>AI Incubator</strong>: Steve discusses Neural River, an AI incubator that nurtures young talent and spins out new AI-driven businesses.</li><li><strong>AI in Travel</strong>: The conversation explores how AI is poised to revolutionize the travel industry, particularly through AI Search and voice technology.</li><li><strong>Life's Echo</strong>: Steve introduces Life's Echo, a service that uses AI to create personal autobiographies and digital twins for future generations.</li><li><strong>Ethical Considerations</strong>: The discussion touches on the ethical implications of AI, including job displacement and the importance of data 
security.</li></ul><br/><p><strong>Companies/Organizations:</strong></p><ul><li><a href="https://lifesecho.co.uk/" rel="noopener noreferrer" target="_blank">Life's Echo</a></li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Google</li><li>TripAdvisor</li><li>Reddit</li><li>Konga</li><li>Booking.com</li><li>Thomas Cook</li><li>My Travel</li><li>Troutman Street Audio</li></ul><br/><p><strong>Movies:</strong></p><ul><li>Amadeus</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">e8a73d0d-7a63-41b4-9e8a-3a56a27938b6</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 02 Sep 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/e8a73d0d-7a63-41b4-9e8a-3a56a27938b6.mp3" length="87094251" type="audio/mpeg"/><itunes:duration>36:17</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>57</itunes:episode><podcast:episode>57</podcast:episode><podcast:season>1</podcast:season></item><item><title>Resolving AI Ethical Dilemmas with Dan Nestle of Inquisitive Communications</title><itunes:title>Resolving AI Ethical Dilemmas with Dan Nestle of Inquisitive Communications</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. 
You can hear the difference.</p><p>Dan Nestle, Chief Curiosity Officer at Inquisitive Communications, shares his unique journey from a communications novice to an AI ethics advocate. The episode centers around resolving AI ethical dilemmas, highlighting Dan's emphasis on transparency and accurate representation in content creation. He discusses his transition from corporate roles to founding Inquisitive Communications and the impact of AI on strategic communications. The conversation also delves into his favorite AI tools and the importance of maintaining ethical standards while leveraging AI for content creation. Dan offers insights on how CEOs can integrate AI into their business strategies, emphasizing the need for education, flexibility, and cultural adaptation.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Dan Nestle is the Chief Curiosity Officer at Inquisitive Communications with a rich history in strategic communications and AI ethics.</li><li><strong>AI Tools</strong>: Dan actively uses AI tools like Perplexity, Notebook LM, and Claude for content creation and strategic advising.</li><li><strong>Ethical Content Creation</strong>: He emphasizes transparency and the importance of accurately representing original thought when using AI in content creation.</li><li><strong>Corporate AI Integration</strong>: Dan advises CEOs on the importance of educating their workforce and adopting flexible AI strategies.</li><li><strong>Community Influence</strong>: He discusses the impact of the Rise community and the significance of dark social platforms in modern communications.</li></ul><br/><p><strong>Companies/Organizations</strong></p><ul><li><a href="https://be-inquisitive.com/" rel="noopener noreferrer" target="_blank">Inquisitive Communications</a></li><li>Troutman Street Audio</li><li>UPenn</li><li>LIXIL</li><li>Page Society</li><li>PRSA</li><li>Weber-Shandwick</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" 
target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books</strong></p><ul><li>Rich Dad Poor Dad</li><li>The New Rules of Marketing and PR</li><li>Co-Intelligence</li></ul><br/><p><strong>Movies</strong></p><ul><li>Moneyball</li></ul><br/><p><strong>TV Shows</strong></p><ul><li>Silo</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Dan Nestle, Chief Curiosity Officer at Inquisitive Communications, shares his unique journey from a communications novice to an AI ethics advocate. The episode centers around resolving AI ethical dilemmas, highlighting Dan's emphasis on transparency and accurate representation in content creation. He discusses his transition from corporate roles to founding Inquisitive Communications and the impact of AI on strategic communications. The conversation also delves into his favorite AI tools and the importance of maintaining ethical standards while leveraging AI for content creation. 
Dan offers insights on how CEOs can integrate AI into their business strategies, emphasizing the need for education, flexibility, and cultural adaptation.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Dan Nestle is the Chief Curiosity Officer at Inquisitive Communications with a rich history in strategic communications and AI ethics.</li><li><strong>AI Tools</strong>: Dan actively uses AI tools like Perplexity, Notebook LM, and Claude for content creation and strategic advising.</li><li><strong>Ethical Content Creation</strong>: He emphasizes transparency and the importance of accurately representing original thought when using AI in content creation.</li><li><strong>Corporate AI Integration</strong>: Dan advises CEOs on the importance of educating their workforce and adopting flexible AI strategies.</li><li><strong>Community Influence</strong>: He discusses the impact of the Rise community and the significance of dark social platforms in modern communications.</li></ul><br/><p><strong>Companies/Organizations</strong></p><ul><li><a href="https://be-inquisitive.com/" rel="noopener noreferrer" target="_blank">Inquisitive Communications</a></li><li>Troutman Street Audio</li><li>UPenn</li><li>LIXIL</li><li>Page Society</li><li>PRSA</li><li>Weber-Shandwick</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books</strong></p><ul><li>Rich Dad Poor Dad</li><li>The New Rules of Marketing and PR</li><li>Co-Intelligence</li></ul><br/><p><strong>Movies</strong></p><ul><li>Moneyball</li></ul><br/><p><strong>TV Shows</strong></p><ul><li>Silo</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">250f2da9-b809-4994-b472-ce3b19b2c4fa</guid><itunes:image 
href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 26 Aug 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/250f2da9-b809-4994-b472-ce3b19b2c4fa.mp3" length="95755410" type="audio/mpeg"/><itunes:duration>39:54</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>56</itunes:episode><podcast:episode>56</podcast:episode><podcast:season>1</podcast:season></item><item><title>Mark Andrews, Founder &amp; CEO of AI Prophets: Upscale Your AI — Turn It from an Axe into a Chainsaw</title><itunes:title>Mark Andrews, Founder &amp; CEO of AI Prophets: Upscale Your AI — Turn It from an Axe into a Chainsaw</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Mark Andrews, the founder and CEO of AI Prophets, brings a rich background in finance and digital marketing to the AI landscape. This episode focuses on leveraging AI as a practical tool for businesses and individuals, emphasizing the importance of AI literacy. Key topics include the democratization of data through AI, the impact of internal data on AI effectiveness, and the potential pitfalls of over-relying on AI without understanding its limits. Mark shares insights on how companies can enhance productivity by integrating AI into their operations, and he discusses the evolving role of AI in the workplace and its implications for new graduates. 
The conversation also delves into ethical considerations and the societal impact of AI technologies.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Mark Andrews is an AI literacy expert and the founder and CEO of AI Prophets.</li><li><strong>AI as a Tool:</strong>&nbsp;He emphasizes using AI to enhance productivity and save time in the workplace.</li><li><strong>Data Democratization:</strong>&nbsp;Discusses the importance of using internal data with AI for better decision-making.</li><li><strong>AI Literacy for Graduates:</strong>&nbsp;Offers advice to new graduates on embracing AI to improve job prospects.</li><li><strong>AI Ethics:</strong>&nbsp;Highlights the need for careful consideration of AI's societal and employment impacts.</li></ul><br/><p><strong>Companies/Organizations:</strong>&nbsp;</p><ul><li><a href="https://aiprophets.com/" rel="noopener noreferrer" target="_blank">AI Prophets</a></li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Troutman Street Audio</li><li>Bryant University</li><li>Syracuse University</li><li>Anthropic</li></ul><br/><p><strong>Books:</strong>&nbsp;</p><ul><li>Moneyball</li></ul><br/><p><strong>Movies:</strong>&nbsp;</p><ul><li>Field of Dreams</li></ul><br/><p><strong>TV Shows:</strong>&nbsp;</p><ul><li>It's Always Sunny in Philadelphia</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. 
You can hear the difference.</p><p>Mark Andrews, the founder and CEO of AI Prophets, brings a rich background in finance and digital marketing to the AI landscape. This episode focuses on leveraging AI as a practical tool for businesses and individuals, emphasizing the importance of AI literacy. Key topics include the democratization of data through AI, the impact of internal data on AI effectiveness, and the potential pitfalls of over-relying on AI without understanding its limits. Mark shares insights on how companies can enhance productivity by integrating AI into their operations, and he discusses the evolving role of AI in the workplace and its implications for new graduates. The conversation also delves into ethical considerations and the societal impact of AI technologies.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Mark Andrews is an AI literacy expert and the founder and CEO of AI Prophets.</li><li><strong>AI as a Tool:</strong>&nbsp;He emphasizes using AI to enhance productivity and save time in the workplace.</li><li><strong>Data Democratization:</strong>&nbsp;Discusses the importance of using internal data with AI for better decision-making.</li><li><strong>AI Literacy for Graduates:</strong>&nbsp;Offers advice to new graduates on embracing AI to improve job prospects.</li><li><strong>AI Ethics:</strong>&nbsp;Highlights the need for careful consideration of AI's societal and employment impacts.</li></ul><br/><p><strong>Companies/Organizations:</strong>&nbsp;</p><ul><li><a href="https://aiprophets.com/" rel="noopener noreferrer" target="_blank">AI Prophets</a></li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li><li>Troutman Street Audio</li><li>Bryant University</li><li>Syracuse University</li><li>Anthropic</li></ul><br/><p><strong>Books:</strong>&nbsp;</p><ul><li>Moneyball</li></ul><br/><p><strong>Movies:</strong>&nbsp;</p><ul><li>Field of 
Dreams</li></ul><br/><p><strong>TV Shows:</strong>&nbsp;</p><ul><li>It's Always Sunny in Philadelphia</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">91627f26-2b2d-44fc-81e4-a5b8a559e89d</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 19 Aug 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/91627f26-2b2d-44fc-81e4-a5b8a559e89d.mp3" length="80966969" type="audio/mpeg"/><itunes:duration>33:44</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>55</itunes:episode><podcast:episode>55</podcast:episode><podcast:season>1</podcast:season></item><item><title>Jason Barnard, Kalicube: Taking Control of Your Brand Narrative in the AI Era</title><itunes:title>Jason Barnard, Kalicube: Taking Control of Your Brand Narrative in the AI Era</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Jason Barnard, CEO and founder of Kalicube, joins Alec Crawford to discuss his unconventional journey from musician and cartoon creator to a leading expert in Digital Brand Engineering. Barnard emphasizes the importance of controlling one's digital footprint to shape the Digital Brand Echo that AI Assistive Engines perceive. 
The episode explores his personal transformation under mentor Itamar Morani, the challenges of AI-generated misinformation, and how to strategically position brands in an AI-driven world. Jason shares insights from his career, including his innovative approach, The Kalicube Process, and the development of Kalicube’s powerful SaaS platform. The discussion also touches on the future of AI in marketing and the importance of maintaining human creativity and emotion in business.</p><p><strong>Summary:</strong></p><p><strong>Guest Background</strong>: Jason Barnard is a former musician and cartoon creator turned CEO and founder of Kalicube, specializing in how brands are represented in an AI-first world.</p><p><strong>Personal Branding</strong>: He discusses the critical role of a digital footprint in shaping a brand's narrative in its Brand SERP and the AI Résumé that machines generate.</p><p><strong>Mentorship Influence</strong>: Jason shares how Itamar Morani helped him transition from a hands-on creator to an effective CEO.</p><p><strong>AI Challenges</strong>: Concerns about AI's role in spreading misinformation and the importance of establishing a single source of truth (an Entity Home).</p><p><strong>Future of AI in Marketing</strong>: Jason outlines how AI Assistive Engines create a Conversational Acquisition Funnel, guiding consumers and why brands must become Top of Algorithmic Mind to be recommended.</p><p><strong>Companies/Organizations:</strong></p><ul><li><a href="https://kalicube.com/" rel="noopener noreferrer" target="_blank">Kalicube</a></li><li>Kajabi</li><li>Google</li><li>Microsoft</li><li>ChatGPT</li><li>DeepSeek</li><li>Gemini</li><li>Copilot</li><li>Instagram</li><li>Facebook</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p>&nbsp;<strong>Books:</strong></p><ul><li>Confucius from the Heart</li></ul><br/><p><strong>Movies:</strong></p><ul><li>Cool 
Runnings</li><li>Some Like It Hot</li></ul><br/><p><strong>TV Shows:</strong></p><ul><li>IT Crowd</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Jason Barnard, CEO and founder of Kalicube, joins Alec Crawford to discuss his unconventional journey from musician and cartoon creator to a leading expert in Digital Brand Engineering. Barnard emphasizes the importance of controlling one's digital footprint to shape the Digital Brand Echo that AI Assistive Engines perceive. The episode explores his personal transformation under mentor Itamar Morani, the challenges of AI-generated misinformation, and how to strategically position brands in an AI-driven world. Jason shares insights from his career, including his innovative approach, The Kalicube Process, and the development of Kalicube’s powerful SaaS platform. 
The discussion also touches on the future of AI in marketing and the importance of maintaining human creativity and emotion in business.</p><p><strong>Summary:</strong></p><p><strong>Guest Background</strong>: Jason Barnard is a former musician and cartoon creator turned CEO and founder of Kalicube, specializing in how brands are represented in an AI-first world.</p><p><strong>Personal Branding</strong>: He discusses the critical role of a digital footprint in shaping a brand's narrative in its Brand SERP and the AI Résumé that machines generate.</p><p><strong>Mentorship Influence</strong>: Jason shares how Itamar Morani helped him transition from a hands-on creator to an effective CEO.</p><p><strong>AI Challenges</strong>: Concerns about AI's role in spreading misinformation and the importance of establishing a single source of truth (an Entity Home).</p><p><strong>Future of AI in Marketing</strong>: Jason outlines how AI Assistive Engines create a Conversational Acquisition Funnel, guiding consumers and why brands must become Top of Algorithmic Mind to be recommended.</p><p><strong>Companies/Organizations:</strong></p><ul><li><a href="https://kalicube.com/" rel="noopener noreferrer" target="_blank">Kalicube</a></li><li>Kajabi</li><li>Google</li><li>Microsoft</li><li>ChatGPT</li><li>DeepSeek</li><li>Gemini</li><li>Copilot</li><li>Instagram</li><li>Facebook</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p>&nbsp;<strong>Books:</strong></p><ul><li>Confucius from the Heart</li></ul><br/><p><strong>Movies:</strong></p><ul><li>Cool Runnings</li><li>Some Like It Hot</li></ul><br/><p><strong>TV Shows:</strong></p><ul><li>IT Crowd</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">7ff0b91e-b3c7-4398-b92d-a68989b00ce3</guid><itunes:image 
href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 12 Aug 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/7ff0b91e-b3c7-4398-b92d-a68989b00ce3.mp3" length="72827214" type="audio/mpeg"/><itunes:duration>30:21</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>54</itunes:episode><podcast:episode>54</podcast:episode><podcast:season>1</podcast:season></item><item><title>Stan Sukhinin of SORSO on the Business Processes AI Still Can’t Handle</title><itunes:title>Stan Sukhinin of SORSO on the Business Processes AI Still Can’t Handle</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Stan Sukhinin, owner of SORSO Fractional CFO Services, shares his journey from the financial sector, including his tenure at institutions like Societe Generale, to founding his own company. The episode delves into why AI is not yet ready for many business processes, emphasizing the importance of consistency and the risks of hallucinations in AI models. Stan discusses the critical role of financial management in business success and the potential for AI to automate and streamline these processes. He also provides insights into his strategic vision for SORSO, focusing on integrating AI to enhance financial operations. Furthermore, Stan highlights the ethical concerns surrounding AI, such as job displacement and the potential for manipulation. 
He concludes by offering advice for CFOs on adopting AI tools and preparing for the future.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Stan Sukhinin is the owner of SORSO Fractional CFO Services with a rich background in banking and finance.</li><li><strong>AI Limitations</strong>: Discusses AI's current inconsistency and hallucination issues in business applications.</li><li><strong>Financial Management</strong>: Emphasizes the importance of understanding and managing business finances to avoid failures.</li><li><strong>AI in Finance</strong>: Explores the potential of AI to automate financial tasks and improve efficiency.</li><li><strong>Ethical Concerns</strong>: Highlights ethical challenges, including job displacement and AI’s potential for manipulation.</li></ul><br/><p><strong>Companies/Organizations</strong>:</p><ul><li>SORSO Fractional CFO Services</li><li>Societe Generale</li><li>UniCredit</li><li>Spotify</li><li>Klarna</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books</strong>:</p><ul><li>Never Enough: From Barista to Billionaire by Andrew Wilkinson</li></ul><br/><p><strong>Movies</strong>:</p><ul><li>Bend It Like Beckham</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. 
You can hear the difference.</p><p>Stan Sukhinin, owner of SORSO Fractional CFO Services, shares his journey from the financial sector, including his tenure at institutions like Societe Generale, to founding his own company. The episode delves into why AI is not yet ready for many business processes, emphasizing the importance of consistency and the risks of hallucinations in AI models. Stan discusses the critical role of financial management in business success and the potential for AI to automate and streamline these processes. He also provides insights into his strategic vision for SORSO, focusing on integrating AI to enhance financial operations. Furthermore, Stan highlights the ethical concerns surrounding AI, such as job displacement and the potential for manipulation. He concludes by offering advice for CFOs on adopting AI tools and preparing for the future.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Stan Sukhinin is the owner of SORSO Fractional CFO Services with a rich background in banking and finance.</li><li><strong>AI Limitations</strong>: Discusses AI's current inconsistency and hallucination issues in business applications.</li><li><strong>Financial Management</strong>: Emphasizes the importance of understanding and managing business finances to avoid failures.</li><li><strong>AI in Finance</strong>: Explores the potential of AI to automate financial tasks and improve efficiency.</li><li><strong>Ethical Concerns</strong>: Highlights ethical challenges, including job displacement and AI’s potential for manipulation.</li></ul><br/><p><strong>Companies/Organizations</strong>:</p><ul><li>SORSO Fractional CFO Services</li><li>Societe Generale</li><li>UniCredit</li><li>Spotify</li><li>Klarna</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books</strong>:</p><ul><li>Never Enough: From Barista to Billionaire by Andrew 
Wilkinson</li></ul><br/><p><strong>Movies</strong>:</p><ul><li>Bend It Like Beckham</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">2e818c74-46fc-4665-bc2c-9e566c89ae7e</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 05 Aug 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/2e818c74-46fc-4665-bc2c-9e566c89ae7e.mp3" length="95168177" type="audio/mpeg"/><itunes:duration>39:39</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>53</itunes:episode><podcast:episode>53</podcast:episode><podcast:season>1</podcast:season></item><item><title>Using AI Ethically in Education: Perspectives from Matt Metzgar of Belk College of Business at UNC Charlotte</title><itunes:title>Using AI Ethically in Education: Perspectives from Matt Metzgar of Belk College of Business at UNC Charlotte</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Matt Metzgar, a Clinical Professor of Economics at Belk College of Business at UNC Charlotte, discusses the integration of AI in education. He emphasizes the importance of adapting teaching methods to incorporate AI tools, making education more relevant to current job markets.
Throughout the episode, Matt shares insights from his book, "The Overnight AI Educator," and explains the concept of backwards design in curriculum development. He also addresses ethical concerns, such as privacy issues and equitable access to AI resources. Matt advocates for preparing students to use AI productively, highlighting the demand from businesses for AI-skilled graduates. The conversation explores the balance between traditional education and AI-enhanced learning environments.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Matt Metzgar is a Clinical Professor of Economics at Belk College of Business, UNC Charlotte, and author of "The Overnight AI Educator."</li><li><strong>AI in Education:</strong>&nbsp;Metzgar emphasizes the integration of AI tools in educational curricula to match current job market demands.</li><li><strong>Backwards Design:</strong>&nbsp;He advocates for the backwards design approach, focusing on end goals to structure educational activities effectively.</li><li><strong>Ethical Concerns:</strong>&nbsp;Discussion includes ethical issues such as AI-induced cheating, privacy, and equitable access across different regions.</li><li><strong>Future Skills:</strong>&nbsp;Metzgar highlights the importance of equipping students with AI skills, aligning with industry needs for AI-competent graduates.</li></ul><br/><p><strong>Companies/Organizations:</strong></p><ul><li>Belk College of Business</li><li>UNC Charlotte</li><li>Kent State University</li><li>University of Tennessee</li><li>Global AI Ethics Institute</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong></p><ul><li><a href="https://overnight.education" rel="noopener noreferrer" target="_blank">The Overnight AI Educator</a> by Matt Metzgar</li><li>Understanding by Design by Grant Wiggins</li><li>Essential Questions by Grant
Wiggins</li><li>Schooling by Design by Grant Wiggins</li></ul><br/><p><strong>Movies:</strong></p><ul><li>Ferris Bueller's Day Off</li><li>Risky Business</li></ul><br/><p><strong>TV Shows:</strong></p><ul><li>MacGyver</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Matt Metzgar, a Clinical Professor of Economics at Belk College of Business at UNC Charlotte, discusses the integration of AI in education. He emphasizes the importance of adapting teaching methods to incorporate AI tools, making education more relevant to current job markets. Throughout the episode, Matt shares insights from his book, "The Overnight AI Educator," and explains the concept of backwards design in curriculum development. He also addresses ethical concerns, such as privacy issues and equitable access to AI resources. Matt advocates for preparing students to use AI productively, highlighting the demand from businesses for AI-skilled graduates.
The conversation explores the balance between traditional education and AI-enhanced learning environments.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Matt Metzgar is a Clinical Professor of Economics at Belk College of Business, UNC Charlotte, and author of "The Overnight AI Educator."</li><li><strong>AI in Education:</strong>&nbsp;Metzgar emphasizes the integration of AI tools in educational curricula to match current job market demands.</li><li><strong>Backwards Design:</strong>&nbsp;He advocates for the backwards design approach, focusing on end goals to structure educational activities effectively.</li><li><strong>Ethical Concerns:</strong>&nbsp;Discussion includes ethical issues such as AI-induced cheating, privacy, and equitable access across different regions.</li><li><strong>Future Skills:</strong>&nbsp;Metzgar highlights the importance of equipping students with AI skills, aligning with industry needs for AI-competent graduates.</li></ul><br/><p><strong>Companies/Organizations:</strong></p><ul><li>Belk College of Business</li><li>UNC Charlotte</li><li>Kent State University</li><li>University of Tennessee</li><li>Global AI Ethics Institute</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong></p><ul><li><a href="https://overnight.education" rel="noopener noreferrer" target="_blank">The Overnight AI Educator</a> by Matt Metzgar</li><li>Understanding by Design by Grant Wiggins</li><li>Essential Questions by Grant Wiggins</li><li>Schooling by Design by Grant Wiggins</li></ul><br/><p><strong>Movies:</strong></p><ul><li>Ferris Bueller's Day Off</li><li>Risky Business</li></ul><br/><p><strong>TV Shows:</strong></p><ul><li>MacGyver</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid
isPermaLink="false">385b681d-da43-4f24-b541-2e84c5f1167a</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 29 Jul 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/385b681d-da43-4f24-b541-2e84c5f1167a.mp3" length="88986561" type="audio/mpeg"/><itunes:duration>37:05</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>52</itunes:episode><podcast:episode>52</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI Is Evolving—Regulation Isn’t: Jo Ann Barefoot Cofounder of Alliance for Innovative Regulation</title><itunes:title>AI Is Evolving—Regulation Isn’t: Jo Ann Barefoot Cofounder of Alliance for Innovative Regulation</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Jo Ann Barefoot, CEO and founder of the Alliance for Innovative Regulation (AIR), joins Alec to discuss the critical lag between AI advancements and financial regulations. A former bank regulator and U.S. Senate Banking Committee member, Jo Ann’s journey led her to establish AIR, aiming to address regulatory challenges in the tech-driven financial landscape. The episode delves into topics such as the rapid pace of technological innovation, the potential for AI to combat financial crime, and the ethical considerations for AI deployment in financial institutions. 
Jo Ann shares insights on how AI can enhance financial inclusion and the importance of evolving regulatory frameworks to keep pace with technological advancements. Her perspective underscores the necessity of integrating compliance into tech development and fostering a culture of ethical innovation within the financial sector.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Jo Ann Barefoot is the CEO and founder of the Alliance for Innovative Regulation, with a history as a bank regulator and consultant.</li><li><strong>Technological Convergence:</strong>&nbsp;Jo Ann emphasizes how streams of innovation converge to create breakthroughs, highlighting the rapid pace of AI development.</li><li><strong>AI in Financial Crime:</strong>&nbsp;The discussion covers how AI can revolutionize fraud detection and combat financial crime more effectively.</li><li><strong>Regulatory Challenges:</strong>&nbsp;Jo Ann explains the gap between AI advancements and current regulations, stressing the need for principles-based regulation.</li><li><strong>Ethical AI Deployment:</strong>&nbsp;Ethical considerations and risk management are crucial as financial institutions deploy AI to enhance consumer protection and financial inclusion.</li></ul><br/><p><strong>Companies/Organizations:</strong>&nbsp;</p><ul><li><a href="https://regulationinnovation.org/" rel="noopener noreferrer" target="_blank">Alliance for Innovative Regulation</a></li><li>US Senate Banking Committee</li><li>AARP</li><li>JP Morgan</li><li>Harvard University</li><li>Federal agencies</li><li>Community banks</li><li>Credit unions</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong>&nbsp;</p><ul><li>The Future is Faster Than You Think by Peter Diamandis.</li></ul><br/><p><strong>Movies:</strong>&nbsp;</p><ul><li>Mamma Mia!</li></ul><br/><p>&nbsp;<strong>TV 
Shows:</strong>&nbsp;</p><ul><li>The Flintstones</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Jo Ann Barefoot, CEO and founder of the Alliance for Innovative Regulation (AIR), joins Alec to discuss the critical lag between AI advancements and financial regulations. A former bank regulator and U.S. Senate Banking Committee member, Jo Ann’s journey led her to establish AIR, aiming to address regulatory challenges in the tech-driven financial landscape. The episode delves into topics such as the rapid pace of technological innovation, the potential for AI to combat financial crime, and the ethical considerations for AI deployment in financial institutions. Jo Ann shares insights on how AI can enhance financial inclusion and the importance of evolving regulatory frameworks to keep pace with technological advancements. 
Her perspective underscores the necessity of integrating compliance into tech development and fostering a culture of ethical innovation within the financial sector.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Jo Ann Barefoot is the CEO and founder of the Alliance for Innovative Regulation, with a history as a bank regulator and consultant.</li><li><strong>Technological Convergence:</strong>&nbsp;Jo Ann emphasizes how streams of innovation converge to create breakthroughs, highlighting the rapid pace of AI development.</li><li><strong>AI in Financial Crime:</strong>&nbsp;The discussion covers how AI can revolutionize fraud detection and combat financial crime more effectively.</li><li><strong>Regulatory Challenges:</strong>&nbsp;Jo Ann explains the gap between AI advancements and current regulations, stressing the need for principles-based regulation.</li><li><strong>Ethical AI Deployment:</strong>&nbsp;Ethical considerations and risk management are crucial as financial institutions deploy AI to enhance consumer protection and financial inclusion.</li></ul><br/><p><strong>Companies/Organizations:</strong>&nbsp;</p><ul><li><a href="https://regulationinnovation.org/" rel="noopener noreferrer" target="_blank">Alliance for Innovative Regulation</a></li><li>US Senate Banking Committee</li><li>AARP</li><li>JP Morgan</li><li>Harvard University</li><li>Federal agencies</li><li>Community banks</li><li>Credit unions</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong>&nbsp;</p><ul><li>The Future is Faster Than You Think by Peter Diamandis.</li></ul><br/><p><strong>Movies:</strong>&nbsp;</p><ul><li>Mamma Mia!</li></ul><br/><p>&nbsp;<strong>TV Shows:</strong>&nbsp;</p><ul><li>The Flintstones</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence 
Risk</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">b5bf4718-946f-4e31-8139-39590b69c107</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 22 Jul 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/b5bf4718-946f-4e31-8139-39590b69c107.mp3" length="99307018" type="audio/mpeg"/><itunes:duration>41:23</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>51</itunes:episode><podcast:episode>51</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI Will Let You Do Your Thing: A Conversation with Matthew White on Platypus OS</title><itunes:title>AI Will Let You Do Your Thing: A Conversation with Matthew White on Platypus OS</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>In this episode, we hear from Matthew White, founder and CEO of Platypus OS. He shares his journey from healthcare staffing to tech entrepreneurship, ultimately leading to the creation of Platypus OS, an AI-driven browser designed to streamline workflows. The conversation explores how Platypus OS integrates AI to manage tasks across over 1500 applications, aiming to revolutionize how businesses operate by treating AI as an integral team member. 
Matthew discusses the challenges faced while developing a new browser, the potential for AI to transform small businesses, and his vision of AI as the future operating system. He also covers the broader implications of AI on the job market and the importance of ethical considerations.</p><p><strong>Summary</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Matthew White is the founder and CEO of Platypus OS, with a diverse career path from healthcare staffing to tech innovation.</li><li><strong>AI-Powered Browsing:</strong>&nbsp;Platypus OS integrates AI into browsers, enabling automation across numerous applications, aiming to simplify and enhance business operations.</li><li><strong>Building a Browser:</strong>&nbsp;The development of Platypus OS involved unexpected challenges, particularly in creating a functional, organized browser from scratch.</li><li><strong>AI as an Operating System:</strong>&nbsp;Matthew envisions AI evolving into a central operating system, fundamentally altering how businesses and individuals interact with technology.</li><li><strong>Job Market Impact:</strong>&nbsp;He discusses potential disruptions AI may cause in the job market and the need for ethical regulation to guide its development.</li></ul><br/><p><strong>Companies/Organizations</strong></p><ul><li>Platypus OS</li><li>Qebot</li><li>Better Business Bureau</li><li>Waymo</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books</strong></p><ul><li>The Goal by Eliyahu Goldratt</li></ul><br/><p><strong>Movies</strong></p><ul><li>The Social Network</li><li>Lord of the Rings </li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. 
aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>In this episode, we hear from Matthew White, founder and CEO of Platypus OS. He shares his journey from healthcare staffing to tech entrepreneurship, ultimately leading to the creation of Platypus OS, an AI-driven browser designed to streamline workflows. The conversation explores how Platypus OS integrates AI to manage tasks across over 1500 applications, aiming to revolutionize how businesses operate by treating AI as an integral team member. Matthew discusses the challenges faced while developing a new browser, the potential for AI to transform small businesses, and his vision of AI as the future operating system. He also covers the broader implications of AI on the job market and the importance of ethical considerations.</p><p><strong>Summary</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Matthew White is the founder and CEO of Platypus OS, with a diverse career path from healthcare staffing to tech innovation.</li><li><strong>AI-Powered Browsing:</strong>&nbsp;Platypus OS integrates AI into browsers, enabling automation across numerous applications, aiming to simplify and enhance business operations.</li><li><strong>Building a Browser:</strong>&nbsp;The development of Platypus OS involved unexpected challenges, particularly in creating a functional, organized browser from scratch.</li><li><strong>AI as an Operating System:</strong>&nbsp;Matthew envisions AI evolving into a central operating system, fundamentally altering how businesses and individuals interact with technology.</li><li><strong>Job Market Impact:</strong>&nbsp;He discusses potential disruptions AI may cause in the job market and the need for ethical regulation to guide its 
development.</li></ul><br/><p><strong>Companies/Organizations</strong></p><ul><li>Platypus OS</li><li>Qebot</li><li>Better Business Bureau</li><li>Waymo</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books</strong></p><ul><li>The Goal by Eliyahu Goldratt</li></ul><br/><p><strong>Movies</strong></p><ul><li>The Social Network</li><li>Lord of the Rings </li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">ae8881eb-5794-41b9-89b8-fd21107363fb</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 15 Jul 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/ae8881eb-5794-41b9-89b8-fd21107363fb.mp3" length="100427149" type="audio/mpeg"/><itunes:duration>41:51</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>50</itunes:episode><podcast:episode>50</podcast:episode><podcast:season>1</podcast:season></item><item><title>Marketing in the Age of AI: Lessons from Nuri Cankaya at Intel</title><itunes:title>Marketing in the Age of AI: Lessons from Nuri Cankaya at Intel</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. 
You can hear the difference.</p><p>Nuri Cankaya, VP for Commercial and AI Marketing at Intel and author of "AI in Marketing," shares his journey from his early tech interest sparked by a Commodore 64 to his influential roles at Microsoft and Intel. The episode delves into the evolution of AI from early expert systems to current applications, emphasizing the role of AI in marketing and future implications. Nuri discusses AI's transformative potential in marketing through his AIM framework—Assess, Implement, and Measure. He also explores the ethical considerations and challenges of AGI, providing insights into AI regulation. The conversation concludes with a light-hearted lightning round addressing topics from AI’s job market impact to living in Kirkland, Washington.</p><p><strong>Summary:</strong></p><p><strong>Guest Background:</strong> Nuri Cankaya is the VP for Commercial and AI Marketing at Intel, with a rich history in tech spanning roles at Microsoft and authoring several books, including "AI in Marketing."</p><p><strong>AI Marketing Impact:</strong> Nuri introduces the AIM framework to illustrate AI's profound influence on marketing strategies.</p><p><strong>AGI Concerns:</strong> The discussion highlights potential risks of Artificial General Intelligence and the need for ethical guidelines.</p><p><strong>Personal Insights:</strong> Nuri shares personal anecdotes, including his journey from Microsoft to Intel and the importance of mentoring.</p><p><strong>Future AI Developments:</strong> The episode explores future possibilities, including brain-AI interfaces and the societal impact of AI advancements.</p><p><strong>Companies/Organizations:</strong></p><ul><li>Intel</li><li>Microsoft</li><li>OpenAI</li><li>Google DeepMind</li><li>Meta</li><li>Nvidia</li><li>AMD</li><li>Anthropic</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong></p><ul><li>AI in 
Marketing by Nuri Cankaya</li><li>The Singularity is Nearer by Ray Kurzweil</li><li>Superintelligence by Nick Bostrom</li></ul><br/><p><strong>Movies:</strong></p><ul><li>Ready Player One</li></ul><br/><p><strong>TV Shows:</strong></p><ul><li>Black Mirror</li></ul><br/><p><br></p><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Nuri Cankaya, VP for Commercial and AI Marketing at Intel and author of "AI in Marketing," shares his journey from his early tech interest sparked by a Commodore 64 to his influential roles at Microsoft and Intel. The episode delves into the evolution of AI from early expert systems to current applications, emphasizing the role of AI in marketing and future implications. Nuri discusses AI's transformative potential in marketing through his AIM framework—Assess, Implement, and Measure. He also explores the ethical considerations and challenges of AGI, providing insights into AI regulation. 
The conversation concludes with a light-hearted lightning round addressing topics from AI’s job market impact to living in Kirkland, Washington.</p><p><strong>Summary:</strong></p><p><strong>Guest Background:</strong> Nuri Cankaya is the VP for Commercial and AI Marketing at Intel, with a rich history in tech spanning roles at Microsoft and authoring several books, including "AI in Marketing."</p><p><strong>AI Marketing Impact:</strong> Nuri introduces the AIM framework to illustrate AI's profound influence on marketing strategies.</p><p><strong>AGI Concerns:</strong> The discussion highlights potential risks of Artificial General Intelligence and the need for ethical guidelines.</p><p><strong>Personal Insights:</strong> Nuri shares personal anecdotes, including his journey from Microsoft to Intel and the importance of mentoring.</p><p><strong>Future AI Developments:</strong> The episode explores future possibilities, including brain-AI interfaces and the societal impact of AI advancements.</p><p><strong>Companies/Organizations:</strong></p><ul><li>Intel</li><li>Microsoft</li><li>OpenAI</li><li>Google DeepMind</li><li>Meta</li><li>Nvidia</li><li>AMD</li><li>Anthropic</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong></p><ul><li>AI in Marketing by Nuri Cankaya</li><li>The Singularity is Nearer by Ray Kurzweil</li><li>Superintelligence by Nick Bostrom</li></ul><br/><p><strong>Movies:</strong></p><ul><li>Ready Player One</li></ul><br/><p><strong>TV Shows:</strong></p><ul><li>Black Mirror</li></ul><br/><p><br></p><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">1add0fb2-7c35-4e52-9676-dfe14d62bdde</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 08 Jul 2025 
01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/1add0fb2-7c35-4e52-9676-dfe14d62bdde.mp3" length="108006838" type="audio/mpeg"/><itunes:duration>45:00</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>49</itunes:episode><podcast:episode>49</podcast:episode><podcast:season>1</podcast:season></item><item><title>Francisco Quevedo: Bridging Global Challenges and AI at the Rucker Center</title><itunes:title>Francisco Quevedo: Bridging Global Challenges and AI at the Rucker Center</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Francisco Quevedo, Executive Director of the Rucker Center for Marketing Advancement and an assistant professor, shares his compelling journey from a successful businessman in a socialist regime to an academic leader in the United States. The episode delves into the importance of strategic planning and marketing analytics, where Francisco emphasizes using AI as a co-pilot in decision-making processes. Key topics include the challenges of unequal development in global economies and the role of AI in bridging gaps in technology access. 
Francisco also discusses the ethical considerations in adopting AI in academia and business and highlights the potential of AI in enhancing consumer experiences.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Francisco Quevedo is the Executive Director of the Rucker Center for Marketing Advancement and an assistant professor and senator at the same institution.</li><li><strong>Strategic Planning:</strong>&nbsp;Francisco emphasizes the importance of strategic planning and marketing analytics, with a focus on consumer delight.</li><li><strong>AI in Consulting:</strong>&nbsp;He uses AI as a co-pilot in consulting to assist with research and strategy development, maintaining the need for human oversight.</li><li><strong>Unequal Development:</strong>&nbsp;Francisco discusses the concept of unequal development and the need for developing countries to embrace technological advancements like AI.</li><li><strong>Ethics &amp; AI:</strong>&nbsp;He addresses ethical considerations in AI use, advocating for its responsible integration into both educational and corporate strategies.</li></ul><br/><p><strong>Companies/Organizations:</strong>&nbsp;</p><ul><li>Rucker Center for Marketing Advancement</li><li>DuPont</li><li>Liberty Mutual</li><li>Global AI Ethics Institute</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><br></p><p><strong>Books:</strong>&nbsp;</p><ul><li>Unequal Development by Samir Amin</li></ul><br/><p><br></p><p><strong>TV Shows:</strong>&nbsp;</p><ul><li>Altered Carbon</li></ul><br/><p><br></p><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. 
aicrisk.com, interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>Francisco Quevedo, Executive Director of the Rucker Center for Marketing Advancement and an assistant professor, shares his compelling journey from a successful businessman in a socialist regime to an academic leader in the United States. The episode delves into the importance of strategic planning and marketing analytics, where Francisco emphasizes using AI as a co-pilot in decision-making processes. Key topics include the challenges of unequal development in global economies and the role of AI in bridging gaps in technology access. Francisco also discusses the ethical considerations in adopting AI in academia and business and highlights the potential of AI in enhancing consumer experiences.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Francisco Quevedo is the Executive Director of the Rucker Center for Marketing Advancement and an assistant professor and senator at the same institution.</li><li><strong>Strategic Planning:</strong>&nbsp;Francisco emphasizes the importance of strategic planning and marketing analytics, with a focus on consumer delight.</li><li><strong>AI in Consulting:</strong>&nbsp;He uses AI as a co-pilot in consulting to assist with research and strategy development, maintaining the need for human oversight.</li><li><strong>Unequal Development:</strong>&nbsp;Francisco discusses the concept of unequal development and the need for developing countries to embrace technological advancements like AI.</li><li><strong>Ethics &amp; AI:</strong>&nbsp;He addresses ethical considerations in AI use, advocating for its responsible integration into both educational and corporate 
strategies.</li></ul><br/><p><strong>Companies/Organizations:</strong>&nbsp;</p><ul><li>Rucker Center for Marketing Advancement</li><li>DuPont</li><li>Liberty Mutual</li><li>Global AI Ethics Institute</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><br></p><p><strong>Books:</strong>&nbsp;</p><ul><li>Unequal Development by Samir Amin</li></ul><br/><p><br></p><p><strong>TV Shows:</strong>&nbsp;</p><ul><li>Altered Carbon</li></ul><br/><p><br></p><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">c0bda61b-5ce2-40a1-8938-8a13c1e0b225</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 01 Jul 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/c0bda61b-5ce2-40a1-8938-8a13c1e0b225.mp3" length="89614545" type="audio/mpeg"/><itunes:duration>37:20</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>48</itunes:episode><podcast:episode>48</podcast:episode><podcast:season>1</podcast:season></item><item><title>Solving the Hardest Problems with AI, Not Science: Monica Anderson&apos;s Epistemological Approach with Syntience and Bubble City</title><itunes:title>Solving the Hardest Problems with AI, Not Science: Monica Anderson&apos;s Epistemological Approach with Syntience and Bubble City</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a 
whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Monica Anderson, an experimental AI epistemologist, has been a prominent figure in AI research for decades, spanning both the 20th and 21st centuries. She worked at Google and now runs her own research company, Syntience, along with a social media platform called Bubble City. Monica discussed her career journey, including her time at Google, and the development of large language models (LLMs). She highlighted the history and evolution of AI, mentioning key milestones like Geoffrey Hinton's neural network research and the advent of GPUs, which led to the development of transformers and LLMs. Monica elaborated on the concept of organic learning, contrasting traditional models of language understanding with her more efficient algorithm based on discrete links and connectome algorithms. Additionally, she introduced Bubble City, a social media platform designed to create protected, topic-specific chat rooms using AI to eliminate spam, hate, and persuasion, and to facilitate focused discussions and research.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Monica Anderson is an experimental AI epistemologist who has been a leading figure in AI research for decades, working in both the 20th and 21st centuries. 
She has worked at Google and currently runs her research company, Syntience, and a social media platform called Bubble City.</li><li><strong>Career Journey</strong>: Monica Anderson's career journey, including her work at Google, and her development of large language models (LLMs) through her company Syntience.</li><li><strong>History of AI</strong>: The history and evolution of AI, highlighting key milestones such as Geoffrey Hinton's neural network research and the advent of GPUs, leading to the development of transformers and LLMs.</li><li><strong>Organic Learning</strong>: The concept of organic learning, contrasting traditional models of language understanding with Monica's more efficient algorithm based on discrete links and connectome algorithms.</li><li><strong>Bubble City</strong>: Bubble City, a social media platform designed to create protected, topic-specific chat rooms using AI, aiming to eliminate spam, hate, and persuasion, and facilitate focused discussions and research.</li></ul><br/><p><strong>Companies:</strong></p><ul><li>Google</li><li>Syntience</li><li>Bubble City</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong></p><ul><li>Time Enough for Love by Robert Heinlein</li><li>The Moon Is a Harsh Mistress by Robert Heinlein</li></ul><br/><p><strong>Movies:</strong></p><ul><li>Her</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Monica Anderson, an experimental AI epistemologist, has been a prominent figure in AI research for decades, spanning both the 20th and 21st centuries. She worked at Google and now runs her own research company, Syntience, along with a social media platform called Bubble City. Monica discussed her career journey, including her time at Google, and the development of large language models (LLMs). She highlighted the history and evolution of AI, mentioning key milestones like Geoffrey Hinton's neural network research and the advent of GPUs, which led to the development of transformers and LLMs. Monica elaborated on the concept of organic learning, contrasting traditional models of language understanding with her more efficient algorithm based on discrete links and connectome algorithms. Additionally, she introduced Bubble City, a social media platform designed to create protected, topic-specific chat rooms using AI to eliminate spam, hate, and persuasion, and to facilitate focused discussions and research.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Monica Anderson is an experimental AI epistemologist who has been a leading figure in AI research for decades, working in both the 20th and 21st centuries. 
She has worked at Google and currently runs her research company, Syntience, and a social media platform called Bubble City.</li><li><strong>Career Journey</strong>: Monica Anderson's career journey, including her work at Google, and her development of large language models (LLMs) through her company Syntience.</li><li><strong>History of AI</strong>: The history and evolution of AI, highlighting key milestones such as Geoffrey Hinton's neural network research and the advent of GPUs, leading to the development of transformers and LLMs.</li><li><strong>Organic Learning</strong>: The concept of organic learning, contrasting traditional models of language understanding with Monica's more efficient algorithm based on discrete links and connectome algorithms.</li><li><strong>Bubble City</strong>: Bubble City, a social media platform designed to create protected, topic-specific chat rooms using AI, aiming to eliminate spam, hate, and persuasion, and facilitate focused discussions and research.</li></ul><br/><p><strong>Companies:</strong></p><ul><li>Google</li><li>Syntience</li><li>Bubble City</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong></p><ul><li>Time Enough for Love by Robert Heinlein</li><li>The Moon Is a Harsh Mistress by Robert Heinlein</li></ul><br/><p><strong>Movies:</strong></p><ul><li>Her</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">e335dce7-8f27-48ca-8e61-48b0d385db34</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 24 Jun 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/e335dce7-8f27-48ca-8e61-48b0d385db34.mp3" length="93706365" 
type="audio/mpeg"/><itunes:duration>39:03</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>47</itunes:episode><podcast:episode>47</podcast:episode><podcast:season>1</podcast:season></item><item><title>Dave Sobel: AI for the Right Business Outcome vs. Hype</title><itunes:title>Dave Sobel: AI for the Right Business Outcome vs. Hype</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode of the AI Risk Reward Podcast, host Alec Crawford welcomes Dave Sobel, the host of the Business of Tech podcast, who shares insights from his extensive background in technology and entrepreneurship. The discussion centers around the importance of leveraging AI for genuine business outcomes rather than getting caught up in the hype. Dave emphasizes the necessity of tying AI initiatives back to a company’s profit and loss statement to gauge their true impact. He also explores AI's transformative potential in IT services, providing examples of how AI can enhance efficiency and customer service. Additionally, Dave elaborates on how he uses various AI tools in his media company to streamline news aggregation, summarization, and content creation, thereby delivering high-quality shows more effectively. 
The conversation also touches on ethical considerations and the critical role of data governance in safely and effectively implementing AI.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background: </strong>Dave Sobel, host of the Business of Tech podcast, has a deep background in technology, starting from his childhood interest in computers and leading to his career in consulting, software development, and entrepreneurship.</li><li><strong>AI for Business Outcomes vs. Hype:</strong>&nbsp;The discussion highlights the importance of focusing on AI for achieving real business results rather than getting lost in the hype. Dave emphasizes the need for AI to tie back to a business's profit and loss statement to measure its effectiveness.</li><li><strong>AI's Impact on IT Services:</strong>&nbsp;The transformative potential of AI in IT services is explored, including examples of how AI can streamline processes and improve customer service by providing quick access to information.</li><li><strong>Applications of AI:</strong>&nbsp;Dave shares how he uses AI in his media company, including tools for news aggregation, summarization, and content creation, enhancing his ability to deliver high-quality, insightful shows efficiently.</li><li><strong>Ethical Considerations and Data Governance:</strong>&nbsp;The conversation delves into the ethical issues surrounding AI, particularly data privacy and governance. Dave stresses the importance of having robust data management practices in place to fully leverage AI while mitigating risks.</li></ul><br/><p><strong>Companies:</strong>&nbsp;</p><ul><li>OpenAI</li><li>Microsoft</li><li>Swell</li><li>Notion</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong>&nbsp;</p><ul><li>The E-Myth by Michael E. 
Gerber</li></ul><br/><p><strong>Movies:</strong>&nbsp;</p><ul><li>Star Trek II: The Wrath of Khan</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode of the AI Risk Reward Podcast, host Alec Crawford welcomes Dave Sobel, the host of the Business of Tech podcast, who shares insights from his extensive background in technology and entrepreneurship. The discussion centers around the importance of leveraging AI for genuine business outcomes rather than getting caught up in the hype. Dave emphasizes the necessity of tying AI initiatives back to a company’s profit and loss statement to gauge their true impact. He also explores AI's transformative potential in IT services, providing examples of how AI can enhance efficiency and customer service. Additionally, Dave elaborates on how he uses various AI tools in his media company to streamline news aggregation, summarization, and content creation, thereby delivering high-quality shows more effectively. 
The conversation also touches on ethical considerations and the critical role of data governance in safely and effectively implementing AI.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background: </strong>Dave Sobel, host of the Business of Tech podcast, has a deep background in technology, starting from his childhood interest in computers and leading to his career in consulting, software development, and entrepreneurship.</li><li><strong>AI for Business Outcomes vs. Hype:</strong>&nbsp;The discussion highlights the importance of focusing on AI for achieving real business results rather than getting lost in the hype. Dave emphasizes the need for AI to tie back to a business's profit and loss statement to measure its effectiveness.</li><li><strong>AI's Impact on IT Services:</strong>&nbsp;The transformative potential of AI in IT services is explored, including examples of how AI can streamline processes and improve customer service by providing quick access to information.</li><li><strong>Applications of AI:</strong>&nbsp;Dave shares how he uses AI in his media company, including tools for news aggregation, summarization, and content creation, enhancing his ability to deliver high-quality, insightful shows efficiently.</li><li><strong>Ethical Considerations and Data Governance:</strong>&nbsp;The conversation delves into the ethical issues surrounding AI, particularly data privacy and governance. Dave stresses the importance of having robust data management practices in place to fully leverage AI while mitigating risks.</li></ul><br/><p><strong>Companies:</strong>&nbsp;</p><ul><li>OpenAI</li><li>Microsoft</li><li>Swell</li><li>Notion</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong>&nbsp;</p><ul><li>The E-Myth by Michael E. 
Gerber</li></ul><br/><p><strong>Movies:</strong>&nbsp;</p><ul><li>Star Trek II: The Wrath of Khan</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">d01f2719-306b-4d69-b0dc-b8f107b15390</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 17 Jun 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/d01f2719-306b-4d69-b0dc-b8f107b15390.mp3" length="79126904" type="audio/mpeg"/><itunes:duration>32:58</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>46</itunes:episode><podcast:episode>46</podcast:episode><podcast:season>1</podcast:season></item><item><title>Pioneering Private AI: A Conversation with Colin Constable, CTO at AtSign</title><itunes:title>Pioneering Private AI: A Conversation with Colin Constable, CTO at AtSign</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Colin Constable, co-founder and CTO at AtSign, has an extensive background in telecommunications, beginning with his apprenticeship at British Telecom. 
Colin discussed his career journey and how his mentor, Les Box, influenced his path, leading to significant roles at Credit Suisse and founding AtSign. He explained AtSign's origins, stemming from a debate on the internet's inefficiencies in data control, aiming to allow services to log into users rather than vice versa. Colin expressed excitement about the agentic movement in AI and the convergence of AI and IoT, predicting advancements in self-driving technology and private AI instances. He also shared advice on fostering healthy disagreement, strategic insights for CEOs integrating AI, and the importance of execution in Silicon Valley's innovation culture.</p><p><strong>Summary:&nbsp;</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Colin Constable is the co-founder and CTO at AtSign, with a rich background in telecommunications and network architecture, beginning with his apprenticeship at British Telecom.</li><li><strong>Early Career Influences:</strong>&nbsp;Colin shared his early career experiences at British Telecom and the pivotal role of his mentor, Les Box, in shaping his career path, which eventually led him to Credit Suisse and later founding AtSign.</li><li><strong>Founding AtSign:</strong>&nbsp;He discussed the origins of AtSign, which stemmed from an argument about the inefficiencies of the internet regarding data control and identity management. 
He detailed how AtSign aims to change the paradigm by allowing services to log into users rather than users logging into services.</li><li><strong>Emerging Technologies:</strong>&nbsp;Colin expressed excitement about the agentic movement in AI, particularly the intersection of AI and IoT, predicting impactful advancements in self-driving technology and private AI instances.</li><li><strong>Advice and Insights:</strong>&nbsp;He emphasized the importance of healthy disagreement for learning and growth, strategic insights for CEOs integrating AI, and the impact of execution in Silicon Valley's innovation culture.</li></ul><br/><p><strong>Companies:</strong>&nbsp;</p><ul><li>AtSign</li><li>Zeta Health</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong>&nbsp;</p><ul><li>Accelerando by Charles Stross</li></ul><br/><p><strong>Movies:</strong>&nbsp;</p><ul><li>Yes Man</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Colin Constable, co-founder and CTO at AtSign, has an extensive background in telecommunications, beginning with his apprenticeship at British Telecom. 
Colin discussed his career journey and how his mentor, Les Box, influenced his path, leading to significant roles at Credit Suisse and founding AtSign. He explained AtSign's origins, stemming from a debate on the internet's inefficiencies in data control, aiming to allow services to log into users rather than vice versa. Colin expressed excitement about the agentic movement in AI and the convergence of AI and IoT, predicting advancements in self-driving technology and private AI instances. He also shared advice on fostering healthy disagreement, strategic insights for CEOs integrating AI, and the importance of execution in Silicon Valley's innovation culture.</p><p><strong>Summary:&nbsp;</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Colin Constable is the co-founder and CTO at AtSign, with a rich background in telecommunications and network architecture, beginning with his apprenticeship at British Telecom.</li><li><strong>Early Career Influences:</strong>&nbsp;Colin shared his early career experiences at British Telecom and the pivotal role of his mentor, Les Box, in shaping his career path, which eventually led him to Credit Suisse and later founding AtSign.</li><li><strong>Founding AtSign:</strong>&nbsp;He discussed the origins of AtSign, which stemmed from an argument about the inefficiencies of the internet regarding data control and identity management. 
He detailed how AtSign aims to change the paradigm by allowing services to log into users rather than users logging into services.</li><li><strong>Emerging Technologies:</strong>&nbsp;Colin expressed excitement about the agentic movement in AI, particularly the intersection of AI and IoT, predicting impactful advancements in self-driving technology and private AI instances.</li><li><strong>Advice and Insights:</strong>&nbsp;He emphasized the importance of healthy disagreement for learning and growth, strategic insights for CEOs integrating AI, and the impact of execution in Silicon Valley's innovation culture.</li></ul><br/><p><strong>Companies:</strong>&nbsp;</p><ul><li>AtSign</li><li>Zeta Health</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong>&nbsp;</p><ul><li>Accelerando by Charles Stross</li></ul><br/><p><strong>Movies:</strong>&nbsp;</p><ul><li>Yes Man</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">47fc2b47-d1ac-4b9a-a486-6870b04192e2</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 10 Jun 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/47fc2b47-d1ac-4b9a-a486-6870b04192e2.mp3" length="109177124" type="audio/mpeg"/><itunes:duration>45:29</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>45</itunes:episode><podcast:episode>45</podcast:episode><podcast:season>1</podcast:season></item><item><title>Daniel Goddard on Dysko and the Future of AI-Powered Networking</title><itunes:title>Daniel Goddard on Dysko and the Future of AI-Powered Networking</itunes:title><description><![CDATA[<p>In the AI Risk 
Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Daniel Goddard, the founder and CEO of Dysko, an AI networking application, joined the AI Risk Reward Podcast to discuss his fascinating journey from Australia to Los Angeles. Daniel's career path included surfing, studying business, and ultimately pursuing acting, where he starred in TV shows like <em>Beastmaster</em> and <em>Young and the Restless</em>. In the podcast, he delves into the origin of Dysko, which uses hashtags and GPS technology to help people connect based on shared interests, addressing a significant pain point in networking. Daniel also discusses the privacy concerns associated with AI and how Dysko ensures user safety by allowing control over visibility and interactions. Furthermore, he shares valuable advice emphasizing the importance of pursuing what makes you happy and reflects on the future of AI and its impact on industries and human experiences.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Daniel Goddard is the founder and CEO of Dysko, an AI networking application. 
He has a varied background, including surfing in Australia, studying business, pursuing acting, and eventually moving to Los Angeles where he starred in TV shows like <em>Beastmaster</em> and <em>Young and the Restless</em>.</li><li><strong>Journey and Career Path</strong>: Daniel shares his life journey from Australia to Los Angeles, detailing his experiences in acting and the significant pivot points that led him from a business student to an actor and eventually a tech entrepreneur.</li><li><strong>Creation of Dysko</strong>: The discussion delves into the origin of Dysko, an AI networking application designed to help people connect based on shared interests and activities, using hashtags and GPS technology to facilitate meaningful connections in both social and professional settings.</li><li><strong>Challenges and Privacy in AI</strong>: The conversation highlights the privacy concerns associated with AI technology and how Dysko addresses these issues by allowing users to control their visibility and interactions to ensure a safe networking experience.</li><li><strong>Advice and Insights</strong>: Daniel shares valuable advice he has received, emphasizing the importance of pursuing what makes you happy. 
He also discusses the future of AI and its potential impact on industries and human experiences.</li></ul><br/><p><strong>Books:</strong></p><ul><li><em>Michael Dell's autobiography</em></li><li><em>The Lean Startup</em> by Eric Ries</li></ul><br/><p><strong>TV Shows:</strong></p><ul><li><em>Beastmaster</em></li><li><em>Young and the Restless</em></li></ul><br/><p><strong>Movies:</strong></p><ul><li><em>There Will Be Blood</em></li><li><em>Ready Player One</em></li></ul><br/><p><strong>Companies:</strong></p><ul><li><a href="https://dysko.co/" rel="noopener noreferrer" target="_blank">Dysko</a></li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p><p><br></p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Daniel Goddard, the founder and CEO of Dysko, an AI networking application, joined the AI Risk Reward Podcast to discuss his fascinating journey from Australia to Los Angeles. Daniel's career path included surfing, studying business, and ultimately pursuing acting, where he starred in TV shows like <em>Beastmaster</em> and <em>Young and the Restless</em>. 
In the podcast, he delves into the origin of Dysko, which uses hashtags and GPS technology to help people connect based on shared interests, addressing a significant pain point in networking. Daniel also discusses the privacy concerns associated with AI and how Dysko ensures user safety by allowing control over visibility and interactions. Furthermore, he shares valuable advice emphasizing the importance of pursuing what makes you happy and reflects on the future of AI and its impact on industries and human experiences.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Daniel Goddard is the founder and CEO of Dysko, an AI networking application. He has a varied background, including surfing in Australia, studying business, pursuing acting, and eventually moving to Los Angeles where he starred in TV shows like <em>Beastmaster</em> and <em>Young and the Restless</em>.</li><li><strong>Journey and Career Path</strong>: Daniel shares his life journey from Australia to Los Angeles, detailing his experiences in acting and the significant pivot points that led him from a business student to an actor and eventually a tech entrepreneur.</li><li><strong>Creation of Dysko</strong>: The discussion delves into the origin of Dysko, an AI networking application designed to help people connect based on shared interests and activities, using hashtags and GPS technology to facilitate meaningful connections in both social and professional settings.</li><li><strong>Challenges and Privacy in AI</strong>: The conversation highlights the privacy concerns associated with AI technology and how Dysko addresses these issues by allowing users to control their visibility and interactions to ensure a safe networking experience.</li><li><strong>Advice and Insights</strong>: Daniel shares valuable advice he has received, emphasizing the importance of pursuing what makes you happy. 
He also discusses the future of AI and its potential impact on industries and human experiences.</li></ul><br/><p><strong>Books:</strong></p><ul><li><em>Michael Dell's autobiography</em></li><li><em>The Lean Startup</em> by Eric Ries</li></ul><br/><p><strong>TV Shows:</strong></p><ul><li><em>Beastmaster</em></li><li><em>Young and the Restless</em></li></ul><br/><p><strong>Movies:</strong></p><ul><li><em>There Will Be Blood</em></li><li><em>Ready Player One</em></li></ul><br/><p><strong>Companies:</strong></p><ul><li><a href="https://dysko.co/" rel="noopener noreferrer" target="_blank">Dysko</a></li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p><p><br></p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">c3290ee5-5882-4821-bd31-520cc492a39d</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 03 Jun 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/c3290ee5-5882-4821-bd31-520cc492a39d.mp3" length="105193973" type="audio/mpeg"/><itunes:duration>43:50</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>44</itunes:episode><podcast:episode>44</podcast:episode><podcast:season>1</podcast:season></item><item><title>Steven Zeller: Transforming Ideas into Reality as an International Entrepreneur and Impact Technology Investor</title><itunes:title>Steven Zeller: Transforming Ideas into Reality as an International Entrepreneur and Impact Technology Investor</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" 
rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode of the AI Risk Reward Podcast, host Alec Crawford welcomes Steven Zeller, an international entrepreneur and impact technology investor. Steven shares his fascinating journey from a teenage entrepreneur in Kansas City to a successful investor in innovative technologies. He discusses pivotal moments that shifted his focus from merely chasing success to making meaningful impacts through technology. The conversation delves into his daily use of AI for brainstorming and problem-solving, his current projects building data centers in collaboration with governments and universities, and the future implications of AI, including its integration with robotics and brain-computer interfaces. Steven emphasizes the importance of collaboration among world leaders to ensure technological advancements benefit humanity collectively while addressing ethical concerns.</p><p><strong>&nbsp;Summary:</strong></p><ul><li><strong>Guest Background</strong>: The podcast features Steven Zeller, an international entrepreneur and impact technology investor, who began his entrepreneurship journey as a teenager in Kansas City, Missouri. He made his first million early in his career, faced ups and downs, and ultimately focused on investing in innovative technologies.</li><li><strong>Catalyst for Change</strong>: Steven discusses a pivotal moment in his life where he realized that merely chasing money and success wasn't fulfilling. 
This realization led him to focus on impactful and innovative technologies that could propel humanity forward.</li><li><strong>Use of AI</strong>: Steven uses AI on a daily basis for brainstorming and exploring theoretical concepts, particularly in fields like space propulsion systems and genetics. He highlights the significant potential of AI in advancing human capabilities and solving complex problems.</li><li><strong>Current Projects</strong>: Steven is working on building state-of-the-art data centers in collaboration with governments and universities. These projects aim to create mutual benefits, such as strategic energy deals, to support the infrastructure required for AI advancements.</li><li><strong>Future of AI and Technology</strong>: The podcast delves into the future implications of AI, including its integration with robotics and brain-computer interfaces. Steven emphasizes the importance of collaboration among world leaders to ensure technological advancements benefit humanity as a whole and address pressing ethical issues like security and geopolitical stability.</li></ul><br/><p>&nbsp;</p><p><strong>Companies:</strong></p><ul><li>OpenAI</li><li>Anthropic</li><li>BMW</li><li>Mercedes</li><li>Tesla</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p><p><br></p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode of the AI Risk Reward Podcast, host Alec Crawford welcomes Steven Zeller, an international entrepreneur and impact technology investor. Steven shares his fascinating journey from a teenage entrepreneur in Kansas City to a successful investor in innovative technologies. He discusses pivotal moments that shifted his focus from merely chasing success to making meaningful impacts through technology. The conversation delves into his daily use of AI for brainstorming and problem-solving, his current projects building data centers in collaboration with governments and universities, and the future implications of AI, including its integration with robotics and brain-computer interfaces. Steven emphasizes the importance of collaboration among world leaders to ensure technological advancements benefit humanity collectively while addressing ethical concerns.</p><p><strong>&nbsp;Summary:</strong></p><ul><li><strong>Guest Background</strong>: The podcast features Steven Zeller, an international entrepreneur and impact technology investor, who began his entrepreneurship journey as a teenager in Kansas City, Missouri. He made his first million early in his career, faced ups and downs, and ultimately focused on investing in innovative technologies.</li><li><strong>Catalyst for Change</strong>: Steven discusses a pivotal moment in his life where he realized that merely chasing money and success wasn't fulfilling. This realization led him to focus on impactful and innovative technologies that could propel humanity forward.</li><li><strong>Use of AI</strong>: Steven uses AI on a daily basis for brainstorming and exploring theoretical concepts, particularly in fields like space propulsion systems and genetics. 
He highlights the significant potential of AI in advancing human capabilities and solving complex problems.</li><li><strong>Current Projects</strong>: Steven is working on building state-of-the-art data centers in collaboration with governments and universities. These projects aim to create mutual benefits, such as strategic energy deals, to support the infrastructure required for AI advancements.</li><li><strong>Future of AI and Technology</strong>: The podcast delves into the future implications of AI, including its integration with robotics and brain-computer interfaces. Steven emphasizes the importance of collaboration among world leaders to ensure technological advancements benefit humanity as a whole and address pressing ethical issues like security and geopolitical stability.</li></ul><br/><p>&nbsp;</p><p><strong>Companies:</strong></p><ul><li>OpenAI</li><li>Anthropic</li><li>BMW</li><li>Mercedes</li><li>Tesla</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p><p><br></p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">46e997b2-add5-496f-8755-50d2e0ddefb6</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 27 May 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/46e997b2-add5-496f-8755-50d2e0ddefb6.mp3" length="106136471" type="audio/mpeg"/><itunes:duration>44:13</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>43</itunes:episode><podcast:episode>43</podcast:episode><podcast:season>1</podcast:season></item><item><title>From NASA to Creating Your AI Financial Advisor: Alex Harmsen&apos;s Journey in AI and Financial 
Advisory</title><itunes:title>From NASA to Creating Your AI Financial Advisor: Alex Harmsen&apos;s Journey in AI and Financial Advisory</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode of the AI Risk Reward Podcast, host Alec Crawford welcomes Alex Harmsen, the CEO and founder of Portfolio Pilot. Alex shares his fascinating journey from aspiring astronaut and working on NASA's Mars helicopter project to leading an autonomous vehicle company, and finally venturing into the world of consumer FinTech. With a strong background in engineering, physics, and economics, Alex discusses how his passion for modeling complex systems has driven his diverse career. He highlights the importance of mentorship, lifelong learning, and hiring experts in various fields. Alex also introduces Flight Club, a platform connecting pilots with planes and passengers, and delves into the origin and development of Portfolio Pilot. This AI-driven financial advisory firm has rapidly grown, now serving 30,000 users and managing $30 billion in assets. The episode provides valuable insights into the transformative role of AI in finance and the potential challenges and opportunities it presents.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background: </strong>Alex Harmsen is the CEO and founder of Portfolio Pilot. 
He has a diverse background, having studied engineering, physics, and economics. Alex initially aimed to become an astronaut and worked at NASA on the Mars helicopter project. He also ran an autonomous vehicle company before venturing into consumer FinTech with Portfolio Pilot.</li><li><strong>Transition from Physics to AI and Finance: </strong>Alex discussed his journey from studying physics and working on autonomous vehicles to founding an AI-driven financial advisory firm. He emphasized his interest in modeling complex systems, whether in physics, autonomous vehicles, or the global economy, and how this skillset transitioned smoothly to finance.</li><li><strong>Mentorship and Learning: </strong>Alex talked about the importance of having multiple mentors throughout his career. He shared insights from various mentors, including an astrophysicist who taught him to be authentic and a computer vision expert who significantly contributed to his AI expertise. He highlighted the value of lifelong learning and hiring people with more expertise.</li><li><strong>Flight Club Initiative: </strong>Alex briefly mentioned Flight Club, a platform that connects pilots with planes and passengers, similar to the Airbnb model for aircraft sharing. This initiative aims to make flying more accessible and affordable by sharing the costs of aircraft ownership and fostering a community among pilots and aircraft owners.</li><li><strong>Origin and Development of Portfolio Pilot: </strong>Alex explained the origin story of Portfolio Pilot, which started as a personal tool for managing his finances. After selling his previous company, he focused on creating a comprehensive financial advisory system that integrates various accounts and provides personalized investment strategies. 
The company has grown rapidly, now serving 30,000 users and managing $30 billion in assets.</li></ul><br/><p><strong>Movies/TV shows:</strong></p><ul><li>A Beautiful Mind</li><li>Black Mirror</li></ul><br/><p><strong>Companies:</strong></p><ul><li>Portfolio Pilot</li><li>NASA</li><li>Iris Automation</li><li>Bridgewater</li><li>OpenAI</li><li>Plaid</li><li>Yodlee</li><li>SnapTrade</li><li>Flight Club</li><li>Devon</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p><p><br></p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode of the AI Risk Reward Podcast, host Alec Crawford welcomes Alex Harmsen, the CEO and founder of Portfolio Pilot. Alex shares his fascinating journey from aspiring astronaut and working on NASA's Mars helicopter project to leading an autonomous vehicle company, and finally venturing into the world of consumer FinTech. With a strong background in engineering, physics, and economics, Alex discusses how his passion for modeling complex systems has driven his diverse career. He highlights the importance of mentorship, lifelong learning, and hiring experts in various fields. 
Alex also introduces Flight Club, a platform connecting pilots with planes and passengers, and delves into the origin and development of Portfolio Pilot. This AI-driven financial advisory firm has rapidly grown, now serving 30,000 users and managing $30 billion in assets. The episode provides valuable insights into the transformative role of AI in finance and the potential challenges and opportunities it presents.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background: </strong>Alex Harmsen is the CEO and founder of Portfolio Pilot. He has a diverse background, having studied engineering, physics, and economics. Alex initially aimed to become an astronaut and worked at NASA on the Mars helicopter project. He also ran an autonomous vehicle company before venturing into consumer FinTech with Portfolio Pilot.</li><li><strong>Transition from Physics to AI and Finance: </strong>Alex discussed his journey from studying physics and working on autonomous vehicles to founding an AI-driven financial advisory firm. He emphasized his interest in modeling complex systems, whether in physics, autonomous vehicles, or the global economy, and how this skillset transitioned smoothly to finance.</li><li><strong>Mentorship and Learning: </strong>Alex talked about the importance of having multiple mentors throughout his career. He shared insights from various mentors, including an astrophysicist who taught him to be authentic and a computer vision expert who significantly contributed to his AI expertise. He highlighted the value of lifelong learning and hiring people with more expertise.</li><li><strong>Flight Club Initiative: </strong>Alex briefly mentioned Flight Club, a platform that connects pilots with planes and passengers, similar to the Airbnb model for aircraft sharing. 
This initiative aims to make flying more accessible and affordable by sharing the costs of aircraft ownership and fostering a community among pilots and aircraft owners.</li><li><strong>Origin and Development of Portfolio Pilot: </strong>Alex explained the origin story of Portfolio Pilot, which started as a personal tool for managing his finances. After selling his previous company, he focused on creating a comprehensive financial advisory system that integrates various accounts and provides personalized investment strategies. The company has grown rapidly, now serving 30,000 users and managing $30 billion in assets.</li></ul><br/><p><strong>Movies/TV shows:</strong></p><ul><li>A Beautiful Mind</li><li>Black Mirror</li></ul><br/><p><strong>Companies:</strong></p><ul><li>Portfolio Pilot</li><li>NASA</li><li>Iris Automation</li><li>Bridgewater</li><li>OpenAI</li><li>Plaid</li><li>Yodlee</li><li>SnapTrade</li><li>Flight Club</li><li>Devon</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p><p><br></p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">eb62c3ac-29b5-49ad-a2ea-c86f0f966148</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 20 May 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/eb62c3ac-29b5-49ad-a2ea-c86f0f966148.mp3" length="128626854" type="audio/mpeg"/><itunes:duration>53:36</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>42</itunes:episode><podcast:episode>42</podcast:episode><podcast:season>1</podcast:season></item><item><title>From Finance to AI Leadership: Michael Cutajar&apos;s Story with Interactive 
Avatars</title><itunes:title>From Finance to AI Leadership: Michael Cutajar&apos;s Story with Interactive Avatars</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this insightful podcast episode, Alec Crawford interviews Michael Cutajar, the founder of Interactive Avatars. Michael shares his compelling journey from growing up on the island of Malta and starting his career in finance to becoming a tech entrepreneur. He discusses the inspirations behind Interactive Avatars and the innovative work they are doing with hyper-realistic AI avatars. Michael also opens up about the challenges he faced, including his time as CEO at Metaverse Architects, where he had to shut down the company and lay off his friends, and the lessons in resilience he learned along the way. The conversation delves into the ethical implications of AI, the impact of influential books like "Man's Search for Meaning," and the future of technology in enhancing human experiences.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Michael Cutajar, founder of Interactive Avatars, shares his unique journey from growing up on the sun-soaked island of Malta to becoming a leader in the tech industry. 
His diverse career path includes a transition from finance to technology and entrepreneurship.</li><li><strong>Inspiration Behind Interactive Avatars</strong>: Michael explains what inspired him to start Interactive Avatars, detailing his experiences in various tech roles and how they led him to focus on creating innovative AI solutions.</li><li><strong>Impact of "Man's Search for Meaning"</strong>: Michael highlights the profound impact the book "Man's Search for Meaning" by Viktor Frankl had on him during the COVID-19 pandemic. He explains how the book's themes of finding meaning in suffering resonated with him during challenging times.</li><li><strong>Challenges and Resilience</strong>: Michael shares the challenges he faced, including shutting down the previous company he was CEO at, Metaverse Architects, and having to lay off his friends. He emphasizes the importance of resilience and moving forward despite setbacks.</li><li><strong>Interactive Avatars and AI Ethics</strong>: Michael talks about the innovations at Interactive Avatars, particularly the realistic AI avatars they create. 
He also discusses the ethical implications of AI, security concerns, and the importance of maintaining ethical standards in AI development.</li></ul><br/><p><strong>Books:</strong></p><ul><li>Man's Search for Meaning</li></ul><br/><p><strong>Movies:</strong></p><ul><li>Cloud Atlas</li></ul><br/><p><strong>Companies:</strong></p><ul><li>Interactive Avatars</li><li>Metaverse Architects</li><li>11 Labs</li><li>DeepSeek</li><li>OpenAI</li><li>Nvidia</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this insightful podcast episode, Alec Crawford interviews Michael Cutajar, the founder of Interactive Avatars. Michael shares his compelling journey from growing up on the island of Malta and starting his career in finance to becoming a tech entrepreneur. He discusses the inspirations behind Interactive Avatars and the innovative work they are doing with hyper-realistic AI avatars. Michael also opens up about the challenges he faced, including his time as CEO at Metaverse Architects, where he had to shut down the company and lay off his friends, and the lessons in resilience he learned along the way. 
The conversation delves into the ethical implications of AI, the impact of influential books like "Man's Search for Meaning," and the future of technology in enhancing human experiences.</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background</strong>: Michael Cutajar, founder of Interactive Avatars, shares his unique journey from growing up on the sun-soaked island of Malta to becoming a leader in the tech industry. His diverse career path includes a transition from finance to technology and entrepreneurship.</li><li><strong>Inspiration Behind Interactive Avatars</strong>: Michael explains what inspired him to start Interactive Avatars, detailing his experiences in various tech roles and how they led him to focus on creating innovative AI solutions.</li><li><strong>Impact of "Man's Search for Meaning"</strong>: Michael highlights the profound impact the book "Man's Search for Meaning" by Viktor Frankl had on him during the COVID-19 pandemic. He explains how the book's themes of finding meaning in suffering resonated with him during challenging times.</li><li><strong>Challenges and Resilience</strong>: Michael shares the challenges he faced, including shutting down the previous company he was CEO at, Metaverse Architects, and having to lay off his friends. He emphasizes the importance of resilience and moving forward despite setbacks.</li><li><strong>Interactive Avatars and AI Ethics</strong>: Michael talks about the innovations at Interactive Avatars, particularly the realistic AI avatars they create. 
He also discusses the ethical implications of AI, security concerns, and the importance of maintaining ethical standards in AI development.</li></ul><br/><p><strong>Books:</strong></p><ul><li>Man's Search for Meaning</li></ul><br/><p><strong>Movies:</strong></p><ul><li>Cloud Atlas</li></ul><br/><p><strong>Companies:</strong></p><ul><li>Interactive Avatars</li><li>Metaverse Architects</li><li>11 Labs</li><li>DeepSeek</li><li>OpenAI</li><li>Nvidia</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">bcbdb5de-df8b-4436-97eb-b25b6688f843</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 13 May 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/bcbdb5de-df8b-4436-97eb-b25b6688f843.mp3" length="70411410" type="audio/mpeg"/><itunes:duration>29:20</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>41</itunes:episode><podcast:episode>41</podcast:episode><podcast:season>1</podcast:season></item><item><title>Reinventing Cargill with AI: Inspiration from Nicole Hilgenkamp</title><itunes:title>Reinventing Cargill with AI: Inspiration from Nicole Hilgenkamp</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode of the AI Risk Reward Podcast, Alec Crawford hosts Nicole Hilgenkamp, Senior Director of Procurement and Transportation at Cargill. Nicole shares her journey, including a pivotal summer program at the University of Manchester Business School. They discuss the practical applications of AI, referencing the book "Your AI Roadmap" by Dr. Joan Bajorek. Nicole highlights the importance of technology adoption in corporate America and addresses the issue of bias in AI, particularly concerning women. She also explains how AI provides strategic advantages at Cargill, such as time savings in procurement, while discussing the challenges of data cleanliness and privacy. </p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Nicole Hilgenkamp is the Senior Director of Procurement and Transportation at Cargill, with a unique experience that includes a summer program at the University of Manchester Business School.</li><li><strong>Importance of Overseas Experience:</strong>&nbsp;Nicole emphasizes the value of studying and traveling abroad, sharing how her overseas experience helped her understand different cultures and step out of her comfort zone.</li><li><strong>AI and Digital Transformation:</strong>&nbsp;Nicole discusses her career focus on AI and digital transformation, highlighting the importance of technology adoption in corporate America and the transformative potential of AI.</li><li><strong>Bias in AI and Women in Technology:</strong>&nbsp;She addresses the issue of bias in AI, particularly regarding women, and stresses the need for retraining large language models to remove historical biases.</li><li><strong>AI's Strategic Advantage at Cargill:</strong>&nbsp;Nicole explains how AI provides strategic advantages at Cargill, such as
saving time in procurement processes, while also discussing the challenges related to data cleanliness, process standardization, and privacy concerns.</li></ul><br/><p><strong>Companies:</strong></p><ul><li>Cargill</li><li>Hewlett Packard</li><li>Claude AI</li><li>Llama</li><li>Microsoft</li><li>JP Morgan</li><li><a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong></p><ul><li>Your AI Roadmap</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode of the AI Risk Reward Podcast, Alec Crawford hosts Nicole Hilgenkamp, Senior Director of Procurement and Transportation at Cargill. Nicole shares her journey, including a pivotal summer program at the University of Manchester Business School. They discuss the practical applications of AI, referencing the book "Your AI Roadmap" by Dr. Joan Bajorek. Nicole highlights the importance of technology adoption in corporate America and addresses the issue of bias in AI, particularly concerning women. She also explains how AI provides strategic advantages at Cargill, such as time savings in procurement, while discussing the challenges of data cleanliness and privacy. 
</p><p><strong>Summary:</strong></p><ul><li><strong>Guest Background:</strong>&nbsp;Nicole Hilgenkamp is the Senior Director of Procurement and Transportation at Cargill, with a unique experience that includes a summer program at the University of Manchester Business School.</li><li><strong>Importance of Overseas Experience:</strong>&nbsp;Nicole emphasizes the value of studying and traveling abroad, sharing how her overseas experience helped her understand different cultures and step out of her comfort zone.</li><li><strong>AI and Digital Transformation:</strong>&nbsp;Nicole discusses her career focus on AI and digital transformation, highlighting the importance of technology adoption in corporate America and the transformative potential of AI.</li><li><strong>Bias in AI and Women in Technology:</strong>&nbsp;She addresses the issue of bias in AI, particularly regarding women, and stresses the need for retraining large language models to remove historical biases.</li><li><strong>AI's Strategic Advantage at Cargill:</strong>&nbsp;Nicole explains how AI provides strategic advantages at Cargill, such as saving time in procurement processes, while also discussing the challenges related to data cleanliness, process standardization, and privacy concerns.</li></ul><br/><p><strong>Companies:</strong></p><ul><li>Cargill</li><li>Hewlett Packard</li><li>Claude AI</li><li>Llama</li><li>Microsoft</li><li>JP Morgan</li><li><a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><strong>Books:</strong></p><ul><li>Your AI Roadmap</li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">9ffc4a1d-1532-443b-b3fb-2a43af49fe66</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 06 May 2025 01:00:00 
-0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/9ffc4a1d-1532-443b-b3fb-2a43af49fe66.mp3" length="75646349" type="audio/mpeg"/><itunes:duration>31:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>40</itunes:episode><podcast:episode>40</podcast:episode><podcast:season>1</podcast:season></item><item><title>Get Smarter in Cybersecurity with Sec Gemini: A Preview from Google’s Elie Bursztein</title><itunes:title>Get Smarter in Cybersecurity with Sec Gemini: A Preview from Google’s Elie Bursztein</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode of AI Risk Reward, host Alec Crawford welcomes back Elie Bursztein, a cybersecurity and AI expert from Google. Elie discusses the rapid advancements in AI for cybersecurity, emphasizing its potential to transform security operations and vulnerability management. He also highlights the challenges in developing reliable AI systems and the complexities of securing AI agents against untrusted inputs and prompt injection attacks. Furthermore, Elie introduces Sec Gemini, a Google-led research project aimed at providing real-time cybersecurity insights, and teases its upcoming involvement in a CTF contest at DEF CON. 
This episode offers a comprehensive look at the critical developments in AI and cybersecurity.</p><p><strong>Summary:</strong> </p><ul><li><strong>Elie Bursztein:</strong>&nbsp;Elie Bursztein is a cybersecurity and AI expert working at Google. He has a strong background in these fields and previously appeared on the show, making him the first returning guest.</li><li><strong>Current State of AI in Cybersecurity: </strong>Elie shares insights on the rapid advancements in AI for cybersecurity applications. He discusses the excitement around AI's potential to revolutionize security operations, vulnerability detection, and attacker capabilities. He also highlights the challenges in developing reliable AI systems for practical, production-level use.</li><li><strong>Difficulties in Vulnerability Detection and Patching: </strong>Elie elaborates on the complexities of using AI for finding and patching vulnerabilities. He mentions Google's internal efforts and the extensive tooling and customization required to make AI systems effective in these areas. Despite progress, he notes that achieving the necessary level of reliability and utility remains a significant hurdle.</li><li><strong>Agent Security and Prompt Injection Risks: </strong>Elie highlights the unique security challenges posed by AI agents, especially those that perform autonomous or semi-autonomous tasks. He explains the risks associated with untrusted inputs, such as prompt injection attacks, and the potential for these vulnerabilities to compromise agent behavior and security.</li><li><strong>Introduction to Sec Gemini: </strong>The episode introduces Sec Gemini, a research project led by Elie at Google. The invitation-only research project aims to provide real-time, up-to-date cybersecurity insights using AI. Elie discusses the goals of Sec Gemini, its current capabilities, and the collaborative approach with various organizations to refine and enhance the model. 
He also mentions upcoming announcements and the project's involvement in a new CTF (Capture The Flag) contest at DEF CON.</li></ul><br/><p><strong>Companies: </strong></p><ul><li>Google </li><li>Apple </li><li>OpenAI</li><li>Giskard</li><li>MLCommons&nbsp;</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><br></p><p>Copyright (c) 2025 Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode of AI Risk Reward, host Alec Crawford welcomes back Elie Bursztein, a cybersecurity and AI expert from Google. Elie discusses the rapid advancements in AI for cybersecurity, emphasizing its potential to transform security operations and vulnerability management. He also highlights the challenges in developing reliable AI systems and the complexities of securing AI agents against untrusted inputs and prompt injection attacks. Furthermore, Elie introduces Sec Gemini, a Google-led research project aimed at providing real-time cybersecurity insights, and teases its upcoming involvement in a CTF contest at DEF CON. 
This episode offers a comprehensive look at the critical developments in AI and cybersecurity.</p><p><strong>Summary:</strong> </p><ul><li><strong>Elie Bursztein:</strong>&nbsp;Elie Bursztein is a cybersecurity and AI expert working at Google. He has a strong background in these fields and previously appeared on the show, making him the first returning guest.</li><li><strong>Current State of AI in Cybersecurity: </strong>Elie shares insights on the rapid advancements in AI for cybersecurity applications. He discusses the excitement around AI's potential to revolutionize security operations, vulnerability detection, and attacker capabilities. He also highlights the challenges in developing reliable AI systems for practical, production-level use.</li><li><strong>Difficulties in Vulnerability Detection and Patching: </strong>Elie elaborates on the complexities of using AI for finding and patching vulnerabilities. He mentions Google's internal efforts and the extensive tooling and customization required to make AI systems effective in these areas. Despite progress, he notes that achieving the necessary level of reliability and utility remains a significant hurdle.</li><li><strong>Agent Security and Prompt Injection Risks: </strong>Elie highlights the unique security challenges posed by AI agents, especially those that perform autonomous or semi-autonomous tasks. He explains the risks associated with untrusted inputs, such as prompt injection attacks, and the potential for these vulnerabilities to compromise agent behavior and security.</li><li><strong>Introduction to Sec Gemini: </strong>The episode introduces Sec Gemini, a research project led by Elie at Google. The invitation-only research project aims to provide real-time, up-to-date cybersecurity insights using AI. Elie discusses the goals of Sec Gemini, its current capabilities, and the collaborative approach with various organizations to refine and enhance the model. 
He also mentions upcoming announcements and the project's involvement in a new CTF (Capture The Flag) contest at DEF CON.</li></ul><br/><p><strong>Companies: </strong></p><ul><li>Google </li><li>Apple </li><li>OpenAI</li><li>Giskard</li><li>MLCommons&nbsp;</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p><br></p><p>Copyright (c) 2025 Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">0afc5f15-a807-4f1f-893b-9a0ca64e9639</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 29 Apr 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/682cb2ed-3388-4f16-a3c2-952a898fe802/Elie-Burszstein-2-master.mp3" length="79634724" type="audio/mpeg"/><itunes:duration>33:11</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>39</itunes:episode><podcast:episode>39</podcast:episode><podcast:season>1</podcast:season></item><item><title>Revolutionizing Mortgages with AI: Insights from Jon Kutsmeda, CEO of LendZen</title><itunes:title>Revolutionizing Mortgages with AI: Insights from Jon Kutsmeda, CEO of LendZen</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode of the AI Risk Reward Podcast, host Alec Crawford sits down with Jon Kutsmeda, the CEO and co-creator of LendZen.com. With nearly 20 years of experience in residential mortgage lending, Jon shares his journey from Philadelphia to Hawaii and the entrepreneurial spirit that led him to co-create LendZen.com. The discussion dives into the criticisms of pre-GFC mortgage practices, the impact of AI on the mortgage industry, and how AI can streamline mortgage processes. Jon also offers valuable advice for entrepreneurs and founders, emphasizing perseverance and the importance of understanding whether a business can scale.</p><p><strong>Summary: </strong></p><p><strong>Guest Background:</strong>&nbsp;Jon Kutsmeda is the CEO and co-creator of LendZen.com, bringing nearly 20 years of experience in residential mortgage lending to the world of fintech and AI. 
Originally from near Philadelphia, Jon moved to Hawaii and pursued an entrepreneurial path in the mortgage industry.</p><p><strong>Criticisms of Pre-GFC Mortgage Practices:</strong>&nbsp;Jon discusses the negative aspects of subprime loans and industry practices before the Great Financial Crisis, emphasizing the importance of ethical standards and the impact of regulatory changes post-Dodd-Frank.</p><p><strong>Impact of AI on Mortgage Industry Employment:</strong>&nbsp;The conversation highlights how technology and AI are likely to reduce the number of jobs in the mortgage industry, while also exploring the broader implications for global employment and GDP.</p><p><strong>Streamlining Mortgage Processes with AI:</strong>&nbsp;Jon explains how AI can automate and simplify the mortgage approval process, making it more transparent and efficient for consumers, while still requiring human oversight in critical areas.</p><p><strong>Advice for Entrepreneurs and Founders:</strong>&nbsp;Jon shares insights on the importance of perseverance in entrepreneurship, the need to distinguish between small business ownership and scalable startups, and practical advice for founders focusing on AI technology.</p><p><strong>Movies: </strong></p><ol><li>The Big Short</li></ol><br/><p><strong>Companies: </strong></p><ol><li>LendZen.com</li><li>Fannie Mae</li><li>Freddie Mac</li><li>Ginnie Mae</li><li>Fidelity</li><li>First American</li><li>Uber</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ol><br/><p>Copyright (c) 2025 Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence 
for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode of the AI Risk Reward Podcast, host Alec Crawford sits down with Jon Kutsmeda, the CEO and co-creator of LendZen.com. With nearly 20 years of experience in residential mortgage lending, Jon shares his journey from Philadelphia to Hawaii and the entrepreneurial spirit that led him to co-create LendZen.com. The discussion dives into the criticisms of pre-GFC mortgage practices, the impact of AI on the mortgage industry, and how AI can streamline mortgage processes. Jon also offers valuable advice for entrepreneurs and founders, emphasizing perseverance and the importance of understanding whether a business can scale.</p><p><strong>Summary: </strong></p><p><strong>Guest Background:</strong>&nbsp;Jon Kutsmeda is the CEO and co-creator of LendZen.com, bringing nearly 20 years of experience in residential mortgage lending to the world of fintech and AI. 
Originally from near Philadelphia, Jon moved to Hawaii and pursued an entrepreneurial path in the mortgage industry.</p><p><strong>Criticisms of Pre-GFC Mortgage Practices:</strong>&nbsp;Jon discusses the negative aspects of subprime loans and industry practices before the Great Financial Crisis, emphasizing the importance of ethical standards and the impact of regulatory changes post-Dodd-Frank.</p><p><strong>Impact of AI on Mortgage Industry Employment:</strong>&nbsp;The conversation highlights how technology and AI are likely to reduce the number of jobs in the mortgage industry, while also exploring the broader implications for global employment and GDP.</p><p><strong>Streamlining Mortgage Processes with AI:</strong>&nbsp;Jon explains how AI can automate and simplify the mortgage approval process, making it more transparent and efficient for consumers, while still requiring human oversight in critical areas.</p><p><strong>Advice for Entrepreneurs and Founders:</strong>&nbsp;Jon shares insights on the importance of perseverance in entrepreneurship, the need to distinguish between small business ownership and scalable startups, and practical advice for founders focusing on AI technology.</p><p><strong>Movies: </strong></p><ol><li>The Big Short</li></ol><br/><p><strong>Companies: </strong></p><ol><li>LendZen.com</li><li>Fannie Mae</li><li>Freddie Mac</li><li>Ginnie Mae</li><li>Fidelity</li><li>First American</li><li>Uber</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ol><br/><p>Copyright (c) 2025 Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">0a53cd51-b96c-4714-bfc5-6d6b01bffb81</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 22 Apr 2025 01:00:00 -0400</pubDate><enclosure 
url="https://podcasts.captivate.fm/media/ff683eea-da00-481a-8c16-896ca8064b74/Jon-Kutsmeda-master.mp3" length="78254414" type="audio/mpeg"/><itunes:duration>32:36</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>38</itunes:episode><podcast:episode>38</podcast:episode><podcast:season>1</podcast:season></item><item><title>Transform your Business with AI! (and get your life back) with ManoByte’s CEO, Kevin Dean</title><itunes:title>Transform your Business with AI! (and get your life back) with ManoByte’s CEO, Kevin Dean</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In the AI Risk Reward Podcast, Kevin Dean, CEO of ManoByte, shares his journey from automating tasks at Sears with AI to founding ManoByte, a company focused on leveraging AI to enhance business processes. He discusses the transformative experience of the MIT xPRO program for AI and the importance of continuous education. Kevin elaborates on ManoByte's origin, inspired by his passion for water and technology, and highlights a case where TripAdvisor used AI to drastically reduce customer support tickets. The conversation also delves into the ethical implications of AI, emphasizing the need for better data and regulations to ensure fairness and accountability. 
Kevin's insights underscore the dynamic changes AI brings to business and society.</p><p>Summary: </p><ol><li><strong>Kevin Dean's First Entrepreneurial Success</strong>: Kevin shared an intriguing story about how he automated a task at Sears using AI, which led him to quit his job and then come back to pitch and sell his AI solution to Sears, marking the start of his entrepreneurial journey.</li><li><strong>MIT AI Program Experience</strong>: Kevin described the MIT xPRO program for AI as a transformative experience, emphasizing the high-quality education and invaluable networking opportunities with fellow learners from around the world.</li><li><strong>ManoByte's Origin Story</strong>: Kevin revealed that the name "ManoByte" is inspired by his love for water and scuba diving, combining "Mano," the Hawaiian word for shark, with "Byte" to represent technology, creating a unique blend of his passions.</li><li><strong>AI Revolutionizing Customer Support</strong>: Kevin shared an impressive case where TripAdvisor used AI to reduce their inbound customer support tickets by 90%, highlighting the significant cost savings and efficiency improvements achieved through AI.</li><li><strong>Ethical AI and Data Challenges</strong>: In a lively discussion, Kevin and Alec delved into the ethical implications of AI, with Kevin emphasizing that the real issue lies in biased data rather than the technology itself. 
They also discussed the need for ongoing improvements and regulations to ensure fairness and accountability in AI-driven decisions.</li></ol><br/><p>Movies:&nbsp;</p><p>1.&nbsp;Minority Report</p><p>2.&nbsp;Inception</p><p><br></p><p>Companies: </p><p>1.&nbsp;ManoByte </p><p>2.&nbsp;Sears</p><p>3.&nbsp;Eaton Corporation</p><p>4.&nbsp;TripAdvisor</p><p>5.&nbsp;Salesforce</p><p><a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></p><p>Copyright (c) 2025 Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at&nbsp;<a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In the AI Risk Reward Podcast, Kevin Dean, CEO of ManoByte, shares his journey from automating tasks at Sears with AI to founding ManoByte, a company focused on leveraging AI to enhance business processes. He discusses the transformative experience of the MIT xPRO program for AI and the importance of continuous education. Kevin elaborates on ManoByte's origin, inspired by his passion for water and technology, and highlights a case where TripAdvisor used AI to drastically reduce customer support tickets. The conversation also delves into the ethical implications of AI, emphasizing the need for better data and regulations to ensure fairness and accountability. 
Kevin's insights underscore the dynamic changes AI brings to business and society.</p><p>Summary: </p><ol><li><strong>Kevin Dean's First Entrepreneurial Success</strong>: Kevin shared an intriguing story about how he automated a task at Sears using AI, which led him to quit his job and then come back to pitch and sell his AI solution to Sears, marking the start of his entrepreneurial journey.</li><li><strong>MIT AI Program Experience</strong>: Kevin described the MIT xPRO program for AI as a transformative experience, emphasizing the high-quality education and invaluable networking opportunities with fellow learners from around the world.</li><li><strong>ManoByte's Origin Story</strong>: Kevin revealed that the name "ManoByte" is inspired by his love for water and scuba diving, combining "Mano," the Hawaiian word for shark, with "Byte" to represent technology, creating a unique blend of his passions.</li><li><strong>AI Revolutionizing Customer Support</strong>: Kevin shared an impressive case where TripAdvisor used AI to reduce their inbound customer support tickets by 90%, highlighting the significant cost savings and efficiency improvements achieved through AI.</li><li><strong>Ethical AI and Data Challenges</strong>: In a lively discussion, Kevin and Alec delved into the ethical implications of AI, with Kevin emphasizing that the real issue lies in biased data rather than the technology itself. 
They also discussed the need for ongoing improvements and regulations to ensure fairness and accountability in AI-driven decisions.</li></ol><br/><p>Movies:&nbsp;</p><p>1.&nbsp;Minority Report</p><p>2.&nbsp;Inception</p><p><br></p><p>Companies: </p><p>1.&nbsp;ManoByte </p><p>2.&nbsp;Sears</p><p>3.&nbsp;Eaton Corporation</p><p>4.&nbsp;TripAdvisor</p><p>5.&nbsp;Salesforce</p><p><a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></p><p>Copyright (c) 2025 Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">b858f734-ebe6-435d-b4b2-a8512199b50a</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 15 Apr 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/0edd57f2-ef58-44df-888e-06dc2082637a/Kevin-Dean-master.mp3" length="95846316" type="audio/mpeg"/><itunes:duration>39:56</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>37</itunes:episode><podcast:episode>37</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI in Action: Security First. Insights from Microsoft&apos;s Naveen Kumar Krishnan</title><itunes:title>AI in Action: Security First. Insights from Microsoft&apos;s Naveen Kumar Krishnan</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>On this episode of the AI Risk Reward Podcast, host Alec Crawford welcomes Naveen Kumar Krishnan, an AI expert at Microsoft, to discuss his extensive experience and insights into the world of AI. Naveen shares his career journey, which began in India as a software engineer and led him to work for prominent companies like Infosys, PepsiCo, and Morgan Stanley before joining Microsoft. They explore the transformative role of AI in various sectors, including finance and retail, highlighting AI's potential in fraud detection, customer experience enhancement, and automation. Naveen emphasizes the importance of AI security, governance, and ethical considerations, stressing the need for unbiased data and responsible AI use. He also offers advice for aspiring AI professionals, encouraging them to gain practical experience, participate in hackathons, and continuously learn to stay ahead in the rapidly evolving AI landscape.</p><p>Summary: </p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Guest Background</strong>: Naveen Kumar Krishnan, an AI expert at Microsoft, shares his extensive career journey, starting as a software engineer in India, working at Infosys and PepsiCo, and moving to roles in financial companies like Morgan Stanley before joining Microsoft in 2021. 
He is currently pursuing a master's degree at the University of Texas at Austin.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI in Finance</strong>: Naveen discusses the transformative role of AI in the finance sector, emphasizing its potential to enhance trading technologies, improve customer experiences, and combat fraud through anomaly detection and other technical capabilities.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Current AI Trends</strong>: The conversation highlights current trends in AI, including generative AI, large language models, and the growing interest in AI agents for automation and enhanced decision-making within enterprises.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI Security and Governance</strong>: Naveen addresses the importance of AI security, governance, and regulatory compliance, particularly for high-risk AI applications in sectors like finance and healthcare. He emphasizes the need for secure data handling and implementing appropriate guardrails.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Ethical Considerations</strong>: The discussion covers ethical considerations in AI development, stressing the importance of unbiased data, ethical use of AI, and ensuring that AI technologies serve all users equitably. Naveen also touches on strategies for effectively communicating complex AI concepts to diverse audiences.</p><p>Companies:</p><ul><li>Microsoft</li><li>Infosys</li><li>PepsiCo</li><li>Morgan Stanley</li><li>Apto</li><li>JPMorgan Chase (JPMC)</li><li>OpenAI</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. 
<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>On this episode of the AI Risk Reward Podcast, host Alec Crawford welcomes Naveen Kumar Krishnan, an AI expert at Microsoft, to discuss his extensive experience and insights into the world of AI. Naveen shares his career journey, which began in India as a software engineer and led him to work for prominent companies like Infosys, PepsiCo, and Morgan Stanley before joining Microsoft. They explore the transformative role of AI in various sectors, including finance and retail, highlighting AI's potential in fraud detection, customer experience enhancement, and automation. Naveen emphasizes the importance of AI security, governance, and ethical considerations, stressing the need for unbiased data and responsible AI use. He also offers advice for aspiring AI professionals, encouraging them to gain practical experience, participate in hackathons, and continuously learn to stay ahead in the rapidly evolving AI landscape.</p><p>Summary: </p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Guest Background</strong>: Naveen Kumar Krishnan, an AI expert at Microsoft, shares his extensive career journey, starting as a software engineer in India, working at Infosys and PepsiCo, and moving to roles in financial companies like Morgan Stanley before joining Microsoft in 2021. 
He is currently pursuing a master's degree at the University of Texas at Austin.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI in Finance</strong>: Naveen discusses the transformative role of AI in the finance sector, emphasizing its potential to enhance trading technologies, improve customer experiences, and combat fraud through anomaly detection and other technical capabilities.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Current AI Trends</strong>: The conversation highlights current trends in AI, including generative AI, large language models, and the growing interest in AI agents for automation and enhanced decision-making within enterprises.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI Security and Governance</strong>: Naveen addresses the importance of AI security, governance, and regulatory compliance, particularly for high-risk AI applications in sectors like finance and healthcare. He emphasizes the need for secure data handling and implementing appropriate guardrails.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Ethical Considerations</strong>: The discussion covers ethical considerations in AI development, stressing the importance of unbiased data, ethical use of AI, and ensuring that AI technologies serve all users equitably. 
Naveen also touches on strategies for effectively communicating complex AI concepts to diverse audiences.</p><p>Companies:</p><ul><li>Microsoft</li><li>Infosys</li><li>PepsiCo</li><li>Morgan Stanley</li><li>Apto</li><li>JPMorgan Chase (JPMC)</li><li>OpenAI</li><li><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">6e2fe184-f004-416f-a7a8-1c6a9f5490d4</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 08 Apr 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d8c9d481-a7eb-4ee9-b915-8a09d26eb6c4/Naveen-Kumar-Krishnan-master.mp3" length="84764128" type="audio/mpeg"/><itunes:duration>35:19</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>36</itunes:episode><podcast:episode>36</podcast:episode><podcast:season>1</podcast:season></item><item><title>The Ethical Dilemma: Can AI Ever Be Truly Fair? With NZ Journalist Lucia Dore</title><itunes:title>The Ethical Dilemma: Can AI Ever Be Truly Fair? With NZ Journalist Lucia Dore</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. 
You can hear the difference.</p><p>In this episode, Alec Crawford, our host, welcomes Lucia Dore, founder of the Lucia Dore Consultancy, to discuss the multifaceted world of AI and its impact on various sectors. Lucia shares her unique journey from a macroeconomist at the New Zealand Treasury to an award-winning journalist, climbing Mount Everest along the way. Together, they delve into the ethical dilemmas and cultural biases in AI, its role in journalism, and the ambitious AI developments in the Middle East. Lucia also offers valuable insights into how AI is reshaping communication, education, and the lives of seniors. Tune in for a thought-provoking conversation that challenges conventional views on technology and ethics.</p><p>Summary:</p><p>•	Lucia Dore's Background: Lucia Dore started her career as a macroeconomist with the New Zealand Treasury before shifting to journalism. She climbed Mount Everest and won the Robert Bell Travelling Scholarship to improve financial journalism in New Zealand.</p><p>•	AI in Journalism: Lucia discussed how AI is used to handle mundane journalistic tasks but emphasized the importance of human oversight. She highlighted studies on AI in media and journalism, noting that AI should free up journalists for more significant tasks.</p><p>•	AI Development in the Middle East: Lucia talked about the Middle East's ambition to become an AI hub, particularly focusing on Saudi Arabia and the UAE. She emphasized the region's advancements in cryptocurrencies and AI, driven by investments and regulatory reforms.</p><p>•	Cultural Perspectives on AI Ethics: Lucia pointed out the different ethical perspectives between Western and Eastern cultures regarding AI. 
She stressed the need to consider varied cultural understandings of ethics, especially in regions like Africa and the Middle East.</p><p>•	AI's Role in Communication and Education: Lucia believes AI will significantly boost communication and education by enhancing outreach and information accessibility. She mentioned her work with AI in education and how it is transforming teaching and learning practices.</p><p>Books and Movies: </p><ul><li>"The Magus" by John Fowles</li><li>"Lord of the Rings" (Movie Trilogy) </li><li>"Dr. Zhivago" (Movie)</li></ul><br/><p>Companies: </p><ul><li>Microsoft </li><li>Xero </li><li>Canva </li><li>Financial Times Group </li><li>Nikkei </li><li>NVIDIA </li><li>Pearson</li><li><a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc. </a></li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode, Alec Crawford, our host, welcomes Lucia Dore, founder of the Lucia Dore Consultancy, to discuss the multifaceted world of AI and its impact on various sectors. Lucia shares her unique journey from a macroeconomist at the New Zealand Treasury to an award-winning journalist, climbing Mount Everest along the way. 
Together, they delve into the ethical dilemmas and cultural biases in AI, its role in journalism, and the ambitious AI developments in the Middle East. Lucia also offers valuable insights into how AI is reshaping communication, education, and the lives of seniors. Tune in for a thought-provoking conversation that challenges conventional views on technology and ethics.</p><p>Summary:</p><p>•	Lucia Dore's Background: Lucia Dore started her career as a macroeconomist with the New Zealand Treasury before shifting to journalism. She climbed Mount Everest and won the Robert Bell Travelling Scholarship to improve financial journalism in New Zealand.</p><p>•	AI in Journalism: Lucia discussed how AI is used to handle mundane journalistic tasks but emphasized the importance of human oversight. She highlighted studies on AI in media and journalism, noting that AI should free up journalists for more significant tasks.</p><p>•	AI Development in the Middle East: Lucia talked about the Middle East's ambition to become an AI hub, particularly focusing on Saudi Arabia and the UAE. She emphasized the region's advancements in cryptocurrencies and AI, driven by investments and regulatory reforms.</p><p>•	Cultural Perspectives on AI Ethics: Lucia pointed out the different ethical perspectives between Western and Eastern cultures regarding AI. She stressed the need to consider varied cultural understandings of ethics, especially in regions like Africa and the Middle East.</p><p>•	AI's Role in Communication and Education: Lucia believes AI will significantly boost communication and education by enhancing outreach and information accessibility. She mentioned her work with AI in education and how it is transforming teaching and learning practices.</p><p>Books and Movies: </p><ul><li>"The Magus" by John Fowles</li><li>"Lord of the Rings" (Movie Trilogy) </li><li>"Dr. 
Zhivago" (Movie)</li></ul><br/><p>Companies: </p><ul><li>Microsoft </li><li>Xero </li><li>Canva </li><li>Financial Times Group </li><li>Nikkei </li><li>NVIDIA </li><li>Pearson</li><li><a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc. </a></li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">23774de5-ce7c-42bb-840f-f6a3501788cf</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 01 Apr 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/0cd4031d-3a4d-4d65-8681-4570d5fe6245/Lucia-Dore-master.mp3" length="87014838" type="audio/mpeg"/><itunes:duration>36:15</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>35</itunes:episode><podcast:episode>35</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI Fiction Fast Becoming Fact: AI Pioneer and co-author of “Private I” Dr. Jill Fain Lehman of Carnegie Mellon</title><itunes:title>AI Fiction Fast Becoming Fact: AI Pioneer and co-author of “Private I” Dr. Jill Fain Lehman of Carnegie Mellon</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. 
You can hear the difference.</p><p>In today's episode, Alec sits down with Dr. Jill Fain Lehman, an AI research pioneer and senior project scientist at Carnegie Mellon University. Jill is also the recent author of "Private I," a compelling exploration of AI's potential and pitfalls – in the show, she says if you give her the money, she can build the future. Together, they embark on a fascinating discussion about the historical evolution of AI, its current trajectory, and the ethical considerations that accompany its rapid development.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Historical Context of AI</strong>: Dr. Jill Fain Lehman provides a rich historical perspective on artificial intelligence, tracing its roots back to the 1950s and highlighting significant developments through the decades. She emphasizes the cyclical nature of AI's evolution, noting how past advancements have set the stage for current innovations. Understanding this historical context is crucial for appreciating AI's current state and anticipating future trends.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>The Importance of Ethical AI Development</strong>: The conversation underscores the necessity of integrating ethics into AI development. Jill and Alec discuss the potential societal impacts of AI and the importance of establishing clear guidelines and regulations to ensure AI technologies are developed and used responsibly. This aligns with the ongoing global discourse on AI ethics and the need for frameworks that prioritize human well-being.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI's Role in Enhancing Human Capabilities</strong>: Throughout the episode, Jill highlights AI's potential to augment human capabilities rather than replace them. 
This perspective encourages a collaborative approach to AI, where technology and human skills are seen as complementary.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Insights from "Private I"</strong>: Jill discusses her book "Private I," co-authored with Ashlei Watson and Paul Pangaro. The novel explores AI that will soon go from fiction to fact, providing readers with a narrative framework to understand AI's potential impacts on society. It serves as a thought-provoking tool for imagining future AI interactions and the ethical questions they may raise.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>The Future Trajectory of AI</strong>: The episode delves into what might come after large language models (LLMs), suggesting that future advancements might involve new technologies or methodologies that further push the boundaries of AI. Jill emphasizes the importance of staying informed and adaptable as AI continues to evolve, encouraging listeners to engage with ongoing research and discussions in the field.&nbsp;</p><p><strong>References</strong></p><p>Jill Fain Lehman, PhD, is a Senior Project Scientist at Carnegie Mellon University’s Human-Computer Interaction Institute. With over 40 years of experience in artificial intelligence, natural language processing, and machine learning, Jill has worked with leading organizations like Disney Research, The Rand Corporation, and Carnegie Learning. Jill is the co-author of&nbsp;<em>Private I</em>, a speculative fiction novel with Ashlei E. Watson and Paul Pangaro, PhD. A new take on classic noir themes,<em> Private I </em>is a mystery thriller rooted in today’s headlines. 
For fans of speculative fiction and sci-fi thrillers, <em>Private I</em> offers a gripping narrative that combines edge-of-your-seat mystery with profound philosophical questions about AI and humanity.<em> Private I </em>is available on <a href="https://tinyurl.com/private-i-novel" rel="noopener noreferrer" target="_blank">Amazon.</a> The authors appreciate any reviews shared on <a href="https://www.goodreads.com/book/show/122882572-private-i" rel="noopener noreferrer" target="_blank">Goodreads</a> or other social media platforms!</p><p><strong>Books:</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"Private I" by Dr. Jill Fain Lehman, Ashlei Watson, and Paul Pangaro</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;“And Then There Were None” by Agatha Christie</p><p>&nbsp;<strong>Movies:</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"When Harry Met Sally"</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"Saving Private Ryan"</p><p>&nbsp;<strong>Companies/Schools:</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Meta Platforms, Inc. (formerly Facebook)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Carnegie Mellon University</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Yale University</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Stanford University</p><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In today's episode, Alec sits down with Dr. Jill Fain Lehman, an AI research pioneer and senior project scientist at Carnegie Mellon University. Jill is also the recent author of "Private I," a compelling exploration of AI's potential and pitfalls – in the show, she says if you give her the money, she can build the future. Together, they embark on a fascinating discussion about the historical evolution of AI, its current trajectory, and the ethical considerations that accompany its rapid development.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Historical Context of AI</strong>: Dr. Jill Fain Lehman provides a rich historical perspective on artificial intelligence, tracing its roots back to the 1950s and highlighting significant developments through the decades. She emphasizes the cyclical nature of AI's evolution, noting how past advancements have set the stage for current innovations. Understanding this historical context is crucial for appreciating AI's current state and anticipating future trends.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>The Importance of Ethical AI Development</strong>: The conversation underscores the necessity of integrating ethics into AI development. Jill and Alec discuss the potential societal impacts of AI and the importance of establishing clear guidelines and regulations to ensure AI technologies are developed and used responsibly. This aligns with the ongoing global discourse on AI ethics and the need for frameworks that prioritize human well-being.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI's Role in Enhancing Human Capabilities</strong>: Throughout the episode, Jill highlights AI's potential to augment human capabilities rather than replace them. 
This perspective encourages a collaborative approach to AI, where technology and human skills are seen as complementary.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Insights from "Private I"</strong>: Jill discusses her book "Private I," co-authored with Ashlei Watson and Paul Pangaro. The novel explores AI that will soon go from fiction to fact, providing readers with a narrative framework to understand AI's potential impacts on society. It serves as a thought-provoking tool for imagining future AI interactions and the ethical questions they may raise.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>The Future Trajectory of AI</strong>: The episode delves into what might come after large language models (LLMs), suggesting that future advancements might involve new technologies or methodologies that further push the boundaries of AI. Jill emphasizes the importance of staying informed and adaptable as AI continues to evolve, encouraging listeners to engage with ongoing research and discussions in the field.&nbsp;</p><p><strong>References</strong></p><p>Jill Fain Lehman, PhD, is a Senior Project Scientist at Carnegie Mellon University’s Human-Computer Interaction Institute. With over 40 years of experience in artificial intelligence, natural language processing, and machine learning, Jill has worked with leading organizations like Disney Research, The Rand Corporation, and Carnegie Learning. Jill is the co-author of&nbsp;<em>Private I</em>, a speculative fiction novel with Ashlei E. Watson and Paul Pangaro, PhD. A new take on classic noir themes,<em> Private I </em>is a mystery thriller rooted in today’s headlines. 
For fans of speculative fiction and sci-fi thrillers, <em>Private I</em> offers a gripping narrative that combines edge-of-your-seat mystery with profound philosophical questions about AI and humanity. <em>Private I</em> is available on <a href="https://tinyurl.com/private-i-novel" rel="noopener noreferrer" target="_blank">Amazon</a>. The authors appreciate any reviews shared on <a href="https://www.goodreads.com/book/show/122882572-private-i" rel="noopener noreferrer" target="_blank">Goodreads</a> or other social media platforms!</p><p><strong>Books:</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"Private I" by Dr. Jill Fain Lehman, Ashlei Watson, and Paul Pangaro</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"And Then There Were None" by Agatha Christie</p><p>&nbsp;<strong>Movies:</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"When Harry Met Sally"</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"Saving Private Ryan"</p><p>&nbsp;<strong>Companies/Schools:</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Meta Platforms, Inc. 
(formerly Facebook)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Carnegie Mellon University</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Yale University</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Stanford University</p><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">1b32dbdd-aca9-4016-881e-e662b8a49f53</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 25 Mar 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/ae9975ab-af45-42cb-8ac4-aee087bfc87d/Jill-Fain-Lehman-master.mp3" length="123246675" type="audio/mpeg"/><itunes:duration>51:21</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>34</itunes:episode><podcast:episode>34</podcast:episode><podcast:season>1</podcast:season></item><item><title>Future Proof Your Career: AI Innovation and Education with Shiva Rajgopal from Columbia</title><itunes:title>Future Proof Your Career: AI Innovation and Education with Shiva Rajgopal from Columbia</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. 
You can hear the difference.</p><p>This week, Alec Crawford engages with Shiva Rajgopal, Professor at Columbia Business School, to explore AI's transformative impact on the business world. Shiva discusses how AI is reshaping business education, challenging traditional teaching methods, and redefining creativity and judgment in decision-making. They examine AI's role in helping the CFO, automating the back office, and supporting research, and how business schools are adapting to prepare future leaders for a tech-driven landscape. This episode offers a compelling look at AI's opportunities and challenges in business and management.</p><p><strong>Summary</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI in Education</strong>: Shiva highlights that AI is a game-changer in business education, enabling access to vast information and challenging traditional teaching methods. It pushes educators to focus more on developing students' critical thinking and judgment skills.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Automation in Business</strong>: The discussion reveals that AI is revolutionizing back-office operations by automating routine tasks, which could lead to significant shifts in employment dynamics. Businesses need to adapt to these changes by upskilling their workforce.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI's Limitations and Opportunities</strong>: While AI excels at pattern recognition and handling large datasets, Shiva points out that it still falls short in areas requiring deep logical reasoning and nuanced understanding, which opens opportunities for human collaboration with AI.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI in Decision-Making</strong>: Shiva emphasizes the importance of using AI as a tool to enhance decision-making rather than replace it. 
The focus should be on leveraging AI to provide valuable insights while maintaining human oversight for critical judgments.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Future of Business Schools</strong>: AI's integration into education is prompting business schools to rethink their value proposition, focusing on social learning and networking opportunities that AI cannot replicate, ensuring they remain relevant in a rapidly evolving technological landscape.</p><p><strong>References</strong></p><p><strong>Books/Movies:</strong></p><ul><li><em>The Fountainhead</em> by Ayn Rand</li><li><em>The Intelligent Investor</em> by Benjamin Graham</li><li><em>Delhi Belly</em> (Bollywood movie)</li></ul><br/><p><strong>Companies/Schools:</strong></p><ul><li>Columbia Business School: (<a href="https://www8.gsb.columbia.edu/" rel="noopener noreferrer" target="_blank">https://www8.gsb.columbia.edu/</a>)</li><li>Procter &amp; Gamble (<a href="https://us.pg.com/" rel="noopener noreferrer" target="_blank">https://us.pg.com/</a>)</li><li>Citibank (<a href="https://www.citi.com/" rel="noopener noreferrer" target="_blank">https://www.citi.com/</a>)</li><li>Tesla (<a href="https://www.tesla.com/" rel="noopener noreferrer" target="_blank">https://www.tesla.com/</a>)</li><li>Apple (<a href="https://www.apple.com/" rel="noopener noreferrer" target="_blank">https://www.apple.com/</a>)</li><li>AI Risk (<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com/</a>) </li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. 
Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>This week, Alec Crawford engages with Shiva Rajgopal, Professor at Columbia Business School, to explore AI's transformative impact on the business world. Shiva discusses how AI is reshaping business education, challenging traditional teaching methods, and redefining creativity and judgment in decision-making. They examine AI's role in helping the CFO, automating the back office, and supporting research, and how business schools are adapting to prepare future leaders for a tech-driven landscape. This episode offers a compelling look at AI's opportunities and challenges in business and management.</p><p><strong>Summary</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI in Education</strong>: Shiva highlights that AI is a game-changer in business education, enabling access to vast information and challenging traditional teaching methods. It pushes educators to focus more on developing students' critical thinking and judgment skills.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Automation in Business</strong>: The discussion reveals that AI is revolutionizing back-office operations by automating routine tasks, which could lead to significant shifts in employment dynamics. 
Businesses need to adapt to these changes by upskilling their workforce.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI's Limitations and Opportunities</strong>: While AI excels at pattern recognition and handling large datasets, Shiva points out that it still falls short in areas requiring deep logical reasoning and nuanced understanding, which opens opportunities for human collaboration with AI.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI in Decision-Making</strong>: Shiva emphasizes the importance of using AI as a tool to enhance decision-making rather than replace it. The focus should be on leveraging AI to provide valuable insights while maintaining human oversight for critical judgments.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Future of Business Schools</strong>: AI's integration into education is prompting business schools to rethink their value proposition, focusing on social learning and networking opportunities that AI cannot replicate, ensuring they remain relevant in a rapidly evolving technological landscape.</p><p><strong>References</strong></p><p><strong>Books/Movies:</strong></p><ul><li><em>The Fountainhead</em> by Ayn Rand</li><li><em>The Intelligent Investor</em> by Benjamin Graham</li><li><em>Delhi Belly</em> (Bollywood movie)</li></ul><br/><p><strong>Companies/Schools:</strong></p><ul><li>Columbia Business School: (<a href="https://www8.gsb.columbia.edu/" rel="noopener noreferrer" target="_blank">https://www8.gsb.columbia.edu/</a>)</li><li>Procter &amp; Gamble (<a href="https://us.pg.com/" rel="noopener noreferrer" target="_blank">https://us.pg.com/</a>)</li><li>Citibank (<a href="https://www.citi.com/" rel="noopener noreferrer" target="_blank">https://www.citi.com/</a>)</li><li>Tesla (<a href="https://www.tesla.com/" rel="noopener noreferrer" target="_blank">https://www.tesla.com/</a>)</li><li>Apple (<a href="https://www.apple.com/" rel="noopener noreferrer" target="_blank">https://www.apple.com/</a>)</li><li>AI Risk (<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com/</a>) </li></ul><br/><p>Copyright © 2025 by Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">fa2186c1-ecdb-4147-82e5-8bb13fc3ef14</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 18 Mar 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/2a8f7201-48fa-44ad-bbc5-f3c73623c053/32-Shiva-Rajgopal-master.mp3" length="111746528" type="audio/mpeg"/><itunes:duration>46:34</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>32</itunes:episode><podcast:episode>32</podcast:episode><podcast:season>1</podcast:season></item><item><title>From Tech to Tariffs: Super-Analyst Adam Parker on TODAY&apos;s Markets</title><itunes:title>From Tech to Tariffs: Super-Analyst Adam Parker on TODAY&apos;s Markets</itunes:title><description><![CDATA[<p>Alec Crawford, CEO of <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">www.aicrisk.com</a> and noted investment risk management expert interviews Adam Parker, super-analyst from Trivariate and Trivector Research. He has a strong background in market strategy and was a top-ranked Institutional Investor award winner in Semiconductors earlier in his career.</p><ul><li>Embrace healthcare (Adam's top sector pick) and tech innovations for growth opportunities in evolving sectors. Adam likes NVDA even better today after its retreat in price. 
</li><li><span>Adam notes that at the MS conference two weeks ago, none of the tech titans cut back on their commitments to buy NVDA chips.</span></li><li><span>Otherwise, stay cautious amid policy changes, as tariffs and trade policies can drive market volatility. Wait for positive signals from management later this year before jumping in.</span></li></ul><br/><p>Stocks mentioned: NVDA, AVGO, LLY, INTC</p><p>Check it out below!</p><p><a href="https://trivariateresearch.com/" rel="noopener noreferrer" target="_blank">https://trivariateresearch.com/</a> </p>]]></description><content:encoded><![CDATA[<p>Alec Crawford, CEO of <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">www.aicrisk.com</a> and noted investment risk management expert interviews Adam Parker, super-analyst from Trivariate and Trivector Research. He has a strong background in market strategy and was a top-ranked Institutional Investor award winner in Semiconductors earlier in his career.</p><ul><li>Embrace healthcare (Adam's top sector pick) and tech innovations for growth opportunities in evolving sectors. Adam likes NVDA even better today after its retreat in price. </li><li><span>Adam notes that at the MS conference two weeks ago, none of the tech titans cut back on their commitments to buy NVDA chips.</span></li><li><span>Otherwise, stay cautious amid policy changes, as tariffs and trade policies can drive market volatility. 
Wait for positive signals from management later this year before jumping in.</span></li></ul><br/><p>Stocks mentioned: NVDA, AVGO, LLY, INTC</p><p>Check it out below!</p><p><a href="https://trivariateresearch.com/" rel="noopener noreferrer" target="_blank">https://trivariateresearch.com/</a> </p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">94566e95-f15a-4ee7-8b9f-ee635e02d996</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Fri, 14 Mar 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/93630a5d-335c-439e-b411-d3193bab5213/riverside-adam-parker-on-airr-mar-14-2025-001-air-studio.mp3" length="15209970" type="audio/mpeg"/><itunes:duration>31:41</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>33</itunes:episode><podcast:episode>33</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI Is Rocking the Music World: David Griffiths on the Existential Threat to Human Creativity</title><itunes:title>AI Is Rocking the Music World: David Griffiths on the Existential Threat to Human Creativity</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. 
You can hear the difference.</p><p>In this episode of the AI Risk Reward Podcast, we explore the dynamic intersection of artificial intelligence and creativity with our special guest, David Griffiths. David is the CEO of Content Creating Academy and a digital creator with an impressive background in music education and performance from institutions like Peabody Conservatory and Columbia Union College. With his extensive expertise in vocal training, songwriting, and digital content creation, David shares his unique insights into the transformative role of AI in music and other creative fields. Tune in as we discuss how AI is reshaping creative processes, the ethical considerations it brings, and the innovative strategies David employs to engage and expand his audience in the digital age. This episode is a must-listen for tech enthusiasts, creative artists, and anyone interested in the future of AI in creative industries.</p><p><strong>Summary</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI in Creative Processes</strong>: David Griffiths discusses using AI tools like Claude, ChatGPT, Perplexity, and Gemini to aid in refining his ebook. He treats these AI tools as team members, leveraging their diverse feedback to improve his work. This approach highlights AI's utility in enhancing content creation processes by providing varied insights and suggestions. Nevertheless, given how fast and easy AI tools are to use, they could strongly discourage human creativity.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI in Music Leadership</strong>: As a Minister of Music, Griffiths utilized AI to plan and propose future activities. He used AI to create a podcast from a document, which he found effective in conveying ideas to his team. 
This demonstrates AI's potential in streamlining communication and project management within music-related fields.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI and Songwriting</strong>: Griffiths shared his personal experience that AI-generated content lacks the emotional depth of human-created songs. He believes that while AI can assist in content creation, the personal touch in songwriting remains irreplaceable, particularly for personal expressions and connections.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Ethical Considerations</strong>: The discussion touched on the ethical implications of AI in music, with Griffiths expressing concern about AI's impact on creativity and originality. He notes that while AI can produce content quickly, it may also diminish the uniqueness of human creativity.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI's Role in Future Music Creation</strong>: Griffiths acknowledges that those who effectively harness AI tools will likely excel in the evolving landscape. He emphasizes the importance of adapting to AI's capabilities to remain competitive in music creation and performance.</p><p><strong><u>References</u></strong></p><p>&nbsp;<strong>AI Tools</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Claude, ChatGPT, Perplexity, Gemini</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Notebook LM for creating podcasts</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Fathom (note-taking during Zoom meetings)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Opus Clips and video.ai (video editing tools)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Riverside (for creating reels)</p><p><strong>Books and Movies</strong></p><ul><li>"10X is Easier Than 2X" by Dr. 
Benjamin Hardy and Dan Sullivan</li><li>"The One Thing" by Gary Keller</li><li>"Sister Act II"</li><li>"The Terminator"</li></ul><br/><p><strong>Companies/Platforms/Schools:</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://contentcreatingacademy.com/" rel="noopener noreferrer" target="_blank">Content Creating Academy</a> (David Griffiths' company)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Carpe Diem (community choir directed by Griffiths)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">AI Risk, Inc.</a></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Dude Chat (Zoom gathering initiated during the pandemic)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Generationology (Jeff Vargas' public speaking initiative)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Peabody Conservatory</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Columbia Union College (now Washington Adventist University)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Oakwood University</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Andrews University</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode of the AI Risk Reward Podcast, we explore the dynamic intersection of artificial intelligence and creativity with our special guest, David Griffiths. 
David is the CEO of Content Creating Academy and a digital creator with an impressive background in music education and performance from institutions like Peabody Conservatory and Columbia Union College. With his extensive expertise in vocal training, songwriting, and digital content creation, David shares his unique insights into the transformative role of AI in music and other creative fields. Tune in as we discuss how AI is reshaping creative processes, the ethical considerations it brings, and the innovative strategies David employs to engage and expand his audience in the digital age. This episode is a must-listen for tech enthusiasts, creative artists, and anyone interested in the future of AI in creative industries.</p><p><strong>Summary</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI in Creative Processes</strong>: David Griffiths discusses using AI tools like Claude, ChatGPT, Perplexity, and Gemini to aid in refining his ebook. He treats these AI tools as team members, leveraging their diverse feedback to improve his work. This approach highlights AI's utility in enhancing content creation processes by providing varied insights and suggestions. Nevertheless, given how fast and easy AI tools are to use, they could strongly discourage human creativity.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI in Music Leadership</strong>: As a Minister of Music, Griffiths utilized AI to plan and propose future activities. He used AI to create a podcast from a document, which he found effective in conveying ideas to his team. This demonstrates AI's potential in streamlining communication and project management within music-related fields.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI and Songwriting</strong>: Griffiths shared his personal experience that AI-generated content lacks the emotional depth of human-created songs. 
He believes that while AI can assist in content creation, the personal touch in songwriting remains irreplaceable, particularly for personal expressions and connections.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Ethical Considerations</strong>: The discussion touched on the ethical implications of AI in music, with Griffiths expressing concern about AI's impact on creativity and originality. He notes that while AI can produce content quickly, it may also diminish the uniqueness of human creativity.</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI's Role in Future Music Creation</strong>: Griffiths acknowledges that those who effectively harness AI tools will likely excel in the evolving landscape. He emphasizes the importance of adapting to AI's capabilities to remain competitive in music creation and performance.</p><p><strong><u>References</u></strong></p><p>&nbsp;<strong>AI Tools</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Claude, ChatGPT, Perplexity, Gemini</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Notebook LM for creating podcasts</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Fathom (note-taking during Zoom meetings)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Opus Clips and video.ai (video editing tools)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Riverside (for creating reels)</p><p><strong>Books and Movies</strong></p><ul><li>"10X is Easier Than 2X" by Dr. 
Benjamin Hardy and Dan Sullivan</li><li>"The One Thing" by Gary Keller</li><li>"Sister Act II"</li><li>"The Terminator"</li></ul><br/><p><strong>Companies/Platforms/Schools:</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://contentcreatingacademy.com/" rel="noopener noreferrer" target="_blank">Content Creating Academy</a> (David Griffiths' company)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Carpe Diem (community choir directed by Griffiths)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">AI Risk, Inc.</a></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Dude Chat (Zoom gathering initiated during the pandemic)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Generationology (Jeff Vargas' public speaking initiative)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Peabody Conservatory</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Columbia Union College (now Washington Adventist University)</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Oakwood University</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Andrews University</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">7689a32a-cc56-41d2-8d07-087928588482</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 11 Mar 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/1cf7dc96-0474-4984-b4d0-77d18683061c/31-David-Griffiths-master.mp3" length="92124389" type="audio/mpeg"/><itunes:duration>38:23</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>31</itunes:episode><podcast:episode>31</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI in the Sky: Jesse Hamel on Autonomous Drones and Next Gen Warfare</title><itunes:title>AI in the Sky: Jesse Hamel on Autonomous Drones and Next Gen Warfare</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode, Alec interviews Jesse Hamel, the dynamic founder of Victus Technologies, as they delve into a compelling conversation about Jesse's remarkable journey from a military career to entrepreneurship. Jesse shares insights from his 20-year tenure in the Air Force, including his experiences with AC-130 gunships, commanding drone squadrons and the pivotal moments that shaped his path. Learn about "the stack", the future of drone warfare and other thought-provoking topics such as the future role of AI in national security and the potential challenges it poses. Please like, comment, and review the show!</p><p><strong><u>Key Takeaways</u></strong></p><p><span>1.&nbsp;</span><strong>Military Service and Its Impact</strong><span>: Jesse Hamel's career in the military started after 9/11, leading to a 20-year active duty career in the Air Force, including 800 hours of combat duty in an AC-130 gunship. 
His experience in the military shaped his leadership skills and his ability to handle complex situations.</span></p><p>2.&nbsp;<strong>Technological Integration in Combat</strong>: Jesse highlighted the use of advanced unmanned systems and the integration of multiple aircraft types during combat missions. This combination of manned and unmanned systems became a significant interest for him, showing the potential of technology in modern warfare.</p><p>3.&nbsp;<strong>Leadership and Mentorship</strong>: His experience working with General Mark Schwartz during his time in Germany was pivotal. He learned valuable lessons on leadership and the importance of building and leading new teams effectively, which influenced his approach to founding and running Victus Technologies.</p><p>4.&nbsp;<strong>Challenges and Adaptations in Transition</strong>: Transitioning from military to civilian life involved a significant learning curve. Jesse emphasized the importance of networking, continuous learning, and adapting military skills to the private sector. He also highlighted the unique advantages veterans have in entrepreneurship due to their resilience and experience in handling difficult situations.</p><p>5.&nbsp;<strong>Victus Technologies' Innovations</strong>: Jesse founded Victus Technologies to address the limitations of existing autonomous systems, particularly the reliance on GPS. 
The company focuses on developing advanced autonomous software solutions that enhance the capabilities of drones and other robotic systems, aiming to revolutionize various sectors, including national security and commercial industries.</p><p><strong>References</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><em>Zero to One</em> by Peter Thiel</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><em>Clear and Present Danger</em> by Tom Clancy</strong></p><p><strong>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<em>Apocalypse Now</em></strong></p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode, Alec interviews Jesse Hamel, the dynamic founder of Victus Technologies, as they delve into a compelling conversation about Jesse's remarkable journey from a military career to entrepreneurship. Jesse shares insights from his 20-year tenure in the Air Force, including his experiences with AC-130 gunships, commanding drone squadrons and the pivotal moments that shaped his path. Learn about "the stack", the future of drone warfare and other thought-provoking topics such as the future role of AI in national security and the potential challenges it poses. 
Please like, comment, and review the show!</p><p><strong><u>Key Takeaways</u></strong></p><p><span>1.&nbsp;</span><strong>Military Service and Its Impact</strong><span>: Jesse Hamel's career in the military started after 9/11, leading to a 20-year active duty career in the Air Force, including 800 hours of combat duty in an AC-130 gunship. His experience in the military shaped his leadership skills and his ability to handle complex situations.</span></p><p>2.&nbsp;<strong>Technological Integration in Combat</strong>: Jesse highlighted the use of advanced unmanned systems and the integration of multiple aircraft types during combat missions. This combination of manned and unmanned systems became a significant interest for him, showing the potential of technology in modern warfare.</p><p>3.&nbsp;<strong>Leadership and Mentorship</strong>: His experience working with General Mark Schwartz during his time in Germany was pivotal. He learned valuable lessons on leadership and the importance of building and leading new teams effectively, which influenced his approach to founding and running Victus Technologies.</p><p>4.&nbsp;<strong>Challenges and Adaptations in Transition</strong>: Transitioning from military to civilian life involved a significant learning curve. Jesse emphasized the importance of networking, continuous learning, and adapting military skills to the private sector. 
He also highlighted the unique advantages veterans have in entrepreneurship due to their resilience and experience in handling difficult situations.</p><p>5.&nbsp;<strong>Victus Technologies' Innovations</strong>: Jesse founded Victus Technologies to address the limitations of existing autonomous systems, particularly the reliance on GPS. The company focuses on developing advanced autonomous software solutions that enhance the capabilities of drones and other robotic systems, aiming to revolutionize various sectors, including national security and commercial industries.</p><p><strong>References</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><em>Zero to One</em> by Peter Thiel</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong><em>Clear and Present Danger</em> by Tom Clancy</strong></p><p><strong>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<em>Apocalypse Now</em></strong></p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">e76874d5-9cc8-4a75-b2cf-a739573f21cf</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 04 Mar 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/6a0dbb9d-35c2-4111-97af-e5d528dfb501/Jesse-Hamel-master.mp3" length="114151883" type="audio/mpeg"/><itunes:duration>47:34</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>30</itunes:episode><podcast:episode>30</podcast:episode><podcast:season>1</podcast:season></item><item><title>Evolution or Extinction? The Future of RIAs in the AI Era with Joe Moss!</title><itunes:title>Evolution or Extinction? The Future of RIAs in the AI Era with Joe Moss!</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. 
<a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode, Alec Crawford welcomes Joe Moss, the COO of January Capital Advisors and conneqtor at <a href="https://conneqtor.org" rel="noopener noreferrer" target="_blank">conneqtor.org</a>, to the AI Risk-Reward Podcast. Known for his innovative approach and insightful perspectives in the financial advisory space, Joe has garnered a reputation for his forward-thinking strategies and adept use of social media to influence and engage with the financial community. Throughout the episode, Joe shares his experiences, from early lessons learned in the industry to his views on the transformative potential of AI and technology in financial advising. Joe is publishing a book on advisor tech! See information in the show notes below.</p><p><strong>Takeaways</strong></p><ol><li><strong>Learning from Mistakes in Financial Advising</strong>: What <u>not</u> to do as an advisor! Joe Moss discusses how his initial experiences taught him about inefficiencies and poor practices, such as high fees, underutilized technology, and lack of timely client communication, which are more common than one might expect. (Check out fynancial.com for client communication specifically for RIAs.)</li><li><strong>The Impact of Technology and AI on Financial Advising</strong>: Alec and Joe discuss the transformative potential of AI and technology in the financial advising sector. They predict that advisors who do not integrate AI into their practices may struggle to remain competitive. 
The conversation also touches on the expected consolidation of AI tools and their impact on the industry.</li><li><strong>Privacy and Security Concerns with AI Tools</strong>: The podcast highlights the importance of privacy and security when using AI tools in financial advising. Joe Moss points out the varying levels of data security among different AI tools and stresses the need for due diligence to ensure client data is protected and compliance rules are followed.</li><li><strong>Strategic Use of Niches and Content Creation in Financial Advising</strong>: Joe Moss advises new financial advisors to specialize in a niche to better serve clients and enhance their marketability. Additionally, he emphasizes the role of content creation in building credibility and expertise within a chosen niche, which can be crucial for client acquisition and retention.</li><li><strong>Human Connection Amidst Technological Advancements</strong>: Despite advancements in AI, Joe stresses the value of human-to-human interaction and suggests that the essence of personal connections could become more significant as AI becomes prevalent. 
For example, he mentions his rule of avoiding bots in meetings to maintain genuine interactions, reflecting a broader sentiment about balancing technology use with personal touch in professional settings.</li></ol><br/><p><strong>References</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<em>The Ultimate AdvisorTech Stack for 2025</em>, by Joe Moss <a href="https://www.amazon.com/Ultimate-AdvisorTech-Stack-2025-Independent-ebook/dp/B0DV5N6ZRC" rel="noopener noreferrer" target="_blank">https://www.amazon.com/Ultimate-AdvisorTech-Stack-2025-Independent-ebook/dp/B0DV5N6ZRC</a></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<em>Tribe of Mentors</em> by Tim Ferriss</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://www.linkedin.com/in/joemoss-xyz/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/joemoss-xyz/</a></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://www.skool.com/conneqtor/about" rel="noopener noreferrer" target="_blank">https://www.skool.com/conneqtor/about</a></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com</a></p><p>&nbsp;</p><p>&nbsp;</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. 
You can hear the difference.</p><p>In this episode, Alec Crawford welcomes Joe Moss, the COO of January Capital Advisors and conneqtor at <a href="https://conneqtor.org" rel="noopener noreferrer" target="_blank">conneqtor.org</a>, to the AI Risk-Reward Podcast. Known for his innovative approach and insightful perspectives in the financial advisory space, Joe has garnered a reputation for his forward-thinking strategies and adept use of social media to influence and engage with the financial community. Throughout the episode, Joe shares his experiences, from early lessons learned in the industry to his views on the transformative potential of AI and technology in financial advising. Joe is publishing a book on advisor tech! See information in the show notes below.</p><p><strong>Takeaways</strong></p><ol><li><strong>Learning from Mistakes in Financial Advising</strong>: What <u>not</u> to do as an advisor! Joe Moss discusses how his initial experiences taught him about inefficiencies and poor practices, such as high fees, underutilized technology, and lack of timely client communication, which are more common than one might expect. (Check out fynancial.com for client communication specifically for RIAs.)</li><li><strong>The Impact of Technology and AI on Financial Advising</strong>: Alec and Joe discuss the transformative potential of AI and technology in the financial advising sector. They predict that advisors who do not integrate AI into their practices may struggle to remain competitive. The conversation also touches on the expected consolidation of AI tools and their impact on the industry.</li><li><strong>Privacy and Security Concerns with AI Tools</strong>: The podcast highlights the importance of privacy and security when using AI tools in financial advising. 
Joe Moss points out the varying levels of data security among different AI tools and stresses the need for due diligence to ensure client data is protected and compliance rules are followed.</li><li><strong>Strategic Use of Niches and Content Creation in Financial Advising</strong>: Joe Moss advises new financial advisors to specialize in a niche to better serve clients and enhance their marketability. Additionally, he emphasizes the role of content creation in building credibility and expertise within a chosen niche, which can be crucial for client acquisition and retention.</li><li><strong>Human Connection Amidst Technological Advancements</strong>: Despite advancements in AI, Joe stresses the value of human-to-human interaction and suggests that the essence of personal connections could become more significant as AI becomes prevalent. For example, he mentions his rule of avoiding bots in meetings to maintain genuine interactions, reflecting a broader sentiment about balancing technology use with personal touch in professional settings.</li></ol><br/><p><strong>References</strong></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<em>The Ultimate AdvisorTech Stack for 2025</em>, by Joe Moss <a href="https://www.amazon.com/Ultimate-AdvisorTech-Stack-2025-Independent-ebook/dp/B0DV5N6ZRC" rel="noopener noreferrer" target="_blank">https://www.amazon.com/Ultimate-AdvisorTech-Stack-2025-Independent-ebook/dp/B0DV5N6ZRC</a></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<em>Tribe of Mentors</em> by Tim Ferriss</p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://www.linkedin.com/in/joemoss-xyz/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/joemoss-xyz/</a></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://www.skool.com/conneqtor/about" rel="noopener noreferrer" target="_blank">https://www.skool.com/conneqtor/about</a></p><p>·&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://www.aicrisk.com" rel="noopener noreferrer" 
target="_blank">https://www.aicrisk.com</a></p><p>&nbsp;</p><p>&nbsp;</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">24b8c495-5f2b-43f4-8bb7-ef74974a6780</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 25 Feb 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/56ac3af1-f1ab-441b-87a7-d80feb0cf429/29-Joe-Moss-master.mp3" length="90652128" type="audio/mpeg"/><itunes:duration>37:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>29</itunes:episode><podcast:episode>29</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI Breakthroughs, AlphaFold and Ethical Boundaries with Ron Green, Co-Founder and CTO of Kung Fu AI</title><itunes:title>AI Breakthroughs, AlphaFold and Ethical Boundaries with Ron Green, Co-Founder and CTO of Kung Fu AI</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this engaging episode of the AI Risk Reward Podcast, host Alec Crawford and guest Ron Green, co-founder and CTO of Kung Fu AI, delve into the evolving landscape of artificial intelligence and its profound impact across various sectors. 
They explore groundbreaking advancements like DeepMind's AlphaFold, discuss the ethical challenges and biases in AI, and forecast the future integration of AI technologies in industries such as engineering and healthcare. With insightful anecdotes and expert perspectives, this episode offers listeners a comprehensive understanding of AI's current capabilities and its transformative potential.</p><p><strong>Summary</strong></p><p>1.&nbsp;&nbsp;&nbsp;&nbsp;<strong>Evolutionary and Adaptive Systems</strong>: Ron Green discussed his educational background in computer science and a master's degree focused on evolutionary and adaptive systems, emphasizing genetic algorithms and genetic programming. He highlighted the importance of understanding foundational theoretical concepts in computer science to adapt to technological changes over time.</p><p>2.&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI in Biology - AlphaFold</strong>: Ron Green praised Demis Hassabis and his work with DeepMind on AlphaFold, a breakthrough AI technology that predicts protein structures. This advancement is crucial for understanding biological functions and has been considered a grand challenge in biology for decades. AlphaFold's development marked a significant leap in the application of AI to solve complex biological problems.</p><p>3.&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI's Ubiquity in Engineering</strong>: Ron Green predicted that AI would become as integral to industries as the internet, affecting fields like mechanical and civil engineering. He mentioned that AI's impact would be broad, transitioning from perceptive and generative phases to more sophisticated applications, including robotics and physical AI.</p><p>4.&nbsp;&nbsp;&nbsp;&nbsp;<strong>Ethical Considerations and Bias in AI</strong>: Ron Green shared experiences of facing ethical dilemmas in AI projects, such as refusing to work on weapon-targeting systems. 
He stressed the importance of addressing biases in AI systems, as they can replicate data biases, potentially leading to unfair outcomes.</p><p>5.&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI's Future and Industry Implications</strong>: The discussion touched on the potential risks and benefits of AI, particularly concerning AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence). Ron Green expressed concerns about alignment with human values and highlighted the current hype cycle in AI, predicting that while there might be fluctuations in interest, the utility of AI ensures its continued relevance.</p><p><strong>References</strong></p><p><strong>Books</strong></p><ul><li><strong>The Selfish Gene by Richard Dawkins</strong>: A popular science book discussing genetics and natural selection from the gene's perspective.</li><li><strong>Demis Hassabis</strong>: Co-founder of DeepMind, known for his work on AI and AlphaFold.</li><li><strong>DeepMind and AlphaFold</strong>: A leading AI research lab known for breakthroughs in AI, particularly in protein folding with AlphaFold.</li><li><strong>The Singularity is Near by Ray Kurzweil</strong>: A book exploring the future of technology and AI</li></ul><br/><p><strong>Schools/Companies</strong></p><ul><li><strong>University of Sussex</strong>: Known for its research and focus on evolutionary and adaptive systems.</li><li><strong>University of Texas at Austin</strong>: Mentioned in the context of AI and machine learning research.</li><li><strong>Kung Fu AI</strong>: A company co-founded by Ron Green, specializing in AI solutions and strategies <a href="https://www.kungfu.ai" rel="noopener noreferrer" target="_blank">https://www.kungfu.ai</a></li><li><strong>AI Risk, Inc. </strong>Accelerates the adoption of Gen AI safely. They provide a secure enterprise AI platform for banks and other regulated organizations. 
<a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com</a></li></ul><br/><p><strong>Movies</strong></p><ul><li><strong>The Godfather</strong>: A classic film often considered one of the best movies ever made.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this engaging episode of the AI Risk Reward Podcast, host Alec Crawford and guest Ron Green, co-founder and CTO of Kung Fu AI, delve into the evolving landscape of artificial intelligence and its profound impact across various sectors. They explore groundbreaking advancements like DeepMind's AlphaFold, discuss the ethical challenges and biases in AI, and forecast the future integration of AI technologies in industries such as engineering and healthcare. With insightful anecdotes and expert perspectives, this episode offers listeners a comprehensive understanding of AI's current capabilities and its transformative potential.</p><p><strong>Summary</strong></p><p>1.&nbsp;&nbsp;&nbsp;&nbsp;<strong>Evolutionary and Adaptive Systems</strong>: Ron Green discussed his educational background in computer science and a master's degree focused on evolutionary and adaptive systems, emphasizing genetic algorithms and genetic programming. 
He highlighted the importance of understanding foundational theoretical concepts in computer science to adapt to technological changes over time.</p><p>2.&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI in Biology - AlphaFold</strong>: Ron Green praised Demis Hassabis and his work with DeepMind on AlphaFold, a breakthrough AI technology that predicts protein structures. This advancement is crucial for understanding biological functions and has been considered a grand challenge in biology for decades. AlphaFold's development marked a significant leap in the application of AI to solve complex biological problems.</p><p>3.&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI's Ubiquity in Engineering</strong>: Ron Green predicted that AI would become as integral to industries as the internet, affecting fields like mechanical and civil engineering. He mentioned that AI's impact would be broad, transitioning from perceptive and generative phases to more sophisticated applications, including robotics and physical AI.</p><p>4.&nbsp;&nbsp;&nbsp;&nbsp;<strong>Ethical Considerations and Bias in AI</strong>: Ron Green shared experiences of facing ethical dilemmas in AI projects, such as refusing to work on weapon-targeting systems. He stressed the importance of addressing biases in AI systems, as they can replicate data biases, potentially leading to unfair outcomes.</p><p>5.&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI's Future and Industry Implications</strong>: The discussion touched on the potential risks and benefits of AI, particularly concerning AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence). 
Ron Green expressed concerns about alignment with human values and highlighted the current hype cycle in AI, predicting that while there might be fluctuations in interest, the utility of AI ensures its continued relevance.</p><p><strong>References</strong></p><p><strong>Books</strong></p><ul><li><strong>The Selfish Gene by Richard Dawkins</strong>: A popular science book discussing genetics and natural selection from the gene's perspective.</li><li><strong>Demis Hassabis</strong>: Co-founder of DeepMind, known for his work on AI and AlphaFold.</li><li><strong>DeepMind and AlphaFold</strong>: A leading AI research lab known for breakthroughs in AI, particularly in protein folding with AlphaFold.</li><li><strong>The Singularity is Near by Ray Kurzweil</strong>: A book exploring the future of technology and AI</li></ul><br/><p><strong>Schools/Companies</strong></p><ul><li><strong>University of Sussex</strong>: Known for its research and focus on evolutionary and adaptive systems.</li><li><strong>University of Texas at Austin</strong>: Mentioned in the context of AI and machine learning research.</li><li><strong>Kung Fu AI</strong>: A company co-founded by Ron Green, specializing in AI solutions and strategies <a href="https://www.kungfu.ai" rel="noopener noreferrer" target="_blank">https://www.kungfu.ai</a></li><li><strong>AI Risk, Inc. </strong>Accelerates the adoption of Gen AI safely. They provide a secure enterprise AI platform for banks and other regulated organizations. 
<a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com</a></li></ul><br/><p><strong>Movies</strong></p><ul><li><strong>The Godfather</strong>: A classic film often considered one of the best movies ever made.</li></ul><br/>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">78d4d521-9c0b-413a-b7cf-58460bc7fdc7</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 18 Feb 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d960f517-1560-4870-9564-597a03f28f35/28-Ron-Green-master.mp3" length="94352112" type="audio/mpeg"/><itunes:duration>39:19</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>28</itunes:episode><podcast:episode>28</podcast:episode><podcast:season>1</podcast:season></item><item><title>Enhancing Human Communication with AI: Balancing Connection and Privacy with Cairn Insights</title><itunes:title>Enhancing Human Communication with AI: Balancing Connection and Privacy with Cairn Insights</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. 
You can hear the difference.</p><p>In this episode, Alec doubles down by speaking to both Kevin Greaney, Founder and CEO and Chris Camacho, Ph.D., Co-Founder and Chief Scientist at Cairn Insights. Listen as they discuss their “user focused AI” and opine on ethical AI and “human in the loop”.</p><p>1.&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI in Digital Communication</strong>: Cairn Insights utilizes AI to analyze digital communication such as emails, Slack, and video calls from a psychological standpoint. The goal is to improve communication within teams and organizations by examining patterns, sequences, and reaction times.</p><p>2.&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI and Employment</strong>: Chris Camacho discusses the role of AI in augmenting human tasks rather than replacing jobs. AI can handle data organization and repetitive tasks, allowing humans to focus on more complex issues.</p><p>3.&nbsp;&nbsp;&nbsp;&nbsp;<strong>Ethical AI Implementation</strong>: Both Kevin and Chris discuss the importance of integrating people into AI processes to ensure ethical implementation. They stress the need for human oversight to prevent biases and ensure AI benefits society, not just technologically but ethically.</p><p>4.&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI Surveillance Concerns</strong>: Kevin addresses the potential privacy concerns related to AI surveillance. He emphasizes that Cairn Insights aims to use AI positively, enhancing quality of life and communication efficiency rather than intrusively monitoring individuals.</p><p>5.&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI Regulation and Future Outlook</strong>: Kevin predicts a significant shift in AI regulation, emphasizing the need for companies to adopt AI strategies or risk falling behind. 
He believes that despite potential risks, AI will ultimately benefit society by performing tasks that allow humans to engage in higher-level thinking.</p><h3>References</h3><h3>Movies</h3><p>1.&nbsp;&nbsp;&nbsp;&nbsp;<strong>Apollo 13</strong></p><ul><li>An acclaimed movie about the ill-fated Apollo 13 lunar mission.</li><li>Link: <a href="https://www.imdb.com/title/tt0112384/" rel="noopener noreferrer" target="_blank">Apollo 13 on IMDb</a></li></ul><br/><p>2.&nbsp;&nbsp;&nbsp;&nbsp;<strong>The Social Network</strong></p><ul><li>A film about the founding of Facebook.</li><li>Link: <a href="https://www.imdb.com/title/tt1285016/" rel="noopener noreferrer" target="_blank">The Social Network on IMDb</a></li></ul><br/><h3>Books</h3><p>1.&nbsp;&nbsp;&nbsp;&nbsp;<strong>A Life in Leadership by John Whitehead:</strong> A book detailing the leadership journey of John Whitehead.</p><p>2.&nbsp;&nbsp;&nbsp;&nbsp;<strong>Foundation Series by Isaac Asimov</strong>: A classic science fiction series exploring themes of future societies and technology.</p><h3>Companies</h3><p><strong>1.&nbsp;&nbsp;&nbsp;&nbsp;Cairn Insights</strong> A company specializing in AI and digital communication analysis. <a href="https://cairninsights.ai/" rel="noopener noreferrer" target="_blank">https://cairninsights.ai/</a></p><p><strong>2.&nbsp;&nbsp;&nbsp;&nbsp;AI Risk, Inc. 
A company providing a turnkey, secure enterprise AI platform for banks and other regulated organizations </strong><a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com</a></p><p><strong>3.&nbsp;&nbsp;&nbsp;&nbsp;Elon Musk</strong> (@elonmusk) CEO of Tesla and SpaceX, known for his innovation in electric vehicles and space, and now DOGE.</p><p><strong>&nbsp;</strong></p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this episode, Alec doubles down by speaking to both Kevin Greaney, Founder and CEO and Chris Camacho, Ph.D., Co-Founder and Chief Scientist at Cairn Insights. Listen as they discuss their “user focused AI” and opine on ethical AI and “human in the loop”.</p><p>1.&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI in Digital Communication</strong>: Cairn Insights utilizes AI to analyze digital communication such as emails, Slack, and video calls from a psychological standpoint. The goal is to improve communication within teams and organizations by examining patterns, sequences, and reaction times.</p><p>2.&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI and Employment</strong>: Chris Camacho discusses the role of AI in augmenting human tasks rather than replacing jobs. 
AI can handle data organization and repetitive tasks, allowing humans to focus on more complex issues.</p><p>3.&nbsp;&nbsp;&nbsp;&nbsp;<strong>Ethical AI Implementation</strong>: Both Kevin and Chris discuss the importance of integrating people into AI processes to ensure ethical implementation. They stress the need for human oversight to prevent biases and ensure AI benefits society, not just technologically but ethically.</p><p>4.&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI Surveillance Concerns</strong>: Kevin addresses the potential privacy concerns related to AI surveillance. He emphasizes that Cairn Insights aims to use AI positively, enhancing quality of life and communication efficiency rather than intrusively monitoring individuals.</p><p>5.&nbsp;&nbsp;&nbsp;&nbsp;<strong>AI Regulation and Future Outlook</strong>: Kevin predicts a significant shift in AI regulation, emphasizing the need for companies to adopt AI strategies or risk falling behind. He believes that despite potential risks, AI will ultimately benefit society by performing tasks that allow humans to engage in higher-level thinking.</p><h3>References</h3><h3>Movies</h3><p>1.&nbsp;&nbsp;&nbsp;&nbsp;<strong>Apollo 13</strong></p><ul><li>An acclaimed movie about the ill-fated Apollo 13 lunar mission.</li><li>Link: <a href="https://www.imdb.com/title/tt0112384/" rel="noopener noreferrer" target="_blank">Apollo 13 on IMDb</a></li></ul><br/><p>2.&nbsp;&nbsp;&nbsp;&nbsp;<strong>The Social Network</strong></p><ul><li>A film about the founding of Facebook.</li><li>Link: <a href="https://www.imdb.com/title/tt1285016/" rel="noopener noreferrer" target="_blank">The Social Network on IMDb</a></li></ul><br/><h3>Books</h3><p>1.&nbsp;&nbsp;&nbsp;&nbsp;<strong>A Life in Leadership by John Whitehead:</strong> A book detailing the leadership journey of John Whitehead.</p><p>2.&nbsp;&nbsp;&nbsp;&nbsp;<strong>Foundation Series by Isaac 
Asimov</strong>: A classic science fiction series exploring themes of future societies and technology.</p><h3>Companies</h3><p><strong>1.&nbsp;&nbsp;&nbsp;&nbsp;Cairn Insights</strong> A company specializing in AI and digital communication analysis. <a href="https://cairninsights.ai/" rel="noopener noreferrer" target="_blank">https://cairninsights.ai/</a></p><p><strong>2.&nbsp;&nbsp;&nbsp;&nbsp;AI Risk, Inc. A company providing a turnkey, secure enterprise AI platform for banks and other regulated organizations </strong><a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com</a></p><p><strong>3.&nbsp;&nbsp;&nbsp;&nbsp;Elon Musk</strong> (@elonmusk) CEO of Tesla and SpaceX, known for his innovation in electric vehicles and space, and now DOGE.</p><p><strong>&nbsp;</strong></p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">f05f8a88-7833-45b8-b9d4-5e55f9977161</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 11 Feb 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/8a256dbe-f0db-4d31-ac20-21084c17db17/27-Cairn-Insights-master.mp3" length="84906234" type="audio/mpeg"/><itunes:duration>35:23</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>27</itunes:episode><podcast:episode>27</podcast:episode><podcast:season>1</podcast:season></item><item><title>Is AI Winning the Cyber War? with Kodjo Hogan</title><itunes:title>Is AI Winning the Cyber War? with Kodjo Hogan</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. 
<a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>This exciting episode features Kodjo Hogan, an esteemed Information Security Risk Executive, as he joins host Alec Crawford on the "AI Risk Reward Podcast." Together, they delve into the transformative realm of artificial intelligence, examining both its promising opportunities and the challenges it presents in the realm of cybersecurity. The conversation offers listeners valuable insights into AI's impact across various industries, from its technical intricacies to its broader societal implications. Through thought-provoking discussions, the episode provides a comprehensive perspective on how AI is reshaping the landscape.</p><p><strong>Summary</strong></p><ul><li>Kodjo Hogan emphasizes the importance of focusing on what's important in information security while remaining aware of the surrounding environment, drawing on a parable from Paulo Coelho's "The Alchemist".</li><li>AI's ability to perform psychological profiling and create personalized phishing scams is identified as a significant risk, highlighting the need for enhanced cybersecurity measures.</li><li>The episode discusses how regulations, although essential, are often outdated by the time they are implemented, stressing the need for flexible and innovative approaches in cybersecurity.</li><li>Kodjo Hogan advises financial institutions to leverage generative AI for efficiency gains, like speeding up data analysis, while maintaining caution in its adoption.</li><li>The discussion touches on the potential of AI to eliminate jobs, suggesting 
that professionals need to continually upskill and adapt.</li></ul><br/><p><strong>References</strong></p><p><u>Books</u></p><ul><li>"The Alchemist" by Paulo Coelho.</li><li>"Artificial Intelligence: A Guide to Intelligent Systems" by Michael Negnevitsky.</li><li>"The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win" by Gene Kim, Kevin Behr, and George Spafford.</li></ul><br/><h3><u class="ql-size-small">Movies</u></h3><ul><li>Amadeus.</li></ul><br/><h3><u class="ql-size-small">Internet Resources</u></h3><ul><li>TLDR: (<a href="https://tldr.tech/" rel="noopener noreferrer" target="_blank">https://tldr.tech</a>) - A curated list of current events in technology and more.</li><li>Wired: (<a href="https://www.wired.com/" rel="noopener noreferrer" target="_blank">https://www.wired.com</a>) - A source for in-depth articles on technology and cybersecurity news.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>This exciting episode features Kodjo Hogan, an esteemed Information Security Risk Executive, as he joins host Alec Crawford on the "AI Risk Reward Podcast." Together, they delve into the transformative realm of artificial intelligence, examining both its promising opportunities and the challenges it presents in the realm of cybersecurity. 
The conversation offers listeners valuable insights into AI's impact across various industries, from its technical intricacies to its broader societal implications. Through thought-provoking discussions, the episode provides a comprehensive perspective on how AI is reshaping the landscape.</p><p><strong>Summary</strong></p><ul><li>Kodjo Hogan emphasizes the importance of focusing on what's important in information security while remaining aware of the surrounding environment, drawing on a parable from Paulo Coelho's "The Alchemist".</li><li>AI's ability to perform psychological profiling and create personalized phishing scams is identified as a significant risk, highlighting the need for enhanced cybersecurity measures.</li><li>The episode discusses how regulations, although essential, are often outdated by the time they are implemented, stressing the need for flexible and innovative approaches in cybersecurity.</li><li>Kodjo Hogan advises financial institutions to leverage generative AI for efficiency gains, like speeding up data analysis, while maintaining caution in its adoption.</li><li>The discussion touches on the potential of AI to eliminate jobs, suggesting that professionals need to continually upskill and adapt.</li></ul><br/><p><strong>References</strong></p><p><u>Books</u></p><ul><li>"The Alchemist" by Paulo Coelho.</li><li>"Artificial Intelligence: A Guide to Intelligent Systems" by Michael Negnevitsky.</li><li>"The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win" by Gene Kim, Kevin Behr, and George Spafford.</li></ul><br/><h3><u class="ql-size-small">Movies</u></h3><ul><li>Amadeus.</li></ul><br/><h3><u class="ql-size-small">Internet Resources</u></h3><ul><li>TLDR: (<a href="https://tldr.tech/" rel="noopener noreferrer" target="_blank">https://tldr.tech</a>) - A curated list of current events in technology and more.</li><li>Wired: (<a href="https://www.wired.com/" rel="noopener noreferrer" target="_blank">https://www.wired.com</a>) - A
source for in-depth articles on technology and cybersecurity news.</li></ul><br/>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">937948da-d967-4482-9c6c-6605ff32305b</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 04 Feb 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/64886951-5b1f-494d-a5f4-a4d6cf434b1e/25-Kodjo-Hogan-master.mp3" length="110393385" type="audio/mpeg"/><itunes:duration>46:00</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>26</itunes:episode><podcast:episode>26</podcast:episode><podcast:season>1</podcast:season></item><item><title>Every Job Will Be an AI Job, with Mike Montague</title><itunes:title>Every Job Will Be an AI Job, with Mike Montague</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Our guest this week is Mike Montague, a seasoned AI marketing expert and founder, who  shares his insights on how AI is reshaping the marketing landscape. In this engaging show, we explore the balance between technology and human ingenuity, addressing ethical considerations, evolving strategies, and the potential pitfalls of over-dependence on automation. 
</p><ul><li><strong>AI and Human Collaboration</strong>: Mike Montague emphasizes the importance of adopting a "human first" approach in AI marketing, insisting that while AI can significantly boost efficiency, the human element remains vital for fostering creativity and providing context. He strongly advocates for leveraging AI as a tool to enhance, rather than replace, human creativity, ensuring that the personal touch remains at the forefront of marketing endeavors.</li><li><strong>Ethical Use of AI</strong>: Montague highlights pressing ethical concerns surrounding AI, particularly focusing on issues such as data privacy and the evolving nature of intellectual property. He suggests that as AI becomes more integrated into creative processes, traditional notions of intellectual property may become obsolete, emphasizing the importance of crediting and supporting the original creators of content.</li><li><strong>Marketing Strategy Evolution</strong>: According to Montague, AI tools have revolutionized the marketing landscape, making it easier for small and medium-sized businesses to compete on a more level playing field with larger brands. This technological shift allows smaller companies to produce high-quality content without the need for extensive resources that were once necessary.</li><li><strong>Challenges with AI in Marketing</strong>: Montague cautions against the dangers of over-reliance on AI, which can lead to the creation of generic or spammy content that fails to resonate with audiences. 
He advises marketers to prioritize building genuine connections with their audience, rather than exploiting AI's capabilities to spam or manufacture fake engagements, ensuring that authenticity remains a key focus.</li></ul><br/><p><strong>References</strong></p><p><u>Books</u></p><ul><li>"A Thousand True Fans" by John Longhurst</li><li>"What to Do When It's Your Turn" by Seth Godin</li><li>"Duct Tape Marketing" by John Jantsch</li></ul><br/><p><u>Movies</u></p><ul><li>Iron Man</li><li>Terminator</li></ul><br/><p><u>Online Resources</u></p><ul><li><a href="https://www.youtube.com/c/lexfridman/videos" rel="noopener noreferrer" target="_blank">Lex Fridman's YouTube channel and podcast</a></li><li><a href="https://annhandley.com/newsletter/" rel="noopener noreferrer" target="_blank">Ann Handley's email newsletter</a></li></ul><br/>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Our guest this week is Mike Montague, a seasoned AI marketing expert and founder, who shares his insights on how AI is reshaping the marketing landscape. In this engaging show, we explore the balance between technology and human ingenuity, addressing ethical considerations, evolving strategies, and the potential pitfalls of over-dependence on automation.
</p><ul><li><strong>AI and Human Collaboration</strong>: Mike Montague emphasizes the importance of adopting a "human first" approach in AI marketing, insisting that while AI can significantly boost efficiency, the human element remains vital for fostering creativity and providing context. He strongly advocates for leveraging AI as a tool to enhance, rather than replace, human creativity, ensuring that the personal touch remains at the forefront of marketing endeavors.</li><li><strong>Ethical Use of AI</strong>: Montague highlights pressing ethical concerns surrounding AI, particularly focusing on issues such as data privacy and the evolving nature of intellectual property. He suggests that as AI becomes more integrated into creative processes, traditional notions of intellectual property may become obsolete, emphasizing the importance of crediting and supporting the original creators of content.</li><li><strong>Marketing Strategy Evolution</strong>: According to Montague, AI tools have revolutionized the marketing landscape, making it easier for small and medium-sized businesses to compete on a more level playing field with larger brands. This technological shift allows smaller companies to produce high-quality content without the need for extensive resources that were once necessary.</li><li><strong>Challenges with AI in Marketing</strong>: Montague cautions against the dangers of over-reliance on AI, which can lead to the creation of generic or spammy content that fails to resonate with audiences. 
He advises marketers to prioritize building genuine connections with their audience, rather than exploiting AI's capabilities to spam or manufacture fake engagements, ensuring that authenticity remains a key focus.</li></ul><br/><p><strong>References</strong></p><p><u>Books</u></p><ul><li>"A Thousand True Fans" by John Longhurst</li><li>"What to Do When It's Your Turn" by Seth Godin</li><li>"Duct Tape Marketing" by John Jantsch</li></ul><br/><p><u>Movies</u></p><ul><li>Iron Man</li><li>Terminator</li></ul><br/><p><u>Online Resources</u></p><ul><li><a href="https://www.youtube.com/c/lexfridman/videos" rel="noopener noreferrer" target="_blank">Lex Fridman's YouTube channel and podcast</a></li><li><a href="https://annhandley.com/newsletter/" rel="noopener noreferrer" target="_blank">Ann Handley's email newsletter</a></li></ul><br/>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">57409f68-ec44-4b31-8a85-9c461a095abf</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 28 Jan 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/ad838583-a2b5-476b-a692-91fa91b30350/24-Mike-Montague-master.mp3" length="102061369" type="audio/mpeg"/><itunes:duration>42:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>25</itunes:episode><podcast:episode>25</podcast:episode><podcast:season>1</podcast:season></item><item><title>Breaking! Top 10 Cybersecurity Predictions for 2025 from Matthew Rosenquist</title><itunes:title>Breaking!
Top 10 Cybersecurity Predictions for 2025 from Matthew Rosenquist</itunes:title><description><![CDATA[<p>Alec Crawford, CEO of <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">AI Risk, Inc.</a>, talks to Matthew Rosenquist about his Top 10 Cybersecurity Predictions for 2025. From Nation States to rising expectations to the wave of AI-driven hacking, you heard it here first!</p><p>You can find the full list on <a href="https://www.linkedin.com/pulse/10-cybersecurity-predictions-2025-matthew-rosenquist-ejy4c/" rel="noopener noreferrer" target="_blank">LinkedIn</a> or Matthew's new <a href="https://substack.com/@matthewrosenquist/note/p-155297051?utm_source=notes-share-action&amp;r=1qxkwn" rel="noopener noreferrer" target="_blank">Substack</a>!</p>]]></description><content:encoded><![CDATA[<p>Alec Crawford, CEO of <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">AI Risk, Inc.</a>, talks to Matthew Rosenquist about his Top 10 Cybersecurity Predictions for 2025.
From Nation States to rising expectations to the wave of AI-driven hacking, you heard it here first!</p><p>You can find the full list on <a href="https://www.linkedin.com/pulse/10-cybersecurity-predictions-2025-matthew-rosenquist-ejy4c/" rel="noopener noreferrer" target="_blank">LinkedIn</a> or Matthew's new <a href="https://substack.com/@matthewrosenquist/note/p-155297051?utm_source=notes-share-action&amp;r=1qxkwn" rel="noopener noreferrer" target="_blank">Substack</a>!</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">cffc1850-c692-4956-a64b-8555b1d72b03</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Wed, 22 Jan 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/8446b1e7-7d5b-41ea-b99a-073935b22cdd/MR-Top-10-Cyber.mp3" length="7534777" type="audio/mpeg"/><itunes:duration>15:42</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>24</itunes:episode><podcast:episode>24</podcast:episode><podcast:season>1</podcast:season></item><item><title>Don&apos;t Let AI Hijack Art! Be Human-Centric, with Michaell Magrutsche</title><itunes:title>Don&apos;t Let AI Hijack Art! Be Human-Centric, with Michaell Magrutsche</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>.
You can hear the difference.</p><ul><li><strong>The Role of AI in Creativity</strong>: Magrutsche expressed concerns about AI's role in creativity, suggesting that AI lacks the personal touch and uniqueness inherent in human art. He described AI as a tool that, while useful for certain tasks, cannot replicate the personal creative process.</li><li><strong>Art's Value Beyond Economics</strong>: The conversation highlighted how art can function outside economic systems. Magrutsche pointed out that the true value of art is in its creation and personal expression, not necessarily in its commercial success.</li><li><strong>Human-Centric Versus System-Centric Views</strong>: Magrutsche discussed the importance of shifting from system-centric to human-centric approaches, especially in education and creativity. He argued that the current system fails many, particularly those who are neurodivergent.</li><li><strong>AI as a Tool, Not a Creator</strong>: Magrutsche compared AI to a knife, useful but not self-determining.
He stressed the importance of human awareness and control over AI's use, particularly in creative fields.</li><li><strong>Ethical Concerns with AI</strong>: The podcast touched on ethical issues regarding AI, including its potential misuse and the importance of maintaining transparency and ethical boundaries in its deployment.</li><li><strong>Art and Emotional Intelligence</strong>: Magrutsche emphasized the need to harness emotional intelligence in creative endeavors and life, suggesting that AI cannot replicate this uniquely human capability.</li><li><strong>The Creation Process</strong>: He advocated for appreciation of the creative process itself, arguing that creation should not be driven solely by the desire for economic gain or societal approval.</li><li><strong>Impact of Social Media on Art</strong>: There was a discussion on how social media affects art, with Magrutsche suggesting that while it can expose people to more art, it often fails to capture the depth and personal connection of the creative process.</li><li><strong>AI's Limitations in Art</strong>: Magrutsche concluded that AI would never fully replicate the nuances of human creativity, as it generalizes and lacks the individual perspective necessary for true artistic expression.</li></ul><br/><p><strong>Books, Movies, Artists and Related Links</strong>:</p><p><strong>Books</strong>:</p><ul><li>"The Power of Now" by Eckhart Tolle</li></ul><br/><p><strong>Artists:</strong></p><ul><li>Bruegel</li><li>Andy Warhol</li><li>Vincent van Gogh</li><li>Jeff Koons</li><li>Taylor Swift (mentioned in the context of music)</li><li>Pablo Picasso</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc.
<a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><ul><li><strong>The Role of AI in Creativity</strong>: Magrutsche expressed concerns about AI's role in creativity, suggesting that AI lacks the personal touch and uniqueness inherent in human art. He described AI as a tool that, while useful for certain tasks, cannot replicate the personal creative process.</li><li><strong>Art's Value Beyond Economics</strong>: The conversation highlighted how art can function outside economic systems. Magrutsche pointed out that the true value of art is in its creation and personal expression, not necessarily in its commercial success.</li><li><strong>Human-Centric Versus System-Centric Views</strong>: Magrutsche discussed the importance of shifting from system-centric to human-centric approaches, especially in education and creativity. He argued that the current system fails many, particularly those who are neurodivergent.</li><li><strong>AI as a Tool, Not a Creator</strong>: Magrutsche compared AI to a knife, useful but not self-determining.
He stressed the importance of human awareness and control over AI's use, particularly in creative fields.</li><li><strong>Ethical Concerns with AI</strong>: The podcast touched on ethical issues regarding AI, including its potential misuse and the importance of maintaining transparency and ethical boundaries in its deployment.</li><li><strong>Art and Emotional Intelligence</strong>: Magrutsche emphasized the need to harness emotional intelligence in creative endeavors and life, suggesting that AI cannot replicate this uniquely human capability.</li><li><strong>The Creation Process</strong>: He advocated for appreciation of the creative process itself, arguing that creation should not be driven solely by the desire for economic gain or societal approval.</li><li><strong>Impact of Social Media on Art</strong>: There was a discussion on how social media affects art, with Magrutsche suggesting that while it can expose people to more art, it often fails to capture the depth and personal connection of the creative process.</li><li><strong>AI's Limitations in Art</strong>: Magrutsche concluded that AI would never fully replicate the nuances of human creativity, as it generalizes and lacks the individual perspective necessary for true artistic expression.</li></ul><br/><p><strong>Books, Movies, Artists and Related Links</strong>:</p><p><strong>Books</strong>:</p><ul><li>"The Power of Now" by Eckhart Tolle</li></ul><br/><p><strong>Artists:</strong></p><ul><li>Bruegel</li><li>Andy Warhol</li><li>Vincent van Gogh</li><li>Jeff Koons</li><li>Taylor Swift (mentioned in the context of music)</li><li>Pablo Picasso</li></ul><br/>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">62b1e44e-7c65-4ebb-9665-510117e28b88</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 21 Jan 2025 01:00:00 -0400</pubDate><enclosure
url="https://podcasts.captivate.fm/media/496c4ef8-f0ef-46ad-95a5-e605b1111c37/23-Michaell-Magruttsche-master.mp3" length="94167165" type="audio/mpeg"/><itunes:duration>39:14</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>23</itunes:episode><podcast:episode>23</podcast:episode><podcast:season>1</podcast:season></item><item><title>Don&apos;t Make This Mistake with AI! Ex-Intel Cybersecurity Strategist Matthew Rosenquist Warns Us</title><itunes:title>Don&apos;t Make This Mistake with AI! Ex-Intel Cybersecurity Strategist Matthew Rosenquist Warns Us</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>In this engaging episode of the AI Risk Reward Podcast, host Alec Crawford converses with Matthew Rosenquist, a renowned CISO and cybersecurity strategist at Mercury Risk and Compliance. 
The discussion delves into the intersection of AI, cybersecurity, business strategy, and ethical considerations, enriched by Rosenquist's extensive experience at Intel and his insights into the evolving landscape of AI technologies.</p><p><strong>Impossible Problems:</strong> Rosenquist emphasizes the value of pursuing roles that tackle "impossible problems," drawing from his experiences at Intel, where such challenges were abundant.</p><p><strong>Leadership Lessons:</strong> He shares anecdotes about Andy Grove of Intel, highlighting a culture that valued ideas over titles.</p><p><strong>Strategic Insights:</strong> The relevance of Sun Tzu's "The Art of War" in cybersecurity and business is discussed, emphasizing the need to understand oneself, the adversary, and the battleground.</p><p><strong>AI in Cybersecurity:</strong> While deep fakes grab headlines, AI's real value for attackers lies in enhancing social engineering tactics, such as phishing.</p><p><strong>Cyber Insurance Challenges:</strong> The difficulties faced by the insurance industry in adapting traditional risk assessment models to the chaotic nature of cybersecurity threats are explored.</p><p><strong>GenAI Risks:</strong> Rosenquist outlines the risks of using generative AI in enterprises, particularly concerning data security and integration with backend systems.</p><p><strong>Ethical AI Deployment:</strong> Organizations are urged to understand the ethical implications of AI and establish robust oversight to mitigate biases and harmful outputs.</p><p><strong>Regulatory Roles:</strong> The role of governments in AI regulation is seen as a balancing act between fostering innovation and ensuring citizen safety.</p><p><strong>AI Evolution:</strong> The rapid evolution of AI technologies, like movie-making AI, is highlighted, sparking curiosity about future advancements.</p><p><strong>Advice for Professionals:</strong> For those entering cybersecurity, Rosenquist advises understanding the dynamic 
nature of the field and developing a personal ethical code.</p><p><strong>References:</strong></p><p>"Only the Paranoid Survive" by Andy Grove</p><p>"The Art of War" by Sun Tzu.</p><p>"Ethical Machines" by Reid Blackman</p><p><a href="https://www.intel.com" rel="noopener noreferrer" target="_blank">https://www.intel.com</a></p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>In this engaging episode of the AI Risk Reward Podcast, host Alec Crawford converses with Matthew Rosenquist, a renowned CISO and cybersecurity strategist at Mercury Risk and Compliance. The discussion delves into the intersection of AI, cybersecurity, business strategy, and ethical considerations, enriched by Rosenquist's extensive experience at Intel and his insights into the evolving landscape of AI technologies.</p><p><strong>Impossible Problems:</strong> Rosenquist emphasizes the value of pursuing roles that tackle "impossible problems," drawing from his experiences at Intel, where such challenges were abundant.</p><p><strong>Leadership Lessons:</strong> He shares anecdotes about Andy Grove of Intel, highlighting a culture that valued ideas over titles.</p><p><strong>Strategic Insights:</strong> The relevance of Sun Tzu's "The Art of War" in cybersecurity and business is discussed, emphasizing the need to understand oneself, the adversary, and the battleground.</p><p><strong>AI in Cybersecurity:</strong> While deep fakes grab headlines, AI's real value for attackers lies in enhancing social engineering tactics, such as phishing.</p><p><strong>Cyber Insurance 
Challenges:</strong> The difficulties faced by the insurance industry in adapting traditional risk assessment models to the chaotic nature of cybersecurity threats are explored.</p><p><strong>GenAI Risks:</strong> Rosenquist outlines the risks of using generative AI in enterprises, particularly concerning data security and integration with backend systems.</p><p><strong>Ethical AI Deployment:</strong> Organizations are urged to understand the ethical implications of AI and establish robust oversight to mitigate biases and harmful outputs.</p><p><strong>Regulatory Roles:</strong> The role of governments in AI regulation is seen as a balancing act between fostering innovation and ensuring citizen safety.</p><p><strong>AI Evolution:</strong> The rapid evolution of AI technologies, like movie-making AI, is highlighted, sparking curiosity about future advancements.</p><p><strong>Advice for Professionals:</strong> For those entering cybersecurity, Rosenquist advises understanding the dynamic nature of the field and developing a personal ethical code.</p><p><strong>References:</strong></p><p>"Only the Paranoid Survive" by Andy Grove</p><p>"The Art of War" by Sun Tzu.</p><p>"Ethical Machines" by Reid Blackman</p><p><a href="https://www.intel.com" rel="noopener noreferrer" target="_blank">https://www.intel.com</a></p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">6691cef1-e584-4adf-a96c-78b1c862bae6</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 14 Jan 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/038fdf50-9354-441a-8db4-b9d83e4e1cc9/22-Matthew-Rosenquist-master.mp3" length="131846185" 
type="audio/mpeg"/><itunes:duration>54:56</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>22</itunes:episode><podcast:episode>22</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI: The Future of Banking or the End of an Era? Ask Rutger van Faassen, Founder of Information Banker.</title><itunes:title>AI: The Future of Banking or the End of an Era? Ask Rutger van Faassen, Founder of Information Banker.</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Join Alec Crawford on the AI Risk Reward podcast as he welcomes Rutger van Faassen, the founder and CEO of Information Banker. In this insightful episode, Rutger shares his unique career journey from studying at hotel school to becoming a prominent figure in finance at ABN Amro and beyond. He discusses the influence of Steve Jobs on his leadership style, the potential impact of AI and blockchain on the finance industry, and the importance of storytelling in business. 
Rutger also provides valuable advice for aspiring entrepreneurs and shares his experiences with AI applications in his business.</p><ul><li>Steve Jobs' influence on Rutger's leadership style includes the significance of "walk and talk" meetings for better decision-making.</li><li>Rutger's journey from ABN Amro to founding Information Banker showcases the value of diverse experiences in banking and technology sectors.</li><li>The concept of "Start With Why" by Simon Sinek is central to Rutger's approach in helping clients connect emotionally with their audience.</li><li>Rutger uses AI tools like Perplexity AI and Copilot for content creation and customer relationship management.</li><li>AI is seen as an enhancer rather than a disruptor in traditional banking, with a focus on improving efficiency and customer experience.</li><li>Regulatory challenges with AI include managing bias and ensuring ethical applications in sensitive areas like banking.</li><li>Aspiring founders should focus on finding real customers early on to test and refine their business ideas.</li></ul><br/><p><strong>References</strong></p><p>1. Books:</p><p>&nbsp;&nbsp;- "Start With Why" by Simon Sinek</p><p>&nbsp;&nbsp;- "Build for Tomorrow" by Jason Feifer</p><p>2. Websites:</p><p>&nbsp;&nbsp;- Information Banker: <a href="https://www.informationbanker.com" rel="noopener noreferrer" target="_blank">https://www.informationbanker.com</a></p><p>&nbsp;&nbsp;- Perplexity AI: <a href="https://www.perplexity.ai" rel="noopener noreferrer" target="_blank">https://www.perplexity.ai</a></p><p>3.
Podcasts:</p><p>&nbsp;&nbsp;- <a href="https://open.spotify.com/show/4GFQDInlJAYGYNZorvFmPi?si=85bef72857074924" rel="noopener noreferrer" target="_blank">AI Risk Reward</a> with Alec Crawford &nbsp;</p><p>&nbsp;&nbsp;- <a href="https://open.spotify.com/show/16bZe0rywlZ9WGBZLiEGFf?si=c46df7ed3d4d47e9" rel="noopener noreferrer" target="_blank">Banking on Information</a> by Rutger van Faassen</p><p>&nbsp;&nbsp;- <a href="https://podcasts.voxmedia.com/show/pivot" rel="noopener noreferrer" target="_blank">Pivot</a> with Kara Swisher and Scott Galloway</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Join Alec Crawford on the AI Risk Reward podcast as he welcomes Rutger van Faassen, the founder and CEO of Information Banker. In this insightful episode, Rutger shares his unique career journey from studying at hotel school to becoming a prominent figure in finance at ABN Amro and beyond. He discusses the influence of Steve Jobs on his leadership style, the potential impact of AI and blockchain on the finance industry, and the importance of storytelling in business. 
Rutger also provides valuable advice for aspiring entrepreneurs and shares his experiences with AI applications in his business.</p><ul><li>Steve Jobs' influence on Rutger's leadership style includes the significance of "walk and talk" meetings for better decision-making.</li><li>Rutger's journey from ABN Amro to founding Information Banker showcases the value of diverse experiences in banking and technology sectors.</li><li>The concept of "Start With Why" by Simon Sinek is central to Rutger's approach in helping clients connect emotionally with their audience.</li><li>Rutger uses AI tools like Perplexity AI and Copilot for content creation and customer relationship management.</li><li>AI is seen as an enhancer rather than a disruptor in traditional banking, with a focus on improving efficiency and customer experience.</li><li>Regulatory challenges with AI include managing bias and ensuring ethical applications in sensitive areas like banking.</li><li>Aspiring founders should focus on finding real customers early on to test and refine their business ideas.</li></ul><br/><p><strong>References</strong></p><p>1. Books:</p><p>&nbsp;&nbsp;- "Start With Why" by Simon Sinek</p><p>&nbsp;&nbsp;- "Build for Tomorrow" by Jason Feifer</p><p>2. Websites:</p><p>&nbsp;&nbsp;- Information Banker: <a href="https://www.informationbanker.com" rel="noopener noreferrer" target="_blank">https://www.informationbanker.com</a> </p><p>&nbsp;&nbsp;- Perplexity AI: <a href="https://www.perplexity.ai" rel="noopener noreferrer" target="_blank">https://www.perplexity.ai</a></p><p>3. 
Podcasts:</p><p>&nbsp;&nbsp;- <a href="https://open.spotify.com/show/4GFQDInlJAYGYNZorvFmPi?si=85bef72857074924" rel="noopener noreferrer" target="_blank">AI Risk Reward</a> with Alec Crawford &nbsp;</p><p>&nbsp;&nbsp;- <a href="https://open.spotify.com/show/16bZe0rywlZ9WGBZLiEGFf?si=c46df7ed3d4d47e9" rel="noopener noreferrer" target="_blank">Banking on Information</a> by Rutger van Faassen</p><p>&nbsp;&nbsp;- <a href="https://podcasts.voxmedia.com/show/pivot" rel="noopener noreferrer" target="_blank">Pivot</a> with Kara Swisher and Scott Galloway</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">b7e5bf5c-85cf-406e-a3fb-9806061144d2</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 07 Jan 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/03d69d47-3a45-4a83-8e0d-3c01175f5dfb/21-Rutger-van-Faassen-master.mp3" length="81166545" type="audio/mpeg"/><itunes:duration>33:49</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>21</itunes:episode><podcast:episode>21</podcast:episode><podcast:season>1</podcast:season></item><item><title>Venture Capital’s Role in the AI Revolution, with Adrian Mendoza, GP of Mendoza Ventures</title><itunes:title>Venture Capital’s Role in the AI Revolution, with Adrian Mendoza, GP of Mendoza Ventures</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Our host, Alec Crawford, interviews Adrian Mendoza, the founder and general partner at Mendoza Ventures, who shares insights from his journey in venture capital, his experiences with AI, and the evolution of the tech economy.</p><p><strong>From Architecture to AI:</strong> Adrian recounts his path from architecture to venture capital, highlighting his time at Harvard and his early involvement at Walt Disney Imagineering. He talks about the dot-com boom, his experiences at HBS, and the cyclical nature of the tech economy.</p><p><strong>AI and Venture Capital:</strong> Adrian Mendoza emphasized the transformative impact of AI on the venture capital landscape. He highlighted Mendoza Ventures' strategic focus on investing in AI, fintech, and blockchain companies, noting that their first AI investment involved scraping data for corporate gifting and sales enablement. Mendoza Ventures leverages AI to analyze data, guide investment decisions, and create AI-driven content such as podcasts. Adrian underscored the importance of understanding data and its applications, and the potential of large language models (LLMs) like ChatGPT, despite issues like data hallucination and the need for transparency and ethical considerations in AI deployment.</p><p><strong>AI Evolution: </strong>Significant emphasis is placed on the evolution of AI, the importance of data, and the role of large language models (LLMs) in current technology trends. Adrian also discusses the ethical considerations and regulatory aspects of AI, emphasizing transparency and the potential impacts of generative AI on data privacy.</p><p><strong>Data Privacy and AI:</strong> Adrian Mendoza discussed the significant issues surrounding data privacy and generative AI. 
There is a growing concern about how companies utilize personal data, especially with the advent of AI tools like ChatGPT. Adrian highlighted that people are unknowingly feeding sensitive data into these AI systems, which then learn from and potentially misuse this information. This lack of transparency and potential for data leaks raises serious ethical and regulatory concerns about AI usage.</p><p><strong>Regulation of AI:</strong> Adrian compared the regulation of AI to the past failures of regulating social media, suggesting that politicians might not be well-equipped to handle the complexities of AI regulation. He mentioned that, similar to how ad tech evolved post-political events with clearer markings for sponsored content, AI might need similar transparency measures. The challenge lies in ensuring that AI systems explain their decision-making processes to maintain compliance and build trust.</p><p>The conversation touches on Mendoza Ventures' investment strategies, focusing on fintech, blockchain, and AI, and the importance of building supportive ecosystems and networks for startups.</p><p><strong>Books</strong></p><p><em>The Devil in the White City</em> by Erik Larson</p><p><strong>&nbsp;</strong></p><p><strong>Movies</strong></p><p>•&nbsp;&nbsp;&nbsp;Men in Black</p><p>•&nbsp;&nbsp;&nbsp;Iron Man 2</p><p>•&nbsp;&nbsp;&nbsp;The Clone Wars</p><p><strong>Useful Links</strong></p><p><a href="https://www.notebooklm.com/" rel="noopener noreferrer" target="_blank">https://www.notebooklm.com</a></p><p><a href="https://www.mendoza-ventures.com/" rel="noopener noreferrer" target="_blank">https://www.mendoza-ventures.com</a></p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. 
<a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Our host, Alec Crawford, interviews Adrian Mendoza, the founder and general partner at Mendoza Ventures, who shares insights from his journey in venture capital, his experiences with AI, and the evolution of the tech economy.</p><p><strong>From Architecture to AI:</strong> Adrian recounts his path from architecture to venture capital, highlighting his time at Harvard and his early involvement at Walt Disney Imagineering. He talks about the dot-com boom, his experiences at HBS, and the cyclical nature of the tech economy.</p><p><strong>AI and Venture Capital:</strong> Adrian Mendoza emphasized the transformative impact of AI on the venture capital landscape. He highlighted Mendoza Ventures' strategic focus on investing in AI, fintech, and blockchain companies, noting that their first AI investment involved scraping data for corporate gifting and sales enablement. Mendoza Ventures leverages AI to analyze data, guide investment decisions, and create AI-driven content such as podcasts. Adrian underscored the importance of understanding data and its applications, and the potential of large language models (LLMs) like ChatGPT, despite issues like data hallucination and the need for transparency and ethical considerations in AI deployment.</p><p><strong>AI Evolution: </strong>Significant emphasis is placed on the evolution of AI, the importance of data, and the role of large language models (LLMs) in current technology trends. 
Adrian also discusses the ethical considerations and regulatory aspects of AI, emphasizing transparency and the potential impacts of generative AI on data privacy.</p><p><strong>Data Privacy and AI:</strong> Adrian Mendoza discussed the significant issues surrounding data privacy and generative AI. There is a growing concern about how companies utilize personal data, especially with the advent of AI tools like ChatGPT. Adrian highlighted that people are unknowingly feeding sensitive data into these AI systems, which then learn from and potentially misuse this information. This lack of transparency and potential for data leaks raises serious ethical and regulatory concerns about AI usage.</p><p><strong>Regulation of AI:</strong> Adrian compared the regulation of AI to the past failures of regulating social media, suggesting that politicians might not be well-equipped to handle the complexities of AI regulation. He mentioned that, similar to how ad tech evolved post-political events with clearer markings for sponsored content, AI might need similar transparency measures. 
The challenge lies in ensuring that AI systems explain their decision-making processes to maintain compliance and build trust.</p><p>The conversation touches on Mendoza Ventures' investment strategies, focusing on fintech, blockchain, and AI, and the importance of building supportive ecosystems and networks for startups.</p><p><strong>Books</strong></p><p><em>The Devil in the White City</em> by Erik Larson</p><p><strong>&nbsp;</strong></p><p><strong>Movies</strong></p><p>•&nbsp;&nbsp;&nbsp;Men in Black</p><p>•&nbsp;&nbsp;&nbsp;Iron Man 2</p><p>•&nbsp;&nbsp;&nbsp;The Clone Wars</p><p><strong>Useful Links</strong></p><p><a href="https://www.notebooklm.com/" rel="noopener noreferrer" target="_blank">https://www.notebooklm.com</a></p><p><a href="https://www.mendoza-ventures.com/" rel="noopener noreferrer" target="_blank">https://www.mendoza-ventures.com</a></p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">7aeba416-1d41-4e5b-8a17-ce27607ed2b3</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 17 Dec 2024 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/8a3b794e-653a-40c2-9991-227120164c57/20-Adrian-Mendoza-master.mp3" length="99606904" type="audio/mpeg"/><itunes:duration>41:30</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>20</itunes:episode><podcast:episode>20</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI Domination: from Poker to Innovation with Nick Jain, CEO of Ideascale</title><itunes:title>AI Domination: from Poker to Innovation with Nick Jain, CEO of Ideascale</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. 
<a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this engaging conversation, Alec Crawford interviews Nick Jain, CEO of Ideascale, a social platform for identifying the best ideas from your customers and your own team!</p><p>Harvard Business School Experience: Nick Jain reflects on the value of his Harvard Business School education, emphasizing the importance of the network and exposure to diverse perspectives.</p><p>Business Insights from Kix Founders: Jain shares insights from his interactions with the founders of Kix, highlighting their practical advice on leadership and business operations without the heavy reliance on venture capital.</p><p>Role of AI in Ideascale: Jain discusses how Ideascale leverages AI to enhance innovation processes, from helping users overcome writing stage fright to performing qualitative data analytics to assess idea feasibility.</p><p>Personal Use of AI: Jain uses AI for personal life optimization, such as automating grocery purchases and preparing for fatherhood. 
He also shares his experience as a competitive poker player, multitasking learning with gaming.</p><p>Concerns about AI: Jain expresses concerns about AI's impact on employment and human skill development, emphasizing the need for ethical oversight to avoid unintended consequences like autonomous decision-making without human understanding.</p><p><strong>Resources</strong></p><p>Books:</p><p>"Valuation" by Aswath Damodaran</p><p>"Black Swan" and "Fooled by Randomness" by Nassim Taleb</p><p>Movies and TV Shows:</p><p>"12 Angry Men"</p><p>"Stargate Atlantis"</p><p>"Carbon Black" (Netflix series)</p><p><br></p><p>Online Resources:</p><p><a href="https://www.ideascale.com" rel="noopener noreferrer" target="_blank">https://www.ideascale.com</a></p><p><a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com </a></p><p><a href="https://www.linkedin.com/in/nickjain/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/nickjain/</a></p><h3><br></h3>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. 
You can hear the difference.</p><p>In this engaging conversation, Alec Crawford interviews Nick Jain, CEO of Ideascale, a social platform for identifying the best ideas from your customers and your own team!</p><p>Harvard Business School Experience: Nick Jain reflects on the value of his Harvard Business School education, emphasizing the importance of the network and exposure to diverse perspectives.</p><p>Business Insights from Kix Founders: Jain shares insights from his interactions with the founders of Kix, highlighting their practical advice on leadership and business operations without the heavy reliance on venture capital.</p><p>Role of AI in Ideascale: Jain discusses how Ideascale leverages AI to enhance innovation processes, from helping users overcome writing stage fright to performing qualitative data analytics to assess idea feasibility.</p><p>Personal Use of AI: Jain uses AI for personal life optimization, such as automating grocery purchases and preparing for fatherhood. He also shares his experience as a competitive poker player, multitasking learning with gaming.</p><p>Concerns about AI: Jain expresses concerns about AI's impact on employment and human skill development, emphasizing the need for ethical oversight to avoid unintended consequences like autonomous decision-making without human understanding.</p><p><strong>Resources</strong></p><p>Books:</p><p>"Valuation" by Aswath Damodaran</p><p>"Black Swan" and "Fooled by Randomness" by Nassim Taleb</p><p>Movies and TV Shows:</p><p>"12 Angry Men"</p><p>"Stargate Atlantis"</p><p>"Carbon Black" (Netflix series)</p><p><br></p><p>Online Resources:</p><p><a href="https://www.ideascale.com" rel="noopener noreferrer" target="_blank">https://www.ideascale.com</a></p><p><a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com </a></p><p><a href="https://www.linkedin.com/in/nickjain/" rel="noopener noreferrer" 
target="_blank">https://www.linkedin.com/in/nickjain/</a></p><h3><br></h3>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">37bab2bb-2fd4-4188-ae62-b0ed8a456caf</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Wed, 11 Dec 2024 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/7d1a8719-7cd2-49cb-a789-9677753ab626/19-Nick-Jain-master.mp3" length="62047002" type="audio/mpeg"/><itunes:duration>25:51</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>19</itunes:episode><podcast:episode>19</podcast:episode><podcast:season>1</podcast:season></item><item><title>Epic Fail! Broken US Healthcare Data: Fix it with Heather Flannery and AI MINDSystems</title><itunes:title>Epic Fail! Broken US Healthcare Data: Fix it with Heather Flannery and AI MINDSystems</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>Alec’s guest today is Heather Flannery (@heatherflannery), co-founder and CEO at AI MINDSystems Foundation. Heather discusses her work with AI and blockchain, focusing on privacy-preserving federated learning architectures for regulated industries, emphasizing the importance of data decentralization. 
Heather highlights the inevitability of financializing data as an asset class, pointing out that data acquisition challenges are structural, legal, and organizational rather than merely technical.</p><p>The <strong>National HERO Initiative</strong> focuses on orchestrating health-related personal data for U.S. military members, veterans, and first responders, aiming to empower these groups within a transparent data economy.</p><p>The <strong>AI MINDSystems Foundation's Mission</strong> focuses on creating legal and operational models for individuals to have agency over their data, contributing to the healthcare system more effectively and ethically. Heather invites listeners to engage with AI MINDSystems Foundation through various means such as volunteering or donations.</p><p>Heather is involved with the <strong>AI 2030 Global Think Tank</strong>, which aims to create frameworks for responsible AI, emphasizing that ethical AI practices can drive business growth.</p><p>Heather shares insights from her experiences, including advice from <strong>Ethereum co-founder Joseph Lubin</strong> (@Ethereumjoseph) on embracing uncertainty in business ventures. 
You don't want to miss this!</p><p><strong>References</strong></p><ul><li><a href="https://www.intel.com/content/www/us/en/customer-spotlight/stories/ai-mindsystems-foundation-customer-story.html" rel="noopener noreferrer" target="_blank">Intel Case Study: AI Aids in Decentralization of Health Data AI MINDSystems Foundation acts to empower individuals with control of their health and personal data via self-sovereign AI.</a></li><li><a href="https://a.co/d/dUheZEI" rel="noopener noreferrer" target="_blank">Book by Co-Founder Hassan Tetteh, MD, Forbes, released TODAY: "Smarter Healthcare with AI: Harnessing Military Medicine to Revolutionize Healthcare for Everyone, Everywhere"</a></li><li><a href="https://a.co/d/d1DP5a1" rel="noopener noreferrer" target="_blank">Book by Co-Founder Sean Manion, PhD: "Blockchain for Medical Research: Accelerating Trust in Healthcare"</a></li><li><a href="https://www.businesswire.com/news/home/20240816259984/en/AI-MINDSystems-Foundation-Recruits-Chief-Veterans-Health-Officer-Advances-National-HERO-Person-Sovereign-AI-Initiative-for-Wellbeing-Prosperity-and-Privacy#:~:text=WASHINGTON%2D%2D(BUSINESS%20WIRE)%2D%2D,its%20Chief%20Veterans%20Health%20Officer." 
rel="noopener noreferrer" target="_blank">Press Release August 2024, "AI MINDSystems Foundation Recruits Chief Veterans Health Officer, Advances 'National HERO' Person-Sovereign AI Initiative for Wellbeing, Prosperity, and Privacy"</a></li><li>Support AI MINDSystems Foundation's National HERO initiative through Donation or Volunteering: visit&nbsp;<a href="http://www.ai-mindsytems.org" rel="noopener noreferrer" target="_blank">http://www.ai-mindsytems.org</a>.</li></ul><br/><p><strong>Books</strong>: Isaac Asimov's "I, Robot" series and "Foundation" series.</p><p><strong>Movies</strong>: "Dune" (specifically, Jodorowsky's unproduced version).</p><p>AI MINDSystems Foundation (<a href="https://www.ai-mindsystemsfoundation.org/" rel="noopener noreferrer" target="_blank">https://www.ai-mindsystemsfoundation.org</a>)</p><p>AI Risk, Inc. (<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com/</a>)</p><p>Microsoft (<a href="https://www.microsoft.com/" rel="noopener noreferrer" target="_blank">https://www.microsoft.com</a>) $MSFT</p><p>ConsenSys (<a href="https://consensys.net/" rel="noopener noreferrer" target="_blank">https://consensys.net</a>)</p><p>Ethereum (<a href="https://ethereum.org/" rel="noopener noreferrer" target="_blank">https://ethereum.org</a>)</p><p>#AI #AIsafety #ethereum</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. 
You can hear the difference.</p><p>Alec’s guest today is Heather Flannery (@heatherflannery), co-founder and CEO at AI MINDSystems Foundation. Heather discusses her work with AI and blockchain, focusing on privacy-preserving federated learning architectures for regulated industries, emphasizing the importance of data decentralization. Heather highlights the inevitability of financializing data as an asset class, pointing out that data acquisition challenges are structural, legal, and organizational rather than merely technical.</p><p>The <strong>National HERO Initiative</strong> focuses on orchestrating health-related personal data for U.S. military members, veterans, and first responders, aiming to empower these groups within a transparent data economy.</p><p>The <strong>AI MINDSystems Foundation's Mission</strong> focuses on creating legal and operational models for individuals to have agency over their data, contributing to the healthcare system more effectively and ethically. Heather invites listeners to engage with AI MINDSystems Foundation through various means such as volunteering or donations.</p><p>Heather is involved with the <strong>AI 2030 Global Think Tank</strong>, which aims to create frameworks for responsible AI, emphasizing that ethical AI practices can drive business growth.</p><p>Heather shares insights from her experiences, including advice from <strong>Ethereum co-founder Joseph Lubin</strong> (@Ethereumjoseph) on embracing uncertainty in business ventures. 
You don't want to miss this!</p><p><strong>References</strong></p><ul><li><a href="https://www.intel.com/content/www/us/en/customer-spotlight/stories/ai-mindsystems-foundation-customer-story.html" rel="noopener noreferrer" target="_blank">Intel Case Study: AI Aids in Decentralization of Health Data AI MINDSystems Foundation acts to empower individuals with control of their health and personal data via self-sovereign AI.</a></li><li><a href="https://a.co/d/dUheZEI" rel="noopener noreferrer" target="_blank">Book by Co-Founder Hassan Tetteh, MD, Forbes, released TODAY: "Smarter Healthcare with AI: Harnessing Military Medicine to Revolutionize Healthcare for Everyone, Everywhere"</a></li><li><a href="https://a.co/d/d1DP5a1" rel="noopener noreferrer" target="_blank">Book by Co-Founder Sean Manion, PhD: "Blockchain for Medical Research: Accelerating Trust in Healthcare"</a></li><li><a href="https://www.businesswire.com/news/home/20240816259984/en/AI-MINDSystems-Foundation-Recruits-Chief-Veterans-Health-Officer-Advances-National-HERO-Person-Sovereign-AI-Initiative-for-Wellbeing-Prosperity-and-Privacy#:~:text=WASHINGTON%2D%2D(BUSINESS%20WIRE)%2D%2D,its%20Chief%20Veterans%20Health%20Officer." 
rel="noopener noreferrer" target="_blank">Press Release August 2024, "AI MINDSystems Foundation Recruits Chief Veterans Health Officer, Advances 'National HERO' Person-Sovereign AI Initiative for Wellbeing, Prosperity, and Privacy"</a></li><li>Support AI MINDSystems Foundation's National HERO initiative through Donation or Volunteering: visit&nbsp;<a href="http://www.ai-mindsytems.org" rel="noopener noreferrer" target="_blank">http://www.ai-mindsytems.org</a>.</li></ul><br/><p><strong>Books</strong>: Isaac Asimov's "I, Robot" series and "Foundation" series.</p><p><strong>Movies</strong>: "Dune" (specifically, Jodorowsky's unproduced version).</p><p>AI MINDSystems Foundation (<a href="https://www.ai-mindsystemsfoundation.org/" rel="noopener noreferrer" target="_blank">https://www.ai-mindsystemsfoundation.org</a>)</p><p>AI Risk, Inc. (<a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com/</a>)</p><p>Microsoft (<a href="https://www.microsoft.com/" rel="noopener noreferrer" target="_blank">https://www.microsoft.com</a>) $MSFT</p><p>ConsenSys (<a href="https://consensys.net/" rel="noopener noreferrer" target="_blank">https://consensys.net</a>)</p><p>Ethereum (<a href="https://ethereum.org/" rel="noopener noreferrer" target="_blank">https://ethereum.org</a>)</p><p>#AI #AIsafety #ethereum</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">e0671687-4920-402b-8d4c-fc4fc1a60063</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Mon, 11 Nov 2024 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c41ffe48-d06d-46be-b4bd-d685543427a1/18-Heather-Flannery-master.mp3" length="92734610" 
type="audio/mpeg"/><itunes:duration>38:38</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>18</itunes:episode><podcast:episode>18</podcast:episode><podcast:season>1</podcast:season></item><item><title>Uberhacker Chris Roberts Reveals: AI, Cybersecurity, and Looming Chaos</title><itunes:title>Uberhacker Chris Roberts Reveals: AI, Cybersecurity, and Looming Chaos</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this engaging conversation, Alec Crawford interviews Chris Roberts, iconic hacker and cybersecurity professional, about his journey from early computing to cybersecurity. They discuss the evolution of technology, the importance of understanding the underlying systems, and the risks involved, including in transportation. Chris shares memorable experiences from DEF CON and his current work with AI, emphasizing the need for innovative thinking in cybersecurity.</p><p>Chris Roberts then discusses the cybersecurity challenges associated with large language models, emphasizing the importance of data integrity and the risks of misinformation. He explores the dual nature of AI, highlighting both its potential benefits in healthcare and the ethical dilemmas it presents. The discussion also delves into future AI developments, including the concept of digital twins. 
Chris shares resources for learning about AI and cybersecurity, and offers advice for tech founders.</p><p>Here is a list of movies, books, and useful links mentioned in the transcript:</p><p><strong>Movies:</strong></p><p>1. Inception</p><p>2. Tenet </p><p><strong>Books:</strong></p><p>1. Terry Pratchett's and Stephen Baxter's "The Long Earth" series</p><p>2. "Neuromancer" by William Gibson</p><p><strong>Other References:</strong></p><p>1. Douglas Adams - Known for his work in the early days of gaming and command line games, as well as "Hitchhiker's Guide to the Galaxy".</p><p>2. Starship Titanic - A game mentioned in connection with Douglas Adams.</p><p>3. Dungeons &amp; Dragons - Mentioned in the context of imagery and AI model training.</p><p>4. ChatGPT - Mentioned as a tool to play the old text-based "Adventure" game.</p><p><strong>Useful Links (Organizations/Resources):</strong></p><p>1. AI Risk, Inc. - Resource for managing risks, including cybersecurity specifically for LLMs. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a></p><p>2. Purdue University - Noted for having research papers and resources on AI and cybersecurity. Plus they have a flight school! <a href="https://www.purdue.edu/" rel="noopener noreferrer" target="_blank">https://www.purdue.edu/</a> </p><p>3. World Wide Technology (WWT) - Mentioned in relation to AI days and resources. <a href="https://www.wwt.com/aiday" rel="noopener noreferrer" target="_blank">https://www.wwt.com/aiday</a> </p><p>4. DEF CON - Hacker and cybersecurity conference. <a href="https://defcon.org/" rel="noopener noreferrer" target="_blank">https://defcon.org/</a></p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. 
<a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this engaging conversation, Alec Crawford interviews Chris Roberts, iconic hacker and cybersecurity professional, about his journey from early computing to cybersecurity. They discuss the evolution of technology, the importance of understanding the underlying systems, and the risks, including in transportation. Chris shares memorable experiences from DEF CON and his current work with AI, emphasizing the need for innovative thinking in cybersecurity.</p><p>Chris Roberts then discusses the cybersecurity challenges associated with large language models, emphasizing the importance of data integrity and the risks of misinformation. He explores the dual nature of AI, highlighting both its potential benefits in healthcare and the ethical dilemmas it presents. The discussion also delves into future AI developments, including the concept of digital twins. Chris shares resources for learning about AI and cybersecurity and offers advice for tech founders.</p><p>Here is a list of movies, books, and useful links mentioned in the transcript:</p><p><strong>Movies:</strong></p><p>1. Inception</p><p>2. Tenet </p><p><strong>Books:</strong></p><p>1. Terry Pratchett's and Stephen Baxter's "The Long Earth" series</p><p>2. "Neuromancer" by William Gibson</p><p><strong>Other References:</strong></p><p>1. Douglas Adams - Known for his work in the early days of gaming and command line games, as well as "Hitchhiker's Guide to the Galaxy".</p><p>2. Starship Titanic - A game mentioned in connection with Douglas Adams.</p><p>3.
Dungeons &amp; Dragons - Mentioned in the context of imagery and AI model training.</p><p>4. ChatGPT - Mentioned as a tool to play the old text-based "Adventure" game.</p><p><strong>Useful Links (Organizations/Resources):</strong></p><p>1. AI Risk, Inc. - Resource for managing risks, including cybersecurity specifically for LLMs. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a></p><p>2. Purdue University - Noted for having research papers and resources on AI and cybersecurity. Plus they have a flight school! <a href="https://www.purdue.edu/" rel="noopener noreferrer" target="_blank">https://www.purdue.edu/</a> </p><p>3. World Wide Technology (WWT) - Mentioned in relation to AI days and resources. <a href="https://www.wwt.com/aiday" rel="noopener noreferrer" target="_blank">https://www.wwt.com/aiday</a> </p><p>4. DEF CON - Hacker and cybersecurity conference. <a href="https://defcon.org/" rel="noopener noreferrer" target="_blank">https://defcon.org/</a></p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">f21116dc-25ef-4055-b0b1-9e7cc3d6b8eb</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Wed, 23 Oct 2024 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d1c6a1c7-dd68-435d-bab9-ed80daadef46/17-Chris-Roberts-master.mp3" length="122136993" type="audio/mpeg"/><itunes:duration>50:53</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>17</itunes:episode><podcast:episode>17</podcast:episode><podcast:season>1</podcast:season></item><item><title>Transform Your Company&apos;s Brand with AI -- Authentic Insights from Greg Kahn</title><itunes:title>Transform Your Company&apos;s Brand with AI -- Authentic Insights from Greg Kahn</itunes:title><description><![CDATA[<p>In the AI 
Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="http://aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="http://troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this conversation, Alec Crawford interviews Greg Kahn, CEO of GK Digital Ventures, discussing his diverse background in economics, film, and technology. They explore the evolution of media and advertising, the implications of AI, and the importance of ethical considerations in technology. Greg shares insights on AI productivity tools, the Internet of Things, and AI in media. He emphasizes the significance of authenticity in brand building and provides advice for aspiring media professionals and organizations looking to integrate AI into their operations.</p><p>AI Trailblazers&nbsp;Conference - Website: <a href="http://aitrailblazers.io/" rel="noopener noreferrer" target="_blank">http://aitrailblazers.io</a> </p><p><strong><u>Books</u></strong></p><p><em>The Coming Wave</em>&nbsp;by Mustafa Suleyman</p><p><strong><u>Movies</u></strong></p><p><em>Braveheart</em>&nbsp;with Mel Gibson</p><p><strong><u>Resources</u></strong></p><p><strong>Artificial Intelligence Risk, Inc.</strong> <a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com/</a></p><p><strong>GK Digital Ventures </strong><a href="https://www.gkdigitalventures.com/" rel="noopener noreferrer" target="_blank">https://www.gkdigitalventures.com/</a></p><p><strong>Omnicom Group</strong> <a href="https://www.omnicomgroup.com/" rel="noopener noreferrer" 
target="_blank">https://www.omnicomgroup.com/</a></p><p><strong>Warner Bros. Discovery</strong> <a href="https://www.wbd.com/" rel="noopener noreferrer" target="_blank">https://www.wbd.com/</a></p><p><strong>Publicis Groupe</strong> <a href="https://www.publicis.com/" rel="noopener noreferrer" target="_blank">https://www.publicis.com/</a></p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="http://aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="http://troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In this conversation, Alec Crawford interviews Greg Kahn, CEO of GK Digital Ventures, discussing his diverse background in economics, film, and technology. They explore the evolution of media and advertising, the implications of AI, and the importance of ethical considerations in technology. Greg shares insights on AI productivity tools, the Internet of Things, and AI in media. 
He emphasizes the significance of authenticity in brand building and provides advice for aspiring media professionals and organizations looking to integrate AI into their operations.</p><p>AI Trailblazers&nbsp;Conference - Website: <a href="http://aitrailblazers.io/" rel="noopener noreferrer" target="_blank">http://aitrailblazers.io</a> </p><p><strong><u>Books</u></strong></p><p><em>The Coming Wave</em>&nbsp;by Mustafa Suleyman</p><p><strong><u>Movies</u></strong></p><p><em>Braveheart</em>&nbsp;with Mel Gibson</p><p><strong><u>Resources</u></strong></p><p><strong>Artificial Intelligence Risk, Inc.</strong> <a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com/</a></p><p><strong>GK Digital Ventures </strong><a href="https://www.gkdigitalventures.com/" rel="noopener noreferrer" target="_blank">https://www.gkdigitalventures.com/</a></p><p><strong>Omnicom Group</strong> <a href="https://www.omnicomgroup.com/" rel="noopener noreferrer" target="_blank">https://www.omnicomgroup.com/</a></p><p><strong>Warner Bros. 
Discovery</strong> <a href="https://www.wbd.com/" rel="noopener noreferrer" target="_blank">https://www.wbd.com/</a></p><p><strong>Publicis Groupe</strong> <a href="https://www.publicis.com/" rel="noopener noreferrer" target="_blank">https://www.publicis.com/</a></p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">812d01d1-cb8d-4a70-8dcc-142861540c84</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 15 Oct 2024 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b978ae11-cf64-4fd9-b907-6e9b775bbdaf/16-Greg-Kahn-master.mp3" length="80326447" type="audio/mpeg"/><itunes:duration>33:28</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>16</itunes:episode><podcast:episode>16</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI in Healthcare: Top of the First Inning, with Brigham Hyde, Co-Founder &amp; CEO of Atropos Health</title><itunes:title>AI in Healthcare: Top of the First Inning, with Brigham Hyde, Co-Founder &amp; CEO of Atropos Health</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. 
You can hear the difference.</p><p>In Episode 15 of AI Risk Reward, Brigham Hyde, co-founder and CEO of Atropos Health, discusses his background in pharmacology and his interest in AI. He explains how Atropos Health is using AI to bridge the evidence gap in healthcare and provide personalized care based on real-world data. Brigham also talks about his involvement with HIMSS and Audere Capital. He shares his thoughts on the VC funding environment and the potential impact of interest rate cuts. Brigham Hyde discusses the challenges of excessive capital in the VC market and the need for thoughtful deployment of capital. He emphasizes the importance of building healthy businesses and the shift towards a more operator-focused approach. Hyde also addresses the ethical challenges of integrating AI in healthcare and the need for transparency and quality in AI models. He discusses the current state of AI adoption in healthcare and the regulatory and infrastructure changes required for widespread implementation.</p><p><strong>Resources/Items Mentioned</strong></p><p>&nbsp;&nbsp;- "The Lean Startup" by Eric Ries</p><p>&nbsp;&nbsp;- "Shogun" by James Clavell</p><p>&nbsp;&nbsp;- Louis Lasagna: Known for his work related to the placebo effect and involvement in the FDA regulation through the Fen-Phen trials.</p><p>&nbsp;&nbsp;- Stanford University's Green Button Project: An AI-related project for generating real-world evidence at the point of care.</p><p>&nbsp;&nbsp;- Symphony AI: A Silicon Valley AI-focused fund.</p><p>&nbsp;&nbsp;- HIMSS (Healthcare Information and Management Systems Society): A mission-driven nonprofit organization in health innovation.</p><p>&nbsp;&nbsp;- Coalition for Health AI: Ethical guidelines for AI in healthcare.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc.
<a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In Episode 15 of AI Risk Reward, Brigham Hyde, co-founder and CEO of Atropos Health, discusses his background in pharmacology and his interest in AI. He explains how Atropos Health is using AI to bridge the evidence gap in healthcare and provide personalized care based on real-world data. Brigham also talks about his involvement with HIMSS and Audere Capital. He shares his thoughts on the VC funding environment and the potential impact of interest rate cuts. Brigham Hyde discusses the challenges of excessive capital in the VC market and the need for thoughtful deployment of capital. He emphasizes the importance of building healthy businesses and the shift towards a more operator-focused approach. Hyde also addresses the ethical challenges of integrating AI in healthcare and the need for transparency and quality in AI models. 
He discusses the current state of AI adoption in healthcare and the regulatory and infrastructure changes required for widespread implementation.</p><p><strong>Resources/Items Mentioned</strong></p><p>&nbsp;&nbsp;- "The Lean Startup" by Eric Ries</p><p>&nbsp;&nbsp;- "Shogun" by James Clavell</p><p>&nbsp;&nbsp;- Louis Lasagna: Known for his work related to the placebo effect and involvement in the FDA regulation through the Fen-Phen trials.</p><p>&nbsp;&nbsp;- Stanford University's Green Button Project: An AI-related project for generating real-world evidence at the point of care.</p><p>&nbsp;&nbsp;- Symphony AI: A Silicon Valley AI-focused fund.</p><p>&nbsp;&nbsp;- HIMSS (Healthcare Information and Management Systems Society): A mission-driven nonprofit organization in health innovation.</p><p>&nbsp;&nbsp;- Coalition for Health AI: Ethical guidelines for AI in healthcare.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">efb19791-a6b4-44f2-99da-a5ca9d1ea180</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Mon, 30 Sep 2024 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/1b1d09cf-cdb3-45b2-acad-0a2ec5407ae0/15-Brigham-Hyde-master.mp3" length="125249744" type="audio/mpeg"/><itunes:duration>52:11</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>15</itunes:episode><podcast:episode>15</podcast:episode><podcast:season>1</podcast:season></item><item><title>How Can We Trust AI? Yagub Rahimov, Founder and CEO of Polygraf</title><itunes:title>How Can We Trust AI? Yagub Rahimov, Founder and CEO of Polygraf</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc.
aicrisk.com interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>In episode 14 of AI Risk Reward, Alec Crawford interviews Yagub Rahimov, Founder and CEO of Polygraf. Yagub discusses privacy in AI, trust in AI, and the journey that led him to his current role. He shares his early experiences in trading and then the AI space.</p><p>Yagub emphasizes the importance of understanding the privacy implications of AI and the need for trustworthiness in AI-generated content. He introduces Polygraf as a tool that detects AI-generated content and provides transparency on its origin and data sources. Yagub also highlights the importance of privacy in AI and the need for responsible AI governance. He shares his concerns about the flaws in AI tools and the potential risks of AI in the future. Additionally, Rahimov talks about his role as a program advisor at Texas A&amp;M and the importance of teaching AI governance to students.</p><p>Links</p><p>AI Content Detection:&nbsp;<a href="https://polygraf.ai/ai-content-detector" rel="noopener noreferrer" target="_blank">https://polygraf.ai/ai-content-detector</a></p><p>AI Governance:&nbsp;<a href="https://polygraf.ai/ai-governance" rel="noopener noreferrer" target="_blank">https://polygraf.ai/ai-governance</a>&nbsp;</p><p>Contact us:&nbsp;<a href="https://polygraf.ai/contact" rel="noopener noreferrer" target="_blank">https://polygraf.ai/contact</a>&nbsp;</p><p>And a special coupon code for the audience who would like to try the most accurate AI Detector on the market:&nbsp;WELCOME10</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. 
aicrisk.com interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com. You can hear the difference.</p><p>In episode 14 of AI Risk Reward, Alec Crawford interviews Yagub Rahimov, Founder and CEO of Polygraf. Yagub discusses privacy in AI, trust in AI, and the journey that led him to his current role. He shares his early experiences in trading and then the AI space.</p><p>Yagub emphasizes the importance of understanding the privacy implications of AI and the need for trustworthiness in AI-generated content. He introduces Polygraf as a tool that detects AI-generated content and provides transparency on its origin and data sources. Yagub also highlights the importance of privacy in AI and the need for responsible AI governance. He shares his concerns about the flaws in AI tools and the potential risks of AI in the future. 
Additionally, Rahimov talks about his role as a program advisor at Texas A&amp;M and the importance of teaching AI governance to students.</p><p>Links</p><p>AI Content Detection:&nbsp;<a href="https://polygraf.ai/ai-content-detector" rel="noopener noreferrer" target="_blank">https://polygraf.ai/ai-content-detector</a></p><p>AI Governance:&nbsp;<a href="https://polygraf.ai/ai-governance" rel="noopener noreferrer" target="_blank">https://polygraf.ai/ai-governance</a>&nbsp;</p><p>Contact us:&nbsp;<a href="https://polygraf.ai/contact" rel="noopener noreferrer" target="_blank">https://polygraf.ai/contact</a>&nbsp;</p><p>And a special coupon code for the audience who would like to try the most accurate AI Detector on the market:&nbsp;WELCOME10</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">15d6704e-aebb-42a7-a6d0-3db2aa3f65e7</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Wed, 18 Sep 2024 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c911b9d0-271a-466e-8a49-6a548c83fd94/14-Yagub-Rahimov-master.mp3" length="122814087" type="audio/mpeg"/><itunes:duration>51:10</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>14</itunes:episode><podcast:episode>14</podcast:episode><podcast:season>1</podcast:season></item><item><title>The Future of AI Security with Elie Bursztein, Google Cybersecurity and AI Research Lead</title><itunes:title>The Future of AI Security with Elie Bursztein, Google Cybersecurity and AI Research Lead</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. 
<a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In episode 13 of AI Risk Reward, Alec Crawford interviews Elie Bursztein, Cybersecurity and AI Research Lead at Google DeepMind. They discuss Elie's early experiences with computers, magic tricks, and his art foundation, <a href="https://etteilla.org/en/" rel="noopener noreferrer" target="_blank">https://etteilla.org/en/</a>. They also touch on the Culture series by Iain M. Banks and Elie's PhD thesis on game theory applied to network security. Elie discusses his journey from engineering to research and his experiences at Google. He emphasizes the importance of resilience and the ability to adapt in the face of challenges. Elie also delves into the current state of cybersecurity for large language models, highlighting the need for basic security practices and the challenges of aligning models with ethical standards. He concludes by discussing the potential of AI in finding and patching vulnerabilities.</p><p>Books/Movies/References</p><ul><li>The Culture Series by Iain M. Banks</li><li>The Art of Magic Tricks</li></ul>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio.
You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a>. You can hear the difference.</p><p>In episode 13 of AI Risk Reward, Alec Crawford interviews Elie Bursztein, Cybersecurity and AI Research Lead at Google DeepMind. They discuss Elie's early experiences with computers, magic tricks, and his art foundation, <a href="https://etteilla.org/en/" rel="noopener noreferrer" target="_blank">https://etteilla.org/en/</a>. They also touch on the Culture series by Iain M. Banks and Elie's PhD thesis on game theory applied to network security. Elie discusses his journey from engineering to research and his experiences at Google. He emphasizes the importance of resilience and the ability to adapt in the face of challenges. Elie also delves into the current state of cybersecurity for large language models, highlighting the need for basic security practices and the challenges of aligning models with ethical standards.
He concludes by discussing the potential of AI in finding and patching vulnerabilities.</p><p>Books/Movies/References</p><ul><li>The Culture Series by Iain M. Banks</li><li>The Art of Magic Tricks</li></ul>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">6df860b5-24fe-44ee-839b-f3f875505da0</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 03 Sep 2024 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/633df24a-3fe3-462e-9af0-9ff89d5464d2/13-Elie-Burszstein-master.mp3" length="126377189" type="audio/mpeg"/><itunes:duration>52:39</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>13</itunes:episode><podcast:episode>13</podcast:episode><podcast:season>1</podcast:season></item><item><title>This New Type of AI Is Smarter than LLMs</title><itunes:title>This New Type of AI Is Smarter than LLMs</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a></p><p>Peter Voss, CEO and Chief Scientist of AIGO, discusses his journey in the field of AI and the development of AGI. He shares his experience in programming chips and developing software, as well as his passion for AI and cognitive intelligence.
Voss explains the three waves of AI and where his work fits in the spectrum. He also talks about the applications of AI agents and the challenges in making interactions with AI more intuitive and effective. Voss shares his thoughts on AI governance, ethical questions in AI, and the potential benefits of AGI for humanity.</p><p><strong><u>References</u></strong></p><p>LinkedIn:&nbsp;<a href="https://www.linkedin.com/in/vosspeter/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/vosspeter/</a></p><p>Company:&nbsp;<a href="https://aigo.ai/" rel="noopener noreferrer" target="_blank">https://aigo.ai/</a></p><p>Direct Path to AGI:&nbsp;<a href="https://aigo.ai/insa-integrated-neuro-symbolic-architecture/" rel="noopener noreferrer" target="_blank">https://aigo.ai/insa-integrated-neuro-symbolic-architecture/</a></p><p><strong><u>Books/Movies</u></strong></p><p><strong>"The Mind's I"</strong>&nbsp;by Douglas Hofstadter and Daniel Dennett.</p><p><strong>"The Singularity is Nearer" </strong>by Ray Kurzweil</p><p><strong>"Short Circuit",</strong> feature-length movie</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a></p><p>Peter Voss, CEO and Chief Scientist of AIGO, discusses his journey in the field of AI and the development of AGI. He shares his experience in programming chips and developing software, as well as his passion for AI and cognitive intelligence.
Voss explains the three waves of AI and where his work fits in the spectrum. He also talks about the applications of AI agents and the challenges in making interactions with AI more intuitive and effective. Voss shares his thoughts on AI governance, ethical questions in AI, and the potential benefits of AGI for humanity.</p><p><strong><u>References</u></strong></p><p>LinkedIn:&nbsp;<a href="https://www.linkedin.com/in/vosspeter/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/vosspeter/</a></p><p>Company:&nbsp;<a href="https://aigo.ai/" rel="noopener noreferrer" target="_blank">https://aigo.ai/</a></p><p>Direct Path to AGI:&nbsp;<a href="https://aigo.ai/insa-integrated-neuro-symbolic-architecture/" rel="noopener noreferrer" target="_blank">https://aigo.ai/insa-integrated-neuro-symbolic-architecture/</a></p><p><strong><u>Books/Movies</u></strong></p><p><strong>"The Mind's I"</strong>&nbsp;by Douglas Hofstadter and Daniel Dennett.</p><p><strong>"The Singularity is Nearer" </strong>by Ray Kurzweil</p><p><strong>"Short Circuit",</strong> feature-length movie</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">0dec29b1-7e33-43fa-ab47-7f9105625b40</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 20 Aug 2024 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/a7f181cd-4a7b-4541-bbe6-ed6037ae1056/12-Peter-Voss-master.mp3" length="87796422" type="audio/mpeg"/><itunes:duration>36:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>12</itunes:episode><podcast:episode>12</podcast:episode><podcast:season>1</podcast:season></item><item><title>Do More Faster with AI! Productivity Expert and Bassist Gerald Leonard</title><itunes:title>Do More Faster with AI!
Productivity Expert and Bassist Gerald Leonard</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. aicrisk.com interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com</p><p>Gerald Leonard, CEO of the Leonard Productivity Intelligence Institute, discusses his journey as a professional musician and entrepreneur. Gerald shares his career journey, including his transition from music to IT and his current work as a consultant and author. He highlights the importance of productivity intelligence and how AI can be used to enhance productivity. Gerald provides insights into the ethical considerations of AI and shares advice for aspiring entrepreneurs. He also discusses the underrated aspects of being an angel investor, speaking at a TEDx event, and the movie '42' about Jackie Robinson.</p><p><strong>Titles</strong></p><p>Books</p><p>- Culture Is the Bass by Gerald J. Leonard</p><p>- Workplace Jazz by Gerald J. Leonard</p><p>- A Symphony of Choices by Gerald J. Leonard</p><p>- 1001 Ways to Market Your Book by John Kremer</p><p>- The Goal by Eliyahu M. Goldratt</p><p>- AI 2041 by Kai-Fu Lee and Chen Qiufan</p><p>Other</p><p>- Productivity Smarts podcast by Gerald J. Leonard</p><p>- Ted Lasso (TV series)</p><p>Artists</p><p>- Marcus Miller</p><p>- Victor Wooten</p><p>- Stanley Clarke</p><p>- Jaco Pastorius</p><p>- Louis Johnson</p><p>- Earth, Wind &amp; Fire</p><p>- Commodores</p><p>- Buddy Rich Orchestra</p><p>- Mike Rayburn</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc.
aicrisk.com interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at troutmanstreetaudio.com</p><p>Gerald Leonard, CEO of the Leonard Productivity Intelligence Institute, discusses his journey as a professional musician and entrepreneur. Gerald shares his career journey, including his transition from music to IT and his current work as a consultant and author. He highlights the importance of productivity intelligence and how AI can be used to enhance productivity. Gerald provides insights into the ethical considerations of AI and shares advice for aspiring entrepreneurs. He also discusses the underrated aspects of being an angel investor, speaking at a TEDx event, and the movie '42' about Jackie Robinson.</p><p><strong>Titles</strong></p><p>Books</p><p>- Culture Is the Bass by Gerald J. Leonard</p><p>- Workplace Jazz by Gerald J. Leonard</p><p>- A Symphony of Choices by Gerald J. Leonard</p><p>- 1001 Ways to Market Your Book by John Kremer</p><p>- The Goal by Eliyahu M. Goldratt</p><p>- AI 2041 by Kai-Fu Lee and Chen Qiufan</p><p>Other</p><p>- Productivity Smarts podcast by Gerald J.
Leonard</p><p>- Ted Lasso (TV series)</p><p>Artists</p><p>- Marcus Miller</p><p>- Victor Wooten</p><p>- Stanley Clarke</p><p>- Jaco Pastorius</p><p>- Louis Johnson</p><p>- Earth, Wind &amp; Fire</p><p>- Commodores</p><p>- Buddy Rich Orchestra</p><p>- Mike Rayburn</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">afb44f84-707f-4a57-8236-645b48765f8c</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 23 Jul 2024 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f69f44be-e055-43a5-bf00-462bc786c4a5/12-Gerald-Leonard.mp3" length="95891247" type="audio/mpeg"/><itunes:duration>39:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>11</itunes:episode><podcast:episode>11</podcast:episode><podcast:season>1</podcast:season></item><item><title>Don&apos;t Sue My AI! Scott McKinney on AI Legal Risk/Reward</title><itunes:title>Don&apos;t Sue My AI! Scott McKinney on AI Legal Risk/Reward</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a></p><p>Scott McKinney, partner at Wilson, Sonsini, Goodrich and Rosati PC and co-founder of their global AI practice, discusses his journey with computers, his decision to go to law school, and the impact of books like 'A Brief History of Time' on his interest in learning. He also talks about the use of AI in the legal field, including the development of an AI tool for contract writing and negotiation. Scott shares his insights on AI governance at big companies, the current state of AI regulation in the US, and ethical considerations in AI. He concludes with advice for CEOs adopting AI and undergrads interested in AI.</p><p><strong>Takeaways</strong></p><ul><li>After majoring in computer science, Scott decided to go to law school because he realized that being a coder all day every day was not for him.</li><li>Scott's law firm has developed an AI tool for contract writing and negotiation, which helps companies streamline the review process and come up with proposed responses.</li><li>AI governance at big companies can be challenging, with the risk of either over-governing or adopting a one-size-fits-all approach. Scott advises companies to be thoughtful and flexible in their AI governance.</li><li>The current state of AI regulation in the US involves various existing laws that apply to AI, such as privacy laws and consumer protection laws. Some states are also introducing specific AI regulations. The SEC has also proposed rules in this area.</li><li>Ethical considerations in AI include issues of privacy, bias, and potential misuse of AI tools. 
Scott highlights the importance of being transparent and thoughtful in the use of AI.</li><li>For CEOs adopting AI, Scott advises understanding the specific risks and implications of using AI in their organization and implementing technical measures and clear notices to mitigate potential issues.</li></ul><br/><p><strong>References/Resources</strong></p><p><em>A Brief History of Time</em> by Stephen Hawking</p><p><em>The Three-Body Problem</em> by Liu Cixin</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a></p><p>Scott McKinney, partner at Wilson, Sonsini, Goodrich and Rosati PC and co-founder of their global AI practice, discusses his journey with computers, his decision to go to law school, and the impact of books like 'A Brief History of Time' on his interest in learning. He also talks about the use of AI in the legal field, including the development of an AI tool for contract writing and negotiation. Scott shares his insights on AI governance at big companies, the current state of AI regulation in the US, and ethical considerations in AI. 
He concludes with advice for CEOs adopting AI and undergrads interested in AI.</p><p><strong>Takeaways</strong></p><ul><li>After majoring in computer science, Scott decided to go to law school because he realized that being a coder all day every day was not for him.</li><li>Scott's law firm has developed an AI tool for contract writing and negotiation, which helps companies streamline the review process and come up with proposed responses.</li><li>AI governance at big companies can be challenging, with the risk of either over-governing or adopting a one-size-fits-all approach. Scott advises companies to be thoughtful and flexible in their AI governance.</li><li>The current state of AI regulation in the US involves various existing laws that apply to AI, such as privacy laws and consumer protection laws. Some states are also introducing specific AI regulations. The SEC has also proposed rules in this area.</li><li>Ethical considerations in AI include issues of privacy, bias, and potential misuse of AI tools. Scott highlights the importance of being transparent and thoughtful in the use of AI.</li><li>For CEOs adopting AI, Scott advises understanding the specific risks and implications of using AI in their organization and implementing technical measures and clear notices to mitigate potential issues.</li></ul><br/><p><strong>References/Resources</strong></p><p><em>A Brief History of Time</em> by Stephen Hawking</p><p><em>The Three-Body Problem</em> by Liu Cixin</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">4452d3f5-466a-4a9e-aa33-b679ea4b78da</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Wed, 10 Jul 2024 03:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f69e0fb5-b9c3-4ed0-ba87-eef50a4e9c41/10-Scott-McKinney-master.mp3" length="97466953" 
type="audio/mpeg"/><itunes:duration>40:37</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>10</itunes:episode><podcast:episode>10</podcast:episode><podcast:season>1</podcast:season></item><item><title>Risk Manage Your AI Implementation or Else! With special guest Grant Ecker, Enterprise Architect</title><itunes:title>Risk Manage Your AI Implementation or Else! With special guest Grant Ecker, Enterprise Architect</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="http://aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="http://troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a></p><p>Grant Ecker, Enterprise Architect and founder of a global forum of chief architects, discusses his journey in the world of technology and AI. He shares his experiences including his early encounters with AI. Grant also talks about the importance of focusing on people rather than tasks in leadership roles. He highlights the need for ethical AI governance and the risks associated with AI implementation in large companies. 
Grant provides advice for those interested in AI and shares his favorite sources of information.</p><p><strong>Takeaways</strong></p><ul><li>Focus on people rather than tasks in leadership roles.</li><li>Ethical AI governance is crucial in large companies.</li><li>Stay informed about AI developments through LinkedIn and curated sources (see below).</li><li>Experiment and play with AI to learn and discover its potential.</li><li>Consider the business impact when adopting large language models.</li><li>Curate your LinkedIn feed to follow thought leaders in the AI space.</li><li>Start by deploying simple AI solutions, such as chatbots, to provide value within organizations.</li><li>Critically examine each component of AI architecture to mitigate data leakage risks.</li></ul><br/><p><strong>Resources</strong></p><p><a href="http://aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a></p><p><a href="http://cisa.gov/" rel="noopener noreferrer" target="_blank">cisa.gov</a></p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="http://aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="http://troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a></p><p>Grant Ecker, Enterprise Architect and founder of a global forum of chief architects, discusses his journey in the world of technology and AI. He shares his experiences including his early encounters with AI. Grant also talks about the importance of focusing on people rather than tasks in leadership roles. 
He highlights the need for ethical AI governance and the risks associated with AI implementation in large companies. Grant provides advice for those interested in AI and shares his favorite sources of information.</p><p><strong>Takeaways</strong></p><ul><li>Focus on people rather than tasks in leadership roles.</li><li>Ethical AI governance is crucial in large companies.</li><li>Stay informed about AI developments through LinkedIn and curated sources (see below).</li><li>Experiment and play with AI to learn and discover its potential.</li><li>Consider the business impact when adopting large language models.</li><li>Curate your LinkedIn feed to follow thought leaders in the AI space.</li><li>Start by deploying simple AI solutions, such as chatbots, to provide value within organizations.</li><li>Critically examine each component of AI architecture to mitigate data leakage risks.</li></ul><br/><p><strong>Resources</strong></p><p><a href="http://aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a></p><p><a href="http://cisa.gov/" rel="noopener noreferrer" target="_blank">cisa.gov</a></p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">ae16d95c-fa4f-4b7f-9ae9-8909f5421e8c</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Thu, 27 Jun 2024 05:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/3aaac884-1ec4-4283-830a-b64e8bf3fa3c/9-Grant-Ecker.mp3" length="85636618" type="audio/mpeg"/><itunes:duration>35:41</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>9</itunes:episode><podcast:episode>9</podcast:episode><podcast:season>1</podcast:season></item><item><title>The Risks of Using AI for Investing, with Columbia Business School Prof. 
Harry Mamaysky</title><itunes:title>The Risks of Using AI for Investing, with Columbia Business School Prof. Harry Mamaysky</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a></p><p>Harry Mamaysky, professor at Columbia Business School and co-founder of Quant Street Capital, discusses his early experiences with personal computers, his studies in AI, his PhD thesis on illiquidity in securities pricing, his time at Morgan Stanley, and his meeting with Ray Dalio. 
He also talks about his motivation for starting Quant Street Capital and how they provide analytics and portfolio management services to registered investment advisors.</p><p><strong>Takeaways</strong></p><ul><li>His PhD thesis focused on the impact of illiquidity on securities prices.</li><li>During his time at Morgan Stanley, he worked in the Capital Structure Arbitrage group and learned valuable lessons from his bosses.</li><li>He had the opportunity to meet and have dinner with Ray Dalio, who emphasized the importance of admitting mistakes and learning from them.</li><li>The origin story of Quant Street Capital was driven by Harry's desire to combine research and investing, and to provide better financial advice to clients.</li><li>Quant Street Capital offers portfolio management services and analytics to registered investment advisors.</li><li>Harry has concerns about AI, particularly in areas like deep fakes and hype.</li></ul><br/><p><strong>References</strong></p><p><a href="https://www.quantstreetcapital.com" rel="noopener noreferrer" target="_blank">www.quantstreetcapital.com</a></p><p><a href="https://STAYblog.substack.com" rel="noopener noreferrer" target="_blank">STAYblog.substack.com</a></p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a></p><p>Harry Mamaysky, professor at Columbia Business School and co-founder of Quant Street Capital, discusses his early experiences with personal computers, his studies in AI, his PhD thesis on illiquidity in securities pricing, his time at Morgan Stanley, and his meeting with Ray Dalio. He also talks about his motivation for starting Quant Street Capital and how they provide analytics and portfolio management services to registered investment advisors.</p><p><strong>Takeaways</strong></p><ul><li>His PhD thesis focused on the impact of illiquidity on securities prices.</li><li>During his time at Morgan Stanley, he worked in the Capital Structure Arbitrage group and learned valuable lessons from his bosses.</li><li>He had the opportunity to meet and have dinner with Ray Dalio, who emphasized the importance of admitting mistakes and learning from them.</li><li>The origin story of Quant Street Capital was driven by Harry's desire to combine research and investing, and to provide better financial advice to clients.</li><li>Quant Street Capital offers portfolio management services and analytics to registered investment advisors.</li><li>Harry has concerns about AI, particularly in areas like deep fakes and hype.</li></ul><br/><p><strong>References</strong></p><p><a href="https://www.quantstreetcapital.com" rel="noopener noreferrer" target="_blank">www.quantstreetcapital.com</a></p><p><a href="https://STAYblog.substack.com" rel="noopener noreferrer" target="_blank">STAYblog.substack.com</a></p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">bbc1f9d8-4428-402d-80d8-c175b53cce6d</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Thu, 13 Jun 2024 06:00:00 -0400</pubDate><enclosure 
url="https://podcasts.captivate.fm/media/83334438-de5e-4238-ab25-85b364ac58dc/Harry-Mamaysky-master.mp3" length="118412977" type="audio/mpeg"/><itunes:duration>49:20</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>8</itunes:episode><podcast:episode>8</podcast:episode><podcast:season>1</podcast:season></item><item><title>Brace For Impact: AI and Cybersecurity with Josh Cook</title><itunes:title>Brace For Impact: AI and Cybersecurity with Josh Cook</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a></p><p>Josh Cook, founder at Left of Boom Consulting, discusses his journey from philosophy major to law school and his passion for cybersecurity. The movie <em>The Terminator</em> may have gotten him started on his cybersecurity career journey! He shares stories of encountering Mark Wahlberg and Rick Pitino while waiting tables in Boston. Josh emphasizes the importance of being prepared for cybersecurity incidents and the role of AI in both defense and attack. 
He highlights the legal issues surrounding AI, such as bias and transparency, and offers advice for lawyers using AI.</p><p><strong><u>Takeaways</u></strong></p><ul><li>Being prepared for cybersecurity incidents is crucial for businesses of all sizes.</li><li>AI has the potential to both enhance decision-making and create legal and ethical challenges.</li><li>Lawyers should use AI as a tool, but not rely on it to replace their own judgment.</li></ul><br/><p><strong><u>Resources</u></strong></p><p>IBM Fundamentals in AI course</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a></p><p>Josh Cook, founder at Left of Boom Consulting, discusses his journey from philosophy major to law school and his passion for cybersecurity. The movie <em>The Terminator</em> may have gotten him started on his cybersecurity career journey! He shares stories of encountering Mark Wahlberg and Rick Pitino while waiting tables in Boston. Josh emphasizes the importance of being prepared for cybersecurity incidents and the role of AI in both defense and attack. 
He highlights the legal issues surrounding AI, such as bias and transparency, and offers advice for lawyers using AI.</p><p><strong><u>Takeaways</u></strong></p><ul><li>Being prepared for cybersecurity incidents is crucial for businesses of all sizes.</li><li>AI has the potential to both enhance decision-making and create legal and ethical challenges.</li><li>Lawyers should use AI as a tool, but not rely on it to replace their own judgment.</li></ul><br/><p><strong><u>Resources</u></strong></p><p>IBM Fundamentals in AI course</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">2a6d0119-bc91-407e-a0f7-c5c21195d084</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Wed, 22 May 2024 05:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/749eb8a5-72f5-482c-bf9a-1c4d46224746/7-Joshua-Cook-master.mp3" length="113146691" type="audio/mpeg"/><itunes:duration>47:09</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>7</itunes:episode><podcast:episode>7</podcast:episode><podcast:season>1</podcast:season></item><item><title>Will AI Devastate the Job Market? An Interview with AI Ethicist Kevin LaGrandeur</title><itunes:title>Will AI Devastate the Job Market? An Interview with AI Ethicist Kevin LaGrandeur</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, <strong>Alec Crawford</strong>, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a></p><p>This episode's guest is Dr. Kevin LaGrandeur, Director of Research at the Global AI Ethics Institute. He discusses his background and research priorities in AI and ethics. He highlights the importance of considering the ethical implications of AI and the need for AI ethics specialists to ensure responsible use. He also explores the potential benefits and challenges of AI in various fields, such as medicine and government. He is concerned about massive job losses over the next 10-15 years before the number of jobs AI creates catches up with the number of jobs lost. LaGrandeur emphasizes the need for businesses and organizations to prioritize ethics in AI development and implementation. He also discusses the dangers of AI hype and the importance of accurately representing AI capabilities.</p><p><strong>Takeaways</strong></p><ul><li>Ethics should be a priority in AI development and implementation to ensure responsible and beneficial use.</li><li>AI has the potential to bring significant advancements in fields like medicine and communication.</li><li>Businesses and organizations should prioritize AI ethics to address concerns and ensure alignment with human values.</li><li>AI hype can lead to misinformation and unrealistic expectations, which can have negative, even lethal, consequences.</li></ul><br/><p><strong>Resources</strong></p><p>The Global AI Ethics Institute <a href="https://globalethics.ai/" rel="noopener noreferrer" target="_blank">https://globalethics.ai/</a></p><p>The Consequences of AI Hype and Misinformation <a href="https://link.springer.com/article/10.1007/s43681-023-00352-y" rel="noopener noreferrer" target="_blank">https://link.springer.com/article/10.1007/s43681-023-00352-y</a> </p><p>Prioritizing Ethics in AI Development and Implementation</p><p>Blade Runner</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our 
host, <strong>Alec Crawford</strong>, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com/" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a></p><p>This episode's guest is Dr. Kevin LaGrandeur, Director of Research at the Global AI Ethics Institute. He discusses his background and research priorities in AI and ethics. He highlights the importance of considering the ethical implications of AI and the need for AI ethics specialists to ensure responsible use. He also explores the potential benefits and challenges of AI in various fields, such as medicine and government. He is concerned about massive job losses over the next 10-15 years before the number of jobs AI creates catches up with the number of jobs lost. LaGrandeur emphasizes the need for businesses and organizations to prioritize ethics in AI development and implementation. 
He also discusses the dangers of AI hype and the importance of accurately representing AI capabilities.</p><p><strong>Takeaways</strong></p><ul><li>Ethics should be a priority in AI development and implementation to ensure responsible and beneficial use.</li><li>AI has the potential to bring significant advancements in fields like medicine and communication.</li><li>Businesses and organizations should prioritize AI ethics to address concerns and ensure alignment with human values.</li><li>AI hype can lead to misinformation and unrealistic expectations, which can have negative, even lethal, consequences.</li></ul><br/><p><strong>Resources</strong></p><p>The Global AI Ethics Institute <a href="https://globalethics.ai/" rel="noopener noreferrer" target="_blank">https://globalethics.ai/</a></p><p>The Consequences of AI Hype and Misinformation <a href="https://link.springer.com/article/10.1007/s43681-023-00352-y" rel="noopener noreferrer" target="_blank">https://link.springer.com/article/10.1007/s43681-023-00352-y</a> </p><p>Prioritizing Ethics in AI Development and Implementation</p><p>Blade Runner</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">03b6a96f-f809-4119-838e-f7eb021edc66</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Wed, 08 May 2024 05:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/7da4877a-b992-43e3-9fe6-59b0ece22f6f/6-Kevin-LaGrandeur-master.mp3" length="116486185" type="audio/mpeg"/><itunes:duration>48:32</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>6</itunes:episode><podcast:episode>6</podcast:episode><podcast:season>1</podcast:season></item><item><title>Avoid These Key AI Mistakes with Reid Blackman, &quot;Ethical Machines&quot; Author</title><itunes:title>Avoid These Key AI Mistakes with Reid 
Blackman, &quot;Ethical Machines&quot; Author</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, <strong>Alec Crawford</strong>, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a></p><p><strong>Reid Blackman</strong>, author of Ethical Machines and founder of Virtue Consultants, discusses his background, philosophy on founding companies, interest in ethics and philosophy, and experience as a philosophy professor. He shares his research focus and reading preferences (papers, not books), as well as his insights on preparing firms and employees for AI adoption.</p><p>Reid also explores the challenges of transparency, explainability, privacy, deepfakes, and AI's ability to make ethical distinctions. 
He offers advice for college students interested in AI as well as CEOs implementing AI.</p><p><strong>Takeaways</strong></p><ul><li>Reid Blackman emphasizes the importance of transparency and explainability in AI systems to build trust and mitigate ethical risks.</li><li>Privacy concerns in AI extend beyond data anonymization, as inferences made by AI systems can also violate privacy.</li><li>The future of AI will depend on the collective impact of AI applications, and it is crucial to consider the ethical implications of these applications.</li><li>Reid Blackman advises college students interested in AI to consider their specific goals within the field and pursue relevant disciplines such as math, computer science, linguistics, or philosophy.</li><li>For CEOs implementing AI, Reid Blackman's company, Virtue Consultants, provides guidance and support to ensure responsible and ethical AI practices. </li></ul><br/><p><strong><u>Links</u></strong></p><p><a href="https://www.virtueconsultants.com/" rel="noopener noreferrer" target="_blank">https://www.virtueconsultants.com/</a> </p><p><a href="https://en.wikipedia.org/wiki/Brazilian_jiu-jitsu" rel="noopener noreferrer" target="_blank">https://en.wikipedia.org/wiki/Brazilian_jiu-jitsu</a></p><p><strong><u>Books</u></strong></p><p><em>Ethical Machines</em> by Reid Blackman</p><p><strong><u>Movies</u></strong></p><p>Dead Poets Society</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, <strong>Alec Crawford</strong>, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://aicrisk.com" rel="noopener noreferrer" target="_blank">aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn and at <a href="https://troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">troutmanstreetaudio.com</a></p><p><strong>Reid Blackman</strong>, author of Ethical Machines and founder of Virtue Consultants, discusses his background, philosophy on founding companies, interest in ethics and philosophy, and experience as a philosophy professor. He shares his research focus and reading preferences (papers, not books), as well as his insights on preparing firms and employees for AI adoption.</p><p>Reid also explores the challenges of transparency, explainability, privacy, deepfakes, and AI's ability to make ethical distinctions. He offers advice for college students interested in AI as well as CEOs implementing AI.</p><p><strong>Takeaways</strong></p><ul><li>Reid Blackman emphasizes the importance of transparency and explainability in AI systems to build trust and mitigate ethical risks.</li><li>Privacy concerns in AI extend beyond data anonymization, as inferences made by AI systems can also violate privacy.</li><li>The future of AI will depend on the collective impact of AI applications, and it is crucial to consider the ethical implications of these applications.</li><li>Reid Blackman advises college students interested in AI to consider their specific goals within the field and pursue relevant disciplines such as math, computer science, linguistics, or philosophy.</li><li>For CEOs implementing AI, Reid Blackman's company, Virtue Consultants, provides guidance and support to ensure responsible and ethical AI practices. 
</li></ul><br/><p><strong><u>Links</u></strong></p><p><a href="https://www.virtueconsultants.com/" rel="noopener noreferrer" target="_blank">https://www.virtueconsultants.com/</a> </p><p><a href="https://en.wikipedia.org/wiki/Brazilian_jiu-jitsu" rel="noopener noreferrer" target="_blank">https://en.wikipedia.org/wiki/Brazilian_jiu-jitsu</a></p><p><strong><u>Books</u></strong></p><p><em>Ethical Machines</em> by Reid Blackman</p><p><strong><u>Movies</u></strong></p><p>Dead Poets Society</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">b3e65700-0ce5-4b37-a146-d0f200c7564e</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 02 Apr 2024 05:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/526200f8-f77e-40d0-9b03-79f42c933702/5-Reid-Blackman-master.mp3" length="107239883" type="audio/mpeg"/><itunes:duration>44:41</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>5</itunes:episode><podcast:episode>5</podcast:episode><podcast:season>1</podcast:season></item><item><title>Trustworthy AI with Mike Hind at IBM Research</title><itunes:title>Trustworthy AI with Mike Hind at IBM Research</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. <a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">https://www.troutmanstreetaudio.com</a> </p><p>In this episode, Mike Hind, a distinguished research staff member at IBM, shares his journey in computer science and his focus on trustworthy AI. He discusses his early interest in computers and his thesis on automatic parallelization. Mike also talks about the perception of AI in the 1980s and 1990s and the current focus on trustworthy AI at IBM. He highlights the importance of ethical AI and the challenges of explainability.</p><p>Mike shares his insights on potential applications and challenges of AI in the future and provides advice for leaders preparing for AI adoption. Finally, he shares his thoughts on various topics in the underrated or overrated segment. In this conversation, Alec and Mike Hind discuss various topics, including New York pizza, quantum computing, and Monty Python and the Holy Grail. </p><p><strong><u>Books</u></strong></p><p><em>The Power Broker: Robert Moses and the Fall of New York</em>, by Robert Caro</p><p><strong><u>Movies</u></strong></p><p><em>Monty Python and the Holy Grail</em></p><p><strong><u>References/Links</u></strong></p><p>LUNAR: <a href="https://en.wikipedia.org/wiki/William_Aaron_Woods" rel="noopener noreferrer" target="_blank">https://en.wikipedia.org/wiki/William_Aaron_Woods</a></p><p>Mike Hind:&nbsp;<a href="https://research.ibm.com/people/michael-hind" rel="noopener noreferrer" target="_blank">https://research.ibm.com/people/michael-hind</a></p><p>Research:&nbsp;<a href="https://aifs360.res.ibm.com/" rel="noopener noreferrer" target="_blank">https://aifs360.res.ibm.com/</a></p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. 
<a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com</a> interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">https://www.troutmanstreetaudio.com</a> </p><p>In this episode, Mike Hind, a distinguished research staff member at IBM, shares his journey in computer science and his focus on trustworthy AI. He discusses his early interest in computers and his thesis on automatic parallelization. Mike also talks about the perception of AI in the 1980s and 1990s and the current focus on trustworthy AI at IBM. He highlights the importance of ethical AI and the challenges of explainability.</p><p>Mike shares his insights on potential applications and challenges of AI in the future and provides advice for leaders preparing for AI adoption. Finally, he shares his thoughts on various topics in the underrated or overrated segment. In this conversation, Alec and Mike Hind discuss various topics, including New York pizza, quantum computing, and Monty Python and the Holy Grail. 
</p><p><strong><u>Books</u></strong></p><p><em>The Power Broker: Robert Moses and the Fall of New York</em>, by Robert Caro</p><p><strong><u>Movies</u></strong></p><p><em>Monty Python and the Holy Grail</em></p><p><strong><u>References/Links</u></strong></p><p>LUNAR: <a href="https://en.wikipedia.org/wiki/William_Aaron_Woods" rel="noopener noreferrer" target="_blank">https://en.wikipedia.org/wiki/William_Aaron_Woods</a></p><p>Mike Hind:&nbsp;<a href="https://research.ibm.com/people/michael-hind" rel="noopener noreferrer" target="_blank">https://research.ibm.com/people/michael-hind</a></p><p>Research:&nbsp;<a href="https://aifs360.res.ibm.com/" rel="noopener noreferrer" target="_blank">https://aifs360.res.ibm.com/</a></p>
Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by&nbsp;<strong>Troutman Street Audio</strong>. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">https://www.troutmanstreetaudio.com</a></p><p>In this episode, Alec interviews <strong>Chris Donahoe,</strong> Executive Vice President and Head of AI Strategy at Edelman-Smithfield, who discusses his background, education, and career journey. Chris shares insights on the impact of AI on the communications industry and the need for professionals to adapt and embrace responsible innovation. Chris also provides advice for young professionals interested in the field of AI and emphasizes the importance of understanding the implications and potential risks of AI. He discusses the role of AI in communications and the challenges and opportunities it presents. Chris concludes by discussing the societal and regulatory aspects of AI and the need for a responsible approach to its development and implementation.</p><p><strong>Links</strong></p><p><a href="https://www.edelman.com/" rel="noopener noreferrer" target="_blank">https://www.edelman.com</a>&nbsp;</p><p>https://www.sec.gov/news/press-release/2023-140</p><p><a href="https://www.linkedin.com/pulse/why-your-ai-narrative-need-refresh-2024-chris-donahoe-8k6ec/?trackingId=2rv5jQNwSiy%2BSIoa82%2FaMQ%3D%3D" rel="noopener noreferrer" target="_blank">Why your AI narrative will need a refresh in 2024</a></p><p><a href="https://www.edelmansmithfield.com/Recent-AI-Executive-Order-Carries-Implications-for-Financial-Communications" rel="noopener noreferrer" target="_blank">Recent AI Executive Order Carries Implications for Financial Communications</a></p><p><a href="https://www.linkedin.com/pulse/ai-powered-businesses-can-learn-key-lessons-from-esg-movement-chris/?trackingId=%2FHkEoFUjSd2qGqTmZCcQDQ%3D%3D" rel="noopener noreferrer" 
target="_blank">AI-Powered Businesses Can Learn Key Lessons from the ESG Movement</a></p><p><strong>Books</strong></p><p>The Years of Lyndon Johnson by Robert A. Caro</p><p>Winston Churchill by William Manchester</p><p><strong>Movie</strong></p><p>The Lives of Others</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host,&nbsp;<strong>Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc.&nbsp;</strong><a href="https://www.aicrisk.com/" rel="noopener noreferrer" target="_blank">https://www.aicrisk.com</a>&nbsp;interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole.</p><p>Podcast production and sound engineering by&nbsp;<strong>Troutman Street Audio</strong>. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">https://www.troutmanstreetaudio.com</a></p><p>In this episode, Alec interviews <strong>Chris Donahoe,</strong> Executive Vice President and Head of AI Strategy at Edelman-Smithfield, who discusses his background, education, and career journey. Chris shares insights on the impact of AI on the communications industry and the need for professionals to adapt and embrace responsible innovation. Chris also provides advice for young professionals interested in the field of AI and emphasizes the importance of understanding the implications and potential risks of AI. He discusses the role of AI in communications and the challenges and opportunities it presents. 
Chris concludes by discussing the societal and regulatory aspects of AI and the need for a responsible approach to its development and implementation.</p><p><strong>Links</strong></p><p><a href="https://www.edelman.com/" rel="noopener noreferrer" target="_blank">https://www.edelman.com</a>&nbsp;</p><p>https://www.sec.gov/news/press-release/2023-140</p><p><a href="https://www.linkedin.com/pulse/why-your-ai-narrative-need-refresh-2024-chris-donahoe-8k6ec/?trackingId=2rv5jQNwSiy%2BSIoa82%2FaMQ%3D%3D" rel="noopener noreferrer" target="_blank">Why your AI narrative will need a refresh in 2024</a></p><p><a href="https://www.edelmansmithfield.com/Recent-AI-Executive-Order-Carries-Implications-for-Financial-Communications" rel="noopener noreferrer" target="_blank">Recent AI Executive Order Carries Implications for Financial Communications</a></p><p><a href="https://www.linkedin.com/pulse/ai-powered-businesses-can-learn-key-lessons-from-esg-movement-chris/?trackingId=%2FHkEoFUjSd2qGqTmZCcQDQ%3D%3D" rel="noopener noreferrer" target="_blank">AI-Powered Businesses Can Learn Key Lessons from the ESG Movement</a></p><p><strong>Books</strong></p><p>The Years of Lyndon Johnson by Robert A. 
Caro</p><p>Winston Churchill by William Manchester</p><p><strong>Movie</strong></p><p>The Lives of Others</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">bbfacd6e-18ed-4b72-b70b-182219252eef</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Tue, 27 Feb 2024 03:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/7c2971b3-818b-4667-ba4c-f9183a248958/3-Chris-Donahoe-master.mp3" length="111466495" type="audio/mpeg"/><itunes:duration>46:27</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>3</itunes:episode><podcast:episode>3</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI Drug Discovery with Rick Fine, Founder and CEO at Phronesis AI</title><itunes:title>AI Drug Discovery with Rick Fine, Founder and CEO at Phronesis AI</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, <strong>Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. </strong>(https://www.aicrisk.com), interviews guests about Artificial Intelligence. They discuss how to use AI effectively to generate value and the different risks AI creates, as well as how to mitigate risks with governance, risk, compliance, and cybersecurity (GRCC).</p><p>Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">https://www.troutmanstreetaudio.com</a>. </p><p>In this episode, Alec interviews <strong>Rick Fine, Co-Founder and CEO of Phronesis AI</strong>. Rick discusses his background in building companies and the lessons he has learned along the way. He shares his love for independence and adventure, going back to his experiences as a latchkey kid. 
Rick also talks about Sleeping Donkey, a company he co-founded for partnering in business growth.</p><p>The conversation then transitions to drug discovery and the use of AI in the process. Rick emphasizes the genius and character of his partner, Will Spagnoli, and explains the concept of <em>de novo</em> molecules. He discusses the strategy for Phronesis Artificial Intelligence, focusing on partnerships and licensing. Rick shares the partnership with Temple University and the potential value of their output. He also discusses the ethical considerations of AI and expresses optimism for the future.</p><p>Key Takeaways</p><ul><li>Building companies requires perseverance and a willingness to take action and risk.</li><li>Independence and adventure can shape one's entrepreneurial journey.</li><li>AI plays a crucial role in drug discovery, enabling the design of de novo molecules.</li><li>Partnerships and licensing can accelerate the development and commercialization of drugs.</li><li>Ethical considerations and responsible use of AI are important in the industry.</li><li>Optimism for the future lies in meeting basic needs, promoting peace, and advancing healthcare.</li></ul><br/><p><strong><u>Resources</u></strong></p><p>https://www.phronesisai.com/ </p><p>https://www.linkedin.com/company/phronesis-ai/ </p><p><strong><u>Books Mentioned</u></strong></p><p>Huckleberry Finn by Mark Twain</p><p><br></p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, <strong>Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc. </strong>(https://www.aicrisk.com), interviews guests about Artificial Intelligence. They discuss how to use AI effectively to generate value and the different risks AI creates, as well as how to mitigate risks with governance, risk, compliance, and cybersecurity (GRCC).</p><p>Podcast production and sound engineering by Troutman Street Audio. 
You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com/" rel="noopener noreferrer" target="_blank">https://www.troutmanstreetaudio.com</a>. </p><p>In this episode, Alec interviews <strong>Rick Fine, Co-Founder and CEO of Phronesis AI</strong>. Rick discusses his background in building companies and the lessons he has learned along the way. He shares his love for independence and adventure, going back to his experiences as a latchkey kid. Rick also talks about Sleeping Donkey, a company he co-founded for partnering in business growth.</p><p>The conversation then transitions to drug discovery and the use of AI in the process. Rick emphasizes the genius and character of his partner, Will Spagnoli, and explains the concept of <em>de novo</em> molecules. He discusses the strategy for Phronesis Artificial Intelligence, focusing on partnerships and licensing. Rick shares the partnership with Temple University and the potential value of their output. He also discusses the ethical considerations of AI and expresses optimism for the future.</p><p>Key Takeaways</p><ul><li>Building companies requires perseverance and a willingness to take action and risk.</li><li>Independence and adventure can shape one's entrepreneurial journey.</li><li>AI plays a crucial role in drug discovery, enabling the design of de novo molecules.</li><li>Partnerships and licensing can accelerate the development and commercialization of drugs.</li><li>Ethical considerations and responsible use of AI are important in the industry.</li><li>Optimism for the future lies in meeting basic needs, promoting peace, and advancing healthcare.</li></ul><br/><p><strong><u>Resources</u></strong></p><p>https://www.phronesisai.com/ </p><p>https://www.linkedin.com/company/phronesis-ai/ </p><p><strong><u>Books Mentioned</u></strong></p><p>Huckleberry Finn by Mark Twain</p><p><br></p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid 
isPermaLink="false">79ed9634-15e3-4295-a679-b53923c06a4e</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Sun, 04 Feb 2024 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/aa7618d3-6d92-4420-a35f-6ad999999492/2-Rick-Fine-master.mp3" length="75881451" type="audio/mpeg"/><itunes:duration>31:37</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>2</itunes:episode><podcast:episode>2</podcast:episode><podcast:season>1</podcast:season></item><item><title>Investing in AI with Craig Lifschutz, Founding Partner at Lytical Ventures</title><itunes:title>Investing in AI with Craig Lifschutz, Founding Partner at Lytical Ventures</itunes:title><description><![CDATA[<p>In the AI Risk Reward podcast, our host, <strong>Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc.</strong>, interviews guests about Artificial Intelligence. They discuss how to use AI effectively to generate value and the different risks AI creates, as well as how to mitigate risks with governance, risk, compliance, and cybersecurity (GRCC).</p><p>This podcast is produced by <strong>Troutman Street Audio</strong>. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">https://www.troutmanstreetaudio.com</a>. </p><p>In this episode, Alec interviews <strong>Craig Lifschutz, Managing Director at Lyrical Partners and Founding Partner at Lytical Ventures</strong>. Craig is a founder as well as a venture capitalist and focuses on seed through the "A" round, which he believes is the sweet spot for investing in technology companies.</p><p>Craig emphasizes the importance of ethics in AI and investing, stating that good investing doesn’t involve exploiting or manipulating people. 
He doesn’t endorse investing in products that negatively affect customers, such as using personal data to sway voters, and prefers companies that protect private data and maintain the integrity of systems.</p><p>Craig underlines the necessity for proper AI regulation, which should be collaborative with both legislators and industry experts who understand the technology's potential and risks. He points out the current mismatch between the rapid development of AI capabilities and slower legislative processes, highlighting the need for well-intentioned and knowledgeable people to determine what implementations of AI are acceptable.</p><p>Craig's career journey includes working in asset management while attending NYU Stern, transitioning from client service to an analyst role, running investments for a family office, managing a special situations hedge fund, and starting an e-commerce company.</p><p>Craig's decision to join Lyrical Partners came about through an introduction to Jeff Keswin, which led to a significant career in the investment field spanning over a decade.</p><p><strong>References/Links</strong></p><p>Against the Gods by Peter Bernstein</p><p>Manias, Panics, and Crashes by Charles Kindleberger</p><p>Alexander Hamilton by Ron Chernow (which inspired the Broadway show)</p><p>The Terminator (movie)</p><p><a href="https://www.lyticalventures.com/" rel="noopener noreferrer" target="_blank">Lytical Ventures</a></p><p><a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></p><p>Copyright (c) 2024 Artificial Intelligence Risk, Inc.</p>]]></description><content:encoded><![CDATA[<p>In the AI Risk Reward podcast, our host, <strong>Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc.</strong>, interviews guests about Artificial Intelligence. 
They discuss how to use AI effectively to generate value and the different risks AI creates, as well as how to mitigate risks with governance, risk, compliance, and cybersecurity (GRCC).</p><p>This podcast is produced by <strong>Troutman Street Audio</strong>. You can find them on LinkedIn and at <a href="https://www.troutmanstreetaudio.com" rel="noopener noreferrer" target="_blank">https://www.troutmanstreetaudio.com</a>. </p><p>In this episode, Alec interviews <strong>Craig Lifschutz, Managing Director at Lyrical Partners and Founding Partner at Lytical Ventures</strong>. Craig is a founder as well as a venture capitalist and focuses on seed through the "A" round, which he believes is the sweet spot for investing in technology companies.</p><p>Craig emphasizes the importance of ethics in AI and investing, stating that good investing doesn’t involve exploiting or manipulating people. He doesn’t endorse investing in products that negatively affect customers, such as using personal data to sway voters, and prefers companies that protect private data and maintain the integrity of systems.</p><p>Craig underlines the necessity for proper AI regulation, which should be collaborative with both legislators and industry experts who understand the technology's potential and risks. 
He points out the current mismatch between the rapid development of AI capabilities and slower legislative processes, highlighting the need for well-intentioned and knowledgeable people to determine what implementations of AI are acceptable.</p><p>Craig's career journey includes working in asset management while attending NYU Stern, transitioning from client service to an analyst role, running investments for a family office, managing a special situations hedge fund, and starting an e-commerce company.</p><p>Craig's decision to join Lyrical Partners came about through an introduction to Jeff Keswin, which led to a significant career in the investment field spanning over a decade.</p><p><strong>References/Links</strong></p><p>Against the Gods by Peter Bernstein</p><p>Manias, Panics, and Crashes by Charles Kindleberger</p><p>Alexander Hamilton by Ron Chernow (which inspired the Broadway show)</p><p>The Terminator (movie)</p><p><a href="https://www.lyticalventures.com/" rel="noopener noreferrer" target="_blank">Lytical Ventures</a></p><p><a href="https://www.aicrisk.com" rel="noopener noreferrer" target="_blank">Artificial Intelligence Risk, Inc.</a></p><p>Copyright (c) 2024 Artificial Intelligence Risk, Inc.</p>]]></content:encoded><link><![CDATA[https://ai-risk-reward.captivate.fm]]></link><guid isPermaLink="false">d2574141-422d-468a-a5a7-e84d70e6d259</guid><itunes:image href="https://artwork.captivate.fm/09dd24c5-6c4d-4cac-9d1a-8257c0716ebe/rxTtJuE8CtpYwcJbYWydKw9o.png"/><pubDate>Sun, 28 Jan 2024 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/76109def-95cc-40e0-a440-038e63a1568b/1-Craig-Lifschutz-master.mp3" length="122786920" type="audio/mpeg"/><itunes:duration>51:10</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>1</itunes:episode><podcast:episode>1</podcast:episode><podcast:season>1</podcast:season></item></channel></rss>