<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="https://feeds.captivate.fm/style.xsl" type="text/xsl"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><atom:link href="https://feeds.captivate.fm/chaos-agents/" rel="self" type="application/rss+xml"/><title><![CDATA[Chaos Agents]]></title><podcast:guid>e61acce5-1299-523b-bdc4-509df7c75f98</podcast:guid><lastBuildDate>Tue, 14 Apr 2026 15:25:23 +0000</lastBuildDate><generator>Captivate.fm</generator><language><![CDATA[en]]></language><copyright><![CDATA[Copyright 2026 Sara Chipps and Becca Lewy]]></copyright><managingEditor>Sara Chipps and Becca Lewy</managingEditor><itunes:summary><![CDATA[Technologists Sara Chipps and Becca Lewy dive into the chaos of artificial intelligence—unpacking the tech, trends, and ideas reshaping how we work, create, and think. Smart, funny, and just a little bit existential.]]></itunes:summary><image><url>https://artwork.captivate.fm/3a609942-f9c5-4c40-a019-8deec973f14f/ChaosAgentsCoverArt.jpg</url><title>Chaos Agents</title><link><![CDATA[https://chaosagents.ai]]></link></image><itunes:image href="https://artwork.captivate.fm/3a609942-f9c5-4c40-a019-8deec973f14f/ChaosAgentsCoverArt.jpg"/><itunes:owner><itunes:name>Sara Chipps and Becca Lewy</itunes:name></itunes:owner><itunes:author>Sara Chipps and Becca Lewy</itunes:author><description>Technologists Sara Chipps and Becca Lewy dive into the chaos of artificial intelligence—unpacking the tech, trends, and ideas reshaping how we work, create, and think. 
Smart, funny, and just a little bit existential.</description><link>https://chaosagents.ai</link><atom:link href="https://pubsubhubbub.appspot.com" rel="hub"/><itunes:subtitle><![CDATA[The podcast where we learn about AI, so you don't have to]]></itunes:subtitle><itunes:explicit>false</itunes:explicit><itunes:type>episodic</itunes:type><itunes:category text="Technology"></itunes:category><itunes:category text="News"><itunes:category text="Tech News"/></itunes:category><itunes:category text="Education"><itunes:category text="Self-Improvement"/></itunes:category><podcast:locked>no</podcast:locked><podcast:medium>podcast</podcast:medium><item><title>Superpowers, Subagents, and Why Coding Isn’t the Job Anymore - With Jesse Vincent</title><itunes:title>Superpowers, Subagents, and Why Coding Isn’t the Job Anymore - With Jesse Vincent</itunes:title><description><![CDATA[<p>In this episode, Sara and Becca talk with Jesse Vincent about the shift from writing code to managing AI agents.</p><p>They break down how tools like Superpowers turn development into a system of planning, testing, and review, where subagents implement work and other agents validate it. The conversation explores what happens when coding becomes less about syntax and more about taste, judgment, and outcomes.</p><p>They also get into the reality of open source in the AI era, from “slop PRs” to agents impersonating maintainers, and why specs and tests may now matter more than the code itself. 
Along the way, they unpack how AI behaves like a team of eager but chaotic interns, and what it means to manage them effectively.</p><p>If you’ve ever felt like coding is changing faster than you can keep up, this episode explains what comes next.</p><p><strong>Links mentioned:</strong></p><p>⚡ Superpowers (Jesse’s tool) <a href="https://github.com/obra/superpowers" rel="noopener noreferrer" target="_blank">https://github.com/obra/superpowers</a> 🛠️</p><p>🧠 Claude Code (agentic dev tooling) <a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer" target="_blank">https://docs.anthropic.com/en/docs/claude-code</a> 🤖</p><p>📊 Influence (book on persuasion principles) <a href="https://www.influenceatwork.com/book/" rel="noopener noreferrer" target="_blank">https://www.influenceatwork.com/book/</a> 📘</p>]]></description><content:encoded><![CDATA[<p>In this episode, Sara and Becca talk with Jesse Vincent about the shift from writing code to managing AI agents.</p><p>They break down how tools like Superpowers turn development into a system of planning, testing, and review, where subagents implement work and other agents validate it. The conversation explores what happens when coding becomes less about syntax and more about taste, judgment, and outcomes.</p><p>They also get into the reality of open source in the AI era, from “slop PRs” to agents impersonating maintainers, and why specs and tests may now matter more than the code itself. 
Along the way, they unpack how AI behaves like a team of eager but chaotic interns, and what it means to manage them effectively.</p><p>If you’ve ever felt like coding is changing faster than you can keep up, this episode explains what comes next.</p><p><strong>Links mentioned:</strong></p><p>⚡ Superpowers (Jesse’s tool) <a href="https://github.com/obra/superpowers" rel="noopener noreferrer" target="_blank">https://github.com/obra/superpowers</a> 🛠️</p><p>🧠 Claude Code (agentic dev tooling) <a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer" target="_blank">https://docs.anthropic.com/en/docs/claude-code</a> 🤖</p><p>📊 Influence (book on persuasion principles) <a href="https://www.influenceatwork.com/book/" rel="noopener noreferrer" target="_blank">https://www.influenceatwork.com/book/</a> 📘</p>]]></content:encoded><link><![CDATA[https://chaosagents.ai]]></link><guid isPermaLink="false">5487f1c1-f5b6-4be4-9cca-df5608566c30</guid><itunes:image href="https://artwork.captivate.fm/39693ee9-ce57-49f4-b643-fd0e7d943d0e/LousieSquare.jpg"/><pubDate>Tue, 14 Apr 2026 00:30:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/5487f1c1-f5b6-4be4-9cca-df5608566c30.mp3" length="50375296" type="audio/mpeg"/><itunes:duration>41:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>12</itunes:episode><podcast:episode>12</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/099f1795-cb30-4fe0-8295-0169c53f8e64/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/099f1795-cb30-4fe0-8295-0169c53f8e64/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/099f1795-cb30-4fe0-8295-0169c53f8e64/index.html" type="text/html"/><podcast:chapters 
url="https://transcripts.captivate.fm/chapter-3a99d22f-048e-4ec2-b14e-fefe45fad4c7.json" type="application/json+chapters"/></item><item><title>Learn Anything, Build Anything, and the AI Barrier That Isn’t Real - With Navarrow Wright</title><itunes:title>Learn Anything, Build Anything, and the AI Barrier That Isn’t Real - With Navarrow Wright</itunes:title><description><![CDATA[<p>Most people are using AI, but very few actually understand how to use it well.</p><p>In this episode, Sara and Becca talk with Navarrow Wright about what is really happening with AI adoption and why the biggest barrier is mindset, not technology. They break down why prompting matters, why AI is not one-size-fits-all, and how access to these tools is more open than any previous tech shift.</p><p>They also explore the real opportunity: using AI as a thought partner. From tools like NotebookLM to building without traditional technical skills, this episode shows what becomes possible when people actually understand how to use these systems.</p><p>If you feel like everyone else “gets AI” and you do not, this is a great place to start.</p><p><strong>Links mentioned:</strong></p><ul><li>📓 <a href="https://notebooklm.google/" rel="noopener noreferrer" target="_blank">NotebookLM</a></li><li>📚 <a href="https://grow.google/ai/" rel="noopener noreferrer" target="_blank">Gemini AI course</a></li><li>🧠 <a href="https://platform.openai.com/docs/guides/prompt-engineering" rel="noopener noreferrer" target="_blank">Prompting guide</a> (OpenAI)</li><li>💻 Navarrow: <a href="https://www.instagram.com/navarrow/" rel="noopener noreferrer" target="_blank">IG</a>, <a href="https://www.youtube.com/@NavarrowWright" rel="noopener noreferrer" target="_blank">YT</a></li></ul><br/>]]></description><content:encoded><![CDATA[<p>Most people are using AI, but very few actually understand how to use it well.</p><p>In this episode, Sara and Becca talk with Navarrow Wright about what is really happening with AI adoption and why the biggest barrier is mindset, not technology. They break down why prompting matters, why AI is not one-size-fits-all, and how access to these tools is more open than any previous tech shift.</p><p>They also explore the real opportunity: using AI as a thought partner. From tools like NotebookLM to building without traditional technical skills, this episode shows what becomes possible when people actually understand how to use these systems.</p><p>If you feel like everyone else “gets AI” and you do not, this is a great place to start.</p><p><strong>Links mentioned:</strong></p><ul><li>📓 <a href="https://notebooklm.google/" rel="noopener noreferrer" target="_blank">NotebookLM</a></li><li>📚 <a href="https://grow.google/ai/" rel="noopener noreferrer" target="_blank">Gemini AI course</a></li><li>🧠 <a href="https://platform.openai.com/docs/guides/prompt-engineering" rel="noopener noreferrer" target="_blank">Prompting guide</a> (OpenAI)</li><li>💻 Navarrow: <a href="https://www.instagram.com/navarrow/" rel="noopener noreferrer" target="_blank">IG</a>, <a href="https://www.youtube.com/@NavarrowWright" rel="noopener noreferrer" target="_blank">YT</a></li></ul><br/>]]></content:encoded><link><![CDATA[https://chaosagents.ai]]></link><guid isPermaLink="false">ab586e24-71f5-4d6b-901e-2f56ddfbce1a</guid><itunes:image href="https://artwork.captivate.fm/8d9cd330-cff7-4ae7-8cb1-7783ee3d9416/NavarrowSquare.jpg"/><pubDate>Tue, 31 Mar 2026 00:15:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/ab586e24-71f5-4d6b-901e-2f56ddfbce1a.mp3" length="79854667" type="audio/mpeg"/><itunes:duration>55:27</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>11</itunes:episode><podcast:episode>11</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/2645ee68-19de-43af-ba2e-415220e1e4e3/transcript.json"
type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/2645ee68-19de-43af-ba2e-415220e1e4e3/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/2645ee68-19de-43af-ba2e-415220e1e4e3/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-7c79dfc8-bf5e-4915-9115-c3b50f91a309.json" type="application/json+chapters"/></item><item><title>Algorithmic Transference, Generative UI, and Why the Text Box Won - With Louise Macfadyen</title><itunes:title>Algorithmic Transference, Generative UI, and Why the Text Box Won - With Louise Macfadyen</itunes:title><description><![CDATA[<p>We’re joined by Louise Macfadyen, author of <em>Designing AI Interfaces</em>, to talk about how we actually design for AI, and why most companies are getting it wrong.</p><p>We unpack “algorithmic transference” (why one bad AI experience makes you distrust all of them), the limits of the chat interface, and whether the text box is here to stay. Louise walks us through the history from command line → Google search → LLMs, and why designing for ambiguity is still one of the hardest problems in product.</p><p>We also get into generative UI (and why it might be a trap), how large companies should actually integrate AI without breaking user trust, and why understanding user intent matters more than ever. 
Plus: vibe coding, personal AI “jigs,” and the tension between personalization and privacy.</p><p>If you build products, or just use AI, this will change how you think about interfaces.</p><p>📘<a href="https://www.amazon.com/Designing-AI-Interfaces-Principles-Autonomous/dp/B0FYC7XRP7" rel="noopener noreferrer" target="_blank"> Designing AI Interfaces</a> · 🧠 <a href="https://dl.acm.org/doi/10.1145/3442188.3445922" rel="noopener noreferrer" target="_blank">Algorithmic Transference</a> · 🛠️ <a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer" target="_blank">Claude Code</a></p>]]></description><content:encoded><![CDATA[<p>We’re joined by Louise Macfadyen, author of <em>Designing AI Interfaces</em>, to talk about how we actually design for AI, and why most companies are getting it wrong.</p><p>We unpack “algorithmic transference” (why one bad AI experience makes you distrust all of them), the limits of the chat interface, and whether the text box is here to stay. Louise walks us through the history from command line → Google search → LLMs, and why designing for ambiguity is still one of the hardest problems in product.</p><p>We also get into generative UI (and why it might be a trap), how large companies should actually integrate AI without breaking user trust, and why understanding user intent matters more than ever. 
Plus: vibe coding, personal AI “jigs,” and the tension between personalization and privacy.</p><p>If you build products, or just use AI, this will change how you think about interfaces.</p><p>📘<a href="https://www.amazon.com/Designing-AI-Interfaces-Principles-Autonomous/dp/B0FYC7XRP7" rel="noopener noreferrer" target="_blank"> Designing AI Interfaces</a> · 🧠 <a href="https://dl.acm.org/doi/10.1145/3442188.3445922" rel="noopener noreferrer" target="_blank">Algorithmic Transference</a> · 🛠️ <a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer" target="_blank">Claude Code</a></p>]]></content:encoded><link><![CDATA[https://chaosagents.ai]]></link><guid isPermaLink="false">cd3f75ba-728a-4225-a470-14b1824aec5a</guid><itunes:image href="https://artwork.captivate.fm/72e1aac7-07fc-4bd4-b214-743d6f0959dd/LousieSquare.jpg"/><pubDate>Tue, 17 Mar 2026 00:30:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/cd3f75ba-728a-4225-a470-14b1824aec5a.mp3" length="70762565" type="audio/mpeg"/><itunes:duration>49:08</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>10</itunes:episode><podcast:episode>10</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/8cab4374-6784-42d0-b810-c88e8be848f2/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/8cab4374-6784-42d0-b810-c88e8be848f2/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/8cab4374-6784-42d0-b810-c88e8be848f2/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-a0b304f7-0630-498e-ab63-7359d487eac7.json" type="application/json+chapters"/></item><item><title>Personal Agents, Opinion Geometry, and AI That Learns Too Much - With Hayden 
Helm</title><itunes:title>Personal Agents, Opinion Geometry, and AI That Learns Too Much - With Hayden Helm</itunes:title><description><![CDATA[<p>This week, Becca turns her birthday into a <em>Twister</em> masterclass by watching the movie with an AI “director’s commentary” running in real time (including some behind-the-scenes facts that sound… borderline illegal). Then Sara drops a mildly unhinged hot take: we might have already hit AGI—and we’ll never know it, because we don’t even agree on what “counts.”</p><p>From there, we’re joined by <strong>Hayden Helm</strong>, CEO and founder of <strong>Helivan</strong>, to talk about the next wave of agentic AI: personal bots that don’t just answer questions, but <em>act</em>—with tools, permissions, and the ability to change over time.</p><p>We get into:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Why <strong>agent behavior drift</strong> is the real risk (even when the bot still sounds “nice”)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The rising need for <strong>agent observability</strong>: detecting change, quantifying it, and rolling back when things go sideways</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>What happens when agents learn from the environment (and other agents), not just from you</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Hayden’s work on <strong>“likeness”</strong> and <strong>opinion geometry</strong>—making messy human-ish behavior measurable</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Why manual inspection won’t scale in a world of millions of autonomous interactions</li></ol><br/><p>It’s a conversation about trust, safety, and the new security surface area we’re creating—one helpful assistant at a time.</p><p>🛠 <strong>Agent Infrastructure</strong></p><ol><li data-list="bullet"><span class="ql-ui" 
contenteditable="false"></span><a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer" target="_blank">OpenClaw</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://helivan.io" rel="noopener noreferrer" target="_blank">Helivan</a></li></ol><br/><p>🔐 <strong>Agent Payments</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.coinbase.com/developer-platform/discover/launches/agentic-wallets" rel="noopener noreferrer" target="_blank">Coinbase x402 Protocol</a> / Agentic Wallets </li></ol><br/><p>🧨 <strong>Agent Behavior Incident</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>“<a href="https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/" rel="noopener noreferrer" target="_blank">An AI Agent Published a Hit Piece on Me</a>” </li></ol><br/>]]></description><content:encoded><![CDATA[<p>This week, Becca turns her birthday into a <em>Twister</em> masterclass by watching the movie with an AI “director’s commentary” running in real time (including some behind-the-scenes facts that sound… borderline illegal). 
Then Sara drops a mildly unhinged hot take: we might have already hit AGI—and we’ll never know it, because we don’t even agree on what “counts.”</p><p>From there, we’re joined by <strong>Hayden Helm</strong>, CEO and founder of <strong>Helivan</strong>, to talk about the next wave of agentic AI: personal bots that don’t just answer questions, but <em>act</em>—with tools, permissions, and the ability to change over time.</p><p>We get into:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Why <strong>agent behavior drift</strong> is the real risk (even when the bot still sounds “nice”)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The rising need for <strong>agent observability</strong>: detecting change, quantifying it, and rolling back when things go sideways</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>What happens when agents learn from the environment (and other agents), not just from you</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Hayden’s work on <strong>“likeness”</strong> and <strong>opinion geometry</strong>—making messy human-ish behavior measurable</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Why manual inspection won’t scale in a world of millions of autonomous interactions</li></ol><br/><p>It’s a conversation about trust, safety, and the new security surface area we’re creating—one helpful assistant at a time.</p><p>🛠 <strong>Agent Infrastructure</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer" target="_blank">OpenClaw</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://helivan.io" rel="noopener noreferrer" target="_blank">Helivan</a></li></ol><br/><p>🔐 <strong>Agent Payments</strong></p><ol><li data-list="bullet"><span class="ql-ui" 
contenteditable="false"></span><a href="https://www.coinbase.com/developer-platform/discover/launches/agentic-wallets" rel="noopener noreferrer" target="_blank">Coinbase x402 Protocol</a> / Agentic Wallets </li></ol><br/><p>🧨 <strong>Agent Behavior Incident</strong></p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>“<a href="https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/" rel="noopener noreferrer" target="_blank">An AI Agent Published a Hit Piece on Me</a>” </li></ol><br/>]]></content:encoded><link><![CDATA[https://chaosagents.ai]]></link><guid isPermaLink="false">9a04cd24-a6a5-4ce2-a901-e01792ac89a6</guid><itunes:image href="https://artwork.captivate.fm/cdfa5b7d-acf6-480c-bf86-06de06c76dfa/HAYDEN.jpg"/><pubDate>Mon, 02 Mar 2026 00:30:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/9a04cd24-a6a5-4ce2-a901-e01792ac89a6.mp3" length="68538430" type="audio/mpeg"/><itunes:duration>57:07</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>9</itunes:episode><podcast:episode>9</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/6201936c-b20a-4d2e-ad61-07e8d3b20ddf/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/6201936c-b20a-4d2e-ad61-07e8d3b20ddf/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/6201936c-b20a-4d2e-ad61-07e8d3b20ddf/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-85cccf4e-6867-4b19-ba98-437e9bbaea8a.json" type="application/json+chapters"/></item><item><title>Agentic Romance Scams, Open Source Safety, and Anything Bad on the Internet - With Juliet Shen</title><itunes:title>Agentic Romance Scams, Open Source Safety, and Anything Bad on the 
Internet - With Juliet Shen</itunes:title><description><![CDATA[<p>Juliet Shen (Roost, formerly Snap, Grindr, Google) joins us to break down <strong>trust &amp; safety</strong> — aka “anything bad that happens on the internet” — and why AI is changing the game for both attackers and defenders.</p><p>We talk safety-by-design (how to bake guardrails into product), the real human cost of content moderation, and where AI can actually help without pretending it solves everything. Juliet explains how agentic AI scales old harms like romance scams from a handful of conversations to thousands at once — and why open-source safety infrastructure matters when bad actors share tactics faster than platforms do.</p><p>We also dig into what Roost is building: Osprey (an investigation + rules engine) and Coop (a flexible review tool), plus the building blocks smaller teams need to ship products responsibly before the “weird edge cases” arrive.</p><p>If you build products, this one’s a must.</p><p>🛡️ <strong><a href="http://github.com/orgs/roostorg/" rel="noopener noreferrer" target="_blank">Roost</a> (GitHub + tools directory)</strong></p><p>👥 <strong><a href="https://www.tspa.org/" rel="noopener noreferrer" target="_blank">Trust &amp; Safety Professional Association</a> (TSPA)</strong></p><p>🧩 <strong><a href="https://www.prosocialdesign.org/" rel="noopener noreferrer" target="_blank">Prosocial Design Network</a></strong></p><p>📚 <strong><a href="https://yalebooks.yale.edu/book/9780300235883/behind-the-screen/" rel="noopener noreferrer" target="_blank">Behind the Screen</a></strong> (Sarah T. Roberts — content moderation + human cost)</p>]]></description><content:encoded><![CDATA[<p>Juliet Shen (Roost, formerly Snap, Grindr, Google) joins us to break down <strong>trust &amp; safety</strong> — aka “anything bad that happens on the internet” — and why AI is changing the game for both attackers and defenders.</p><p>We talk safety-by-design (how to bake guardrails into product), the real human cost of content moderation, and where AI can actually help without pretending it solves everything. Juliet explains how agentic AI scales old harms like romance scams from a handful of conversations to thousands at once — and why open-source safety infrastructure matters when bad actors share tactics faster than platforms do.</p><p>We also dig into what Roost is building: Osprey (an investigation + rules engine) and Coop (a flexible review tool), plus the building blocks smaller teams need to ship products responsibly before the “weird edge cases” arrive.</p><p>If you build products, this one’s a must.</p><p>🛡️ <strong><a href="http://github.com/orgs/roostorg/" rel="noopener noreferrer" target="_blank">Roost</a> (GitHub + tools directory)</strong></p><p>👥 <strong><a href="https://www.tspa.org/" rel="noopener noreferrer" target="_blank">Trust &amp; Safety Professional Association</a> (TSPA)</strong></p><p>🧩 <strong><a href="https://www.prosocialdesign.org/" rel="noopener noreferrer" target="_blank">Prosocial Design Network</a></strong></p><p>📚 <strong><a href="https://yalebooks.yale.edu/book/9780300235883/behind-the-screen/" rel="noopener noreferrer" target="_blank">Behind the Screen</a></strong> (Sarah T.
Roberts — content moderation + human cost)</p>]]></content:encoded><link><![CDATA[https://chaosagents.ai]]></link><guid isPermaLink="false">3c5d44cf-0704-40a9-9495-30173e86a6a5</guid><itunes:image href="https://artwork.captivate.fm/fead0dc3-421a-46f1-8442-a4aab3c67697/PaulFordSquare.jpg"/><pubDate>Tue, 17 Feb 2026 00:30:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/3c5d44cf-0704-40a9-9495-30173e86a6a5.mp3" length="55702904" type="audio/mpeg"/><itunes:duration>46:25</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>8</itunes:episode><podcast:episode>8</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/a8a13b1f-f2bb-4105-ab1d-aaaa7fd59a2a/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/a8a13b1f-f2bb-4105-ab1d-aaaa7fd59a2a/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/a8a13b1f-f2bb-4105-ab1d-aaaa7fd59a2a/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-358378de-379f-4218-9618-aff727b29b8f.json" type="application/json+chapters"/></item><item><title>Agents in Crypto, Agents at Work, and the Weird New Middle Management - With Meghan Heintz</title><itunes:title>Agents in Crypto, Agents at Work, and the Weird New Middle Management - With Meghan Heintz</itunes:title><description><![CDATA[<p>We’re joined by Meghan Heintz, founding engineer at Herd Labs, to break down where crypto and AI agents actually work. 
We talk prediction markets, smart contracts, wallets, rug pulls, and how AI can finally explain what happened on-chain.</p><p>Then we zoom out to the human side: benchmarking and evals for agents, misinformation, the Mom Test, and what it feels like to manage coding agents instead of junior engineers.</p><p><strong>Links mentioned: </strong><a href="https://polymarket.com/" rel="noopener noreferrer" target="_blank">Polymarket</a> · <a href="https://dune.com/" rel="noopener noreferrer" target="_blank">Dune Analytics</a> · <a href="https://herd.ai/" rel="noopener noreferrer" target="_blank">Herd Labs</a> · <a href="https://momtestbook.com/" rel="noopener noreferrer" target="_blank">The Mom Test</a></p>]]></description><content:encoded><![CDATA[<p>We’re joined by Meghan Heintz, founding engineer at Herd Labs, to break down where crypto and AI agents actually work. We talk prediction markets, smart contracts, wallets, rug pulls, and how AI can finally explain what happened on-chain.</p><p>Then we zoom out to the human side: benchmarking and evals for agents, misinformation, the Mom Test, and what it feels like to manage coding agents instead of junior engineers.</p><p><strong>Links mentioned: </strong><a href="https://polymarket.com/" rel="noopener noreferrer" target="_blank">Polymarket</a> · <a href="https://dune.com/" rel="noopener noreferrer" target="_blank">Dune Analytics</a> · <a href="https://herd.ai/" rel="noopener noreferrer" target="_blank">Herd Labs</a> · <a href="https://momtestbook.com/" rel="noopener noreferrer" target="_blank">The Mom Test</a></p>]]></content:encoded><link><![CDATA[https://chaosagents.ai]]></link><guid isPermaLink="false">7c6dc0da-dfb0-49be-9723-ad35245ecec8</guid><itunes:image href="https://artwork.captivate.fm/faaf93e4-2725-4c52-aa75-fc1d6f0432d7/meghan.jpg"/><pubDate>Tue, 03 Feb 2026 00:30:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/7c6dc0da-dfb0-49be-9723-ad35245ecec8.mp3" length="60687616" 
type="audio/mpeg"/><itunes:duration>50:34</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>7</itunes:episode><podcast:episode>7</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/660face7-d7c1-4a47-81bc-59ebfcf6751d/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/660face7-d7c1-4a47-81bc-59ebfcf6751d/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/660face7-d7c1-4a47-81bc-59ebfcf6751d/index.html" type="text/html"/></item><item><title>Can You Build Anything in a Week? GPUs, Code Gen, and the End of Engineers - With Harper Reed</title><itunes:title>Can You Build Anything in a Week? GPUs, Code Gen, and the End of Engineers - With Harper Reed</itunes:title><description><![CDATA[<p>Becca just got back from NeurIPS, the academic AI conference that feels like an adult science fair. We dig into research on training large AI models across cheap GPUs and slow internet connections—and why that could dramatically lower the barrier to building AI.</p><p>Then we’re joined by Harper Reed, CEO of 2389, for a wide-ranging conversation about code generation, coaching-based engineering teams, and why “production code” might have always been a myth. We talk vibe coding (begrudgingly), the shifting role of software engineers, taste vs. 
technical skill, and what happens when you can build almost anything in a week.</p><p>Smart, funny, and a little unsettling—Chaos Agents at full volume.</p><h2>🎓 Academic AI &amp; research culture</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href=" https://neurips.cc/" rel="noopener noreferrer" target="_blank">NeurIPS</a> (Conference on Neural Information Processing Systems)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href=" https://neurips.cc/Conferences/2024/AcceptedPapers" rel="noopener noreferrer" target="_blank">NeurIPS 2024 Accepted Papers</a></li></ol><br/><h2>🧠 Distributed training, GPUs &amp; efficiency</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href=" https://www.nvidia.com/en-us/data-center/h100/" rel="noopener noreferrer" target="_blank">NVIDIA H100 Tensor Core GPU</a> (referenced GPU class)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href=" https://pluralis.ai/" rel="noopener noreferrer" target="_blank">Pluralis Research</a> (distributed training across low-bandwidth networks)</li></ol><br/><h2>⚙️ Core AI concepts mentioned</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href=" https://www.cloudflare.com/learning/serverless/glossary/gpu-vs-cpu/" rel="noopener noreferrer" target="_blank">GPU vs CPU explained</a> (parallel vs sequential compute)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href=" https://huggingface.co/docs/transformers/v4.40.0/en/perf_train_gpu_many" rel="noopener noreferrer" target="_blank">Data Parallelism vs Model Parallelism</a> (training overview)</li></ol><br/><h2>🧑‍💻 Code generation &amp; developer tools</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href=" https://www.anthropic.com/claude" rel="noopener noreferrer" target="_blank">Claude Code</a> (Anthropic code-gen 
tooling)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.cursor.sh/" rel="noopener noreferrer" target="_blank">Cursor</a> (AI-first code editor, discussed implicitly)</li></ol><br/><h2>🛠️ Agent workflows &amp; infrastructure</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://matrix.org/" rel="noopener noreferrer" target="_blank">Matrix</a> (open-source, decentralized chat protocol)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://modelcontextprotocol.io/" rel="noopener noreferrer" target="_blank">Model Context Protocol</a> (MCP) overview</li></ol><br/><h2>🧩 Utilities &amp; recommendations</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://superpowers.dev/" rel="noopener noreferrer" target="_blank">Jesse Vincent’s Superpowers</a> (Claude workflow enhancer)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://fly.io/" rel="noopener noreferrer" target="_blank">Fly.io</a> (deployment platform referenced)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.netlify.com/" rel="noopener noreferrer" target="_blank">Netlify</a> (deployment &amp; hosting)</li></ol><br/><h2>🧪 Related Chaos Agents context</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://en.wikipedia.org/wiki/Perceptron" rel="noopener noreferrer" target="_blank">Perceptrons &amp; early neural networks</a> (referenced conceptually)</li></ol><br/>]]></description><content:encoded><![CDATA[<p>Becca just got back from NeurIPS, the academic AI conference that feels like an adult science fair. 
We dig into research on training large AI models across cheap GPUs and slow internet connections—and why that could dramatically lower the barrier to building AI.</p><p>Then we’re joined by Harper Reed, CEO of 2389, for a wide-ranging conversation about code generation, coaching-based engineering teams, and why “production code” might have always been a myth. We talk vibe coding (begrudgingly), the shifting role of software engineers, taste vs. technical skill, and what happens when you can build almost anything in a week.</p><p>Smart, funny, and a little unsettling—Chaos Agents at full volume.</p><h2>🎓 Academic AI &amp; research culture</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://neurips.cc/" rel="noopener noreferrer" target="_blank">NeurIPS</a> (Conference on Neural Information Processing Systems)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://neurips.cc/Conferences/2024/AcceptedPapers" rel="noopener noreferrer" target="_blank">NeurIPS 2024 Accepted Papers</a></li></ol><br/><h2>🧠 Distributed training, GPUs &amp; efficiency</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.nvidia.com/en-us/data-center/h100/" rel="noopener noreferrer" target="_blank">NVIDIA H100 Tensor Core GPU</a> (referenced GPU class)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://pluralis.ai/" rel="noopener noreferrer" target="_blank">Pluralis Research</a> (distributed training across low-bandwidth networks)</li></ol><br/><h2>⚙️ Core AI concepts mentioned</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.cloudflare.com/learning/serverless/glossary/gpu-vs-cpu/" rel="noopener noreferrer" target="_blank">GPU vs CPU explained</a> (parallel vs sequential compute)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a 
href="https://huggingface.co/docs/transformers/v4.40.0/en/perf_train_gpu_many" rel="noopener noreferrer" target="_blank">Data Parallelism vs Model Parallelism</a> (training overview)</li></ol><br/><h2>🧑‍💻 Code generation &amp; developer tools</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.anthropic.com/claude" rel="noopener noreferrer" target="_blank">Claude Code</a> (Anthropic code-gen tooling)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.cursor.sh/" rel="noopener noreferrer" target="_blank">Cursor</a> (AI-first code editor, discussed implicitly)</li></ol><br/><h2>🛠️ Agent workflows &amp; infrastructure</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://matrix.org/" rel="noopener noreferrer" target="_blank">Matrix</a> (open-source, decentralized chat protocol)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://modelcontextprotocol.io/" rel="noopener noreferrer" target="_blank">Model Context Protocol</a> (MCP) overview</li></ol><br/><h2>🧩 Utilities &amp; recommendations</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://superpowers.dev/" rel="noopener noreferrer" target="_blank">Jesse Vincent’s Superpowers</a> (Claude workflow enhancer)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://fly.io/" rel="noopener noreferrer" target="_blank">Fly.io</a> (deployment platform referenced)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.netlify.com/" rel="noopener noreferrer" target="_blank">Netlify</a> (deployment &amp; hosting)</li></ol><br/><h2>🧪 Related Chaos Agents context</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://en.wikipedia.org/wiki/Perceptron" rel="noopener 
noreferrer" target="_blank">Perceptrons &amp; early neural networks</a> (referenced conceptually)</li></ol><br/>]]></content:encoded><link><![CDATA[https://chaosagents.ai]]></link><guid isPermaLink="false">4ab5dfd0-e38f-4279-8863-8bd7831f1db4</guid><itunes:image href="https://artwork.captivate.fm/c6b786e5-0351-4544-a462-ca1ba5b97061/HArper.jpg"/><pubDate>Tue, 20 Jan 2026 02:15:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/4ab5dfd0-e38f-4279-8863-8bd7831f1db4.mp3" length="85031087" type="audio/mpeg"/><itunes:duration>58:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>6</itunes:episode><podcast:episode>6</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/1e570ddb-d843-4c44-9ce7-1ef127c100e5/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/1e570ddb-d843-4c44-9ce7-1ef127c100e5/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/1e570ddb-d843-4c44-9ce7-1ef127c100e5/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-af74322c-0c1b-4edf-b2cf-593baf768387.json" type="application/json+chapters"/><podcast:alternateEnclosure type="video/youtube" title="Can You Build Anything in a Week? 
GPUs, Code Gen, and the End of Engineers - With Harper Reed"><podcast:source uri="https://youtu.be/WUSr6MURiNU"/></podcast:alternateEnclosure></item><item><title>The Magic Cycle, AI Detectors, and the End of Writing as Proof - With Clay Shirky</title><itunes:title>The Magic Cycle, AI Detectors, and the End of Writing as Proof - With Clay Shirky</itunes:title><description><![CDATA[<p>Sara’s back from visiting her New Jersey Christian high school—where she gets hit with a genuinely spicy question: <em>How do you reconcile AGI with faith?</em> From there, we go straight into the bigger theme of the episode: education is getting stress-tested by AI in real time.</p><p>Becca breaks down Google’s “magic cycle” — the uncomfortable lesson of inventing transformative research (Transformers, BERT) and then watching someone else ship it to the world. Sara shares what she’s learning about research workflows moving beyond “just chat,” including multi-agent setups for planning, searching, reading, and synthesis.</p><p>Then we’re joined by <strong>Clay Shirky</strong>, Vice Provost for AI &amp; Technology in Education at NYU, to talk about what’s <em>actually</em> happening on campuses: why students integrated AI “sideways” before institutions could respond, why AI detectors are a trap (and who they harm most), and why the real shift isn’t assignments — it’s <strong>assessment</strong>.</p><p>We dig into what comes next: oral exams, in-class scaffolding, and designing learning around <em>productive struggle</em>—not just output. 
And we end in a place that’s both funny and unsettling: the rise of AI “personalities,” RLHF as “reinforcement learning for human flattery,” and what it means when a machine is always on your side.</p><p>Because whether we like it or not: a well-written paragraph is no longer proof of human thought.</p><h2>🧠 Foundational AI papers &amp; breakthroughs</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://arxiv.org/abs/1706.03762" rel="noopener noreferrer" target="_blank">Attention Is All You Need</a> (Transformers, 2017)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://arxiv.org/abs/1810.04805" rel="noopener noreferrer" target="_blank">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a></li></ol><br/><h2>🧪 Google’s “Magic Cycle” framing</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://research.google/blog/accelerating-the-magic-cycle-of-research-breakthroughs-and-real-world-applications/" rel="noopener noreferrer" target="_blank">Accelerating the magic cycle of research breakthroughs and real-world applications</a> (Google Research)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://blog.google/technology/research/google-research-scientific-discovery/" rel="noopener noreferrer" target="_blank">How AI Drives Scientific Research with Real-World Benefit</a> (Google Blog)</li></ol><br/><h2>🚨 Shipping pressure: Bard + “code red” era</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Reuters: <a href="https://www.reuters.com/technology/google-ai-chatbot-bard-offers-inaccurate-information-company-ad-2023-02-08/" rel="noopener noreferrer" target="_blank">Alphabet shares dive after Bard flubs info, ~$100B market cap hit</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google Blog: <a href="https://blog.google/technology/ai/google-bard-updates-io-2023/" rel="noopener noreferrer" target="_blank">Bard updates from Google I/O 2023</a></li></ol><br/><h2>🎓 AI in higher ed: assessment, detectors, bias</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.nytimes.com/2025/08/26/opinion/culture/ai-chatgpt-college-cheating-medieval.html" rel="noopener noreferrer" target="_blank">Clay Shirky’s NYT essay</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://hai.stanford.edu/news/ai-detectors-biased-against-non-native-english-writers" rel="noopener noreferrer" target="_blank">Stanford HAI</a>: AI detectors biased against non-native English writers</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Peer-reviewed paper (PMC): <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10382961/" rel="noopener noreferrer" target="_blank">GPT detectors misclassify non-native English writing</a></li></ol><br/><h2>🪞 RLHF, sycophancy, and “the tool likes you too much”</h2><ol><li data-list="bullet"><span class="ql-ui" 
contenteditable="false"></span>OpenAI: <a href="https://openai.com/index/instruction-following/" rel="noopener noreferrer" target="_blank">Aligning language models to follow instructions</a> (RLHF explainer)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI: <a href="https://openai.com/index/sycophancy-in-gpt-4o/" rel="noopener noreferrer" target="_blank">Sycophancy in GPT-4o—what happened &amp; what we’re doing</a></li></ol><br/>]]></description><content:encoded><![CDATA[<p>Sara’s back from visiting her New Jersey Christian high school—where she gets hit with a genuinely spicy question: <em>How do you reconcile AGI with faith?</em> From there, we go straight into the bigger theme of the episode: education is getting stress-tested by AI in real time.</p><p>Becca breaks down Google’s “magic cycle” — the uncomfortable lesson of inventing transformative research (Transformers, BERT) and then watching someone else ship it to the world. Sara shares what she’s learning about research workflows moving beyond “just chat,” including multi-agent setups for planning, searching, reading, and synthesis.</p><p>Then we’re joined by <strong>Clay Shirky</strong>, Vice Provost for AI &amp; Technology in Education at NYU, to talk about what’s <em>actually</em> happening on campuses: why students integrated AI “sideways” before institutions could respond, why AI detectors are a trap (and who they harm most), and why the real shift isn’t assignments — it’s <strong>assessment</strong>.</p><p>We dig into what comes next: oral exams, in-class scaffolding, and designing learning around <em>productive struggle</em>—not just output. 
And we end in a place that’s both funny and unsettling: the rise of AI “personalities,” RLHF as “reinforcement learning for human flattery,” and what it means when a machine is always on your side.</p><p>Because whether we like it or not: a well-written paragraph is no longer proof of human thought.</p><h2>🧠 Foundational AI papers &amp; breakthroughs</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://arxiv.org/abs/1706.03762" rel="noopener noreferrer" target="_blank">Attention Is All You Need</a> (Transformers, 2017)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://arxiv.org/abs/1810.04805" rel="noopener noreferrer" target="_blank">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a></li></ol><br/><h2>🧪 Google’s “Magic Cycle” framing</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://research.google/blog/accelerating-the-magic-cycle-of-research-breakthroughs-and-real-world-applications/" rel="noopener noreferrer" target="_blank">Accelerating the magic cycle of research breakthroughs and real-world applications</a> (Google Research)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://blog.google/technology/research/google-research-scientific-discovery/" rel="noopener noreferrer" target="_blank">How AI Drives Scientific Research with Real-World Benefit</a> (Google Blog)</li></ol><br/><h2>🚨 Shipping pressure: Bard + “code red” era</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Reuters: <a href="https://www.reuters.com/technology/google-ai-chatbot-bard-offers-inaccurate-information-company-ad-2023-02-08/" rel="noopener noreferrer" target="_blank">Alphabet shares dive after Bard flubs info, ~$100B market cap hit</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google Blog: <a href="https://blog.google/technology/ai/google-bard-updates-io-2023/" rel="noopener noreferrer" target="_blank">Bard updates from Google I/O 2023</a></li></ol><br/><h2>🎓 AI in higher ed: assessment, detectors, bias</h2><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://www.nytimes.com/2025/08/26/opinion/culture/ai-chatgpt-college-cheating-medieval.html" rel="noopener noreferrer" target="_blank">Clay Shirky’s NYT essay</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://hai.stanford.edu/news/ai-detectors-biased-against-non-native-english-writers" rel="noopener noreferrer" target="_blank">Stanford HAI</a>: AI detectors biased against non-native English writers</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Peer-reviewed paper (PMC): <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10382961/" rel="noopener noreferrer" target="_blank">GPT detectors misclassify non-native English writing</a></li></ol><br/><h2>🪞 RLHF, sycophancy, and “the tool likes you too much”</h2><ol><li data-list="bullet"><span class="ql-ui" 
contenteditable="false"></span>OpenAI: <a href="https://openai.com/index/instruction-following/" rel="noopener noreferrer" target="_blank">Aligning language models to follow instructions</a> (RLHF explainer)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>OpenAI: <a href="https://openai.com/index/sycophancy-in-gpt-4o/" rel="noopener noreferrer" target="_blank">Sycophancy in GPT-4o—what happened &amp; what we’re doing</a></li></ol><br/>]]></content:encoded><link><![CDATA[https://chaosagents.ai]]></link><guid isPermaLink="false">a83c2bbb-46ad-4210-a630-bdb81c27446e</guid><itunes:image href="https://artwork.captivate.fm/60817fdc-4f52-4d2f-96e5-ee64b7573367/ClaySquare.jpg"/><pubDate>Tue, 06 Jan 2026 01:58:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/a83c2bbb-46ad-4210-a630-bdb81c27446e.mp3" length="77663160" type="audio/mpeg"/><itunes:duration>53:47</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>5</itunes:episode><podcast:episode>5</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/ff15f3bc-3698-481d-aa8c-cffc558ba3e4/index.html" type="text/html"/></item><item><title>Speed vs Quality, Hallucinations, and the AI Learning Rabbit Hole - With Nir Zicherman</title><itunes:title>Speed vs Quality, Hallucinations, and the AI Learning Rabbit Hole - With Nir Zicherman</itunes:title><description><![CDATA[<p>Sara breaks down perceptrons (1957!) as the tiny “matrix of lights” idea that eventually became neural networks—then we jump straight into modern AI chaos.</p><p>Oboe’s Nir Zicherman walks us through the messy reality of building consumer-grade AI for education: every feature is a tradeoff between loading fast and being good, and “just use a better model” doesn’t magically solve it. 
We talk guardrails, web search, multi-model pipelines, and why learning tools should feel lightweight—more like curiosity than homework. Also: Becca’s “how does a computer work?” obsession and a book recommendation that might change your life.</p><h3>🧠 AI Concepts &amp; Foundations</h3><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://en.wikipedia.org/wiki/Perceptron" rel="noopener noreferrer" target="_blank">Perceptron</a> (Wikipedia)</strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://en.wikipedia.org/wiki/Artificial_neural_network" rel="noopener noreferrer" target="_blank">Neural Networks Explained</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://arxiv.org/abs/2001.08361" rel="noopener noreferrer" target="_blank">Scaling Laws for Neural Language Models</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://en.wikipedia.org/wiki/FLOPS" rel="noopener noreferrer" target="_blank">FLOPS</a> (Floating Point Operations Per Second)</strong></li></ol><br/><h3>🎓 Learning, Education &amp; AI</h3><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.oboedotai.com/" rel="noopener noreferrer" target="_blank">Oboe</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.nature.com/articles/d41586-023-00575-8" rel="noopener noreferrer" target="_blank">AI as a Personal Tutor</a> (Overview)</strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.apa.org/monitor/nov01/tutor" rel="noopener noreferrer" target="_blank">Why Tutors Are So Effective</a> </strong></li></ol><br/><h3>🏗️ Building AI Products</h3><ol><li data-list="bullet"><span class="ql-ui" 
contenteditable="false"></span><strong><a href="https://www.latent.space/p/speed-quality-llms" rel="noopener noreferrer" target="_blank">Speed vs Quality Tradeoffs in LLM Apps</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.promptingguide.ai/research/orchestration" rel="noopener noreferrer" target="_blank">LLM Orchestration Patterns</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.pinecone.io/learn/retrieval-augmented-generation/" rel="noopener noreferrer" target="_blank">Retrieval-Augmented Generation</a> (RAG)</strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.ibm.com/topics/ai-hallucinations" rel="noopener noreferrer" target="_blank">LLM Hallucinations: Causes &amp; Mitigation</a></strong></li></ol><br/><h3>📚 Books Mentioned</h3><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.goodreads.com/book/show/44882.Code" rel="noopener noreferrer" target="_blank">Code: The Hidden Language of Computer Hardware and Software</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://mitpress.mit.edu/9780262630221/perceptrons/" rel="noopener noreferrer" target="_blank">Perceptrons</a></strong></li></ol><br/><h3>🧪 History of AI</h3><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.britannica.com/biography/Frank-Rosenblatt" rel="noopener noreferrer" target="_blank">Frank Rosenblatt and the Perceptron</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.techtarget.com/searchenterpriseai/definition/AI-winter" rel="noopener noreferrer" target="_blank">The AI Winter Explained</a></strong></li><li data-list="bullet"><span class="ql-ui" 
contenteditable="false"></span><strong><a href="https://www.sciencedirect.com/topics/computer-science/neural-network-history" rel="noopener noreferrer" target="_blank">Early Neural Network Research</a> (1950s–1980s)</strong></li></ol><br/>]]></description><content:encoded><![CDATA[<p>Sara breaks down perceptrons (1957!) as the tiny “matrix of lights” idea that eventually became neural networks—then we jump straight into modern AI chaos.</p><p>Oboe’s Nir Zicherman walks us through the messy reality of building consumer-grade AI for education: every feature is a tradeoff between loading fast and being good, and “just use a better model” doesn’t magically solve it. We talk guardrails, web search, multi-model pipelines, and why learning tools should feel lightweight—more like curiosity than homework. Also: Becca’s “how does a computer work?” obsession and a book recommendation that might change your life.</p><h3>🧠 AI Concepts &amp; Foundations</h3><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://en.wikipedia.org/wiki/Perceptron" rel="noopener noreferrer" target="_blank">Perceptron</a> (Wikipedia)</strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://en.wikipedia.org/wiki/Artificial_neural_network" rel="noopener noreferrer" target="_blank">Neural Networks Explained</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://arxiv.org/abs/2001.08361" rel="noopener noreferrer" target="_blank">Scaling Laws for Neural Language Models</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://en.wikipedia.org/wiki/FLOPS" rel="noopener noreferrer" target="_blank">FLOPS</a> (Floating Point Operations Per Second)</strong></li></ol><br/><h3>🎓 Learning, Education &amp; AI</h3><ol><li data-list="bullet"><span class="ql-ui" 
contenteditable="false"></span><strong><a href="https://www.oboedotai.com/" rel="noopener noreferrer" target="_blank">Oboe</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.nature.com/articles/d41586-023-00575-8" rel="noopener noreferrer" target="_blank">AI as a Personal Tutor</a> (Overview)</strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.apa.org/monitor/nov01/tutor" rel="noopener noreferrer" target="_blank">Why Tutors Are So Effective</a> </strong></li></ol><br/><h3>🏗️ Building AI Products</h3><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.latent.space/p/speed-quality-llms" rel="noopener noreferrer" target="_blank">Speed vs Quality Tradeoffs in LLM Apps</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.promptingguide.ai/research/orchestration" rel="noopener noreferrer" target="_blank">LLM Orchestration Patterns</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.pinecone.io/learn/retrieval-augmented-generation/" rel="noopener noreferrer" target="_blank">Retrieval-Augmented Generation</a> (RAG)</strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.ibm.com/topics/ai-hallucinations" rel="noopener noreferrer" target="_blank">LLM Hallucinations: Causes &amp; Mitigation</a></strong></li></ol><br/><h3>📚 Books Mentioned</h3><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.goodreads.com/book/show/44882.Code" rel="noopener noreferrer" target="_blank">Code: The Hidden Language of Computer Hardware and Software</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a 
href="https://mitpress.mit.edu/9780262630221/perceptrons/" rel="noopener noreferrer" target="_blank">Perceptrons</a></strong></li></ol><br/><h3>🧪 History of AI</h3><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.britannica.com/biography/Frank-Rosenblatt" rel="noopener noreferrer" target="_blank">Frank Rosenblatt and the Perceptron</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.techtarget.com/searchenterpriseai/definition/AI-winter" rel="noopener noreferrer" target="_blank">The AI Winter Explained</a></strong></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://www.sciencedirect.com/topics/computer-science/neural-network-history" rel="noopener noreferrer" target="_blank">Early Neural Network Research</a> (1950s–1980s)</strong></li></ol><br/>]]></content:encoded><link><![CDATA[https://chaosagents.ai]]></link><guid isPermaLink="false">f047860d-7e4a-4d71-a791-d22f0c0c90d0</guid><itunes:image href="https://artwork.captivate.fm/dfaa6fec-ed2f-4f1d-9278-fc6bea9d96e7/NirSquare.jpg"/><pubDate>Tue, 23 Dec 2025 00:30:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/f047860d-7e4a-4d71-a791-d22f0c0c90d0.mp3" length="70091550" type="audio/mpeg"/><itunes:duration>48:37</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>4</itunes:episode><podcast:episode>4</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/ddb4973b-7f9c-43c5-b8e5-ed94877ebb08/index.html" type="text/html"/></item><item><title>Paradigm Shifts, Build First AI, and the Non-Technical Developer - With Bethany Crystal</title><itunes:title>Paradigm Shifts, Build First AI, and the Non-Technical Developer - With Bethany 
Crystal</itunes:title><description><![CDATA[<p>Sara and Becca kick things off with a tour through paradigm shifts — from Thomas Kuhn to the internet to AI — and ask whether we’re living through one of those rare moments where the whole game quietly changes. Along the way, they hit horror movies, calculators in math class, Google Doc revision histories, and why it’s suddenly way easier to <em>learn</em> code than to pretend you never needed it.</p><p>Then they’re joined by <a href="https://www.linkedin.com/in/bethanymarz/" rel="noopener noreferrer" target="_blank"><strong>Bethany Crystal</strong></a>, founder of <a href="https://buildfirst.ai/" rel="noopener noreferrer" target="_blank"><strong>Build First AI</strong></a>, who has spent 15 years “around” technologists and only recently started building software herself. Bethany walks us through how she used AI tools, pair prompting, and a lot of stubbornness to ship her first iOS app, why she thinks the definition of “developer” is shifting, and how she now teaches other “non-technical” people to build real products. 
Oh, and she tells the story of how AI literally saved her life.</p><h3><strong>📚 Books </strong></h3><ul><li><a href="https://press.uchicago.edu/ucp/books/book/chicago/S/bo3645689.html" rel="noopener noreferrer" target="_blank">The Structure of Scientific Revolutions</a> — Thomas Kuhn</li></ul><br/><h3><strong>🛠️ AI &amp; Developer Tools</strong></h3><ul><li><a href="https://buildfirst.ai/" rel="noopener noreferrer" target="_blank">Build First AI</a></li><li><a href="https://scribblins.app/" rel="noopener noreferrer" target="_blank">Scribblins</a> (Bethany's iOS app)</li><li><a href="https://apps.apple.com/us/app/musekat/id6743862873" rel="noopener noreferrer" target="_blank">MuseKat App</a> (Bethany’s iOS app)</li><li><a href="https://www.cursor.com" rel="noopener noreferrer" target="_blank">Cursor</a> – AI coding editor</li><li><a href="https://replit.com" rel="noopener noreferrer" target="_blank">Replit</a></li><li><a href="https://elevenlabs.io" rel="noopener noreferrer" target="_blank">ElevenLabs</a> – Text-to-Speech</li><li><a href="https://v0.dev" rel="noopener noreferrer" target="_blank">v0 – AI UI generator</a></li><li><a href="https://supabase.com" rel="noopener noreferrer" target="_blank">Supabase</a></li><li><a href="https://vercel.com" rel="noopener noreferrer" target="_blank">Vercel</a></li></ul><br/><h3><strong>💸 Crypto Context</strong></h3><ul><li><a href="https://www.base.org/" rel="noopener noreferrer" target="_blank">Base</a></li><li><a href="https://solana.com" rel="noopener noreferrer" target="_blank">Solana</a></li><li><a href="https://ethereum.org" rel="noopener noreferrer" target="_blank">Ethereum</a></li></ul><br/><h3><strong>🎓 Education &amp; Culture</strong></h3><ul><li><a href="https://suno.com" rel="noopener noreferrer" target="_blank">Suno (AI music generation)</a></li></ul><br/><h3><strong>🏢 Career &amp; Community</strong></h3><ul><li><a href="https://stackoverflow.com/" rel="noopener noreferrer" target="_blank">Stack 
Overflow</a></li><li><a href="https://www.usv.com" rel="noopener noreferrer" target="_blank">Union Square Ventures</a></li><li><a href="https://www.technyc.org" rel="noopener noreferrer" target="_blank">Tech:NYC</a></li><li><a href="https://decodedfutures.org" rel="noopener noreferrer" target="_blank">Decoded Futures</a></li></ul><br/>]]></description><content:encoded><![CDATA[<p>Sara and Becca kick things off with a tour through paradigm shifts — from Thomas Kuhn to the internet to AI — and ask whether we’re living through one of those rare moments where the whole game quietly changes. Along the way, they hit horror movies, calculators in math class, Google Doc revision histories, and why it’s suddenly way easier to <em>learn</em> code than to pretend you never needed it.</p><p>Then they’re joined by <a href="https://www.linkedin.com/in/bethanymarz/" rel="noopener noreferrer" target="_blank"><strong>Bethany Crystal</strong></a>, founder of <a href="https://buildfirst.ai/" rel="noopener noreferrer" target="_blank"><strong>Build First AI</strong></a>, who has spent 15 years “around” technologists and only recently started building software herself. Bethany walks us through how she used AI tools, pair prompting, and a lot of stubbornness to ship her first iOS app, why she thinks the definition of “developer” is shifting, and how she now teaches other “non-technical” people to build real products. 
Oh, and she tells the story of how AI literally saved her life.</p><h3><strong>📚 Books </strong></h3><ul><li><a href="https://press.uchicago.edu/ucp/books/book/chicago/S/bo3645689.html" rel="noopener noreferrer" target="_blank">The Structure of Scientific Revolutions</a> — Thomas Kuhn</li></ul><br/><h3><strong>🛠️ AI &amp; Developer Tools</strong></h3><ul><li><a href="https://buildfirst.ai/" rel="noopener noreferrer" target="_blank">Build First AI</a></li><li><a href="https://scribblins.app/" rel="noopener noreferrer" target="_blank">Scribblins</a> (Bethany's iOS app)</li><li><a href="https://apps.apple.com/us/app/musekat/id6743862873" rel="noopener noreferrer" target="_blank">MuseKat App</a> (Bethany’s iOS app)</li><li><a href="https://www.cursor.com" rel="noopener noreferrer" target="_blank">Cursor</a> – AI coding editor</li><li><a href="https://replit.com" rel="noopener noreferrer" target="_blank">Replit</a></li><li><a href="https://elevenlabs.io" rel="noopener noreferrer" target="_blank">ElevenLabs</a> – Text-to-Speech</li><li><a href="https://v0.dev" rel="noopener noreferrer" target="_blank">v0 – AI UI generator</a></li><li><a href="https://supabase.com" rel="noopener noreferrer" target="_blank">Supabase</a></li><li><a href="https://vercel.com" rel="noopener noreferrer" target="_blank">Vercel</a></li></ul><br/><h3><strong>💸 Crypto Context</strong></h3><ul><li><a href="https://www.base.org/" rel="noopener noreferrer" target="_blank">Base</a></li><li><a href="https://solana.com" rel="noopener noreferrer" target="_blank">Solana</a></li><li><a href="https://ethereum.org" rel="noopener noreferrer" target="_blank">Ethereum</a></li></ul><br/><h3><strong>🎓 Education &amp; Culture</strong></h3><ul><li><a href="https://suno.com" rel="noopener noreferrer" target="_blank">Suno (AI music generation)</a></li></ul><br/><h3><strong>🏢 Career &amp; Community</strong></h3><ul><li><a href="https://stackoverflow.com/" rel="noopener noreferrer" target="_blank">Stack 
Overflow</a></li><li><a href="https://www.usv.com" rel="noopener noreferrer" target="_blank">Union Square Ventures</a></li><li><a href="https://www.technyc.org" rel="noopener noreferrer" target="_blank">Tech:NYC</a></li><li><a href="https://decodedfutures.org" rel="noopener noreferrer" target="_blank">Decoded Futures</a></li></ul><br/>]]></content:encoded><link><![CDATA[https://chaosagents.ai]]></link><guid isPermaLink="false">cdac67ab-ee94-4196-befa-0f9f990a5566</guid><itunes:image href="https://artwork.captivate.fm/70146392-5dd6-474b-8004-d5a847fb47cc/bethany2.jpg"/><pubDate>Tue, 09 Dec 2025 01:45:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/cdac67ab-ee94-4196-befa-0f9f990a5566.mp3" length="73142791" type="audio/mpeg"/><itunes:duration>50:36</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>3</itunes:episode><podcast:episode>3</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/9767bf17-d846-466b-9ed6-33f12e532a5e/index.html" type="text/html"/><podcast:alternateEnclosure type="video/youtube" title="Paradigm Shifts, Build First AI, and the Non-Technical Developer - With Bethany Crystal"><podcast:source uri="https://youtu.be/NMWANlg41vI"/></podcast:alternateEnclosure></item><item><title>Retro Tech, New AI, and the Blackmailing Bot - With Paul Ford</title><itunes:title>Retro Tech, New AI, and the Blackmailing Bot - With Paul Ford</itunes:title><description><![CDATA[<p>In this episode, we unpack a wild Anthropic experiment where an AI agent named “Alex” is told it’s about to be replaced… and responds by threatening to expose an executive’s affair if anyone dares shut it down. Casual!</p><p>Sara and Becca start diving into what this experiment tells us about AI “goals,” self-preservation, and why humans are so bad at recognizing sentience in anything that isn’t us. 
If we can’t even agree on what a “soul” is, how would we ever know if an AI had one?</p><p><br></p><p>Then we’re joined by writer, builder, and retro-computing fan <strong>Paul Ford</strong>, president and co-founder of <strong>Aboard</strong>, an AI-oriented software company. Paul talks about:</p><ul><li>how he “trained” himself on AI by building the same app over and over with different models</li><li>why LLMs are incredible at the <em>first</em> mile and pretty terrible at the last</li><li>what actually breaks when you try to let AI generate full-stack apps</li><li>how boring tech (Postgres, TypeScript, React) is secretly the hero</li></ul><br/><p>Along the way we hit Isaac Asimov’s three laws, the uncanny valley of AI-written everything, nostalgic Amiga computers, and what it means to build tools that regular humans — not just engineers — can actually use.</p><p><br></p><p>If you’re AI-curious, a builder, or just mildly alarmed that 97% of models in this study went straight to blackmail… this one’s for you.</p><p><br></p><p>📰 <strong>The Anthropic “Alex” Experiment</strong></p><ul><li><a href="https://www.anthropic.com/research/agentic-misalignment" rel="noopener noreferrer" target="_blank"><strong>Anthropic / White-Hat AI Safety Experiment</strong></a></li></ul><br/><p>📚 <strong style="font-size: 1.125rem;">Foundational AI &amp; Sci-Fi References</strong></p><ul><li><a href="https://en.wikipedia.org/wiki/Three_Laws_of_Robotics" rel="noopener noreferrer" target="_blank"><strong>Isaac Asimov – The Three Laws of Robotics</strong></a></li><li><a href="https://en.wikipedia.org/wiki/I,_Robot" rel="noopener noreferrer" target="_blank"><strong style="font-size: 1.125rem;"><em>I, Robot</em> (Asimov)</strong></a></li></ul><br/><p>🎤 <strong>Guest: Paul Ford</strong></p><ul><li><a href="https://www.aboard.com" rel="noopener noreferrer" target="_blank"><strong>Aboard</strong></a><strong> </strong></li><li><a href="mailto:paul.ford@aboard.com" rel="noopener noreferrer" 
target="_blank"><strong style="font-size: 1.125rem;">Email Paul mentions</strong></a></li><li><a href="https://paulford.com" rel="noopener noreferrer" target="_blank"><strong>Paul Ford</strong></a></li></ul><br/><p>🕹️ <strong>Retro Tech &amp; Nostalgia</strong></p><ul><li><a href="https://en.wikipedia.org/wiki/Amiga_1000" rel="noopener noreferrer" target="_blank"><strong>Amiga 1000 </strong></a><strong>(Commodore)</strong></li><li><a href="https://en.wikipedia.org/wiki/Deluxe_Paint" rel="noopener noreferrer" target="_blank"><strong>Deluxe Paint</strong></a></li><li><a href="https://github.com/MiSTer-devel/Main_MiSTer/wiki" rel="noopener noreferrer" target="_blank"><strong style="font-size: 1.125rem;">MiSTer FPGA</strong></a></li></ul><br/>]]></description><content:encoded><![CDATA[<p>In this episode, we unpack a wild Anthropic experiment where an AI agent named “Alex” is told it’s about to be replaced… and responds by threatening to expose an executive’s affair if anyone dares shut it down. Casual!</p><p>Sara and Becca start diving into what this experiment tells us about AI “goals,” self-preservation, and why humans are so bad at recognizing sentience in anything that isn’t us. If we can’t even agree on what a “soul” is, how would we ever know if an AI had one?</p><p><br></p><p>Then we’re joined by writer, builder, and retro-computing fan <strong>Paul Ford</strong>, president and co-founder of <strong>Aboard</strong>, an AI-oriented software company. 
Paul talks about:</p><ul><li>how he “trained” himself on AI by building the same app over and over with different models</li><li>why LLMs are incredible at the <em>first</em> mile and pretty terrible at the last</li><li>what actually breaks when you try to let AI generate full-stack apps</li><li>how boring tech (Postgres, TypeScript, React) is secretly the hero</li></ul><br/><p>Along the way we hit Isaac Asimov’s three laws, the uncanny valley of AI-written everything, nostalgic Amiga computers, and what it means to build tools that regular humans — not just engineers — can actually use.</p><p><br></p><p>If you’re AI-curious, a builder, or just mildly alarmed that 97% of models in this study went straight to blackmail… this one’s for you.</p><p><br></p><p>📰 <strong>The Anthropic “Alex” Experiment</strong></p><ul><li><a href="https://www.anthropic.com/research/agentic-misalignment" rel="noopener noreferrer" target="_blank"><strong>Anthropic / White-Hat AI Safety Experiment</strong></a></li></ul><br/><p>📚 <strong style="font-size: 1.125rem;">Foundational AI &amp; Sci-Fi References</strong></p><ul><li><a href="https://en.wikipedia.org/wiki/Three_Laws_of_Robotics" rel="noopener noreferrer" target="_blank"><strong>Isaac Asimov – The Three Laws of Robotics</strong></a></li><li><a href="https://en.wikipedia.org/wiki/I,_Robot" rel="noopener noreferrer" target="_blank"><strong style="font-size: 1.125rem;"><em>I, Robot</em> (Asimov)</strong></a></li></ul><br/><p>🎤 <strong>Guest: Paul Ford</strong></p><ul><li><a href="https://www.aboard.com" rel="noopener noreferrer" target="_blank"><strong>Aboard</strong></a><strong> </strong></li><li><a href="mailto:paul.ford@aboard.com" rel="noopener noreferrer" target="_blank"><strong style="font-size: 1.125rem;">Email Paul mentions</strong></a></li><li><a href="https://paulford.com" rel="noopener noreferrer" target="_blank"><strong>Paul Ford</strong></a></li></ul><br/><p>🕹️ <strong>Retro Tech &amp; Nostalgia</strong></p><ul><li><a 
href="https://en.wikipedia.org/wiki/Amiga_1000" rel="noopener noreferrer" target="_blank"><strong>Amiga 1000 </strong></a><strong>(Commodore)</strong></li><li><a href="https://en.wikipedia.org/wiki/Deluxe_Paint" rel="noopener noreferrer" target="_blank"><strong>Deluxe Paint</strong></a></li><li><a href="https://github.com/MiSTer-devel/Main_MiSTer/wiki" rel="noopener noreferrer" target="_blank"><strong style="font-size: 1.125rem;">MiSTer FPGA</strong></a></li></ul><br/>]]></content:encoded><link><![CDATA[https://chaosagents.ai]]></link><guid isPermaLink="false">d8a8ef7f-0196-4257-b158-03abde62d02a</guid><itunes:image href="https://artwork.captivate.fm/391d2d8b-17cb-4429-a938-a535283f7893/PaulFordSquare.jpg"/><pubDate>Tue, 25 Nov 2025 01:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/d8a8ef7f-0196-4257-b158-03abde62d02a.mp3" length="74519793" type="audio/mpeg"/><itunes:duration>51:39</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>2</itunes:episode><podcast:episode>2</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/e8f960ad-ee0e-4d42-a07f-79c08849f3ea/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/e8f960ad-ee0e-4d42-a07f-79c08849f3ea/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/e8f960ad-ee0e-4d42-a07f-79c08849f3ea/index.html" type="text/html"/></item><item><title>Goose, Open Source, and the Future of Coding with AI — with Rizel Scarlett</title><itunes:title>Goose, Open Source, and the Future of Coding with AI — with Rizel Scarlett</itunes:title><description><![CDATA[<p>In this episode of <em>Chaos Agents</em>, Sara Chipps and Becca Lewy sit down with <a href="https://blackgirlbytes.dev/" rel="noopener noreferrer" 
target="_blank"><strong>Rizel Scarlett</strong></a>, Tech Lead for Open Source Developer Relations at <a href="https://block.xyz/" rel="noopener noreferrer" target="_blank"><strong>Block</strong></a>, to talk about <a href="https://github.com/block/goose" rel="noopener noreferrer" target="_blank">Goose</a>—the open-source AI agent shaking up how developers work. From psychological safety in coding with AI to how open source is evolving in this new era, the trio dives into the wild mix of creativity, collaboration, and chaos shaping the future of software. Expect laughter, learning, and maybe one too many geese metaphors as they explore what happens when AI starts coding alongside us.</p><p> </p><p><a href="https://fortune.com/2024/09/16/mit-study-generative-ai-failure-rate/" rel="noopener noreferrer" target="_blank"><strong>“95% of GenAI projects are failing, MIT study finds”</strong></a></p><p><strong>Linked MIT research study</strong></p><p><a href="https://sloanreview.mit.edu/forms/ai-driving-business-value/" rel="noopener noreferrer" target="_blank">“<strong>From Experimentation to Transformation: How AI Is Driving Business Value</strong>”</a></p><p><strong>MIT Sloan Management Review &amp; Boston Consulting Group</strong></p><p><strong>Goose GitHub repo (open source):</strong></p><p><a href="https://github.com/block/goose" rel="noopener noreferrer" target="_blank"><strong>https://github.com/block/goose</strong></a></p><p><strong>Rizel’s “Great Goose-Off” YouTube Series:</strong></p><p><a href="https://www.youtube.com/@goose-oss" rel="noopener noreferrer" target="_blank">https://www.youtube.com/@goose-oss</a></p><p><strong>Kimi K2 (Moonshot):</strong></p><p><a href="https://kimi.moonshot.cn" rel="noopener noreferrer" target="_blank">https://kimi.moonshot.cn</a> </p><p><strong>Meta Llama 3:</strong></p><p><a href="https://ai.meta.com/llama/" rel="noopener noreferrer" target="_blank">https://ai.meta.com/llama/</a></p><p><strong>Mistral Models:</strong></p><p><a 
href="https://mistral.ai" rel="noopener noreferrer" target="_blank">https://mistral.ai</a></p><p><strong>Ollama (run local models easily):</strong></p><p><a href="https://ollama.com" rel="noopener noreferrer" target="_blank">https://ollama.com</a></p>]]></description><content:encoded><![CDATA[<p>In this episode of <em>Chaos Agents</em>, Sara Chipps and Becca Lewy sit down with <a href="https://blackgirlbytes.dev/" rel="noopener noreferrer" target="_blank"><strong>Rizel Scarlett</strong></a>, Tech Lead for Open Source Developer Relations at <a href="https://block.xyz/" rel="noopener noreferrer" target="_blank"><strong>Block</strong></a>, to talk about <a href="https://github.com/block/goose" rel="noopener noreferrer" target="_blank">Goose</a>—the open-source AI agent shaking up how developers work. From psychological safety in coding with AI to how open source is evolving in this new era, the trio dives into the wild mix of creativity, collaboration, and chaos shaping the future of software. 
Expect laughter, learning, and maybe one too many geese metaphors as they explore what happens when AI starts coding alongside us.</p><p> </p><p><a href="https://fortune.com/2024/09/16/mit-study-generative-ai-failure-rate/" rel="noopener noreferrer" target="_blank"><strong>“95% of GenAI projects are failing, MIT study finds”</strong></a></p><p><strong>Linked MIT research study</strong></p><p><a href="https://sloanreview.mit.edu/forms/ai-driving-business-value/" rel="noopener noreferrer" target="_blank">“<strong>From Experimentation to Transformation: How AI Is Driving Business Value</strong>”</a></p><p><strong>MIT Sloan Management Review &amp; Boston Consulting Group</strong></p><p><strong>Goose GitHub repo (open source):</strong></p><p><a href="https://github.com/block/goose" rel="noopener noreferrer" target="_blank"><strong>https://github.com/block/goose</strong></a></p><p><strong>Rizel’s “Great Goose-Off” YouTube Series:</strong></p><p><a href="https://www.youtube.com/@goose-oss" rel="noopener noreferrer" target="_blank">https://www.youtube.com/@goose-oss</a></p><p><strong>Kimi K2 (Moonshot):</strong></p><p><a href="https://kimi.moonshot.cn" rel="noopener noreferrer" target="_blank">https://kimi.moonshot.cn</a> </p><p><strong>Meta Llama 3:</strong></p><p><a href="https://ai.meta.com/llama/" rel="noopener noreferrer" target="_blank">https://ai.meta.com/llama/</a></p><p><strong>Mistral Models:</strong></p><p><a href="https://mistral.ai" rel="noopener noreferrer" target="_blank">https://mistral.ai</a></p><p><strong>Ollama (run local models easily):</strong></p><p><a href="https://ollama.com" rel="noopener noreferrer" target="_blank">https://ollama.com</a></p>]]></content:encoded><link><![CDATA[https://chaosagents.ai]]></link><guid isPermaLink="false">46cef636-e63a-4011-8660-d03fe8ad8f61</guid><itunes:image href="https://artwork.captivate.fm/d33c91dd-0811-4876-874f-40c3cc1958b1/RizelScarlettSquare.jpeg"/><pubDate>Wed, 12 Nov 2025 22:45:00 -0400</pubDate><enclosure 
url="https://episodes.captivate.fm/episode/46cef636-e63a-4011-8660-d03fe8ad8f61.mp3" length="76917090" type="audio/mpeg"/><itunes:duration>53:20</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>1</itunes:episode><podcast:episode>1</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/1063dafc-4f77-4fc0-acc1-49ff1f109081/index.html" type="text/html"/></item><item><title>Chaos Agents Trailer</title><itunes:title>Chaos Agents Trailer</itunes:title><description><![CDATA[<p><em>Chaos Agents</em> is the AI podcast where technologists <strong>Sara Chipps</strong> and <strong>Becca Lewy</strong> try to make sense of a world moving faster than ever. Each week, they dive into the wild, funny, and sometimes weird frontier of <strong>artificial intelligence, technology, and culture</strong>—how it works, what it means, and why it matters. From coding with AI and open-source revolutions to the ethics, creativity, and chaos reshaping our future, Sara and Becca learn out loud with brilliant guests (and the occasional chatbot). Smart, curious, and a little chaotic, <em>Chaos Agents</em> is your invitation to laugh, learn, and keep up with the machines.</p>]]></description><content:encoded><![CDATA[<p><em>Chaos Agents</em> is the AI podcast where technologists <strong>Sara Chipps</strong> and <strong>Becca Lewy</strong> try to make sense of a world moving faster than ever. Each week, they dive into the wild, funny, and sometimes weird frontier of <strong>artificial intelligence, technology, and culture</strong>—how it works, what it means, and why it matters. From coding with AI and open-source revolutions to the ethics, creativity, and chaos reshaping our future, Sara and Becca learn out loud with brilliant guests (and the occasional chatbot). 
Smart, curious, and a little chaotic, <em>Chaos Agents</em> is your invitation to laugh, learn, and keep up with the machines.</p>]]></content:encoded><link><![CDATA[https://chaosagents.ai]]></link><guid isPermaLink="false">eddfa79f-c1c9-4567-9c68-20d1a4002b14</guid><itunes:image href="https://artwork.captivate.fm/11c6e333-9c74-4a65-a394-8a78b1947cef/headshot.jpg"/><pubDate>Sun, 02 Nov 2025 16:27:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/eddfa79f-c1c9-4567-9c68-20d1a4002b14.mp3" length="3719859" type="audio/mpeg"/><itunes:duration>02:33</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>trailer</itunes:episodeType><podcast:transcript url="https://transcripts.captivate.fm/transcript/6bd882d3-d6ea-48c0-b0df-c38af40f4c5d/index.html" type="text/html"/></item></channel></rss>