<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="https://feeds.captivate.fm/style.xsl" type="text/xsl"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><atom:link href="https://feeds.captivate.fm/the-ai-governance/" rel="self" type="application/rss+xml"/><title><![CDATA[The AI Governance Briefing with Dr. Tuboise Floyd]]></title><podcast:guid>80a83b77-ba0d-5c2f-bf40-6a773562e7ff</podcast:guid><lastBuildDate>Wed, 29 Apr 2026 21:15:00 +0000</lastBuildDate><generator>Captivate.fm</generator><language><![CDATA[en]]></language><copyright><![CDATA[© 2026 Dr. Tuboise Floyd, PhD. All rights reserved.]]></copyright><managingEditor>Dr. Tuboise Floyd, PhD</managingEditor><itunes:summary><![CDATA[The AI Governance Briefing is an independent AI governance and strategy podcast for operators navigating institutions disrupted by artificial intelligence. Hosted by Dr. Tuboise Floyd, PhD — founder, researcher, and principal analyst at Human Signal.

The market has split in two. The consumption economy trades in noise, checklists, and compliance theater. The investment economy trades in signal infrastructure, physics, and sovereignty. The AI Governance Briefing is the intelligence feed for the investment economy. We do not trade in content. We trade in leverage.

Each episode applies the TAIMScore™ framework, GASP™ diagnostic, L.E.A.C. Protocol™, and the Failure Files™ instrument to reverse-engineer real institutional AI failures — and build governance infrastructure before autonomous systems break the institution. 

Produced with Creative Director Jeremy Jarvis, the show covers asymmetric strategy, critical infrastructure, and the physics of risk for government contracting and builder sectors.

New episodes, visual briefs, and honest playbooks at https://theaigovernancebriefing.com/podcast

© 2026 Dr. Tuboise Floyd. All rights reserved. Episode content applies the TAIMScore™ framework, GASP™ diagnostic, L.E.A.C. Protocol™, and the Failure Files™ instrument. The AI Governance Briefing is a publication of Human Signal.

The AI Governance Briefing is an independent media and research platform. All episode content — including analysis, case studies, and framework application — is provided for educational and informational purposes only. Nothing in any episode constitutes legal, regulatory, compliance, financial, or professional advice. No advisory or consulting relationship is created by listening to or engaging with this content. Guest opinions are those of the guest alone and do not represent the positions of Human Signal or Dr. Tuboise Floyd. Case studies and institutional failure analyses are based on publicly available information and are presented as pedagogical tools — not legal findings or regulatory determinations.<br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></itunes:summary><image><url>https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg</url><title>The AI Governance Briefing with Dr. Tuboise Floyd</title><link><![CDATA[https://theaigovernancebriefing.com/podcast]]></link></image><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><itunes:owner><itunes:name>Dr. Tuboise Floyd, PhD</itunes:name></itunes:owner><itunes:author>Dr. Tuboise Floyd, PhD</itunes:author><description>The AI Governance Briefing is an independent AI governance and strategy podcast for operators navigating institutions disrupted by artificial intelligence. Hosted by Dr. Tuboise Floyd, PhD — founder, researcher, and principal analyst at Human Signal.

The market has split in two. The consumption economy trades in noise, checklists, and compliance theater. The investment economy trades in signal infrastructure, physics, and sovereignty. The AI Governance Briefing is the intelligence feed for the investment economy. We do not trade in content. We trade in leverage.

Each episode applies the TAIMScore™ framework, GASP™ diagnostic, L.E.A.C. Protocol™, and the Failure Files™ instrument to reverse-engineer real institutional AI failures — and build governance infrastructure before autonomous systems break the institution. 

Produced with Creative Director Jeremy Jarvis, the show covers asymmetric strategy, critical infrastructure, and the physics of risk for government contracting and builder sectors.

New episodes, visual briefs, and honest playbooks at https://theaigovernancebriefing.com/podcast

© 2026 Dr. Tuboise Floyd. All rights reserved. Episode content applies the TAIMScore™ framework, GASP™ diagnostic, L.E.A.C. Protocol™, and the Failure Files™ instrument. The AI Governance Briefing is a publication of Human Signal.

The AI Governance Briefing is an independent media and research platform. All episode content — including analysis, case studies, and framework application — is provided for educational and informational purposes only. Nothing in any episode constitutes legal, regulatory, compliance, financial, or professional advice. No advisory or consulting relationship is created by listening to or engaging with this content. Guest opinions are those of the guest alone and do not represent the positions of Human Signal or Dr. Tuboise Floyd. Case studies and institutional failure analyses are based on publicly available information and are presented as pedagogical tools — not legal findings or regulatory determinations.

This podcast uses the following third-party services for analysis: 

OP3 - https://op3.dev/privacy</description><link>https://theaigovernancebriefing.com/podcast</link><atom:link href="https://pubsubhubbub.appspot.com" rel="hub"/><itunes:subtitle><![CDATA[AI Governance · Institutional Risk · Federal Policy · Dr. Tuboise Floyd · Human Signal]]></itunes:subtitle><itunes:explicit>false</itunes:explicit><itunes:type>serial</itunes:type><itunes:category text="Business"><itunes:category text="Management"/></itunes:category><itunes:category text="Technology"></itunes:category><itunes:category text="Society &amp; Culture"><itunes:category text="Documentary"/></itunes:category><itunes:new-feed-url>https://feeds.captivate.fm/the-ai-governance/</itunes:new-feed-url><podcast:locked>yes</podcast:locked><podcast:medium>podcast</podcast:medium><podcast:funding url="https://theaigovernancebriefing.com/underwrite">Underwrite the Show</podcast:funding><podcast:license url="https://humansignal.io">All Rights Reserved</podcast:license><podcast:location>Washington, DC, United States</podcast:location><item><title>Dr. Rhonda Farrell: AI Doesn&apos;t Break Your Organization. It Reveals It.</title><itunes:title>Dr. Rhonda Farrell: AI Doesn&apos;t Break Your Organization. It Reveals It.</itunes:title><description><![CDATA[<p><strong>Most organizations are not failing because they lack effort. They are failing because policy, process, people, and platforms were never designed to operate as an integrated system.</strong></p><p><strong>AI does not fix that system. AI reveals it.</strong></p><p>In this Guest Feature, Dr. Tuboise Floyd sits down with Dr. Rhonda Farrell — United States Marine Corps veteran, ASQ Fellow, IEEE Senior Member, ISSA Distinguished Fellow, and Founder of the Cyber &amp; STEAM Global Innovation Alliance — for a field report from inside the DoD, DHS, NSA, Department of State, and the broader intelligence community on what functional AI governance actually requires.</p><p>Her thesis: AI is a forcing function. 
And pressure does not break solutions. It reveals them.</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>🔑 KEY IDEAS</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>▪ Why high-performing people keep drowning inside low-performing ecosystems ▪ The difference between performative compliance and mission-functional compliance ▪ Why policy without an execution path is just theory ▪ The four-P traceability model: Policy → Process → People → Platforms ▪ How to sequence NIST CSF, CMMC, NIST AI RMF, and TAIMScore™ as a phased maturation program ▪ The Monday Morning Move — a live exercise leaders can run tomorrow ▪ Why pressure does not break solutions. It reveals them.</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>🎙 CHAPTERS</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>00:00:00 — Open: The Veteran's Diagnosis</p><p>00:02:00 — High-Performing People, Low-Performing Ecosystems</p><p>00:05:00 — The Design Problem: Why Integration Was Never Built</p><p>00:10:00 — Policy → Process → People → Platforms: The Traceability Model</p><p>00:12:00 — Performative vs. Mission-Functional Compliance</p><p>00:14:00 — AI as the Forcing Function</p><p>00:17:00 — Sequencing NIST CSF, CMMC, NIST AI RMF, and TAIMScore™</p><p>00:20:00 — Policy Without Execution Is Just Theory</p><p>00:25:00 — The Design Shift: From Frameworks to Operationalization</p><p>00:32:00 — The Monday Morning Move</p><p>00:37:00 — Pressure Does Not Break Solutions. It Reveals Them.</p><p>00:39:00 — Where to Follow Dr. 
Farrell's Work</p><p>00:41:00 — Close: Govern the Machine or Be the Resource It Consumes</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>📺 WATCH ON YOUTUBE</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>https://youtu.be/ACe9px5XZZ8</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>📰 COMPANION PIECES</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>▶ Issue 015 of <em>The AI Governance Record</em> (written companion): https://theaigovernancerecord.com/record/issue-015</p><p>▶ Subscribe to the newsletter: https://theaigovernancerecord.com/newsletter</p><p>▶ Deep-dive blog: https://theaigovernancebriefing.com/blog</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>🎟 LIVE EVENT — MAY 14, 2026</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p><strong>Human Signal Town Hall — The Strict Reality of AI Governance</strong></p><p>Dr. Rhonda Farrell is a confirmed speaker.</p><p>50 seats. $97 pre-sale through April 30. $147 from May 1. Reserve: https://humansignal.io/townhall</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>📄 READ THE POSITION PAPER</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p><em>The Pedagogy Problem in AI Governance</em> — Dr. Tuboise Floyd's foundational position paper. Available on SSRN and at humansignal.io/position-paper. DOI: 10.2139/ssrn.6549178</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>🎓 TAIMSCORE™ ASSESSOR WORKSHOP</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>Want to measure your organization's AI governance maturity against a standard? Dr. Floyd is a TAIMScore™ Certified Assessor (HISPI). https://humansignal.io/taimscore</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>📚 FAILURE FILES™</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>Governance pedagogy through real AI failure analysis. Free public instrument from Human Signal, underwritten by Project Cerebellum. https://humansignal.io/failure-files</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>👤 ABOUT THE GUEST</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>Dr. 
Rhonda Farrell, DM, MS, MBA, CISSP, PMP, LSSMBB, is Executive Advisor, Transformation Strategist, and Founder &amp; CEO of Global Innovation Strategies and the Cyber &amp; STEAM Global Innovation Alliance. She is a United States Marine Corps veteran with twenty-plus years driving enterprise transformation, AI, cybersecurity, and innovation across the DoD, NSA, DHS, Department of State, and the broader intelligence community. ASQ Fellow. IEEE Senior Member. ISSA Distinguished Fellow.</p><p>Her Cyber &amp; STEAM Global Innovation Alliance is building toward 10,000 partners serving one million people globally across STEAM, cyber, and innovation.</p><p>LinkedIn: https://www.linkedin.com/in/rhondafarrell/</p><p>Website: https://gblinnovstratllc.com</p><p>Email: CEO@gblinnovstratllc.com</p><p>Watch Dr. Farrell's weekly "Spotlight on Strategy" series on YouTube: https://youtube.com/@gblinnovstrat</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>🎙 ABOUT THE HOST</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>Dr. Tuboise Floyd, PhD, is Founder and Chief Sensemaking Officer of Human Signal (humansignal.io), Editor in Chief of <em>The AI Governance Record</em> (theaigovernancerecord.com), Host of <em>The AI Governance Briefing</em> (theaigovernancebriefing.com), and a TAIMScore™ Certified Assessor. His doctoral work at Auburn University (2010) in adult education and systems theory underpins Human Signal's pedagogical frameworks, including the Trust Gap, GASP™, the Workflow Thesis, Noise Discipline, and the L.E.A.C.
Protocol™.</p><p>ORCID: 0009-0008-0055-072X</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>🔗 CONNECT</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>Website: https://humansignal.io</p><p>LinkedIn: https://linkedin.com/in/drtuboisefloyd</p><p>Brand channel: https://linkedin.com/in/theaigovernancebriefing</p><p>Podcast RSS: https://feeds.captivate.fm/the-ai-governance</p><p>Apple Podcasts: https://podcasts.apple.com/us/podcast/the-ai-governance-briefing/id1847777960</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p><strong>Independence is not a feature. It is the product.</strong> <strong>Govern the machine. Or be the resource it consumes.</strong></p><p>#AIGovernance #AIRiskManagement #DigitalTransformation #Compliance #GRC #DrRhondaFarrell #DrTuboiseFloyd #HumanSignal #TheAIGovernanceBriefing #NISTAIRMF #TAIMScore #HISPI #USMC #DoD #NSA #EnterpriseAI #Leadership</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p><strong>Most organizations are not failing because they lack effort. They are failing because policy, process, people, and platforms were never designed to operate as an integrated system.</strong></p><p><strong>AI does not fix that system. AI reveals it.</strong></p><p>In this Guest Feature, Dr. Tuboise Floyd sits down with Dr. Rhonda Farrell — United States Marine Corps veteran, ASQ Fellow, IEEE Senior Member, ISSA Distinguished Fellow, and Founder of the Cyber &amp; STEAM Global Innovation Alliance — for a field report from inside the DoD, DHS, NSA, Department of State, and the broader intelligence community on what functional AI governance actually requires.</p><p>Her thesis: AI is a forcing function. And pressure does not break solutions.
It reveals them.</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>🔑 KEY IDEAS</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>▪ Why high-performing people keep drowning inside low-performing ecosystems ▪ The difference between performative compliance and mission-functional compliance ▪ Why policy without an execution path is just theory ▪ The four-P traceability model: Policy → Process → People → Platforms ▪ How to sequence NIST CSF, CMMC, NIST AI RMF, and TAIMScore™ as a phased maturation program ▪ The Monday Morning Move — a live exercise leaders can run tomorrow ▪ Why pressure does not break solutions. It reveals them.</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>🎙 CHAPTERS</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>00:00:00 — Open: The Veteran's Diagnosis</p><p>00:02:00 — High-Performing People, Low-Performing Ecosystems</p><p>00:05:00 — The Design Problem: Why Integration Was Never Built</p><p>00:10:00 — Policy → Process → People → Platforms: The Traceability Model</p><p>00:12:00 — Performative vs. Mission-Functional Compliance</p><p>00:14:00 — AI as the Forcing Function</p><p>00:17:00 — Sequencing NIST CSF, CMMC, NIST AI RMF, and TAIMScore™</p><p>00:20:00 — Policy Without Execution Is Just Theory</p><p>00:25:00 — The Design Shift: From Frameworks to Operationalization</p><p>00:32:00 — The Monday Morning Move</p><p>00:37:00 — Pressure Does Not Break Solutions. It Reveals Them.</p><p>00:39:00 — Where to Follow Dr. 
Farrell's Work</p><p>00:41:00 — Close: Govern the Machine or Be the Resource It Consumes</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>📺 WATCH ON YOUTUBE</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>https://youtu.be/ACe9px5XZZ8</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>📰 COMPANION PIECES</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>▶ Issue 015 of <em>The AI Governance Record</em> (written companion): https://theaigovernancerecord.com/record/issue-015</p><p>▶ Subscribe to the newsletter: https://theaigovernancerecord.com/newsletter</p><p>▶ Deep-dive blog: https://theaigovernancebriefing.com/blog</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>🎟 LIVE EVENT — MAY 14, 2026</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p><strong>Human Signal Town Hall — The Strict Reality of AI Governance</strong></p><p>Dr. Rhonda Farrell is a confirmed speaker.</p><p>50 seats. $97 pre-sale through April 30. $147 from May 1. Reserve: https://humansignal.io/townhall</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>📄 READ THE POSITION PAPER</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p><em>The Pedagogy Problem in AI Governance</em> — Dr. Tuboise Floyd's foundational position paper. Available on SSRN and at humansignal.io/position-paper. DOI: 10.2139/ssrn.6549178</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>🎓 TAIMSCORE™ ASSESSOR WORKSHOP</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>Want to measure your organization's AI governance maturity against a standard? Dr. Floyd is a TAIMScore™ Certified Assessor (HISPI). https://humansignal.io/taimscore</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>📚 FAILURE FILES™</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>Governance pedagogy through real AI failure analysis. Free public instrument from Human Signal, underwritten by Project Cerebellum. https://humansignal.io/failure-files</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>👤 ABOUT THE GUEST</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>Dr. 
Rhonda Farrell, DM, MS, MBA, CISSP, PMP, LSSMBB, is Executive Advisor, Transformation Strategist, and Founder &amp; CEO of Global Innovation Strategies and the Cyber &amp; STEAM Global Innovation Alliance. She is a United States Marine Corps veteran with twenty-plus years driving enterprise transformation, AI, cybersecurity, and innovation across the DoD, NSA, DHS, Department of State, and the broader intelligence community. ASQ Fellow. IEEE Senior Member. ISSA Distinguished Fellow.</p><p>Her Cyber &amp; STEAM Global Innovation Alliance is building toward 10,000 partners serving one million people globally across STEAM, cyber, and innovation.</p><p>LinkedIn: https://www.linkedin.com/in/rhondafarrell/</p><p>Website: https://gblinnovstratllc.com</p><p>Email: CEO@gblinnovstratllc.com</p><p>Watch Dr. Farrell's weekly "Spotlight on Strategy" series on YouTube: https://youtube.com/@gblinnovstrat</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>🎙 ABOUT THE HOST</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>Dr. Tuboise Floyd, PhD, is Founder and Chief Sensemaking Officer of Human Signal (humansignal.io), Editor in Chief of <em>The AI Governance Record</em> (theaigovernancerecord.com), Host of <em>The AI Governance Briefing</em> (theaigovernancebriefing.com), and a TAIMScore™ Certified Assessor. His doctoral work at Auburn University (2010) in adult education and systems theory underpins Human Signal's pedagogical frameworks, including the Trust Gap, GASP™, the Workflow Thesis, Noise Discipline, and the L.E.A.C.
Protocol™.</p><p>ORCID: 0009-0008-0055-072X</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>🔗 CONNECT</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p>Website: https://humansignal.io</p><p>LinkedIn: https://linkedin.com/in/drtuboisefloyd</p><p>Brand channel: https://linkedin.com/in/theaigovernancebriefing</p><p>Podcast RSS: https://feeds.captivate.fm/the-ai-governance</p><p>Apple Podcasts: https://podcasts.apple.com/us/podcast/the-ai-governance-briefing/id1847777960</p><p>━━━━━━━━━━━━━━━━━━━━━━━━━━</p><p><strong>Independence is not a feature. It is the product.</strong> <strong>Govern the machine. Or be the resource it consumes.</strong></p><p>#AIGovernance #AIRiskManagement #DigitalTransformation #Compliance #GRC #DrRhondaFarrell #DrTuboiseFloyd #HumanSignal #TheAIGovernanceBriefing #NISTAIRMF #TAIMScore #HISPI #USMC #DoD #NSA #EnterpriseAI #Leadership</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/rhonda-farrell-ai-reveals-misaligned-organizations]]></link><guid isPermaLink="false">c15a43f8-b56d-49ce-aef7-973c1c3c15bf</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Thu, 23 Apr 2026 04:15:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/c15a43f8-b56d-49ce-aef7-973c1c3c15bf.mp3" length="39453027" type="audio/mpeg"/><itunes:duration>41:06</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season><podcast:alternateEnclosure type="video/youtube" title="Dr. Rhonda Farrell: AI Doesn&apos;t Break Your Organization.
It Reveals It."><podcast:source uri="https://youtu.be/ACe9px5XZZ8"/></podcast:alternateEnclosure></item><item><title>Navigating the Complexities of AI Governance: Introducing TAIMScore™</title><itunes:title>Navigating the Complexities of AI Governance: Introducing TAIMScore™</itunes:title><description><![CDATA[<p>TAIMScore™ is the Trusted AI Model Score — a 20-control AI governance framework built by HISPI Project Cerebellum. In this episode of The AI Governance Briefing, Dr. Tuboise Floyd, PhD breaks down how TAIMScore™ turns AI accountability into something you can measure, score, and prove.</p><p></p><p>Four governance domains. Twenty essential controls. Mapped against NIST AI RMF, the EU AI Act, HIPAA, PCI DSS, SOC 2, EU GDPR, and the White House AI Executive Order. If your institution needs a blueprint for AI governance that survives regulatory scrutiny, this is the starting point.</p><p></p><p>AI is already deployed. The institutions that survive will be the ones that can prove they govern it.</p><p></p><p>──────────────────────────────────────</p><p>WHAT YOU WILL LEARN</p><p>──────────────────────────────────────</p><p>∙ Why AI incidents make governance non-negotiable</p><p>∙ The Project Cerebellum mission: AI should cause no harm</p><p>∙ How the four TAIM domains — GOVERN, MAP, MEASURE, MANAGE — work as an accountability cycle</p><p>∙ The 20 TAIMScore™ controls every AI-deploying organization must address</p><p>∙ How to crosswalk your AI posture against global regulatory frameworks</p><p>∙ Why the AI kill switch is essential governance — not optional</p><p></p><p>──────────────────────────────────────</p><p>CHAPTERS</p><p>──────────────────────────────────────</p><p>0:00 Welcome and Introduction</p><p>0:28 Real Risks of AI</p><p>1:18 Real Generative AI Incidents</p><p>2:13 Project Cerebellum: AI Should Cause No Harm</p><p>2:48 Vision and Mission</p><p>3:33 The Four TAIM Domains</p><p>5:01 GOVERN — AI Risk Training (Govern 2.2)</p><p>5:31 GOVERN — Supply 
Chain Policy (Govern 6.1)</p><p>6:01 MAP — Establishing Context (Map 1.2)</p><p>6:26 MAP — System Requirements (Map 1.6)</p><p>6:55 MAP — Third Party Risk (Map 4.1)</p><p>7:13 MAP — Impact Documentation (Map 5.1)</p><p>7:33 MEASURE — Human Evaluations (Measure 2.2)</p><p>7:51 MEASURE — Reliability (Measure 2.5)</p><p>8:07 MEASURE — Safety Risk (Measure 2.6)</p><p>8:23 MEASURE — Explainability (Measure 2.9)</p><p>8:37 MEASURE — Privacy Risk (Measure 2.10)</p><p>8:51 MEASURE — Fairness and Bias (Measure 2.11)</p><p>9:09 MEASURE — Risk Tracking (Measure 3.1)</p><p>9:23 MEASURE — Feedback Loops (Measure 3.3)</p><p>9:41 MEASURE — Performance Data (Measure 4.3)</p><p>9:57 MANAGE — Resource Allocation (Manage 2.1)</p><p>10:19 MANAGE — Unknown Risks (Manage 2.3)</p><p>10:35 MANAGE — The Kill Switch (Manage 2.4)</p><p>10:55 MANAGE — Post-Deployment Monitoring (Manage 4.1)</p><p>11:11 MANAGE — Incident Communications (Manage 4.3)</p><p>11:29 TAIMScore™: The Payoff</p><p>11:52 Framework Crosswalks — HIPAA, SOC 2, EU AI Act</p><p>13:51 Closing and How to Get Involved</p><p></p><p>──────────────────────────────────────</p><p>TAIMSCORE™ ASSESSOR WORKSHOP</p><p>──────────────────────────────────────</p><p>Virtual. Instructor-led. One day. Six CPEs. 
Third Friday of every month.</p><p>🔗 humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>FAILURE FILES™ — TAIMScore™ APPLIED</p><p>──────────────────────────────────────</p><p>See TAIMScore™ applied to real institutional failures:</p><p>🔗 humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>RESOURCES</p><p>──────────────────────────────────────</p><p>Project Cerebellum — projectcerebellum.com</p><p>HISPI — hispi.org</p><p>HISPI LinkedIn Group — linkedin.com/groups/6624427</p><p>Email — projectcerebellum@hispi.org</p><p></p><p>──────────────────────────────────────</p><p>ABOUT HISPI PROJECT CEREBELLUM</p><p>──────────────────────────────────────</p><p>Project Cerebellum is the AI Governance Think Tank of HISPI — the Holistic Information Security Practitioner Institute. The Trusted AI Model (TAIM) is a flagship framework of 72 controls across four domains that harmonize leading AI governance standards into a practical scoring system. TAIMScore™ was created by Taiye Lambo, Founder and Chief Artificial Intelligence Officer of HISPI.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p>Dr. Tuboise Floyd, PhD is the Founder and Chief Sensemaking Officer of Human Signal, Editor in Chief of The AI Governance Record, and a TAIMScore™ Certified Assessor. He holds a PhD from Auburn University and is a member of the HISPI Advocacy &amp; Education Working Group (Project Cerebellum).</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com/podcast</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p></p><p>Govern the machine. 
Or be the resource it consumes.</p><p></p><p>#TAIMScore #AIGovernance #AIAccountability #HISPI #ProjectCerebellum #NISTAIRMF #EUAIAct #AICompliance #FailureFiles #TrustedAIModel #DrTuboiseFloyd #HumanSignal #TheAIGovernanceBriefing #BuilderClass #AIRisk</p><p>Organizations and frameworks mentioned in this episode:</p><ul><li>HISPI (Holistic Information Security Practitioner Institute)</li><li>Project Cerebellum</li><li>Microsoft</li><li>OpenAI</li><li>ISO</li><li>IEC</li><li>HIPAA</li><li>PCI DSS</li><li>SOC 2</li><li>EU AI Act</li><li>EU GDPR</li><li>White House AI Executive Order</li></ul><br/><p>Takeaways:</p><ul><li>Organizations need robust AI governance frameworks, and the TAIM model supplies the structure.</li><li>The TAIM framework is built to keep AI deployments safe, secure, responsible, and trustworthy by addressing potential risks proactively.</li><li>Real-world AI incidents illustrate why governance is essential to mitigating risk.</li><li>Effective AI governance requires continuous monitoring and assessment so that systems remain compliant with evolving regulatory standards.</li><li>TAIMScore™ gives organizations a concrete evaluation of their AI governance posture against relevant regulatory frameworks.</li><li>Sound AI governance depends on interdisciplinary collaboration and diverse perspectives in risk assessment.</li></ul><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>TAIMScore™ is the Trusted AI Model Score — a 20-control AI governance framework built by HISPI Project Cerebellum. In this episode of The AI Governance Briefing, Dr.
Tuboise Floyd, PhD breaks down how TAIMScore™ turns AI accountability into something you can measure, score, and prove.</p><p></p><p>Four governance domains. Twenty essential controls. Mapped against NIST AI RMF, the EU AI Act, HIPAA, PCI DSS, SOC 2, EU GDPR, and the White House AI Executive Order. If your institution needs a blueprint for AI governance that survives regulatory scrutiny, this is the starting point.</p><p></p><p>AI is already deployed. The institutions that survive will be the ones that can prove they govern it.</p><p></p><p>──────────────────────────────────────</p><p>WHAT YOU WILL LEARN</p><p>──────────────────────────────────────</p><p>∙ Why AI incidents make governance non-negotiable</p><p>∙ The Project Cerebellum mission: AI should cause no harm</p><p>∙ How the four TAIM domains — GOVERN, MAP, MEASURE, MANAGE — work as an accountability cycle</p><p>∙ The 20 TAIMScore™ controls every AI-deploying organization must address</p><p>∙ How to crosswalk your AI posture against global regulatory frameworks</p><p>∙ Why the AI kill switch is essential governance — not optional</p><p></p><p>──────────────────────────────────────</p><p>CHAPTERS</p><p>──────────────────────────────────────</p><p>0:00 Welcome and Introduction</p><p>0:28 Real Risks of AI</p><p>1:18 Real Generative AI Incidents</p><p>2:13 Project Cerebellum: AI Should Cause No Harm</p><p>2:48 Vision and Mission</p><p>3:33 The Four TAIM Domains</p><p>5:01 GOVERN — AI Risk Training (Govern 2.2)</p><p>5:31 GOVERN — Supply Chain Policy (Govern 6.1)</p><p>6:01 MAP — Establishing Context (Map 1.2)</p><p>6:26 MAP — System Requirements (Map 1.6)</p><p>6:55 MAP — Third Party Risk (Map 4.1)</p><p>7:13 MAP — Impact Documentation (Map 5.1)</p><p>7:33 MEASURE — Human Evaluations (Measure 2.2)</p><p>7:51 MEASURE — Reliability (Measure 2.5)</p><p>8:07 MEASURE — Safety Risk (Measure 2.6)</p><p>8:23 MEASURE — Explainability (Measure 2.9)</p><p>8:37 MEASURE — Privacy Risk (Measure 2.10)</p><p>8:51 MEASURE — 
Fairness and Bias (Measure 2.11)</p><p>9:09 MEASURE — Risk Tracking (Measure 3.1)</p><p>9:23 MEASURE — Feedback Loops (Measure 3.3)</p><p>9:41 MEASURE — Performance Data (Measure 4.3)</p><p>9:57 MANAGE — Resource Allocation (Manage 2.1)</p><p>10:19 MANAGE — Unknown Risks (Manage 2.3)</p><p>10:35 MANAGE — The Kill Switch (Manage 2.4)</p><p>10:55 MANAGE — Post-Deployment Monitoring (Manage 4.1)</p><p>11:11 MANAGE — Incident Communications (Manage 4.3)</p><p>11:29 TAIMScore™: The Payoff</p><p>11:52 Framework Crosswalks — HIPAA, SOC 2, EU AI Act</p><p>13:51 Closing and How to Get Involved</p><p></p><p>──────────────────────────────────────</p><p>TAIMSCORE™ ASSESSOR WORKSHOP</p><p>──────────────────────────────────────</p><p>Virtual. Instructor-led. One day. Six CPEs. Third Friday of every month.</p><p>🔗 humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>FAILURE FILES™ — TAIMScore™ APPLIED</p><p>──────────────────────────────────────</p><p>See TAIMScore™ applied to real institutional failures:</p><p>🔗 humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>RESOURCES</p><p>──────────────────────────────────────</p><p>Project Cerebellum — projectcerebellum.com</p><p>HISPI — hispi.org</p><p>HISPI LinkedIn Group — linkedin.com/groups/6624427</p><p>Email — projectcerebellum@hispi.org</p><p></p><p>──────────────────────────────────────</p><p>ABOUT HISPI PROJECT CEREBELLUM</p><p>──────────────────────────────────────</p><p>Project Cerebellum is the AI Governance Think Tank of HISPI — the Holistic Information Security Practitioner Institute. The Trusted AI Model (TAIM) is a flagship framework of 72 controls across four domains that harmonize leading AI governance standards into a practical scoring system. 
TAIMScore™ was created by Taiye Lambo, Founder and Chief Artificial Intelligence Officer of HISPI.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p>Dr. Tuboise Floyd, PhD is the Founder and Chief Sensemaking Officer of Human Signal, Editor in Chief of The AI Governance Record, and a TAIMScore™ Certified Assessor. He holds a PhD from Auburn University and is a member of the HISPI Advocacy &amp; Education Working Group (Project Cerebellum).</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com/podcast</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p></p><p>Govern the machine. Or be the resource it consumes.</p><p></p><p>#TAIMScore #AIGovernance #AIAccountability #HISPI #ProjectCerebellum #NISTAIRMF #EUAIAct #AICompliance #FailureFiles #TrustedAIModel #DrTuboiseFloyd #HumanSignal #TheAIGovernanceBriefing #BuilderClass #AIRisk</p><p>Organizations and frameworks mentioned in this episode:</p><ul><li>HISPI (Holistic Information Security Practitioner Institute)</li><li>Project Cerebellum</li><li>Microsoft</li><li>OpenAI</li><li>ISO</li><li>IEC</li><li>HIPAA</li><li>PCI DSS</li><li>SOC 2</li><li>EU AI Act</li><li>EU GDPR</li><li>White House AI Executive Order</li></ul><br/><p>Takeaways:</p><ul><li>Organizations need robust AI governance frameworks, and the TAIM model supplies the structure.</li><li>The TAIM framework is built to keep AI deployments safe, secure, responsible, and trustworthy by addressing potential risks proactively.</li><li>Real-world AI incidents illustrate why governance is essential to mitigating risk.</li><li>Effective AI governance requires continuous monitoring and assessment so that systems remain compliant with evolving regulatory standards.</li><li>TAIMScore™ gives organizations a concrete evaluation of their AI governance posture against relevant regulatory frameworks.</li><li>Sound AI governance depends on interdisciplinary collaboration and diverse perspectives in risk assessment.</li></ul><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/taimscore]]></link><guid isPermaLink="false">177c04bd-60af-4b0a-bf9e-d1e6ca5c65c1</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Wed, 22 Apr 2026 05:00:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/177c04bd-60af-4b0a-bf9e-d1e6ca5c65c1.mp3" length="24275844" type="audio/mpeg"/><itunes:duration>16:51</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/9ae21f90-02de-4bf2-8ec8-9248631ce864/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/9ae21f90-02de-4bf2-8ec8-9248631ce864/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/9ae21f90-02de-4bf2-8ec8-9248631ce864/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-fc214ce3-5fef-4677-bae0-f6716a52e883.json" type="application/json+chapters"/><podcast:alternateEnclosure type="video/youtube" title="Is Your AI Actually Trustworthy?
Introducing the TAIMScore™ | HISPI Project Cerebellum"><podcast:source uri="https://youtu.be/b9QGyfujZiY"/></podcast:alternateEnclosure></item><item><title>Distributed AI Has No Governor: The Structural Failure Behind Enterprise AI Accountability</title><itunes:title>Distributed AI Has No Governor: The Structural Failure Behind Enterprise AI Accountability</itunes:title><description><![CDATA[<p>EPISODE SUMMARY</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd delivers a pointed analysis of why enterprise AI governance is failing at the structural level. The problem isn't a lack of policy — it's that governance was designed for a world that no longer exists. Distributed AI — running across edge devices, vendor stacks, and multi-agent pipelines — has dissolved the single point of control that traditional compliance frameworks depend on.</p><p></p><p>──────────────────────────────────────</p><p>KEY TAKEAWAYS</p><p>──────────────────────────────────────</p><p></p><p>Key Takeaway 1: Distributed AI Is a Governance Condition, Not a Technology Trend</p><p>The shift to distributed AI isn't just an infrastructure evolution — it's a fundamental change in where accountability lives. When AI executes across multiple nodes, devices, or third-party systems without unified oversight, you're no longer in a governance framework. You're in a governance gap. Every edge deployment, every federated model, every multi-agent workflow is an accountability question first, a technology question second.</p><p></p><p>Key Takeaway 2: The Architecture of Blame Is Predictable — and Avoidable</p><p>The pattern behind every major AI failure in recent years is the same: the vendor says the output was within spec; the integrator says the client configured the workflow; the client says legal approved the policy; legal says the policy covered the old system. Nobody owns the failure. The reason isn't bad actors — it's structural ambiguity. 
When no one owns the decision at the node, blame distributes as efficiently as the AI does.</p><p></p><p>Key Takeaway 3: "Permitted" Is Not the Same as "Admissible"</p><p>A policy that allows a model to run is not the same as governance that can see what the model is doing. This visibility gap — between what is authorized on paper and what is observable in execution — is where accountability collapses. Functional governance requires audit trails, intervention triggers, and independence from vendor contracts built into the architecture itself, not appended to it.</p><p></p><p>──────────────────────────────────────</p><p>DR. FLOYD'S 3 DIAGNOSTIC QUESTIONS</p><p>──────────────────────────────────────</p><p></p><p>1. Who owns the decision at the node — not the system, the decision? If the answer is vague, you have a gap.</p><p>2. What is the escalation path? A single risk officer cannot handle fifty simultaneous failures across fifty nodes. The architecture must match the distribution.</p><p>3. What accountability exists without the vendor? If your governance breaks when the vendor changes the API, you don't have governance — you have vendor dependency.</p><p></p><p>──────────────────────────────────────</p><p>3 REQUIREMENTS FOR FUNCTIONAL GOVERNANCE</p><p>──────────────────────────────────────</p><p></p><p>1. Visibility at every execution point. If you cannot see the node, you cannot govern the node.</p><p>2. Accountability without humans in every loop. Humans cannot scale to distributed AI. Audit trails and intervention triggers must be designed into the system.</p><p>3. Independence. The governance structure must survive vendor changes and contract terminations.</p><p></p><p>──────────────────────────────────────</p><p>CLOSING REFLECTION</p><p>──────────────────────────────────────</p><p></p><p>The winners in the AI era won't be the organizations with the best technology. They'll be the ones with the structural discipline to govern it. 
This week, ask yourself three things: Can you name every device where your AI is making decisions? If your vendor changed the model tonight, how long would it take you to find out? And who is responsible when failure happens inside a workflow you don't control? Architect for reality — or discover reality when the system fails.</p><p></p><p>Govern the machine. Or be the resource it consumes.</p><p></p><p>──────────────────────────────────────</p><p>CHAPTERS</p><p>──────────────────────────────────────</p><p></p><p>0:00 - The Illusion of Governance</p><p>0:32 - Distributed AI Outruns Policy</p><p>1:10 - The Architecture of Blame</p><p>1:52 - The Trust Gap Framework</p><p>2:18 - Permitted ≠ Admissible</p><p>2:45 - Redesigning Accountability Architecture</p><p>3:28 - 3 Diagnostic Questions</p><p>4:10 - What Functional Governance Actually Requires</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. 
His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>AI governance, AI accountability, distributed AI, AI policy, responsible AI, AI compliance, AI risk management, AI at the edge, federated learning, multi-agent systems, edge computing AI, AI governance framework, AI accountability gap, AI oversight, trust gap framework, AI leadership, AI regulation, AI vendor risk, governance architecture, AI decision making, AI audit trail, AI policy failure, AI governance failure, GASP framework, L.E.A.C. Protocol, Failure Files, TAIMScore, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE SUMMARY</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd delivers a pointed analysis of why enterprise AI governance is failing at the structural level. The problem isn't a lack of policy — it's that governance was designed for a world that no longer exists. Distributed AI — running across edge devices, vendor stacks, and multi-agent pipelines — has dissolved the single point of control that traditional compliance frameworks depend on.</p><p></p><p>──────────────────────────────────────</p><p>KEY TAKEAWAYS</p><p>──────────────────────────────────────</p><p></p><p>Key Takeaway 1: Distributed AI Is a Governance Condition, Not a Technology Trend</p><p>The shift to distributed AI isn't just an infrastructure evolution — it's a fundamental change in where accountability lives. When AI executes across multiple nodes, devices, or third-party systems without unified oversight, you're no longer in a governance framework. You're in a governance gap. 
Every edge deployment, every federated model, every multi-agent workflow is an accountability question first, a technology question second.</p><p></p><p>Key Takeaway 2: The Architecture of Blame Is Predictable — and Avoidable</p><p>The pattern behind every major AI failure in recent years is the same: the vendor says the output was within spec; the integrator says the client configured the workflow; the client says legal approved the policy; legal says the policy covered the old system. Nobody owns the failure. The reason isn't bad actors — it's structural ambiguity. When no one owns the decision at the node, blame distributes as efficiently as the AI does.</p><p></p><p>Key Takeaway 3: "Permitted" Is Not the Same as "Admissible"</p><p>A policy that allows a model to run is not the same as governance that can see what the model is doing. This visibility gap — between what is authorized on paper and what is observable in execution — is where accountability collapses. Functional governance requires audit trails, intervention triggers, and independence from vendor contracts built into the architecture itself, not appended to it.</p><p></p><p>──────────────────────────────────────</p><p>DR. FLOYD'S 3 DIAGNOSTIC QUESTIONS</p><p>──────────────────────────────────────</p><p></p><p>1. Who owns the decision at the node — not the system, the decision? If the answer is vague, you have a gap.</p><p>2. What is the escalation path? A single risk officer cannot handle fifty simultaneous failures across fifty nodes. The architecture must match the distribution.</p><p>3. What accountability exists without the vendor? If your governance breaks when the vendor changes the API, you don't have governance — you have vendor dependency.</p><p></p><p>──────────────────────────────────────</p><p>3 REQUIREMENTS FOR FUNCTIONAL GOVERNANCE</p><p>──────────────────────────────────────</p><p></p><p>1. Visibility at every execution point. 
If you cannot see the node, you cannot govern the node.</p><p>2. Accountability without humans in every loop. Humans cannot scale to distributed AI. Audit trails and intervention triggers must be designed into the system.</p><p>3. Independence. The governance structure must survive vendor changes and contract terminations.</p><p></p><p>──────────────────────────────────────</p><p>CLOSING REFLECTION</p><p>──────────────────────────────────────</p><p></p><p>The winners in the AI era won't be the organizations with the best technology. They'll be the ones with the structural discipline to govern it. This week, ask yourself three things: Can you name every device where your AI is making decisions? If your vendor changed the model tonight, how long would it take you to find out? And who is responsible when failure happens inside a workflow you don't control? Architect for reality — or discover reality when the system fails.</p><p></p><p>Govern the machine. Or be the resource it consumes.</p><p></p><p>──────────────────────────────────────</p><p>CHAPTERS</p><p>──────────────────────────────────────</p><p></p><p>0:00 - The Illusion of Governance</p><p>0:32 - Distributed AI Outruns Policy</p><p>1:10 - The Architecture of Blame</p><p>1:52 - The Trust Gap Framework</p><p>2:18 - Permitted ≠ Admissible</p><p>2:45 - Redesigning Accountability Architecture</p><p>3:28 - 3 Diagnostic Questions</p><p>4:10 - What Functional Governance Actually Requires</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ L.E.A.C. 
Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>AI governance, AI accountability, distributed AI, AI policy, responsible AI, AI compliance, AI risk management, AI at the edge, federated learning, multi-agent systems, edge computing AI, AI governance framework, AI accountability gap, AI oversight, trust gap framework, AI leadership, AI regulation, AI vendor risk, governance architecture, AI decision making, AI audit trail, AI policy failure, AI governance failure, GASP framework, L.E.A.C. Protocol, Failure Files, TAIMScore, Dr. 
Tuboise Floyd, Human Signal, The AI Governance Briefing</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/ai-accountability-is-broken-heres-why]]></link><guid isPermaLink="false">26f34796-ea53-4a27-9e5e-dd66340a4e59</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Fri, 10 Apr 2026 21:40:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/26f34796-ea53-4a27-9e5e-dd66340a4e59.mp3" length="4525104" type="audio/mpeg"/><itunes:duration>04:43</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season><podcast:chapters url="https://transcripts.captivate.fm/chapter-96001aad-2dd1-49b2-99a1-3b31edbbad99.json" type="application/json+chapters"/><podcast:alternateEnclosure type="video/youtube" title="Distributed AI Has No Governor: The Structural Failure Behind Enterprise AI Accountability"><podcast:source uri="https://youtu.be/f7vsYuEpjWg"/></podcast:alternateEnclosure></item><item><title>AI Governance Open Forum: Critical Thinking, Risk, and “Never Blindly Trust—Always Verify”</title><itunes:title>AI Governance Open Forum: Critical Thinking, Risk, and “Never Blindly Trust—Always Verify”</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>At a laid-back campus open forum, students are invited to ask questions about AI governance to Taiye Lambo, Founder and Chief Artificial Intelligence Officer of the Holistic Information Security Practitioner Institute (HISPI), and Dr. Tuboise Floyd, Founder of Human Signal and Host of The AI Governance Briefing.</p><p></p><p>Speakers frame AI literacy as a civic and professional survival skill — not a technical one. 
Employers now expect workers to critically evaluate AI outputs, not just use them. The conversation covers deepfakes and short-form media manipulation, the dangers of overreliance on AI (including the attorney who cited fabricated ChatGPT case law in federal court), the principle of "never blindly trust, always verify," and the structural need for continuous auditing, accountability, and an honest human in the loop — especially in clinical and environmental contexts. Students are advised to build strong domain knowledge, think critically, pursue internships, and invest in AI governance and risk certifications over tool-specific training.</p><p></p><p>──────────────────────────────────────</p><p>CHAPTERS</p><p>──────────────────────────────────────</p><p></p><p>00:00 Welcome and Setup</p><p>00:52 Meet the Experts</p><p>01:57 Taiye on Governance Focus</p><p>02:53 Dr. Floyd Background and Podcast</p><p>04:39 Open Forum Begins</p><p>05:02 AI Literacy for Careers</p><p>07:23 Threat or Opportunity Poll</p><p>10:01 AI Literacy Beyond STEM</p><p>10:49 Spotting Deepfakes in Shorts</p><p>15:35 Using AI Without Replacing Learning</p><p>16:14 Lawyer Case and Overtrusting AI</p><p>18:08 Never Blindly Trust — Verify</p><p>19:06 Wikipedia Analogy and Real Risks</p><p>20:31 Business Ethics Reality Check</p><p>21:06 Continuous Audits in Clinics</p><p>21:28 Human in the Loop Matters</p><p>22:04 Environmental AI Data Gaps</p><p>23:13 Public Trust and Accountability</p><p>23:33 Honest Human Oversight</p><p>25:28 Tokens and Hallucinations</p><p>26:51 Bias in Training Data</p><p>27:56 Interviewing in the AI Era</p><p>30:28 AI Disruption and Generational Shift</p><p>33:21 High-Stakes AI Blind Spots</p><p>36:02 Rapid Fire Career Advice</p><p>41:03 Closing and Next Steps</p><p></p><p>──────────────────────────────────────</p><p>GUEST</p><p>──────────────────────────────────────</p><p></p><p>Taiye Lambo</p><p>Founder &amp; Chief Artificial Intelligence Officer</p><p>Holistic Information 
Security Practitioner Institute (HISPI)</p><p>🔗 https://www.hispi.org</p><p>🔗 https://projectcerebellum.com</p><p>LinkedIn: linkedin.com/in/taiyelambo</p><p></p><p>TAIMScore™ Assessor Workshop</p><p>🔗 https://humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>KEY TAKEAWAYS</p><p>──────────────────────────────────────</p><p></p><p>1. AI governance is the structural discipline that makes ethical decision-making and risk mitigation possible — not a compliance checkbox.</p><p>2. Employers now expect candidates to critically evaluate AI outputs. Using AI without scrutiny is a liability, not a skill.</p><p>3. AI literacy is not a STEM competency. It is a professional survival skill for every sector.</p><p>4. Human oversight is not optional in high-stakes AI deployments. Audit trails and intervention triggers must be designed in — not appended after failure.</p><p>5. Understanding how AI systems are trained matters — especially in healthcare, law, and environmental contexts where bad data produces dangerous outputs.</p><p>6. Domain knowledge, critical thinking, and governance certifications outperform tool-specific training in a market where the tools change every six months.</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available at:</p><p>https://humansignal.io/blog/ai-governance-open-forum-never-blindly-trust-verify</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. Guest opinions are those of the guest alone.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>AI governance, AI literacy, AI accountability, AI policy, responsible AI, AI compliance, AI risk management, AI ethics, enterprise AI, government AI, technology leadership, deepfakes, AI hallucination, AI training data bias, human in the loop, AI oversight, AI career advice, AI governance certification, TAIMScore, GASP framework, Failure Files, Trust Gap, never blindly trust verify, Project Cerebellum, HISPI, Taiye Lambo, Dr. 
Tuboise Floyd, Human Signal, The AI Governance Briefing</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>At a laid-back campus open forum, students are invited to ask questions about AI governance to Taiye Lambo, Founder and Chief Artificial Intelligence Officer of the Holistic Information Security Practitioner Institute (HISPI), and Dr. Tuboise Floyd, Founder of Human Signal and Host of The AI Governance Briefing.</p><p></p><p>Speakers frame AI literacy as a civic and professional survival skill — not a technical one. Employers now expect workers to critically evaluate AI outputs, not just use them. The conversation covers deepfakes and short-form media manipulation, the dangers of overreliance on AI (including the attorney who cited fabricated ChatGPT case law in federal court), the principle of "never blindly trust, always verify," and the structural need for continuous auditing, accountability, and an honest human in the loop — especially in clinical and environmental contexts. Students are advised to build strong domain knowledge, think critically, pursue internships, and invest in AI governance and risk certifications over tool-specific training.</p><p></p><p>──────────────────────────────────────</p><p>CHAPTERS</p><p>──────────────────────────────────────</p><p></p><p>00:00 Welcome and Setup</p><p>00:52 Meet the Experts</p><p>01:57 Taiye on Governance Focus</p><p>02:53 Dr. 
Floyd Background and Podcast</p><p>04:39 Open Forum Begins</p><p>05:02 AI Literacy for Careers</p><p>07:23 Threat or Opportunity Poll</p><p>10:01 AI Literacy Beyond STEM</p><p>10:49 Spotting Deepfakes in Shorts</p><p>15:35 Using AI Without Replacing Learning</p><p>16:14 Lawyer Case and Overtrusting AI</p><p>18:08 Never Blindly Trust — Verify</p><p>19:06 Wikipedia Analogy and Real Risks</p><p>20:31 Business Ethics Reality Check</p><p>21:06 Continuous Audits in Clinics</p><p>21:28 Human in the Loop Matters</p><p>22:04 Environmental AI Data Gaps</p><p>23:13 Public Trust and Accountability</p><p>23:33 Honest Human Oversight</p><p>25:28 Tokens and Hallucinations</p><p>26:51 Bias in Training Data</p><p>27:56 Interviewing in the AI Era</p><p>30:28 AI Disruption and Generational Shift</p><p>33:21 High-Stakes AI Blind Spots</p><p>36:02 Rapid Fire Career Advice</p><p>41:03 Closing and Next Steps</p><p></p><p>──────────────────────────────────────</p><p>GUEST</p><p>──────────────────────────────────────</p><p></p><p>Taiye Lambo</p><p>Founder &amp; Chief Artificial Intelligence Officer</p><p>Holistic Information Security Practitioner Institute (HISPI)</p><p>🔗 https://www.hispi.org</p><p>🔗 https://projectcerebellum.com</p><p>LinkedIn: linkedin.com/in/taiyelambo</p><p></p><p>TAIMScore™ Assessor Workshop</p><p>🔗 https://humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>KEY TAKEAWAYS</p><p>──────────────────────────────────────</p><p></p><p>1. 
AI governance is the structural discipline that makes ethical decision-making and risk mitigation possible — not a compliance checkbox.</p><p>2. Employers now expect candidates to critically evaluate AI outputs. Using AI without scrutiny is a liability, not a skill.</p><p>3. AI literacy is not a STEM competency. It is a professional survival skill for every sector.</p><p>4. Human oversight is not optional in high-stakes AI deployments. Audit trails and intervention triggers must be designed in — not appended after failure.</p><p>5. Understanding how AI systems are trained matters — especially in healthcare, law, and environmental contexts where bad data produces dangerous outputs.</p><p>6. Domain knowledge, critical thinking, and governance certifications outperform tool-specific training in a market where the tools change every six months.</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. 
His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available at:</p><p>https://humansignal.io/blog/ai-governance-open-forum-never-blindly-trust-verify</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. 
Guest opinions are those of the guest alone.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>AI governance, AI literacy, AI accountability, AI policy, responsible AI, AI compliance, AI risk management, AI ethics, enterprise AI, government AI, technology leadership, deepfakes, AI hallucination, AI training data bias, human in the loop, AI oversight, AI career advice, AI governance certification, TAIMScore, GASP framework, Failure Files, Trust Gap, never blindly trust verify, Project Cerebellum, HISPI, Taiye Lambo, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/ai-governance-open-forum-never-blindly-trust-verify]]></link><guid isPermaLink="false">dbb548c0-ff74-4fcc-a63d-3d5c10e1ca57</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Sun, 29 Mar 2026 16:06:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/7761d668-b58c-4f61-846e-97288ab95992.mp3" length="41477597" type="audio/mpeg"/><itunes:duration>43:12</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/39b7f9f0-ed63-4396-a690-5214b7ccc1ab/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-aa8afddf-a73b-474c-80ca-bbcf57155de8.json" type="application/json+chapters"/><podcast:alternateEnclosure type="video/youtube" title="AI Governance Open Forum: Critical Thinking, Risk, and “Never Blindly Trust—Always Verify”"><podcast:source 
uri="https://youtu.be/zM1J45jSwdA"/></podcast:alternateEnclosure></item><item><title>Korean Air KC&amp;D: Supply Chain Breach and the Data That Never Left</title><itunes:title>Korean Air KC&amp;D: Supply Chain Breach and the Data That Never Left</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd breaks down the Korean Air / KC&amp;D supply chain breach — a forensic autopsy of what happens when data governance doesn't travel with the data.</p><p></p><p>In December 2025, Korean Air disclosed that 30,000 employee records were stolen. The breach didn't come through Korean Air's systems. It came through KC&amp;D Service — a catering subsidiary spun off and sold to private equity in 2020. Five years later, KC&amp;D was still holding Korean Air employee data on an unpatched Oracle ERP server. The Cl0p ransomware group exploited CVE-2025-61882 — CVSS 9.8 — and published 500GB on a dark web leak site.</p><p></p><p>Six TAIMScore™ controls failed simultaneously. Three domains. All because the data moved out of sight — not out of risk.</p><p></p><p>This is a Failure File™. Not a warning. 
A forensic record.</p><p></p><p>──────────────────────────────────────</p><p>KEY TOPICS</p><p>──────────────────────────────────────</p><p></p><p>∙ Supply chain governance and third-party vendor risk</p><p>∙ What happens when a divestiture doesn't include data governance</p><p>∙ The Oracle EBS zero-day and its 100+ organizational victims</p><p>∙ TAIMScore™ forensic: GOVERN, MAP, and MANAGE domain failures</p><p>∙ The one question every institution needs to ask today</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. 
Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available at:</p><p>https://theaigovernancebriefing.com/blog</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. 
Case studies are based on publicly available information and presented as pedagogical tools — not legal findings or accusations of wrongdoing.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>AI governance, supply chain risk, third-party vendor risk, data breach, Korean Air, KC&amp;D, Cl0p ransomware, Oracle EBS, CVE-2025-61882, TAIMScore, TAIM framework, Failure Files, institutional risk, data governance, divestiture risk, vendor oversight, AI accountability, GASP framework, Trust Gap, governance failure, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd breaks down the Korean Air / KC&amp;D supply chain breach — a forensic autopsy of what happens when data governance doesn't travel with the data.</p><p></p><p>In December 2025, Korean Air disclosed that 30,000 employee records were stolen. The breach didn't come through Korean Air's systems. It came through KC&amp;D Service — a catering subsidiary spun off and sold to private equity in 2020. Five years later, KC&amp;D was still holding Korean Air employee data on an unpatched Oracle ERP server. The Cl0p ransomware group exploited CVE-2025-61882 — CVSS 9.8 — and published 500GB on a dark web leak site.</p><p></p><p>Six TAIMScore™ controls failed simultaneously. Three domains. All because the data moved out of sight — not out of risk.</p><p></p><p>This is a Failure File™. Not a warning. 
A forensic record.</p><p></p><p>──────────────────────────────────────</p><p>KEY TOPICS</p><p>──────────────────────────────────────</p><p></p><p>∙ Supply chain governance and third-party vendor risk</p><p>∙ What happens when a divestiture doesn't include data governance</p><p>∙ The Oracle EBS zero-day and its 100+ organizational victims</p><p>∙ TAIMScore™ forensic: GOVERN, MAP, and MANAGE domain failures</p><p>∙ The one question every institution needs to ask today</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. 
Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available at:</p><p>https://theaigovernancebriefing.com/blog</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. 
Case studies are based on publicly available information and presented as pedagogical tools — not legal findings or accusations of wrongdoing.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>AI governance, supply chain risk, third-party vendor risk, data breach, Korean Air, KC&amp;D, Cl0p ransomware, Oracle EBS, CVE-2025-61882, TAIMScore, TAIM framework, Failure Files, institutional risk, data governance, divestiture risk, vendor oversight, AI accountability, GASP framework, Trust Gap, governance failure, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/korean-air-supply-chain-failure-file]]></link><guid isPermaLink="false">7e595b28-bf5b-4642-9f42-01d3748ebeb5</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Thu, 26 Mar 2026 03:01:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/18c5e7a4-1858-4da0-973c-e85a8544a20d.mp3" length="5539775" type="audio/mpeg"/><itunes:duration>05:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season><podcast:alternateEnclosure type="video/youtube" title="Korean Air KC&amp;D: Supply Chain Breach and the Data That Never Left"><podcast:source uri="https://youtu.be/alZllaxBTc4"/></podcast:alternateEnclosure></item><item><title>AI Governance: Balancing Innovation With Risk Management</title><itunes:title>AI Governance: Balancing Innovation With Risk Management</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd is joined by Col. 
Kathy Swacina (USA, Ret.), CIO of SherpaWerx, and Taiye Lambo, Founder and Chief Artificial Intelligence Officer of the Holistic Information Security Practitioner Institute (HISPI), to discuss Project Cerebellum, AI governance, and balancing innovation with risk management.</p><p></p><p>The conversation cuts to the structural reality: without a holistic control layer, the race to be first with AI produces institutions that are exposed before they know it. The panel covers the evolving role of CIOs in AI oversight, what it actually means to build accountability into AI systems, and why most risk management frameworks fail at execution. This is not a theoretical discussion. These are practitioners who have governed AI in high-stakes environments.</p><p></p><p>──────────────────────────────────────</p><p>KEY TOPICS</p><p>──────────────────────────────────────</p><p></p><p>∙ Project Cerebellum and holistic AI control layers</p><p>∙ The race to AI deployment vs. responsible governance</p><p>∙ The evolving role of CIOs in AI oversight</p><p>∙ Building accountability into AI systems — not appending it after deployment</p><p>∙ Risk management frameworks that survive real humans, real incentives, and real pressure</p><p></p><p>──────────────────────────────────────</p><p>GUESTS</p><p>──────────────────────────────────────</p><p></p><p>Col. 
Kathy Swacina (USA, Ret.)</p><p>CIO, SherpaWerx</p><p>Chair, HISPI AI Think Tank — Project Cerebellum</p><p>🔗 https://sherpawerx.com</p><p></p><p>Taiye Lambo</p><p>Founder &amp; Chief Artificial Intelligence Officer</p><p>Holistic Information Security Practitioner Institute (HISPI)</p><p>🔗 https://www.hispi.org</p><p>🔗 https://projectcerebellum.com</p><p>LinkedIn: linkedin.com/in/taiyelambo</p><p></p><p>TAIMScore™ Assessor Workshop</p><p>🔗 https://humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. 
Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available at:</p><p>https://theaigovernancebriefing.com/blog</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. 
Guest opinions are those of the guests alone.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>AI governance, risk management, AI innovation, Project Cerebellum, CIO leadership, AI accountability, AI policy, enterprise AI, government AI, technology leadership, AI oversight, AI control layer, AI ethics, TAIMScore, GASP framework, Trust Gap, Failure Files, HISPI, Col. Kathy Swacina, Taiye Lambo, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd is joined by Col. Kathy Swacina (USA, Ret.), CIO of SherpaWerx, and Taiye Lambo, Founder and Chief Artificial Intelligence Officer of the Holistic Information Security Practitioner Institute (HISPI), to discuss Project Cerebellum, AI governance, and balancing innovation with risk management.</p><p></p><p>The conversation cuts to the structural reality: without a holistic control layer, the race to be first with AI produces institutions that are exposed before they know it. The panel covers the evolving role of CIOs in AI oversight, what it actually means to build accountability into AI systems, and why most risk management frameworks fail at execution. This is not a theoretical discussion. These are practitioners who have governed AI in high-stakes environments.</p><p></p><p>──────────────────────────────────────</p><p>KEY TOPICS</p><p>──────────────────────────────────────</p><p></p><p>∙ Project Cerebellum and holistic AI control layers</p><p>∙ The race to AI deployment vs. 
responsible governance</p><p>∙ The evolving role of CIOs in AI oversight</p><p>∙ Building accountability into AI systems — not appending it after deployment</p><p>∙ Risk management frameworks that survive real humans, real incentives, and real pressure</p><p></p><p>──────────────────────────────────────</p><p>GUESTS</p><p>──────────────────────────────────────</p><p></p><p>Col. Kathy Swacina (USA, Ret.)</p><p>CIO, SherpaWerx</p><p>Chair, HISPI AI Think Tank — Project Cerebellum</p><p>🔗 https://sherpawerx.com</p><p></p><p>Taiye Lambo</p><p>Founder &amp; Chief Artificial Intelligence Officer</p><p>Holistic Information Security Practitioner Institute (HISPI)</p><p>🔗 https://www.hispi.org</p><p>🔗 https://projectcerebellum.com</p><p>LinkedIn: linkedin.com/in/taiyelambo</p><p></p><p>TAIMScore™ Assessor Workshop</p><p>🔗 https://humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. 
Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available at:</p><p>https://theaigovernancebriefing.com/blog</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. 
Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. Guest opinions are those of the guests alone.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>AI governance, risk management, AI innovation, Project Cerebellum, CIO leadership, AI accountability, AI policy, enterprise AI, government AI, technology leadership, AI oversight, AI control layer, AI ethics, TAIMScore, GASP framework, Trust Gap, Failure Files, HISPI, Col. Kathy Swacina, Taiye Lambo, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/ai-governance-balancing-innovation-risk-management]]></link><guid isPermaLink="false">8d4d1ec8-d06d-4132-8a20-8d75325401cc</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Wed, 25 Mar 2026 08:10:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/d8da013d-1c03-4293-8918-0edbcd9243d4.mp3" length="48474069" type="audio/mpeg"/><itunes:duration>50:30</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/91659f94-a5bd-4732-ac80-6c9ef46680c4/index.html" type="text/html"/><podcast:alternateEnclosure type="video/youtube" title="AI Governance: Balancing Innovation With Risk Management"><podcast:source uri="https://youtu.be/AtJ6pxjDdSw"/></podcast:alternateEnclosure></item><item><title>Amazon Broomway: When GPS Routes a Driver Into a Tidal Death Trap</title><itunes:title>Amazon Broomway: When GPS Routes a 
Driver Into a Tidal Death Trap</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd breaks down the Amazon delivery van reportedly stranded on the Broomway — one of Britain's most dangerous tidal tracks in Essex — after blindly following GPS directions toward Foulness Island. No alert. No override. No human in the loop.</p><p></p><p>This isn't a story about bad technology. It's a story about ungoverned automation making context-free decisions about human movement in the physical world. And it's exactly the kind of incident the HISPI Project Cerebellum AI Incidents database exists to document — so organizations can stop repeating the same failures.</p><p></p><p>──────────────────────────────────────</p><p>THE INCIDENT</p><p>──────────────────────────────────────</p><p></p><p>An Amazon delivery van followed GPS routing onto the Broomway — a tidal road across the mudflats of the Thames Estuary that floods rapidly and without visible warning. The system had no awareness of tidal zones, flood-risk roads, or environmental danger conditions. The driver had no alert, no override prompt, and no human checkpoint between the algorithm's instruction and physical execution.</p><p></p><p>The Broomway is one of the oldest roads in England, dating to the 1600s. It runs across tidal mudflats and has claimed numerous lives. It is considered one of the most dangerous roads in the United Kingdom.</p><p></p><p>──────────────────────────────────────</p><p>TAIMSCORE™ FAILURE ANALYSIS</p><p>──────────────────────────────────────</p><p></p><p>Running this incident through a TAIMScore™ lens reveals failure across three critical dimensions:</p><p></p><p>❌ Safety — FAIL</p><p>No guardrails for hazardous geographic areas. The routing system had no awareness of tidal zones, flood-risk roads, or environmental danger conditions. 
A system operating in the physical world with zero environmental context is an unacceptable safety liability.</p><p></p><p>❌ Trust — FAIL</p><p>When workers discover that guidance systems can route them into danger, trust collapses — not just in that system, but in all automated guidance. The second-order effect is that workers either override systems entirely (defeating the purpose) or follow blindly (accepting the risk). Neither is acceptable.</p><p></p><p>❌ Responsibility — FAIL</p><p>Who owns the risk when an algorithm routes a human into danger? The driver? The dispatcher? The software vendor? The organization deploying the tool? Without clear accountability architecture, no one owns it — until someone gets hurt.</p><p></p><p>──────────────────────────────────────</p><p>THE CORE THESIS</p><p>──────────────────────────────────────</p><p></p><p>The technology works exactly as designed. The governance around it does not exist.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p>→ HISPI Project Cerebellum — projectcerebellum.com</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. Case studies are based on publicly available information and presented as pedagogical tools — not legal findings or accusations of wrongdoing.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>AI governance, ungoverned automation, GPS failure, logistics AI, AI safety, AI accountability, human in the loop, AI risk, responsible AI, AI incidents, AI ethics, TAIMScore, GASP framework, Trust Gap, Failure Files, Project Cerebellum, HISPI, physical world AI, autonomous systems, AI liability, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. 
Tuboise Floyd breaks down the Amazon delivery van reportedly stranded on the Broomway — one of Britain's most dangerous tidal tracks in Essex — after blindly following GPS directions toward Foulness Island. No alert. No override. No human in the loop.</p><p></p><p>This isn't a story about bad technology. It's a story about ungoverned automation making context-free decisions about human movement in the physical world. And it's exactly the kind of incident the HISPI Project Cerebellum AI Incidents database exists to document — so organizations can stop repeating the same failures.</p><p></p><p>──────────────────────────────────────</p><p>THE INCIDENT</p><p>──────────────────────────────────────</p><p></p><p>An Amazon delivery van followed GPS routing onto the Broomway — a tidal road across the mudflats of the Thames Estuary that floods rapidly and without visible warning. The system had no awareness of tidal zones, flood-risk roads, or environmental danger conditions. The driver had no alert, no override prompt, and no human checkpoint between the algorithm's instruction and physical execution.</p><p></p><p>The Broomway is one of the oldest roads in England, dating to the 1600s. It runs across tidal mudflats and has claimed numerous lives. It is considered one of the most dangerous roads in the United Kingdom.</p><p></p><p>──────────────────────────────────────</p><p>TAIMSCORE™ FAILURE ANALYSIS</p><p>──────────────────────────────────────</p><p></p><p>Running this incident through a TAIMScore™ lens reveals failure across three critical dimensions:</p><p></p><p>❌ Safety — FAIL</p><p>No guardrails for hazardous geographic areas. The routing system had no awareness of tidal zones, flood-risk roads, or environmental danger conditions. 
A system operating in the physical world with zero environmental context is an unacceptable safety liability.</p><p></p><p>❌ Trust — FAIL</p><p>When workers discover that guidance systems can route them into danger, trust collapses — not just in that system, but in all automated guidance. The second-order effect is that workers either override systems entirely (defeating the purpose) or follow blindly (accepting the risk). Neither is acceptable.</p><p></p><p>❌ Responsibility — FAIL</p><p>Who owns the risk when an algorithm routes a human into danger? The driver? The dispatcher? The software vendor? The organization deploying the tool? Without clear accountability architecture, no one owns it — until someone gets hurt.</p><p></p><p>──────────────────────────────────────</p><p>THE CORE THESIS</p><p>──────────────────────────────────────</p><p></p><p>The technology works exactly as designed. The governance around it does not exist.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p>→ HISPI Project Cerebellum — projectcerebellum.com</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. Case studies are based on publicly available information and presented as pedagogical tools — not legal findings or accusations of wrongdoing.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>AI governance, ungoverned automation, GPS failure, logistics AI, AI safety, AI accountability, human in the loop, AI risk, responsible AI, AI incidents, AI ethics, TAIMScore, GASP framework, Trust Gap, Failure Files, Project Cerebellum, HISPI, physical world AI, autonomous systems, AI liability, Dr. 
Tuboise Floyd, Human Signal, The AI Governance Briefing</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/gps-drives-you-into-the-sea-ungoverned-automation]]></link><guid isPermaLink="false">cf6e394b-0cdb-4ae4-8be0-847ade5fa4d0</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Fri, 06 Mar 2026 00:10:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/dda15961-86e1-4ec7-9d83-bdb9363ef138.mp3" length="2240405" type="audio/mpeg"/><itunes:duration>02:20</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season><podcast:alternateEnclosure type="video/youtube" title="When Your GPS Happily Drives You Into The Sea"><podcast:source uri="https://youtu.be/_I5WNExh6Fc"/></podcast:alternateEnclosure></item><item><title>Making Digital Accessibility Work In The AI Era</title><itunes:title>Making Digital Accessibility Work In The AI Era</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd is joined by Dr. Michele A. Williams — UX and accessibility consultant and author of Accessible UX Research — to examine why digital accessibility failures across 97% of the web create equity, resilience, and trust risks that AI can magnify at scale.</p><p></p><p>Dr. Williams contrasts the medical and social models of disability, addresses ableism and language (person-first vs. identity-first), and argues that checklists cannot replace lived experience or disabled participation in UX research and leadership. 
The conversation covers how inaccessible code tools and AI trained on inaccessible data produce compounding issues — missing labels, broken keyboard paths, poor semantic structure — and warns against disability dongles: technology solutions that add a layer instead of removing the systemic barrier. Dr. Williams closes with a practical 90-day plan: establish a baseline with scans and process mapping, change defaults, and normalize inclusion from the inside out.</p><p></p><p>Nothing about us without us.</p><p></p><p>──────────────────────────────────────</p><p>CHAPTERS</p><p>──────────────────────────────────────</p><p></p><p>00:00 Accessibility Wake Up Call</p><p>00:57 Meet Dr. Michele Williams</p><p>02:07 Equity, Resilience, Trust</p><p>04:01 Disability Mindset Shift</p><p>05:59 Why Lived Experience Matters</p><p>07:14 Person First vs. Identity First</p><p>13:01 AI Promise and Harm</p><p>15:23 Social Model In Practice</p><p>19:58 Beyond Screen Readers</p><p>25:02 Exclusion Inside Real Teams</p><p>26:58 Semantic Code Chaos</p><p>28:32 Standards Lag Tech</p><p>29:12 Siri Zoom Panic</p><p>31:23 Disability Dongles</p><p>33:36 AI Hype Reality</p><p>37:25 Beyond Checklists</p><p>40:32 90 Day Baseline</p><p>42:30 Change Defaults</p><p>44:17 Normalize Inclusion</p><p>46:47 Nothing About Us</p><p>49:13 One Action This Week</p><p>50:35 Closing Credits</p><p></p><p>──────────────────────────────────────</p><p>GUEST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Michele A. 
Williams</p><p>UX and Accessibility Consultant</p><p>Author, Accessible UX Research — Smashing Magazine</p><p>🔗 https://mawconsultingllc.com</p><p>LinkedIn: linkedin.com/in/micheleawilliams1</p><p></p><p>Accessible UX Research</p><p>Publisher: Smashing Magazine</p><p>🔗 https://www.smashingmagazine.com/2025/06/accessible-ux-research-pre-release/</p><p></p><p>──────────────────────────────────────</p><p>WATCH ON YOUTUBE</p><p>──────────────────────────────────────</p><p></p><p>🎥 https://youtu.be/pxXLNsbyJhc?si=Dt9mf2HK4AtyCx6_</p><p></p><p>──────────────────────────────────────</p><p>KEY TAKEAWAYS</p><p>──────────────────────────────────────</p><p></p><p>1. 97% of the web contains accessibility barriers that actively exclude disabled individuals — this is not a niche compliance issue, it is a structural governance failure at scale.</p><p>2. Accessibility is not a checklist. Genuine inclusion requires disabled participation in UX research, leadership, and product decisions from the start.</p><p>3. AI trained on inaccessible data reproduces and amplifies inaccessibility. The governance problem precedes the technology problem.</p><p>4. Disability dongles — technology layered on top of broken systems — are not solutions. They are evidence that the underlying barrier was never addressed.</p><p>5. 
Organizations serious about inclusion must change defaults, not add accommodations after the fact.</p><p></p><p>──────────────────────────────────────</p><p>COMPANIES REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>Smashing Magazine · Accessibe</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. 
They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. 
Guest opinions are those of the guest alone.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>digital accessibility, AI accessibility, UX research, accessible design, disability inclusion, web accessibility, WCAG, screen readers, semantic HTML, keyboard navigation, disability dongles, social model of disability, medical model of disability, person first language, identity first language, ableism, inclusive design, AI bias, AI training data, AI governance, equity resilience trust, 97 percent web accessibility, nothing about us without us, Dr. Michele Williams, Accessible UX Research, Smashing Magazine, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd is joined by Dr. Michele A. Williams — UX and accessibility consultant and author of Accessible UX Research — to examine why digital accessibility failures across 97% of the web create equity, resilience, and trust risks that AI can magnify at scale.</p><p></p><p>Dr. Williams contrasts the medical and social models of disability, addresses ableism and language (person-first vs. identity-first), and argues that checklists cannot replace lived experience or disabled participation in UX research and leadership. The conversation covers how inaccessible code tools and AI trained on inaccessible data produce compounding issues — missing labels, broken keyboard paths, poor semantic structure — and warns against disability dongles: technology solutions that add a layer instead of removing the systemic barrier. Dr. 
Williams closes with a practical 90-day plan: establish a baseline with scans and process mapping, change defaults, and normalize inclusion from the inside out.</p><p></p><p>Nothing about us without us.</p><p></p><p>──────────────────────────────────────</p><p>CHAPTERS</p><p>──────────────────────────────────────</p><p></p><p>00:00 Accessibility Wake Up Call</p><p>00:57 Meet Dr. Michele Williams</p><p>02:07 Equity, Resilience, Trust</p><p>04:01 Disability Mindset Shift</p><p>05:59 Why Lived Experience Matters</p><p>07:14 Person First vs. Identity First</p><p>13:01 AI Promise and Harm</p><p>15:23 Social Model In Practice</p><p>19:58 Beyond Screen Readers</p><p>25:02 Exclusion Inside Real Teams</p><p>26:58 Semantic Code Chaos</p><p>28:32 Standards Lag Tech</p><p>29:12 Siri Zoom Panic</p><p>31:23 Disability Dongles</p><p>33:36 AI Hype Reality</p><p>37:25 Beyond Checklists</p><p>40:32 90 Day Baseline</p><p>42:30 Change Defaults</p><p>44:17 Normalize Inclusion</p><p>46:47 Nothing About Us</p><p>49:13 One Action This Week</p><p>50:35 Closing Credits</p><p></p><p>──────────────────────────────────────</p><p>GUEST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Michele A. Williams</p><p>UX and Accessibility Consultant</p><p>Author, Accessible UX Research — Smashing Magazine</p><p>🔗 https://mawconsultingllc.com</p><p>LinkedIn: linkedin.com/in/micheleawilliams1</p><p></p><p>Accessible UX Research</p><p>Publisher: Smashing Magazine</p><p>🔗 https://www.smashingmagazine.com/2025/06/accessible-ux-research-pre-release/</p><p></p><p>──────────────────────────────────────</p><p>WATCH ON YOUTUBE</p><p>──────────────────────────────────────</p><p></p><p>🎥 https://youtu.be/pxXLNsbyJhc?si=Dt9mf2HK4AtyCx6_</p><p></p><p>──────────────────────────────────────</p><p>KEY TAKEAWAYS</p><p>──────────────────────────────────────</p><p></p><p>1. 
97% of the web contains accessibility barriers that actively exclude disabled individuals — this is not a niche compliance issue, it is a structural governance failure at scale.</p><p>2. Accessibility is not a checklist. Genuine inclusion requires disabled participation in UX research, leadership, and product decisions from the start.</p><p>3. AI trained on inaccessible data reproduces and amplifies inaccessibility. The governance problem precedes the technology problem.</p><p>4. Disability dongles — technology layered on top of broken systems — are not solutions. They are evidence that the underlying barrier was never addressed.</p><p>5. Organizations serious about inclusion must change defaults, not add accommodations after the fact.</p><p></p><p>──────────────────────────────────────</p><p>COMPANIES REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>Smashing Magazine · Accessibe</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. 
Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. 
Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. Guest opinions are those of the guest alone.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>digital accessibility, AI accessibility, UX research, accessible design, disability inclusion, web accessibility, WCAG, screen readers, semantic HTML, keyboard navigation, disability dongles, social model of disability, medical model of disability, person first language, identity first language, ableism, inclusive design, AI bias, AI training data, AI governance, equity resilience trust, 97 percent web accessibility, nothing about us without us, Dr. Michele Williams, Accessible UX Research, Smashing Magazine, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/digital-accessibility-ai-era-dr-michele-williams]]></link><guid isPermaLink="false">723a6e84-34f5-48d5-9744-aa93b122fbe4</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Mon, 02 Mar 2026 05:20:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/5ef55ce6-9740-4e54-a644-a530e84316f4.mp3" length="49804433" type="audio/mpeg"/><itunes:duration>51:53</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/46a8cb21-fbf5-4c7a-bb6b-be8c1e1c3c4c/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-32273633-2c60-48a8-b9ef-7f5e1efedd03.json" 
type="application/json+chapters"/><podcast:alternateEnclosure type="video/youtube" title="Making Digital Accessibility Work In The AI Era | Dr. Michele A. Williams The AI Governance Briefing"><podcast:source uri="https://youtu.be/mz-YDrfaBrQ"/></podcast:alternateEnclosure></item><item><title>Digital Accessibility In An AI World</title><itunes:title>Digital Accessibility In An AI World</itunes:title><description><![CDATA[<p>Digital Accessibility In An AI World — 2026</p><p></p><p>As a podcast host exploring the intersection of humanity and technology, I keep asking: Are we really including everyone in our digital transformation?</p><p></p><p>Dr. Michele A. Williams — UX and Accessibility Consultant, author of Accessible UX Research (Smashing Magazine) — challenges us to move beyond checklists and design with, not for, disabled users.</p><p></p><p>Now live on The AI Governance Briefing: Dr. Michele A. Williams joins Dr. Tuboise Floyd to break down how to make digital accessibility work in an AI world.</p><p></p><p>🔗 https://mawconsultingllc.com</p><p></p><p>Accessibility is not just about digital spaces. Accessibility is about fundamental human rights.</p><p></p><p>What leaders and operators need to know:</p><p>✓ How to spot invisible exclusion in UX research and code</p><p>✓ Moving beyond compliance checklists to build truly inclusive systems</p><p>✓ Using AI for captions and alt text without creating new barriers</p><p>✓ The 90-day accessibility baseline your team can sustain</p><p></p><p>Because real inclusion means ensuring everyone has access to the places and systems they need — whether digital or physical.</p><p></p><p>Nothing about us without us.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. 
No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #DigitalAccessibility #ArtificialIntelligence #InclusiveDesign #UXResearch #TechLeadership #Accessibility #AIGovernance #NothingAboutUsWithoutUs #AccessibleUXResearch #DisabilityInclusion</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>Digital Accessibility In An AI World — 2026</p><p></p><p>As a podcast host exploring the intersection of humanity and technology, I keep asking: Are we really including everyone in our digital transformation?</p><p></p><p>Dr. Michele A. Williams — UX and Accessibility Consultant, author of Accessible UX Research (Smashing Magazine) — challenges us to move beyond checklists and design with, not for, disabled users.</p><p></p><p>Now live on The AI Governance Briefing: Dr. Michele A. Williams joins Dr. 
Tuboise Floyd to break down how to make digital accessibility work in an AI world.</p><p></p><p>🔗 https://mawconsultingllc.com</p><p></p><p>Accessibility is not just about digital spaces. Accessibility is about fundamental human rights.</p><p></p><p>What leaders and operators need to know:</p><p>✓ How to spot invisible exclusion in UX research and code</p><p>✓ Moving beyond compliance checklists to build truly inclusive systems</p><p>✓ Using AI for captions and alt text without creating new barriers</p><p>✓ The 90-day accessibility baseline your team can sustain</p><p></p><p>Because real inclusion means ensuring everyone has access to the places and systems they need — whether digital or physical.</p><p></p><p>Nothing about us without us.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. 
Protocol™.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #DigitalAccessibility #ArtificialIntelligence #InclusiveDesign #UXResearch #TechLeadership #Accessibility #AIGovernance #NothingAboutUsWithoutUs #AccessibleUXResearch #DisabilityInclusion</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/digital-accessibility-ai-world-2026]]></link><guid isPermaLink="false">326893ee-c07d-4502-927a-7ccc813c6b8c</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Mon, 23 Feb 2026 00:07:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/b3e745a6-5c18-4a98-89ff-c12f94ed8795.mp3" length="1454224" type="audio/mpeg"/><itunes:duration>01:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season><podcast:alternateEnclosure type="video/youtube" title="Digital Accessibility In An AI World | The AI Governance Briefing"><podcast:source uri="https://youtu.be/Lm3-AUf54MA"/></podcast:alternateEnclosure></item><item><title>AI Activism For Insiders This Is Not Ethics Work</title><itunes:title>AI Activism For Insiders This Is Not Ethics Work</itunes:title><description><![CDATA[<p>AI Activism For Insiders — This Is Not Ethics Work | 2026</p><p></p><p>🧠 About Human Signal</p><p></p><p>Human Signal is an independent AI governance research and media platform. Through the L.E.A.C. Protocol™, GASP™, Noise Discipline, and the Workflow Thesis, we reverse-engineer where governance erodes under capital pressure — and where external oversight must be applied. Independence is not a feature. 
It is the product.</p><p></p><p>🔗 humansignal.io</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #AIActivism #ResponsibleAI #AIAccountability #GovernanceCollapse #NoiseDiscipline #LEACProtocol #GASP #WorkflowThesis #FrontierAI #AIPolicy #AIEthics #AIInsiders</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>AI Activism For Insiders — This Is Not Ethics Work | 2026</p><p></p><p>🧠 About Human Signal</p><p></p><p>Human Signal is an independent AI governance research and media platform. Through the L.E.A.C. Protocol™, GASP™, Noise Discipline, and the Workflow Thesis, we reverse-engineer where governance erodes under capital pressure — and where external oversight must be applied. Independence is not a feature. It is the product.</p><p></p><p>🔗 humansignal.io</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #AIActivism #ResponsibleAI #AIAccountability #GovernanceCollapse #NoiseDiscipline #LEACProtocol #GASP #WorkflowThesis #FrontierAI #AIPolicy #AIEthics #AIInsiders</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/ai-activism-insiders-not-ethics-work]]></link><guid isPermaLink="false">12fafff6-354b-4045-a94d-f99150dcf446</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Fri, 20 Feb 2026 18:16:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/74f51a26-60c7-4bb5-99e6-1084b251416a.mp3" length="1899768" 
type="audio/mpeg"/><itunes:duration>01:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season><podcast:alternateEnclosure type="video/youtube" title="AI Activism for Insiders: This Is Not Ethics Work | The AI Governance Briefing"><podcast:source uri="https://youtu.be/ZUBm7wbh0dI"/></podcast:alternateEnclosure></item><item><title>Anthropic Safeguards Chief Resigns: What Governance Collapse Looks Like From Inside</title><itunes:title>Anthropic Safeguards Chief Resigns: What Governance Collapse Looks Like From Inside</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd examines the resignation of Mrinank Sharma — Anthropic's head of safeguards research — on February 9, 2026, and what it reveals about what happens when billion-dollar infrastructure commitments collide with safety protocols.</p><p></p><p>This is not a personnel story. It is organizational telemetry. Sharma's departure tells us everything about the gap between stated safety commitments and operational reality — and why that gap is exactly where systemic risk accumulates.</p><p></p><p>──────────────────────────────────────</p><p>KEY TOPICS</p><p>──────────────────────────────────────</p><p></p><p>The Signal, Not Just the Personnel</p><p>∙ Mrinank Sharma's resignation as organizational telemetry</p><p>∙ Sharma's critical research areas: reality distortion in AI chatbots, AI-assisted bioterrorism defense, and sycophancy prevention</p><p>∙ Why departures from safety leadership roles are data points in governance collapse patterns — not random exits</p><p></p><p>Infrastructure Economics vs. 
Safety</p><p>∙ The capital-intensive reality: lithography, GPUs, data centers, and energy</p><p>∙ How financial models lock organizations into velocity-prioritizing postures</p><p>∙ The mechanism of slow-motion governance collapse</p><p></p><p>The Public-Private Governance Gap</p><p>∙ U.S. Department of Labor's AI Literacy Framework and public-side initiatives</p><p>∙ The irony of raising the AI literacy floor while the ceiling cracks inside frontier labs</p><p>∙ Where systemic risk accumulates in this disconnect</p><p></p><p>The L.E.A.C. Protocol™ Applied</p><p>∙ How Lithography, Energy, Arbitrage, and Cooling create the capital pressure that drives governance erosion</p><p>∙ Why organizations don't abandon safety — they redefine it, water it down, or sideline the people holding the line</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p>→ Project Cerebellum — projectcerebellum.com</p><p>→ U.S. Department of Labor AI Literacy Framework — https://www.dol.gov/sites/dolgov/files/ETA/advisories/TEN/2025/TEN%2007-25/TEN%2007-25%20(complete%20document).pdf</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #AIEthics #Anthropic #AISafety #AIPolicy #FrontierAI #GovernanceCollapse #AIAccountability #AIInfrastructure #LEACProtocol #GASP #TrustGap #FailureFiles #TAIMScore #ProjectCerebellum #MrinankSharma #AIResearch #NoiseDiscipline</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd examines the resignation of Mrinank Sharma — Anthropic's head of safeguards research — on February 9, 2026, and what it reveals about what happens when billion-dollar infrastructure commitments collide with safety protocols.</p><p></p><p>This is not a personnel story. 
It is organizational telemetry. Sharma's departure tells us everything about the gap between stated safety commitments and operational reality — and why that gap is exactly where systemic risk accumulates.</p><p></p><p>──────────────────────────────────────</p><p>KEY TOPICS</p><p>──────────────────────────────────────</p><p></p><p>The Signal, Not Just the Personnel</p><p>∙ Mrinank Sharma's resignation as organizational telemetry</p><p>∙ Sharma's critical research areas: reality distortion in AI chatbots, AI-assisted bioterrorism defense, and sycophancy prevention</p><p>∙ Why departures from safety leadership roles are data points in governance collapse patterns — not random exits</p><p></p><p>Infrastructure Economics vs. Safety</p><p>∙ The capital-intensive reality: lithography, GPUs, data centers, and energy</p><p>∙ How financial models lock organizations into velocity-prioritizing postures</p><p>∙ The mechanism of slow-motion governance collapse</p><p></p><p>The Public-Private Governance Gap</p><p>∙ U.S. Department of Labor's AI Literacy Framework and public-side initiatives</p><p>∙ The irony of raising the AI literacy floor while the ceiling cracks inside frontier labs</p><p>∙ Where systemic risk accumulates in this disconnect</p><p></p><p>The L.E.A.C. Protocol™ Applied</p><p>∙ How Lithography, Energy, Arbitrage, and Cooling create the capital pressure that drives governance erosion</p><p>∙ Why organizations don't abandon safety — they redefine it, water it down, or sideline the people holding the line</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ L.E.A.C. 
Protocol™ — humansignal.io/leac-protocol</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p>→ Project Cerebellum — projectcerebellum.com</p><p>→ U.S. Department of Labor AI Literacy Framework — https://www.dol.gov/sites/dolgov/files/ETA/advisories/TEN/2025/TEN%2007-25/TEN%2007-25%20(complete%20document).pdf</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. 
It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #AIEthics #Anthropic #AISafety #AIPolicy #FrontierAI #GovernanceCollapse #AIAccountability #AIInfrastructure #LEACProtocol #GASP #TrustGap #FailureFiles #TAIMScore #ProjectCerebellum #MrinankSharma #AIResearch #NoiseDiscipline</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/anthropic-exodus-governance-collapse]]></link><guid isPermaLink="false">76cb7a80-f8f4-404c-a7ed-19c33636b39d</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Fri, 20 Feb 2026 10:47:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/7c8d6d25-83f6-49a6-9aec-ae58ff6230b3.mp3" length="6409966" type="audio/mpeg"/><itunes:duration>06:41</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/80dfaea7-8f28-40e8-8bf2-978be56bcbab/index.html" type="text/html"/><podcast:alternateEnclosure type="video/youtube" title="Anthropic Safeguards Chief Resigns: What Governance Collapse Looks Like From Inside"><podcast:source uri="https://youtu.be/G5L5zqsF1Lk"/></podcast:alternateEnclosure></item><item><title>AI Contracts Are Moving Faster Than Governance: The Gap Where Failures Live</title><itunes:title>AI Contracts Are Moving Faster Than Governance: The Gap Where Failures Live</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. 
Tuboise Floyd asks the question every institutional operator is living right now: Is your leadership signing AI contracts faster than they're building governance?</p><p></p><p>That gap is where the lawsuits, scandals, and quiet institutional failures live. It's how you wake up with a "successful AI pilot" and a mess in risk, workforce, and public trust. The governance gap is not a technology problem. It is a structural problem — and it is solvable from the inside.</p><p></p><p>──────────────────────────────────────</p><p>THE CORE PROBLEM</p><p>──────────────────────────────────────</p><p></p><p>Organizations are racing to deploy AI without the control systems, oversight mechanisms, and governance frameworks needed to manage the technology accountably. The result is a dangerous gap between what leadership promises and what operations can actually deliver. Mid-career operators are absorbing the exposure while the contracts keep moving.</p><p></p><p>──────────────────────────────────────</p><p>WHO THIS IS FOR</p><p>──────────────────────────────────────</p><p></p><p>∙ Mid-career operators inside AI-disrupted institutions</p><p>∙ Federal IT leaders watching risky deployments unfold</p><p>∙ University CIOs managing AI rollouts without adequate governance</p><p>∙ Enterprise strategists caught between innovation pressure and risk reality</p><p>∙ Policy teams trying to create guardrails after the fact</p><p></p><p>──────────────────────────────────────</p><p>KEY TAKEAWAY</p><p>──────────────────────────────────────</p><p></p><p>You don't have to wait for leadership to figure this out. Mid-career operators have the leverage to intervene, redirect, and demand better governance before the failures compound. This isn't about slowing down innovation. 
It's about surviving it.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. 
It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #GovernanceGap #AIGovernance #AIContracts #RiskManagement #InstitutionalAI #FederalIT #EnterpriseAI #AIDeployment #AIOversight #GASP #TrustGap #WorkflowThesis #LEACProtocol #FailureFiles #TAIMScore #AIPolicy #AIAccountability #BuilderClass</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd asks the question every institutional operator is living right now: Is your leadership signing AI contracts faster than they're building governance?</p><p></p><p>That gap is where the lawsuits, scandals, and quiet institutional failures live. It's how you wake up with a "successful AI pilot" and a mess in risk, workforce, and public trust. The governance gap is not a technology problem. It is a structural problem — and it is solvable from the inside.</p><p></p><p>──────────────────────────────────────</p><p>THE CORE PROBLEM</p><p>──────────────────────────────────────</p><p></p><p>Organizations are racing to deploy AI without the control systems, oversight mechanisms, and governance frameworks needed to manage the technology accountably. The result is a dangerous gap between what leadership promises and what operations can actually deliver. 
Mid-career operators are absorbing the exposure while the contracts keep moving.</p><p></p><p>──────────────────────────────────────</p><p>WHO THIS IS FOR</p><p>──────────────────────────────────────</p><p></p><p>∙ Mid-career operators inside AI-disrupted institutions</p><p>∙ Federal IT leaders watching risky deployments unfold</p><p>∙ University CIOs managing AI rollouts without adequate governance</p><p>∙ Enterprise strategists caught between innovation pressure and risk reality</p><p>∙ Policy teams trying to create guardrails after the fact</p><p></p><p>──────────────────────────────────────</p><p>KEY TAKEAWAY</p><p>──────────────────────────────────────</p><p></p><p>You don't have to wait for leadership to figure this out. Mid-career operators have the leverage to intervene, redirect, and demand better governance before the failures compound. This isn't about slowing down innovation. It's about surviving it.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #GovernanceGap #AIGovernance #AIContracts #RiskManagement #InstitutionalAI #FederalIT #EnterpriseAI #AIDeployment #AIOversight #GASP #TrustGap #WorkflowThesis #LEACProtocol #FailureFiles #TAIMScore #AIPolicy #AIAccountability #BuilderClass</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/governance-gap-ai-contracts-outpace-control-systems]]></link><guid isPermaLink="false">4fd055c8-7ed5-4bca-ab38-b66833cfc292</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Sat, 14 Feb 2026 22:04:00 -0400</pubDate><enclosure 
url="https://op3.dev/e/episodes.captivate.fm/episode/b8ad1326-5116-4bd6-9b96-520ebab12dbc.mp3" length="515487" type="audio/mpeg"/><itunes:duration>00:32</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/baaf3e93-ab78-4cc8-b9e1-085418317b83/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/baaf3e93-ab78-4cc8-b9e1-085418317b83/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/baaf3e93-ab78-4cc8-b9e1-085418317b83/index.html" type="text/html"/><podcast:alternateEnclosure type="video/youtube" title="Is your leadership signing AI contracts faster than they’re building governance?"><podcast:source uri="https://youtu.be/x6gSLYSDZ-8"/></podcast:alternateEnclosure></item><item><title>Noise Discipline: Social Media Destroys Strategic Focus</title><itunes:title>Noise Discipline: Social Media Destroys Strategic Focus</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd breaks down Noise Discipline — and why builders must treat social feeds as enemy territory.</p><p></p><p>High-speed social media feeds are not neutral infrastructure. They degrade critical thinking, soften fact-checking instincts, increase cognitive stress, and produce source amnesia — the condition where you forget where ideas came from and mistake them for your own. For institutional operators making consequential decisions about AI deployment, that is not a productivity problem. 
It is a governance risk.</p><p></p><p>──────────────────────────────────────</p><p>KEY TOPICS</p><p>──────────────────────────────────────</p><p></p><p>∙ How social feeds degrade critical thinking capabilities</p><p>∙ The real cost of source amnesia on strategic decision-making</p><p>∙ Why constant scrolling softens fact-checking instincts</p><p>∙ The stress cascade triggered by high-velocity information environments</p><p>∙ Noise Discipline as a cognitive defense framework for operators</p><p></p><p>──────────────────────────────────────</p><p>THE FOUR INTERVENTIONS</p><p>──────────────────────────────────────</p><p></p><p>Treat feeds like radiation zones:</p><p></p><p>∙ Set timers and enforce strict exposure limits</p><p>∙ Question who wrote what and what they're selling</p><p>∙ Skip anything that doesn't help you build</p><p>∙ Reclaim your attention as a strategic asset</p><p></p><p>This isn't about productivity hacks. It's about survival in an environment designed to colonize your attention.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #NoiseDiscipline #AttentionManagement #BuilderMindset #SourceAmnesia #CriticalThinking #InformationHygiene #VendorCapture #CognitiveDefense #AIGovernance #GASP #TrustGap #WorkflowThesis #LEACProtocol #StrategicFocus #DeepWork #BuilderClass</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd breaks down Noise Discipline — and why builders must treat social feeds as enemy territory.</p><p></p><p>High-speed social media feeds are not neutral infrastructure. 
They degrade critical thinking, soften fact-checking instincts, increase cognitive stress, and produce source amnesia — the condition where you forget where ideas came from and mistake them for your own. For institutional operators making consequential decisions about AI deployment, that is not a productivity problem. It is a governance risk.</p><p></p><p>──────────────────────────────────────</p><p>KEY TOPICS</p><p>──────────────────────────────────────</p><p></p><p>∙ How social feeds degrade critical thinking capabilities</p><p>∙ The real cost of source amnesia on strategic decision-making</p><p>∙ Why constant scrolling softens fact-checking instincts</p><p>∙ The stress cascade triggered by high-velocity information environments</p><p>∙ Noise Discipline as a cognitive defense framework for operators</p><p></p><p>──────────────────────────────────────</p><p>THE FOUR INTERVENTIONS</p><p>──────────────────────────────────────</p><p></p><p>Treat feeds like radiation zones:</p><p></p><p>∙ Set timers and enforce strict exposure limits</p><p>∙ Question who wrote what and what they're selling</p><p>∙ Skip anything that doesn't help you build</p><p>∙ Reclaim your attention as a strategic asset</p><p></p><p>This isn't about productivity hacks. It's about survival in an environment designed to colonize your attention.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. 
Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #NoiseDiscipline #AttentionManagement #BuilderMindset #SourceAmnesia #CriticalThinking #InformationHygiene #VendorCapture #CognitiveDefense #AIGovernance #GASP #TrustGap #WorkflowThesis #LEACProtocol #StrategicFocus #DeepWork #BuilderClass</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/noise-discipline-social-media-strategic-focus]]></link><guid isPermaLink="false">460bdcd1-4408-4f70-942c-e26b9b844163</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Fri, 06 Feb 2026 13:38:00 -0400</pubDate><enclosure 
url="https://op3.dev/e/episodes.captivate.fm/episode/d6e7b9a5-30d3-40f8-856b-62107b788dbe.mp3" length="1236885" type="audio/mpeg"/><itunes:duration>01:17</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season></item><item><title>Aging Power Grids Meet Autonomous AI: The Infrastructure Breaking Point</title><itunes:title>Aging Power Grids Meet Autonomous AI: The Infrastructure Breaking Point</itunes:title><description><![CDATA[<p><strong>EPISODE DESCRIPTION</strong></p><p></p><p><strong>🎧 When Physics Meets Agentic AI: The Infrastructure Breaking Point</strong></p><p></p><p><strong>We explore the collision between aging, fragile infrastructure and autonomous AI systems making real-time decisions. We discuss how climate stress, overloaded grids, and agentic AI operating without full human oversight create cascading failure risks across universities, hospitals, and federal agencies.</strong></p><p></p><p><strong>Critical Questions Explored:</strong></p><ul><li><strong>​ What happens when autonomous AI meets 50-year-old infrastructure? </strong></li><li><strong>​ How climate stress amplifies the risks of agentic systems </strong></li><li><strong>​ Why overloaded power grids become AI failure multipliers </strong></li><li><strong>​ The hidden vulnerability in universities, hospitals, and federal agencies </strong></li></ul><br/><p></p><p><strong>What Actually Fails First?</strong></p><p></p><p><strong>When physics meets automation, we examine three failure points:</strong></p><ul><li><strong>​ The hardware: aging infrastructure that can't keep pace </strong></li><li><strong>​ Organizational structures: governance models built for slower systems </strong></li><li><strong>​ Human control itself: when oversight becomes impossible at AI speed </strong></li></ul><br/><p></p><p><strong>This isn't theoretical. 
This is the breaking point that institutions are racing toward right now.</strong></p><p></p><p><strong>SUBSCRIBE &amp; SUPPORT</strong></p><p></p><p><strong>Subscribe now to lock in the feed. This isn't just content; it's a continuing briefing for the Builder Class.</strong></p><p></p><p><strong>Support Human Signal: </strong></p><p><strong>Help fuel six months of new episodes, visual briefs, and honest playbooks. </strong></p><p><strong>🔗 https://humansignal.com/support</strong></p><p></p><p><strong>Every contribution sustains the signal.</strong></p><p></p><p><strong>ABOUT THE HOST</strong></p><p></p><p><strong>Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.</strong></p><p></p><p><strong>PRODUCTION NOTES</strong></p><p></p><p><strong>Host &amp; Producer: Dr. Tuboise Floyd </strong></p><p><strong>Creative Director: Jeremy Jarvis</strong></p><p></p><p><strong>Tech Specs: </strong></p><p><strong>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</strong></p><p></p><p><strong>CONNECT</strong></p><p><strong>LinkedIn: linkedin.com/in/tuboise</strong></p><p><strong>Email: tuboise@theaigovernancebriefing.com</strong></p><p></p><p><strong>TRANSCRIPT</strong></p><p><strong>Full transcript available upon request at hello@theaigovernancebriefing.com</strong></p><p></p><p><strong>TAGS/KEYWORDS</strong></p><p></p><p><strong>Agentic AI, Infrastructure Risk, Critical Infrastructure, AI Safety, Climate Stress, Power Grid Resilience, Autonomous Systems, Cascading Failures, Federal AI, Healthcare AI, University Systems, Enterprise Risk</strong></p><p></p><p><strong>HASHTAGS</strong></p><p></p><p><strong>#AgenticAI #InfrastructureRisk #AIGovernance #CriticalInfrastructure #ClimateRisk #HumanSignal #AutonomousSystems #EnterpriseAI #CascadingFailure #AIPolicy</strong></p><p></p><p><strong>LEGAL</strong></p><p></p><p><strong>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.</strong></p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p><strong>EPISODE DESCRIPTION</strong></p><p></p><p><strong>🎧 When Physics Meets Agentic AI: The Infrastructure Breaking Point</strong></p><p></p><p><strong>We explore the collision between aging, fragile infrastructure and autonomous AI systems making real-time decisions. We discuss how climate stress, overloaded grids, and agentic AI operating without full human oversight create cascading failure risks across universities, hospitals, and federal agencies.</strong></p><p></p><p><strong>Critical Questions Explored:</strong></p><ul><li><strong>​ What happens when autonomous AI meets 50-year-old infrastructure? 
</strong></li><li><strong>​ How climate stress amplifies the risks of agentic systems </strong></li><li><strong>​ Why overloaded power grids become AI failure multipliers </strong></li><li><strong>​ The hidden vulnerability in universities, hospitals, and federal agencies </strong></li></ul><br/><p></p><p><strong>What Actually Fails First?</strong></p><p></p><p><strong>When physics meets automation, we examine three failure points:</strong></p><ul><li><strong>​ The hardware: aging infrastructure that can't keep pace </strong></li><li><strong>​ Organizational structures: governance models built for slower systems </strong></li><li><strong>​ Human control itself: when oversight becomes impossible at AI speed </strong></li></ul><br/><p></p><p><strong>This isn't theoretical. This is the breaking point that institutions are racing toward right now.</strong></p><p></p><p><strong>SUBSCRIBE &amp; SUPPORT</strong></p><p></p><p><strong>Subscribe now to lock in the feed. This isn't just content; it's a continuing briefing for the Builder Class.</strong></p><p></p><p><strong>Support Human Signal: </strong></p><p><strong>Help fuel six months of new episodes, visual briefs, and honest playbooks. </strong></p><p><strong>🔗 https://humansignal.com/support</strong></p><p></p><p><strong>Every contribution sustains the signal.</strong></p><p></p><p><strong>ABOUT THE HOST</strong></p><p></p><p><strong>Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.</strong></p><p></p><p><strong>PRODUCTION NOTES</strong></p><p></p><p><strong>Host &amp; Producer: Dr. 
Tuboise Floyd </strong></p><p><strong>Creative Director: Jeremy Jarvis</strong></p><p></p><p><strong>Tech Specs: </strong></p><p><strong>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</strong></p><p></p><p><strong>CONNECT</strong></p><p><strong>LinkedIn: linkedin.com/in/tuboise</strong></p><p><strong>Email: tuboise@theaigovernancebriefing.com</strong></p><p></p><p><strong>TRANSCRIPT</strong></p><p><strong>Full transcript available upon request at hello@theaigovernancebriefing.com</strong></p><p></p><p><strong>TAGS/KEYWORDS</strong></p><p></p><p><strong>Agentic AI, Infrastructure Risk, Critical Infrastructure, AI Safety, Climate Stress, Power Grid Resilience, Autonomous Systems, Cascading Failures, Federal AI, Healthcare AI, University Systems, Enterprise Risk</strong></p><p></p><p><strong>HASHTAGS</strong></p><p></p><p><strong>#AgenticAI #InfrastructureRisk #AIGovernance #CriticalInfrastructure #ClimateRisk #HumanSignal #AutonomousSystems #EnterpriseAI #CascadingFailure #AIPolicy</strong></p><p></p><p><strong>LEGAL</strong></p><p></p><p><strong>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. 
Protocol™.</strong></p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/agentic-ai-aging-infrastructure-breaking-point]]></link><guid isPermaLink="false">eeb3b58f-322b-4eba-97f6-96fb0a051a0a</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Thu, 05 Feb 2026 14:08:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/d13049c4-7c14-40f2-b2b2-84056f9f1416.mp3" length="1315879" type="audio/mpeg"/><itunes:duration>01:22</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season></item><item><title>Tomorrow&apos;s War: Our Children Pay the AI Debt We&apos;re Running Now</title><itunes:title>Tomorrow&apos;s War: Our Children Pay the AI Debt We&apos;re Running Now</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd delivers Briefing 005: Tomorrow's War — Our Children Pay Our AI Debt.</p><p></p><p>Tomorrow's war won't look like soldiers fighting beasts with long teeth. 
It will look like our children quietly paying the bill for our decision to unleash ungoverned AI and call it "progress."</p><p></p><p>We are the first generation to hand machines the keys to our attention, our labor markets, and our democracy — and then shrug at the fine print.</p><p></p><p>──────────────────────────────────────</p><p>THE REAL COST</p><p>──────────────────────────────────────</p><p></p><p>We cash the convenience and productivity now while the real cost is deferred:</p><p></p><p>∙ Their mental health</p><p>∙ Their privacy</p><p>∙ Their ability to tell what's real</p><p>∙ Their leverage in a world run by systems they never chose</p><p></p><p>This is the old story of the sins of the fathers — rewritten in code. We are loading our fear of change and our hunger for growth at any price onto our children's backs, compounding it with every unchecked model we ship.</p><p></p><p>──────────────────────────────────────</p><p>TOMORROW'S WAR</p><p>──────────────────────────────────────</p><p></p><p>The fight over that inheritance. Whether we govern AI now, or let our children spend their lives paying down a debt they never agreed to incur.</p><p></p><p>This isn't about technology. It's about intergenerational accountability.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #AIAccountability #IntergenerationalJustice #AIDebt #TomorrowsWar #AIEthics #AIPolicy #UngovernedAI #PrivacyRights #MentalHealth #DigitalDemocracy #GASP #TrustGap #LEACProtocol #FailureFiles #BuilderClass #GoverningAI #AIResponsibility</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd delivers Briefing 005: Tomorrow's War — Our Children Pay Our AI Debt.</p><p></p><p>Tomorrow's war won't look like soldiers fighting beasts with long teeth.
It will look like our children quietly paying the bill for our decision to unleash ungoverned AI and call it "progress."</p><p></p><p>We are the first generation to hand machines the keys to our attention, our labor markets, and our democracy — and then shrug at the fine print.</p><p></p><p>──────────────────────────────────────</p><p>THE REAL COST</p><p>──────────────────────────────────────</p><p></p><p>We cash the convenience and productivity now while the real cost is deferred:</p><p></p><p>∙ Their mental health</p><p>∙ Their privacy</p><p>∙ Their ability to tell what's real</p><p>∙ Their leverage in a world run by systems they never chose</p><p></p><p>This is the old story of the sins of the fathers — rewritten in code. We are loading our fear of change and our hunger for growth at any price onto our children's backs, compounding it with every unchecked model we ship.</p><p></p><p>──────────────────────────────────────</p><p>TOMORROW'S WAR</p><p>──────────────────────────────────────</p><p></p><p>The fight over that inheritance. Whether we govern AI now, or let our children spend their lives paying down a debt they never agreed to incur.</p><p></p><p>This isn't about technology. It's about intergenerational accountability.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #AIAccountability #IntergenerationalJustice #AIDebt #TomorrowsWar #AIEthics #AIPolicy #UngovernedAI #PrivacyRights #MentalHealth #DigitalDemocracy #GASP #TrustGap #LEACProtocol #FailureFiles #BuilderClass #GoverningAI #AIResponsibility</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/tomorrows-war-ai-intergenerational-debt]]></link><guid isPermaLink="false">8e446bf0-7204-4dbc-a61a-6a8a40904dfa</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Wed, 04 Feb 2026 03:17:00 -0400</pubDate><enclosure
url="https://op3.dev/e/episodes.captivate.fm/episode/9cc042ee-3080-4765-a8af-33401c1bbc1e.mp3" length="1210553" type="audio/mpeg"/><itunes:duration>01:16</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season></item><item><title>Beyond AI: Quantum Computing and Organoid Intelligence</title><itunes:title>Beyond AI: Quantum Computing and Organoid Intelligence</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd looks beyond AI to the larger technological transformation already in motion — and the power race determining who controls it.</p><p></p><p>AI was the warning shot. What comes next determines who survives and who becomes obsolete.</p><p></p><p>──────────────────────────────────────</p><p>EMERGING TECHNOLOGIES RESHAPING WHAT'S NEXT</p><p>──────────────────────────────────────</p><p></p><p>∙ Organoid intelligence: computing with biological neural tissue</p><p>∙ Quantum computing: exponential leaps beyond classical limits</p><p>∙ The critical energy constraints driving the race</p><p>∙ Why power consumption is the bottleneck no one is talking about</p><p></p><p>──────────────────────────────────────</p><p>THE HIDDEN RISKS</p><p>──────────────────────────────────────</p><p></p><p>The danger of reducing human involvement in increasingly powerful systems does not disappear when the technology label changes. As these technologies accelerate beyond AI, the question isn't just "what can they do?" — it's "who controls them, and who pays the price when they fail?"</p><p></p><p>──────────────────────────────────────</p><p>WHY THIS MATTERS NOW</p><p>──────────────────────────────────────</p><p></p><p>Tracking where power moves and identifying human stakes in these futures is not a futurist exercise. It is institutional survival. This isn't about keeping up with tech trends. 
It's about understanding the inflection points before they shape your institution, your workforce, and your options.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. 
They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #BeyondAI #OrganoidIntelligence #QuantumComputing #EmergingTech #EnergyConstraints #TechTransformation #PowerDynamics #HumanStakes #TechGovernance #LEACProtocol #GASP #TrustGap #FailureFiles #BuilderClass #StrategicForesight #AIEvolution #Biocomputing</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd looks beyond AI to the larger technological transformation already in motion — and the power race determining who controls it.</p><p></p><p>AI was the warning shot. What comes next determines who survives and who becomes obsolete.</p><p></p><p>──────────────────────────────────────</p><p>EMERGING TECHNOLOGIES RESHAPING WHAT'S NEXT</p><p>──────────────────────────────────────</p><p></p><p>∙ Organoid intelligence: computing with biological neural tissue</p><p>∙ Quantum computing: exponential leaps beyond classical limits</p><p>∙ The critical energy constraints driving the race</p><p>∙ Why power consumption is the bottleneck no one is talking about</p><p></p><p>──────────────────────────────────────</p><p>THE HIDDEN RISKS</p><p>──────────────────────────────────────</p><p></p><p>The danger of reducing human involvement in increasingly powerful systems does not disappear when the technology label changes. As these technologies accelerate beyond AI, the question isn't just "what can they do?" 
— it's "who controls them, and who pays the price when they fail?"</p><p></p><p>──────────────────────────────────────</p><p>WHY THIS MATTERS NOW</p><p>──────────────────────────────────────</p><p></p><p>Tracking where power moves and identifying human stakes in these futures is not a futurist exercise. It is institutional survival. This isn't about keeping up with tech trends. It's about understanding the inflection points before they shape your institution, your workforce, and your options.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. 
He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #BeyondAI #OrganoidIntelligence #QuantumComputing #EmergingTech #EnergyConstraints #TechTransformation #PowerDynamics #HumanStakes #TechGovernance #LEACProtocol #GASP #TrustGap #FailureFiles #BuilderClass #StrategicForesight #AIEvolution #Biocomputing</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/beyond-ai-quantum-computing-organoid-intelligence]]></link><guid isPermaLink="false">539c529f-3bf6-4aab-868e-1a2c03ea44a4</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Tue, 03 Feb 2026 21:02:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/1cf163e8-1937-4604-b2bc-50eb1e63940f.mp3" length="1177117" type="audio/mpeg"/><itunes:duration>01:14</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season></item><item><title>Stop Renting AI You Can&apos;t Deploy: The Case for Sovereign Infrastructure</title><itunes:title>Stop Renting AI You Can&apos;t Deploy: The Case for Sovereign Infrastructure</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. 
Tuboise Floyd breaks down why enterprise boards are hemorrhaging AI transformation budgets through subscription models and cloud credits — without building the foundational infrastructure needed to support them.</p><p></p><p>──────────────────────────────────────</p><p>THE FERRARI IN THE SWAMP PROBLEM</p><p>──────────────────────────────────────</p><p></p><p>Companies are renting expensive tools to operate in environments where they cannot be used effectively. You are paying premium prices for capabilities your infrastructure cannot support. The tool is not the problem. The structural mismatch is.</p><p></p><p>──────────────────────────────────────</p><p>WHO PROFITS FROM THE MISALIGNMENT</p><p>──────────────────────────────────────</p><p></p><p>∙ Vendors selling you tools you cannot deploy</p><p>∙ Consultants billing for transformations that collapse on contact</p><p>∙ Cloud providers stacking credits while your ROI evaporates</p><p></p><p>──────────────────────────────────────</p><p>THE REAL PHYSICS OF YOUR BURN RATE</p><p>──────────────────────────────────────</p><p></p><p>This isn't digital transformation theater. It's capital efficiency and sovereignty.</p><p></p><p>∙ Why subscription models drain budgets without building capacity</p><p>∙ What sovereign infrastructure actually means for enterprise control</p><p>∙ How to identify the gap between what you're buying and what you can use</p><p>∙ The hidden cost of vendor dependency vs. building owned infrastructure</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ L.E.A.C. 
Protocol™ — humansignal.io/leac-protocol</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. 
Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #SovereignInfrastructure #AIBudget #EnterpriseAI #CapitalEfficiency #CloudCosts #VendorLockIn #VendorCapture #DigitalTransformation #AIROI #ITStrategy #LEACProtocol #GASP #NoiseDiscipline #TrustGap #FailureFiles #BuilderClass #AIPolicy</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr.
Tuboise Floyd breaks down why enterprise boards are hemorrhaging AI transformation budgets through subscription models and cloud credits — without building the foundational infrastructure needed to support them.</p><p></p><p>──────────────────────────────────────</p><p>THE FERRARI IN THE SWAMP PROBLEM</p><p>──────────────────────────────────────</p><p></p><p>Companies are renting expensive tools to operate in environments where they cannot be used effectively. You are paying premium prices for capabilities your infrastructure cannot support. The tool is not the problem. The structural mismatch is.</p><p></p><p>──────────────────────────────────────</p><p>WHO PROFITS FROM THE MISALIGNMENT</p><p>──────────────────────────────────────</p><p></p><p>∙ Vendors selling you tools you cannot deploy</p><p>∙ Consultants billing for transformations that collapse on contact</p><p>∙ Cloud providers stacking credits while your ROI evaporates</p><p></p><p>──────────────────────────────────────</p><p>THE REAL PHYSICS OF YOUR BURN RATE</p><p>──────────────────────────────────────</p><p></p><p>This isn't digital transformation theater. It's capital efficiency and sovereignty.</p><p></p><p>∙ Why subscription models drain budgets without building capacity</p><p>∙ What sovereign infrastructure actually means for enterprise control</p><p>∙ How to identify the gap between what you're buying and what you can use</p><p>∙ The hidden cost of vendor dependency vs. building owned infrastructure</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ L.E.A.C. 
Protocol™ — humansignal.io/leac-protocol</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. 
Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #SovereignInfrastructure #AIBudget #EnterpriseAI #CapitalEfficiency #CloudCosts #VendorLockIn #VendorCapture #DigitalTransformation #AIROI #ITStrategy #LEACProtocol #GASP #NoiseDiscipline #TrustGap #FailureFiles #BuilderClass #AIPolicy</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/enterprise-ai-budget-subscriptions-vs-sovereign-infrastructure]]></link><guid isPermaLink="false">89ee23a1-a192-4646-8956-7dbc037f4d7f</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Mon, 02 Feb 2026 23:50:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/be0c3710-2c23-461b-81a3-4d13aab21919.mp3" length="1072627" type="audio/mpeg"/><itunes:duration>01:07</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season></item><item><title>The Builder Class: AI Governance Briefing for Federal and Enterprise Leaders</title><itunes:title>The Builder Class: AI Governance Briefing for Federal and Enterprise Leaders</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>If you are hearing this, you have successfully moved operations behind the wire. You are no longer a consumer. You are a Builder.</p><p></p><p>The public internet is a noise engine designed to extract your attention. The AI Governance Briefing is a signal processing facility designed to build your leverage.</p><p></p><p>We do not trade in content here.
We trade in clearance.</p><p></p><p>──────────────────────────────────────</p><p>WHAT THIS MEANS</p><p>──────────────────────────────────────</p><p></p><p>∙ You've crossed the threshold from passive consumption to active building</p><p>∙ This isn't another feed competing for your distraction</p><p>∙ This is infrastructure — signal, not noise</p><p>∙ This is leverage, not entertainment</p><p></p><p>──────────────────────────────────────</p><p>THE BUILDER CLASS</p><p>──────────────────────────────────────</p><p></p><p>This channel exists for people who don't just navigate systems — they design, deploy, and defend them. For federal IT leaders, university CIOs, enterprise strategists, and policy architects who understand that survival requires moving operations behind the wire.</p><p></p><p>The public internet extracts. We construct.</p><p></p><p>Welcome to the infrastructure. This is The AI Governance Briefing. This is your clearance.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #BuilderClass #NoiseDiscipline #AIGovernance #SignalNotNoise #AttentionManagement #StrategicLeverage #SystemsThinking #Infrastructure #GASP #TrustGap #LEACProtocol #FailureFiles #FederalIT #EnterpriseAI #AIPolicy #Clearance #GoverningAI</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>If you are hearing this, you have successfully moved operations behind the wire. You are no longer a consumer. You are a Builder.</p><p></p><p>The public internet is a noise engine designed to extract your attention. The AI Governance Briefing is a signal processing facility designed to build your leverage.</p><p></p><p>We do not trade in content here. 
We trade in clearance.</p><p></p><p>──────────────────────────────────────</p><p>WHAT THIS MEANS</p><p>──────────────────────────────────────</p><p></p><p>∙ You've crossed the threshold from passive consumption to active building</p><p>∙ This isn't another feed competing for your distraction</p><p>∙ This is infrastructure — signal, not noise</p><p>∙ This is leverage, not entertainment</p><p></p><p>──────────────────────────────────────</p><p>THE BUILDER CLASS</p><p>──────────────────────────────────────</p><p></p><p>This channel exists for people who don't just navigate systems — they design, deploy, and defend them. For federal IT leaders, university CIOs, enterprise strategists, and policy architects who understand that survival requires moving operations behind the wire.</p><p></p><p>The public internet extracts. We construct.</p><p></p><p>Welcome to the infrastructure. This is The AI Governance Briefing. This is your clearance.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #BuilderClass #NoiseDiscipline #AIGovernance #SignalNotNoise #AttentionManagement #StrategicLeverage #SystemsThinking #Infrastructure #GASP #TrustGap #LEACProtocol #FailureFiles #FederalIT #EnterpriseAI #AIPolicy #Clearance #GoverningAI</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/builder-class-ai-governance-federal-enterprise-leaders]]></link><guid isPermaLink="false">21a3a176-1b05-4869-840e-8b29086f62cb</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Sun, 01 Feb 2026 22:30:00 -0400</pubDate><enclosure 
url="https://op3.dev/e/episodes.captivate.fm/episode/02f2512a-4a43-4c2e-902c-e38a42e4000a.mp3" length="390517" type="audio/mpeg"/><itunes:duration>00:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season></item><item><title>FedRAMP Is Not Resilience: The Compliance vs. Readiness Gap in GovCon AI</title><itunes:title>FedRAMP Is Not Resilience: The Compliance vs. Readiness Gap in GovCon AI</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd draws the line that the DC corridor keeps refusing to draw: compliance is not readiness.</p><p></p><p>We have a dangerous habit of celebrating the paperwork while ignoring the pulse. We hire leaders who can manage a Gantt chart, but we are not hiring architects who understand the physics of the risk.</p><p></p><p>──────────────────────────────────────</p><p>THE FALSE FINISH LINE</p><p>──────────────────────────────────────</p><p></p><p>A "finished" building isn't one that passed inspection. It's one that survives the first thermal runaway without blinking.</p><p></p><p>For GovCon leaders: FedRAMP High is a certification. Resilience is a discipline.</p><p></p><p>The loudest sound in a live environment isn't the generator testing. It's the silence after a logic error trips the transfer switch.</p><p></p><p>──────────────────────────────────────</p><p>WHAT THE MARKET ACTUALLY NEEDS</p><p>──────────────────────────────────────</p><p></p><p>The market doesn't need more people who can read the contract. It needs systems architects who can guarantee the signal.</p><p></p><p>∙ Compliance gets you the contract</p><p>∙ Readiness keeps the system alive</p><p>∙ Certification proves you checked boxes</p><p>∙ Resilience proves you understand physics</p><p></p><p>Don't just build the shell. 
Own the uptime.</p><p></p><p>This is the difference between passing inspection and surviving first contact with reality.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. 
They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #ComplianceVsReadiness #FedRAMP #GovCon #SystemsArchitecture #ResilienceEngineering #FederalIT #InfrastructureResilience #RiskManagement #DCCorridor #GASP #TrustGap #LEACProtocol #WorkflowThesis #FailureFiles #BuilderClass #AIPolicy #GovernmentTechnology</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd draws the line that the DC corridor keeps refusing to draw: compliance is not readiness.</p><p></p><p>We have a dangerous habit of celebrating the paperwork while ignoring the pulse. We hire leaders who can manage a Gantt chart, but we are not hiring architects who understand the physics of the risk.</p><p></p><p>──────────────────────────────────────</p><p>THE FALSE FINISH LINE</p><p>──────────────────────────────────────</p><p></p><p>A "finished" building isn't one that passed inspection. It's one that survives the first thermal runaway without blinking.</p><p></p><p>For GovCon leaders: FedRAMP High is a certification. Resilience is a discipline.</p><p></p><p>The loudest sound in a live environment isn't the generator testing. It's the silence after a logic error trips the transfer switch.</p><p></p><p>──────────────────────────────────────</p><p>WHAT THE MARKET ACTUALLY NEEDS</p><p>──────────────────────────────────────</p><p></p><p>The market doesn't need more people who can read the contract. 
It needs systems architects who can guarantee the signal.</p><p></p><p>∙ Compliance gets you the contract</p><p>∙ Readiness keeps the system alive</p><p>∙ Certification proves you checked boxes</p><p>∙ Resilience proves you understand physics</p><p></p><p>Don't just build the shell. Own the uptime.</p><p></p><p>This is the difference between passing inspection and surviving first contact with reality.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. 
He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #ComplianceVsReadiness #FedRAMP #GovCon #SystemsArchitecture #ResilienceEngineering #FederalIT #InfrastructureResilience #RiskManagement #DCCorridor #GASP #TrustGap #LEACProtocol #WorkflowThesis #FailureFiles #BuilderClass #AIPolicy #GovernmentTechnology</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/compliance-vs-readiness-fedramp-system-resilience]]></link><guid isPermaLink="false">5ffa4948-62eb-4a5e-be6d-03fddb1a1858</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Sun, 01 Feb 2026 21:35:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/05304e07-7fcf-445a-b836-7cff8db4bf13.mp3" length="1161652" type="audio/mpeg"/><itunes:duration>01:13</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season></item><item><title>The L.E.A.C. Protocol: Why Lithography, Energy, Arbitrage, and Cooling Determine Which AI Institutions Survive — The AI Governance Briefing</title><itunes:title>The L.E.A.C. Protocol: Why Lithography, Energy, Arbitrage, and Cooling Determine Which AI Institutions Survive — The AI Governance Briefing</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd introduces the L.E.A.C. Protocol™ — the four physical constraints that determine which AI companies survive the infrastructure war.</p><p></p><p>The market has split in two. 
One side is cutting jobs. The other is building hardware. If your AI strategy doesn't address all four constraints, you are leaking value.</p><p></p><p>──────────────────────────────────────</p><p>THE L.E.A.C. PROTOCOL™</p><p>──────────────────────────────────────</p><p></p><p>L — Lithography</p><p>The physics of chip manufacturing. If you cannot access cutting-edge fabrication, you are already obsolete. Control of the semiconductor supply chain — particularly photolithography equipment — is the first constraint every serious AI strategy must address.</p><p></p><p>E — Energy</p><p>Power consumption isn't a footnote — it's the constraint. Without gigawatt-scale energy access, your models don't run. Securing reliable power is not an infrastructure decision. It is a strategic one.</p><p></p><p>A — Arbitrage</p><p>The strategic positioning to exploit cost differentials in compute, power, and talent before competitors close the gap. Retail electricity pricing is unsustainable at scale. The organizations finding stranded energy, flare gas, and off-peak power are the ones surviving the burn rate.</p><p></p><p>C — Cooling</p><p>Thermodynamics doesn't negotiate. Heat dissipation determines density, efficiency, and survivability. Without adequate cooling infrastructure, clusters cannot run. This is a fundamental solvency issue.</p><p></p><p>──────────────────────────────────────</p><p>THE BOTTOM LINE</p><p>──────────────────────────────────────</p><p></p><p>If your strategy ignores thermodynamics, you don't have a company. You have a fire hazard.</p><p></p><p>This isn't about software features or model performance. It's about the physics that determines who stays online and who burns out — literally. The companies that master L.E.A.C. constraints will dominate. 
The ones that don't will exit.</p><p></p><p>This is the infrastructure war.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. 
It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #LEACProtocol #AIInfrastructure #AIGovernance #Lithography #EnergyConstraints #Thermodynamics #Cooling #AIHardware #Semiconductors #InfrastructureWar #AIStrategy #CapitalEfficiency #GASP #TrustGap #FailureFiles #BuilderClass #FrontierAI #PhysicsOfAI</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd introduces the L.E.A.C. Protocol™ — the four physical constraints that determine which AI companies survive the infrastructure war.</p><p></p><p>The market has split in two. One side is cutting jobs. The other is building hardware. If your AI strategy doesn't address all four constraints, you are leaking value.</p><p></p><p>──────────────────────────────────────</p><p>THE L.E.A.C. PROTOCOL™</p><p>──────────────────────────────────────</p><p></p><p>L — Lithography</p><p>The physics of chip manufacturing. If you cannot access cutting-edge fabrication, you are already obsolete. Control of the semiconductor supply chain — particularly photolithography equipment — is the first constraint every serious AI strategy must address.</p><p></p><p>E — Energy</p><p>Power consumption isn't a footnote — it's the constraint. Without gigawatt-scale energy access, your models don't run. Securing reliable power is not an infrastructure decision. It is a strategic one.</p><p></p><p>A — Arbitrage</p><p>The strategic positioning to exploit cost differentials in compute, power, and talent before competitors close the gap. Retail electricity pricing is unsustainable at scale. 
The organizations finding stranded energy, flare gas, and off-peak power are the ones surviving the burn rate.</p><p></p><p>C — Cooling</p><p>Thermodynamics doesn't negotiate. Heat dissipation determines density, efficiency, and survivability. Without adequate cooling infrastructure, clusters cannot run. This is a fundamental solvency issue.</p><p></p><p>──────────────────────────────────────</p><p>THE BOTTOM LINE</p><p>──────────────────────────────────────</p><p></p><p>If your strategy ignores thermodynamics, you don't have a company. You have a fire hazard.</p><p></p><p>This isn't about software features or model performance. It's about the physics that determines who stays online and who burns out — literally. The companies that master L.E.A.C. constraints will dominate. The ones that don't will exit.</p><p></p><p>This is the infrastructure war.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. 
Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. 
Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #LEACProtocol #AIInfrastructure #AIGovernance #Lithography #EnergyConstraints #Thermodynamics #Cooling #AIHardware #Semiconductors #InfrastructureWar #AIStrategy #CapitalEfficiency #GASP #TrustGap #FailureFiles #BuilderClass #FrontierAI #PhysicsOfAI</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/leac-protocol-lithography-energy-arbitrage-cooling]]></link><guid isPermaLink="false">1ab30ec9-0927-4909-a3c6-3a539bda259d</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Sun, 01 Feb 2026 19:29:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/b726cd94-846e-4fb9-9311-494b177d3251.mp3" length="992379" type="audio/mpeg"/><itunes:duration>01:02</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season></item><item><title>Project Cerebellum: Deploying Survivable AI in Federal and Enterprise Systems</title><itunes:title>Project Cerebellum: Deploying Survivable AI in Federal and Enterprise Systems</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd opens Season 2 with a direct statement: we are driving 200 miles per hour with no brakes. We are deploying alien intelligence into critical infrastructure without a nervous system.</p><p></p><p>Season 2 stops asking if AI will take your job. 
It starts asking if the system can survive the deployment.</p><p></p><p>──────────────────────────────────────</p><p>GUESTS</p><p>──────────────────────────────────────</p><p></p><p>Col. Kathy Swacina (USA, Ret.)</p><p>CIO, SherpaWerx</p><p>Chair, HISPI AI Think Tank — Project Cerebellum</p><p>🔗 https://sherpawerx.com</p><p></p><p>Taiye Lambo</p><p>Founder &amp; Chief Artificial Intelligence Officer</p><p>Holistic Information Security Practitioner Institute (HISPI)</p><p>🔗 https://www.hispi.org</p><p>🔗 https://projectcerebellum.com</p><p>LinkedIn: linkedin.com/in/taiyelambo</p><p></p><p>TAIMScore™ Assessor Workshop</p><p>🔗 https://humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>PROJECT CEREBELLUM</p><p>──────────────────────────────────────</p><p></p><p>The critical missing layer in AI deployment: the control mechanisms, feedback loops, and governance structures that act as a nervous system for autonomous intelligence operating in high-stakes environments. Without it, the system cannot self-regulate, cannot escalate, and cannot stop.</p><p></p><p>──────────────────────────────────────</p><p>KEY QUESTIONS EXPLORED</p><p>──────────────────────────────────────</p><p></p><p>∙ What happens when AI operates in critical infrastructure without oversight mechanisms?</p><p>∙ How do we build reflexive control systems for autonomous intelligence?</p><p>∙ Why "move fast and break things" is a death sentence in federal and enterprise environments</p><p>∙ The difference between deploying AI and deploying survivable AI</p><p></p><p>This isn't about slowing down innovation. 
It's about not crashing at 200 miles per hour.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p>→ Project Cerebellum — projectcerebellum.com</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. 
It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. 
Guest opinions are those of the guest alone.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #ProjectCerebellum #CriticalInfrastructure #AISafety #AutonomousAI #FederalAI #EnterpriseAI #AIDeployment #SurvivableAI #SystemResilience #GASP #TrustGap #LEACProtocol #FailureFiles #TAIMScore #HISPI #ColKathySwacina #TaiyeLambo #BuilderClass #AIPolicy</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd opens Season 2 with a direct statement: we are driving 200 miles per hour with no brakes. We are deploying alien intelligence into critical infrastructure without a nervous system.</p><p></p><p>Season 2 stops asking if AI will take your job. It starts asking if the system can survive the deployment.</p><p></p><p>──────────────────────────────────────</p><p>GUESTS</p><p>──────────────────────────────────────</p><p></p><p>Col. Kathy Swacina (USA, Ret.)</p><p>CIO, SherpaWerx</p><p>Chair, HISPI AI Think Tank — Project Cerebellum</p><p>🔗 https://sherpawerx.com</p><p></p><p>Taiye Lambo</p><p>Founder &amp; Chief Artificial Intelligence Officer</p><p>Holistic Information Security Practitioner Institute (HISPI)</p><p>🔗 https://www.hispi.org</p><p>🔗 https://projectcerebellum.com</p><p>LinkedIn: linkedin.com/in/taiyelambo</p><p></p><p>TAIMScore™ Assessor Workshop</p><p>🔗 https://humansignal.io/taimscore_assessor_workshop</p><p></p><p>──────────────────────────────────────</p><p>PROJECT CEREBELLUM</p><p>──────────────────────────────────────</p><p></p><p>The critical missing layer in AI deployment: the control mechanisms, feedback loops, and governance structures that act as a nervous system for autonomous intelligence operating in high-stakes environments. 
Without it, the system cannot self-regulate, cannot escalate, and cannot stop.</p><p></p><p>──────────────────────────────────────</p><p>KEY QUESTIONS EXPLORED</p><p>──────────────────────────────────────</p><p></p><p>∙ What happens when AI operates in critical infrastructure without oversight mechanisms?</p><p>∙ How do we build reflexive control systems for autonomous intelligence?</p><p>∙ Why "move fast and break things" is a death sentence in federal and enterprise environments</p><p>∙ The difference between deploying AI and deploying survivable AI</p><p></p><p>This isn't about slowing down innovation. It's about not crashing at 200 miles per hour.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p>→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop</p><p>→ Project Cerebellum — projectcerebellum.com</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. 
He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. 
Guest opinions are those of the guest alone.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #ProjectCerebellum #CriticalInfrastructure #AISafety #AutonomousAI #FederalAI #EnterpriseAI #AIDeployment #SurvivableAI #SystemResilience #GASP #TrustGap #LEACProtocol #FailureFiles #TAIMScore #HISPI #ColKathySwacina #TaiyeLambo #BuilderClass #AIPolicy</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/project-cerebellum-survivable-ai-federal-enterprise]]></link><guid isPermaLink="false">c2cf3531-d863-4ce8-827f-c8a5e7ab9bcb</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Sat, 31 Jan 2026 03:19:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/bb87b014-b953-44d3-8217-96b08bf20d46.mp3" length="1318387" type="audio/mpeg"/><itunes:duration>01:22</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2</itunes:season><podcast:season>2</podcast:season></item><item><title>Break the Mold | Building Teams, Leading Change &amp; Impact (feat. Jasher Cox)</title><itunes:title>Break the Mold | Building Teams, Leading Change &amp; Impact (feat. 
Jasher Cox)</itunes:title><description><![CDATA[<p><strong>Human Signal: Season Finale Part 2 Show Notes</strong></p><p></p><p><strong>Episode Title: Break the Mold: Jasher Cox on Building Teams, Leading Change, and Owning Impact</strong></p><p></p><p><strong>Guest: Jasher Cox, Director of Regional Development, University of Notre Dame (https://directory.nd.edu/profile/jcox23@nd.edu)</strong></p><p></p><p><strong>Segment 1: The Broken System and Engineered Tension</strong></p><p></p><p><strong>The Rigged System: Jasher discusses how career ladders were dismantled and opportunities hoarded, confirming that the real problem was never a lack of talent.</strong></p><p></p><p><strong>The Hiring Mistake: The danger of hiring people you like over subject-matter experts, which ultimately hurts the institution. Focus must be on hiring experts for plug-and-play success.</strong></p><p></p><p><strong>A Call for Social Skills: A focus on the degradation of social skills post-2020 due to disease and isolation. The critical importance of polishing social skills and actively engaging with leaders to build genuine, non-transactional relationships.</strong></p><p></p><p><strong>Forcing Exposure: Parents and leaders must "force-feed" the next generation by bringing them to the table and exposing them to working environments to foster learning and growth.</strong></p><p></p><p><strong>Segment 2: Architectural Mindset and Institutional Change</strong></p><p></p><p><strong>The Drive for Innovation: Jasher’s architectural mindset is rooted in collaboration and a focus on maximizing student enrollment.</strong></p><p></p><p><strong>The HBCU Wrestling Story: The strategic decision to launch HBCU Women's Wrestling to add a unique student demographic and support emerging sports, creating a powerful recruiting success.</strong></p><p></p><p><strong>The Power of Diversity (D.E.I.): D.E.I. 
is not just about race; it's about being open-minded and committed to hiring and supporting fully capable individuals, regardless of perceived limitations. The institutional commitment at Notre Dame to women's inclusion is highlighted as an example.</strong></p><p></p><p><strong>Segment 3: Alliance, Building Bridges, and the Future of Leadership</strong></p><p></p><p><strong>Calling a Truce: The importance of moving from generational blame to alliance and bridge-building, recognizing that friction is necessary for growth.</strong></p><p></p><p><strong>The AI Perspective: The younger generation's concern about the accuracy and over-reliance on AI, urging the return to basic research methods like visiting the library to learn how to find and retain information.</strong></p><p></p><p><strong>The Best Coaching Mindset: Coaches and leaders must treat athletes (and employees) as capable athletes, not as fragile individuals, understanding that they want to be pushed for success ("Women don't want to be coached like girls, they want to be coached like athletes").</strong></p><p></p><p><strong>Final Call to Action: Lead with Endearment: True leadership involves emotional intelligence and knowing when to show empathy (like letting an employee off early for a family event). You will be rewarded with a dedicated team member who knows you care.</strong></p><p></p><p><strong>Core Philosophy: The segment emphasizes the final goal is to be good people and lead with the philosophy of cura personalis (care for the whole person).</strong></p><p></p><p><strong>Next Steps</strong></p><p></p><p><strong>Share your thoughts: Let us know what you learned from Jasher Cox in the comments!</strong></p><p></p><p><strong>Subscribe now to lock in the feed. This isn't just content; it's a continuing briefing for the Builder Class.</strong></p><p></p><p><strong>Tech Specs / Production Note:</strong></p><p><strong>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</strong></p><p></p><p><strong>If this mission resonates with you, you can support the Human Signal launch fund to fuel six months of new episodes, visual briefs, and honest playbooks at</strong></p><p><strong>https://humansignal.io/support</strong></p><p><strong>CONNECT</strong></p><p><strong>LinkedIn: linkedin.com/in/tuboise</strong></p><p><strong>Email: tuboise@theaigovernancebriefing.com</strong></p><p></p><p><strong>TRANSCRIPT</strong></p><p><strong>Full transcript available upon request at hello@theaigovernancebriefing.com</strong></p><p><strong>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™.</strong></p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p><strong>Human Signal: Season Finale Part 2 Show Notes</strong></p><p></p><p><strong>Episode Title: Break the Mold: Jasher Cox on Building Teams, Leading Change, and Owning Impact</strong></p><p></p><p><strong>Guest: Jasher Cox, Director of Regional Development, University of Notre Dame (https://directory.nd.edu/profile/jcox23@nd.edu)</strong></p><p></p><p><strong>Segment 1: The Broken System and Engineered Tension</strong></p><p></p><p><strong>The Rigged System: Jasher discusses how career ladders were dismantled and opportunities hoarded, confirming that the real problem was never a lack of talent.</strong></p><p></p><p><strong>The Hiring Mistake: The danger of hiring people you like over subject-matter experts, which ultimately hurts the institution. Focus must be on hiring experts for plug-and-play success.</strong></p><p></p><p><strong>A Call for Social Skills: A focus on the degradation of social skills post-2020 due to disease and isolation. 
The critical importance of polishing social skills and actively engaging with leaders to build genuine, non-transactional relationships.</strong></p><p></p><p><strong>Forcing Exposure: Parents and leaders must "force-feed" the next generation by bringing them to the table and exposing them to working environments to foster learning and growth.</strong></p><p></p><p><strong>Segment 2: Architectural Mindset and Institutional Change</strong></p><p></p><p><strong>The Drive for Innovation: Jasher’s architectural mindset is rooted in collaboration and a focus on maximizing student enrollment.</strong></p><p></p><p><strong>The HBCU Wrestling Story: The strategic decision to launch HBCU Women's Wrestling to add a unique student demographic and support emerging sports, creating a powerful recruiting success.</strong></p><p></p><p><strong>The Power of Diversity (D.E.I.): D.E.I. is not just about race; it's about being open-minded and committed to hiring and supporting fully capable individuals, regardless of perceived limitations. 
The institutional commitment at Notre Dame to women's inclusion is highlighted as an example.</strong></p><p></p><p><strong>Segment 3: Alliance, Building Bridges, and the Future of Leadership</strong></p><p></p><p><strong>Calling a Truce: The importance of moving from generational blame to alliance and bridge-building, recognizing that friction is necessary for growth.</strong></p><p></p><p><strong>The AI Perspective: The younger generation's concern about the accuracy and over-reliance on AI, urging the return to basic research methods like visiting the library to learn how to find and retain information.</strong></p><p></p><p><strong>The Best Coaching Mindset: Coaches and leaders must treat athletes (and employees) as capable athletes, not as fragile individuals, understanding that they want to be pushed for success ("Women don't want to be coached like girls, they want to be coached like athletes").</strong></p><p></p><p><strong>Final Call to Action: Lead with Endearment: True leadership involves emotional intelligence and knowing when to show empathy (like letting an employee off early for a family event). You will be rewarded with a dedicated team member who knows you care.</strong></p><p></p><p><strong>Core Philosophy: The segment emphasizes the final goal is to be good people and lead with the philosophy of cura personalis (care for the whole person).</strong></p><p></p><p><strong>Next Steps</strong></p><p></p><p><strong>Share your thoughts: Let us know what you learned from Jasher Cox in the comments!</strong></p><p></p><p><strong>Subscribe now to lock in the feed. This isn't just content; it's a continuing briefing for the Builder Class.</strong></p><p></p><p><strong>Tech Specs / Production Note:</strong></p><p><strong>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</strong></p><p></p><p><strong>If this mission resonates with you, you can support the Human Signal launch fund to fuel six months of new episodes, visual briefs, and honest playbooks at</strong></p><p><strong>https://humansignal.io/support</strong></p><p><strong>CONNECT</strong></p><p><strong>LinkedIn: linkedin.com/in/tuboise</strong></p><p><strong>Email: tuboise@theaigovernancebriefing.com</strong></p><p></p><p><strong>TRANSCRIPT</strong></p><p><strong>Full transcript available upon request at hello@theaigovernancebriefing.com</strong></p><p><strong>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™.</strong></p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/break-the-mold-building-teams-leading-change-impact-feat-jasher-cox]]></link><guid isPermaLink="false">353fa662-2763-4546-ac26-8e2df3d4a07b</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Tue, 25 Nov 2025 21:05:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/5452bd4f-ff90-42a9-9f68-1111c9a02d56.mp3" length="38742870" type="audio/mpeg"/><itunes:duration>40:21</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><podcast:season>1</podcast:season></item><item><title>Season 1 Finale Pt. 1: Burn the Playbook | Architecting What Doesn’t Exist (feat. Paul Wilson Jr.)</title><itunes:title>Season 1 Finale Pt. 1: Burn the Playbook | Architecting What Doesn’t Exist (feat. Paul Wilson Jr.)</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. 
Tuboise Floyd sits down with Paul Wilson Jr. — CEO of Paul Wilson Global Solutions — for the Season Finale Part 1: Burn the Playbook | Architecting What Doesn't Exist.</p><p></p><p>This is a conversation about what happens when high-energy talent meets closed-minded institutions — and what it takes to build something that has never existed before.</p><p></p><p>──────────────────────────────────────</p><p>KEY DISCUSSION POINTS</p><p>──────────────────────────────────────</p><p></p><p>The Problem with Corporate America</p><p>Paul shares his experience with a system that actively discourages employee ambition — leading to high turnover, wasted talent, and institutions that cannot capitalize on the next generation of builders.</p><p></p><p>The "Why Would We Want Them to Know That?" Question</p><p>The shocking perspective of closed-minded management that feared employee growth. When leadership treats knowledge as a threat rather than an asset, the institution has already started its collapse.</p><p></p><p>Fire in the Context of a Fireplace</p><p>The metaphor for harnessing high-energy talent. The goal is not to extinguish it. It is to contain it in a context where it produces heat instead of destruction.</p><p></p><p>The Digital Native Builder</p><p>Gen Z energy in the workplace — what it looks like, what it signals, and how OG wisdom (Gen X/Boomers) can bridge the gap rather than widen it.</p><p></p><p>Ghosting and Authenticity</p><p>Ghosting is not rudeness. It is conflict avoidance rooted in fear — the flight response when inauthenticity is detected. Gen Z sniffs it out early and exits before the confrontation arrives.</p><p></p><p>The Flaws in Modern Capitalism and Innovation</p><p>The broken pursuit of unicorn status. The one-size-fits-all growth model that discounts six-figure businesses and community impact. 
Why purpose-based entrepreneurship outlasts capital-chasing entrepreneurship.</p><p></p><p>Capitalism Is Redeemable</p><p>A critique of "capitalism at all costs" — and the case for investors and organizations with a social conscience strong enough to govern their own incentives.</p><p></p><p>──────────────────────────────────────</p><p>TACTICAL TAKEAWAYS</p><p>──────────────────────────────────────</p><p></p><p>∙ Be a bridge builder — the future of professional success requires alliance and the ability to disagree constructively</p><p>∙ Embrace the pivot — Ice T, Ice Cube, and every durable builder mastered the ability to be both raw and polished when the context demands it</p><p>∙ The learner's mindset — at every level of growth, there is new learning required; humility is not weakness, it is the entry fee</p><p>∙ Lead with value proposition — when pursuing funding, lead with what you bring to the investor, not just the problem you want solved</p><p>∙ Submission is not surrender — great leaders understand that submitting to a goal, a client, or a higher authority is how they practice sovereignty, not how they abandon it</p><p></p><p>──────────────────────────────────────</p><p>GUEST</p><p>──────────────────────────────────────</p><p></p><p>Paul Wilson Jr.</p><p>CEO, Paul Wilson Global Solutions, LLC</p><p>🔗 https://www.paulwilsonglobal.com</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. 
Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. Guest opinions are those of the guest alone.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #BurnThePlaybook #SeasonFinale #PaulWilsonJr #BuilderClass #Entrepreneurship #PurposeDrivenBusiness #GenZ #DigitalNative #Leadership #Ghosting #Authenticity #TalentRetention #CapitalismRedeemed #PivotMindset #LearnersMindset #AIGovernance #GASP #NoiseDiscipline #WorkflowThesis #LEACProtocol</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd sits down with Paul Wilson Jr. 
— CEO of Paul Wilson Global Solutions — for the Season Finale Part 1: Burn the Playbook | Architecting What Doesn't Exist.</p><p></p><p>This is a conversation about what happens when high-energy talent meets closed-minded institutions — and what it takes to build something that has never existed before.</p><p></p><p>──────────────────────────────────────</p><p>KEY DISCUSSION POINTS</p><p>──────────────────────────────────────</p><p></p><p>The Problem with Corporate America</p><p>Paul shares his experience with a system that actively discourages employee ambition — leading to high turnover, wasted talent, and institutions that cannot capitalize on the next generation of builders.</p><p></p><p>The "Why Would We Want Them to Know That?" Question</p><p>The shocking perspective of closed-minded management that feared employee growth. When leadership treats knowledge as a threat rather than an asset, the institution has already started its collapse.</p><p></p><p>Fire in the Context of a Fireplace</p><p>The metaphor for harnessing high-energy talent. The goal is not to extinguish it. It is to contain it in a context where it produces heat instead of destruction.</p><p></p><p>The Digital Native Builder</p><p>Gen Z energy in the workplace — what it looks like, what it signals, and how OG wisdom (Gen X/Boomers) can bridge the gap rather than widen it.</p><p></p><p>Ghosting and Authenticity</p><p>Ghosting is not rudeness. It is conflict avoidance rooted in fear — the flight response when inauthenticity is detected. Gen Z sniffs it out early and exits before the confrontation arrives.</p><p></p><p>The Flaws in Modern Capitalism and Innovation</p><p>The broken pursuit of unicorn status. The one-size-fits-all growth model that discounts six-figure businesses and community impact. 
Why purpose-based entrepreneurship outlasts capital-chasing entrepreneurship.</p><p></p><p>Capitalism Is Redeemable</p><p>A critique of "capitalism at all costs" — and the case for investors and organizations with a social conscience strong enough to govern their own incentives.</p><p></p><p>──────────────────────────────────────</p><p>TACTICAL TAKEAWAYS</p><p>──────────────────────────────────────</p><p></p><p>∙ Be a bridge builder — the future of professional success requires alliance and the ability to disagree constructively</p><p>∙ Embrace the pivot — Ice T, Ice Cube, and every durable builder mastered the ability to be both raw and polished when the context demands it</p><p>∙ The learner's mindset — at every level of growth, there is new learning required; humility is not weakness, it is the entry fee</p><p>∙ Lead with value proposition — when pursuing funding, lead with what you bring to the investor, not just the problem you want solved</p><p>∙ Submission is not surrender — great leaders understand that submitting to a goal, a client, or a higher authority is how they practice sovereignty, not how they abandon it</p><p></p><p>──────────────────────────────────────</p><p>GUEST</p><p>──────────────────────────────────────</p><p></p><p>Paul Wilson Jr.</p><p>CEO, Paul Wilson Global Solutions, LLC</p><p>🔗 https://www.paulwilsonglobal.com</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. 
Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. 
Guest opinions are those of the guest alone.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #BurnThePlaybook #SeasonFinale #PaulWilsonJr #BuilderClass #Entrepreneurship #PurposeDrivenBusiness #GenZ #DigitalNative #Leadership #Ghosting #Authenticity #TalentRetention #CapitalismRedeemed #PivotMindset #LearnersMindset #AIGovernance #GASP #NoiseDiscipline #WorkflowThesis #LEACProtocol</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/season-1-finale-pt-1-burn-the-playbook-architecting-what-doesnt-exist-feat-paul-wilson-jr]]></link><guid isPermaLink="false">015d6300-e41c-4226-b279-409b43de2f7b</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Wed, 19 Nov 2025 13:46:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/6da3cc95-f54d-4691-8999-470a67e4005c.mp3" length="57464934" type="audio/mpeg"/><itunes:duration>59:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><podcast:season>1</podcast:season></item><item><title>Break the Mold: Jasher Cox on Building Teams, Leading Change, and Owning Impact</title><itunes:title>Break the Mold: Jasher Cox on Building Teams, Leading Change, and Owning Impact</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd is joined by Jasher Cox, Director of Regional Development at the University of Notre Dame, to deconstruct the mechanics of leading change and owning impact.</p><p></p><p>Leadership is not a title. 
It is an architecture.</p><p></p><p>When standard scripts for team building fail, you have to break the mold. This isn't about management styles. It's about building high-performance systems that survive contact with reality.</p><p></p><p>──────────────────────────────────────</p><p>KEY INTELLIGENCE</p><p>──────────────────────────────────────</p><p></p><p>∙ Building Teams — moving beyond the roster to the ecosystem</p><p>∙ Leading Change — how to pivot without breaking the structure</p><p>∙ Owning Impact — measuring the signal, not just the noise</p><p></p><p>──────────────────────────────────────</p><p>GUEST</p><p>──────────────────────────────────────</p><p></p><p>Jasher Cox</p><p>Director of Regional Development</p><p>University of Notre Dame</p><p>🔗 https://directory.nd.edu/profile/jcox23@nd.edu</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. 
Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. 
Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. Guest opinions are those of the guest alone.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #Leadership #LeadingChange #TeamBuilding #HighPerformance #BuilderClass #JasherCox #NotreDame #OwningImpact #SystemsThinking #GASP #TrustGap #WorkflowThesis #NoiseDiscipline #LEACProtocol #InstitutionalLeadership #SignalNotNoise</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd is joined by Jasher Cox, Director of Regional Development at the University of Notre Dame, to deconstruct the mechanics of leading change and owning impact.</p><p></p><p>Leadership is not a title. It is an architecture.</p><p></p><p>When standard scripts for team building fail, you have to break the mold. This isn't about management styles. 
It's about building high-performance systems that survive contact with reality.</p><p></p><p>──────────────────────────────────────</p><p>KEY INTELLIGENCE</p><p>──────────────────────────────────────</p><p></p><p>∙ Building Teams — moving beyond the roster to the ecosystem</p><p>∙ Leading Change — how to pivot without breaking the structure</p><p>∙ Owning Impact — measuring the signal, not just the noise</p><p></p><p>──────────────────────────────────────</p><p>GUEST</p><p>──────────────────────────────────────</p><p></p><p>Jasher Cox</p><p>Director of Regional Development</p><p>University of Notre Dame</p><p>🔗 https://directory.nd.edu/profile/jcox23@nd.edu</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. 
He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. 
Guest opinions are those of the guest alone.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #Leadership #LeadingChange #TeamBuilding #HighPerformance #BuilderClass #JasherCox #NotreDame #OwningImpact #SystemsThinking #GASP #TrustGap #WorkflowThesis #NoiseDiscipline #LEACProtocol #InstitutionalLeadership #SignalNotNoise</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/break-the-mold-jasher-cox-on-building-teams-leading-change-and-owning-impact]]></link><guid isPermaLink="false">d7a68011-b958-4f14-8d72-3d9461e5fec3</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Mon, 17 Nov 2025 03:38:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/c08b301e-393d-400d-84d4-d5e78a9aa50c.mp3" length="1096033" type="audio/mpeg"/><itunes:duration>01:08</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><podcast:season>1</podcast:season></item><item><title>Testing the Broadcast Signal (A Raw Voiceover Test)</title><itunes:title>Testing the Broadcast Signal (A Raw Voiceover Test)</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>The signal is live.</p><p></p><p>This is the raw calibration of The AI Governance Briefing frequency. Before the strategy, before the architecture, there is the voice. This test establishes the baseline for the intelligence to come.</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #SignalIsLive #BuilderClass #Trailer #Calibration #SignalNotNoise #DrTuboiseFloyd #IndependentMedia #AIPolicy #GoverningAI</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>The signal is live.</p><p></p><p>This is the raw calibration of The AI Governance Briefing frequency. Before the strategy, before the architecture, there is the voice. This test establishes the baseline for the intelligence to come.</p><p></p><p>Subscribe now to lock in the feed.
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #SignalIsLive #BuilderClass #Trailer #Calibration #SignalNotNoise #DrTuboiseFloyd #IndependentMedia #AIPolicy #GoverningAI</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/testing-the-broadcast-signal-a-raw-voiceover-test]]></link><guid isPermaLink="false">8c14303a-6843-4b88-ba2e-458031902b37</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Sun, 16 Nov 2025 02:38:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/dbd86130-091b-4ba4-8297-2b10005d3fe9.mp3" length="383965"
type="audio/mpeg"/><itunes:duration>00:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><podcast:season>1</podcast:season></item><item><title>A Generational Truce: Gen X and Gen Z in 2026</title><itunes:title>A Generational Truce: Gen X and Gen Z in 2026</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd explores A Generational Truce — the surprising and powerful alignment forming between Gen X and Gen Z.</p><p></p><p>Both generations are rejecting the systems built to divide them, joining forces to rebuild what was lost and design what comes next. From dismantled career pathways to AI as the ultimate equalizer, this conversation goes deep into how skepticism meets innovation — and why this fusion of experience and energy could reset the institutional order completely.</p><p></p><p>The old guard won't survive this shift. Collaboration, transparency, and creative rebellion will.</p><p></p><p>The truce isn't just generational. It's transformational.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #GenerationalTruce #GenX #GenZ #Leadership #FutureOfWork #Collaboration #CreativeRebellion #InstitutionalChange #AIEqualizer #BuilderClass #GASP #TrustGap #NoiseDiscipline #WorkflowThesis #LEACProtocol #FailureFiles #Transformation</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd explores A Generational Truce — the surprising and powerful alignment forming between Gen X and Gen Z.</p><p></p><p>Both generations are rejecting the systems built to divide them, joining forces to rebuild what was lost and design what comes next. 
From dismantled career pathways to AI as the ultimate equalizer, this conversation goes deep into how skepticism meets innovation — and why this fusion of experience and energy could reset the institutional order completely.</p><p></p><p>The old guard won't survive this shift. Collaboration, transparency, and creative rebellion will.</p><p></p><p>The truce isn't just generational. It's transformational.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. 
Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #GenerationalTruce #GenX #GenZ #Leadership #FutureOfWork #Collaboration #CreativeRebellion #InstitutionalChange #AIEqualizer #BuilderClass #GASP #TrustGap #NoiseDiscipline #WorkflowThesis #LEACProtocol #FailureFiles #Transformation</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/a-generational-truce-gen-x-and-gen-z-in-2026]]></link><guid isPermaLink="false">80f449bf-1c0b-48e8-baa8-6ac880f5a92a</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Fri, 14 Nov 2025 02:26:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/422eb8f0-a0d1-4c06-bd88-6906303b554a.mp3" length="6313970" type="audio/mpeg"/><itunes:duration>06:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Burn the Playbook: Paul Wilson, Jr. on Architecting What Doesn’t Exist</title><itunes:title>Burn the Playbook: Paul Wilson, Jr. on Architecting What Doesn’t Exist</itunes:title><description><![CDATA[<p>On The AI Governance Briefing, we don't interview followers.</p><p></p><p>We bring you builders — leaders who architect new systems instead of optimizing the status quo.</p><p></p><p>🎙️ Coming up on the Season Finale: Paul Wilson Jr., CEO of Paul Wilson Global Solutions, LLC. 
A supply chain strategist and ecosystem architect who has built across continents and said no to the safe script.</p><p></p><p>This episode is engineered for the people who want more than just adaptation — who want to build what's next.</p><p></p><p>🔗 https://www.paulwilsonglobal.com</p><p></p><p>Drop a 🔥 if you're building, not just watching.</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #BuilderClass #PaulWilsonJr #SeasonFinale #Entrepreneurship #SystemsArchitect #SupplyChain #Leadership #BuildNotWatch #SignalNotNoise #GASP #LEACProtocol #FailureFiles</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>On The AI Governance Briefing, we don't interview followers.</p><p></p><p>We bring you builders — leaders who architect new systems instead of optimizing the status quo.</p><p></p><p>🎙️ Coming up on the Season Finale: Paul Wilson Jr., CEO of Paul Wilson Global Solutions, LLC. A supply chain strategist and ecosystem architect who has built across continents and said no to the safe script.</p><p></p><p>This episode is engineered for the people who want more than just adaptation — who want to build what's next.</p><p></p><p>🔗 https://www.paulwilsonglobal.com</p><p></p><p>Drop a 🔥 if you're building, not just watching.</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. 
No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #BuilderClass #PaulWilsonJr #SeasonFinale #Entrepreneurship #SystemsArchitect #SupplyChain #Leadership #BuildNotWatch #SignalNotNoise #GASP #LEACProtocol #FailureFiles</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/burn-the-playbook-paul-wilson-jr-on-architecting-what-doesnt-exist]]></link><guid isPermaLink="false">4ed476db-6266-4bef-90ad-89089ea822d6</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Sat, 08 Nov 2025 17:36:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/a663a530-1035-45e0-822c-64de5874241b.mp3" length="838570" 
type="audio/mpeg"/><itunes:duration>00:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Heroic Visibility: Turning Marginality into Magnetism</title><itunes:title>Heroic Visibility: Turning Marginality into Magnetism</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd presents the Heroic Visibility framework — a direct protocol for protecting your intellectual property in the digital age.</p><p></p><p>Visibility is not vanity. It is your strongest defense mechanism.</p><p></p><p>If you stay hidden, your ideas will be borrowed. This briefing is the full audio of the "From Margins to Signal" article. It lays out why public, timestamped documentation is the only shield innovators have left against institutional theft.</p><p></p><p>──────────────────────────────────────</p><p>THE PROTOCOL</p><p>──────────────────────────────────────</p><p></p><p>∙ Stop hiding in the margins</p><p>∙ Document your signal in public</p><p>∙ Make your attribution undeniable</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Presence Signaling Architecture® (PSA) — visiblehuman.co</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #HeroicVisibility #PresenceSignalingArchitecture #PSA #IntellectualProperty #PublicDocumentation #Attribution #FromMarginsToSignal #BuilderClass #SignalNotNoise #GASP #TrustGap #NoiseDiscipline #LEACProtocol #FailureFiles #VisibleHuman #DigitalDefense</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd presents the Heroic Visibility framework — a direct protocol for protecting your intellectual property in the digital age.</p><p></p><p>Visibility is not vanity. It is your strongest defense mechanism.</p><p></p><p>If you stay hidden, your ideas will be borrowed. 
This briefing is the full audio of the "From Margins to Signal" article. It lays out why public, timestamped documentation is the only shield innovators have left against institutional theft.</p><p></p><p>──────────────────────────────────────</p><p>THE PROTOCOL</p><p>──────────────────────────────────────</p><p></p><p>∙ Stop hiding in the margins</p><p>∙ Document your signal in public</p><p>∙ Make your attribution undeniable</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Presence Signaling Architecture® (PSA) — visiblehuman.co</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. 
Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #HeroicVisibility #PresenceSignalingArchitecture #PSA #IntellectualProperty #PublicDocumentation #Attribution #FromMarginsToSignal #BuilderClass #SignalNotNoise #GASP #TrustGap #NoiseDiscipline #LEACProtocol #FailureFiles #VisibleHuman #DigitalDefense</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/heroic-visibility-turning-marginality-into-magnetism]]></link><guid isPermaLink="false">e4095f9e-bbbe-441f-b98b-f76373454c1c</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Fri, 31 Oct 2025 13:33:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/0a58ecfd-73f6-4168-92e5-9cd5f16a64df.mp3" length="5368128" type="audio/mpeg"/><itunes:duration>05:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Beyond Excuse: The BAR Protocol for Intentional Signal Design</title><itunes:title>Beyond Excuse: The BAR Protocol for Intentional Signal Design</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd revives the BAR Method — Background, Action, Result — as a framework for the ultimate self-interview.</p><p></p><p>Your excuses are a loop. This is the code to break them.</p><p></p><p>Stop lying to yourself about your patterns. 
Surface the data, analyze the background, and rewire your next move with total honesty.</p><p></p><p>──────────────────────────────────────</p><p>THE METHOD</p><p>──────────────────────────────────────</p><p></p><p>∙ Background — what is actually happening?</p><p>∙ Action — what did you do, or fail to do?</p><p>∙ Result — the undeniable outcome</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. 
His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #BARMethod #SelfInterview #BackgroundActionResult #BuilderClass #PersonalAccountability #PatternRecognition #SignalNotNoise #GASP #TrustGap #NoiseDiscipline #WorkflowThesis #LEACProtocol #FailureFiles #Leadership #StrategicThinking</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd revives the BAR Method — Background, Action, Result — as a framework for the ultimate self-interview.</p><p></p><p>Your excuses are a loop. This is the code to break them.</p><p></p><p>Stop lying to yourself about your patterns. Surface the data, analyze the background, and rewire your next move with total honesty.</p><p></p><p>──────────────────────────────────────</p><p>THE METHOD</p><p>──────────────────────────────────────</p><p></p><p>∙ Background — what is actually happening?</p><p>∙ Action — what did you do, or fail to do?</p><p>∙ Result — the undeniable outcome</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. 
Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #BARMethod #SelfInterview #BackgroundActionResult #BuilderClass #PersonalAccountability #PatternRecognition #SignalNotNoise #GASP #TrustGap #NoiseDiscipline #WorkflowThesis #LEACProtocol #FailureFiles #Leadership #StrategicThinking</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/beyond-excuse-the-bar-protocol-for-intentional-signal-design]]></link><guid isPermaLink="false">65e7b793-3825-4587-90b1-92df09c3ef30</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Sun, 26 Oct 2025 05:04:00 -0400</pubDate><enclosure 
url="https://op3.dev/e/episodes.captivate.fm/episode/cc172b7d-b39f-4817-afd8-cb677fa916b1.mp3" length="6539668" type="audio/mpeg"/><itunes:duration>06:49</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><podcast:season>1</podcast:season></item><item><title>Re-Tuning the Self: Signal Integrity in an Over-Connected World</title><itunes:title>Re-Tuning the Self: Signal Integrity in an Over-Connected World</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd takes you on a real-world journey to rediscover who you are beneath the noise and demands of modern life.</p><p></p><p>No trends. No hollow advice. No self-help jargon. Just clarity.</p><p></p><p>Five actionable steps: wake your conscious self, break out of mindless routines, claim time for solitude, abandon negativity, and find purpose through serving others. The episode exposes why comparison culture and autopilot living erode true fulfillment — and why your sense of meaning thrives on curiosity and contribution, not perfection.</p><p></p><p>This is a framework to transform daily survival into genuine growth, purpose, and connection. Not just surviving. 
Thriving, right where you are.</p><p></p><p>──────────────────────────────────────</p><p>THE FIVE STEPS</p><p>──────────────────────────────────────</p><p></p><p>∙ Wake your conscious self</p><p>∙ Break out of mindless routines</p><p>∙ Claim time for solitude</p><p>∙ Abandon negativity</p><p>∙ Find purpose through serving others</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Presence Signaling Architecture® (PSA) — visiblehuman.co</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. 
They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AuthenticSelf #PersonalGrowth #Solitude #PurposeDriven #NoiseDiscipline #BuilderClass #PSA #PresenceSignalingArchitecture #ComparisonCulture #Autopilot #MindfulLiving #SignalNotNoise #GASP #TrustGap #LEACProtocol #Clarity #Contribution #VisibleHuman</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd takes you on a real-world journey to rediscover who you are beneath the noise and demands of modern life.</p><p></p><p>No trends. No hollow advice. No self-help jargon. Just clarity.</p><p></p><p>Five actionable steps: wake your conscious self, break out of mindless routines, claim time for solitude, abandon negativity, and find purpose through serving others. The episode exposes why comparison culture and autopilot living erode true fulfillment — and why your sense of meaning thrives on curiosity and contribution, not perfection.</p><p></p><p>This is a framework to transform daily survival into genuine growth, purpose, and connection. Not just surviving. 
Thriving, right where you are.</p><p></p><p>──────────────────────────────────────</p><p>THE FIVE STEPS</p><p>──────────────────────────────────────</p><p></p><p>∙ Wake your conscious self</p><p>∙ Break out of mindless routines</p><p>∙ Claim time for solitude</p><p>∙ Abandon negativity</p><p>∙ Find purpose through serving others</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Presence Signaling Architecture® (PSA) — visiblehuman.co</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. 
They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AuthenticSelf #PersonalGrowth #Solitude #PurposeDriven #NoiseDiscipline #BuilderClass #PSA #PresenceSignalingArchitecture #ComparisonCulture #Autopilot #MindfulLiving #SignalNotNoise #GASP #TrustGap #LEACProtocol #Clarity #Contribution #VisibleHuman</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/re-tuning-the-self-signal-integrity-in-an-over-connected-world]]></link><guid isPermaLink="false">4d386c32-afcc-4fc7-9d2b-9019dd4b8bac</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Wed, 22 Oct 2025 03:17:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/49998ec0-8948-47da-9520-26400fd19bf2.mp3" length="6125753" type="audio/mpeg"/><itunes:duration>06:23</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><podcast:season>1</podcast:season></item><item><title>Presence in the Age of Automation: Why Purpose Outranks Progress</title><itunes:title>Presence in the Age of Automation: Why Purpose Outranks Progress</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd asks the question that automation keeps deferring: why do we still show up after the bots clock in?</p><p></p><p>We gave AI the grind. We ended up losing the meaning.</p><p></p><p>AI took the friction. Automation eased the pain. But the tension — the part that made work feel alive — is automated now too. 
We supervise the dashboards. We watch the pipelines. Less friction, but less purpose.</p><p></p><p>If you feel like a sidekick in your own job, you're not alone. Tools should amplify human judgment, not erase it. Until we design for that, every upgrade risks being a downgrade for the human spirit. The more we automate, the more essential it is to stay human.</p><p></p><p>Progress without purpose isn't progress at all.</p><p></p><p>──────────────────────────────────────</p><p>INSPIRED BY</p><p>──────────────────────────────────────</p><p></p><p>Benn Stancil — "We Were Hired to Do the Grunt Work"</p><p>🔗 https://open.substack.com/pub/benn/p/we-were-hired-to-do-the-grunt-work</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Presence Signaling Architecture® (PSA) — visiblehuman.co</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. 
Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. 
Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #ProgressWithoutPurpose #HumanJudgment #Automation #FutureOfWork #BuilderClass #PSA #PresenceSignalingArchitecture #HumanInTheLoop #AIAccountability #GASP #TrustGap #NoiseDiscipline #WorkflowThesis #LEACProtocol #FailureFiles #StayHuman #VisibleHuman</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd asks the question that automation keeps deferring: why do we still show up after the bots clock in?</p><p></p><p>We gave AI the grind. We ended up losing the meaning.</p><p></p><p>AI took the friction. Automation eased the pain. But the tension — the part that made work feel alive — is automated now too. We supervise the dashboards. We watch the pipelines. Less friction, but less purpose.</p><p></p><p>If you feel like a sidekick in your own job, you're not alone. Tools should amplify human judgment, not erase it. Until we design for that, every upgrade risks being a downgrade for the human spirit. 
The more we automate, the more essential it is to stay human.</p><p></p><p>Progress without purpose isn't progress at all.</p><p></p><p>──────────────────────────────────────</p><p>INSPIRED BY</p><p>──────────────────────────────────────</p><p></p><p>Benn Stancil — "We Were Hired to Do the Grunt Work"</p><p>🔗 https://open.substack.com/pub/benn/p/we-were-hired-to-do-the-grunt-work</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Presence Signaling Architecture® (PSA) — visiblehuman.co</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. 
Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #ProgressWithoutPurpose #HumanJudgment #Automation #FutureOfWork #BuilderClass #PSA #PresenceSignalingArchitecture #HumanInTheLoop #AIAccountability #GASP #TrustGap #NoiseDiscipline #WorkflowThesis #LEACProtocol #FailureFiles #StayHuman #VisibleHuman</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/presence-in-the-age-of-automation-why-purpose-out-ranks-progress]]></link><guid isPermaLink="false">d2355943-8d69-40ca-a26b-4d7b1c3862b5</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Wed, 22 Oct 2025 01:14:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/6d2e520c-085d-4b78-9cea-77e09a1863b4.mp3" length="1554669" type="audio/mpeg"/><itunes:duration>01:37</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><podcast:season>1</podcast:season></item><item><title>Signal 0: Value = Visibility</title><itunes:title>Signal 0: Value = Visibility</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd breaks down — in 60 seconds — how market signals, not just skills, shape career opportunities.</p><p></p><p>If you're ready to stand out and amplify your professional story, this briefing is for you.</p><p></p><p>Skills get you in the room. 
Signal gets you remembered.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Presence Signaling Architecture® (PSA) — visiblehuman.co</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. 
Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #MarketSignals #CareerStrategy #ProfessionalBrand #BuilderClass #PSA #PresenceSignalingArchitecture #SignalNotNoise #NoiseDiscipline #AmplifyYourSignal #Leadership #CareerGrowth #LEACProtocol #GASP #VisibleHuman</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. 
Tuboise Floyd breaks down — in 60 seconds — how market signals, not just skills, shape career opportunities.</p><p></p><p>If you're ready to stand out and amplify your professional story, this briefing is for you.</p><p></p><p>Skills get you in the room. Signal gets you remembered.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Presence Signaling Architecture® (PSA) — visiblehuman.co</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. 
They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #MarketSignals #CareerStrategy #ProfessionalBrand #BuilderClass #PSA #PresenceSignalingArchitecture #SignalNotNoise #NoiseDiscipline #AmplifyYourSignal #Leadership #CareerGrowth #LEACProtocol #GASP #VisibleHuman</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/signal-0-value-visibility]]></link><guid isPermaLink="false">884ee1f4-063c-48c4-b449-4215409e4982</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Wed, 22 Oct 2025 00:33:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/957f723c-edbf-4c52-8eac-b30bac2c5d1e.mp3" length="1231168" type="audio/mpeg"/><itunes:duration>01:17</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><podcast:season>1</podcast:season></item><item><title>Synthetic Trust: AI Spokespeople and the Erosion of Authenticity</title><itunes:title>Synthetic Trust: AI Spokespeople and the Erosion of Authenticity</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd examines the age of the synthetic spokesperson — and asks the harder question behind the Carvana x Shaq AI story that every headline is calling innovation.</p><p></p><p>When a company can license, clone, and automate not just a star's image but their very personality, the line between technology and trust fundamentally shifts.</p><p></p><p>This isn't about convenience or celebrity. 
It's about what happens to meaning, authenticity, and human presence when even connection is coded into the script. What's lost — or gained — when brands become the authors of personality itself? Is this the future of trust, or the moment we settle for endless copies?</p><p></p><p>──────────────────────────────────────</p><p>INSPIRED BY</p><p>──────────────────────────────────────</p><p></p><p>Forbes — "Shaq Teams Up With Carvana to Launch AI ShaqBot"</p><p>🔗 https://www.forbes.com/sites/andyfrye/2025/09/15/shaq-teams-up-with-carvana-to-launch-ai-shaqbot/</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Presence Signaling Architecture® (PSA) — visiblehuman.co</p><p>→ AIaPI™ (AI as Presence Interface) — visiblehuman.co</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. 
He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #SyntheticSpokesperson #AIPersonality #Carvana #ShaqBot #AIAuthenticity #HumanPresence #BrandTrust #AICloning #PSA #PresenceSignalingArchitecture #AIaPI #TrustGap #NoiseDiscipline #GASP #LEACProtocol #BuilderClass #VisibleHuman #AIEthics</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd examines the age of the synthetic spokesperson — and asks the harder question behind the Carvana x Shaq AI story that every headline is calling innovation.</p><p></p><p>When a company can license, clone, and automate not just a star's image but their very personality, the line between technology and trust fundamentally shifts.</p><p></p><p>This isn't about convenience or celebrity. It's about what happens to meaning, authenticity, and human presence when even connection is coded into the script. What's lost — or gained — when brands become the authors of personality itself? 
Is this the future of trust, or the moment we settle for endless copies?</p><p></p><p>──────────────────────────────────────</p><p>INSPIRED BY</p><p>──────────────────────────────────────</p><p></p><p>Forbes — "Shaq Teams Up With Carvana to Launch AI ShaqBot"</p><p>🔗 https://www.forbes.com/sites/andyfrye/2025/09/15/shaq-teams-up-with-carvana-to-launch-ai-shaqbot/</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Presence Signaling Architecture® (PSA) — visiblehuman.co</p><p>→ AIaPI™ (AI as Presence Interface) — visiblehuman.co</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. 
Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #SyntheticSpokesperson #AIPersonality #Carvana #ShaqBot #AIAuthenticity #HumanPresence #BrandTrust #AICloning #PSA #PresenceSignalingArchitecture #AIaPI #TrustGap #NoiseDiscipline #GASP #LEACProtocol #BuilderClass #VisibleHuman #AIEthics</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/synthetic-trust-ai-spokespeople-and-the-erosion-of-authenticity]]></link><guid isPermaLink="false">8a318345-0e46-4c9c-a4b9-98bc25d7268b</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Tue, 21 Oct 2025 22:47:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/cdd3591f-2577-425c-b2b7-99124e9ffeba.mp3" length="1891544" type="audio/mpeg"/><itunes:duration>01:58</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><podcast:season>1</podcast:season></item><item><title>Touchless Beginnings: The Robotics of Human Bond Formation</title><itunes:title>Touchless Beginnings: The Robotics of Human Bond Formation</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd examines the cognitive and ethical implications of Touchless Beginnings — the moment we handed the spark of life to an algorithm.</p><p></p><p>First, we automated the factory. Now, we are automating the womb.</p><p></p><p>A robot has successfully performed the delicate act of fertilization for IVF. 
When we remove the human hand from the most fundamental human journey, what happens to the bond? This is a critical look at the intersection of biology and the machine age.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Presence Signaling Architecture® (PSA) — visiblehuman.co</p><p>→ AIaPI™ (AI as Presence Interface) — visiblehuman.co</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. 
They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #TouchlessBeginnings #IVFRobot #AIBiology #HumanPresence #AIEthics #SparkOfLife #AIAndHumanity #PSA #PresenceSignalingArchitecture #AIaPI #TrustGap #GASP #NoiseDiscipline #LEACProtocol #FailureFiles #BuilderClass #MachineAge #VisibleHuman</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd examines the cognitive and ethical implications of Touchless Beginnings — the moment we handed the spark of life to an algorithm.</p><p></p><p>First, we automated the factory. Now, we are automating the womb.</p><p></p><p>A robot has successfully performed the delicate act of fertilization for IVF. When we remove the human hand from the most fundamental human journey, what happens to the bond? This is a critical look at the intersection of biology and the machine age.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Presence Signaling Architecture® (PSA) — visiblehuman.co</p><p>→ AIaPI™ (AI as Presence Interface) — visiblehuman.co</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #TouchlessBeginnings #IVFRobot #AIBiology #HumanPresence #AIEthics #SparkOfLife #AIAndHumanity #PSA #PresenceSignalingArchitecture #AIaPI #TrustGap #GASP #NoiseDiscipline #LEACProtocol #FailureFiles #BuilderClass #MachineAge #VisibleHuman</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/touchless-beginnings-the-robotics-of-human-bond-formation]]></link><guid isPermaLink="false">fe3de027-b98c-44f4-b11a-2a2385443128</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Tue, 21 Oct 2025 22:40:00 -0400</pubDate><enclosure 
url="https://op3.dev/e/episodes.captivate.fm/episode/9382df21-3725-4bb5-9212-6369a96b3044.mp3" length="1551325" type="audio/mpeg"/><itunes:duration>01:37</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><podcast:season>1</podcast:season></item><item><title>Effort Is Dead: Leverage as the New Metric of Human Signal</title><itunes:title>Effort Is Dead: Leverage as the New Metric of Human Signal</itunes:title><description><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd examines the shift already underway inside modern workplaces: effort doesn't matter anymore — leverage does.</p><p></p><p>Your next promotion might depend less on hard work and more on how you use AI. This isn't a dystopian forecast. It's already happening.</p><p></p><p>What happens when algorithms decide what "good work" means? We break down the shift — and what it means for being human at work.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Presence Signaling Architecture® (PSA) — visiblehuman.co</p><p>→ AIaPI™ (AI as Presence Interface) — visiblehuman.co</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #EffortVsLeverage #AIAtWork #FutureOfWork #AlgorithmicWorkplace #HumanJudgment #AILeverage #BuilderClass #PSA #PresenceSignalingArchitecture #AIaPI #WorkflowThesis #GASP #TrustGap #NoiseDiscipline #LEACProtocol #FailureFiles #VisibleHuman #BeingHuman</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></description><content:encoded><![CDATA[<p>EPISODE DESCRIPTION</p><p></p><p>In this episode of The AI Governance Briefing, Dr. Tuboise Floyd examines the shift already underway inside modern workplaces: effort doesn't matter anymore — leverage does.</p><p></p><p>Your next promotion might depend less on hard work and more on how you use AI. This isn't a dystopian forecast. 
It's already happening.</p><p></p><p>What happens when algorithms decide what "good work" means? We break down the shift — and what it means for being human at work.</p><p></p><p>──────────────────────────────────────</p><p>FRAMEWORKS REFERENCED</p><p>──────────────────────────────────────</p><p></p><p>→ Presence Signaling Architecture® (PSA) — visiblehuman.co</p><p>→ AIaPI™ (AI as Presence Interface) — visiblehuman.co</p><p>→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis</p><p>→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp</p><p>→ The Trust Gap — humansignal.io/frameworks/trust-gap</p><p>→ Noise Discipline — humansignal.io/frameworks/noise-discipline</p><p>→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol</p><p>→ Failure Files™ — humansignal.io/failure-files</p><p></p><p>──────────────────────────────────────</p><p>SUPPORT THE SHOW</p><p>──────────────────────────────────────</p><p></p><p>Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.</p><p></p><p>Help fuel independent AI governance research, new episodes, and the Failure Files™ series.</p><p>🔗 https://theaigovernancebriefing.com/support</p><p></p><p>Every contribution sustains the signal.</p><p></p><p>──────────────────────────────────────</p><p>ABOUT THE HOST</p><p>──────────────────────────────────────</p><p></p><p>Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).</p><p></p><p>A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. 
His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.</p><p></p><p>Independence is not a feature. It is the product.</p><p></p><p>──────────────────────────────────────</p><p>PRODUCTION NOTES</p><p>──────────────────────────────────────</p><p></p><p>Host &amp; Producer: Dr. Tuboise Floyd</p><p>Creative Director: Jeremy Jarvis</p><p>A Human Signal Production</p><p></p><p>Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.</p><p></p><p>──────────────────────────────────────</p><p>CONNECT</p><p>──────────────────────────────────────</p><p></p><p>Website: humansignal.io</p><p>Podcast: theaigovernancebriefing.com</p><p>LinkedIn: linkedin.com/in/drtuboisefloyd</p><p>Email: tuboise@theaigovernancebriefing.com</p><p>General inquiries: hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>TRANSCRIPT</p><p>──────────────────────────────────────</p><p></p><p>Full transcript available upon request at hello@theaigovernancebriefing.com</p><p></p><p>──────────────────────────────────────</p><p>LEGAL</p><p>──────────────────────────────────────</p><p></p><p>© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. 
Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.</p><p></p><p>──────────────────────────────────────</p><p>TAGS</p><p>──────────────────────────────────────</p><p></p><p>#TheAIGovernanceBriefing #HumanSignal #AIGovernance #EffortVsLeverage #AIAtWork #FutureOfWork #AlgorithmicWorkplace #HumanJudgment #AILeverage #BuilderClass #PSA #PresenceSignalingArchitecture #AIaPI #WorkflowThesis #GASP #TrustGap #NoiseDiscipline #LEACProtocol #FailureFiles #VisibleHuman #BeingHuman</p><br/><br/>This podcast uses the following third-party services for analysis: <br/><br/>OP3 - https://op3.dev/privacy]]></content:encoded><link><![CDATA[https://podcast.theaigovernancebriefing.com/episode/effort-is-dead-leverage-as-the-new-metric-of-human-signal]]></link><guid isPermaLink="false">3c193f3e-2015-4303-92ec-d305b13fb6d3</guid><itunes:image href="https://artwork.captivate.fm/c6af5cf7-4564-498b-91bc-6bf0761604da/Show-Image-Dr-Tuboise-Floyd-JPG.jpg"/><pubDate>Tue, 21 Oct 2025 22:29:00 -0400</pubDate><enclosure url="https://op3.dev/e/episodes.captivate.fm/episode/16c9f11d-a2b8-4226-b614-cdd43db67b91.mp3" length="1796249" type="audio/mpeg"/><itunes:duration>01:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><podcast:season>1</podcast:season></item></channel></rss>