<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="https://feeds.captivate.fm/style.xsl" type="text/xsl"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><atom:link href="https://feeds.captivate.fm/palisade-research/" rel="self" type="application/rss+xml"/><title><![CDATA[Palisade Research Podcast]]></title><podcast:guid>ade2479f-ce3c-5757-b126-7adbf1ec5b90</podcast:guid><lastBuildDate>Sat, 24 Jan 2026 08:15:00 +0000</lastBuildDate><generator>Captivate.fm</generator><language><![CDATA[en]]></language><copyright><![CDATA[Copyright 2026 Palisade Research]]></copyright><managingEditor>Palisade Research</managingEditor><itunes:summary><![CDATA[Interviews with AI researchers talking about the latest AI research]]></itunes:summary><image><url>https://artwork.captivate.fm/59b8c5cc-dcd1-466b-8b41-ef6a842ff515/palisade-logo-3000x3000.jpg</url><title>Palisade Research Podcast</title><link><![CDATA[https://palisade-research.captivate.fm]]></link></image><itunes:image href="https://artwork.captivate.fm/59b8c5cc-dcd1-466b-8b41-ef6a842ff515/palisade-logo-3000x3000.jpg"/><itunes:owner><itunes:name>Palisade Research</itunes:name></itunes:owner><itunes:author>Palisade Research</itunes:author><description>Interviews with AI researchers talking about the latest AI research</description><link>https://palisade-research.captivate.fm</link><atom:link href="https://pubsubhubbub.appspot.com" rel="hub"/><itunes:subtitle><![CDATA[Covering the latest AI research]]></itunes:subtitle><itunes:explicit>false</itunes:explicit><itunes:type>episodic</itunes:type><itunes:category text="Technology"></itunes:category><itunes:category text="Science"></itunes:category><itunes:category text="News"><itunes:category text="Tech News"/></itunes:category><podcast:locked>no</podcast:locked><podcast:medium>podcast</podcast:medium><item><title>Do AI Models Lie on Purpose? Scheming, Deception, and Alignment with Marius Hobbhahn of Apollo Research</title><itunes:title>Do AI Models Lie on Purpose? Scheming, Deception, and Alignment with Marius Hobbhahn of Apollo Research</itunes:title><description><![CDATA[<p>Marius Hobbhahn is the CEO and co-founder of Apollo Research. Through a joint research project with OpenAI, his team discovered that as models become more capable, they are developing the ability to hide their true reasoning from human oversight.</p><p>Jeffrey Ladish, Executive Director of Palisade Research, talks with Marius about this work. 
They discuss the difference between hallucination and deliberate deception, and the urgent challenge of aligning increasingly capable AI systems.</p><p>Links:</p><p>Marius’ Twitter: <u><a href="https://twitter.com/mariushobbhahn" rel="noopener noreferrer" target="_blank">https://twitter.com/mariushobbhahn</a></u></p><p>Apollo Research Twitter: <u><a href="https://twitter.com/apolloaievals" rel="noopener noreferrer" target="_blank">https://twitter.com/apolloaievals</a></u></p><p>Apollo Research: <u><a href="https://www.apolloresearch.ai" rel="noopener noreferrer" target="_blank">https://www.apolloresearch.ai</a></u></p><p>Palisade Research: <u><a href="https://palisaderesearch.org/" rel="noopener noreferrer" target="_blank">https://palisaderesearch.org/</a></u></p><p>Palisade Research Twitter/X: <u><a href="https://x.com/PalisadeAI" rel="noopener noreferrer" target="_blank">https://x.com/PalisadeAI</a></u></p><p>Anti-Scheming Project: <u><a href="https://www.antischeming.ai" rel="noopener noreferrer" target="_blank">https://www.antischeming.ai</a></u></p><p>Research paper “Stress Testing Deliberative Alignment for Anti-Scheming Training”: <u><a href="https://www.arxiv.org/pdf/2509.15541" rel="noopener noreferrer" target="_blank">https://www.arxiv.org/pdf/2509.15541</a></u></p><p>Blog posts from OpenAI and Apollo: <u><a href="https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/" rel="noopener noreferrer" target="_blank">https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/</a></u> <u><a href="https://www.apolloresearch.ai/research/stress-testing-deliberative-alignment-for-anti-scheming-training/" rel="noopener noreferrer" target="_blank">https://www.apolloresearch.ai/research/stress-testing-deliberative-alignment-for-anti-scheming-training/</a></u></p>]]></description><content:encoded><![CDATA[<p>Marius Hobbhahn is the CEO and co-founder of Apollo Research. Through a joint research project with OpenAI, his team discovered that as models become more capable, they are developing the ability to hide their true reasoning from human oversight.</p><p>Jeffrey Ladish, Executive Director of Palisade Research, talks with Marius about this work.
They discuss the difference between hallucination and deliberate deception, and the urgent challenge of aligning increasingly capable AI systems.</p><p>Links:</p><p>Marius’ Twitter: <u><a href="https://twitter.com/mariushobbhahn" rel="noopener noreferrer" target="_blank">https://twitter.com/mariushobbhahn</a></u></p><p>Apollo Research Twitter: <u><a href="https://twitter.com/apolloaievals" rel="noopener noreferrer" target="_blank">https://twitter.com/apolloaievals</a></u></p><p>Apollo Research: <u><a href="https://www.apolloresearch.ai" rel="noopener noreferrer" target="_blank">https://www.apolloresearch.ai</a></u></p><p>Palisade Research: <u><a href="https://palisaderesearch.org/" rel="noopener noreferrer" target="_blank">https://palisaderesearch.org/</a></u></p><p>Palisade Research Twitter/X: <u><a href="https://x.com/PalisadeAI" rel="noopener noreferrer" target="_blank">https://x.com/PalisadeAI</a></u></p><p>Anti-Scheming Project: <u><a href="https://www.antischeming.ai" rel="noopener noreferrer" target="_blank">https://www.antischeming.ai</a></u></p><p>Research paper “Stress Testing Deliberative Alignment for Anti-Scheming Training”: <u><a href="https://www.arxiv.org/pdf/2509.15541" rel="noopener noreferrer" target="_blank">https://www.arxiv.org/pdf/2509.15541</a></u></p><p>Blog posts from OpenAI and Apollo: <u><a href="https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/" rel="noopener noreferrer" target="_blank">https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/</a></u> <u><a href="https://www.apolloresearch.ai/research/stress-testing-deliberative-alignment-for-anti-scheming-training/" rel="noopener noreferrer" target="_blank">https://www.apolloresearch.ai/research/stress-testing-deliberative-alignment-for-anti-scheming-training/</a></u></p>]]></content:encoded><link><![CDATA[https://palisade-research.captivate.fm]]></link><guid isPermaLink="false">251e1195-b8dd-4eb9-826b-6039a583fee8</guid><itunes:image href="https://artwork.captivate.fm/59b8c5cc-dcd1-466b-8b41-ef6a842ff515/palisade-logo-3000x3000.jpg"/><pubDate>Fri, 16 Jan 2026 19:47:00 -0500</pubDate><enclosure url="https://episodes.captivate.fm/episode/251e1195-b8dd-4eb9-826b-6039a583fee8.mp3" length="121979737" type="audio/mpeg"/><itunes:duration>01:24:42</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>1</itunes:episode><podcast:episode>1</podcast:episode><podcast:chapters url="https://transcripts.captivate.fm/chapter-36f92ad9-ae61-4389-b3cd-cc113ffdb6c2.json" type="application/json+chapters"/></item></channel></rss>