<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="https://feeds.captivate.fm/style.xsl" type="text/xsl"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><atom:link href="https://feeds.captivate.fm/education-futures/" rel="self" type="application/rss+xml"/><title><![CDATA[Education Futures]]></title><podcast:guid>0b5eecca-09bf-5ad2-9af1-b6088063c37a</podcast:guid><lastBuildDate>Mon, 13 Apr 2026 09:06:59 +0000</lastBuildDate><generator>Captivate.fm</generator><language><![CDATA[en]]></language><copyright><![CDATA[Copyright 2026 Svenia Busson & Laurent Jolie]]></copyright><managingEditor>Svenia Busson &amp; Laurent Jolie</managingEditor><itunes:summary><![CDATA[A podcast about the future of education in the age of AI. We bring together interdisciplinary voices to explore how we can shape more desirable futures for learning.]]></itunes:summary><image><url>https://artwork.captivate.fm/506710c6-d213-4b9b-a0a3-f9644ed331f5/Official-podcast-tile.png</url><title>Education Futures</title><link><![CDATA[https://www.educationfutures.ai]]></link></image><itunes:image href="https://artwork.captivate.fm/506710c6-d213-4b9b-a0a3-f9644ed331f5/Official-podcast-tile.png"/><itunes:owner><itunes:name>Svenia Busson &amp; Laurent Jolie</itunes:name></itunes:owner><itunes:author>Svenia Busson &amp; Laurent Jolie</itunes:author><description>A podcast about the future of education in the age of AI. 
We bring together interdisciplinary voices to explore how we can shape more desirable futures for learning.</description><link>https://www.educationfutures.ai</link><atom:link href="https://pubsubhubbub.appspot.com" rel="hub"/><itunes:explicit>false</itunes:explicit><itunes:type>episodic</itunes:type><itunes:category text="Education"></itunes:category><itunes:category text="Kids &amp; Family"><itunes:category text="Education for Kids"/></itunes:category><itunes:category text="Society &amp; Culture"></itunes:category><podcast:locked>no</podcast:locked><podcast:medium>podcast</podcast:medium><item><title>AI in education: separating the hype from the evidence</title><itunes:title>AI in education: separating the hype from the evidence</itunes:title><description><![CDATA[<p>In this episode, Svenia Busson sits down with <strong>Wess Trabelsi</strong>, a Tech Integration Specialist at Ulster BOCES in New York, where he supports eight rural school districts. Wess is neither a cheerleader nor a doomsayer when it comes to AI in education; he's something rarer: an evidence-driven practitioner who actually read the research.</p><p>Wess shares his deep dive into the science (or lack thereof) behind AI in K-12. After reviewing over 100 studies, he found that the vast majority are noise: glorified surveys, opinion pieces, and what he calls "dead horse studies" that prove the obvious. His findings closely mirror those of the Stanford AI Hub for Education's newly released 2026 Review, which started with 800 studies and kept only 20 with strong causal evidence, and found zero conducted in U.S. K-12 settings.</p><p>Together, Svenia and Wess unpack the two most significant studies to date: the Harvard RCT showing a custom AI tutor significantly helped motivated physics students, and the landmark Wharton/Turkey study showing that AI-assisted practice gains completely disappeared when AI was removed at test time. 
Neither provides a clear playbook for the average classroom.</p><p>But the conversation goes much deeper. Wess makes the case that AI didn't create a problem in education; it merely <em>exposed</em> one that was already there, putting "the final nail in the coffin of the traditional model." He advocates for process-based and project-based learning (citing the inspiring model of High Tech High in California), and for rethinking assessment entirely: away from written essays as proxies for thinking, and toward conversation, video, and authentic real-world problem-solving through work-based learning.</p><p>If you've been overwhelmed by AI headlines in education and wished someone would just tell you what the evidence actually says, this episode is for you.</p><p><strong>Resources mentioned in this episode:</strong></p><ul><li>📄 <a href="https://scale.stanford.edu/sites/default/files/The%20Evidence%20Base%20on%20AI%20in%20K-12%20Report.pdf" rel="noopener noreferrer" target="_blank">The Evidence Base on AI in K-12: A 2026 Review — Stanford AI Hub for Education (PDF)</a></li><li>✍️ <a href="https://wesstrabelsi.substack.com/p/the-good-the-bad-and-the-ugly-science" rel="noopener noreferrer" target="_blank">The Good, the Bad, and the Ugly Science of AI in Education — Wess Trabelsi (Substack)</a></li><li>🎓 <a href="https://www.hightechhigh.org" rel="noopener noreferrer" target="_blank">High Tech High — San Diego, California</a> (a school built around project-based learning)</li><li>📖 <em>Superintelligence</em> by Nick Bostrom · Works by Ray Kurzweil</li></ul><br/><p><strong>Substack writers recommended by Wess:</strong></p><ul><li><a href="https://aiwaypoints.substack.com/" rel="noopener noreferrer" target="_blank">Mike Taubman — AI Waypoints</a></li><li><a href="https://fitzyhistory.substack.com/" rel="noopener noreferrer" target="_blank">Stephen Fitzpatrick — Teaching in the Age of AI</a></li><li><a href="https://mikekentz.substack.com/" rel="noopener noreferrer" target="_blank">Mike Kentz — How We 
Frame Machines</a></li></ul><br/>]]></description><content:encoded><![CDATA[<p>In this episode, Svenia Busson sits down with <strong>Wess Trabelsi</strong>, a Tech Integration Specialist at Ulster BOCES in New York, where he supports eight rural school districts. Wess is neither a cheerleader nor a doomsayer when it comes to AI in education; he's something rarer: an evidence-driven practitioner who actually read the research.</p><p>Wess shares his deep dive into the science (or lack thereof) behind AI in K-12. After reviewing over 100 studies, he found that the vast majority are noise: glorified surveys, opinion pieces, and what he calls "dead horse studies" that prove the obvious. His findings closely mirror those of the Stanford AI Hub for Education's newly released 2026 Review, which started with 800 studies and kept only 20 with strong causal evidence, and found zero conducted in U.S. K-12 settings.</p><p>Together, Svenia and Wess unpack the two most significant studies to date: the Harvard RCT showing a custom AI tutor significantly helped motivated physics students, and the landmark Wharton/Turkey study showing that AI-assisted practice gains completely disappeared when AI was removed at test time. Neither provides a clear playbook for the average classroom.</p><p>But the conversation goes much deeper. Wess makes the case that AI didn't create a problem in education; it merely <em>exposed</em> one that was already there, putting "the final nail in the coffin of the traditional model." 
He advocates for process-based and project-based learning (citing the inspiring model of High Tech High in California), and for rethinking assessment entirely: away from written essays as proxies for thinking, and toward conversation, video, and authentic real-world problem-solving through work-based learning.</p><p>If you've been overwhelmed by AI headlines in education and wished someone would just tell you what the evidence actually says, this episode is for you.</p><p><strong>Resources mentioned in this episode:</strong></p><ul><li>📄 <a href="https://scale.stanford.edu/sites/default/files/The%20Evidence%20Base%20on%20AI%20in%20K-12%20Report.pdf" rel="noopener noreferrer" target="_blank">The Evidence Base on AI in K-12: A 2026 Review — Stanford AI Hub for Education (PDF)</a></li><li>✍️ <a href="https://wesstrabelsi.substack.com/p/the-good-the-bad-and-the-ugly-science" rel="noopener noreferrer" target="_blank">The Good, the Bad, and the Ugly Science of AI in Education — Wess Trabelsi (Substack)</a></li><li>🎓 <a href="https://www.hightechhigh.org" rel="noopener noreferrer" target="_blank">High Tech High — San Diego, California</a> (a school built around project-based learning)</li><li>📖 <em>Superintelligence</em> by Nick Bostrom · Works by Ray Kurzweil</li></ul><br/><p><strong>Substack writers recommended by Wess:</strong></p><ul><li><a href="https://aiwaypoints.substack.com/" rel="noopener noreferrer" target="_blank">Mike Taubman — AI Waypoints</a></li><li><a href="https://fitzyhistory.substack.com/" rel="noopener noreferrer" target="_blank">Stephen Fitzpatrick — Teaching in the Age of AI</a></li><li><a href="https://mikekentz.substack.com/" rel="noopener noreferrer" target="_blank">Mike Kentz — How We Frame Machines</a></li></ul><br/>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">43719198-29fd-4196-9505-e5c921598c9d</guid><itunes:image 
href="https://artwork.captivate.fm/d27962fe-fc32-4f74-9153-eb50801a03da/Copy-of-EF-CAPTIVATE-3000-x-3000-px.png"/><pubDate>Mon, 13 Apr 2026 09:50:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/43719198-29fd-4196-9505-e5c921598c9d.mp3" length="26136259" type="audio/mpeg"/><itunes:duration>54:27</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>35</itunes:episode><podcast:episode>35</podcast:episode><podcast:season>1</podcast:season></item><item><title>Teaching and measuring soft skills in the age of AI</title><itunes:title>Teaching and measuring soft skills in the age of AI</itunes:title><description><![CDATA[<p>In this episode of <em>Education Futures</em>, Svenia Busson is joined by <strong>Michaela Horvathova</strong>, founder of <strong>Beyond Education</strong> and former policy analyst at the <strong>OECD</strong>.</p><p>For more than a decade, Michaela has worked on an important and misunderstood challenge in education: how to define, develop, and assess metacognition and soft skills.</p><p>As AI makes knowledge abundant and easily accessible, these competencies are becoming essential. 
Yet across most education systems, they remain poorly defined, inconsistently taught, and rarely measured.</p><p>In this conversation, we explore:</p><p>• Why we still lack a shared definition and taxonomy of soft skills</p><p>• Why what gets measured and graded still determines what gets taught</p><p>• The gap between policy ambition and classroom reality</p><p>• How the <strong>Four-Dimensional Education Framework</strong> (knowledge, skills, character, meta-learning) helps structure these competencies</p><p>• Why metacognition (learning how to learn) is becoming a foundational skill</p><p>• The risks of cognitive offloading and the emergence of a “cognitive divide”</p><p>• Why assessment must shift from outputs to thinking processes</p><p>We also discuss how schools can move toward competency-based models, drawing on Michaela’s work at Beyond International School.</p><p>The challenge is not just to talk about soft skills, but to define them clearly, prioritize them, and measure them in ways that truly reflect how humans learn.</p><p>To explore further what was discussed in the episode:</p><p>https://beyondeducation.tech/</p><p>https://curriculumredesign.org/4-dimensions/</p>]]></description><content:encoded><![CDATA[<p>In this episode of <em>Education Futures</em>, Svenia Busson is joined by <strong>Michaela Horvathova</strong>, founder of <strong>Beyond Education</strong> and former policy analyst at the <strong>OECD</strong>.</p><p>For more than a decade, Michaela has worked on an important and misunderstood challenge in education: how to define, develop, and assess metacognition and soft skills.</p><p>As AI makes knowledge abundant and easily accessible, these competencies are becoming essential. 
Yet across most education systems, they remain poorly defined, inconsistently taught, and rarely measured.</p><p>In this conversation, we explore:</p><p>• Why we still lack a shared definition and taxonomy of soft skills</p><p>• Why what gets measured and graded still determines what gets taught</p><p>• The gap between policy ambition and classroom reality</p><p>• How the <strong>Four-Dimensional Education Framework</strong> (knowledge, skills, character, meta-learning) helps structure these competencies</p><p>• Why metacognition (learning how to learn) is becoming a foundational skill</p><p>• The risks of cognitive offloading and the emergence of a “cognitive divide”</p><p>• Why assessment must shift from outputs to thinking processes</p><p>We also discuss how schools can move toward competency-based models, drawing on Michaela’s work at Beyond International School.</p><p>The challenge is not just to talk about soft skills, but to define them clearly, prioritize them, and measure them in ways that truly reflect how humans learn.</p><p>To explore further what was discussed in the episode:</p><p>https://beyondeducation.tech/</p><p>https://curriculumredesign.org/4-dimensions/</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">783ade89-c9f9-432e-b8b4-2ced7e0b928e</guid><itunes:image href="https://artwork.captivate.fm/8bf0587b-3500-4ec7-b40b-17df8b7c63ff/EF-CAPTIVATE-3000-x-3000-px-26.png"/><pubDate>Wed, 08 Apr 2026 19:05:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/783ade89-c9f9-432e-b8b4-2ced7e0b928e.mp3" length="17989608" type="audio/mpeg"/><itunes:duration>37:29</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>34</itunes:episode><podcast:episode>34</podcast:episode><podcast:season>1</podcast:season></item><item><title>Rethinking assessment in the age of 
AI</title><itunes:title>Rethinking assessment in the age of AI</itunes:title><description><![CDATA[<p>What does it mean to assess learning in a world where AI can generate answers instantly?</p><p>In this episode of <em>Education Futures</em>, Svenia Busson is joined by <strong>Alina von Davier</strong>, Chief of Assessment at <strong>Duolingo</strong>, and <strong>Elie Bechara</strong>, who works on institutional partnerships for the <strong>Duolingo English Test</strong>.</p><p>Together, they bring a rare combination of <strong>assessment science, product innovation, and real-world university dynamics</strong>.</p><p>As AI tools become widely accessible, traditional forms of assessment, especially high-stakes exams, are being fundamentally challenged.</p><p> If students can generate answers with AI, <strong>what are we actually measuring?</strong></p><p>In this conversation, we explore:</p><p>• Why AI is putting pressure on the <strong>validity of traditional exams</strong></p><p> • How universities are responding to new questions around <strong>academic integrity and admissions</strong></p><p> • The shift from testing <strong>knowledge → competencies and skills</strong></p><p> • Why assessment needs to become more <strong>continuous, contextual, and embedded in learning</strong></p><p> • How the <strong>Duolingo English Test</strong> is rethinking language assessment using AI</p><p> • The importance of designing assessments that reflect <strong>real-world performance, not artificial test conditions</strong></p><p>We also discuss how assessment can evolve from a moment of evaluation into a <strong>tool for learning itself</strong> — providing feedback, guiding progress, and supporting long-term skill development.</p><p>The challenge ahead is about redefining <strong>what we value, what we measure, and what we trust</strong> in education.</p><p>To explore further what was discussed in the episode:</p><p><u><a href="https://englishtest.duolingo.com/" rel="noopener 
noreferrer" target="_blank">https://englishtest.duolingo.com/</a></u> </p><p><u><a href="https://blog.duolingo.com/video-call/" rel="noopener noreferrer" target="_blank">https://blog.duolingo.com/video-call/</a></u></p>]]></description><content:encoded><![CDATA[<p>What does it mean to assess learning in a world where AI can generate answers instantly?</p><p>In this episode of <em>Education Futures</em>, Svenia Busson is joined by <strong>Alina von Davier</strong>, Chief of Assessment at <strong>Duolingo</strong>, and <strong>Elie Bechara</strong>, who works on institutional partnerships for the <strong>Duolingo English Test</strong>.</p><p>Together, they bring a rare combination of <strong>assessment science, product innovation, and real-world university dynamics</strong>.</p><p>As AI tools become widely accessible, traditional forms of assessment, especially high-stakes exams, are being fundamentally challenged.</p><p> If students can generate answers with AI, <strong>what are we actually measuring?</strong></p><p>In this conversation, we explore:</p><p>• Why AI is putting pressure on the <strong>validity of traditional exams</strong></p><p> • How universities are responding to new questions around <strong>academic integrity and admissions</strong></p><p> • The shift from testing <strong>knowledge → competencies and skills</strong></p><p> • Why assessment needs to become more <strong>continuous, contextual, and embedded in learning</strong></p><p> • How the <strong>Duolingo English Test</strong> is rethinking language assessment using AI</p><p> • The importance of designing assessments that reflect <strong>real-world performance, not artificial test conditions</strong></p><p>We also discuss how assessment can evolve from a moment of evaluation into a <strong>tool for learning itself</strong> — providing feedback, guiding progress, and supporting long-term skill development.</p><p>The challenge ahead is about redefining <strong>what we value, what we measure, and 
what we trust</strong> in education.</p><p>To explore further what was discussed in the episode:</p><p><u><a href="https://englishtest.duolingo.com/" rel="noopener noreferrer" target="_blank">https://englishtest.duolingo.com/</a></u> </p><p><u><a href="https://blog.duolingo.com/video-call/" rel="noopener noreferrer" target="_blank">https://blog.duolingo.com/video-call/</a></u></p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">bfac920b-2cc2-4268-b425-593e590860b2</guid><itunes:image href="https://artwork.captivate.fm/0b3b20d1-cccf-4a7d-9067-045b75e19b99/EF-CAPTIVATE-3000-x-3000-px-25.png"/><pubDate>Mon, 06 Apr 2026 08:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/bfac920b-2cc2-4268-b425-593e590860b2.mp3" length="30095169" type="audio/mpeg"/><itunes:duration>01:02:42</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>33</itunes:episode><podcast:episode>33</podcast:episode><podcast:season>1</podcast:season></item><item><title>From knowledge to durable skills: rethinking higher education</title><itunes:title>From knowledge to durable skills: rethinking higher education</itunes:title><description><![CDATA[<p>What should universities teach in the age of AI?</p><p>In this episode of <em>Education Futures</em>, Svenia Busson speaks with <strong>Art Markman</strong>, Chief Academic Officer at <strong>Minerva University</strong> and a leading expert in cognitive science, decision-making, and learning.</p><p>With a career spanning academia, research, and applied learning design, Art brings a powerful perspective on how education must evolve — not just to keep up with AI, but to remain relevant in a rapidly changing world.</p><p>In this episode, we explore:</p><p>• Why education should focus on <strong>“far transfer”</strong> — the ability to apply knowledge across contexts</p><p> • The shift 
from teaching content → developing <strong>durable skills</strong> (reasoning, communication, empathy, systems thinking)</p><p> • Why most universities are still structured like they were centuries ago</p><p> • How <strong>Minerva Project</strong> is redesigning higher education from first principles</p><p> • Why <strong>grades and transcripts fail to capture real learning</strong></p><p> • How AI is reshaping the role of assignments, assessment, and feedback</p><p> • Why trying to <strong>ban AI in universities is the wrong approach</strong></p><p> • How AI can be used as a <strong>tutor, feedback engine, and learning accelerator</strong></p><p></p><p>To go further:</p><p>Minerva University: https://www.minerva.edu/</p><p>UT Austin’s Homegrown AI Tutor Platform: https://tech.utexas.edu/news/ut-austin-and-aws-launch-ut-sage-ut-austins-homegrown-ai-tutor-platform</p>]]></description><content:encoded><![CDATA[<p>What should universities teach in the age of AI?</p><p>In this episode of <em>Education Futures</em>, Svenia Busson speaks with <strong>Art Markman</strong>, Chief Academic Officer at <strong>Minerva University</strong> and a leading expert in cognitive science, decision-making, and learning.</p><p>With a career spanning academia, research, and applied learning design, Art brings a powerful perspective on how education must evolve — not just to keep up with AI, but to remain relevant in a rapidly changing world.</p><p>In this episode, we explore:</p><p>• Why education should focus on <strong>“far transfer”</strong> — the ability to apply knowledge across contexts</p><p> • The shift from teaching content → developing <strong>durable skills</strong> (reasoning, communication, empathy, systems thinking)</p><p> • Why most universities are still structured like they were centuries ago</p><p> • How <strong>Minerva Project</strong> is redesigning higher education from first principles</p><p> • Why <strong>grades and transcripts fail to capture real 
learning</strong></p><p> • How AI is reshaping the role of assignments, assessment, and feedback</p><p> • Why trying to <strong>ban AI in universities is the wrong approach</strong></p><p> • How AI can be used as a <strong>tutor, feedback engine, and learning accelerator</strong></p><p></p><p>To go further:</p><p>Minerva University: https://www.minerva.edu/</p><p>UT Austin’s Homegrown AI Tutor Platform: https://tech.utexas.edu/news/ut-austin-and-aws-launch-ut-sage-ut-austins-homegrown-ai-tutor-platform</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">78c8e8d7-5588-4e4c-beb9-0ae5a9638f95</guid><itunes:image href="https://artwork.captivate.fm/8f8409bd-9c32-4fd5-afc1-ba2de4e82f9d/EF-CAPTIVATE-3000-x-3000-px-26.png"/><pubDate>Thu, 02 Apr 2026 06:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/78c8e8d7-5588-4e4c-beb9-0ae5a9638f95.mp3" length="23717947" type="audio/mpeg"/><itunes:duration>49:25</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>32</itunes:episode><podcast:episode>32</podcast:episode><podcast:season>1</podcast:season></item><item><title>The trust crisis in education and the role of AI</title><itunes:title>The trust crisis in education and the role of AI</itunes:title><description><![CDATA[<p>What happens when students start trusting AI more than their teachers?</p><p>In this episode of <em>Education Futures</em>, Svenia Busson speaks with <strong>Mary Burns</strong>, researcher at the <strong>Brookings Institution</strong> and co-author of the report <em>“A New Direction for Students in an AI World: Prosper, Prepare, Protect.”</em></p><p>With over 40 years of experience in education — from classroom teaching to advising ministries of education across more than 100 countries — Mary brings a rare, global perspective on how AI is reshaping learning systems.</p><p>In this 
episode, we explore:</p><p>• The <strong>“web of distrust”</strong> emerging in education systems</p><p>• Why students may begin to trust AI more than teachers</p><p>• The risks of <strong>cognitive, emotional, and social offloading</strong> to AI</p><p>• Why AI feels <strong>more reliable, neutral, and emotionally safe</strong> than humans</p><p>• The rise of <strong>teacher over-reliance on AI</strong> — an under-discussed risk</p><p>• Why <strong>AI literacy must be holistic</strong> (not just tool usage)</p><p>• The dangers of <strong>sycophantic AI systems</strong> and emotional attachment</p><p>• The growing gap between <strong>AI adoption and regulation</strong></p><p>A key framework from the episode:</p><p>Mary co-authored a major report proposing a 3-pillar framework for AI in education:</p><p>👉 <strong>Prosper, Prepare, Protect</strong></p><p>• <strong>Prosper</strong>: Use AI to improve learning and opportunity</p><p>• <strong>Prepare</strong>: Equip students, teachers, and systems to use AI responsibly</p><p>• <strong>Protect</strong>: Safeguard learners from risks (cognitive, emotional, data-related)</p><p>📄 Read the full report:</p><p>Brookings Institution – <em>A New Direction for Students in an AI World</em></p><p>https://www.brookings.edu/wp-content/uploads/2026/01/A-New-Direction-for-Students-in-an-AI-World-FULL-REPORT.pdf</p><p>Distance education for teacher training: modes, models and methods - <a href="http://go.edc.org/07xd" rel="noopener noreferrer" target="_blank">http://go.edc.org/07xd</a></p><p>Read "Eyes Wide Open", the chapter Mary wrote in "Que educação nos exige – hoje − o porvir? Se não agora, quando. Universidade de Lisboa: Centro de investigação e de estudos em belas-artes, p. 
50-69"</p><p>English version here: https://static1.squarespace.com/static/5fac2fdb0da84a28cc76b714/t/67b56f484fc8d51dd5546fa4/1739943752612/Mary+Burns-Eyes+Wide+Open+What+We+Lose+from+Generative+AI+in+Education.pdf</p>]]></description><content:encoded><![CDATA[<p>What happens when students start trusting AI more than their teachers?</p><p>In this episode of <em>Education Futures</em>, Svenia Busson speaks with <strong>Mary Burns</strong>, researcher at the <strong>Brookings Institution</strong> and co-author of the report <em>“A New Direction for Students in an AI World: Prosper, Prepare, Protect.”</em></p><p>With over 40 years of experience in education — from classroom teaching to advising ministries of education across more than 100 countries — Mary brings a rare, global perspective on how AI is reshaping learning systems.</p><p>In this episode, we explore:</p><p>• The <strong>“web of distrust”</strong> emerging in education systems</p><p>• Why students may begin to trust AI more than teachers</p><p>• The risks of <strong>cognitive, emotional, and social offloading</strong> to AI</p><p>• Why AI feels <strong>more reliable, neutral, and emotionally safe</strong> than humans</p><p>• The rise of <strong>teacher over-reliance on AI</strong> — an under-discussed risk</p><p>• Why <strong>AI literacy must be holistic</strong> (not just tool usage)</p><p>• The dangers of <strong>sycophantic AI systems</strong> and emotional attachment</p><p>• The growing gap between <strong>AI adoption and regulation</strong></p><p>A key framework from the episode:</p><p>Mary co-authored a major report proposing a 3-pillar framework for AI in education:</p><p>👉 <strong>Prosper, Prepare, Protect</strong></p><p>• <strong>Prosper</strong>: Use AI to improve learning and opportunity</p><p>• <strong>Prepare</strong>: Equip students, teachers, and systems to use AI responsibly</p><p>• <strong>Protect</strong>: Safeguard learners from risks (cognitive, emotional, data-related)</p><p>📄 Read the 
full report:</p><p>Brookings Institution – <em>A New Direction for Students in an AI World</em></p><p>https://www.brookings.edu/wp-content/uploads/2026/01/A-New-Direction-for-Students-in-an-AI-World-FULL-REPORT.pdf</p><p>Distance education for teacher training: modes, models and methods - <a href="http://go.edc.org/07xd" rel="noopener noreferrer" target="_blank">http://go.edc.org/07xd</a></p><p>Read "Eyes Wide Open", the chapter Mary wrote in "Que educação nos exige – hoje − o porvir? Se não agora, quando. Universidade de Lisboa: Centro de investigação e de estudos em belas-artes, p. 50-69"</p><p>English version here: https://static1.squarespace.com/static/5fac2fdb0da84a28cc76b714/t/67b56f484fc8d51dd5546fa4/1739943752612/Mary+Burns-Eyes+Wide+Open+What+We+Lose+from+Generative+AI+in+Education.pdf</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">fc87a9d1-4f49-4705-b01d-42dcf5319ef2</guid><itunes:image href="https://artwork.captivate.fm/0ad13574-e14f-46be-a87b-3dfb7e16f7d5/EF-CAPTIVATE-3000-x-3000-px-27.png"/><pubDate>Mon, 30 Mar 2026 08:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/fc87a9d1-4f49-4705-b01d-42dcf5319ef2.mp3" length="26421516" type="audio/mpeg"/><itunes:duration>55:03</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>31</itunes:episode><podcast:episode>31</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI in education: a conversation with an 11-year-old</title><itunes:title>AI in education: a conversation with an 11-year-old</itunes:title><description><![CDATA[<p>What if we asked children how to design the future of education?</p><p>In this special episode of <em>Education Futures</em>, Svenia Busson is joined by <strong>Selena Marwaha</strong>, an 11-year-old builder, coder, and speaker who has already presented at global events such as 
<strong>COP28</strong>, <strong>COP29</strong>, and <strong>WISE Summit Qatar</strong>.</p><p>She is joined by <strong>François Taddei</strong>, co-founder of the <strong>Learning Planet Institute</strong>, whose work focuses on collective intelligence, learning ecosystems, and the future of education.</p><p>Together, they explore what education could look like if it were <strong>co-designed with the next generation</strong>.</p><p>In this episode, we explore:</p><p>• Why children should be <strong>actively involved in shaping the future of education</strong></p><p> • Selena’s vision for a school that prioritizes <strong>creativity, mentorship, and problem-solving</strong></p><p> • Why AI should be used as a <strong>co-creator, not a shortcut</strong></p><p> • The importance of <strong>art, curiosity, and human expression</strong> in a tech-driven world</p><p> • How to balance <strong>AI acceleration with human development</strong></p><p> • Why schools should help students navigate <strong>emotions, relationships, and purpose</strong></p><p>Selena also shares her project <strong>Planetary Stories</strong> — a platform where children from around the world can share their dreams, challenges, and visions for the future, creating a collective dataset of youth perspectives to inform decision-making.</p><p>Go further with these resources:</p><ul><li>Selena's initiative "Planetary Stories" will be launched on April 22nd (Earth Day); we will share the link here in the show notes.</li><li>Mara Mintzer TED Talk "How kids can help design cities" https://www.ted.com/talks/mara_mintzer_how_kids_can_help_design_cities</li><li><strong>Alison Gopnik's work: </strong>research on child creativity and learning https://alisongopnik.com/</li><li><strong>Kiran Bir Sethi's work:</strong> creator of the <strong>Design for Change</strong> framework (“Feel, Imagine, Do, Share”) - https://dfcworld.org/SITE</li></ul><br/>]]></description><content:encoded><![CDATA[<p>What if we asked children 
how to design the future of education?</p><p>In this special episode of <em>Education Futures</em>, Svenia Busson is joined by <strong>Selena Marwaha</strong>, an 11-year-old builder, coder, and speaker who has already presented at global events such as <strong>COP28</strong>, <strong>COP29</strong>, and <strong>WISE Summit Qatar</strong>.</p><p>She is joined by <strong>François Taddei</strong>, co-founder of the <strong>Learning Planet Institute</strong>, whose work focuses on collective intelligence, learning ecosystems, and the future of education.</p><p>Together, they explore what education could look like if it were <strong>co-designed with the next generation</strong>.</p><p>In this episode, we explore:</p><p>• Why children should be <strong>actively involved in shaping the future of education</strong></p><p> • Selena’s vision for a school that prioritizes <strong>creativity, mentorship, and problem-solving</strong></p><p> • Why AI should be used as a <strong>co-creator, not a shortcut</strong></p><p> • The importance of <strong>art, curiosity, and human expression</strong> in a tech-driven world</p><p> • How to balance <strong>AI acceleration with human development</strong></p><p> • Why schools should help students navigate <strong>emotions, relationships, and purpose</strong></p><p>Selena also shares her project <strong>Planetary Stories</strong> — a platform where children from around the world can share their dreams, challenges, and visions for the future, creating a collective dataset of youth perspectives to inform decision-making.</p><p>Go further with these resources:</p><ul><li>Selena's initiative "Planetary Stories" will be launched on April 22nd (Earth Day); we will share the link here in the show notes.</li><li>Mara Mintzer TED Talk "How kids can help design cities" https://www.ted.com/talks/mara_mintzer_how_kids_can_help_design_cities</li><li><strong>Alison Gopnik's work: </strong>research on child creativity and learning 
https://alisongopnik.com/</li><li><strong>Kiran Bir Sethi's work:</strong> creator of the <strong>Design for Change</strong> framework (“Feel, Imagine, Do, Share”) - https://dfcworld.org/SITE</li></ul><br/>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">50585850-7e34-4000-a356-8073e1b98aac</guid><itunes:image href="https://artwork.captivate.fm/528d1958-ef03-41d0-8691-ae3055d2e132/EF-CAPTIVATE-3000-x-3000-px-25.png"/><pubDate>Fri, 27 Mar 2026 11:10:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/50585850-7e34-4000-a356-8073e1b98aac.mp3" length="20379289" type="audio/mpeg"/><itunes:duration>42:27</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>30</itunes:episode><podcast:episode>30</podcast:episode><podcast:season>1</podcast:season></item><item><title>Rethinking Edtech: where is the evidence?</title><itunes:title>Rethinking Edtech: where is the evidence?</itunes:title><description><![CDATA[<p>Are we building educational technology faster than we can prove it actually works?</p><p>In this episode of <em>Education Futures</em>, Svenia Busson speaks with <strong>Natalia Kucirkova</strong>, professor at the <strong>University of Stavanger</strong> and co-founder of the <strong>International Centre for EdTech Impact</strong>.</p><p>Natalia has spent years researching how children learn, and how digital tools can (or cannot) support that process. 
Her work sits at the intersection of <strong>learning science, AI, and impact evaluation</strong>.</p><p>As AI tools rapidly enter classrooms, one question becomes critical:</p><p><em>Where is the evidence that these tools actually improve learning?</em></p><p>In this conversation, we explore:</p><p>• Why <strong>engagement is not the same as learning</strong></p><p>• The risk of deploying AI tools <strong>without evidence of impact</strong></p><p>• How EdTech companies can collaborate with researchers to <strong>design better products</strong></p><p>• Why we need to <strong>slow down</strong> before scaling untested technologies</p><p>• The difference between <strong>efficacy, effectiveness, and real-world impact</strong></p><p>• Why traditional evaluation methods (like RCTs) need to evolve in the age of AI</p><p>• How teachers and schools can make <strong>more informed, evidence-based choices</strong></p><p>We also discuss concrete tools and initiatives aiming to bring more transparency to the field:</p><p>Natalia's Centre for Edtech Impact: <a href="https://foreduimpact.org/" rel="noopener noreferrer" target="_blank">https://foreduimpact.org/</a></p><p>AI safety benchmark Natalia has contributed to:</p><p><a href="https://korabench.ai/" rel="noopener noreferrer" target="_blank">https://korabench.ai/</a></p><p>Edtech Certification Natalia is affiliated with &amp; recommends: <a href="https://eduevidence.org/" rel="noopener noreferrer" target="_blank">https://eduevidence.org/</a></p><p>Other Certifications that exist:</p><p><a href="https://www.1edtech.org/" rel="noopener noreferrer" target="_blank">https://www.1edtech.org/</a></p><p><a href="https://edtechimpact.com/" rel="noopener noreferrer" target="_blank">https://edtechimpact.com/</a></p><p><a href="https://iste.org/edtech-product-selection" rel="noopener noreferrer" target="_blank">https://iste.org/edtech-product-selection</a></p><p><a href="https://www.edtechtulna.org/" rel="noopener noreferrer" 
target="_blank">https://www.edtechtulna.org/</a></p><p>Natalia also explains how EdTech evaluation works in practice — from early-stage testing (A/B testing, rapid cycles) to large-scale studies like <strong>randomized controlled trials (RCTs)</strong>.</p><p>The takeaway:</p><p>In education, <strong>good intentions and high engagement are not enough</strong>. If we want technology to truly support learning, we need to <strong>measure it well</strong>.</p>]]></description><content:encoded><![CDATA[<p>Are we building educational technology faster than we can prove it actually works?</p><p>In this episode of <em>Education Futures</em>, Svenia Busson speaks with <strong>Natalia Kucirkova</strong>, professor at the <strong>University of Stavanger</strong> and co-founder of the <strong>International Centre for EdTech Impact</strong>.</p><p>Natalia has spent years researching how children learn, and how digital tools can (or cannot) support that process. Her work sits at the intersection of <strong>learning science, AI, and impact evaluation</strong>.</p><p>As AI tools rapidly enter classrooms, one question becomes critical:</p><p><em>Where is the evidence that these tools actually improve learning?</em></p><p>In this conversation, we explore:</p><p>• Why <strong>engagement is not the same as learning</strong></p><p>• The risk of deploying AI tools <strong>without evidence of impact</strong></p><p>• How EdTech companies can collaborate with researchers to <strong>design better products</strong></p><p>• Why we need to <strong>slow down</strong> before scaling untested technologies</p><p>• The difference between <strong>efficacy, effectiveness, and real-world impact</strong></p><p>• Why traditional evaluation methods (like RCTs) need to evolve in the age of AI</p><p>• How teachers and schools can make <strong>more informed, evidence-based choices</strong></p><p>We also discuss concrete tools and initiatives aiming to bring more transparency to the field:</p><p>Natalia's 
Centre for Edtech Impact: <a href="https://foreduimpact.org/" rel="noopener noreferrer" target="_blank">https://foreduimpact.org/</a></p><p>AI safety benchmark Natalia has contributed to:</p><p><a href="https://korabench.ai/" rel="noopener noreferrer" target="_blank">https://korabench.ai/</a></p><p>Edtech Certification Natalia is affiliated with &amp; recommends: <a href="https://eduevidence.org/" rel="noopener noreferrer" target="_blank">https://eduevidence.org/</a></p><p>Other Certifications that exist:</p><p><a href="https://www.1edtech.org/" rel="noopener noreferrer" target="_blank">https://www.1edtech.org/</a></p><p><a href="https://edtechimpact.com/" rel="noopener noreferrer" target="_blank">https://edtechimpact.com/</a></p><p><a href="https://iste.org/edtech-product-selection" rel="noopener noreferrer" target="_blank">https://iste.org/edtech-product-selection</a></p><p><a href="https://www.edtechtulna.org/" rel="noopener noreferrer" target="_blank">https://www.edtechtulna.org/</a></p><p>Natalia also explains how EdTech evaluation works in practice — from early-stage testing (A/B testing, rapid cycles) to large-scale studies like <strong>randomized controlled trials (RCTs)</strong>.</p><p>The takeaway:</p><p>In education, <strong>good intentions and high engagement are not enough</strong>. 
If we want technology to truly support learning, we need to <strong>measure it well</strong>.</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">5b7d0242-3d69-4c45-abc2-c92200b2e279</guid><itunes:image href="https://artwork.captivate.fm/85709b77-b608-4a9e-b9c5-9414d59e3825/EF-CAPTIVATE-3000-x-3000-px-24.png"/><pubDate>Mon, 23 Mar 2026 21:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/5b7d0242-3d69-4c45-abc2-c92200b2e279.mp3" length="19200853" type="audio/mpeg"/><itunes:duration>40:00</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>29</itunes:episode><podcast:episode>29</podcast:episode><podcast:season>1</podcast:season></item><item><title>What schools must protect in the age of AI</title><itunes:title>What schools must protect in the age of AI</itunes:title><description><![CDATA[<p>What should we protect in education, as AI transforms how we learn, think, and work?</p><p>In this episode of <em>Education Futures</em>, Svenia Busson speaks with <strong>Nick Krichevsky</strong>, a high school teacher and Head of Digitalization at the <strong>German International School Johannesburg</strong> in South Africa.</p><p>With over 15 years of teaching experience across Germany and South Africa, Nick brings a grounded, classroom-based perspective to one of the most pressing questions of our time:</p><p><strong>What should education hold onto, and what should it let go of, in the age of AI?</strong></p><p>In this conversation, we explore:</p><p>• Why AI should <strong>serve a clear educational purpose</strong>, not drive it</p><p>• The limits of a <strong>one-size-fits-all education system</strong> across different contexts</p><p>• What Nick calls the <strong>“promise — and misspoken promise — of education”</strong></p><p>• Why many students still lack <strong>practical skills and 
opportunities</strong>, despite access to schooling</p><p>• The importance of <strong>effort, friction, and cognitive rigor</strong> in real learning</p><p>• How AI can support education by <strong>freeing up time and resources</strong>, not replacing thinking</p><p>• Why students must develop <strong>ownership of their learning process</strong></p><p>• The growing concern among students about <strong>AI, self-efficacy, and their future</strong></p><p>Nick also shares a powerful vision for the <strong>school of the future</strong>:</p><p>Not one centered on technology, but one built around <strong>human relationships, social learning, mentorship, and responsibility</strong>.</p>]]></description><content:encoded><![CDATA[<p>What should we protect in education, as AI transforms how we learn, think, and work?</p><p>In this episode of <em>Education Futures</em>, Svenia Busson speaks with <strong>Nick Krichevsky</strong>, a high school teacher and Head of Digitalization at the <strong>German International School Johannesburg</strong> in South Africa.</p><p>With over 15 years of teaching experience across Germany and South Africa, Nick brings a grounded, classroom-based perspective to one of the most pressing questions of our time:</p><p><strong>What should education hold onto, and what should it let go of, in the age of AI?</strong></p><p>In this conversation, we explore:</p><p>• Why AI should <strong>serve a clear educational purpose</strong>, not drive it</p><p>• The limits of a <strong>one-size-fits-all education system</strong> across different contexts</p><p>• What Nick calls the <strong>“promise — and misspoken promise — of education”</strong></p><p>• Why many students still lack <strong>practical skills and opportunities</strong>, despite access to schooling</p><p>• The importance of <strong>effort, friction, and cognitive rigor</strong> in real learning</p><p>• How AI can support education by <strong>freeing up time and resources</strong>, not replacing 
thinking</p><p>• Why students must develop <strong>ownership of their learning process</strong></p><p>• The growing concern among students about <strong>AI, self-efficacy, and their future</strong></p><p>Nick also shares a powerful vision for the <strong>school of the future</strong>:</p><p>Not one centered on technology, but one built around <strong>human relationships, social learning, mentorship, and responsibility</strong>.</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">8084364a-d788-49f1-8162-7822f4fdd608</guid><itunes:image href="https://artwork.captivate.fm/346e2e6f-55b3-483d-aece-57fef63b522f/EF-CAPTIVATE-3000-x-3000-px-24.png"/><pubDate>Thu, 19 Mar 2026 10:20:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/8084364a-d788-49f1-8162-7822f4fdd608.mp3" length="21847789" type="audio/mpeg"/><itunes:duration>45:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>28</itunes:episode><podcast:episode>28</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="What schools must protect in the age of AI (with Nick Krichevsky)"><podcast:source uri="https://youtu.be/fv-x7l8LI-M"/></podcast:alternateEnclosure></item><item><title>Rethinking university in the age of AI</title><itunes:title>Rethinking university in the age of AI</itunes:title><description><![CDATA[<p> In this episode of <strong>Education Futures</strong>, Svenia Busson speaks with education entrepreneur <strong>Ed Fidoe</strong>, founder of <strong>London Interdisciplinary School</strong> and co-founder of <strong>School 21</strong> in London.</p><p>Ed has spent the last decade building new kinds of educational institutions from the ground up. 
His work challenges one of the core assumptions of higher education: that students should specialize in a single discipline.</p><p>Instead, the London Interdisciplinary School is built around a radically different idea: <strong>teaching students to tackle complex real-world problems using knowledge from multiple disciplines.</strong></p><p>In this conversation, we explore:</p><p>• Why universities are facing a <strong>structural crisis</strong></p><p>• Why the traditional <strong>single-discipline degree may no longer make sense</strong></p><p>• How AI is reshaping what students need to learn</p><p>• Why universities should teach students <strong>how to identify the right problems to solve</strong></p><p>• Why innovation in education requires <strong>experimentation and new institutions</strong></p><p>We also discuss the <strong>Education Futures Master’s</strong> at LIS, a new interdisciplinary program designed for educators, learning leaders, and anyone interested in how education systems must evolve in an AI-driven world.</p><p>Ed also shares lessons from <strong>School 21</strong>, including the development of <strong>oracy education</strong>: teaching students how to think and communicate effectively through speech.</p><p>We mentioned:</p><p>https://www.lis.ac.uk/</p><p>https://school21.org.uk/</p><p>https://voice21.org/</p>]]></description><content:encoded><![CDATA[<p> In this episode of <strong>Education Futures</strong>, Svenia Busson speaks with education entrepreneur <strong>Ed Fidoe</strong>, founder of <strong>London Interdisciplinary School</strong> and co-founder of <strong>School 21</strong> in London.</p><p>Ed has spent the last decade building new kinds of educational institutions from the ground up. 
His work challenges one of the core assumptions of higher education: that students should specialize in a single discipline.</p><p>Instead, the London Interdisciplinary School is built around a radically different idea: <strong>teaching students to tackle complex real-world problems using knowledge from multiple disciplines.</strong></p><p>In this conversation, we explore:</p><p>• Why universities are facing a <strong>structural crisis</strong></p><p>• Why the traditional <strong>single-discipline degree may no longer make sense</strong></p><p>• How AI is reshaping what students need to learn</p><p>• Why universities should teach students <strong>how to identify the right problems to solve</strong></p><p>• Why innovation in education requires <strong>experimentation and new institutions</strong></p><p>We also discuss the <strong>Education Futures Master’s</strong> at LIS, a new interdisciplinary program designed for educators, learning leaders, and anyone interested in how education systems must evolve in an AI-driven world.</p><p>Ed also shares lessons from <strong>School 21</strong>, including the development of <strong>oracy education</strong>: teaching students how to think and communicate effectively through speech.</p><p>We mentioned:</p><p>https://www.lis.ac.uk/</p><p>https://school21.org.uk/</p><p>https://voice21.org/</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">898d91fc-33d8-4637-806e-e45b68e3a3df</guid><itunes:image href="https://artwork.captivate.fm/ff5a7676-6969-4366-b774-3514509e890a/EF-CAPTIVATE-3000-x-3000-px-22.png"/><pubDate>Mon, 16 Mar 2026 12:50:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/898d91fc-33d8-4637-806e-e45b68e3a3df.mp3" length="20308445" 
type="audio/mpeg"/><itunes:duration>42:19</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>27</itunes:episode><podcast:episode>27</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="Rethinking university in the age of AI (with Ed Fidoe)"><podcast:source uri="https://youtu.be/EFLXehInQm0"/></podcast:alternateEnclosure></item><item><title>Peace building and education in the age of AI</title><itunes:title>Peace building and education in the age of AI</itunes:title><description><![CDATA[<p>In this episode, Svenia Busson speaks with <strong>Guila Clara Kessous</strong>, a UNESCO Artist for Peace and Harvard teacher, about the intersection of art, education, and peace-building. They explore how performing arts and literature can heal trauma and how "diplomatic entrepreneurship" can help the next generation navigate a world shaped by AI and social media.</p><p><strong>Key Topics </strong></p><ul><li><strong>Guila’s Journey</strong>: From performing arts conservatory to a PhD with Nobel laureate Elie Wiesel.</li><li><strong>Art Therapy &amp; Bibliotherapy</strong>: Understanding how drama and reading (recognized by the WHO) can increase serotonin and well-being.</li><li><strong>Education in the Age of AI</strong>: Why we should teach kids how to "prompt" like the game <em>Jeopardy</em> to find neutral information and overcome biases.</li><li><strong>Diplomatic Entrepreneurship</strong>: Using culture and art to reconcile communities in conflict, such as through Israeli-Palestinian youth summer camps.</li><li><strong>Non-Violent Communication (NVC)</strong>: The importance of teaching children to "agree to disagree" and express views respectfully.</li><li><strong>The School of the Future</strong>: A vision of an open school in nature that connects the "heart, head, and body".</li></ul><br/><p><strong>Resources 
Mentioned</strong></p><ul><li><strong>World Art Day International Forum</strong>: An annual event at UNESCO every April 15th focusing on arts, health, and activism. (Website: <a href="http://www.worldartdayforum.com/" rel="noopener noreferrer" target="_blank">worldartdayforum.com</a>)</li><li><strong>Faber and Mazlish Technique</strong>: Practical methodology for communication between adults and children ("How to Talk So Kids Will Listen").</li><li><strong>Femina Vox</strong>: An international forum at UNESCO dedicated to Women’s Rights.</li><li><strong>Movements for Peace</strong>: "Women Wage Peace" and "Women of the Sun," including the "Prayer of the Mothers" song.</li><li><strong>Bibliotherapy</strong>: Mention of the power of reading as recognized by the World Health Organization (WHO).</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In this episode, Svenia Busson speaks with <strong>Guila Clara Kessous</strong>, a UNESCO Artist for Peace and Harvard teacher, about the intersection of art, education, and peace-building. 
They explore how performing arts and literature can heal trauma and how "diplomatic entrepreneurship" can help the next generation navigate a world shaped by AI and social media.</p><p><strong>Key Topics </strong></p><ul><li><strong>Guila’s Journey</strong>: From performing arts conservatory to a PhD with Nobel laureate Elie Wiesel.</li><li><strong>Art Therapy &amp; Bibliotherapy</strong>: Understanding how drama and reading (recognized by the WHO) can increase serotonin and well-being.</li><li><strong>Education in the Age of AI</strong>: Why we should teach kids how to "prompt" like the game <em>Jeopardy</em> to find neutral information and overcome biases.</li><li><strong>Diplomatic Entrepreneurship</strong>: Using culture and art to reconcile communities in conflict, such as through Israeli-Palestinian youth summer camps.</li><li><strong>Non-Violent Communication (NVC)</strong>: The importance of teaching children to "agree to disagree" and express views respectfully.</li><li><strong>The School of the Future</strong>: A vision of an open school in nature that connects the "heart, head, and body".</li></ul><br/><p><strong>Resources Mentioned</strong></p><ul><li><strong>World Art Day International Forum</strong>: An annual event at UNESCO every April 15th focusing on arts, health, and activism. 
(Website: <a href="http://www.worldartdayforum.com/" rel="noopener noreferrer" target="_blank">worldartdayforum.com</a>)</li><li><strong>Faber and Mazlish Technique</strong>: Practical methodology for communication between adults and children ("How to Talk So Kids Will Listen").</li><li><strong>Femina Vox</strong>: An international forum at UNESCO dedicated to Women’s Rights.</li><li><strong>Movements for Peace</strong>: "Women Wage Peace" and "Women of the Sun," including the "Prayer of the Mothers" song.</li><li><strong>Bibliotherapy</strong>: Mention of the power of reading as recognized by the World Health Organization (WHO).</li></ul><br/>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">edbd0e52-c43e-4a16-b05a-ca53ce18b774</guid><itunes:image href="https://artwork.captivate.fm/35e4924c-217f-4bbd-a6f4-4bac8bd6c6d1/EF-CAPTIVATE-3000-x-3000-px-21.png"/><pubDate>Thu, 12 Mar 2026 11:15:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/edbd0e52-c43e-4a16-b05a-ca53ce18b774.mp3" length="16743253" type="audio/mpeg"/><itunes:duration>34:53</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>26</itunes:episode><podcast:episode>26</podcast:episode><podcast:season>1</podcast:season></item><item><title>How Khan Academy is designing AI for learning</title><itunes:title>How Khan Academy is designing AI for learning</itunes:title><description><![CDATA[<p>In this episode of <em>Education Futures</em>, we speak with <strong>Kristen DiCerbo</strong>, Chief Learning Officer at <strong>Khan Academy</strong>, where she leads the teams responsible for content, product design, assessment, and learning science.</p><p>With a PhD in educational psychology, Kristen brings a rare perspective to the AI conversation: <strong>learning science first, technology second.</strong></p><p>We explore how Khan Academy is building 
<strong>Khanmigo</strong>, its AI tutor and teaching assistant, and what it takes to design AI tools that support real learning rather than shortcuts.</p><p>In this conversation, we discuss:</p><ul><li>Why <strong>screen time is a poor proxy for learning</strong></li><li>How AI tutors can support <strong>practice and feedback at scale</strong></li><li>Why <strong>foundational knowledge still matters in the age of AI</strong></li><li>The growing concern around <strong>cognitive offloading</strong> and students delegating their thinking to machines</li><li>How Khan Academy designed <strong>guardrails and safety mechanisms</strong> for AI used by children</li><li>The tension between <strong>gamification, motivation, and real learning</strong></li><li>Why <strong>human relationships with teachers remain the strongest driver of learning</strong></li><li>What the <strong>school of the future</strong> could look like, combining technology with project-based learning</li></ul><br/><p>Kristen also shares how Khan Academy applies a <strong>risk-management approach to responsible AI</strong>, identifying potential harms early and designing safeguards directly into their systems. 
(more on this: https://blog.khanacademy.org/khan-academys-framework-for-responsible-ai-in-education/)</p><p>The takeaway: AI may transform education, but <strong>learning will always require effort, curiosity, and human guidance.</strong></p><p>Read this to go further:</p><p>National Study in Top Journal Finds Khan Academy Learning Gains After Accounting for Key Unmeasured Factors: https://blog.khanacademy.org/national-study-in-top-journal-finds-khan-academy-learning-gains-after-accounting-for-key-unmeasured-factors/</p>]]></description><content:encoded><![CDATA[<p>In this episode of <em>Education Futures</em>, we speak with <strong>Kristen DiCerbo</strong>, Chief Learning Officer at <strong>Khan Academy</strong>, where she leads the teams responsible for content, product design, assessment, and learning science.</p><p>With a PhD in educational psychology, Kristen brings a rare perspective to the AI conversation: <strong>learning science first, technology second.</strong></p><p>We explore how Khan Academy is building 
learning</li></ul><br/><p>Kristen also shares how Khan Academy applies a <strong>risk-management approach to responsible AI</strong>, identifying potential harms early and designing safeguards directly into their systems. (more on this: https://blog.khanacademy.org/khan-academys-framework-for-responsible-ai-in-education/)</p><p>The takeaway: AI may transform education, but <strong>learning will always require effort, curiosity, and human guidance.</strong></p><p>Read this to go further:</p><p>National Study in Top Journal Finds Khan Academy Learning Gains After Accounting for Key Unmeasured Factors: https://blog.khanacademy.org/national-study-in-top-journal-finds-khan-academy-learning-gains-after-accounting-for-key-unmeasured-factors/</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">747163a5-d5b3-408f-88c7-29d9738877f8</guid><itunes:image href="https://artwork.captivate.fm/43f29768-9a80-496c-b51a-b31f1900807e/EF-CAPTIVATE-3000-x-3000-px-20.png"/><pubDate>Mon, 09 Mar 2026 19:35:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/747163a5-d5b3-408f-88c7-29d9738877f8.mp3" length="24492635" type="audio/mpeg"/><itunes:duration>51:02</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>25</itunes:episode><podcast:episode>25</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="How Khan Academy is designing AI for learning"><podcast:source uri="https://youtu.be/JHev3l4LPjc"/></podcast:alternateEnclosure></item><item><title>AI, literacy, and the global learning gap</title><itunes:title>AI, literacy, and the global learning gap</itunes:title><description><![CDATA[<p>By 2050, Sub-Saharan Africa will be home to nearly 1 billion people under 18.</p><p>Today, 90% of 10-year-olds in the region cannot read a simple paragraph (according to the World 
Bank).</p><p>What happens when artificial intelligence accelerates, but foundational literacy remains out of reach for millions of children?</p><p>In this episode of <em>Education Futures</em>, Svenia Busson speaks with <strong>Paul Atherton</strong>, founder of <strong>Fab AI</strong>, about the future of AI in low- and middle-income countries, and whether it will close or widen the global learning gap.</p><p>Paul’s mission is to ensure that the world’s best technologies serve children who lack access to foundational literacy and quality schooling.</p><p>We explore:</p><ul><li>Why foundational literacy is the non-negotiable starting point</li><li>How AI could help leapfrog infrastructure gaps in Sub-Saharan Africa</li><li>Why most AI funding focuses on short-term pilots instead of long-term system architecture</li><li>The risk that high-income countries experience exponential productivity gains while others fall further behind</li><li>How rapid, decision-ready RCTs could modernize evidence in edtech</li><li>The difference between AI as autopilot vs co-pilot in learning</li><li>Why friction and effort remain essential to deep learning</li><li>Why Paul worries more about today’s 15-year-olds than 5-year-olds</li></ul><br/><p>This conversation is about infrastructure, inequality, and the billion young people whose future will shape the global economy, and whose literacy will determine whether AI becomes a tool of opportunity or a force that widens the gap.</p><p>Try Fab AI's new web app, which can help evaluate the quality of foundational literacy and numeracy materials for low- and middle-income countries: https://fab-content-curation.web.app/</p><p>Read the latest Fab-AI Research Paper: "Context counts: Measuring how AI reflects local realities in education": https://www.fab-ai.org/initiatives/ai-for-education/edtech-quality/resources/research-paper/measuring-how-ai-reflects-local-realities</p><p>Subscribe to Paul's Substack: 
https://paulfabai.substack.com/</p>]]></description><content:encoded><![CDATA[<p>By 2050, Sub-Saharan Africa will be home to nearly 1 billion people under 18.</p><p>Today, 90% of 10-year-olds in the region cannot read a simple paragraph (according to the World Bank).</p><p>What happens when artificial intelligence accelerates, but foundational literacy remains out of reach for millions of children?</p><p>In this episode of <em>Education Futures</em>, Svenia Busson speaks with <strong>Paul Atherton</strong>, founder of <strong>Fab AI</strong>, about the future of AI in low- and middle-income countries, and whether it will close or widen the global learning gap.</p><p>Paul’s mission is to ensure that the world’s best technologies serve children who lack access to foundational literacy and quality schooling.</p><p>We explore:</p><ul><li>Why foundational literacy is the non-negotiable starting point</li><li>How AI could help leapfrog infrastructure gaps in Sub-Saharan Africa</li><li>Why most AI funding focuses on short-term pilots instead of long-term system architecture</li><li>The risk that high-income countries experience exponential productivity gains while others fall further behind</li><li>How rapid, decision-ready RCTs could modernize evidence in edtech</li><li>The difference between AI as autopilot vs co-pilot in learning</li><li>Why friction and effort remain essential to deep learning</li><li>Why Paul worries more about today’s 15-year-olds than 5-year-olds</li></ul><br/><p>This conversation is about infrastructure, inequality, and the billion young people whose future will shape the global economy, and whose literacy will determine whether AI becomes a tool of opportunity or a force that widens the gap.</p><p>Try Fab AI's new web app, which can help evaluate the quality of foundational literacy and numeracy materials for low- and middle-income countries: https://fab-content-curation.web.app/</p><p>Read the latest Fab-AI Research Paper: "Context counts: 
Measuring how AI reflects local realities in education" : https://www.fab-ai.org/initiatives/ai-for-education/edtech-quality/resources/research-paper/measuring-how-ai-reflects-local-realities</p><p>Subscribe to Paul's Substack: https://paulfabai.substack.com/</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">24f74985-822a-4345-a480-f575d9b459e8</guid><itunes:image href="https://artwork.captivate.fm/cca4abc0-df07-44fe-8c0f-b9007f3a4c2d/EF-CAPTIVATE-3000-x-3000-px-20.png"/><pubDate>Wed, 04 Mar 2026 18:30:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/24f74985-822a-4345-a480-f575d9b459e8.mp3" length="19790594" type="audio/mpeg"/><itunes:duration>41:14</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>24</itunes:episode><podcast:episode>24</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="AI, literacy, and the global learning gap (with Paul Atherton)"><podcast:source uri="https://youtu.be/6roKthhjPD4"/></podcast:alternateEnclosure></item><item><title>AI governance and child safety in education</title><itunes:title>AI governance and child safety in education</itunes:title><description><![CDATA[<p>AI is already in classrooms. The real question is: <strong>who is responsible for governing it?</strong></p><p>In this episode of <em>Education Futures</em>, Svenia Busson sits down with <strong>Clara Hawking</strong>, founder of Kompass Education, to explore what AI governance actually means for schools and educators.</p><p>As governments roll out new regulations, including the EU AI Act, schools are facing urgent questions around compliance, safety, privacy, and responsibility. But governance is not just about following the law. 
It is about building trust, protecting children, and making intentional decisions about how AI enters learning environments.</p><p>In our conversation, we explore:</p><ul><li>What “AI governance” really means in practice</li><li>How the EU AI Act impacts schools and educational organizations</li><li>Why individual teacher subscriptions to AI tools can create legal and safety risks</li><li>The difference between AI literacy and AI safety</li><li>Why students are hesitant to admit how they use AI</li><li>The growing cognitive dependency concerns for Generation Alpha</li><li>And what a more human-centered, collaborative school of 2040 could look like</li></ul><br/><p>Clara makes a compelling case: AI adoption without governance is not innovation, it is risk.</p><p>For listeners interested in evaluating AI tools from a child safety perspective, we also mention <strong>Kora (https://korabench.ai/)</strong>, a new initiative that benchmarks large language models on child safety criteria.</p><p>Follow Clara on LinkedIn for more content around these topics: https://www.linkedin.com/in/clara-hawking-ba9123149/</p>]]></description><content:encoded><![CDATA[<p>AI is already in classrooms. The real question is: <strong>who is responsible for governing it?</strong></p><p>In this episode of <em>Education Futures</em>, Svenia Busson sits down with <strong>Clara Hawking</strong>, founder of Kompass Education, to explore what AI governance actually means for schools and educators.</p><p>As governments roll out new regulations, including the EU AI Act, schools are facing urgent questions around compliance, safety, privacy, and responsibility. But governance is not just about following the law. 
It is about building trust, protecting children, and making intentional decisions about how AI enters learning environments.</p><p>In our conversation, we explore:</p><ul><li>What “AI governance” really means in practice</li><li>How the EU AI Act impacts schools and educational organizations</li><li>Why individual teacher subscriptions to AI tools can create legal and safety risks</li><li>The difference between AI literacy and AI safety</li><li>Why students are hesitant to admit how they use AI</li><li>The growing cognitive dependency concerns for Generation Alpha</li><li>And what a more human-centered, collaborative school of 2040 could look like</li></ul><br/><p>Clara makes a compelling case: AI adoption without governance is not innovation, it is risk.</p><p>For listeners interested in evaluating AI tools from a child safety perspective, we also mention <strong>Kora (https://korabench.ai/)</strong>, a new initiative that benchmarks large language models on child safety criteria.</p><p>Follow Clara on LinkedIn for more content around these topics: https://www.linkedin.com/in/clara-hawking-ba9123149/</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">1c57bfab-4c9e-4b64-9163-e4496088e82e</guid><itunes:image href="https://artwork.captivate.fm/2754e565-583f-4f07-97d2-36fe87035c8a/EF-CAPTIVATE-3000-x-3000-px-20.png"/><pubDate>Mon, 02 Mar 2026 16:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/1c57bfab-4c9e-4b64-9163-e4496088e82e.mp3" length="21860537" type="audio/mpeg"/><itunes:duration>45:33</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>23</itunes:episode><podcast:episode>23</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="AI governance and child safety in education (with Clara Hawking)"><podcast:source 
uri="https://youtu.be/FRm2Yt4mYrI"/></podcast:alternateEnclosure></item><item><title>How AI exposes inequities in modern schooling</title><itunes:title>How AI exposes inequities in modern schooling</itunes:title><description><![CDATA[<p>In this episode of <em>Education Futures</em>, host <strong>Svenia Busson</strong> sits down with <strong>Ken Shelton</strong>, a 20-year teaching veteran and global thought leader in educational technology. Drawing from his experience working with schools in over <strong>50 countries</strong>, Ken challenges the "all gas, no brakes" approach to AI, as well as the knee-jerk "ban and block" mentality seen in many governments.</p><p><strong>Key topics discussed in this episode include:</strong></p><ul><li><strong>The Digital Equity "Quilt"</strong>: Why digital equity is about much more than just providing a device, it’s about broadband infrastructure, "digital redlining," and the quality of the platforms being used.</li><li><strong>The Problem with Efficiency</strong>: A critique of the AI marketing trend that focuses on "grading faster" at the expense of pedagogical efficacy and meaningful feedback.</li><li><strong>AI as a Truth-Teller</strong>: How AI hasn't necessarily created "cheating" problems, but has instead highlighted ineffective and antiquated forms of assessment like multiple-choice tests.</li><li><strong>Practical Pedagogy</strong>: Ken's "Golden Rule" for AI (More Context = Better Output) and how teachers can use <strong>Project Zero</strong> thinking routines to "AI-proof" learning.</li><li><strong>Confronting Bias</strong>: Engaging activities to help students and teachers identify the human-generated biases embedded in image generators and LLMs.</li><li><strong>The School of 2040</strong>: Ken’s vision for a future-ready education system that prioritizes lifelong intellectual curiosity, multilingualism, and media literacy over static curriculum.</li></ul><br/><p>Ken reminds us that while the platforms may change, the 
skill sets required to navigate them (critical thinking, ethical leadership, and human-centered design) are evergreen.</p>]]></description><content:encoded><![CDATA[<p>In this episode of <em>Education Futures</em>, host <strong>Svenia Busson</strong> sits down with <strong>Ken Shelton</strong>, a 20-year teaching veteran and global thought leader in educational technology. Drawing from his experience working with schools in over <strong>50 countries</strong>, Ken challenges the "all gas, no brakes" approach to AI, as well as the knee-jerk "ban and block" mentality seen in many governments.</p><p><strong>Key topics discussed in this episode include:</strong></p><ul><li><strong>The Digital Equity "Quilt"</strong>: Why digital equity is about much more than just providing a device, it’s about broadband infrastructure, "digital redlining," and the quality of the platforms being used.</li><li><strong>The Problem with Efficiency</strong>: A critique of the AI marketing trend that focuses on "grading faster" at the expense of pedagogical efficacy and meaningful feedback.</li><li><strong>AI as a Truth-Teller</strong>: How AI hasn't necessarily created "cheating" problems, but has instead highlighted ineffective and antiquated forms of assessment like multiple-choice tests.</li><li><strong>Practical Pedagogy</strong>: Ken's "Golden Rule" for AI (More Context = Better Output) and how teachers can use <strong>Project Zero</strong> thinking routines to "AI-proof" learning.</li><li><strong>Confronting Bias</strong>: Engaging activities to help students and teachers identify the human-generated biases embedded in image generators and LLMs.</li><li><strong>The School of 2040</strong>: Ken’s vision for a future-ready education system that prioritizes lifelong intellectual curiosity, multilingualism, and media literacy over static curriculum.</li></ul><br/><p>Ken reminds us that while the platforms may change, the skill sets required to navigate them (critical thinking, ethical 
leadership, and human-centered design) are evergreen.</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">a4ad90b5-6093-4eee-bd04-41ce9fae8d7f</guid><itunes:image href="https://artwork.captivate.fm/79e758a2-4a77-4d6a-b821-39d8f6ea6531/EF-CAPTIVATE-3000-x-3000-px-19.png"/><pubDate>Mon, 23 Feb 2026 16:15:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/a4ad90b5-6093-4eee-bd04-41ce9fae8d7f.mp3" length="22159587" type="audio/mpeg"/><itunes:duration>46:10</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>22</itunes:episode><podcast:episode>22</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="How AI exposes inequities in modern schooling (with Ken Shelton)"><podcast:source uri="https://youtu.be/3WybGBmTAcc"/></podcast:alternateEnclosure></item><item><title>Insights into Estonia’s OpenAI rollout</title><itunes:title>Insights into Estonia’s OpenAI rollout</itunes:title><description><![CDATA[<p>In this episode of <strong>Education Futures</strong>, we explore the <strong>"AI Leap"</strong>—Estonia's ambitious national strategy to roll out a specialized version of OpenAI across the entire country. Our guest, <strong>Jaan</strong>, is a neuroscientist and psychologist at the <strong>University of Tartu</strong> who is leading the scientific team overseeing and evaluating this project.</p><p>Unlike standard implementations, Estonia has collaborated with OpenAI to create a custom, pedagogically driven tutor that prioritizes active learning. 
Jaan explains why this system is designed to <strong>stop and ask questions</strong> rather than simply providing answers, ensuring that students continue to construct knowledge in their own brains.</p><p><strong>In our conversation, we explore:</strong></p><ul><li><strong>The Pedagogical Tutor:</strong> Why Estonia rejected the standard ChatGPT EDU in favor of a model that mimics a Socratic tutor.</li><li><strong>Brain Literacy vs. AI Literacy:</strong> The importance of teaching students why mental effort and friction are mandatory for long-term learning.</li><li><strong>Teacher Autonomy:</strong> How Estonia’s culture of trust allows teachers to lead the AI transition without rigid, top-down supervision.</li><li><strong>Measuring What Matters:</strong> Why the research team is moving beyond "overrated" grades to track more granular aspects of the learning process.</li><li><strong>The "Safe to Fail" Environment:</strong> How AI can scale the ability for students to make mistakes and receive gentle, honest feedback.</li></ul><br/><p>More about the AI Leap strategy: <u><a href="https://e-estonia.com/ai-leap-2025-estonia-sets-ai-standard-in-education/" rel="noopener noreferrer" target="_blank">https://e-estonia.com/ai-leap-2025-estonia-sets-ai-standard-in-education/</a></u></p><p>A great interview of Jaan on the OpenAI rollout: <u><a href="https://tihupe.ee/en/ai-researcher-thinking-for-oneself-is-the-only-way-to-be-free-and-in-control/" rel="noopener noreferrer" target="_blank">https://tihupe.ee/en/ai-researcher-thinking-for-oneself-is-the-only-way-to-be-free-and-in-control/</a></u></p>]]></description><content:encoded><![CDATA[<p>In this episode of <strong>Education Futures</strong>, we explore the <strong>"AI 
Leap"</strong>—Estonia's ambitious national strategy to roll out a specialized version of OpenAI across the entire country. Our guest, <strong>Jaan</strong>, is a neuroscientist and psychologist at the <strong>University of Tartu</strong> who is leading the scientific team overseeing and evaluating this project.</p><p>Unlike standard implementations, Estonia has collaborated with OpenAI to create a custom, pedagogically driven tutor that prioritizes active learning. Jaan explains why this system is designed to <strong>stop and ask questions</strong> rather than simply providing answers, ensuring that students continue to construct knowledge in their own brains.</p><p><strong>In our conversation, we explore:</strong></p><ul><li><strong>The Pedagogical Tutor:</strong> Why Estonia rejected the standard ChatGPT EDU in favor of a model that mimics a Socratic tutor.</li><li><strong>Brain Literacy vs. AI Literacy:</strong> The importance of teaching students why mental effort and friction are mandatory for long-term learning.</li><li><strong>Teacher Autonomy:</strong> How Estonia’s culture of trust allows teachers to lead the AI transition without rigid, top-down supervision.</li><li><strong>Measuring What Matters:</strong> Why the research team is moving beyond "overrated" grades to track more granular aspects of the learning process.</li><li><strong>The "Safe to Fail" Environment:</strong> How AI can scale the ability for students to make mistakes and receive gentle, honest feedback.</li></ul><br/><p>More about the AI Leap strategy: <u><a href="https://e-estonia.com/ai-leap-2025-estonia-sets-ai-standard-in-education/" rel="noopener noreferrer" target="_blank">https://e-estonia.com/ai-leap-2025-estonia-sets-ai-standard-in-education/</a></u></p><p>A great interview of Jaan on the OpenAI rollout: <u><a 
href="https://tihupe.ee/en/ai-researcher-thinking-for-oneself-is-the-only-way-to-be-free-and-in-control/" rel="noopener noreferrer" target="_blank">https://tihupe.ee/en/ai-researcher-thinking-for-oneself-is-the-only-way-to-be-free-and-in-control/</a></u></p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">29665594-d339-4371-b0e4-51ce648b0b64</guid><itunes:image href="https://artwork.captivate.fm/2f63e434-4a51-4fc2-aa28-8c0b47231df2/EF-CAPTIVATE-3000-x-3000-px-18.png"/><pubDate>Thu, 19 Feb 2026 07:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/29665594-d339-4371-b0e4-51ce648b0b64.mp3" length="23198215" type="audio/mpeg"/><itunes:duration>48:20</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>21</itunes:episode><podcast:episode>21</podcast:episode><podcast:season>1</podcast:season></item><item><title>Using AI as a motivational translator</title><itunes:title>Using AI as a motivational translator</itunes:title><description><![CDATA[<p>In this episode of <strong>Education Futures</strong>, host Svenia Busson sits down with <strong>Alex Sarlin</strong>, the founder of <em>EdTech Insiders</em> and Global Edtech lead at the <strong>ASU GSV Summit</strong>. With over 15 years of experience at giants like Coursera and Skillshare, Alex provides a masterclass on where learning is headed in the age of generative AI.</p><p>Alex challenges the dystopian view of AI as an isolating force, arguing instead that it will serve as a <strong>"motivational translator"</strong> and a facilitator for deeper human relationships within schools. 
We dive into the psychological traits students need to thrive—focusing on <strong>metacognition and agility</strong> rather than specific technical tools—and discuss the "race against time" the industry faces to prove its value before a significant cultural backlash takes root.</p><h3><strong>Key Highlights:</strong></h3><ul><li><strong>The School of the Future:</strong> How AI will facilitate small group instruction and peer matchmaking rather than just individual screen time.</li><li><strong>Survival Skills for 2036:</strong> Why "learning how to learn" is the only durable skill in an asymptotic technological curve.</li><li><strong>Founder Wisdom:</strong> Why entrepreneurs should avoid "frothy" markets like generic AI language apps and look for underserved niches like accessibility compliance or specialized education.</li><li><strong>ASU GSV Summit:</strong> A preview of the "meeting of the minds" happening in San Diego this April.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In this episode of <strong>Education Futures</strong>, host Svenia Busson sits down with <strong>Alex Sarlin</strong>, the founder of <em>EdTech Insiders</em> and Global Edtech lead at the <strong>ASU GSV Summit</strong>. With over 15 years of experience at giants like Coursera and Skillshare, Alex provides a masterclass on where learning is headed in the age of generative AI.</p><p>Alex challenges the dystopian view of AI as an isolating force, arguing instead that it will serve as a <strong>"motivational translator"</strong> and a facilitator for deeper human relationships within schools. 
We dive into the psychological traits students need to thrive—focusing on <strong>metacognition and agility</strong> rather than specific technical tools—and discuss the "race against time" the industry faces to prove its value before a significant cultural backlash takes root.</p><h3><strong>Key Highlights:</strong></h3><ul><li><strong>The School of the Future:</strong> How AI will facilitate small group instruction and peer matchmaking rather than just individual screen time.</li><li><strong>Survival Skills for 2036:</strong> Why "learning how to learn" is the only durable skill in an asymptotic technological curve.</li><li><strong>Founder Wisdom:</strong> Why entrepreneurs should avoid "frothy" markets like generic AI language apps and look for underserved niches like accessibility compliance or specialized education.</li><li><strong>ASU GSV Summit:</strong> A preview of the "meeting of the minds" happening in San Diego this April.</li></ul><br/>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">75836d9f-961b-4283-9fc0-d05450e88689</guid><itunes:image href="https://artwork.captivate.fm/2d7dae56-ee11-4bfb-82f2-9416aac8450c/EF-CAPTIVATE-3000-x-3000-px-16.png"/><pubDate>Mon, 16 Feb 2026 06:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/75836d9f-961b-4283-9fc0-d05450e88689.mp3" length="20754199" type="audio/mpeg"/><itunes:duration>43:14</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>20</itunes:episode><podcast:episode>20</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="Using AI as a motivational translator (with Alex Sarlin)"><podcast:source uri="https://youtu.be/0xj8UHRsW2k"/></podcast:alternateEnclosure></item><item><title>Why hope is our greatest educational asset</title><itunes:title>Why hope is our greatest educational 
asset</itunes:title><description><![CDATA[<p>In this episode, we sit down with <strong>François Taddei</strong>, Chief Exploration Officer of the Learning Planet Institute, to discuss the radical shifts needed in global education. François shares his vision of "ethical dream-weaving"—the process of building futures that are a nightmare for no one—and argues that in a world of polycrisis, hope and collective wisdom are the only tools that can truly scale.</p><h3>Key Themes &amp; Highlights</h3><ul><li><strong>The Ethical Dreamer</strong>: François describes himself as a weaver of ethical dreams. He explains how we must balance the "nightmares" built by those in power by meeting fellow dreamers to co-create a better world.</li><li><strong>Intergenerational Co-design</strong>: Why should decisions about the future be made only by those who won't live to see them? François points out that while 50% of the population is under 30, they hold only 1% of decision-making power. He advocates for involving students in co-designing everything from their buildings to their curricula.</li><li><strong>Education as the "Midwife" of Democracy</strong>: Drawing on John Dewey, the discussion explores how education must help democracy be born anew in every generation. We look at the history of democratic "fractals"—from city-states to nations—and why we now need a <strong>planetary democracy</strong> to manage our shared commons.</li><li><strong>AI and the Humanity Loop</strong>: Rather than delegating thinking to machines, François suggests we use AI to analyze complex "dream spaces" and find consensus among 10 billion people. However, he warns that the most vital human skills—managing emotions, mourning, and finding hope—cannot be learned from an algorithm.</li><li><strong>The Power of Hope</strong>: Referencing young activist Francisco Vera, François concludes that hope is the last thing we can afford to lose. 
Even if systems collapse, hope allows a generation to rebuild, reinvent, and co-create alternatives.</li></ul><br/><h3>Notable Quotes</h3><blockquote>"I tend to try to weave ethical dreams, which are dreams that are a nightmare for no one, neither today nor tomorrow." — <strong>François Taddei</strong></blockquote><blockquote>"The last thing we can afford to lose is hope. Because if you lose hope, then you cannot do anything. But if you have hope... you can rebuild and you can reinvent." — <strong>Francisco Vera</strong> (via François Taddei)</blockquote><h3>Resources Mentioned</h3><ul><li><strong>Learning Planet Institute</strong>: An interdisciplinary center co-designed by students.</li><li><strong>The UN Pact for the Future</strong>: A commitment to making decisions in the best interest of future generations.</li><li><strong>Planetary Commons</strong>: The three types of commons we must protect: Natural, Human-made, and Digital.</li><li><strong>Aristotle's Three Forms of Knowledge</strong>: <em>Episteme</em> (science), <em>Tekne</em> (technology), and <em>Phronesis</em> (the ethics of action).</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In this episode, we sit down with <strong>François Taddei</strong>, Chief Exploration Officer of the Learning Planet Institute, to discuss the radical shifts needed in global education. François shares his vision of "ethical dream-weaving"—the process of building futures that are a nightmare for no one—and argues that in a world of polycrisis, hope and collective wisdom are the only tools that can truly scale.</p><h3>Key Themes &amp; Highlights</h3><ul><li><strong>The Ethical Dreamer</strong>: François describes himself as a weaver of ethical dreams. 
He explains how we must balance the "nightmares" built by those in power by meeting fellow dreamers to co-create a better world.</li><li><strong>Intergenerational Co-design</strong>: Why should decisions about the future be made only by those who won't live to see them? François points out that while 50% of the population is under 30, they hold only 1% of decision-making power. He advocates for involving students in co-designing everything from their buildings to their curricula.</li><li><strong>Education as the "Midwife" of Democracy</strong>: Drawing on John Dewey, the discussion explores how education must help democracy be born anew in every generation. We look at the history of democratic "fractals"—from city-states to nations—and why we now need a <strong>planetary democracy</strong> to manage our shared commons.</li><li><strong>AI and the Humanity Loop</strong>: Rather than delegating thinking to machines, François suggests we use AI to analyze complex "dream spaces" and find consensus among 10 billion people. However, he warns that the most vital human skills—managing emotions, mourning, and finding hope—cannot be learned from an algorithm.</li><li><strong>The Power of Hope</strong>: Referencing young activist Francisco Vera, François concludes that hope is the last thing we can afford to lose. Even if systems collapse, hope allows a generation to rebuild, reinvent, and co-create alternatives.</li></ul><br/><h3>Notable Quotes</h3><blockquote>"I tend to try to weave ethical dreams, which are dreams that are a nightmare for no one, neither today nor tomorrow." — <strong>François Taddei</strong></blockquote><blockquote>"The last thing we can afford to lose is hope. Because if you lose hope, then you cannot do anything. But if you have hope... you can rebuild and you can reinvent." 
— <strong>Francisco Vera</strong> (via François Taddei)</blockquote><h3>Resources Mentioned</h3><ul><li><strong>Learning Planet Institute</strong>: An interdisciplinary center co-designed by students.</li><li><strong>The UN Pact for the Future</strong>: A commitment to making decisions in the best interest of future generations.</li><li><strong>Planetary Commons</strong>: The three types of commons we must protect: Natural, Human-made, and Digital.</li><li><strong>Aristotle's Three Forms of Knowledge</strong>: <em>Episteme</em> (science), <em>Tekne</em> (technology), and <em>Phronesis</em> (the ethics of action).</li></ul><br/>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">70c90d20-7100-4df6-9fd2-f50ecc199c4f</guid><itunes:image href="https://artwork.captivate.fm/4e71155e-1cff-447b-a146-4e794f7d9412/EF-CAPTIVATE-3000-x-3000-px-15.png"/><pubDate>Mon, 09 Feb 2026 06:30:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/70c90d20-7100-4df6-9fd2-f50ecc199c4f.mp3" length="27721160" type="audio/mpeg"/><itunes:duration>57:45</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>19</itunes:episode><podcast:episode>19</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="Why hope is our greatest educational asset (with François Taddei, LPI)"><podcast:source uri="https://youtu.be/FnCaB7z5ymw"/></podcast:alternateEnclosure></item><item><title>Translating AI research into educational reality</title><itunes:title>Translating AI research into educational reality</itunes:title><description><![CDATA[<p>AI in education is evolving at a pace that often overwhelms teachers, school leaders, and policymakers. New tools appear weekly. Research lags behind practice. 
Hype fills the gap.</p><p>So how do we make <em>good decisions</em> when certainty is impossible?</p><p>In this episode of <strong>Education Futures</strong>, Svenia is joined by <strong>Chris Agnew</strong>, who leads the AI Hub for Education at the <strong>Stanford Accelerator for Learning</strong>.</p><p>Chris brings a rare perspective to the AI conversation. With a background in environmental and experiential education, from outdoor classrooms to apprenticeship-based learning, he has spent decades trying to bridge relevance, rigor, and access. Today, his role is to translate cutting-edge AI research into practical guidance for superintendents, state leaders, and education systems making decisions right now.</p><p>In our conversation, we explore:</p><ul><li>Why the biggest challenge is not innovation, but <strong>sense-making</strong></li><li>How the speed of AI creates noise, confusion, and decision paralysis</li><li>The persistent <strong>research-to-practice gap, </strong>and why it’s even harder with AI</li><li>What current evidence actually tells us (and doesn’t) about AI in K–12</li><li>Why most research today shows <em>promise</em>, not certainty</li><li>How leaders can think in <strong>short-cycle experiments</strong> instead of long-term predictions</li><li>The difference between using AI for <strong>efficiency</strong>, <strong>outcomes</strong>, and <strong>reimagining school</strong></li><li>Why personalization has too often turned into isolation, and how AI could help reverse that</li><li>A vision of future schools built around collaboration, real-world learning, and apprenticeship-like experiences</li></ul><br/><p>Chris also shares why banning AI from schools is unrealistic, but blindly adopting it is equally risky, and why adult judgment, not student technical skill, will matter most in the years ahead.</p><p>This episode is not about finding definitive answers, it’s about <strong>building the capacity to learn, adapt, and decide well,</strong> 
even when the future remains uncertain.</p><p>Learn more about the hub here: https://scale.stanford.edu/ai</p><p>The report <strong><em><a href="https://scale.stanford.edu/sites/default/files/The%20Evidence%20Base%20on%20AI%20in%20K-12%20Report.pdf" rel="noopener noreferrer" target="_blank">The Evidence Base on AI in K-12: A 2026 Review</a> </em></strong>is out now:</p><p>https://scale.stanford.edu/sites/default/files/The%20Evidence%20Base%20on%20AI%20in%20K-12%20Report.pdf</p>]]></description><content:encoded><![CDATA[<p>AI in education is evolving at a pace that often overwhelms teachers, school leaders, and policymakers. New tools appear weekly. Research lags behind practice. Hype fills the gap.</p><p>So how do we make <em>good decisions</em> when certainty is impossible?</p><p>In this episode of <strong>Education Futures</strong>, Svenia is joined by <strong>Chris Agnew</strong>, who leads the AI Hub for Education at the <strong>Stanford Accelerator for Learning</strong>.</p><p>Chris brings a rare perspective to the AI conversation. With a background in environmental and experiential education, from outdoor classrooms to apprenticeship-based learning, he has spent decades trying to bridge relevance, rigor, and access. 
Today, his role is to translate cutting-edge AI research into practical guidance for superintendents, state leaders, and education systems making decisions right now.</p><p>In our conversation, we explore:</p><ul><li>Why the biggest challenge is not innovation, but <strong>sense-making</strong></li><li>How the speed of AI creates noise, confusion, and decision paralysis</li><li>The persistent <strong>research-to-practice gap, </strong>and why it’s even harder with AI</li><li>What current evidence actually tells us (and doesn’t) about AI in K–12</li><li>Why most research today shows <em>promise</em>, not certainty</li><li>How leaders can think in <strong>short-cycle experiments</strong> instead of long-term predictions</li><li>The difference between using AI for <strong>efficiency</strong>, <strong>outcomes</strong>, and <strong>reimagining school</strong></li><li>Why personalization has too often turned into isolation, and how AI could help reverse that</li><li>A vision of future schools built around collaboration, real-world learning, and apprenticeship-like experiences</li></ul><br/><p>Chris also shares why banning AI from schools is unrealistic, but blindly adopting it is equally risky, and why adult judgment, not student technical skill, will matter most in the years ahead.</p><p>This episode is not about finding definitive answers, it’s about <strong>building the capacity to learn, adapt, and decide well,</strong> even when the future remains uncertain.</p><p>Learn more about the hub here: https://scale.stanford.edu/ai</p><p>The report <strong><em><a href="https://scale.stanford.edu/sites/default/files/The%20Evidence%20Base%20on%20AI%20in%20K-12%20Report.pdf" rel="noopener noreferrer" target="_blank">The Evidence Base on AI in K-12: A 2026 Review</a> </em></strong>is out 
now:</p><p>https://scale.stanford.edu/sites/default/files/The%20Evidence%20Base%20on%20AI%20in%20K-12%20Report.pdf</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">636697dd-4f5c-41ab-949b-8b946a62a818</guid><itunes:image href="https://artwork.captivate.fm/fd3b7d2d-b693-47dc-be2b-ddc11e115be8/EF-CAPTIVATE-3000-x-3000-px-14.png"/><pubDate>Mon, 02 Feb 2026 15:55:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/636697dd-4f5c-41ab-949b-8b946a62a818.mp3" length="22013928" type="audio/mpeg"/><itunes:duration>45:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>18</itunes:episode><podcast:episode>18</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="Translating AI research into educational reality (with Chris Agnew, Stanford)"><podcast:source uri="https://youtu.be/M5FR28BrF9s"/></podcast:alternateEnclosure></item><item><title>AI denial is the real risk</title><itunes:title>AI denial is the real risk</itunes:title><description><![CDATA[<p>In this episode of <strong>Education Futures</strong>, we welcome <strong>Louis Rosenberg</strong> — technologist, entrepreneur, and long-time researcher in virtual reality, augmented reality, and artificial intelligence.</p><p>Louis has spent decades building technologies designed to <strong>augment human intelligence </strong>and warning about the risks when we fail to understand what we are building.</p><p>In this conversation, we focus on a phenomenon Louis has recently written extensively about: <strong>AI denial</strong>, society’s tendency to underestimate AI’s capabilities as a way of avoiding an uncomfortable truth.</p><p>We explore:</p><ul><li>Why dismissing AI as “just slop” or “just autocomplete” is dangerously outdated</li><li>How AI systems are becoming <strong>cognitive rivals</strong>, not 
just tools</li><li>Why humans are especially vulnerable to <strong>anthropomorphizing</strong> conversational AI</li><li>The concept of <strong>asymmetric relationships</strong> between humans and AI</li><li>Why photorealistic, conversational AI represents a new and far more powerful form of influence</li><li>How AI may reshape education, work, relationships, and childhood itself</li><li>What skills children actually need to develop in a world of rapid, continuous change</li><li>Why banning AI in schools is not a solution — but neither is naïve adoption</li></ul><br/><p>Louis argues that we are repeating the same mistake we made with social media: regulating yesterday’s risks while ignoring tomorrow’s.</p><p>He also shares a radically different vision for AI’s future — inspired by <strong>swarm intelligence and biomimicry</strong> — where AI is used not to replace humans, but to <strong>connect groups of people into collective intelligence</strong>, keeping human values, judgment, and responsibility at the center.</p><p>This episode is a call to move beyond fear <em>and</em> denial — and to educate the next generation with clarity, realism, and agency.</p><h3>📚 Essential reading — Louis Rosenberg on AI denial</h3><p>These recent articles are directly referenced in the conversation and provide crucial context:</p><p>https://bigthink.com/the-present/the-rise-of-ai-denialism/</p><p>https://venturebeat.com/technology/ai-denial-is-becoming-an-enterprise-risk-why-dismissing-slop-obscures-real</p><p>https://bigthink.com/the-future/what-happens-the-day-after-humans-create-agi/</p>]]></description><content:encoded><![CDATA[<p>In this episode of <strong>Education Futures</strong>, we welcome <strong>Louis Rosenberg</strong> — technologist, entrepreneur, and long-time researcher in virtual reality, augmented reality, and artificial intelligence.</p><p>Louis has spent decades building technologies designed to <strong>augment human intelligence </strong>and warning about the 
risks when we fail to understand what we are building.</p><p>In this conversation, we focus on a phenomenon Louis has recently written extensively about: <strong>AI denial</strong>, society’s tendency to underestimate AI’s capabilities as a way of avoiding an uncomfortable truth.</p><p>We explore:</p><ul><li>Why dismissing AI as “just slop” or “just autocomplete” is dangerously outdated</li><li>How AI systems are becoming <strong>cognitive rivals</strong>, not just tools</li><li>Why humans are especially vulnerable to <strong>anthropomorphizing</strong> conversational AI</li><li>The concept of <strong>asymmetric relationships</strong> between humans and AI</li><li>Why photorealistic, conversational AI represents a new and far more powerful form of influence</li><li>How AI may reshape education, work, relationships, and childhood itself</li><li>What skills children actually need to develop in a world of rapid, continuous change</li><li>Why banning AI in schools is not a solution — but neither is naïve adoption</li></ul><br/><p>Louis argues that we are repeating the same mistake we made with social media: regulating yesterday’s risks while ignoring tomorrow’s.</p><p>He also shares a radically different vision for AI’s future — inspired by <strong>swarm intelligence and biomimicry</strong> — where AI is used not to replace humans, but to <strong>connect groups of people into collective intelligence</strong>, keeping human values, judgment, and responsibility at the center.</p><p>This episode is a call to move beyond fear <em>and</em> denial — and to educate the next generation with clarity, realism, and agency.</p><h3>📚 Essential reading — Louis Rosenberg on AI denial</h3><p>These recent articles are directly referenced in the conversation and provide crucial
context:</p><p>https://bigthink.com/the-present/the-rise-of-ai-denialism/</p><p>https://venturebeat.com/technology/ai-denial-is-becoming-an-enterprise-risk-why-dismissing-slop-obscures-real</p><p>https://bigthink.com/the-future/what-happens-the-day-after-humans-create-agi/</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">f8c347bd-9b64-40bd-8a49-f0f948065d5e</guid><itunes:image href="https://artwork.captivate.fm/688f7c5d-58fc-4b94-a2b8-94143f170331/EF-CAPTIVATE-3000-x-3000-px-13.png"/><pubDate>Mon, 26 Jan 2026 06:30:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/f8c347bd-9b64-40bd-8a49-f0f948065d5e.mp3" length="27427126" type="audio/mpeg"/><itunes:duration>57:08</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>17</itunes:episode><podcast:episode>17</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="AI denial is the real risk (with Louis Rosenberg, Unanimous AI)"><podcast:source uri="https://youtu.be/dP46nyMhbdY"/></podcast:alternateEnclosure></item><item><title>Education for the stewardship of the commons</title><itunes:title>Education for the stewardship of the commons</itunes:title><description><![CDATA[<p>What is education actually for, in a world where AI can increasingly do things <em>for</em> us?</p><p>In this episode of <strong>Education Futures</strong>, recorded at the Learning Planet Institute in Paris, we sit down with <strong>Seth Frey</strong>, professor at UC Davis working at the intersection of computer science, social science, and self-governance.</p><p>Seth’s work focuses on a rarely discussed question:</p><p><strong>what skills do humans need to run things together — responsibly, collectively, and democratically?</strong></p><p>Rather than framing AI as a tool to accelerate productivity or replace learning, Seth 
argues that AI is an <em>uncomfortable gift</em>: it strips away the superficial parts of education and forces us to confront why we learn in the first place.</p><p>In this conversation, we explore:</p><ul><li>Why AI is often used as a <strong>substitute for learning</strong>, not a support for it</li><li>The crucial difference between <strong>formative</strong> and <strong>summative</strong> uses of AI</li><li>Why authenticity and motivation matter more than ever in education</li><li>How peer-to-peer learning reduces reliance on AI shortcuts</li><li>Why meetings, dialogue, and facilitation are <em>learned skills</em>, not inefficiencies</li><li>What it means to educate for <strong>stewardship of the commons</strong></li><li>Why responsible technology requires people who can govern together</li><li>How education could shift from credentials to lived, cooperative experience</li></ul><br/><p>Seth introduces the idea of a <em>“commoning standard”</em>: a framework for the basic literacies required to steward shared resources — from classrooms and organizations to technologies and communities.</p><p>This episode is about <strong>re-centering education on agency, responsibility, and collective capacity, </strong>and asking what kind of people we need to cultivate <em>before</em> deciding what role AI should play.</p>]]></description><content:encoded><![CDATA[<p>What is education actually for, in a world where AI can increasingly do things <em>for</em> us?</p><p>In this episode of <strong>Education Futures</strong>, recorded at the Learning Planet Institute in Paris, we sit down with <strong>Seth Frey</strong>, professor at UC Davis working at the intersection of computer science, social science, and self-governance.</p><p>Seth’s work focuses on a rarely discussed question:</p><p><strong>what skills do humans need to run things together — responsibly, collectively, and democratically?</strong></p><p>Rather than framing AI as a tool to accelerate productivity or replace 
learning, Seth argues that AI is an <em>uncomfortable gift</em>: it strips away the superficial parts of education and forces us to confront why we learn in the first place.</p><p>In this conversation, we explore:</p><ul><li>Why AI is often used as a <strong>substitute for learning</strong>, not a support for it</li><li>The crucial difference between <strong>formative</strong> and <strong>summative</strong> uses of AI</li><li>Why authenticity and motivation matter more than ever in education</li><li>How peer-to-peer learning reduces reliance on AI shortcuts</li><li>Why meetings, dialogue, and facilitation are <em>learned skills</em>, not inefficiencies</li><li>What it means to educate for <strong>stewardship of the commons</strong></li><li>Why responsible technology requires people who can govern together</li><li>How education could shift from credentials to lived, cooperative experience</li></ul><br/><p>Seth introduces the idea of a <em>“commoning standard”</em>: a framework for the basic literacies required to steward shared resources — from classrooms and organizations to technologies and communities.</p><p>This episode is about <strong>re-centering education on agency, responsibility, and collective capacity, </strong>and asking what kind of people we need to cultivate <em>before</em> deciding what role AI should play.</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">95b924bb-c9b7-4c0b-b396-3797206d34c5</guid><itunes:image href="https://artwork.captivate.fm/8224cd79-4ac0-4236-970c-acab5ae26517/EF-CAPTIVATE-3000-x-3000-px-12.png"/><pubDate>Mon, 19 Jan 2026 06:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/95b924bb-c9b7-4c0b-b396-3797206d34c5.mp3" length="19417774" 
type="audio/mpeg"/><itunes:duration>40:27</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>16</itunes:episode><podcast:episode>16</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="Education for the stewardship of the commons (with Seth Frey, UC Davis)"><podcast:source uri="https://youtu.be/Z-I8_gmJULE"/></podcast:alternateEnclosure></item><item><title>When AI forces Universities to face reality</title><itunes:title>When AI forces Universities to face reality</itunes:title><description><![CDATA[<p>What if artificial intelligence isn’t the real threat to higher education?</p><p>In this episode of <strong>Education Futures</strong>, we speak with <strong>Michael Burgess</strong>, a sharp and uncompromising voice on the future of higher education, to unpack a provocative idea:</p><p><strong>Universities don’t have a technology problem, they have a strategy problem.</strong></p><p>Michael argues that AI isn’t breaking universities.</p><p>It’s revealing what has been broken for a long time.</p><p>Together, we explore:</p><ul><li>Why higher education’s operating model hasn’t fundamentally changed for centuries</li><li>How AI is accelerating the exposure of slow, inefficient, and misaligned systems</li><li>Why “AI transformation” is often the wrong framing, and why <em>betterment</em> matters more</li><li>What universities misunderstand about students, industry, and the marketplace</li><li>Why entry-level work, credentialing, and degree length must be rethought</li><li>How innovation is more likely to come from outside universities than within</li><li>What the future of higher education could look like in a plural, AI-enabled world</li></ul><br/><p>Michael also explains why most institutions fall in love with shiny tools, layer them onto broken processes, and then wonder why nothing changes, and why real progress 
starts with courage, clarity, and a willingness to cannibalize parts of the existing model.</p>]]></description><content:encoded><![CDATA[<p>What if artificial intelligence isn’t the real threat to higher education?</p><p>In this episode of <strong>Education Futures</strong>, we speak with <strong>Michael Burgess</strong>, a sharp and uncompromising voice on the future of higher education, to unpack a provocative idea:</p><p><strong>Universities don’t have a technology problem, they have a strategy problem.</strong></p><p>Michael argues that AI isn’t breaking universities.</p><p>It’s revealing what has been broken for a long time.</p><p>Together, we explore:</p><ul><li>Why higher education’s operating model hasn’t fundamentally changed for centuries</li><li>How AI is accelerating the exposure of slow, inefficient, and misaligned systems</li><li>Why “AI transformation” is often the wrong framing, and why <em>betterment</em> matters more</li><li>What universities misunderstand about students, industry, and the marketplace</li><li>Why entry-level work, credentialing, and degree length must be rethought</li><li>How innovation is more likely to come from outside universities than within</li><li>What the future of higher education could look like in a plural, AI-enabled world</li></ul><br/><p>Michael also explains why most institutions fall in love with shiny tools, layer them onto broken processes, and then wonder why nothing changes, and why real progress starts with courage, clarity, and a willingness to cannibalize parts of the existing model.</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">adefeb88-433a-4368-9bc2-e54a83f5c25b</guid><itunes:image href="https://artwork.captivate.fm/31179fb4-f0d9-4d2a-a1ac-296cc88ed9bc/EF-CAPTIVATE-3000-x-3000-px-10.png"/><pubDate>Mon, 12 Jan 2026 14:23:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/adefeb88-433a-4368-9bc2-e54a83f5c25b.mp3" 
length="19773875" type="audio/mpeg"/><itunes:duration>41:12</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>15</itunes:episode><podcast:episode>15</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="When AI forces Universities to face reality (with Michael Burgess)"><podcast:source uri="https://youtu.be/wnQGCFgf8Zw"/></podcast:alternateEnclosure></item><item><title>Imagining better futures in the age of AI</title><itunes:title>Imagining better futures in the age of AI</itunes:title><description><![CDATA[<p>Are we really living through the worst moment in history, or are we actually in the <em>best position ever</em> to build a better future?</p><p>In this episode of <strong>Education Futures</strong>, we welcome <strong>Beatrice Erkers</strong>, Existential Hope Program Manager at the <strong>Foresight Institute</strong>.</p><p>Beatrice works at the intersection of technology, long-term thinking, and societal progress, helping people move beyond doom narratives to imagine <em>desirable</em> futures — and take responsibility for shaping them.</p><p>Together, we explore:</p><ul><li>Why pessimism about the future is historically misguided</li><li>How “existential hope” differs from blind optimism</li><li>Why agency matters more than prediction when thinking about the future</li><li>How world-building and scenario planning can help rethink education</li><li>Why education may be the most powerful lever for long-term change</li><li>How AI could make learning more humane — not less</li><li>What skills and mindsets future education systems should prioritize</li><li>Why plural, diverse futures matter more than one “perfect” vision</li></ul><br/><p>Beatrice also shares inspiring examples — from AI tutoring models like <strong>Alpha School</strong> to progress-oriented movements — and explains why hope is inseparable from 
action.</p><p>This episode is an invitation to stop asking <em>“what will happen?”</em></p><p>and start asking <em>“what future do we want to build — and how do we begin now?”</em></p>]]></description><content:encoded><![CDATA[<p>Are we really living through the worst moment in history, or are we actually in the <em>best position ever</em> to build a better future?</p><p>In this episode of <strong>Education Futures</strong>, we welcome <strong>Beatrice Erkers</strong>, Existential Hope Program Manager at the <strong>Foresight Institute</strong>.</p><p>Beatrice works at the intersection of technology, long-term thinking, and societal progress, helping people move beyond doom narratives to imagine <em>desirable</em> futures — and take responsibility for shaping them.</p><p>Together, we explore:</p><ul><li>Why pessimism about the future is historically misguided</li><li>How “existential hope” differs from blind optimism</li><li>Why agency matters more than prediction when thinking about the future</li><li>How world-building and scenario planning can help rethink education</li><li>Why education may be the most powerful lever for long-term change</li><li>How AI could make learning more humane — not less</li><li>What skills and mindsets future education systems should prioritize</li><li>Why plural, diverse futures matter more than one “perfect” vision</li></ul><br/><p>Beatrice also shares inspiring examples — from AI tutoring models like <strong>Alpha School</strong> to progress-oriented movements — and explains why hope is inseparable from action.</p><p>This episode is an invitation to stop asking <em>“what will happen?”</em></p><p>and start asking <em>“what future do we want to build — and how do we begin now?”</em></p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">5f8fc681-090c-41e6-a28f-d811654f761d</guid><itunes:image 
href="https://artwork.captivate.fm/689b293e-dc9a-4cda-b9c6-4cad626cdda6/EF-CAPTIVATE-3000-x-3000-px-8.png"/><pubDate>Mon, 05 Jan 2026 08:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/5f8fc681-090c-41e6-a28f-d811654f761d.mp3" length="23458604" type="audio/mpeg"/><itunes:duration>48:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>14</itunes:episode><podcast:episode>14</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="Imagining better futures in the age of AI (with Beatrice Erkers, Foresight Institute)"><podcast:source uri="https://youtu.be/NYT8nqoXqzM"/></podcast:alternateEnclosure></item><item><title>A student&apos;s take on learning in the age of AI</title><itunes:title>A student&apos;s take on learning in the age of AI</itunes:title><description><![CDATA[<p>What does school look like through the eyes of a 15-year-old who <em>actually</em> uses AI to learn, not just to finish homework faster?</p><p>In this episode of <strong>Education Futures</strong>, Laurent &amp; Svenia sit down with <strong>Hudson</strong>, a high school student from San Diego who is already doing computer science education research, and working with a UCSD mentor on learning science–informed approaches to teaching.</p><p>Hudson shares a strikingly clear perspective on what’s broken in today’s education system, and what could be rebuilt in the age of AI.</p><p>Rather than focusing on AI literacy, tools, or prompt engineering, he argues for something deeper: <strong>teaching students how to think</strong>.</p><p><strong>People &amp; institutions mentioned:</strong></p><ul><li>Art of Problem Solving for Math education - https://artofproblemsolving.com/</li><li>Carnegie Mellon University – LearnLab - Kenneth Koedinger &amp; John Stamper (intelligent tutoring systems) 
https://learnlab.org</li></ul><br/>]]></description><content:encoded><![CDATA[<p>What does school look like through the eyes of a 15-year-old who <em>actually</em> uses AI to learn, not just to finish homework faster?</p><p>In this episode of <strong>Education Futures</strong>, Laurent &amp; Svenia sit down with <strong>Hudson</strong>, a high school student from San Diego who is already doing computer science education research, and working with a UCSD mentor on learning science–informed approaches to teaching.</p><p>Hudson shares a strikingly clear perspective on what’s broken in today’s education system, and what could be rebuilt in the age of AI.</p><p>Rather than focusing on AI literacy, tools, or prompt engineering, he argues for something deeper: <strong>teaching students how to think</strong>.</p><p><strong>People &amp; institutions mentioned:</strong></p><ul><li>Art of Problem Solving for Math education - https://artofproblemsolving.com/</li><li>Carnegie Mellon University – LearnLab - Kenneth Koedinger &amp; John Stamper (intelligent tutoring systems) https://learnlab.org</li></ul><br/>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">3aac2a04-bf07-4a4f-8f4d-2dc4319c3ac3</guid><itunes:image href="https://artwork.captivate.fm/626563ec-1550-410c-8ce3-234c9d25b58b/EF-CAPTIVATE-3000-x-3000-px-7.png"/><pubDate>Mon, 29 Dec 2025 06:30:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/3aac2a04-bf07-4a4f-8f4d-2dc4319c3ac3.mp3" length="19005458" type="audio/mpeg"/><itunes:duration>39:36</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>13</itunes:episode><podcast:episode>13</podcast:episode><podcast:season>1</podcast:season></item><item><title>Why teach coding in the age of vibe coding</title><itunes:title>Why teach coding in the age of vibe coding</itunes:title><description><![CDATA[<p>Is 
learning how to code still relevant when AI can generate code for us?</p><p>In this episode of <strong>Education Futures</strong>, we sit down with <strong>Dora Palfi</strong>, founder of imagi, to explore what computer science education should become in an AI-driven world — and what <em>must not</em> be lost along the way.</p><p>Dora has a background in computer science and neuroscience and has spent years working at the intersection of education, technology, and equity. Her core conviction is clear: <strong>AI should not become a shortcut that removes understanding, agency, and critical thinking from learning</strong>.</p><p>We discuss why coding still matters — not as a job guarantee, but as a way to understand how the world works — and why telling students they don’t need to understand technology anymore is not only wrong, but patronizing.</p><p>Check out "Hour of AI", a Lovable x OpenAI x imagi initiative: https://imagilabs.com/pages/hour-of-code</p><p>We discussed the book "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky &amp; Nate Soares</p><p>https://ifanyonebuildsit.com/</p>]]></description><content:encoded><![CDATA[<p>Is learning how to code still relevant when AI can generate code for us?</p><p>In this episode of <strong>Education Futures</strong>, we sit down with <strong>Dora Palfi</strong>, founder of imagi, to explore what computer science education should become in an AI-driven world — and what <em>must not</em> be lost along the way.</p><p>Dora has a background in computer science and neuroscience and has spent years working at the intersection of education, technology, and equity.
Her core conviction is clear: <strong>AI should not become a shortcut that removes understanding, agency, and critical thinking from learning</strong>.</p><p>We discuss why coding still matters — not as a job guarantee, but as a way to understand how the world works — and why telling students they don’t need to understand technology anymore is not only wrong, but patronizing.</p><p>Check out "Hour of AI", a Lovable x OpenAI x imagi initiative: https://imagilabs.com/pages/hour-of-code</p><p>We discussed the book "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky &amp; Nate Soares</p><p>https://ifanyonebuildsit.com/</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">23def3c4-304c-4653-bcc7-ec7837905eb8</guid><itunes:image href="https://artwork.captivate.fm/c420cb1d-12fe-4f43-867a-a4e5007bc4be/EF-CAPTIVATE-3000-x-3000-px-9.png"/><pubDate>Mon, 22 Dec 2025 06:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/23def3c4-304c-4653-bcc7-ec7837905eb8.mp3" length="12472129" type="audio/mpeg"/><itunes:duration>25:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>12</itunes:episode><podcast:episode>12</podcast:episode><podcast:season>1</podcast:season></item><item><title>🇩🇪 A youth dialogue on keeping the future human</title><itunes:title>🇩🇪 A youth dialogue on keeping the future human</itunes:title><description><![CDATA[<p>(EPISODE IN GERMAN - FOR THE ENGLISH SUBTITLES CHECK OUR YOUTUBE CHANNEL: https://youtu.be/tiLh9Jq1ApU)</p><p>In this award-winning episode of the podcast series "A youth dialogue on keeping the future human", host Svenia Busson speaks with three students from Berlin and Hamburg: Flor (13), Luca (15), and Sofia (15). While the global debate often focuses on future risks, these students are already living with the reality of AI in the classroom today.</p><p>They offer a candid look at how AI is already changing education, from teachers who respond to ChatGPT essays with threats to the very real fear of deepfakes that divide society. They argue that bans are no solution; instead, they call on schools to teach critical thinking and AI literacy from the very beginning.</p><p>In this episode we discuss, among other things:</p><ul><li>Inequality and the "super-rich": Flor (13) puts a blunt ethical question to the developers: <em>"Does it support many people, or does it only help the super-rich get even richer?"</em> This is a direct critique of the concentration of economic power, a central theme in Aguirre's essay.</li><li>Data sovereignty: Luca (15) voices a very clear concern, not about AI's intelligence but about privacy.</li><li>The "Her" syndrome (social isolation): Flor and Luca stress the emotional danger. They recognize the risk that AI replaces friendships, not just work.</li><li>AI as a "super-tutor" (positive use): Contrary to the idea that students use AI only to cheat, Flor explains that she uses personalized AI assistants (which her father created via Fobizz) to practice debating and to correct her pronunciation.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>(EPISODE IN GERMAN - FOR THE ENGLISH SUBTITLES CHECK OUR YOUTUBE CHANNEL: https://youtu.be/tiLh9Jq1ApU)</p><p>In this award-winning episode of the podcast series "A youth dialogue on keeping the future human", host Svenia Busson speaks with three students from Berlin and Hamburg: Flor (13), Luca (15), and Sofia (15). While the global debate often focuses on future risks, these students are already living with the reality of AI in the classroom today.</p><p>They offer a candid look at how AI is already changing education, from teachers who respond to ChatGPT essays with threats to the very real fear of deepfakes that divide society. They argue that bans are no solution; instead, they call on schools to teach critical thinking and AI literacy from the very beginning.</p><p>In this episode we discuss, among other things:</p><ul><li>Inequality and the "super-rich": Flor (13) puts a blunt ethical question to the developers: <em>"Does it support many people, or does it only help the super-rich get even richer?"</em> This is a direct critique of the concentration of economic power, a central theme in Aguirre's essay.</li><li>Data sovereignty: Luca (15) voices a very clear concern, not about AI's intelligence but about privacy.</li><li>The "Her" syndrome (social isolation): Flor and Luca stress the emotional danger. They recognize the risk that AI replaces friendships, not just work.</li><li>AI as a "super-tutor" (positive use): Contrary to the idea that students use AI only to cheat, Flor explains that she uses personalized AI assistants (which her father created via Fobizz) to practice debating and to correct her pronunciation.</li></ul><br/>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">86f45a6c-5691-4c59-804a-ed60ba6db8c1</guid><itunes:image href="https://artwork.captivate.fm/a8282e75-ae15-41e6-afbd-987488126bd3/EF-CAPTIVATE-3000-x-3000-px-1.png"/><pubDate>Tue, 16 Dec 2025 18:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/86f45a6c-5691-4c59-804a-ed60ba6db8c1.mp3" length="67202354" type="audio/mpeg"/><itunes:duration>35:00</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>11</itunes:episode><podcast:episode>11</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="🇩🇪 A youth dialogue on keeping the future human"><podcast:source uri="https://youtu.be/tiLh9Jq1ApU"/></podcast:alternateEnclosure></item><item><title>🇫🇷 A youth dialogue on keeping the future human</title><itunes:title>🇫🇷 A youth dialogue on keeping the future human</itunes:title><description><![CDATA[<p>(EPISODE IN FRENCH - ENGLISH SUBTITLES AVAILABLE ON YOUTUBE: https://youtu.be/iR5IhYTjYA4)</p><p>In this episode of the award-winning series "A youth dialogue on keeping the future human", Svenia Busson talks with three French students: Alaïs (11), Sarah-Léa (15), and Arthur (16). While many fear a dangerous or "evil" AI, these young people dread something more subtle: that it makes us useless.</p><p>Together, they debate the "silver platter problem": the idea that if AI makes life too easy for us, we lose our ability to think, to make an effort, and to grow.</p><p><strong>In this episode, we cover:</strong></p><ul><li><strong>The "Silver Platter":</strong> Arthur's warning about convenience killing our curiosity.</li><li><strong>The Value of Effort:</strong> Why Sarah-Léa believes difficulty is essential to human happiness.</li><li><strong>Human Connection:</strong> Alaïs's definition of a "human future": a world where we still meet in real life, not just behind screens.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>(EPISODE IN FRENCH - ENGLISH SUBTITLES AVAILABLE ON YOUTUBE: https://youtu.be/iR5IhYTjYA4)</p><p>In this episode of the award-winning series "A youth dialogue on keeping the future human", Svenia Busson talks with three French students: Alaïs (11), Sarah-Léa (15), and Arthur (16). While many fear a dangerous or "evil" AI, these young people dread something more subtle: that it makes us useless.</p><p>Together, they debate the "silver platter problem": the idea that if AI makes life too easy for us, we lose our ability to think, to make an effort, and to grow.</p><p><strong>In this episode, we cover:</strong></p><ul><li><strong>The "Silver Platter":</strong> Arthur's warning about convenience killing our curiosity.</li><li><strong>The Value of Effort:</strong> Why Sarah-Léa believes difficulty is essential to human happiness.</li><li><strong>Human Connection:</strong> Alaïs's definition of a "human future": a world where we still meet in real life, not just behind screens.</li></ul><br/>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">96c8a329-b6c5-43a3-921d-5813a9ca71bd</guid><itunes:image href="https://artwork.captivate.fm/4bb17118-8631-44f2-a114-0cbd1d6d2c18/EF-CAPTIVATE-3000-x-3000-px-2.png"/><pubDate>Tue, 16 Dec 2025 18:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/96c8a329-b6c5-43a3-921d-5813a9ca71bd.mp3" length="80130667" type="audio/mpeg"/><itunes:duration>41:44</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>10</itunes:episode><podcast:episode>10</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="🇫🇷 A youth dialogue on keeping the future human"><podcast:source uri="https://youtu.be/iR5IhYTjYA4"/></podcast:alternateEnclosure></item><item><title>🇺🇸 A youth dialogue on keeping the future human</title><itunes:title>🇺🇸 A youth dialogue on keeping the future human</itunes:title><description><![CDATA[<p>In this episode, Svenia sits down with Hudson (15), Joseph (15), and Violet (17), high school students in San Diego and members of their high school’s AI Club.</p><p>She decodes Anthony Aguirre’s "Keep the Future Human" essay and asks these high school students burning questions around AI.
Moving far beyond surface-level tech talk, they tackle the hardest ethical questions facing Silicon Valley today.</p><p>In this episode, we discuss:</p><ul><li>The "Bystander Effect": Violet’s powerful challenge to tech CEOs who claim they are "just following the market."</li><li>Optimization vs. Agency: How the recommendation algorithms they grew up with have shaped—and potentially hijacked—their cognitive freedom.</li><li>The "Danger Triangle": A technical debate on whether adding <em>Autonomy</em> to <em>General Intelligence</em> inevitably leads to loss of control.</li><li>Optimism for "Tool AI": Why Hudson believes AI should remain an "exoskeleton for the mind" (like AlphaFold) rather than a replacement for human thought.</li></ul><br/><p>Violet shared a podcast by David Fajgenbaum (on repurposing medicine); listen to it here: https://www.youtube.com/watch?v=YWIft9yiHAo</p>]]></description><content:encoded><![CDATA[<p>In this episode, Svenia sits down with Hudson (15), Joseph (15), and Violet (17), high school students in San Diego and members of their high school’s AI Club.</p><p>She decodes Anthony Aguirre’s "Keep the Future Human" essay and asks these high school students burning questions about AI. Moving far beyond surface-level tech talk, they tackle the hardest ethical questions facing Silicon Valley today.</p><p>In this episode, we discuss:</p><ul><li>The "Bystander Effect": Violet’s powerful challenge to tech CEOs who claim they are "just following the market."</li><li>Optimization vs.
Agency: How the recommendation algorithms they grew up with have shaped—and potentially hijacked—their cognitive freedom.</li><li>The "Danger Triangle": A technical debate on whether adding <em>Autonomy</em> to <em>General Intelligence</em> inevitably leads to loss of control.</li><li>Optimism for "Tool AI": Why Hudson believes AI should remain an "exoskeleton for the mind" (like AlphaFold) rather than a replacement for human thought.</li></ul><br/><p>Violet shared a podcast by David Fajgenbaum (on repurposing medicine); listen to it here: https://www.youtube.com/watch?v=YWIft9yiHAo</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">fe44de36-8aea-4ad2-9c66-e95675537e44</guid><itunes:image href="https://artwork.captivate.fm/4e486d3b-92a1-46da-9ab1-c9777b03bf65/EF-CAPTIVATE-3000-x-3000-px-4.png"/><pubDate>Tue, 16 Dec 2025 18:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/fe44de36-8aea-4ad2-9c66-e95675537e44.mp3" length="89364226" type="audio/mpeg"/><itunes:duration>46:33</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>9</itunes:episode><podcast:episode>9</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="🇺🇸 A youth dialogue on keeping the future human (Special Series)"><podcast:source uri="https://youtu.be/EieVPurj7Qk"/></podcast:alternateEnclosure></item><item><title>🇬🇧 A youth dialogue on keeping the future human</title><itunes:title>🇬🇧 A youth dialogue on keeping the future human</itunes:title><description><![CDATA[<p>In this special award-winning series for the Future of Life Institute's <em>Keep the Future Human</em> contest, host Svenia Busson sits down with teenagers from across Europe and the US.</p><p>In this episode, she sits down with three international students based in Paris and Barcelona: Emma (13),
Jack (14), and Hector (16). Moving between metaphors of movies and video game controllers, they tackle a fear that isn't about Terminator robots, but about something much quieter: human atrophy.</p><p>Together, they debate the "Wall-E Scenario"—a future where AI solves every problem so efficiently that humans forget how to walk, think, or care. They argue that our imperfections, our slowness, and our need to make an effort aren't bugs to be fixed by an algorithm—they are the very features that make life meaningful.</p><p><strong>In this episode, we discuss:</strong></p><ul><li><strong>The Wall-E Scenario:</strong> Jack’s fear that comfort will lead to cognitive laziness and the loss of critical thinking.</li><li><strong>The "Off" Button:</strong> Why retaining an "overriding capacity" is non-negotiable for the next generation.</li><li><strong>Profit vs. People:</strong> Hector’s call for AI that prioritizes equality over efficiency.</li><li><strong>The Beauty of Imperfection:</strong> Emma’s argument that a "human future" is one where we are allowed to make mistakes.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In this special award-winning series for the Future of Life Institute's <em>Keep the Future Human</em> contest, host Svenia Busson sits down with teenagers from across Europe and the US.</p><p>In this episode, she sits down with three international students based in Paris and Barcelona: Emma (13), Jack (14), and Hector (16). Moving between metaphors of movies and video game controllers, they tackle a fear that isn't about Terminator robots, but about something much quieter: human atrophy.</p><p>Together, they debate the "Wall-E Scenario"—a future where AI solves every problem so efficiently that humans forget how to walk, think, or care.
They argue that our imperfections, our slowness, and our need to make an effort aren't bugs to be fixed by an algorithm—they are the very features that make life meaningful.</p><p><strong>In this episode, we discuss:</strong></p><ul><li><strong>The Wall-E Scenario:</strong> Jack’s fear that comfort will lead to cognitive laziness and the loss of critical thinking.</li><li><strong>The "Off" Button:</strong> Why retaining an "overriding capacity" is non-negotiable for the next generation.</li><li><strong>Profit vs. People:</strong> Hector’s call for AI that prioritizes equality over efficiency.</li><li><strong>The Beauty of Imperfection:</strong> Emma’s argument that a "human future" is one where we are allowed to make mistakes.</li></ul><br/>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">8e725dc3-59b0-4667-b6d9-725cc5b46072</guid><itunes:image href="https://artwork.captivate.fm/e5ae93a2-28ae-44db-8afd-fbdcfa51c992/EF-CAPTIVATE-3000-x-3000-px-3.png"/><pubDate>Tue, 16 Dec 2025 18:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/8e725dc3-59b0-4667-b6d9-725cc5b46072.mp3" length="65412755" type="audio/mpeg"/><itunes:duration>38:56</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>8</itunes:episode><podcast:episode>8</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="🇬🇧 A youth dialogue on keeping the future human"><podcast:source uri="https://youtu.be/r3pgolIUUCM"/></podcast:alternateEnclosure></item><item><title>AI Safety: Protecting kids, schools, and society</title><itunes:title>AI Safety: Protecting kids, schools, and society</itunes:title><description><![CDATA[<p>In this episode of <em>Education Futures</em>, Svenia &amp; Laurent speak with <strong>Erin Mote</strong>, Co-Founder and Executive Director of 
<strong>InnovateEDU</strong>, about the urgent intersection of <strong>AI safety, youth online protection, and the future of learning systems</strong>. Erin, a leading technologist and policy voice — and a mother of two — explains why <strong>safety must come first</strong> in the EDSAFE AI Alliance framework and why protecting children means safeguarding not just their <strong>data</strong>, but their <strong>experience</strong> with AI systems.</p><p>Erin breaks down the risks of biased predictive systems, engagement-optimized consumer chatbots, and AI companions — and shares emerging U.S. legislation, including a groundbreaking <strong>package of 19 federal bills</strong> focused on kids’ online safety.</p><p>Together, they explore the coming disruption of the workforce: the disappearance of entry-level jobs, the difference between <strong>automation vs. displacement</strong>, and what young people must learn <em>now</em> to thrive in an AI-shaped economy.</p><p>Erin also outlines what a future-ready school must look like: human-centered, deeply relational, grounded in learning science, and designed to build judgment, discernment, dialogue, and metacognition — skills AI cannot replace.</p><p>A powerful and urgent conversation for educators, policymakers, and parents navigating the arrival of this technology.</p>]]></description><content:encoded><![CDATA[<p>In this episode of <em>Education Futures</em>, Svenia &amp; Laurent speak with <strong>Erin Mote</strong>, Co-Founder and Executive Director of
Erin, a leading technologist and policy voice — and a mother of two — explains why <strong>safety must come first</strong> in the EDSAFE AI Alliance framework and why protecting children means safeguarding not just their <strong>data</strong>, but their <strong>experience</strong> with AI systems.</p><p>Erin breaks down the risks of biased predictive systems, engagement-optimized consumer chatbots, and AI companions — and shares emerging U.S. legislation, including a groundbreaking <strong>package of 19 federal bills</strong> focused on kids’ online safety.</p><p>Together, they explore the coming disruption of the workforce: the disappearance of entry-level jobs, the difference between <strong>automation vs. displacement</strong>, and what young people must learn <em>now</em> to thrive in an AI-shaped economy.</p><p>Erin also outlines what a future-ready school must look like: human-centered, deeply relational, grounded in learning science, and designed to build judgment, discernment, dialogue, and metacognition — skills AI cannot replace.</p><p>A powerful and urgent conversation for educators, policymakers, and parents navigating the arrival of this technology.</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">9b7bb463-0e0a-4b9f-983e-d8837769f210</guid><itunes:image href="https://artwork.captivate.fm/6453ef77-ad58-4774-afcd-b4fb3e745382/EF-CAPTIVATE-3000-x-3000-px.png"/><pubDate>Fri, 12 Dec 2025 07:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/9b7bb463-0e0a-4b9f-983e-d8837769f210.mp3" length="24483231" type="audio/mpeg"/><itunes:duration>51:00</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>7</itunes:episode><podcast:episode>7</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="AI Safety: Protecting kids, schools, and society with
Erin Mote, InnovateEDU"><podcast:source uri="https://youtu.be/4nVX7wSyK3Y"/></podcast:alternateEnclosure></item><item><title>Raising an AI-literate generation</title><itunes:title>Raising an AI-literate generation</itunes:title><description><![CDATA[<p>In this episode of <em>Education Futures</em>, Svenia sits down with Corey Layne Crouch, educator and Chief Program Officer of AI for Education, to explore what it really means to build <em>AI literacy</em> across school communities. Corey shares her journey from high-school English teacher to school leader, EdTech designer, and ultimately advocate for responsible, human-centered AI adoption.</p><p>She unpacks her C-Framework for AI literacy — helping students, teachers, and parents understand how generative AI works, how to use it safely, ethically, and effectively, and why foundational knowledge matters more than quick prompting tips. Corey and Svenia discuss students’ growing fears that AI may “make them lazy,” the rising risks of AI companionship tools for adolescents, and why intentional, human-centered learning is more important than ever.</p><p>Together, they imagine the school of 2035, explore the opportunities of generative AI to enhance relationships and creativity, and outline what education systems must do now to protect young people and empower them to thrive in an AI-saturated world.</p><p>A hopeful and essential conversation about keeping learning deeply human.</p><p>Links:</p><p>AI for Education – https://www.aiforeducation.io/</p><p>Women in AI &amp; Education Community - https://www.aiforeducation.io/women-in-ai-ed</p><p>Jonathan Haidt’s research on social media, adolescent mental health, and smartphone-driven developmental changes; his book “The Anxious Generation” is worth a read: https://www.amazon.fr/Anxious-Generation-Rewiring-Childhood-Epidemic/dp/0593655036</p><p>Chicago Public Schools: AI Guidebook: https://www.cps.edu/strategic-initiatives/ai-guidebook/</p><p>The Rithm Project:
Michelle Culver: <a href="https://www.therithmproject.org/" rel="noopener noreferrer" target="_blank">https://www.therithmproject.org/</a></p>]]></description><content:encoded><![CDATA[<p>In this episode of <em>Education Futures</em>, Svenia sits down with Corey Layne Crouch, educator and Chief Program Officer of AI for Education, to explore what it really means to build <em>AI literacy</em> across school communities. Corey shares her journey from high-school English teacher to school leader, EdTech designer, and ultimately advocate for responsible, human-centered AI adoption.</p><p>She unpacks her C-Framework for AI literacy — helping students, teachers, and parents understand how generative AI works, how to use it safely, ethically, and effectively, and why foundational knowledge matters more than quick prompting tips. Corey and Svenia discuss students’ growing fears that AI may “make them lazy,” the rising risks of AI companionship tools for adolescents, and why intentional, human-centered learning is more important than ever.</p><p>Together, they imagine the school of 2035, explore the opportunities of generative AI to enhance relationships and creativity, and outline what education systems must do now to protect young people and empower them to thrive in an AI-saturated world.</p><p>A hopeful and essential conversation about keeping learning deeply human.</p><p>Links:</p><p>AI for Education – https://www.aiforeducation.io/</p><p>Women in AI &amp; Education Community - https://www.aiforeducation.io/women-in-ai-ed</p><p>Jonathan Haidt’s research on social media, adolescent mental health, and smartphone-driven developmental changes; his book “The Anxious Generation” is worth a read: https://www.amazon.fr/Anxious-Generation-Rewiring-Childhood-Epidemic/dp/0593655036</p><p>Chicago Public Schools: AI Guidebook: https://www.cps.edu/strategic-initiatives/ai-guidebook/</p><p>The Rithm Project: Michelle Culver: <a href="https://www.therithmproject.org/"
rel="noopener noreferrer" target="_blank">https://www.therithmproject.org/</a></p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">51ba6552-0c28-4fa3-9451-374ea3b7a188</guid><itunes:image href="https://artwork.captivate.fm/ed4641de-852e-46e7-8f0a-0f4981fff557/EDUCATION-FUTURES-PODCAST-CAPTIVATE-3000-x-3000-px-5.png"/><pubDate>Thu, 04 Dec 2025 06:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/51ba6552-0c28-4fa3-9451-374ea3b7a188.mp3" length="24910176" type="audio/mpeg"/><itunes:duration>51:54</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>6</itunes:episode><podcast:episode>6</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="Raising an AI-literate generation with Corey Layne Crouch"><podcast:source uri="https://youtu.be/GKWf5mA0XyI"/></podcast:alternateEnclosure></item><item><title>How to think critically in the age of AI</title><itunes:title>How to think critically in the age of AI</itunes:title><description><![CDATA[<p>Tech philosopher and author Tom Chatfield joins Education Futures for a deep exploration of what it really means to learn, think, and stay human in an age of powerful AI systems.</p><p>We talk about the central paradox of our era: the skills required to use AI wisely are the very ones AI risks eroding. Tom explains why LLMs should <em>prompt us</em> rather than replace our thinking, and how critical thinking, dialogue, embodiment, and emotional self-mastery remain irreplaceably human. We dive into practical ways to teach critical thinking, what Plato and Descartes would say about our technological moment, why pausing is essential, and how to design a school for 2040 that truly uplifts learners.
A hopeful, nuanced conversation about intelligence—human and artificial—and how to build a future where technology serves our flourishing.</p><p>Tom's substack: https://tomchatfield.substack.com/</p><p>Tom's latest report "AI and the future of pedagogy" for Sage: https://www.sagepub.com/explore-our-content/white-papers/2025/11/03/ai-and-the-future-of-pedagogy</p><p>Tom's latest book "Wise Animals": https://www.panmacmillan.com/authors/tom-chatfield/wise-animals/9781529079746</p><p>Tom's children's book "Your brilliant brain": https://amzn.eu/d/cuFzpmo</p><p>Tom's book to teach critical thinking: "Critical Thinking: Your Guide to Effective Argument, Successful Analysis and Independent Study": https://a.co/d/5iZ1NI5</p><p>Tom's TED talk: https://www.ted.com/talks/tom_chatfield_7_ways_games_reward_the_brain</p><p>Game designer Adrian Hon's work: https://mssv.net/about/</p>]]></description><content:encoded><![CDATA[<p>Tech philosopher and author Tom Chatfield joins Education Futures for a deep exploration of what it really means to learn, think, and stay human in an age of powerful AI systems.</p><p>We talk about the central paradox of our era: the skills required to use AI wisely are the very ones AI risks eroding. Tom explains why LLMs should <em>prompt us</em> rather than replace our thinking, and how critical thinking, dialogue, embodiment, and emotional self-mastery remain irreplaceably human. We dive into practical ways to teach critical thinking, what Plato and Descartes would say about our technological moment, why pausing is essential, and how to design a school for 2040 that truly uplifts learners.
A hopeful, nuanced conversation about intelligence—human and artificial—and how to build a future where technology serves our flourishing.</p><p>Tom's substack: https://tomchatfield.substack.com/</p><p>Tom's latest report "AI and the future of pedagogy" for Sage: https://www.sagepub.com/explore-our-content/white-papers/2025/11/03/ai-and-the-future-of-pedagogy</p><p>Tom's latest book "Wise Animals": https://www.panmacmillan.com/authors/tom-chatfield/wise-animals/9781529079746</p><p>Tom's children's book "Your brilliant brain": https://amzn.eu/d/cuFzpmo</p><p>Tom's book to teach critical thinking: "Critical Thinking: Your Guide to Effective Argument, Successful Analysis and Independent Study": https://a.co/d/5iZ1NI5</p><p>Tom's TED talk: https://www.ted.com/talks/tom_chatfield_7_ways_games_reward_the_brain</p><p>Game designer Adrian Hon's work: https://mssv.net/about/</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">9b739bfd-c0c7-4a57-af8a-b2f344eb9e4a</guid><itunes:image href="https://artwork.captivate.fm/0ae6b0bb-eaf5-48ce-89a5-ba9d7504fdd7/EDUCATION-FUTURES-PODCAST-CAPTIVATE-3000-x-3000-px-4.png"/><pubDate>Wed, 26 Nov 2025 06:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/9b739bfd-c0c7-4a57-af8a-b2f344eb9e4a.mp3" length="25317268" type="audio/mpeg"/><itunes:duration>52:45</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>5</itunes:episode><podcast:episode>5</podcast:episode><podcast:season>1</podcast:season></item><item><title>What is school for in the age of AI?</title><itunes:title>What is school for in the age of AI?</itunes:title><description><![CDATA[<p>In this episode, Svenia Busson and Laurent Jolie speak with Chris Bush, an Australian teacher, school leader, and Churchill Fellow exploring how artificial intelligence can make education more equitable and human.</p><p>Chris shares
how he built <em>Mitch</em>, an empathetic AI tutor that answered 1,500 student questions in one night—proving that well-designed AI can extend teachers’ reach while caring for students’ wellbeing. His story of <em>Ahmed</em>, a student who doubled his grades thanks to Mitch, reveals AI’s potential to bridge learning gaps for those who can’t afford private tutoring.</p><p>The conversation dives into insights from Chris’s Churchill Fellowship report on equitable AI in education, uncovering disparities between rich and poor schools, and between the global North and South. They discuss inspiring models like Chicago’s teacher-driven AI policy, the Alpha School in Texas, and national approaches in Finland and Estonia.</p><p>Together, they reflect on a deeper question: What is school for in the age of AI?</p><p>From redefining teacher roles to amplifying student voice, they imagine a future where AI supports—not replaces—human relationships, creativity, and critical thinking.</p><p>Here are some of the insights Chris shared:</p><p>Chris Bush’s AI playbook and report on <em>Equitable AI in Education</em> (Churchill Fellowship) <a href="http://www.chrisbushai.com/" rel="noopener noreferrer" target="_blank">www.chrisbushai.com</a>&nbsp;</p><p>PlayLab (AI tools for teachers) https://www.playlab.ai/ to create chatbots like "Mitch"</p><p>Oxford University’s <em>AI Community of Practice</em> (AI COP Oxford) led by Dr Sara Ratner <a href="https://www.linkedin.com/groups/13399032/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/groups/13399032/</a>&nbsp;</p><p>“Thrive”, a book by Valerie Hannon: <a href="https://www.amazon.com.au/Thrive-Purpose-Schools-Changing-World/dp/1108819974" rel="noopener noreferrer" target="_blank">https://www.amazon.com.au/Thrive-Purpose-Schools-Changing-World/dp/1108819974</a>&nbsp;</p><p><a href="https://www.aiforeducation.io/" rel="noopener noreferrer" target="_blank"><em>AI for Education</em></a> an organization training
educators in AI literacy in the US (https://www.aiforeducation.io/)</p><p>Report re: global disparity in AI use: <a href="https://www.microsoft.com/en-us/research/group/aiei/ai-diffusion/msockid=3043950b234760192cb984bd274766c4" rel="noopener noreferrer" target="_blank">https://www.microsoft.com/en-us/research/group/aiei/ai-diffusion/msockid=3043950b234760192cb984bd274766c4</a> </p>]]></description><content:encoded><![CDATA[<p>In this episode, Svenia Busson and Laurent Jolie speak with Chris Bush, an Australian teacher, school leader, and Churchill Fellow exploring how artificial intelligence can make education more equitable and human.</p><p>Chris shares how he built <em>Mitch</em>, an empathetic AI tutor that answered 1,500 student questions in one night—proving that well-designed AI can extend teachers’ reach while caring for students’ wellbeing. His story of <em>Ahmed</em>, a student who doubled his grades thanks to Mitch, reveals AI’s potential to bridge learning gaps for those who can’t afford private tutoring.</p><p>The conversation dives into insights from Chris’s Churchill Fellowship report on equitable AI in education, uncovering disparities between rich and poor schools, and between the global North and South.
They discuss inspiring models like Chicago’s teacher-driven AI policy, the Alpha School in Texas, and national approaches in Finland and Estonia.</p><p>Together, they reflect on a deeper question: What is school for in the age of AI?</p><p>From redefining teacher roles to amplifying student voice, they imagine a future where AI supports—not replaces—human relationships, creativity, and critical thinking.</p><p>Here are some of the insights Chris shared:</p><p>Chris Bush’s AI playbook and report on <em>Equitable AI in Education</em> (Churchill Fellowship) <a href="http://www.chrisbushai.com/" rel="noopener noreferrer" target="_blank">www.chrisbushai.com</a>&nbsp;</p><p>PlayLab (AI tools for teachers) https://www.playlab.ai/ to create chatbots like "Mitch"</p><p>Oxford University’s <em>AI Community of Practice</em> (AI COP Oxford) led by Dr Sara Ratner <a href="https://www.linkedin.com/groups/13399032/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/groups/13399032/</a>&nbsp;</p><p>“Thrive”, a book by Valerie Hannon: <a href="https://www.amazon.com.au/Thrive-Purpose-Schools-Changing-World/dp/1108819974" rel="noopener noreferrer" target="_blank">https://www.amazon.com.au/Thrive-Purpose-Schools-Changing-World/dp/1108819974</a>&nbsp;</p><p><a href="https://www.aiforeducation.io/" rel="noopener noreferrer" target="_blank"><em>AI for Education</em></a> an organization training educators in AI literacy in the US (https://www.aiforeducation.io/)</p><p>Report re: global disparity in AI use: <a href="https://www.microsoft.com/en-us/research/group/aiei/ai-diffusion/msockid=3043950b234760192cb984bd274766c4" rel="noopener noreferrer" target="_blank">https://www.microsoft.com/en-us/research/group/aiei/ai-diffusion/msockid=3043950b234760192cb984bd274766c4</a> </p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">6034a8ef-2f9d-4329-aa25-7bfc64c32e63</guid><itunes:image 
href="https://artwork.captivate.fm/555cf608-9b6b-4a67-8994-6ef07f00b60b/EDUCATION-FUTURES-PODCAST-CAPTIVATE-3000-x-3000-px-3.png"/><pubDate>Thu, 20 Nov 2025 01:00:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/6034a8ef-2f9d-4329-aa25-7bfc64c32e63.mp3" length="23724217" type="audio/mpeg"/><itunes:duration>49:25</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>4</itunes:episode><podcast:episode>4</podcast:episode><podcast:season>1</podcast:season></item><item><title>A father and AI engineer on parenting in the age of AI</title><itunes:title>A father and AI engineer on parenting in the age of AI</itunes:title><description><![CDATA[<p>In this episode of Education Futures, host Svenia Busson sits down with Samuel Path, a senior engineer in AI and a father of four, to explore the evolving landscape of artificial intelligence and its impact on parenting. Samuel shares his journey from a curious child hacking video games to becoming a passionate AI enthusiast. They discuss the transformative power of AI in the workplace, the importance of lifelong learning, and how to prepare the next generation for an AI-driven world. 
Tune in for a thought-provoking conversation on the opportunities and challenges of raising children in the age of AI.</p><p>Here are some of the insights Sam shared:</p><p><a href="https://eurekalabs.ai/" rel="noopener noreferrer" target="_blank">https://eurekalabs.ai/</a> - AI education venture founded by Andrej Karpathy</p><p><a href="https://kurzgesagt.org/" rel="noopener noreferrer" target="_blank">https://kurzgesagt.org/</a> - Great videos that spark curiosity about science &amp; technology</p><p><a href="https://khanacademy.org/" rel="noopener noreferrer" target="_blank">https://khanacademy.org/</a> + <a href="https://scratch.mit.edu/" rel="noopener noreferrer" target="_blank">https://scratch.mit.edu/</a> are the online tools he uses for his kids</p><p>The video where Kurzgesagt discusses this problem of slop: <a href="https://www.youtube.com/watch?v=_zfN9wnPvU0" rel="noopener noreferrer" target="_blank">https://www.youtube.com/watch?v=_zfN9wnPvU0</a></p><p>On Demis Hassabis winning the Nobel Prize in Chemistry (2024)</p><p><a href="https://deepmind.google/blog/demis-hassabis-john-jumper-awarded-nobel-prize-in-chemistry/" rel="noopener noreferrer" target="_blank">https://deepmind.google/blog/demis-hassabis-john-jumper-awarded-nobel-prize-in-chemistry/</a></p><p>HBR article on AI-generated "workslop" that's destroying coworker productivity</p><p><a href="https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity" rel="noopener noreferrer" target="_blank">https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity</a></p><p>Sam recommends reading "The AI education death spiral" by Anand Sanwal: <a href="https://anandsanwal.me/ai-education-death-spiral/" rel="noopener noreferrer" target="_blank">https://anandsanwal.me/ai-education-death-spiral/</a></p><p>Self-driving technology in China is now even more advanced than in San Francisco (as of Nov 12th, 2025)</p><p><a 
href="https://www.wsj.com/business/autos/china-robotaxi-self-driving-waymo-254ce0a1?st=BCZFqE" rel="noopener noreferrer" target="_blank">https://www.wsj.com/business/autos/china-robotaxi-self-driving-waymo-254ce0a1?st=BCZFqE</a></p><p>The article about hope, sand and soil by Riwa Harfoush</p><p><a href="https://substack.com/@riwaharfoush/p-177572330" rel="noopener noreferrer" target="_blank">https://substack.com/@riwaharfoush/p-177572330</a></p>]]></description><content:encoded><![CDATA[<p>In this episode of Education Futures, host Svenia Busson sits down with Samuel Path, a senior engineer in AI and a father of four, to explore the evolving landscape of artificial intelligence and its impact on parenting. Samuel shares his journey from a curious child hacking video games to becoming a passionate AI enthusiast. They discuss the transformative power of AI in the workplace, the importance of lifelong learning, and how to prepare the next generation for an AI-driven world. Tune in for a thought-provoking conversation on the opportunities and challenges of raising children in the age of AI.</p><p>Here are some of the insights Sam shared:</p><p><a href="https://eurekalabs.ai/" rel="noopener noreferrer" target="_blank">https://eurekalabs.ai/</a> - AI education venture founded by Andrej Karpathy</p><p><a href="https://kurzgesagt.org/" rel="noopener noreferrer" target="_blank">https://kurzgesagt.org/</a> - Great videos that spark curiosity about science &amp; technology</p><p><a href="https://khanacademy.org/" rel="noopener noreferrer" target="_blank">https://khanacademy.org/</a> + <a href="https://scratch.mit.edu/" rel="noopener noreferrer" target="_blank">https://scratch.mit.edu/</a> are the online tools he uses for his kids</p><p>The video where Kurzgesagt discusses this problem of slop: <a href="https://www.youtube.com/watch?v=_zfN9wnPvU0" rel="noopener noreferrer" target="_blank">https://www.youtube.com/watch?v=_zfN9wnPvU0</a></p><p>On Demis Hassabis winning the
Nobel Prize in Chemistry (2024)</p><p><a href="https://deepmind.google/blog/demis-hassabis-john-jumper-awarded-nobel-prize-in-chemistry/" rel="noopener noreferrer" target="_blank">https://deepmind.google/blog/demis-hassabis-john-jumper-awarded-nobel-prize-in-chemistry/</a></p><p>HBR article on AI-generated "workslop" that's destroying coworker productivity</p><p><a href="https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity" rel="noopener noreferrer" target="_blank">https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity</a></p><p>Sam recommends reading "The AI education death spiral" by Anand Sanwal: <a href="https://anandsanwal.me/ai-education-death-spiral/" rel="noopener noreferrer" target="_blank">https://anandsanwal.me/ai-education-death-spiral/</a></p><p>Self-driving technology in China is now even more advanced than in San Francisco (as of Nov 12th, 2025)</p><p><a href="https://www.wsj.com/business/autos/china-robotaxi-self-driving-waymo-254ce0a1?st=BCZFqE" rel="noopener noreferrer" target="_blank">https://www.wsj.com/business/autos/china-robotaxi-self-driving-waymo-254ce0a1?st=BCZFqE</a></p><p>The article about hope, sand and soil by Riwa Harfoush</p><p><a href="https://substack.com/@riwaharfoush/p-177572330" rel="noopener noreferrer" target="_blank">https://substack.com/@riwaharfoush/p-177572330</a></p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">87cddb81-e78c-4307-bc31-c8ac8b5c04b8</guid><itunes:image href="https://artwork.captivate.fm/240597a0-04c9-410e-ba7c-491661c19db3/EDUCATION-FUTURES-PODCAST-CAPTIVATE-3000-x-3000-px-2.png"/><pubDate>Wed, 12 Nov 2025 18:08:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/87cddb81-e78c-4307-bc31-c8ac8b5c04b8.mp3" length="29756831" 
type="audio/mpeg"/><itunes:duration>01:02:00</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>3</itunes:episode><podcast:episode>3</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="A father and AI engineer on parenting in the age of AI (with Samuel Path)"><podcast:source uri="https://youtu.be/GDAb3J84bBY"/></podcast:alternateEnclosure></item><item><title>Empowering teachers in the age of AI</title><itunes:title>Empowering teachers in the age of AI</itunes:title><description><![CDATA[<p>In this episode of Education Futures, Svenia Busson and Laurent Jolie engage with Diana Knodel and Ksenia Sokolyanskaya from Fobizz to explore the future of education in the age of AI. They discuss Fobizz's initiatives in AI literacy, the importance of lifelong learning, and the ethical considerations of AI in education. The conversation highlights the need for AI literacy, the role of technology in schools, and the challenges of integrating AI into education systems globally.</p><p>Books recommended during the episode: </p><p>https://www.goodreads.com/book/show/214384219-the-promises-and-perils-of-ai-in-education</p><p>https://www.beltz.de/fachmedien/paedagogik/produkte/details/56407-schule-2035.html (in German)</p>]]></description><content:encoded><![CDATA[<p>In this episode of Education Futures, Svenia Busson and Laurent Jolie engage with Diana Knodel and Ksenia Sokolyanskaya from Fobizz to explore the future of education in the age of AI. They discuss Fobizz's initiatives in AI literacy, the importance of lifelong learning, and the ethical considerations of AI in education. 
The conversation highlights the need for AI literacy, the role of technology in schools, and the challenges of integrating AI into education systems globally.</p><p>Books recommended during the episode: </p><p>https://www.goodreads.com/book/show/214384219-the-promises-and-perils-of-ai-in-education</p><p>https://www.beltz.de/fachmedien/paedagogik/produkte/details/56407-schule-2035.html (in German)</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">d829ef7b-6f29-40c9-a93d-90ddc63f5f84</guid><itunes:image href="https://artwork.captivate.fm/d44cb60e-23b4-433e-99ba-f1151afe482e/EDUCATION-FUTURES-PODCAST-CAPTIVATE-3000-x-3000-px-1.png"/><pubDate>Thu, 06 Nov 2025 21:38:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/d829ef7b-6f29-40c9-a93d-90ddc63f5f84.mp3" length="22416840" type="audio/mpeg"/><itunes:duration>46:42</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>2</itunes:episode><podcast:episode>2</podcast:episode><podcast:season>1</podcast:season><podcast:alternateEnclosure type="video/youtube" title="Empowering teachers in the age of AI with Diana and Ksenia from Fobizz"><podcast:source uri="https://youtu.be/6-5n0ybQJ8Q"/></podcast:alternateEnclosure></item><item><title>What is AI doing to our kids&apos; brains?</title><itunes:title>What is AI doing to our kids&apos; brains?</itunes:title><description><![CDATA[<p>In this episode, neuroscientist Grégoire Borst discusses the intersection of neuroscience and education, particularly in the context of AI's impact on young minds. He emphasizes the importance of critical thinking, the role of memorization, and the necessity of teaching students how to learn effectively. Grégoire also explores the future of education, the role of teachers, and the need for lifelong learning to adapt to a rapidly changing world. 
He debunks common neuromyths and highlights the importance of fostering critical thinking skills in students to prepare them for the future.</p>]]></description><content:encoded><![CDATA[<p>In this episode, neuroscientist Grégoire Borst discusses the intersection of neuroscience and education, particularly in the context of AI's impact on young minds. He emphasizes the importance of critical thinking, the role of memorization, and the necessity of teaching students how to learn effectively. Grégoire also explores the future of education, the role of teachers, and the need for lifelong learning to adapt to a rapidly changing world. He debunks common neuromyths and highlights the importance of fostering critical thinking skills in students to prepare them for the future.</p>]]></content:encoded><link><![CDATA[https://www.educationfutures.ai]]></link><guid isPermaLink="false">4b4588e3-7c66-4b84-ac21-681a764a66eb</guid><itunes:image href="https://artwork.captivate.fm/1dda9436-b49e-441f-b1b4-233fca743d5a/EDUCATION-FUTURES-PODCAST-CAPTIVATE-3000-x-3000-px.png"/><pubDate>Sun, 19 Oct 2025 15:30:00 +0200</pubDate><enclosure url="https://episodes.captivate.fm/episode/4b4588e3-7c66-4b84-ac21-681a764a66eb.mp3" length="25973464" type="audio/mpeg"/><itunes:duration>54:07</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>1</itunes:episode><podcast:episode>1</podcast:episode><podcast:season>1</podcast:season></item></channel></rss>