<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="https://feeds.captivate.fm/style.xsl" type="text/xsl"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><atom:link href="https://feeds.captivate.fm/accelerating-ai-ethics/" rel="self" type="application/rss+xml"/><title><![CDATA[Accelerating AI Ethics]]></title><podcast:guid>89996b96-3e60-5a48-afe8-70789f1fbac4</podcast:guid><lastBuildDate>Tue, 10 Mar 2026 12:05:37 +0000</lastBuildDate><generator>Captivate.fm</generator><language><![CDATA[en]]></language><copyright><![CDATA[Copyright 2026 Accelerator Fellowship Programme, University of Oxford]]></copyright><managingEditor>Accelerator Fellowship Programme, University of Oxford</managingEditor><itunes:summary><![CDATA[AI is transforming our world. 
But there are many ethical considerations, from how AI is changing our ways of working to potentially deepening social inequalities instead of creating new opportunities. 
That's why we're here, to spark urgent conversations about the most pressing ethical issues in AI. 
The Accelerator Fellowship Programme at the Institute for Ethics in AI, University of Oxford, brings together experts from civil society, industry, government and academia to address these ethical challenges head-on. 
We will explore topics such as the implications of AI for creativity, healthcare and global regulation, among many others. 
Our podcast will feature guests from diverse backgrounds and disciplines because we believe it is important to hear all perspectives and create an inclusive space where diverse opinions are welcome. 
Most episodes will be hosted by Dr Caroline Green, Director of Research at the Institute for Ethics in AI and Lead of the Accelerator Fellowship Programme. Find out more about us here: https://afp.oxford-aiethics.ox.ac.uk/ ]]></itunes:summary><image><url>https://artwork.captivate.fm/c13d1cd4-aa64-4cf3-9e5c-52049962e204/1dEfEvl9qZ06t-7QGLTom5Iy.jpg</url><title>Accelerating AI Ethics</title><link><![CDATA[https://accelerating-ai-ethics.captivate.fm]]></link></image><itunes:image href="https://artwork.captivate.fm/c13d1cd4-aa64-4cf3-9e5c-52049962e204/1dEfEvl9qZ06t-7QGLTom5Iy.jpg"/><itunes:owner><itunes:name>Accelerator Fellowship Programme, University of Oxford</itunes:name></itunes:owner><itunes:author>Accelerator Fellowship Programme, University of Oxford</itunes:author><description>AI is transforming our world. 
But there are many ethical considerations, from how AI is changing our ways of working to potentially deepening social inequalities instead of creating new opportunities. 
That&apos;s why we&apos;re here, to spark urgent conversations about the most pressing ethical issues in AI. 
The Accelerator Fellowship Programme at the Institute for Ethics in AI, University of Oxford, brings together experts from civil society, industry, government and academia to address these ethical challenges head-on. 
We will explore topics such as the implications of AI for creativity, healthcare and global regulation, among many others. 
Our podcast will feature guests from diverse backgrounds and disciplines because we believe it is important to hear all perspectives and create an inclusive space where diverse opinions are welcome. 
Most episodes will be hosted by Dr Caroline Green, Director of Research at the Institute for Ethics in AI and Lead of the Accelerator Fellowship Programme. Find out more about us here: https://afp.oxford-aiethics.ox.ac.uk/ </description><link>https://accelerating-ai-ethics.captivate.fm</link><atom:link href="https://pubsubhubbub.appspot.com" rel="hub"/><itunes:subtitle><![CDATA[Exploring bold ideas, innovative thinking, and creative responses to ethical challenges of AI]]></itunes:subtitle><itunes:explicit>false</itunes:explicit><itunes:type>episodic</itunes:type><itunes:category text="Technology"></itunes:category><itunes:category text="Society &amp; Culture"><itunes:category text="Philosophy"/></itunes:category><itunes:category text="Science"><itunes:category text="Social Sciences"/></itunes:category><podcast:locked>no</podcast:locked><podcast:medium>podcast</podcast:medium><item><title>Thick Alignment and the Future of AI Governance: A Conversation with Professor Alondra Nelson</title><itunes:title>Thick Alignment and the Future of AI Governance: A Conversation with Professor Alondra Nelson</itunes:title><description><![CDATA[<p>In this episode Dr Caroline Green from Oxford's Institute for Ethics in AI speaks to Professor Alondra Nelson about the social foundations of AI governance and thick alignment, a broader approach to aligning artificial intelligence with human values. Professor Nelson reflects on her interdisciplinary career bridging science, social research, and policy, including her role in developing the White House Blueprint for an AI Bill of Rights. She discusses how public engagement, trust, and clear communication are essential for ensuring that AI systems serve society rather than deepen existing inequalities.</p><p>Together, they explore the importance of interdisciplinary collaboration in AI ethics and governance, highlighting the risks of deploying powerful technologies without meaningful public dialogue. 
The conversation examines emerging challenges such as emotional relationships with AI chatbots, the role of governments and companies in shaping responsible innovation, and how inclusive policymaking can help align AI development with democratic values and the public good.</p>]]></description><content:encoded><![CDATA[<p>In this episode Dr Caroline Green from Oxford's Institute for Ethics in AI speaks to Professor Alondra Nelson about the social foundations of AI governance and thick alignment, a broader approach to aligning artificial intelligence with human values. Professor Nelson reflects on her interdisciplinary career bridging science, social research, and policy, including her role in developing the White House Blueprint for an AI Bill of Rights. She discusses how public engagement, trust, and clear communication are essential for ensuring that AI systems serve society rather than deepen existing inequalities.</p><p>Together, they explore the importance of interdisciplinary collaboration in AI ethics and governance, highlighting the risks of deploying powerful technologies without meaningful public dialogue. 
The conversation examines emerging challenges such as emotional relationships with AI chatbots, the role of governments and companies in shaping responsible innovation, and how inclusive policymaking can help align AI development with democratic values and the public good.</p>]]></content:encoded><link><![CDATA[https://accelerating-ai-ethics.captivate.fm]]></link><guid isPermaLink="false">27033ff2-efb3-4dbf-bc70-60af7d2e182e</guid><itunes:image href="https://artwork.captivate.fm/9b5ec228-1798-4096-8f61-7f7110e272fe/AFP-Podcast-episode-art-name-will-be-generated-in-software.png"/><pubDate>Tue, 10 Mar 2026 12:00:00 +0000</pubDate><enclosure url="https://episodes.captivate.fm/episode/27033ff2-efb3-4dbf-bc70-60af7d2e182e.mp3" length="88611862" type="audio/mpeg"/><itunes:duration>01:01:32</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>7</itunes:episode><podcast:episode>7</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/1d2602d6-ce09-4bcd-ada7-8f363e112703/index.html" type="text/html"/></item><item><title>Connecting global conversations on ethical AI: the Coded Bias World Tour and AI in Africa: Challenges and Opportunities</title><itunes:title>Connecting global conversations on ethical AI: the Coded Bias World Tour and AI in Africa: Challenges and Opportunities</itunes:title><description><![CDATA[<p>In this episode Dr Caroline Green from Oxford's Institute for Ethics in AI speaks to Dr Joy Buolamwini and Angela Oduor Lungati, Executive Director of Ushahidi, about building ethical, community-centered AI from Africa and beyond. 
Angela discusses her work leading Ushahidi, an open-source platform used globally to crowdsource crisis and human rights data, and how community-generated knowledge can shape more inclusive and context-aware AI systems. Joy reflects on her journey from uncovering bias in facial recognition systems to founding the Algorithmic Justice League, sharing how her research - featured in the documentary <em>Coded Bias</em> - exposes the social harms embedded in AI.</p><p>Together, they explore women’s leadership in technology, the importance of grassroots voices in shaping AI governance, and the need for data justice, language representation, and accountability across the AI supply chain. The conversation highlights Kenya’s national AI strategy, Africa’s growing role as a builder - not just consumer - of AI, and the environmental and labour impacts of AI infrastructure. The episode closes with reflections on creativity, humanity, and the role of art in guiding responsible innovation.</p>]]></description><content:encoded><![CDATA[<p>In this episode Dr Caroline Green from Oxford's Institute for Ethics in AI speaks to Dr Joy Buolamwini and Angela Oduor Lungati, Executive Director of Ushahidi, about building ethical, community-centered AI from Africa and beyond. Angela discusses her work leading Ushahidi, an open-source platform used globally to crowdsource crisis and human rights data, and how community-generated knowledge can shape more inclusive and context-aware AI systems. Joy reflects on her journey from uncovering bias in facial recognition systems to founding the Algorithmic Justice League, sharing how her research - featured in the documentary <em>Coded Bias</em> - exposes the social harms embedded in AI.</p><p>Together, they explore women’s leadership in technology, the importance of grassroots voices in shaping AI governance, and the need for data justice, language representation, and accountability across the AI supply chain. 
The conversation highlights Kenya’s national AI strategy, Africa’s growing role as a builder - not just consumer - of AI, and the environmental and labour impacts of AI infrastructure. The episode closes with reflections on creativity, humanity, and the role of art in guiding responsible innovation.</p>]]></content:encoded><link><![CDATA[https://accelerating-ai-ethics.captivate.fm]]></link><guid isPermaLink="false">b117a5e3-8961-4f8a-b592-e590ca2e9801</guid><itunes:image href="https://artwork.captivate.fm/ef641e73-902d-4ef6-ab94-147336fa3866/Episode-5-Joy-and-Angela-artwork.jpg"/><pubDate>Mon, 23 Feb 2026 16:00:00 +0000</pubDate><enclosure url="https://episodes.captivate.fm/episode/b117a5e3-8961-4f8a-b592-e590ca2e9801.mp3" length="66135101" type="audio/mpeg"/><itunes:duration>55:06</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>6</itunes:episode><podcast:episode>6</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/31c1457f-2abb-41e9-9a17-60a0ada9d7d9/index.html" type="text/html"/><podcast:alternateEnclosure type="video/youtube" title="African Voices in AI | The Nairobi Conversation"><podcast:source uri="https://youtu.be/jYhx-qjQEJ8"/></podcast:alternateEnclosure></item><item><title>Accelerating AI Ethics: Is Our Privacy Law Ready for the Age of AI?</title><itunes:title>Accelerating AI Ethics: Is Our Privacy Law Ready for the Age of AI?</itunes:title><description><![CDATA[<p>In this episode of <em>Accelerating AI Ethics</em> from Oxford’s Institute for Ethics in AI, Dr Caroline Green speaks with Professor Ignacio Cofone, Professor of AI Law and Regulation, about his book <em>The Privacy Fallacy: Harm and Power in the Information Economy</em>.</p><p>They discuss why today’s consent-based privacy laws are ill-suited to an AI-driven data economy, how data harms are often relational rather than 
individual, and why focusing on data use - rather than data collection - is key to preventing exploitation while enabling innovation. The conversation also explores power dynamics in the tech ecosystem, harm liability, and what meaningful reform of data protection law could look like in the age of AI.</p>]]></description><content:encoded><![CDATA[<p>In this episode of <em>Accelerating AI Ethics</em> from Oxford’s Institute for Ethics in AI, Dr Caroline Green speaks with Professor Ignacio Cofone, Professor of AI Law and Regulation, about his book <em>The Privacy Fallacy: Harm and Power in the Information Economy</em>.</p><p>They discuss why today’s consent-based privacy laws are ill-suited to an AI-driven data economy, how data harms are often relational rather than individual, and why focusing on data use - rather than data collection - is key to preventing exploitation while enabling innovation. The conversation also explores power dynamics in the tech ecosystem, harm liability, and what meaningful reform of data protection law could look like in the age of AI.</p>]]></content:encoded><link><![CDATA[https://accelerating-ai-ethics.captivate.fm]]></link><guid isPermaLink="false">ce7b7035-8315-434e-9946-91da34c7ebcd</guid><itunes:image href="https://artwork.captivate.fm/cef2e49c-bc37-4343-a74e-221ed29b74d4/AFP-Ignacio-Podcast-art-cover.jpg"/><pubDate>Tue, 20 Jan 2026 16:00:00 +0000</pubDate><enclosure url="https://episodes.captivate.fm/episode/ce7b7035-8315-434e-9946-91da34c7ebcd.mp3" length="75170715" type="audio/mpeg"/><itunes:duration>39:09</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>5</itunes:episode><podcast:episode>5</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/40eb13f5-edaa-4df0-9ca6-ba6fe87f20e4/index.html" type="text/html"/><podcast:alternateEnclosure type="video/youtube" 
title="Accelerating AI Ethics: Is Our Privacy Law Ready for the Age of AI?"><podcast:source uri="https://youtu.be/gjHOeb8nebM"/></podcast:alternateEnclosure></item><item><title>From Philosophy to Code: The Role of the Humanities in the AI Age</title><itunes:title>From Philosophy to Code: The Role of the Humanities in the AI Age</itunes:title><description><![CDATA[<p>Brendan McCord Podcast: https://blog.cosmos-institute.org/p/the-claude-boys</p>]]></description><content:encoded><![CDATA[<p>Brendan McCord Podcast: https://blog.cosmos-institute.org/p/the-claude-boys</p>]]></content:encoded><link><![CDATA[https://accelerating-ai-ethics.captivate.fm]]></link><guid isPermaLink="false">8ed3d38b-3231-4a28-a8ce-9e4a485cf7a2</guid><itunes:image href="https://artwork.captivate.fm/09b24bec-b00d-4fb6-96e4-6578d52288ea/AFP-Podcast-episode-art-name-will-be-generated-in-software.jpg"/><pubDate>Mon, 10 Nov 2025 16:00:00 +0000</pubDate><enclosure url="https://episodes.captivate.fm/episode/8ed3d38b-3231-4a28-a8ce-9e4a485cf7a2.mp3" length="53747901" type="audio/mpeg"/><itunes:duration>44:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>4</itunes:episode><podcast:episode>4</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/105984e1-c493-4053-b7dc-ddcc2fd381ec/index.html" type="text/html"/><podcast:alternateEnclosure type="video/youtube" title="Accelerating AI Ethics Podcast: From Philosophy to Code: The Role of the Humanities in the AI Age"><podcast:source uri="https://youtu.be/SIhNy40uPrg"/></podcast:alternateEnclosure></item><item><title>6-Pack of Care: Ambassador Audrey Tang and Dr Caroline Green Introduce the Civic Care Approach</title><itunes:title>6-Pack of Care: Ambassador Audrey Tang and Dr Caroline Green Introduce the Civic Care Approach</itunes:title><description><![CDATA[<p><strong>Episode 
Summary</strong></p><p>In this episode, Dr Caroline Green is joined by Ambassador Audrey Tang to introduce the <strong>“6-Pack of Care”</strong> framework—a practical architecture for embedding <em>civic care</em> into AI governance. Moving beyond abstract debates about AI futures, Tang and Green explore how attentiveness, responsibility, competence, responsiveness, solidarity, and symbiosis can form the foundation for AI systems that strengthen human relationships rather than undermine them. From real-world applications in social care to global policy discussions, this conversation offers hopeful, actionable pathways for creating technology that supports pluralism, community, and relational health.</p><p><strong>Guest Bio – Ambassador Audrey Tang</strong></p><p><strong>Ambassador Audrey Tang</strong> is a Fellow of the Institute for Ethics in AI's Accelerator Programme, and a former Taiwanese digital minister, civic hacker, and global advocate for digital democracy. As Taiwan’s former Minister of Digital Affairs, Tang pioneered radical transparency, open government, and participatory digital tools that brought citizens directly into policy-making. Known for their leadership in building pluralistic, collaborative frameworks for technology governance, Tang continues to advise international bodies, research institutes, and civic groups on AI ethics, digital rights, and democratic innovation. 
Their work bridges philosophy, policy, and engineering, focusing on how technology can nurture civic participation and collective flourishing.</p><p><strong>Topics Covered</strong></p><ul><li>Moving from the vision of <strong>plurality</strong> to the architecture of <strong>civic care</strong></li><li>Defining <em>civic care</em> as designing AI around relational health and community needs</li><li>The <strong>6-Pack of Care</strong> framework:</li></ul><br/><ol><li><strong>Attentiveness </strong>– noticing needs before optimising outcomes</li><li> <strong>Responsibility</strong> – public pledges, accountability, and alignment assemblies</li><li><strong>Competence </strong>– delivering support that strengthens, not weakens, human relationships</li><li><strong>Responsiveness</strong> – designing adaptable systems that empower those closest to harms</li><li><strong>Solidarity </strong>– building infrastructures of cooperation, interoperability, and portability</li><li><strong>Symbiosis</strong> – bounded, community-rooted AI (the <em>kami</em> metaphor) instead of singularity</li></ol><br/><ul><li>Applications of civic care in <strong>social care systems</strong> and <strong>family caregiving</strong></li><li>The role of <strong>AI in co-production</strong> and amplifying unheard voices in policymaking</li><li>Tang’s reflections on <strong>telepresence, co-presence, and re-presence</strong> in diplomacy and civic life</li><li>Practical tools such as <strong>alignment assemblies</strong>, <strong>sense-making</strong>, and <strong>WEVAL.org</strong></li><li>Why <strong>plurality, solidarity, and symbiosis</strong> must guide AI policy and global governance</li></ul><br/><p><strong>Resources and Links</strong></p><ul><li><strong>The 6-Pack of Care microsite</strong> – <a href="https://6pack.care" rel="noopener noreferrer" target="_blank">https://6pack.care</a></li><li><strong>Accelerator Fellowship Programme</strong> – Institute for Ethics in AI, University of 
Oxford</li><li><strong>WEVAL (Wiki Evaluation Platform)</strong> – <a href="https://weval.org" rel="noopener noreferrer" target="_blank">https://weval.org</a></li><li><strong>Dedicate </strong>(AI care assistant for family caregivers) – <a href="https://dedicate.life" rel="noopener noreferrer" target="_blank">https://dedicate.life</a></li><li><strong>Collective Intelligence Project</strong> – <a href="https://collective-intelligence-project.org" rel="noopener noreferrer" target="_blank">https://collective-intelligence-project.org</a></li></ul><br/>]]></description><content:encoded><![CDATA[<p><strong>Episode Summary</strong></p><p>In this episode, Dr Caroline Green is joined by Ambassador Audrey Tang to introduce the <strong>“6-Pack of Care”</strong> framework—a practical architecture for embedding <em>civic care</em> into AI governance. Moving beyond abstract debates about AI futures, Tang and Green explore how attentiveness, responsibility, competence, responsiveness, solidarity, and symbiosis can form the foundation for AI systems that strengthen human relationships rather than undermine them. From real-world applications in social care to global policy discussions, this conversation offers hopeful, actionable pathways for creating technology that supports pluralism, community, and relational health.</p><p><strong>Guest Bio – Ambassador Audrey Tang</strong></p><p><strong>Ambassador Audrey Tang</strong> is a Fellow of the Institute for Ethics in AI's Accelerator Programme, and a former Taiwanese digital minister, civic hacker, and global advocate for digital democracy. As Taiwan’s former Minister of Digital Affairs, Tang pioneered radical transparency, open government, and participatory digital tools that brought citizens directly into policy-making. 
Known for their leadership in building pluralistic, collaborative frameworks for technology governance, Tang continues to advise international bodies, research institutes, and civic groups on AI ethics, digital rights, and democratic innovation. Their work bridges philosophy, policy, and engineering, focusing on how technology can nurture civic participation and collective flourishing.</p><p><strong>Topics Covered</strong></p><ul><li>Moving from the vision of <strong>plurality</strong> to the architecture of <strong>civic care</strong></li><li>Defining <em>civic care</em> as designing AI around relational health and community needs</li><li>The <strong>6-Pack of Care</strong> framework:</li></ul><br/><ol><li><strong>Attentiveness </strong>– noticing needs before optimising outcomes</li><li> <strong>Responsibility</strong> – public pledges, accountability, and alignment assemblies</li><li><strong>Competence </strong>– delivering support that strengthens, not weakens, human relationships</li><li><strong>Responsiveness</strong> – designing adaptable systems that empower those closest to harms</li><li><strong>Solidarity </strong>– building infrastructures of cooperation, interoperability, and portability</li><li><strong>Symbiosis</strong> – bounded, community-rooted AI (the <em>kami</em> metaphor) instead of singularity</li></ol><br/><ul><li>Applications of civic care in <strong>social care systems</strong> and <strong>family caregiving</strong></li><li>The role of <strong>AI in co-production</strong> and amplifying unheard voices in policymaking</li><li>Tang’s reflections on <strong>telepresence, co-presence, and re-presence</strong> in diplomacy and civic life</li><li>Practical tools such as <strong>alignment assemblies</strong>, <strong>sense-making</strong>, and <strong>WEVAL.org</strong></li><li>Why <strong>plurality, solidarity, and symbiosis</strong> must guide AI policy and global governance</li></ul><br/><p><strong>Resources and 
Links</strong></p><ul><li><strong>The 6-Pack of Care microsite</strong> – <a href="https://6pack.care" rel="noopener noreferrer" target="_blank">https://6pack.care</a></li><li><strong>Accelerator Fellowship Programme</strong> – Institute for Ethics in AI, University of Oxford</li><li><strong>WEVAL (Wiki Evaluation Platform)</strong> – <a href="https://weval.org" rel="noopener noreferrer" target="_blank">https://weval.org</a></li><li><strong>Dedicate </strong>(AI care assistant for family caregivers) – <a href="https://dedicate.life" rel="noopener noreferrer" target="_blank">https://dedicate.life</a></li><li><strong>Collective Intelligence Project</strong> – <a href="https://collective-intelligence-project.org" rel="noopener noreferrer" target="_blank">https://collective-intelligence-project.org</a></li></ul><br/>]]></content:encoded><link><![CDATA[https://accelerating-ai-ethics.captivate.fm]]></link><guid isPermaLink="false">807eb9d2-c855-40f7-aa51-fb052e1f7615</guid><itunes:image href="https://artwork.captivate.fm/7a48ffd8-2bd5-4ebc-b88b-f5e958898784/AFP-Podcast-episode-art-Civic-Care-Approach.jpg"/><pubDate>Wed, 24 Sep 2025 16:00:00 +0000</pubDate><enclosure url="https://episodes.captivate.fm/episode/807eb9d2-c855-40f7-aa51-fb052e1f7615.mp3" length="98379047" type="audio/mpeg"/><itunes:duration>58:33</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>3</itunes:episode><podcast:episode>3</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/8928e737-1446-40cc-900a-ea452670f670/index.html" type="text/html"/></item><item><title>AI and Human Rights: Professor Yuval Shany on AI, Law, and Global Accountability</title><itunes:title>AI and Human Rights: Professor Yuval Shany on AI, Law, and Global Accountability</itunes:title><description><![CDATA[<p><strong>Episode summary</strong></p><p>How can human rights 
frameworks keep pace with the rapid development and global impact of artificial intelligence? In this episode of <em>Accelerating AI Ethics</em>, Professor Yuval Shany, a leading international law scholar and 2024–25 Fellow of the Accelerator Fellowship Programme at the Institute for Ethics in AI, University of Oxford, explores how legal systems can help ensure AI supports, rather than threatens, fundamental rights.</p><p>In conversation with Dr Caroline Green, Professor Shany considers the case for an AI Bill of Rights, the challenge of regulating powerful private actors, and how international law might evolve to meet the demands of a technological era. This timely and far-reaching conversation addresses the legal, ethical, and democratic foundations of AI governance.</p><p><strong>Professor Yuval Shany</strong></p><p>Hersch Lauterpacht Chair in Public International Law at the Hebrew University of Jerusalem, former Chair of the United Nations Human Rights Committee, and 2024–25 Fellow of the Accelerator Fellowship Programme at the Institute for Ethics in AI, University of Oxford. Professor Shany’s research focuses on international law, human rights, and the regulation of emerging technologies.</p><p><strong>Topics covered</strong></p><ul><li>The rationale for an AI Bill of Rights</li><li>Emerging gaps between private technological power and public oversight</li><li>Why international law is still a vital tool for AI governance</li><li>Balancing innovation and legitimacy in legal frameworks</li><li>Opportunities and constraints in current human rights instruments</li><li>How democratic accountability must be embedded into the design of AI systems</li></ul><br/><p><strong>Resources and links</strong></p><ul><li><a href="https://artificialintelligenceact.eu/">European Commission – Artificial Intelligence Act (AIA)</a></li><li><a href="https://www.whitehouse.gov/ostp/ai-bill-of-rights/">U.S. 
Blueprint for an AI Bill of Rights</a></li><li><a href="https://oecd.ai/en/ai-principles">OECD AI Principles</a></li><li><a href="https://www.coe.int/en/web/artificial-intelligence/cai">Council of Europe – Committee on Artificial Intelligence (CAI)</a></li><li><a href="https://unesdoc.unesco.org/ark:/48223/pf0000380455">UNESCO Recommendation on the Ethics of Artificial Intelligence</a></li><li><a href="https://www.ohchr.org/en/treaty-bodies/ccpr">United Nations Human Rights Committee</a></li></ul><br/>]]></description><content:encoded><![CDATA[<p><strong>Episode summary</strong></p><p>How can human rights frameworks keep pace with the rapid development and global impact of artificial intelligence? In this episode of <em>Accelerating AI Ethics</em>, Professor Yuval Shany, a leading international law scholar and 2024–25 Fellow of the Accelerator Fellowship Programme at the Institute for Ethics in AI, University of Oxford, explores how legal systems can help ensure AI supports, rather than threatens, fundamental rights.</p><p>In conversation with Dr Caroline Green, Professor Shany considers the case for an AI Bill of Rights, the challenge of regulating powerful private actors, and how international law might evolve to meet the demands of a technological era. This timely and far-reaching conversation addresses the legal, ethical, and democratic foundations of AI governance.</p><p><strong>Professor Yuval Shany</strong></p><p>Hersch Lauterpacht Chair in Public International Law at the Hebrew University of Jerusalem, former Chair of the United Nations Human Rights Committee, and 2024–25 Fellow of the Accelerator Fellowship Programme at the Institute for Ethics in AI, University of Oxford. Professor Shany’s research focuses on international law, human rights, and the regulation of emerging technologies. 
</p><p><strong>Topics covered</strong></p><ul><li>The rationale for an AI Bill of Rights</li><li>Emerging gaps between private technological power and public oversight</li><li>Why international law is still a vital tool for AI governance</li><li>Balancing innovation and legitimacy in legal frameworks</li><li>Opportunities and constraints in current human rights instruments</li><li>How democratic accountability must be embedded into the design of AI systems</li></ul><br/><p><strong>Resources and links</strong></p><ul><li><a href="https://artificialintelligenceact.eu/">European Commission – Artificial Intelligence Act (AIA)</a></li><li><a href="https://www.whitehouse.gov/ostp/ai-bill-of-rights/">U.S. Blueprint for an AI Bill of Rights</a></li><li><a href="https://oecd.ai/en/ai-principles">OECD AI Principles</a></li><li><a href="https://www.coe.int/en/web/artificial-intelligence/cai">Council of Europe – Committee on Artificial Intelligence (CAI)</a></li><li><a href="https://unesdoc.unesco.org/ark:/48223/pf0000380455">UNESCO Recommendation on the Ethics of Artificial Intelligence</a></li><li><a href="https://www.ohchr.org/en/treaty-bodies/ccpr">United Nations Human Rights Committee</a></li></ul><br/>]]></content:encoded><link><![CDATA[https://accelerating-ai-ethics.captivate.fm]]></link><guid isPermaLink="false">a5d93344-904a-42ac-b6e4-ae77458aaf80</guid><itunes:image href="https://artwork.captivate.fm/4892054b-c1b5-4e1a-93c5-0af1a334f55b/OsaV0KN6ThsSqwfcCXXgPY9n.jpg"/><pubDate>Fri, 01 Aug 2025 16:00:00 +0000</pubDate><enclosure url="https://episodes.captivate.fm/episode/a5d93344-904a-42ac-b6e4-ae77458aaf80.mp3" length="59205924" type="audio/mpeg"/><itunes:duration>49:20</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>2</itunes:episode><podcast:episode>2</podcast:episode><podcast:season>1</podcast:season><podcast:transcript 
url="https://transcripts.captivate.fm/transcript/9c148040-c075-41fb-b956-40cbeaa03448/index.html" type="text/html"/></item><item><title>AI and Democracy: Ambassador Audrey Tang on Plurality in Practice, Transparency and Collective Intelligence</title><itunes:title>AI and Democracy: Ambassador Audrey Tang on Plurality in Practice, Transparency and Collective Intelligence</itunes:title><description><![CDATA[<p><strong>Episode summary</strong></p><p>What if AI could strengthen democracy instead of destabilising it? </p><p>In this opening episode, Ambassador Audrey Tang, Taiwan’s first Digital Minister and a Fellow of the Accelerator Fellowship Programme at the Institute for Ethics in AI, University of Oxford, shares a bold and hopeful vision of digital innovation shaped by the values of openness, accountability, and civic empowerment. In conversation with Dr Caroline Green, Tang reflects on her own journey from civic hacker to government minister, the role of “radical transparency” in building trust, and how plurality can serve as a design principle for both technology and democracy.</p><p><strong>Ambassador Audrey Tang</strong></p><p>Fellow of the Accelerator Fellowship Programme at the Institute for Ethics in AI, University of Oxford. Digital Minister (Taiwan, 2016–2024), Civic Technologist, and 2024–25 Fellow at Oxford’s AI Ethics Accelerator. 
Ambassador Tang is internationally recognised for pioneering work in open government, participatory democracy, and civic technology, including the development of the <em>vTaiwan</em> platform and the global project <em>Plurality</em>.</p><p><strong>Topics covered</strong></p><ul><li>What “plurality” means in a technological context</li><li>Using AI to support collective intelligence, not replace it</li><li>The practice of radical transparency in digital governance</li><li>How pro-social media and open-source approaches build trust</li><li>Reflections on moving from civic activism into public office</li><li>New directions for her Fellowship project at Oxford</li></ul><br/><p><strong>Resources and links</strong></p><ul><li><a href="https://www.youtube.com/watch?v=idudNrLy8ek">Short film: Good Enough Ancestors</a></li><li><a href="https://plurality.net">Ambassador Tang's project</a></li><li><a href="https://afp.oxford-aiethics.ox.ac.uk/people/ambassador-audrey-tang">Ambassador Tang's AFP Fellow profile</a></li></ul><br/>]]></description><content:encoded><![CDATA[<p><strong>Episode summary</strong></p><p>What if AI could strengthen democracy instead of destabilising it? </p><p>In this opening episode, Ambassador Audrey Tang, Taiwan’s first Digital Minister and a Fellow of the Accelerator Fellowship Programme at the Institute for Ethics in AI, University of Oxford, shares a bold and hopeful vision of digital innovation shaped by the values of openness, accountability, and civic empowerment. In conversation with Dr Caroline Green, Tang reflects on her own journey from civic hacker to government minister, the role of “radical transparency” in building trust, and how plurality can serve as a design principle for both technology and democracy.</p><p><strong>Ambassador Audrey Tang</strong></p><p>Fellow of the Accelerator Fellowship Programme at the Institute for Ethics in AI, University of Oxford. 
Digital Minister (Taiwan, 2016–2024), Civic Technologist, and 2024–25 Fellow at Oxford’s AI Ethics Accelerator. Ambassador Tang is internationally recognised for pioneering work in open government, participatory democracy, and civic technology, including the development of the <em>vTaiwan</em> platform and the global project <em>Plurality</em>.</p><p><strong>Topics covered</strong></p><ul><li>What “plurality” means in a technological context</li><li>Using AI to support collective intelligence, not replace it</li><li>The practice of radical transparency in digital governance</li><li>How pro-social media and open-source approaches build trust</li><li>Reflections on moving from civic activism into public office</li><li>New directions for her Fellowship project at Oxford</li></ul><br/><p><strong>Resources and links</strong></p><ul><li><a href="https://www.youtube.com/watch?v=idudNrLy8ek">Short film: Good Enough Ancestors</a></li><li><a href="https://plurality.net">Ambassador Tang's project</a></li><li><a href="https://afp.oxford-aiethics.ox.ac.uk/people/ambassador-audrey-tang">Ambassador Tang's AFP Fellow profile</a></li></ul><br/>]]></content:encoded><link><![CDATA[https://accelerating-ai-ethics.captivate.fm]]></link><guid isPermaLink="false">09c27af1-5f54-444d-aaf3-234df42eaf3c</guid><itunes:image href="https://artwork.captivate.fm/104acb18-ebc3-47d6-a425-fdaabb82af2f/FQLkEmCX1AY2lOruENOlZkj-.jpg"/><pubDate>Tue, 29 Jul 2025 00:15:00 +0000</pubDate><enclosure url="https://episodes.captivate.fm/episode/09c27af1-5f54-444d-aaf3-234df42eaf3c.mp3" length="97877287" type="audio/mpeg"/><itunes:duration>58:15</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>1</itunes:episode><podcast:episode>1</podcast:episode><podcast:season>1</podcast:season></item><item><title>Accelerating AI Ethics TRAILER</title><itunes:title>Accelerating AI Ethics 
TRAILER</itunes:title><description><![CDATA[<p>AI is transforming our world.&nbsp;</p><p>But there are many ethical considerations, from how AI is changing our ways of working to potentially deepening social inequalities instead of creating new opportunities.&nbsp;That's why we're here: to spark urgent conversations about the most pressing ethical issues in AI.&nbsp;</p><p>The Accelerator Fellowship Programme at the Institute for Ethics in AI, University of Oxford, brings together experts from civil society, industry, government and academia to address these ethical challenges head-on.&nbsp;</p><p>We explore topics such as the implications of AI for creativity, healthcare, global regulation and many more.&nbsp;</p><p>Our podcast features guests from diverse backgrounds and disciplines because we believe it is important to hear all perspectives and create an inclusive space where diverse opinions are welcome.&nbsp;</p><p>Most episodes will be hosted by Dr Caroline Green, Director of Research at the Institute for Ethics in AI and Lead of the Accelerator Fellowship Programme.</p><p>Find out more about us here: [research_links] </p>]]></description><content:encoded><![CDATA[<p>AI is transforming our world.&nbsp;</p><p>But there are many ethical considerations, from how AI is changing our ways of working to potentially deepening social inequalities 
instead of creating new opportunities.&nbsp;That's why we're here: to spark urgent conversations about the most pressing ethical issues in AI.&nbsp;</p><p>The Accelerator Fellowship Programme at the Institute for Ethics in AI, University of Oxford, brings together experts from civil society, industry, government and academia to address these ethical challenges head-on.&nbsp;</p><p>We explore topics such as the implications of AI for creativity, healthcare, global regulation and many more.&nbsp;</p><p>Our podcast features guests from diverse backgrounds and disciplines because we believe it is important to hear all perspectives and create an inclusive space where diverse opinions are welcome.&nbsp;</p><p>Most episodes will be hosted by Dr Caroline Green, Director of Research at the Institute for Ethics in AI and Lead of the Accelerator Fellowship Programme.</p><p>Find out more about us here: [research_links] </p>]]></content:encoded><link><![CDATA[https://accelerating-ai-ethics.captivate.fm]]></link><guid isPermaLink="false">7d9c896e-5f5d-4052-a38c-ce0f3ef4f68b</guid><itunes:image href="https://artwork.captivate.fm/38992372-f6cf-4f38-9de7-bac8a0c132d2/HnVjPg1EHKrFp_j2TctrhJP8.jpg"/><pubDate>Mon, 28 Jul 2025 16:00:00 +0000</pubDate><enclosure url="https://episodes.captivate.fm/episode/7d9c896e-5f5d-4052-a38c-ce0f3ef4f68b.mp3" length="1982610" type="audio/mpeg"/><itunes:duration>01:11</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>trailer</itunes:episodeType><itunes:season>1</itunes:season><podcast:season>1</podcast:season></item></channel></rss>