<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="https://feeds.captivate.fm/style.xsl" type="text/xsl"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><atom:link href="https://feeds.captivate.fm/techpolicypress/" rel="self" type="application/rss+xml"/><title><![CDATA[The Tech Policy Press Podcast]]></title><podcast:guid>3baf15a6-f889-5358-aa3f-cd2fe2c1e4f3</podcast:guid><lastBuildDate>Sun, 19 Apr 2026 13:00:09 +0000</lastBuildDate><generator>Captivate.fm</generator><language><![CDATA[en]]></language><copyright><![CDATA[Copyright 2026 Tech Policy Press]]></copyright><managingEditor>Tech Policy Press</managingEditor><itunes:summary><![CDATA[Tech Policy Press is a nonprofit media and community venture intended to provoke new ideas, debate and discussion at the intersection of technology and democracy. 

You can find us at https://techpolicy.press/, where you can join the newsletter.]]></itunes:summary><image><url>https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png</url><title>The Tech Policy Press Podcast</title><link><![CDATA[https://techpolicy.press]]></link></image><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><itunes:owner><itunes:name>Tech Policy Press</itunes:name></itunes:owner><itunes:author>Tech Policy Press</itunes:author><description>Tech Policy Press is a nonprofit media and community venture intended to provoke new ideas, debate and discussion at the intersection of technology and democracy. 

You can find us at https://techpolicy.press/, where you can join the newsletter.</description><link>https://techpolicy.press</link><atom:link href="https://pubsubhubbub.appspot.com" rel="hub"/><itunes:subtitle><![CDATA[The intersection of technology and democracy]]></itunes:subtitle><itunes:explicit>false</itunes:explicit><itunes:type>episodic</itunes:type><itunes:category text="Technology"></itunes:category><itunes:category text="News"><itunes:category text="Tech News"/></itunes:category><itunes:category text="News"><itunes:category text="Politics"/></itunes:category><podcast:locked>no</podcast:locked><podcast:medium>podcast</podcast:medium><item><title>Why Palantir&apos;s ImmigrationOS Endangers Democracy and the Rule of Law</title><itunes:title>Why Palantir&apos;s ImmigrationOS Endangers Democracy and the Rule of Law</itunes:title><description><![CDATA[<p>What if the most consequential immigration policy decisions in America aren't being made by elected officials, or even by government agencies—but by software? Right now, a sprawling ecosystem of private technology vendors is quietly reshaping who gets flagged, detained, and deported in the United States. At the center of it is Palantir's <a href="https://www.wired.com/story/ice-palantir-immigrationos/" rel="noopener noreferrer" target="_blank">ImmigrationOS</a>, a platform for end-to-end automated enforcement. But it’s just one piece of a much larger machine.</p><p>Today we’ll hear from the authors of <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6345099" rel="noopener noreferrer" target="_blank">a new law review article</a> that argues that private tech vendors have become a third governing power in American immigration—sitting between the federal government and the states, encoding policy into code, and building infrastructure that increasingly poses a threat to democracy and the rule of law. 
Guests include:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Chinmayi Sharma</strong>, an associate professor at Fordham Law School who is also affiliated with the Strauss Center at the University of Texas, the Atlantic Council Cyber Statecraft Initiative, the Center for Democracy and Technology, the Georgetown Center on Privacy and Technology, and the Center for AI and Digital Policy.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Sam Adler</strong>, a third-year law student at Fordham Law School.</li></ol><br/>]]></description><content:encoded><![CDATA[<p>What if the most consequential immigration policy decisions in America aren't being made by elected officials, or even by government agencies—but by software? Right now, a sprawling ecosystem of private technology vendors is quietly reshaping who gets flagged, detained, and deported in the United States. At the center of it is Palantir's <a href="https://www.wired.com/story/ice-palantir-immigrationos/" rel="noopener noreferrer" target="_blank">ImmigrationOS</a>, a platform for end-to-end automated enforcement. But it’s just one piece of a much larger machine.</p><p>Today we’ll hear from the authors of <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6345099" rel="noopener noreferrer" target="_blank">a new law review article</a> that argues that private tech vendors have become a third governing power in American immigration—sitting between the federal government and the states, encoding policy into code, and building infrastructure that increasingly poses a threat to democracy and the rule of law. 
Guests include:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Chinmayi Sharma</strong>, an associate professor at Fordham Law School who is also affiliated with the Strauss Center at the University of Texas, the Atlantic Council Cyber Statecraft Initiative, the Center for Democracy and Technology, the Georgetown Center on Privacy and Technology, and the Center for AI and Digital Policy.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Sam Adler</strong>, a third-year law student at Fordham Law School.</li></ol><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/why-palantirs-immigrationos-endangers-democracy-and-the-rule-of-law]]></link><guid isPermaLink="false">9f94823a-efdb-4795-8cda-f275ebf8302c</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 19 Apr 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/9f94823a-efdb-4795-8cda-f275ebf8302c.mp3" length="40046469" type="audio/mpeg"/><itunes:duration>41:43</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What to Do If the AI Bubble Bursts</title><itunes:title>What to Do If the AI Bubble Bursts</itunes:title><description><![CDATA[<p>If you read, watch, or listen to financial news, you’ll find there is a surge in discussion over whether the AI boom is a bubble, and what the consequences might be if it bursts. 
Today’s guest says that if such a crash occurs, it will represent a significant policy opportunity—a potential point of intervention that could lead to meaningful reform of the tech sector.</p><p><strong>Asad Ramzanali</strong> is the Director of AI and Technology Policy at the Vanderbilt Policy Accelerator for Political Economy and Regulation, and author of the recent report, "<a href="https://cdn.vanderbilt.edu/vu-URL/wp-content/uploads/sites/412/2026/03/23144242/After-the-AI-Crash.pdf" rel="noopener noreferrer" target="_blank">After the AI Crash</a>."</p><p>"Instead of waiting for the crisis and hastily developing insufficient policies, lawmakers should prepare for this anticipated crisis now," he says.</p>]]></description><content:encoded><![CDATA[<p>If you read, watch, or listen to financial news, you’ll find there is a surge in discussion over whether the AI boom is a bubble, and what the consequences might be if it bursts. Today’s guest says that if such a crash occurs, it will represent a significant policy opportunity—a potential point of intervention that could lead to meaningful reform of the tech sector.</p><p><strong>Asad Ramzanali</strong> is the Director of AI and Technology Policy at the Vanderbilt Policy Accelerator for Political Economy and Regulation, and author of the recent report, "<a href="https://cdn.vanderbilt.edu/vu-URL/wp-content/uploads/sites/412/2026/03/23144242/After-the-AI-Crash.pdf" rel="noopener noreferrer" target="_blank">After the AI Crash</a>."</p><p>"Instead of waiting for the crisis and hastily developing insufficient policies, lawmakers should prepare for this anticipated crisis now," he says.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-to-do-if-the-ai-bubble-bursts]]></link><guid isPermaLink="false">a49da349-5fa7-41fb-8ebf-d6371b66198a</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 12 Apr 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/a49da349-5fa7-41fb-8ebf-d6371b66198a.mp3" length="29304891" type="audio/mpeg"/><itunes:duration>30:32</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Project Maven and the Age of AI Warfare</title><itunes:title>Project Maven and the Age of AI Warfare</itunes:title><description><![CDATA[<p>Project Maven, a Department of Defense program launched in April 2017 to apply AI in military targeting and logistics, is now being used in live combat. <strong>Katrina Manson</strong> is a reporter and the author of <em><a href="https://www.strandbooks.com/project-maven-a-marine-colonel-his-team-and-the-dawn-of-ai-warfare-9781324123316.html" rel="noopener noreferrer" target="_blank">Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare</a></em>, a book just published by W.W. Norton &amp; Company that tells the history of the program. <strong>Justin Hendrix</strong> spoke to her about the book and about recent events, including the use of AI targeting in the war in Iran and the battle between the Pentagon and Anthropic over 'red lines' such as the use of AI for lethal autonomous weapons. </p>]]></description><content:encoded><![CDATA[<p>Project Maven, a Department of Defense program launched in April 2017 to apply AI in military targeting and logistics, is now being used in live combat. 
<strong>Katrina Manson</strong> is a reporter and the author of <em><a href="https://www.strandbooks.com/project-maven-a-marine-colonel-his-team-and-the-dawn-of-ai-warfare-9781324123316.html" rel="noopener noreferrer" target="_blank">Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare</a></em>, a book just published by W.W. Norton &amp; Company that tells the history of the program. <strong>Justin Hendrix</strong> spoke to her about the book and about recent events, including the use of AI targeting in the war in Iran and the battle between the Pentagon and Anthropic over 'red lines' such as the use of AI for lethal autonomous weapons. </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/maven]]></link><guid isPermaLink="false">124d986b-2118-45e0-a12f-a34d6351f01a</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 09 Apr 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/124d986b-2118-45e0-a12f-a34d6351f01a.mp3" length="45289758" type="audio/mpeg"/><itunes:duration>47:11</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>X is a Preferred Tool for American Propaganda. What Does It Mean?</title><itunes:title>X is a Preferred Tool for American Propaganda. What Does It Mean?</itunes:title><description><![CDATA[<p>Last week, The Guardian <a href="https://www.theguardian.com/us-news/2026/mar/30/embassies-campaign-marco-rubio-elon-musk" rel="noopener noreferrer" target="_blank">reported</a> that United States Secretary of State <strong>Marco Rubio</strong> has directed American embassies and consulates to counter foreign propaganda. 
Notably, the cable apparently endorses <strong>Elon Musk’s</strong> X as an “innovative” tool to help do it, even as it directs diplomats to coordinate with the US military’s psychological operations unit to counter what the administration deems disinformation. </p><p>Today’s guest is <strong>Kate Klonick</strong>, a law professor at St. John's University and a senior editor at Lawfare. In a <a href="https://www.lawfaremedia.org/article/the-state-department-s-x-directive-and-the-end-of-platform-independence" rel="noopener noreferrer" target="_blank">piece on Lawfare</a> last week, Klonick says that the State Department issuing a formal cable endorsing a specific social media platform for use in its messaging—and doing so in the same document in which it encourages collaboration with military psychological operations—would have been nearly unthinkable until recent months. But it’s just the latest in a series of developments that suggest Elon Musk’s X is regarded as the preferred tool of the state. Let’s jump right in. </p>]]></description><content:encoded><![CDATA[<p>Last week, The Guardian <a href="https://www.theguardian.com/us-news/2026/mar/30/embassies-campaign-marco-rubio-elon-musk" rel="noopener noreferrer" target="_blank">reported</a> that United States Secretary of State <strong>Marco Rubio</strong> has directed American embassies and consulates to counter foreign propaganda. Notably, the cable apparently endorses <strong>Elon Musk’s</strong> X as an “innovative” tool to help do it, even as it directs diplomats to coordinate with the US military’s psychological operations unit to counter what the administration deems disinformation. </p><p>Today’s guest is <strong>Kate Klonick</strong>, a law professor at St. John's University and a senior editor at Lawfare. 
In a <a href="https://www.lawfaremedia.org/article/the-state-department-s-x-directive-and-the-end-of-platform-independence" rel="noopener noreferrer" target="_blank">piece on Lawfare</a> last week, Klonick says that the State Department issuing a formal cable endorsing a specific social media platform for use in its messaging—and doing so in the same document in which it encourages collaboration with military psychological operations—would have been nearly unthinkable until recent months. But it’s just the latest in a series of developments that suggest Elon Musk’s X is regarded as the preferred tool of the state. Let’s jump right in. </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/x-is-a-preferred-tool-for-american-propaganda-what-does-it-mean]]></link><guid isPermaLink="false">749540c3-1c25-41d3-9f1a-289a1f60ad64</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 05 Apr 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/749540c3-1c25-41d3-9f1a-289a1f60ad64.mp3" length="32621816" type="audio/mpeg"/><itunes:duration>33:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Olivier Sylvain Wants to Reclaim the Internet from Big Tech</title><itunes:title>Olivier Sylvain Wants to Reclaim the Internet from Big Tech</itunes:title><description><![CDATA[<p>This was a landmark week for tech accountability in US courts. 
Juries in New Mexico and California <a href="https://www.techpolicy.press/landmark-verdicts-could-unleash-new-legal-playbook-over-social-media-harms/" rel="noopener noreferrer" target="_blank">delivered verdicts</a> finding tech giants Meta and Google liable for harms to young users on their platforms, decisions that are projected to open the door to more lawsuits alleging that social media creates addiction or endangers kids.</p><p>Today’s guest sees these developments as positive and in line with the types of thinking he believes will help improve the internet. <strong>Olivier Sylvain</strong> is a professor at Fordham Law School and the author of a <a href="https://www.bookculture.com/book/9781967190126" rel="noopener noreferrer" target="_blank">new book</a> titled <em>Reclaiming the Internet: How Big Tech Took Control—and How We Can Take It Back</em>, published by Columbia Global Reports. </p><p><strong>Justin Hendrix </strong>interviewed him at <a href="https://www.bookculture.com/event/112th-olivier-sylvain-justin-hendrix" rel="noopener noreferrer" target="_blank">Book Culture</a>, a bookstore on 112th Street in New York City.</p>]]></description><content:encoded><![CDATA[<p>This was a landmark week for tech accountability in US courts. Juries in New Mexico and California <a href="https://www.techpolicy.press/landmark-verdicts-could-unleash-new-legal-playbook-over-social-media-harms/" rel="noopener noreferrer" target="_blank">delivered verdicts</a> finding tech giants Meta and Google liable for harms to young users on their platforms, decisions that are projected to open the door to more lawsuits alleging that social media creates addiction or endangers kids.</p><p>Today’s guest sees these developments as positive and in line with the types of thinking he believes will help improve the internet. 
<strong>Olivier Sylvain</strong> is a professor at Fordham Law School and the author of a <a href="https://www.bookculture.com/book/9781967190126" rel="noopener noreferrer" target="_blank">new book</a> titled <em>Reclaiming the Internet: How Big Tech Took Control—and How We Can Take It Back</em>, published by Columbia Global Reports. </p><p><strong>Justin Hendrix </strong>interviewed him at <a href="https://www.bookculture.com/event/112th-olivier-sylvain-justin-hendrix" rel="noopener noreferrer" target="_blank">Book Culture</a>, a bookstore on 112th Street in New York City.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/olivier-sylvain-wants-to-reclaim-the-internet-from-big-tech]]></link><guid isPermaLink="false">c523feec-0f2b-4435-8116-9bdf61ace29d</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 29 Mar 2026 09:15:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/c523feec-0f2b-4435-8116-9bdf61ace29d.mp3" length="43804309" type="audio/mpeg"/><itunes:duration>45:38</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How to Study the Phenomenon of Tech Hype</title><itunes:title>How to Study the Phenomenon of Tech Hype</itunes:title><description><![CDATA[<p>AI hype is everywhere, and the CEOs of many tech firms are promising that the tech will soon eclipse human intelligence. The trillions in investment toward this goal, and the massive deployment of capital and the human and natural resources it purchases, both require this kind of hype and cause it to compound. </p><p>Today’s guests are studying this phenomenon from a variety of perspectives, building out a line of inquiry they call "<a href="https://hypestudies.org/" rel="noopener noreferrer" target="_blank">Hype Studies</a>." 
It's the subject of <a href="https://www.techpolicy.press/category/the-hype-studies-series/" rel="noopener noreferrer" target="_blank">an occasional series of contributions</a> to Tech Policy Press. Guests include:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Jascha Bareis</strong>, a postdoctoral political scientist at the University of Fribourg;</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Andreu Belsunces Gonçalves</strong>, a sociologist of design and technology pursuing a PhD at the Tecnopolítica unit of the Open University of Catalonia;</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Marché Arends</strong>, a South African independent investigative journalist.</li></ol><br/>]]></description><content:encoded><![CDATA[<p>AI hype is everywhere, and the CEOs of many tech firms are promising that the tech will soon eclipse human intelligence. The trillions in investment toward this goal, and the massive deployment of capital and the human and natural resources it purchases, both require this kind of hype and cause it to compound. </p><p>Today’s guests are studying this phenomenon from a variety of perspectives, building out a line of inquiry they call "<a href="https://hypestudies.org/" rel="noopener noreferrer" target="_blank">Hype Studies</a>." It's the subject of <a href="https://www.techpolicy.press/category/the-hype-studies-series/" rel="noopener noreferrer" target="_blank">an occasional series of contributions</a> to Tech Policy Press. 
Guests include:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Jascha Bareis</strong>, a postdoctoral political scientist at the University of Fribourg;</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Andreu Belsunces Gonçalves</strong>, a sociologist of design and technology pursuing a PhD at the Tecnopolítica unit of the Open University of Catalonia;</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Marché Arends</strong>, a South African independent investigative journalist.</li></ol><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-to-study-the-phenomenon-of-tech-hype]]></link><guid isPermaLink="false">07535ed9-59a7-45c5-859d-86903c6cff63</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 29 Mar 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/07535ed9-59a7-45c5-859d-86903c6cff63.mp3" length="43754158" type="audio/mpeg"/><itunes:duration>45:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Considering How AI Destroys Democratic Institutions</title><itunes:title>Considering How AI Destroys Democratic Institutions</itunes:title><description><![CDATA[<p>Across the world, governments and other institutions are racing to apply artificial intelligence in countless ways. 
In <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623" rel="noopener noreferrer" target="_blank">a draft paper</a> forthcoming in the UC Law Journal titled "How AI Destroys Institutions," Boston University law professors Woodrow Hartzog and Jessica Silbey argue that the design of AI systems—from large language models to predictive and automated decision tools—is fundamentally incompatible with the civic institutions that hold democratic society together, including the rule of law, universities, a free press, and civic life itself. This isn't necessarily because AI is being misused or falling into the wrong hands, they say—in most instances AI is working exactly as intended and, in doing so, eroding the expertise, decision-making structures, and human connection that give institutions their legitimacy.</p>]]></description><content:encoded><![CDATA[<p>Across the world, governments and other institutions are racing to apply artificial intelligence in countless ways. In <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623" rel="noopener noreferrer" target="_blank">a draft paper</a> forthcoming in the UC Law Journal titled "How AI Destroys Institutions," Boston University law professors Woodrow Hartzog and Jessica Silbey argue that the design of AI systems—from large language models to predictive and automated decision tools—is fundamentally incompatible with the civic institutions that hold democratic society together, including the rule of law, universities, a free press, and civic life itself. 
This isn't necessarily because AI is being misused or falling into the wrong hands, they say—in most instances AI is working exactly as intended and, in doing so, eroding the expertise, decision-making structures, and human connection that give institutions their legitimacy.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/considering-how-ai-destroys-democratic-institutions]]></link><guid isPermaLink="false">ba46dc42-71a8-4d9d-a719-dddeb926e9e0</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 22 Mar 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/ba46dc42-71a8-4d9d-a719-dddeb926e9e0.mp3" length="51626458" type="audio/mpeg"/><itunes:duration>43:01</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Google Employees Push Back on Government Surveillance Contracts</title><itunes:title>Google Employees Push Back on Government Surveillance Contracts</itunes:title><description><![CDATA[<p>Early this year, following the deaths of <strong>Keith Porter</strong>, <strong>Renee Good</strong>, and <strong>Alex Pretti</strong> at the hands of federal agents and the violent immigration raids on communities across the United States, 1,500 Google workers signed <u><a href="https://www.googlers-against-ice.com/" rel="noopener noreferrer" target="_blank">a new petition</a></u> <u><a href="https://www.cnbc.com/2026/02/07/nearly-a-thousand-google-workers-sign-letter-urging-company-to-divest-from-ice-cbp.html" rel="noopener noreferrer" target="_blank">demanding the company cut contracts</a></u> with Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP).</p><p><strong>Justin Hendrix</strong> spoke to two of the employees who signed the petition about why they signed it, the environment inside the company, and how they think about the risk they face for 
speaking out. </p>]]></description><content:encoded><![CDATA[<p>Early this year, following the deaths of <strong>Keith Porter</strong>, <strong>Renee Good</strong>, and <strong>Alex Pretti</strong> at the hands of federal agents and the violent immigration raids on communities across the United States, 1,500 Google workers signed <u><a href="https://www.googlers-against-ice.com/" rel="noopener noreferrer" target="_blank">a new petition</a></u> <u><a href="https://www.cnbc.com/2026/02/07/nearly-a-thousand-google-workers-sign-letter-urging-company-to-divest-from-ice-cbp.html" rel="noopener noreferrer" target="_blank">demanding the company cut contracts</a></u> with Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP).</p><p><strong>Justin Hendrix</strong> spoke to two of the employees who signed the petition about why they signed it, the environment inside the company, and how they think about the risk they face for speaking out. </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/google-employees-push-back-on-government-surveillance-contracts]]></link><guid isPermaLink="false">f19637b5-b294-449f-bd70-ab09192bc5f0</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 15 Mar 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/f19637b5-b294-449f-bd70-ab09192bc5f0.mp3" length="40206749" type="audio/mpeg"/><itunes:duration>33:30</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How to Regulate Deepfake Financial Fraud</title><itunes:title>How to Regulate Deepfake Financial Fraud</itunes:title><description><![CDATA[<p>Online fraud has become one of the fastest-growing criminal enterprises on the planet. 
Deepfake fraud cases are surging, and Deloitte analysts project that generative AI-driven banking fraud could climb to roughly <u><a href="https://www.deloitte.com/us/en/insights/industry/financial-services/deepfake-banking-fraud-risk-on-the-rise.html" rel="noopener noreferrer" target="_blank">$40 billion</a></u> in the US alone by 2027.</p><p>The problem is not just the volume. It's the architecture. These are no longer opportunistic scams—they are industrialized, AI-assisted operations, and the synthetic media tools that power them are becoming cheaper and more convincing by the month.</p><p>A <a href="https://datasociety.net/library/deepfake-financial-fraud/" rel="noopener noreferrer" target="_blank">new report on deepfake financial fraud</a> from Data &amp; Society maps this threat. <strong>Justin Hendrix</strong> spoke to its authors, including:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Alice Marwick</strong>, director of research at Data &amp; Society, and</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Anya Schiffrin</strong>, co-director of the tech policy and innovation concentration at Columbia University’s School of International and Public Affairs.</li></ol><br/>]]></description><content:encoded><![CDATA[<p>Online fraud has become one of the fastest-growing criminal enterprises on the planet. Deepfake fraud cases are surging, and Deloitte analysts project that generative AI-driven banking fraud could climb to roughly <u><a href="https://www.deloitte.com/us/en/insights/industry/financial-services/deepfake-banking-fraud-risk-on-the-rise.html" rel="noopener noreferrer" target="_blank">$40 billion</a></u> in the US alone by 2027.</p><p>The problem is not just the volume. It's the architecture. 
These are no longer opportunistic scams—they are industrialized, AI-assisted operations, and the synthetic media tools that power them are becoming cheaper and more convincing by the month.</p><p>A <a href="https://datasociety.net/library/deepfake-financial-fraud/" rel="noopener noreferrer" target="_blank">new report on deepfake financial fraud</a> from Data &amp; Society maps this threat. <strong>Justin Hendrix</strong> spoke to its authors, including:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Alice Marwick</strong>, director of research at Data &amp; Society, and</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Anya Schiffrin</strong>, co-director of the tech policy and innovation concentration at Columbia University’s School of International and Public Affairs.</li></ol><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-to-regulate-deepfake-financial-fraud]]></link><guid isPermaLink="false">6129abf5-171a-4922-8fec-f8a3a14ddb6d</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 13 Mar 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/6129abf5-171a-4922-8fec-f8a3a14ddb6d.mp3" length="34388137" type="audio/mpeg"/><itunes:duration>35:49</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Cindy Cohn on How to Sustain the Fight Against Authoritarianism</title><itunes:title>Cindy Cohn on How to Sustain the Fight Against Authoritarianism</itunes:title><description><![CDATA[<p>Today's guest has spent thirty years on the front lines of one of the defining battles at the intersection of technology and democracy: privacy and the fight for who controls your digital life. 
<strong>Cindy Cohn</strong> is the executive director of the <a href="https://www.eff.org/" rel="noopener noreferrer" target="_blank">Electronic Frontier Foundation</a> (EFF), and she has been in the room for some of the most consequential fights over digital rights since the internet became part of everyday life—from fighting for encryption in the 90s, to the NSA mass surveillance revelations, to battling FBI gag orders that kept Americans in the dark about government data requests, and now the fight against the grave civil rights and privacy abuses of the Trump administration.</p><p>Now, as she’s preparing to step down from her role at EFF, she's telling her story, and trying to recruit a new generation to the fight. Her new book, <em><a href="https://www.eff.org/Privacys-Defender" rel="noopener noreferrer" target="_blank">Privacy's Defender</a></em>, out March 10 from MIT Press, weaves her personal journey with the legal battles she's fought on behalf of whistleblowers, researchers, innovators, and everyday people. </p>]]></description><content:encoded><![CDATA[<p>Today's guest has spent thirty years on the front lines of one of the defining battles at the intersection of technology and democracy: privacy and the fight for who controls your digital life. 
<strong>Cindy Cohn</strong> is the executive director of the <a href="https://www.eff.org/" rel="noopener noreferrer" target="_blank">Electronic Frontier Foundation</a> (EFF), and she has been in the room for some of the most consequential fights over digital rights since the internet became part of everyday life—from fighting for encryption in the 90s, to the NSA mass surveillance revelations, to battling FBI gag orders that kept Americans in the dark about government data requests, and now to the fight against the grave civil rights and privacy abuses of the Trump administration.</p><p>Now, as she’s preparing to step down from her role at EFF, she's telling her story, and trying to recruit a new generation to the fight. Her new book, <em><a href="https://www.eff.org/Privacys-Defender" rel="noopener noreferrer" target="_blank">Privacy's Defender</a></em>, out March 10 from MIT Press, weaves her personal journey with the legal battles she's fought on behalf of whistleblowers, researchers, innovators, and everyday people. 
</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/cindy-cohn-on-how-to-sustain-the-fight-against-authoritarianism]]></link><guid isPermaLink="false">353ea30d-6b8c-4a91-ad4b-df3d1d3cc8c7</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 08 Mar 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/353ea30d-6b8c-4a91-ad4b-df3d1d3cc8c7.mp3" length="49192851" type="audio/mpeg"/><itunes:duration>41:00</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>In Age of Disruption, a Defense of Incrementalism</title><itunes:title>In Age of Disruption, a Defense of Incrementalism</itunes:title><description><![CDATA[<p>In their new book, <em><a href="https://www.cambridge.org/core/books/move-slow-and-upgrade/93EAB1B110C5AD50D2395B149DF98EC6#fndtn-information" rel="noopener noreferrer" target="_blank">Move Slow and Upgrade: The Power of Incremental Innovation</a></em>, <strong>Evan Selinger,</strong> a professor in the Department of Philosophy at Rochester Institute of Technology and<strong> Albert Fox Cahn</strong>, founder in residence of the Surveillance Technology Oversight Project (STOP), argue that society is fixated on disruptive innovation at the expense of the kind of steady incrementalism that can deliver sustainable returns over longer time frames. 
They argue in favor of more careful deliberation and adopting what they call the “upgrader’s mindset,” which should be applied whenever “disruptive changes would pose the greatest social risk.”</p>]]></description><content:encoded><![CDATA[<p>In their new book, <em><a href="https://www.cambridge.org/core/books/move-slow-and-upgrade/93EAB1B110C5AD50D2395B149DF98EC6#fndtn-information" rel="noopener noreferrer" target="_blank">Move Slow and Upgrade: The Power of Incremental Innovation</a></em>, <strong>Evan Selinger,</strong> a professor in the Department of Philosophy at Rochester Institute of Technology and<strong> Albert Fox Cahn</strong>, founder in residence of the Surveillance Technology Oversight Project (STOP), argue that society is fixated on disruptive innovation at the expense of the kind of steady incrementalism that can deliver sustainable returns over longer time frames. They argue in favor of more careful deliberation and adopting what they call the “upgrader’s mindset,” which should be applied whenever “disruptive changes would pose the greatest social risk.”</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/in-age-of-disruption-a-defense-of-incrementalism]]></link><guid isPermaLink="false">9b9901c0-deef-4f10-99df-e59f25e61f35</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 01 Mar 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/9b9901c0-deef-4f10-99df-e59f25e61f35.mp3" length="54904800" type="audio/mpeg"/><itunes:duration>45:45</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How to Think About the Anthropic-Pentagon Dispute</title><itunes:title>How to Think About the Anthropic-Pentagon Dispute</itunes:title><description><![CDATA[<p>The Pentagon wants AI that can fight wars — without limits. 
One of the United States’ leading AI companies says there are lines it won't cross. And this week, that standoff turned into an all-out confrontation. </p><p>To discuss the implications of the dispute between Anthropic and the Pentagon, including the determination that the company represents a supply chain risk, <strong>Justin Hendrix</strong> spoke to two experts:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Kat Duffy</strong>, senior fellow for digital and cyberspace policy at the Council on Foreign Relations, and</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Amos Toh</strong>, senior counsel in the Liberty and National Security Program at the Brennan Center for Justice.</li></ol><br/>]]></description><content:encoded><![CDATA[<p>The Pentagon wants AI that can fight wars — without limits. One of the United States’ leading AI companies says there are lines it won't cross. And this week, that standoff turned into an all-out confrontation. 
</p><p>To discuss the implications of the dispute between Anthropic and the Pentagon, including the determination that the company represents a supply chain risk, <strong>Justin Hendrix</strong> spoke to two experts:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Kat Duffy</strong>, senior fellow for digital and cyberspace policy at the Council on Foreign Relations, and</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Amos Toh</strong>, senior counsel in the Liberty and National Security Program at the Brennan Center for Justice.</li></ol><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-to-think-about-the-anthropic-pentagon-dispute]]></link><guid isPermaLink="false">7205411f-3f8e-4940-8402-b2d4f4f2f5f0</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 28 Feb 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/7205411f-3f8e-4940-8402-b2d4f4f2f5f0.mp3" length="42260399" type="audio/mpeg"/><itunes:duration>44:01</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How to Get Paid to Polarize on TikTok</title><itunes:title>How to Get Paid to Polarize on TikTok</itunes:title><description><![CDATA[<p>Concerns about synthetic media and coordinated manipulation of online platforms have moved from theoretical worry to documented reality. Researchers, regulators, and civil society organizations are working to understand how algorithmically driven content recommendation systems can be exploited — not just by ideologically motivated actors, but by ordinary users pursuing financial gain.</p><p>Fundación Maldita.es is a Spanish nonprofit that has been working on information integrity and fact-checking since 2017. 
Its most recent investigation focuses on TikTok, and what they found raises pointed questions about the platform's creator monetization program. Researchers at Maldita <a href="https://files.maldita.es/maldita/uploads/2026/01/6973f8eda0e1b.pdf" rel="noopener noreferrer" target="_blank">documented a network</a> of hundreds of accounts — spanning eighteen countries — that were producing AI-generated videos of protests that never happened, and doing so not out of any discernible political motive, but to accumulate followers, qualify for TikTok's revenue-sharing program, and, in some cases, sell the accounts outright. </p><p>In this episode, Justin Hendrix is joined by Maldita associate director for public policy <strong>Carlos Hernández-Echevarría</strong> and public policy officer <strong>Marina Sacristán</strong>.</p>]]></description><content:encoded><![CDATA[<p>Concerns about synthetic media and coordinated manipulation of online platforms have moved from theoretical worry to documented reality. Researchers, regulators, and civil society organizations are working to understand how algorithmically driven content recommendation systems can be exploited — not just by ideologically motivated actors, but by ordinary users pursuing financial gain.</p><p>Fundación Maldita.es is a Spanish nonprofit that has been working on information integrity and fact-checking since 2017. Its most recent investigation focuses on TikTok, and what they found raises pointed questions about the platform's creator monetization program. 
Researchers at Maldita <a href="https://files.maldita.es/maldita/uploads/2026/01/6973f8eda0e1b.pdf" rel="noopener noreferrer" target="_blank">documented a network</a> of hundreds of accounts — spanning eighteen countries — that were producing AI-generated videos of protests that never happened, and doing so not out of any discernible political motive, but to accumulate followers, qualify for TikTok's revenue-sharing program, and, in some cases, sell the accounts outright. </p><p>In this episode, Justin Hendrix is joined by Maldita associate director for public policy <strong>Carlos Hernández-Echevarría</strong> and public policy officer <strong>Marina Sacristán</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-to-get-paid-to-polarize-on-tiktok]]></link><guid isPermaLink="false">0658608a-f42f-4284-b87b-1a59c5f024bb</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 22 Feb 2026 09:30:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/0658608a-f42f-4284-b87b-1a59c5f024bb.mp3" length="28765729" type="audio/mpeg"/><itunes:duration>29:58</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How to Become an Algorithmic Problem</title><itunes:title>How to Become an Algorithmic Problem</itunes:title><description><![CDATA[<p>As AI technologies proliferate, a growing number of people are asking what it means to live in a world dominated by algorithms and automated systems—and what gets lost when those systems optimize human behavior at scale. 
These questions sit at the intersection of political theory, technology policy, and everyday life, and they are drawing scholars from fields well outside computer science into the conversation.</p><p>José Marichal is a political scientist at California Lutheran University who has been writing and teaching about technology and politics for more than two decades. Marichal's new book, <em><a href="https://bristoluniversitypress.co.uk/you-must-become-an-algorithmic-problem" rel="noopener noreferrer" target="_blank">You Must Become an Algorithmic Problem: Renegotiating the Socio-Technical Contract</a></em>, considers the age of recommendation systems and large language models. Drawing on political philosophy, he argues that individuals have entered into an implicit bargain with technology companies, trading unpredictability and novelty for the convenience of algorithmically curated experience. The consequences of that bargain, he contends, reach beyond personal preference and into the foundations of liberal democratic citizenship.</p>]]></description><content:encoded><![CDATA[<p>As AI technologies proliferate, a growing number of people are asking what it means to live in a world dominated by algorithms and automated systems—and what gets lost when those systems optimize human behavior at scale. These questions sit at the intersection of political theory, technology policy, and everyday life, and they are drawing scholars from fields well outside computer science into the conversation.</p><p>José Marichal is a political scientist at California Lutheran University who has been writing and teaching about technology and politics for more than two decades. Marichal's new book, <em><a href="https://bristoluniversitypress.co.uk/you-must-become-an-algorithmic-problem" rel="noopener noreferrer" target="_blank">You Must Become an Algorithmic Problem: Renegotiating the Socio-Technical Contract</a></em>, considers the age of recommendation systems and large language models. 
Drawing on political philosophy, he argues that individuals have entered into an implicit bargain with technology companies, trading unpredictability and novelty for the convenience of algorithmically curated experience. The consequences of that bargain, he contends, reach beyond personal preference and into the foundations of liberal democratic citizenship.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-to-become-an-algorithmic-problem]]></link><guid isPermaLink="false">6d60f8e3-cb6e-4af5-a9f9-c3d38283661a</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 22 Feb 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/6d60f8e3-cb6e-4af5-a9f9-c3d38283661a.mp3" length="44701280" type="audio/mpeg"/><itunes:duration>46:34</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Digital Services Act is a Lightning Rod for Debate</title><itunes:title>The Digital Services Act is a Lightning Rod for Debate</itunes:title><description><![CDATA[<p>This week marks the second <u><a href="https://dsa-observatory.eu/conference-2026/" rel="noopener noreferrer" target="_blank">DSA and Platform Regulation conference</a></u> in Amsterdam, where experts will convene to consider the Digital Services Act (DSA) two years after it entered full effect across the European Union. Over that period, the law has been tested by national elections, geopolitical tensions, high-profile enforcement actions, and the rapid rise of generative AI. It has become both a benchmark for platform accountability and a political lightning rod.</p><p>Ahead of the conference, Tech Policy Press senior editor <strong>Ramsha Jahangir</strong> spoke with members of the DSA Observatory, which is organizing the conference, to take stock. What have these first years of enforcement clarified? Where does opacity remain? 
And what does it mean to conduct DSA research in today’s political climate? Guests include:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>John Albert</strong>, associate researcher, DSA Observatory.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Paddy Leerssen</strong>, postdoctoral researcher at the University of Amsterdam and part of the DSA Observatory.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Magdalena Jozwiak</strong>, associate researcher at the DSA Observatory.</li></ol><br/>]]></description><content:encoded><![CDATA[<p>This week marks the second <u><a href="https://dsa-observatory.eu/conference-2026/" rel="noopener noreferrer" target="_blank">DSA and Platform Regulation conference</a></u> in Amsterdam, where experts will convene to consider the Digital Services Act (DSA) two years after it entered full effect across the European Union. Over that period, the law has been tested by national elections, geopolitical tensions, high-profile enforcement actions, and the rapid rise of generative AI. It has become both a benchmark for platform accountability and a political lightning rod.</p><p>Ahead of the conference, Tech Policy Press senior editor <strong>Ramsha Jahangir</strong> spoke with members of the DSA Observatory, which is organizing the conference, to take stock. What have these first years of enforcement clarified? Where does opacity remain? And what does it mean to conduct DSA research in today’s political climate? 
Guests include:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>John Albert</strong>, associate researcher, DSA Observatory.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Paddy Leerssen</strong>, postdoctoral researcher at the University of Amsterdam and part of the DSA Observatory.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong>Magdalena Jozwiak</strong>, associate researcher at the DSA Observatory.</li></ol><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-digital-services-act-is-a-lightning-rod-for-debate]]></link><guid isPermaLink="false">54d87780-4faf-4c1b-9ce0-64cd6029ca56</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 15 Feb 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/54d87780-4faf-4c1b-9ce0-64cd6029ca56.mp3" length="36180763" type="audio/mpeg"/><itunes:duration>30:09</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What Carrie Goldberg Has Learned from Suing Big Tech</title><itunes:title>What Carrie Goldberg Has Learned from Suing Big Tech</itunes:title><description><![CDATA[<p>A wave of lawsuits in the United States is targeting tech firms for their product design decisions. Lawyer <strong>Carrie Goldberg</strong> has played a role in establishing the product liability theory that underlies them. As the founder of <a href="https://www.cagoldberglaw.com/" rel="noopener noreferrer" target="_blank">C.A. Goldberg, PLLC</a>, she brought a lawsuit in 2017 that sought to apply product liability theory to a tech platform — <a href="https://knightcolumbia.org/authors/carrie-goldberg" rel="noopener noreferrer" target="_blank">Herrick v. 
Grindr</a> — arguing that a dangerous app design, not just user behavior, was the source of harm. In 2022, Goldberg was appointed to the Plaintiffs’ Steering Committee in the federal social media multidistrict litigation. She’s led cases against Amazon, Meta, and Omegle, has <a href="https://www.judiciary.senate.gov/imo/media/doc/2025-02-19_-_testimony_-_goldberg.pdf" rel="noopener noreferrer" target="_blank">testified before the Senate Judiciary Committee</a> on child safety issues, and is the author of <em><a href="https://www.nobodys-victim.com/" rel="noopener noreferrer" target="_blank">Nobody's Victim: Fighting Psychos, Stalkers, Pervs, and Trolls</a></em>. <strong>Justin Hendrix</strong> spoke to her from her offices in Brooklyn about what she's learned over the last decade, and about some ongoing litigation that remains in dispute. </p>]]></description><content:encoded><![CDATA[<p>A wave of lawsuits in the United States is targeting tech firms for their product design decisions. Lawyer <strong>Carrie Goldberg</strong> has played a role in establishing the product liability theory that underlies them. As the founder of <a href="https://www.cagoldberglaw.com/" rel="noopener noreferrer" target="_blank">C.A. Goldberg, PLLC</a>, she brought a lawsuit in 2017 that sought to apply product liability theory to a tech platform — <a href="https://knightcolumbia.org/authors/carrie-goldberg" rel="noopener noreferrer" target="_blank">Herrick v. Grindr</a> — arguing that a dangerous app design, not just user behavior, was the source of harm. In 2022, Goldberg was appointed to the Plaintiffs’ Steering Committee in the federal social media multidistrict litigation. 
She’s led cases against Amazon, Meta, and Omegle, has <a href="https://www.judiciary.senate.gov/imo/media/doc/2025-02-19_-_testimony_-_goldberg.pdf" rel="noopener noreferrer" target="_blank">testified before the Senate Judiciary Committee</a> on child safety issues, and is the author of <em><a href="https://www.nobodys-victim.com/" rel="noopener noreferrer" target="_blank">Nobody's Victim: Fighting Psychos, Stalkers, Pervs, and Trolls</a></em>. <strong>Justin Hendrix</strong> spoke to her from her offices in Brooklyn about what she's learned over the last decade, and about some ongoing litigation that remains in dispute. </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-carrie-goldberg-has-learned-from-suing-big-tech]]></link><guid isPermaLink="false">084843f6-8c0f-4d60-b5ec-1f02b4a7b360</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 08 Feb 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/084843f6-8c0f-4d60-b5ec-1f02b4a7b360.mp3" length="39415745" type="audio/mpeg"/><itunes:duration>41:03</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>AI, Surveillance and the Siege of Minneapolis</title><itunes:title>AI, Surveillance and the Siege of Minneapolis</itunes:title><description><![CDATA[<p>"Operation Metro Surge" — the massive immigration enforcement operation playing out right now in Minnesota — was billed as a targeted effort to apprehend undocumented immigrants. But what it has exposed goes far beyond immigration enforcement. 
It has pulled back the curtain on a sprawling surveillance apparatus that incorporates artificial intelligence, facial recognition, and other novel tools — not just to enable the raids that have turned violent and, in some cases, deadly, but also to silence dissent, to intimidate entire communities, and to discourage people from even watching what masked federal agents are doing in their own neighborhoods.</p><p>To discuss these events and the prospects for reform, <strong>Justin Hendrix </strong>spoke to <strong>Irna Landrum</strong>, a senior campaigner at Kairos Fellowship and author of a recent piece on Tech Policy Press, "<a href="https://www.techpolicy.press/how-ice-uses-ai-to-automate-authoritarianism/" rel="noopener noreferrer" target="_blank">How ICE Uses AI to Automate Authoritarianism</a>," and <strong>Alejandra Montoya-Boyer</strong>, vice president for the Center for Civil Rights and Technology at the Leadership Conference on Civil and Human Rights, which has <a href="https://civilrights.org/resource/dhs-funding-reform/" rel="noopener noreferrer" target="_blank">called for reforms</a> at the Department of Homeland Security and its component agencies.</p>]]></description><content:encoded><![CDATA[<p>"Operation Metro Surge" — the massive immigration enforcement operation playing out right now in Minnesota — was billed as a targeted effort to apprehend undocumented immigrants. But what it has exposed goes far beyond immigration enforcement. 
It has pulled back the curtain on a sprawling surveillance apparatus that incorporates artificial intelligence, facial recognition, and other novel tools — not just to enable the raids that have turned violent and, in some cases, deadly, but also to silence dissent, to intimidate entire communities, and to discourage people from even watching what masked federal agents are doing in their own neighborhoods.</p><p>To discuss these events and the prospects for reform, <strong>Justin Hendrix </strong>spoke to <strong>Irna Landrum</strong>, a senior campaigner at Kairos Fellowship and author of a recent piece on Tech Policy Press, "<a href="https://www.techpolicy.press/how-ice-uses-ai-to-automate-authoritarianism/" rel="noopener noreferrer" target="_blank">How ICE Uses AI to Automate Authoritarianism</a>," and <strong>Alejandra Montoya-Boyer</strong>, vice president for the Center for Civil Rights and Technology at the Leadership Conference on Civil and Human Rights, which has <a href="https://civilrights.org/resource/dhs-funding-reform/" rel="noopener noreferrer" target="_blank">called for reforms</a> at the Department of Homeland Security and its component agencies.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/ai-surveillance-and-the-siege-of-minneapolis]]></link><guid isPermaLink="false">9be79162-0508-4f70-9dcb-3ebf51bfeb03</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 05 Feb 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/9be79162-0508-4f70-9dcb-3ebf51bfeb03.mp3" length="36171124" type="audio/mpeg"/><itunes:duration>37:41</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How to Apply the &apos;Tyrant Test&apos; to Technology</title><itunes:title>How to Apply the &apos;Tyrant Test&apos; to Technology</itunes:title><description><![CDATA[<p>In his 
forthcoming book, <em><a href="https://nyupress.org/9781479838288/your-data-will-be-used-against-you/" rel="noopener noreferrer" target="_blank">Your Data Will Be Used Against You</a></em>, George Washington University Law School professor <strong>Andrew Guthrie Ferguson</strong> explores how the rise of sensor-driven technologies, social media monitoring, and artificial intelligence can be weaponized against democratic values and personal freedoms. Smart cars, smart homes, smart watches—these devices track our most private activities, and that data can be accessed by police and prosecutors looking for incriminating clues. What should legislatures, courts, and individuals do to protect civil liberties?</p>]]></description><content:encoded><![CDATA[<p>In his forthcoming book, <em><a href="https://nyupress.org/9781479838288/your-data-will-be-used-against-you/" rel="noopener noreferrer" target="_blank">Your Data Will Be Used Against You</a></em>, George Washington University Law School professor <strong>Andrew Guthrie Ferguson</strong> explores how the rise of sensor-driven technologies, social media monitoring, and artificial intelligence can be weaponized against democratic values and personal freedoms. Smart cars, smart homes, smart watches—these devices track our most private activities, and that data can be accessed by police and prosecutors looking for incriminating clues. 
What should legislatures, courts, and individuals do to protect civil liberties?</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-to-apply-the-tyrant-test-to-technology]]></link><guid isPermaLink="false">8b947967-c971-40bc-8b05-cb0799823776</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 01 Feb 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/8b947967-c971-40bc-8b05-cb0799823776.mp3" length="53566303" type="audio/mpeg"/><itunes:duration>44:38</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Documenting Terror on the Streets of Minneapolis</title><itunes:title>Documenting Terror on the Streets of Minneapolis</itunes:title><description><![CDATA[<p>The killing of 37-year-old nurse <strong>Alex Pretti</strong> by federal agents in Minneapolis was filmed from multiple angles by residents of the city, and local government officials have implored the public to share with investigators evidence of immigration enforcement agents committing acts of violence. But what are the challenges of using such artifacts in the pursuit of accountability? And what is there to learn from other efforts to use video, including from social media platforms, as evidence when seeking justice for crimes by state actors? Inequality.org managing editor and Tech Policy Press fellow <strong>Chris Mills Rodrigo</strong> joins <strong>Justin Hendrix</strong> to discuss these questions and more.</p>]]></description><content:encoded><![CDATA[<p>The killing of 37-year-old nurse <strong>Alex Pretti</strong> by federal agents in Minneapolis was filmed from multiple angles by residents of the city, and local government officials have implored the public to share with investigators evidence of immigration enforcement agents committing acts of violence. 
But what are the challenges of using such artifacts in the pursuit of accountability? And what is there to learn from other efforts to use video, including from social media platforms, as evidence when seeking justice for crimes by state actors? Inequality.org managing editor and Tech Policy Press fellow <strong>Chris Mills Rodrigo</strong> joins <strong>Justin Hendrix</strong> to discuss these questions and more.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/documenting-terror-on-the-streets-of-minneapolis]]></link><guid isPermaLink="false">1d833d93-29ac-4740-b163-3727d37e1fa3</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 25 Jan 2026 09:15:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/1d833d93-29ac-4740-b163-3727d37e1fa3.mp3" length="24189497" type="audio/mpeg"/><itunes:duration>20:09</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Unpacking the Rise of &apos;Smart Authoritarianism&apos; in China</title><itunes:title>Unpacking the Rise of &apos;Smart Authoritarianism&apos; in China</itunes:title><description><![CDATA[<p>Today's guest is <strong>Jennifer Lind</strong>,  an associate professor of government at Dartmouth, a fellow at Chatham House London, and the author of the new book <em><a href="https://www.cornellpress.cornell.edu/book/9781501784163/autocracy-20/#bookTabs=1" rel="noopener noreferrer" target="_blank">Autocracy 2.0: How China’s Rise Reinvented Tyranny</a>, </em>just out from Cornell Press. The book introduces the concept of 'smart authoritarianism,' a strategy that seeks to preserve political dominance while minimizing the economic damage of repression. It’s a sharp and unsettling argument—and one that is worth considering as a wave of autocratization continues to sweep across the globe, increasingly enabled by new technologies. 
</p>]]></description><content:encoded><![CDATA[<p>Today's guest is <strong>Jennifer Lind</strong>,  an associate professor of government at Dartmouth, a fellow at Chatham House London, and the author of the new book <em><a href="https://www.cornellpress.cornell.edu/book/9781501784163/autocracy-20/#bookTabs=1" rel="noopener noreferrer" target="_blank">Autocracy 2.0: How China’s Rise Reinvented Tyranny</a>, </em>just out from Cornell Press. The book introduces the concept of 'smart authoritarianism,' a strategy that seeks to preserve political dominance while minimizing the economic damage of repression. It’s a sharp and unsettling argument—and one that is worth considering as a wave of autocratization continues to sweep across the globe, increasingly enabled by new technologies. </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/unpacking-the-rise-of-smart-authoritarianism-in-china]]></link><guid isPermaLink="false">0abdb5e2-d738-4297-9275-dc36dc16291a</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 25 Jan 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/0abdb5e2-d738-4297-9275-dc36dc16291a.mp3" length="41091778" type="audio/mpeg"/><itunes:duration>42:48</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How Trump&apos;s AI Policy Promotes Ethnonationalism</title><itunes:title>How Trump&apos;s AI Policy Promotes Ethnonationalism</itunes:title><description><![CDATA[<p>In a <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6065414" rel="noopener noreferrer" target="_blank">forthcoming paper</a>, George Washington University Law School scholar <strong>Spencer Overton</strong> argues that the Trump administration's AI policy is consistent with its broader efforts to advance ethnonationalism. 
By eliminating policies intended to safeguard against algorithmic bias—and recasting work on such problems as ideological threats to innovation—Trump's policies embed exclusion into the technological infrastructure of the future. As a growing body of research suggests, when AI systems operate without regulation, they default to dominant patterns that reproduce racial inequality and suppress cultural pluralism.</p>]]></description><content:encoded><![CDATA[<p>In a <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6065414" rel="noopener noreferrer" target="_blank">forthcoming paper</a>, George Washington University Law School scholar <strong>Spencer Overton</strong> argues that the Trump administration's AI policy is consistent with its broader efforts to advance ethnonationalism. By eliminating policies intended to safeguard against algorithmic bias—and recasting work on such problems as ideological threats to innovation—Trump's policies embed exclusion into the technological infrastructure of the future. 
As a growing body of research suggests, when AI systems operate without regulation, they default to dominant patterns that reproduce racial inequality and suppress cultural pluralism.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-trumps-ai-policy-promotes-ethnonationalism]]></link><guid isPermaLink="false">5310d526-16da-425b-ae78-4755ccede0d1</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 18 Jan 2026 09:15:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/5310d526-16da-425b-ae78-4755ccede0d1.mp3" length="64924306" type="audio/mpeg"/><itunes:duration>54:06</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>New Book Challenges Assumptions on Digital Governance in China</title><itunes:title>New Book Challenges Assumptions on Digital Governance in China</itunes:title><description><![CDATA[<p>A new book titled <em><a href="https://www.cambridge.org/core/books/governing-digital-china/58304FC2D1D6ECE1A3B44BE475FA16CD" rel="noopener noreferrer" target="_blank">Governing Digital China</a></em> offers crucial insights into China's governance ecosystem. Written by <strong>Daniela Stockmann</strong>, a professor at the Hertie School in Berlin and director of the Center for Digital Governance, and <strong>Ting Luo</strong>, an associate professor in artificial intelligence and government at the University of Birmingham, the book reveals a more complex reality than simple top-down control.</p><p>The authors show how massive tech companies like Tencent and Alibaba have become essential partners to the Chinese state, blending corporate and government power. At the same time, citizens exercise bottom-up influence, shaping how both platforms and the state respond to their needs. 
The result is what the authors call "popular corporatism"—a form of digital authoritarianism that operates quite differently than you might expect.</p>]]></description><content:encoded><![CDATA[<p>A new book titled <em><a href="https://www.cambridge.org/core/books/governing-digital-china/58304FC2D1D6ECE1A3B44BE475FA16CD" rel="noopener noreferrer" target="_blank">Governing Digital China</a></em> offers crucial insights into China's governance ecosystem. Written by <strong>Daniela Stockmann</strong>, a professor at the Hertie School in Berlin and director of the Center for Digital Governance, and <strong>Ting Luo</strong>, an associate professor in artificial intelligence and government at the University of Birmingham, the book reveals a more complex reality than simple top-down control.</p><p>The authors show how massive tech companies like Tencent and Alibaba have become essential partners to the Chinese state, blending corporate and government power. At the same time, citizens exercise bottom-up influence, shaping how both platforms and the state respond to their needs. 
The result is what the authors call "popular corporatism"—a form of digital authoritarianism that operates quite differently than you might expect.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/new-book-challenges-assumptions-on-digital-governance-in-china]]></link><guid isPermaLink="false">57dba397-bf59-44c6-ac2d-9d40988246e2</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 18 Jan 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/57dba397-bf59-44c6-ac2d-9d40988246e2.mp3" length="46817397" type="audio/mpeg"/><itunes:duration>48:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What to Expect from US States on Child Online Safety in 2026</title><itunes:title>What to Expect from US States on Child Online Safety in 2026</itunes:title><description><![CDATA[<p>2026 is poised to be another landmark year for the child online safety debate in the United States.</p><p>In recent years, states have passed dozens of bills aimed at expanding protections for kids as they navigate risks on social media platforms, AI chatbots and other tools, with more likely on the way. Lawmakers in Washington, meanwhile, are considering a flurry of proposals that could set a national standard on the issue. 
But many of these efforts are facing legal limbo as industry and some digital rights groups allege they violate constitutional rights and trample on privacy.</p><p>Tech Policy Press senior editor <strong>Cristiano Lima-Strong</strong> spoke to three experts tracking the issue to assess the current policy landscape in the United States and how it may shift in 2026, particularly as state legislators continue to take up the cause:</p><ul><li><strong>Amina Fazlullah</strong>&nbsp;is head of tech policy advocacy at <a href="https://www.commonsensemedia.org/" rel="noopener noreferrer" target="_blank">Common Sense Media</a>, a group that advocates for child online safety measures. She previously served as a tech policy fellow for Mozilla and as director of policy at the Benton Foundation.</li><li><strong>Joel Thayer</strong>&nbsp;is president of the <a href="https://digitalprogress.tech/" rel="noopener noreferrer" target="_blank">Digital Progress Institute</a>, a think tank that advocates for age verification policies. He previously clerked for Federal Trade Commission official Maureen Ohlhausen and served as policy counsel for the tech trade group The App Association.</li><li><strong>Kate Ruane</strong>&nbsp;is the director of the Free Expression Project at the <a href="https://cdt.org/" rel="noopener noreferrer" target="_blank">Center for Democracy and Technology</a>, a nonprofit that advocates for digital rights. She previously served as lead public policy specialist for the Wikimedia Foundation and as senior legislative counsel for the ACLU.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>2026 is poised to be another landmark year for the child online safety debate in the United States.</p><p>In recent years, states have passed dozens of bills aimed at expanding protections for kids as they navigate risks on social media platforms, AI chatbots and other tools, with more likely on the way. 
Lawmakers in Washington, meanwhile, are considering a flurry of proposals that could set a national standard on the issue. But many of these efforts are facing legal limbo as industry and some digital rights groups allege they violate constitutional rights and trample on privacy.</p><p>Tech Policy Press senior editor <strong>Cristiano Lima-Strong</strong> spoke to three experts tracking the issue to assess the current policy landscape in the United States and how it may shift in 2026, particularly as state legislators continue to take up the cause:</p><ul><li><strong>Amina Fazlullah</strong>&nbsp;is head of tech policy advocacy at <a href="https://www.commonsensemedia.org/" rel="noopener noreferrer" target="_blank">Common Sense Media</a>, a group that advocates for child online safety measures. She previously served as a tech policy fellow for Mozilla and as director of policy at the Benton Foundation.</li><li><strong>Joel Thayer</strong>&nbsp;is president of the <a href="https://digitalprogress.tech/" rel="noopener noreferrer" target="_blank">Digital Progress Institute</a>, a think tank that advocates for age verification policies. He previously clerked for Federal Trade Commission official Maureen Ohlhausen and served as policy counsel for the tech trade group The App Association.</li><li><strong>Kate Ruane</strong>&nbsp;is the director of the Free Expression Project at the <a href="https://cdt.org/" rel="noopener noreferrer" target="_blank">Center for Democracy and Technology</a>, a nonprofit that advocates for digital rights. 
She previously served as lead public policy specialist for the Wikimedia Foundation and as senior legislative counsel for the ACLU.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-to-expect-from-us-states-on-child-online-safety-in-2026]]></link><guid isPermaLink="false">860d9060-85bb-47ca-9e6d-c8f7a4ef7003</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 11 Jan 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/860d9060-85bb-47ca-9e6d-c8f7a4ef7003.mp3" length="37967103" type="audio/mpeg"/><itunes:duration>39:33</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Policy Implications of Grok&apos;s &apos;Mass Digital Undressing Spree&apos;</title><itunes:title>The Policy Implications of Grok&apos;s &apos;Mass Digital Undressing Spree&apos;</itunes:title><description><![CDATA[<p>In what <a href="https://www.reuters.com/legal/litigation/grok-says-safeguard-lapses-led-images-minors-minimal-clothing-x-2026-01-02/" rel="noopener noreferrer" target="_blank">Reuters</a> called a "mass digital undressing spree," <strong>Elon Musk </strong>is provoking outrage after his Grok chatbot responded to user prompts to remove the clothing from images of women and pose them in bikinis and to create "sexualized images of children" and post them on X. 
To discuss the controversy and the broader policy implications of generative AI with regard to child sexual abuse material and nonconsensual intimate imagery, <strong>Justin Hendrix</strong> spoke to <strong>Riana Pfefferkorn</strong>, a policy fellow at the Stanford Institute for Human-Centered AI and author of <a href="https://cyber.fsi.stanford.edu/news/ai-csam-report" rel="noopener noreferrer" target="_blank">numerous</a> <a href="https://partnershiponai.org/hai-researchers-framework-case-study/" rel="noopener noreferrer" target="_blank">reports</a> and <a href="https://www.lawfaremedia.org/article/addressing-computer-generated-child-sex-abuse-imagery-legal-framework-and-policy-implications" rel="noopener noreferrer" target="_blank">articles</a> on these subjects, including for <a href="https://www.techpolicy.press/author/riana-pfefferkorn/" rel="noopener noreferrer" target="_blank">Tech Policy Press</a>.</p>]]></description><content:encoded><![CDATA[<p>In what <a href="https://www.reuters.com/legal/litigation/grok-says-safeguard-lapses-led-images-minors-minimal-clothing-x-2026-01-02/" rel="noopener noreferrer" target="_blank">Reuters</a> called a "mass digital undressing spree," <strong>Elon Musk </strong>is provoking outrage after his Grok chatbot responded to user prompts to remove the clothing from images of women and pose them in bikinis and to create "sexualized images of children" and post them on X. 
To discuss the controversy and the broader policy implications of generative AI with regard to child sexual abuse material and nonconsensual intimate imagery, <strong>Justin Hendrix</strong> spoke to <strong>Riana Pfefferkorn</strong>, a policy fellow at the Stanford Institute for Human-Centered AI and author of <a href="https://cyber.fsi.stanford.edu/news/ai-csam-report" rel="noopener noreferrer" target="_blank">numerous</a> <a href="https://partnershiponai.org/hai-researchers-framework-case-study/" rel="noopener noreferrer" target="_blank">reports</a> and <a href="https://www.lawfaremedia.org/article/addressing-computer-generated-child-sex-abuse-imagery-legal-framework-and-policy-implications" rel="noopener noreferrer" target="_blank">articles</a> on these subjects, including for <a href="https://www.techpolicy.press/author/riana-pfefferkorn/" rel="noopener noreferrer" target="_blank">Tech Policy Press</a>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-policy-implications-of-groks-mass-digital-undressing-spree]]></link><guid isPermaLink="false">d7d950cb-5a8c-4948-985f-78f74f62cdf8</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 04 Jan 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/d7d950cb-5a8c-4948-985f-78f74f62cdf8.mp3" length="30240740" type="audio/mpeg"/><itunes:duration>31:30</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Through to Thriving: Insights from the Field</title><itunes:title>Through to Thriving: Insights from the Field</itunes:title><description><![CDATA[<p>Tech Policy Press fellow <strong>Anika Collier Navaroli</strong> joined <strong>Justin Hendrix</strong> to discuss insights from her special 2025 series of podcasts, <a 
href="https://www.techpolicy.press/category/through-to-thriving-a-special-podcast-series-hosted-by-anika-collier-navaroli/" rel="noopener noreferrer" target="_blank"><em>Through to Thriving</em></a><em>. </em>They discussed insights from her interviews over the course of the year with <strong>Ellen Pao</strong>, <strong>Jerrel Peterson</strong>, <strong>Alice Hunsberger</strong>, <strong>Vaishnavi J</strong>, <strong>Desmond Patton</strong>, <strong>Nora Benavidez</strong>, <strong>Mimi Ọnụọha</strong>, <strong>Timnit Gebru</strong>, <strong>Jasmine McNealy</strong>, <strong>Naomi Nix</strong>, and <strong>Chris Gilliard</strong>.</p>]]></description><content:encoded><![CDATA[<p>Tech Policy Press fellow <strong>Anika Collier Navaroli</strong> joined <strong>Justin Hendrix</strong> to discuss insights from her special 2025 series of podcasts, <a href="https://www.techpolicy.press/category/through-to-thriving-a-special-podcast-series-hosted-by-anika-collier-navaroli/" rel="noopener noreferrer" target="_blank"><em>Through to Thriving</em></a><em>. 
</em>They discussed insights from her interviews over the course of the year with <strong>Ellen Pao</strong>, <strong>Jerrel Peterson</strong>, <strong>Alice Hunsberger</strong>, <strong>Vaishnavi J</strong>, <strong>Desmond Patton</strong>, <strong>Nora Benavidez</strong>, <strong>Mimi Ọnụọha</strong>, <strong>Timnit Gebru</strong>, <strong>Jasmine McNealy</strong>, <strong>Naomi Nix</strong>, and <strong>Chris Gilliard</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/through-to-thriving-insights-from-the-field]]></link><guid isPermaLink="false">7e1ab00b-5074-4a7b-a755-93ad9702337d</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 21 Dec 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/7e1ab00b-5074-4a7b-a755-93ad9702337d.mp3" length="31589051" type="audio/mpeg"/><itunes:duration>32:54</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Critical Look at Trump&apos;s AI Executive Order</title><itunes:title>A Critical Look at Trump&apos;s AI Executive Order</itunes:title><description><![CDATA[<p> On Thursday, US President <strong>Donald Trump</strong> invited reporters into the Oval Office to watch him <a href="https://www.techpolicy.press/trump-signs-executive-order-to-combat-state-ai-regulation/" rel="noopener noreferrer" target="_blank">sign an executive order</a> intended to limit state regulation of artificial intelligence. Trump said AI is a strategic priority for the United States, and that there must be a central source of approval for the companies that develop it.  Today's guest is <strong>Olivier Sylvain</strong>, a professor of law at Fordham Law School and a senior policy research fellow at the Knight First Amendment Institute at Columbia University.  
He's the author of "<a href="https://www.techpolicy.press/why-trumps-ai-eo-will-be-doa-in-court/" rel="noopener noreferrer" target="_blank">Why Trump’s AI EO Will be DOA in Court</a>," a perspective published on Tech Policy Press.</p>]]></description><content:encoded><![CDATA[<p> On Thursday, US President <strong>Donald Trump</strong> invited reporters into the Oval Office to watch him <a href="https://www.techpolicy.press/trump-signs-executive-order-to-combat-state-ai-regulation/" rel="noopener noreferrer" target="_blank">sign an executive order</a> intended to limit state regulation of artificial intelligence. Trump said AI is a strategic priority for the United States, and that there must be a central source of approval for the companies that develop it.  Today's guest is <strong>Olivier Sylvain</strong>, a professor of law at Fordham Law School and a senior policy research fellow at the Knight First Amendment Institute at Columbia University.  He's the author of "<a href="https://www.techpolicy.press/why-trumps-ai-eo-will-be-doa-in-court/" rel="noopener noreferrer" target="_blank">Why Trump’s AI EO Will be DOA in Court</a>," a perspective published on Tech Policy Press.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-critical-look-at-trumps-ai-executive-order]]></link><guid isPermaLink="false">1c3a9bdd-dab0-486a-9ebb-f7ac98c01982</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 14 Dec 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/1c3a9bdd-dab0-486a-9ebb-f7ac98c01982.mp3" length="31707537" type="audio/mpeg"/><itunes:duration>26:25</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Unpacking the Politics of the EU&apos;s €120M Fine of Musk’s X</title><itunes:title>Unpacking the Politics of the EU&apos;s €120M Fine of Musk’s 
X</itunes:title><description><![CDATA[<p>On Friday, the European Commission <a href="https://www.techpolicy.press/brussels-fines-musks-x-120m-firing-shot-in-transatlantic-tech-showdown/" rel="noopener noreferrer" target="_blank">fined Elon Musk’s X</a> €120 million&nbsp;for breaching the Digital Services Act, delivering the first-ever non-compliance decision under the European Union’s flagship tech regulation. By Saturday, <strong>Elon Musk</strong> was calling for no less than the abolition of the EU. To discuss the enforcement action, the politics surrounding it, and a variety of other issues related to digital regulation in Europe, <strong>Justin Hendrix</strong> spoke to <strong>Joris van Hoboken</strong>, a professor at the <a href="https://www.ivir.nl/" rel="noopener noreferrer" target="_blank">Institute for Information Law</a> (IViR) at the University of Amsterdam, and part of the core team of the <a href="https://dsa-observatory.eu/#about" rel="noopener noreferrer" target="_blank">Digital Services Act (DSA) Observatory</a>.</p>]]></description><content:encoded><![CDATA[<p>On Friday, the European Commission <a href="https://www.techpolicy.press/brussels-fines-musks-x-120m-firing-shot-in-transatlantic-tech-showdown/" rel="noopener noreferrer" target="_blank">fined Elon Musk’s X</a> €120 million&nbsp;for breaching the Digital Services Act, delivering the first-ever non-compliance decision under the European Union’s flagship tech regulation. By Saturday, <strong>Elon Musk</strong> was calling for no less than the abolition of the EU. 
To discuss the enforcement action, the politics surrounding it, and a variety of other issues related to digital regulation in Europe, <strong>Justin Hendrix</strong> spoke to <strong>Joris van Hoboken</strong>, a professor at the <a href="https://www.ivir.nl/" rel="noopener noreferrer" target="_blank">Institute for Information Law</a> (IViR) at the University of Amsterdam, and part of the core team of the <a href="https://dsa-observatory.eu/#about" rel="noopener noreferrer" target="_blank">Digital Services Act (DSA) Observatory</a>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/unpacking-the-politics-of-the-eus-120m-fine-of-musks-x]]></link><guid isPermaLink="false">626e5e7c-d1d3-440a-8f61-f5eede49e8ca</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 07 Dec 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/626e5e7c-d1d3-440a-8f61-f5eede49e8ca.mp3" length="40094120" type="audio/mpeg"/><itunes:duration>41:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Exploring Belief and Belonging in a Fractured Online Age</title><itunes:title>Exploring Belief and Belonging in a Fractured Online Age</itunes:title><description><![CDATA[<p>On this podcast, for years we’ve discussed issues such as conspiracy theories, mis- and disinformation, polarization, and the ways in which the design and incentives on today’s technology platforms exacerbate them. Today’s guest is <strong>Calum Lister Matheson</strong>,  associate professor and chair of the Department of Communication at the University of Pittsburgh and a faculty member of the Pittsburgh Psychoanalytic Center. 
He's the author of <a href="https://www.rutgersuniversitypress.org/post-weird/9781978840164" rel="noopener noreferrer" target="_blank"><em>Post-Weird: Fragmentation, Community, and the Decline of the Mainstream</em></a>, a new book from Rutgers University Press that applies a different lens to the question as he searches for insights into the seemingly inexplicable behaviors of communities such as serpent handlers, pro-anorexia groups, believers in pseudoscience, and conspiracy theorists who deny the reality of gun violence in schools.</p>]]></description><content:encoded><![CDATA[<p>On this podcast, for years we’ve discussed issues such as conspiracy theories, mis- and disinformation, polarization, and the ways in which the design and incentives on today’s technology platforms exacerbate them. Today’s guest is <strong>Calum Lister Matheson</strong>,  associate professor and chair of the Department of Communication at the University of Pittsburgh and a faculty member of the Pittsburgh Psychoanalytic Center. 
He's the author of <a href="https://www.rutgersuniversitypress.org/post-weird/9781978840164" rel="noopener noreferrer" target="_blank"><em>Post-Weird: Fragmentation, Community, and the Decline of the Mainstream</em></a>, a new book from Rutgers University Press that applies a different lens to the question as he searches for insights into the seemingly inexplicable behaviors of communities such as serpent handlers, pro-anorexia groups, believers in pseudoscience, and conspiracy theorists who deny the reality of gun violence in schools.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/exploring-belief-and-belonging-in-a-fractured-online-age]]></link><guid isPermaLink="false">dabe4b8c-4c49-4a28-a247-7fbaf61ad2eb</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 04 Dec 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/dabe4b8c-4c49-4a28-a247-7fbaf61ad2eb.mp3" length="74175745" type="audio/mpeg"/><itunes:duration>51:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Considering Trust and Safety&apos;s Past, Present, and Future</title><itunes:title>Considering Trust and Safety&apos;s Past, Present, and Future</itunes:title><description><![CDATA[<p>The past few years have seen a great deal of introspection about a professional field which has come to be known as 'trust and safety,' composed of the people who develop, oversee, and enforce social media policies and community guidelines. Many scholars and advocates describe it as having reached a turning point, mostly for the worse. 
</p><p>Joining Tech Policy Press contributing editor <strong>Dean Jackson</strong> to discuss the evolution of trust and safety—not coincidentally, the title of their <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5401604" rel="noopener noreferrer" target="_blank">forthcoming article</a> in the <em>Emory Law Journal</em>—are professors of law <strong>Danielle Keats Citron</strong> and <strong>Ari Ezra Waldman. </strong>Also joining the conversation is <strong>Jeff Allen</strong>, the chief research officer at the Integrity Institute, a nonprofit whose membership is composed of trust and safety industry professionals.</p>]]></description><content:encoded><![CDATA[<p>The past few years have seen a great deal of introspection about a professional field which has come to be known as 'trust and safety,' composed of the people who develop, oversee, and enforce social media policies and community guidelines. Many scholars and advocates describe it as having reached a turning point, mostly for the worse. </p><p>Joining Tech Policy Press contributing editor <strong>Dean Jackson</strong> to discuss the evolution of trust and safety—not coincidentally, the title of their <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5401604" rel="noopener noreferrer" target="_blank">forthcoming article</a> in the <em>Emory Law Journal</em>—are professors of law <strong>Danielle Keats Citron</strong> and <strong>Ari Ezra Waldman. 
</strong>Also joining the conversation is <strong>Jeff Allen</strong>, the chief research officer at the Integrity Institute, a nonprofit whose membership is composed of trust and safety industry professionals.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/considering-trust-and-safetys-past-present-and-future]]></link><guid isPermaLink="false">81eb3b87-8eae-4b99-b673-10a931dfbb8b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 30 Nov 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/81eb3b87-8eae-4b99-b673-10a931dfbb8b.mp3" length="57071187" type="audio/mpeg"/><itunes:duration>59:27</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What Is Europe Trying to Achieve With Its Omnibus and Sovereignty Push?</title><itunes:title>What Is Europe Trying to Achieve With Its Omnibus and Sovereignty Push?</itunes:title><description><![CDATA[<p>This week, the European Commission&nbsp;<a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2718" rel="noopener noreferrer" target="_blank"><u>unveiled</u></a>&nbsp;a sweeping plan to overhaul how the EU enforces its digital and privacy rules as part of a ‘<a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2718" rel="noopener noreferrer" target="_blank"><u>Digital Omnibus</u></a>,’ aiming to ease compliance burdens and speed up implementation of the bloc’s landmark laws. 
Branded as a “simplification” initiative, the omnibus proposal touches core areas of EU tech regulation — notably the AI Act and the General Data Protection Regulation (GDPR). The Commission argues that this update is necessary to ensure practical implementation of the laws, but civil society organizations&nbsp;<a href="https://peoplevsbig.tech/the-eu-must-uphold-hard-won-protections-for-digital-human-rights/" rel="noopener noreferrer" target="_blank"><u>see the proposed reform</u></a>&nbsp;as the “biggest rollback of digital fundamental rights in EU history.”</p><p>At the same time, leaders are talking loudly about digital sovereignty — including&nbsp;<a href="https://www.techpolicy.press/at-the-sovereignty-summit-europe-put-startups-on-stage-and-kept-big-tech-in-control/" rel="noopener noreferrer" target="_blank"><u>at last week’s summit</u></a>&nbsp;in Berlin. But with the Omnibus appearing to weaken protections and tilt power toward large tech firms, what kind of sovereignty is actually being built?</p><p>Tech Policy Press associate editor <strong>Ramsha Jahangir</strong> spoke to two experts to understand what the EU is trying to achieve:</p><ul><li><strong>Leevi Saari</strong>, EU Policy Fellow at AI Now Institute</li><li><strong>Julia Smakman</strong>, Senior Researcher at the Ada Lovelace Institute</li></ul><br/>]]></description><content:encoded><![CDATA[<p>This week, the European Commission&nbsp;<a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2718" rel="noopener noreferrer" target="_blank"><u>unveiled</u></a>&nbsp;a sweeping plan to overhaul how the EU enforces its digital and privacy rules as part of a ‘<a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2718" rel="noopener noreferrer" target="_blank"><u>Digital Omnibus</u></a>,’ aiming to ease compliance burdens and speed up implementation of the bloc’s landmark laws. 
Branded as a “simplification” initiative, the omnibus proposal touches core areas of EU tech regulation — notably the AI Act and the General Data Protection Regulation (GDPR). The Commission argues that this update is necessary to ensure practical implementation of the laws, but civil society organizations&nbsp;<a href="https://peoplevsbig.tech/the-eu-must-uphold-hard-won-protections-for-digital-human-rights/" rel="noopener noreferrer" target="_blank"><u>see the proposed reform</u></a>&nbsp;as the “biggest rollback of digital fundamental rights in EU history.”</p><p>At the same time, leaders are talking loudly about digital sovereignty — including&nbsp;<a href="https://www.techpolicy.press/at-the-sovereignty-summit-europe-put-startups-on-stage-and-kept-big-tech-in-control/" rel="noopener noreferrer" target="_blank"><u>at last week’s summit</u></a>&nbsp;in Berlin. But with the Omnibus appearing to weaken protections and tilt power toward large tech firms, what kind of sovereignty is actually being built?</p><p>Tech Policy Press associate editor <strong>Ramsha Jahangir</strong> spoke to two experts to understand what the EU is trying to achieve:</p><ul><li><strong>Leevi Saari</strong>, EU Policy Fellow at AI Now Institute</li><li><strong>Julia Smakman</strong>, Senior Researcher at the Ada Lovelace Institute</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-is-europe-trying-to-achieve-with-its-omnibus-and-sovereignty-push]]></link><guid isPermaLink="false">f7089fbf-40b0-47d8-b0ae-ffcdc829b99e</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 23 Nov 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/f7089fbf-40b0-47d8-b0ae-ffcdc829b99e.mp3" length="27058720" 
type="audio/mpeg"/><itunes:duration>28:11</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Through to Thriving: Protecting Our Privacy with Chris Gilliard</title><itunes:title>Through to Thriving: Protecting Our Privacy with Chris Gilliard</itunes:title><description><![CDATA[<p>In the latest episode in her special podcast series, <a href="https://www.techpolicy.press/category/through-to-thriving-a-special-podcast-series-hosted-by-anika-collier-navaroli/" rel="noopener noreferrer" target="_blank"><em>Through to Thriving</em></a>, Tech Policy Press fellow <strong>Anika Collier Navaroli </strong>talks about protecting privacy with <strong>Chris Gilliard</strong>. Gilliard is co-director of the <a href="https://www.criticalinternet.org/about" rel="noopener noreferrer" target="_blank">Critical Internet Studies Institute</a> and the author of <em>Luxury Surveillance</em>, a forthcoming book from MIT Press.</p>]]></description><content:encoded><![CDATA[<p>In the latest episode in her special podcast series, <a href="https://www.techpolicy.press/category/through-to-thriving-a-special-podcast-series-hosted-by-anika-collier-navaroli/" rel="noopener noreferrer" target="_blank"><em>Through to Thriving</em></a>, Tech Policy Press fellow <strong>Anika Collier Navaroli </strong>talks about protecting privacy with <strong>Chris Gilliard</strong>. 
Gilliard is co-director of the <a href="https://www.criticalinternet.org/about" rel="noopener noreferrer" target="_blank">Critical Internet Studies Institute</a> and the author of <em>Luxury Surveillance</em>, a forthcoming book from MIT Press.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/through-to-thriving-protecting-our-privacy-with-chris-gilliard]]></link><guid isPermaLink="false">225c0884-31ec-481f-b60d-7337ab995094</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 15 Nov 2025 07:45:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/225c0884-31ec-481f-b60d-7337ab995094.mp3" length="56486478" type="audio/mpeg"/><itunes:duration>58:50</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Past, Present, and Future of the US Information Integrity Field</title><itunes:title>The Past, Present, and Future of the US Information Integrity Field</itunes:title><description><![CDATA[<p>To discuss the past, present and future of information integrity work, Tech Policy Press contributing editor <strong>Dean Jackson</strong> spoke to American University Center for Security, Innovation and New Technology (CSINT) nonresident fellow <strong>Adam Fivenson </strong>and assistant professor and CSINT director <strong>Samantha Bradshaw.</strong></p>]]></description><content:encoded><![CDATA[<p>To discuss the past, present and future of information integrity work, Tech Policy Press contributing editor <strong>Dean Jackson</strong> spoke to American University Center for Security, Innovation and New Technology (CSINT) nonresident fellow <strong>Adam Fivenson </strong>and assistant professor and CSINT director <strong>Samantha Bradshaw.</strong></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-past-present-and-future-of-the-us-information-integrity-field]]></link><guid 
isPermaLink="false">7226e94c-6690-44a2-a427-d748b383a44a</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 15 Nov 2025 07:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/7226e94c-6690-44a2-a427-d748b383a44a.mp3" length="58048376" type="audio/mpeg"/><itunes:duration>48:22</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What Are the Implications if the AI Boom Turns to Bust?</title><itunes:title>What Are the Implications if the AI Boom Turns to Bust?</itunes:title><description><![CDATA[<p>This episode considers whether today’s massive AI investment boom reflects real economic fundamentals or an unsustainable bubble, and how a potential crash could reshape AI policy, public sentiment, and narratives about the future that are embraced and advanced not only by Silicon Valley billionaires, but also by politicians and governments.&nbsp;<strong>Justin Hendrix</strong> is joined by:</p><ul><li><strong>Ryan Cummings</strong>, chief of staff at the <a href="https://siepr.stanford.edu/" rel="noopener noreferrer" target="_blank">Stanford Institute for Economic Policy Research</a> and coauthor of a recent <a href="https://www.nytimes.com/2025/10/14/opinion/ai-bubble-stock-market-tech-stocks.html" rel="noopener noreferrer" target="_blank"><em>New York Times</em> opinion</a> on the possibility of an AI bubble;</li><li><strong>Sarah West</strong>, co-director of the <a href="https://ainowinstitute.org/" rel="noopener noreferrer" target="_blank">AI Now Institute</a> and coauthor of a <em>Wall Street Journal </em><a 
href="https://www.wsj.com/opinion/you-may-already-be-bailing-out-the-ai-business-dd67d452?gaa_at=eafs&amp;gaa_n=AWEtsqdhe2ahuSQtZr9GX6EDI2j6JJVXvugm09CfqHH5sL4TwQLWhzitWr7UIQtaIHQ%3D&amp;gaa_ts=69163834&amp;gaa_sig=SE66NB4nUFrUF77x1rPMkWJEeNISQU3NLA39Bum26W0L1hMd8MaUP1U0Fh4u7FI52QJhh2CcqH3mseK_4pwJug%3D%3D" rel="noopener noreferrer" target="_blank">opinion</a>, "You May Already Be Bailing Out the AI Business"; and</li><li><strong>Brian Merchant</strong>, author of the newsletter <a href="https://www.bloodinthemachine.com/" rel="noopener noreferrer" target="_blank">Blood in the Machine</a>, a journalist in residence at the AI Now Institute, and author of a <a href="https://www.wired.com/story/ai-bubble-will-burst/" rel="noopener noreferrer" target="_blank">recent piece</a> in <em>Wired</em> on signals that suggest a bubble.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>This episode considers whether today’s massive AI investment boom reflects real economic fundamentals or an unsustainable bubble, and how a potential crash could reshape AI policy, public sentiment, and narratives about the future that are embraced and advanced not only by Silicon Valley billionaires, but also by politicians and governments.&nbsp;<strong>Justin Hendrix</strong> is joined by:</p><ul><li><strong>Ryan Cummings</strong>, chief of staff at the <a href="https://siepr.stanford.edu/" rel="noopener noreferrer" target="_blank">Stanford Institute for Economic Policy Research</a> and coauthor of a recent <a href="https://www.nytimes.com/2025/10/14/opinion/ai-bubble-stock-market-tech-stocks.html" rel="noopener noreferrer" target="_blank"><em>New York Times</em> opinion</a> on the possibility of an AI bubble;</li><li><strong>Sarah West</strong>, co-director of the <a href="https://ainowinstitute.org/" rel="noopener noreferrer" target="_blank">AI Now Institute</a> and coauthor of a <em>Wall Street Journal </em><a 
href="https://www.wsj.com/opinion/you-may-already-be-bailing-out-the-ai-business-dd67d452?gaa_at=eafs&amp;gaa_n=AWEtsqdhe2ahuSQtZr9GX6EDI2j6JJVXvugm09CfqHH5sL4TwQLWhzitWr7UIQtaIHQ%3D&amp;gaa_ts=69163834&amp;gaa_sig=SE66NB4nUFrUF77x1rPMkWJEeNISQU3NLA39Bum26W0L1hMd8MaUP1U0Fh4u7FI52QJhh2CcqH3mseK_4pwJug%3D%3D" rel="noopener noreferrer" target="_blank">opinion</a>, "You May Already Be Bailing Out the AI Business"; and</li><li><strong>Brian Merchant</strong>, author of the newsletter <a href="https://www.bloodinthemachine.com/" rel="noopener noreferrer" target="_blank">Blood in the Machine</a>, a journalist in residence at the AI Now Institute, and author of a <a href="https://www.wired.com/story/ai-bubble-will-burst/" rel="noopener noreferrer" target="_blank">recent piece</a> in <em>Wired</em> on signals that suggest a bubble.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-are-the-implications-if-the-ai-boom-turns-to-bust]]></link><guid isPermaLink="false">519bf10f-36e9-4e73-bb9d-413bf6684dec</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Wed, 12 Nov 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/519bf10f-36e9-4e73-bb9d-413bf6684dec.mp3" length="61279213" type="audio/mpeg"/><itunes:duration>51:04</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Why Independent Researchers Need Better Access to Platform Data</title><itunes:title>Why Independent Researchers Need Better Access to Platform Data</itunes:title><description><![CDATA[<p>This episode was recorded in Barcelona at this year’s <strong>Mozilla Festival. </strong>One session at the festival focused on how to get better access to data for independent researchers to study technology platforms and products and their effects on society. 
It coincided with the launch of the <strong>Knight-Georgetown Institute’s</strong> report, “<a href="https://kgi.georgetown.edu/research-and-commentary/better-access/" rel="noopener noreferrer" target="_blank">Better Access: Data for the Common Good</a>,” the product of a year-long effort to create “a roadmap for expanding access to high-influence public platform data – the narrow slice of public platform data that has the greatest impact on civic life,” with input from individuals across the research community, civil society, and journalism. </p><p>In a gazebo near the Mozilla Festival mainstage, <strong>Justin Hendrix</strong> hosted a podcast discussion with three people working on questions related to data access and advocating for independent technology research:</p><ul><li><strong>Peter Chapman</strong>, associate director of the Knight-Georgetown Institute;</li><li><strong>Brandi Geurkink</strong>, executive director of the Coalition for Independent Tech Research and a former campaigner and fellow at Mozilla; and</li><li><strong>LK Seiling</strong>, a researcher at the Weizenbaum Institute in Berlin and coordinator of the DSA40 Data Access Collaboratory.</li></ul><br/><p><em>Thanks to the Mozilla Foundation and to Francisco, the audio engineer on site at the festival.</em></p>]]></description><content:encoded><![CDATA[<p>This episode was recorded in Barcelona at this year’s <strong>Mozilla Festival. </strong>One session at the festival focused on how to get better access to data for independent researchers to study technology platforms and products and their effects on society. 
It coincided with the launch of the <strong>Knight-Georgetown Institute’s</strong> report, “<a href="https://kgi.georgetown.edu/research-and-commentary/better-access/" rel="noopener noreferrer" target="_blank">Better Access: Data for the Common Good</a>,” the product of a year-long effort to create “a roadmap for expanding access to high-influence public platform data – the narrow slice of public platform data that has the greatest impact on civic life,” with input from individuals across the research community, civil society, and journalism. </p><p>In a gazebo near the Mozilla Festival mainstage, <strong>Justin Hendrix</strong> hosted a podcast discussion with three people working on questions related to data access and advocating for independent technology research:</p><ul><li><strong>Peter Chapman</strong>, associate director of the Knight-Georgetown Institute;</li><li><strong>Brandi Geurkink</strong>, executive director of the Coalition for Independent Tech Research and a former campaigner and fellow at Mozilla; and</li><li><strong>LK Seiling</strong>, a researcher at the Weizenbaum Institute in Berlin and coordinator of the DSA40 Data Access Collaboratory.</li></ul><br/><p><em>Thanks to the Mozilla Foundation and to Francisco, the audio engineer on site at the festival.</em></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/why-independent-researchers-need-better-access-to-platform-data]]></link><guid isPermaLink="false">40e24b9a-f4f0-432e-9c5a-09db1255da37</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 09 Nov 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/40e24b9a-f4f0-432e-9c5a-09db1255da37.mp3" length="41424047" type="audio/mpeg"/><itunes:duration>43:09</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Through to Thriving: Connecting Art and Policy 
with Mimi Ọnụọha</title><itunes:title>Through to Thriving: Connecting Art and Policy with Mimi Ọnụọha</itunes:title><description><![CDATA[<p>For her special series of podcasts, <a href="https://www.techpolicy.press/category/through-to-thriving-a-special-podcast-series-hosted-by-anika-collier-navaroli/" rel="noopener noreferrer" target="_blank"><em>Through to Thriving</em></a>, Tech Policy Press fellow <strong>Anika Collier Navaroli </strong>spoke to artist <strong>Mimi Ọnụọha</strong>, whose <a href="https://mimionuoha.com/" rel="noopener noreferrer" target="_blank">work</a> "questions and exposes the contradictory logics of technological progress." The discussion ranged across changing trends in nomenclature of data and artificial intelligence, the role of art in bearing witness to authoritarianism, the interventions and projects that Ọnụọha has created about the datafication of society, and why artists and policy practitioners should work more closely together to build a more just and equitable future.</p><p><br></p>]]></description><content:encoded><![CDATA[<p>For her special series of podcasts, <a href="https://www.techpolicy.press/category/through-to-thriving-a-special-podcast-series-hosted-by-anika-collier-navaroli/" rel="noopener noreferrer" target="_blank"><em>Through to Thriving</em></a>, Tech Policy Press fellow <strong>Anika Collier Navaroli </strong>spoke to artist <strong>Mimi Ọnụọha</strong>, whose <a href="https://mimionuoha.com/" rel="noopener noreferrer" target="_blank">work</a> "questions and exposes the contradictory logics of technological progress." 
The discussion ranged across changing trends in nomenclature of data and artificial intelligence, the role of art in bearing witness to authoritarianism, the interventions and projects that Ọnụọha has created about the datafication of society, and why artists and policy practitioners should work more closely together to build a more just and equitable future.</p><p><br></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/through-to-thriving-connecting-art-and-policy-with-mimi-nha]]></link><guid isPermaLink="false">9f9de6a2-fc59-4664-9044-1296d20ffb50</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 02 Nov 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/9f9de6a2-fc59-4664-9044-1296d20ffb50.mp3" length="48449538" type="audio/mpeg"/><itunes:duration>50:28</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Ryan Calo Wants to Change the Relationship Between Law and Technology</title><itunes:title>Ryan Calo Wants to Change the Relationship Between Law and Technology</itunes:title><description><![CDATA[<p><strong>Ryan Calo</strong> is a professor at the University of Washington School of Law with a joint appointment at the Information School and an adjunct appointment at the Paul G. Allen School of Computer Science and Engineering. He is a founding co-director of the <a href="https://techpolicylab.uw.edu/" rel="noopener noreferrer" target="_blank">UW Tech Policy Lab</a> and a co-founder of the <a href="https://www.cip.uw.edu/" rel="noopener noreferrer" target="_blank">UW Center for an Informed Public</a>. 
</p><p>In his new book, <a href="https://academic.oup.com/book/61252" rel="noopener noreferrer" target="_blank"><em>Law and Technology: A Methodical Approach</em></a>, published by Oxford University Press, Calo argues that if the purpose of technology is to expand human capabilities and affordances in the name of innovation, the purpose of law is to establish the expectations, incentives, and boundaries that guide that expansion toward human flourishing. The book "calls for a proactive legal scholarship that inventories societal values and configures technology accordingly."</p>]]></description><content:encoded><![CDATA[<p><strong>Ryan Calo</strong> is a professor at the University of Washington School of Law with a joint appointment at the Information School and an adjunct appointment at the Paul G. Allen School of Computer Science and Engineering. He is a founding co-director of the <a href="https://techpolicylab.uw.edu/" rel="noopener noreferrer" target="_blank">UW Tech Policy Lab</a> and a co-founder of the <a href="https://www.cip.uw.edu/" rel="noopener noreferrer" target="_blank">UW Center for an Informed Public</a>. </p><p>In his new book, <a href="https://academic.oup.com/book/61252" rel="noopener noreferrer" target="_blank"><em>Law and Technology: A Methodical Approach</em></a>, published by Oxford University Press, Calo argues that if the purpose of technology is to expand human capabilities and affordances in the name of innovation, the purpose of law is to establish the expectations, incentives, and boundaries that guide that expansion toward human flourishing. 
The book "calls for a proactive legal scholarship that inventories societal values and configures technology accordingly."</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/ryan-calo-wants-to-change-the-relationship-between-law-and-technology]]></link><guid isPermaLink="false">6bd7ecca-dc67-4ebe-b5c4-bc6c3012593d</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 26 Oct 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/6bd7ecca-dc67-4ebe-b5c4-bc6c3012593d.mp3" length="34649337" type="audio/mpeg"/><itunes:duration>36:06</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Evaluating Instagram&apos;s Promises to Protect Teens</title><itunes:title>Evaluating Instagram&apos;s Promises to Protect Teens</itunes:title><description><![CDATA[<p>Instagram has spent years making promises about how it intends to protect minors on its platform. 
To explore its past shortcomings—and the questions lawmakers and regulators should be asking—I spoke with two of the authors of a new report that offers a comprehensive assessment of Instagram’s record on protecting teens:</p><ul><li><strong>Laura Edelson</strong>, an assistant professor of computer science at Northeastern University and co-director of Cybersecurity for Democracy, and </li><li><strong>Arturo Béjar</strong>, the former director of ‘Protect and Care’ at Facebook who has since become a whistleblower and safety advocate.</li></ul><br/><p>Edelson and Béjar are two of the authors of “Teen Accounts, Broken Promises: How Instagram is Failing to Protect Minors.” The <a href="https://fairplayforkids.org/wp-content/uploads/2025/09/Teen-Accounts-Broken-Promises-How-Instagram-is-failing-to-protect-minors.pdf" rel="noopener noreferrer" target="_blank">report</a> is based on a comprehensive review of teen accounts and safety tools, and includes a range of recommendations to the company and to regulators.</p>]]></description><content:encoded><![CDATA[<p>Instagram has spent years making promises about how it intends to protect minors on its platform. 
To explore its past shortcomings—and the questions lawmakers and regulators should be asking—I spoke with two of the authors of a new report that offers a comprehensive assessment of Instagram’s record on protecting teens:</p><ul><li><strong>Laura Edelson</strong>, an assistant professor of computer science at Northeastern University and co-director of Cybersecurity for Democracy, and </li><li><strong>Arturo Béjar</strong>, the former director of ‘Protect and Care’ at Facebook who has since become a whistleblower and safety advocate.</li></ul><br/><p>Edelson and Béjar are two of the authors of “Teen Accounts, Broken Promises: How Instagram is Failing to Protect Minors.” The <a href="https://fairplayforkids.org/wp-content/uploads/2025/09/Teen-Accounts-Broken-Promises-How-Instagram-is-failing-to-protect-minors.pdf" rel="noopener noreferrer" target="_blank">report</a> is based on a comprehensive review of teen accounts and safety tools, and includes a range of recommendations to the company and to regulators.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/evaluating-instagrams-promises-to-protect-teens]]></link><guid isPermaLink="false">03d9ae29-f1af-48f7-bf1b-c4a7a209a1fd</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 19 Oct 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/03d9ae29-f1af-48f7-bf1b-c4a7a209a1fd.mp3" length="52908512" type="audio/mpeg"/><itunes:duration>44:05</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Open Internet is Dead. What Comes Next?</title><itunes:title>The Open Internet is Dead. 
What Comes Next?</itunes:title><description><![CDATA[<p><strong>Mallory Knodel</strong>,  executive director of the Social Web Foundation and founder of a weekly newsletter called the Internet Exchange, and <strong>Burcu Kilic</strong>, a senior fellow at Canada’s Center for International Governance Innovation, or CIGI, are the authors of a recent <a href="https://internet.exchangepoint.tech/big-tech-redefined-the-open-internet-to-serve-its-own-interests/" rel="noopener noreferrer" target="_blank">post</a> on the Internet Exchange titled “Big Tech Redefined the Open Internet to Serve Its Own Interests,” which explores how the idea of the ‘open internet’ has been hollowed out by decades of policy choices and corporate consolidation. </p><p>Kilic traces the problem back to the 1990s, when the US government adopted a hands-off, industry-led approach to regulating the web, paving the way for surveillance capitalism and the dominance of Big Tech. Knodel explains how large companies have co-opted the language of openness and interoperability to defend monopolistic control. The two argue that trade policy, weak enforcement of regulations like the GDPR, and the rise of AI have deepened global dependencies on a few powerful firms, while the current AI moment risks repeating the same mistakes. 
</p><p>They say that to push back, we must call for coordinated, democratic alternatives: stronger antitrust action, public digital infrastructure, and grassroots efforts to rebuild truly open, interoperable, and civic-minded technology systems.</p>]]></description><content:encoded><![CDATA[<p><strong>Mallory Knodel</strong>, executive director of the Social Web Foundation and founder of a weekly newsletter called the Internet Exchange, and <strong>Burcu Kilic</strong>, a senior fellow at Canada’s Center for International Governance Innovation, or CIGI, are the authors of a recent <a href="https://internet.exchangepoint.tech/big-tech-redefined-the-open-internet-to-serve-its-own-interests/" rel="noopener noreferrer" target="_blank">post</a> on the Internet Exchange titled “Big Tech Redefined the Open Internet to Serve Its Own Interests,” which explores how the idea of the ‘open internet’ has been hollowed out by decades of policy choices and corporate consolidation. </p><p>Kilic traces the problem back to the 1990s, when the US government adopted a hands-off, industry-led approach to regulating the web, paving the way for surveillance capitalism and the dominance of Big Tech. Knodel explains how large companies have co-opted the language of openness and interoperability to defend monopolistic control. The two argue that trade policy, weak enforcement of regulations like the GDPR, and the rise of AI have deepened global dependencies on a few powerful firms, while the current AI moment risks repeating the same mistakes. 
</p><p>They say that to push back, we must call for coordinated, democratic alternatives: stronger antitrust action, public digital infrastructure, and grassroots efforts to rebuild truly open, interoperable, and civic-minded technology systems.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-open-internet-is-dead-what-comes-next]]></link><guid isPermaLink="false">b78588de-a52a-4a1c-8aa6-22dbcd24d423</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 12 Oct 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/b78588de-a52a-4a1c-8aa6-22dbcd24d423.mp3" length="47560529" type="audio/mpeg"/><itunes:duration>49:33</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What We Can Learn from the First Digital Services Act Out-of-Court Dispute Settlements</title><itunes:title>What We Can Learn from the First Digital Services Act Out-of-Court Dispute Settlements</itunes:title><description><![CDATA[<p>It’s been three years since Europe’s Digital Services Act (DSA) came into effect, a sweeping set of&nbsp;rules meant to hold online platforms accountable for how they moderate content and protect users. One component of the law allows users to challenge online platform content moderation decisions through independent, certified bodies rather than judicial proceedings.&nbsp;Under Article 21 of the DSA, these “Out-of-Court Dispute Settlement” bodies are intended to play a crucial role in resolving disputes over moderation decisions, whether it's about content takedowns, demonetization, account suspensions, or even decisions to leave flagged content online.</p><p>One such out-of-court dispute settlement body is called Appeals Centre Europe. 
It was established last year as an independent entity with a grant from the Oversight Board Trust, which administers the Oversight Board, the content moderation 'supreme court' created and funded by Meta. Appeals Centre Europe has released a&nbsp;<a href="https://www.appealscentre.eu/wp-content/uploads/2025/09/Appeals-Centre-Europe-Transparency-Report.pdf" rel="noopener noreferrer" target="_blank">new transparency report</a>, and the numbers are striking: of the 1,500 disputes the Centre has ruled on, over three-quarters of the platforms’ original decisions were overturned, either because they were incorrect, or because the platform didn’t provide the content for review at all.</p><p>Tech Policy Press associate editor <strong>Ramsha Jahangir</strong> spoke to two experts to unpack what the early wave of disputes tells us about how the system is working, and how platforms are applying their own rules:</p><ul><li><strong>Thomas Hughes</strong>&nbsp;is the CEO of Appeals Centre Europe</li><li><strong>Paddy Leerssen</strong>&nbsp;is a postdoctoral researcher at the University of Amsterdam and part of the DSA Observatory, which monitors the implementation of the DSA. </li></ul><br/>]]></description><content:encoded><![CDATA[<p>It’s been three years since Europe’s Digital Services Act (DSA) came into effect, a sweeping set of&nbsp;rules meant to hold online platforms accountable for how they moderate content and protect users. One component of the law allows users to challenge online platform content moderation decisions through independent, certified bodies rather than judicial proceedings.&nbsp;Under Article 21 of the DSA, these “Out-of-Court Dispute Settlement” bodies are intended to play a crucial role in resolving disputes over moderation decisions, whether it's about content takedowns, demonetization, account suspensions, or even decisions to leave flagged content online.</p><p>One such out-of-court dispute settlement body is called Appeals Centre Europe. 
It was established last year as an independent entity with a grant from the Oversight Board Trust, which administers the Oversight Board, the content moderation 'supreme court' created and funded by Meta. Appeals Centre Europe has released a&nbsp;<a href="https://www.appealscentre.eu/wp-content/uploads/2025/09/Appeals-Centre-Europe-Transparency-Report.pdf" rel="noopener noreferrer" target="_blank">new transparency report</a>, and the numbers are striking: of the 1,500 disputes the Centre has ruled on, over three-quarters of the platforms’ original decisions were overturned, either because they were incorrect, or because the platform didn’t provide the content for review at all.</p><p>Tech Policy Press associate editor <strong>Ramsha Jahangir</strong> spoke to two experts to unpack what the early wave of disputes tells us about how the system is working, and how platforms are applying their own rules:</p><ul><li><strong>Thomas Hughes</strong>&nbsp;is the CEO of Appeals Centre Europe</li><li><strong>Paddy Leerssen</strong>&nbsp;is a postdoctoral researcher at the University of Amsterdam and part of the DSA Observatory, which monitors the implementation of the DSA. 
</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-we-can-learn-from-the-first-digital-services-act-out-of-court-dispute-settlements]]></link><guid isPermaLink="false">225664b2-713e-48e2-bbad-07226c552b95</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Wed, 08 Oct 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/225664b2-713e-48e2-bbad-07226c552b95.mp3" length="31607089" type="audio/mpeg"/><itunes:duration>32:55</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Governing Babel: John Wihbey on Platforms, Power, and the Future of Free Expression</title><itunes:title>Governing Babel: John Wihbey on Platforms, Power, and the Future of Free Expression</itunes:title><description><![CDATA[<p>Drawn from the biblical story in the book of Genesis, “Babel” has come to stand for the challenge of communication across linguistic, cultural, and ideological divides—the confusion and fragmentation that arise when we no longer share a common tongue or understanding. Today’s guest is <strong>John Wihbey</strong>, an associate professor of media innovation at Northeastern University and the author of a new book titled <em>Governing Babel: The Debate Over Social Media Platforms and Free Speech—And What Comes Next</em>, which asks how we can create the space to imagine a different information environment that promotes democracy and consensus rather than division and violence. 
<a href="https://mitpress.mit.edu/9780262049917/governing-babel/" rel="noopener noreferrer" target="_blank">The book is out October 7 from MIT Press</a>.</p>]]></description><content:encoded><![CDATA[<p>Drawn from the biblical story in the book of Genesis, “Babel” has come to stand for the challenge of communication across linguistic, cultural, and ideological divides—the confusion and fragmentation that arise when we no longer share a common tongue or understanding. Today’s guest is <strong>John Wihbey</strong>, an associate professor of media innovation at Northeastern University and the author of a new book titled <em>Governing Babel: The Debate Over Social Media Platforms and Free Speech—And What Comes Next</em>, which asks how we can create the space to imagine a different information environment that promotes democracy and consensus rather than division and violence. <a href="https://mitpress.mit.edu/9780262049917/governing-babel/" rel="noopener noreferrer" target="_blank">The book is out October 7 from MIT Press</a>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/governing-babel-john-wihbey-on-platforms-power-and-the-future-of-free-expression]]></link><guid isPermaLink="false">a96b1f5c-780b-4b2c-936f-74d29026a943</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 05 Oct 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/a96b1f5c-780b-4b2c-936f-74d29026a943.mp3" length="49481769" type="audio/mpeg"/><itunes:duration>41:14</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Following DOGE, US States Pursue &apos;Efficiency&apos; Initiatives</title><itunes:title>Following DOGE, US States Pursue &apos;Efficiency&apos; Initiatives</itunes:title><description><![CDATA[<p>Across the United States, dozens of state governments have attempted to 
establish their own efficiency initiatives, some molded in the image of the federal Department of Government Efficiency (DOGE). A common theme across many of these initiatives is the "stated goal of identifying and eliminating inefficiencies in state government using artificial intelligence (AI)" and promoting "expanded access to existing state data systems," <a href="https://cdt.org/insights/doge-ifying-government-with-data-tech-what-states-can-learn-from-the-federal-doge-fallout/" rel="noopener noreferrer" target="_blank">according to a recent analysis</a> by <strong>Maddy Dwyer</strong>, a policy analyst at the Center for Democracy and Technology.</p><p>To learn more about what these efforts look like and to consider the broader question of AI’s use in government, <strong>Justin Hendrix </strong>spoke to Dwyer and <strong>Ben Green</strong>, an assistant professor in the University of Michigan School of Information and in the Gerald R. Ford School of Public Policy, who has <a href="https://www.techpolicy.press/author/ben-green/" rel="noopener noreferrer" target="_blank">written about DOGE and the use of AI in government</a> for Tech Policy Press.</p>]]></description><content:encoded><![CDATA[<p>Across the United States, dozens of state governments have attempted to establish their own efficiency initiatives, some molded in the image of the federal Department of Government Efficiency (DOGE). 
A common theme across many of these initiatives is the "stated goal of identifying and eliminating inefficiencies in state government using artificial intelligence (AI)" and promoting "expanded access to existing state data systems," <a href="https://cdt.org/insights/doge-ifying-government-with-data-tech-what-states-can-learn-from-the-federal-doge-fallout/" rel="noopener noreferrer" target="_blank">according to a recent analysis</a> by <strong>Maddy Dwyer</strong>, a policy analyst at the Center for Democracy and Technology.</p><p>To learn more about what these efforts look like and to consider the broader question of AI’s use in government, <strong>Justin Hendrix </strong>spoke to Dwyer and <strong>Ben Green</strong>, an assistant professor in the University of Michigan School of Information and in the Gerald R. Ford School of Public Policy, who has <a href="https://www.techpolicy.press/author/ben-green/" rel="noopener noreferrer" target="_blank">written about DOGE and the use of AI in government</a> for Tech Policy Press.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/following-doge-us-states-pursue-efficiency-initiatives]]></link><guid isPermaLink="false">6ff0efad-e124-49eb-8a05-04cd22acef6c</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 28 Sep 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/6ff0efad-e124-49eb-8a05-04cd22acef6c.mp3" length="49779040" type="audio/mpeg"/><itunes:duration>41:29</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>California Becomes Frontline in Battle Over AI Companions</title><itunes:title>California Becomes Frontline in Battle Over AI Companions</itunes:title><description><![CDATA[<p>With two new bills headed to the desk of Governor Gavin Newsom (D), California could soon pass the most significant guardrails for 
AI companions in the nation, sparking a lobbying brawl between consumer advocates and tech industry groups.</p><p>In a <a href="https://www.techpolicy.press/inside-the-lobbying-frenzy-over-californias-ai-companion-bills/" rel="noopener noreferrer" target="_blank">recent report</a> for Tech Policy Press, associate editor <strong>Cristiano Lima-Strong</strong> detailed how groups are pouring tens if not hundreds of thousands of dollars into the lobbying fight, which has gained steam amid mounting scrutiny of the products.&nbsp;</p><p>Tech Policy Press CEO and Editor <strong>Justin Hendrix</strong> spoke to Cristiano about the findings, and what the state's legislative battle could mean for AI regulation in the United States.&nbsp;</p><p><em>This reporting was supported by a grant from the </em><a href="https://www.tarbellcenter.org/" rel="noopener noreferrer" target="_blank"><em>Tarbell Center for AI Journalism</em></a><em>.</em></p>]]></description><content:encoded><![CDATA[<p>With two new bills headed to the desk of Governor Gavin Newsom (D), California could soon pass the most significant guardrails for AI companions in the nation, sparking a lobbying brawl between consumer advocates and tech industry groups.</p><p>In a <a href="https://www.techpolicy.press/inside-the-lobbying-frenzy-over-californias-ai-companion-bills/" rel="noopener noreferrer" target="_blank">recent report</a> for Tech Policy Press, associate editor <strong>Cristiano Lima-Strong</strong> detailed how groups are pouring tens if not hundreds of thousands of dollars into the lobbying fight, which has gained steam amid mounting scrutiny of the products.&nbsp;</p><p>Tech Policy Press CEO and Editor <strong>Justin Hendrix</strong> spoke to Cristiano about the findings, and what the state's legislative battle could mean for AI regulation in the United States.&nbsp;</p><p><em>This reporting was supported by a grant from the </em><a href="https://www.tarbellcenter.org/" 
rel="noopener noreferrer" target="_blank"><em>Tarbell Center for AI Journalism</em></a><em>.</em></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/california-becomes-frontline-in-battle-over-ai-companions]]></link><guid isPermaLink="false">69b10ab0-128e-4622-98d8-149de85142a8</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 26 Sep 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/69b10ab0-128e-4622-98d8-149de85142a8.mp3" length="25757905" type="audio/mpeg"/><itunes:duration>21:28</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Setting a &apos;Tech Agenda&apos; for Climate Week</title><itunes:title>Setting a &apos;Tech Agenda&apos; for Climate Week</itunes:title><description><![CDATA[<p>​From September 21–28, New York City will host Climate Week. Leaders from business, politics, academia, and civil society will gather to share ideas and develop strategies to address the climate crisis.</p><p>​The tech industry intersects with climate concerns in a number of ways, not least of which is through its own growing demand for natural resources and energy, particularly to power data centers. What should a “tech agenda” for Climate Week include? What are the most important issues that need attention, and how should challenges and opportunities be framed?</p><p>​Last week, Tech Policy Press hosted a live recording of The Tech Policy Press Podcast to get at these questions and more. 
<strong>Justin Hendrix </strong>was joined by three expert guests:</p><ul><li><strong>Alix Dunn</strong>, founder and CEO of&nbsp;<a href="https://www.themaybe.org/about" rel="noopener noreferrer" target="_blank">The Maybe</a></li><li><strong>Tamara Kneese</strong>, director of&nbsp;<a href="https://datasociety.net/" rel="noopener noreferrer" target="_blank">Data &amp; Society</a>'s Climate, Technology, and Justice Program</li><li><strong>Holly Alpine</strong>, co-founder of the&nbsp;<a href="https://www.enabledemissions.com/" rel="noopener noreferrer" target="_blank">Enabled Emissions Campaign</a></li></ul><br/>]]></description><content:encoded><![CDATA[<p>From September 21–28, New York City will host Climate Week. Leaders from business, politics, academia, and civil society will gather to share ideas and develop strategies to address the climate crisis.</p><p>The tech industry intersects with climate concerns in a number of ways, not least of which is through its own growing demand for natural resources and energy, particularly to power data centers. What should a “tech agenda” for Climate Week include? What are the most important issues that need attention, and how should challenges and opportunities be framed?</p><p>Last week, Tech Policy Press hosted a live recording of The Tech Policy Press Podcast to get at these questions and more. 
<strong>Justin Hendrix </strong>was joined by three expert guests:</p><ul><li><strong>Alix Dunn</strong>, founder and CEO of&nbsp;<a href="https://www.themaybe.org/about" rel="noopener noreferrer" target="_blank">The Maybe</a></li><li><strong>Tamara Kneese</strong>, director of&nbsp;<a href="https://datasociety.net/" rel="noopener noreferrer" target="_blank">Data &amp; Society</a>'s Climate, Technology, and Justice Program</li><li><strong>Holly Alpine</strong>, co-founder of the&nbsp;<a href="https://www.enabledemissions.com/" rel="noopener noreferrer" target="_blank">Enabled Emissions Campaign</a></li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/setting-a-tech-agenda-for-climate-week]]></link><guid isPermaLink="false">1feefaf5-a08a-4be7-a70b-4998c9afd308</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 21 Sep 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/1feefaf5-a08a-4be7-a70b-4998c9afd308.mp3" length="67506798" type="audio/mpeg"/><itunes:duration>56:15</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Assessing Tech Platform Responses Following the Assassination of Charlie Kirk</title><itunes:title>Assessing Tech Platform Responses Following the Assassination of Charlie Kirk</itunes:title><description><![CDATA[<p><strong>Charlie Kirk</strong>, a conservative activist and co-founder of Turning Point USA, died Wednesday after he was&nbsp;shot at an event&nbsp;at Utah Valley University.&nbsp;Kirk’s assassination was instantly broadcast to the world from multiple perspectives on social media platforms including TikTok, Instagram, YouTube and X. 
But in the hours and days that have followed, the video and various derivative versions of it have proliferated alongside an increasingly divisive debate over Kirk’s legacy, the possible motives of the assassin, and the political implications. </p><p>It is clear that, in some cases, the tech platforms are struggling to enforce their own content moderation rules, raising questions about their policies and investments in trust and safety, even as AI-generated material plays a more significant role in the information ecosystem. </p><p>To learn more about these phenomena, <strong>Justin Hendrix</strong> spoke to <em>Wired</em> senior correspondent <strong>Lauren Goode</strong>, <a href="https://www.wired.com/story/charlie-kirk-shot-videos-spread-social-media/" rel="noopener noreferrer" target="_blank">who is covering this story</a>.</p>]]></description><content:encoded><![CDATA[<p><strong>Charlie Kirk</strong>, a conservative activist and co-founder of Turning Point USA, died Wednesday after he was&nbsp;shot at an event&nbsp;at Utah Valley University.&nbsp;Kirk’s assassination was instantly broadcast to the world from multiple perspectives on social media platforms including TikTok, Instagram, YouTube and X. But in the hours and days that have followed, the video and various derivative versions of it have proliferated alongside an increasingly divisive debate over Kirk’s legacy, the possible motives of the assassin, and the political implications. </p><p>It is clear that, in some cases, the tech platforms are struggling to enforce their own content moderation rules, raising questions about their policies and investments in trust and safety, even as AI-generated material plays a more significant role in the information ecosystem. 
</p><p>To learn more about these phenomena, <strong>Justin Hendrix</strong> spoke to <em>Wired</em> senior correspondent <strong>Lauren Goode</strong>, <a href="https://www.wired.com/story/charlie-kirk-shot-videos-spread-social-media/" rel="noopener noreferrer" target="_blank">who is covering this story</a>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/assessing-tech-platform-responses-following-the-assassination-of-charlie-kirk]]></link><guid isPermaLink="false">7cfdbf85-bde5-4228-ab64-85da2687372c</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 14 Sep 2025 10:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/7cfdbf85-bde5-4228-ab64-85da2687372c.mp3" length="20540622" type="audio/mpeg"/><itunes:duration>21:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Across the US, Activists Are Organizing to Oppose Data Centers</title><itunes:title>Across the US, Activists Are Organizing to Oppose Data Centers</itunes:title><description><![CDATA[<p>Demand for computing power is fueling a massive surge in investment in data centers worldwide. McKinsey estimates spending will hit $6.7 trillion by 2030, with more than $1 trillion expected in the U.S. alone over the next five years. As this boom accelerates, public scrutiny is intensifying. Communities across the country are raising questions about environmental impacts, energy demands, and the broader social and economic consequences of this rapid buildout. 
</p><p>To learn more about these debates—and the efforts to shape the industry’s future—<strong>Justin Hendrix</strong> spoke with two activists: one working at the national level, and another organizing locally in their own community.</p><ul><li><strong>Vivek Bharathan</strong> is a member of the No Desert Data Center Coalition in Tucson, Arizona.</li><li><strong>Steven Renderos</strong> is executive director of MediaJustice, an advocacy organization that just released a report titled <a href="https://mediajustice.org/resource/the-people-say-no-report/" rel="noopener noreferrer" target="_blank"><em>The People Say No: Resisting Data Centers in the South</em></a>.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Demand for computing power is fueling a massive surge in investment in data centers worldwide. McKinsey estimates spending will hit $6.7 trillion by 2030, with more than $1 trillion expected in the U.S. alone over the next five years. As this boom accelerates, public scrutiny is intensifying. Communities across the country are raising questions about environmental impacts, energy demands, and the broader social and economic consequences of this rapid buildout. 
</p><p>To learn more about these debates—and the efforts to shape the industry’s future—<strong>Justin Hendrix</strong> spoke with two activists: one working at the national level, and another organizing locally in their own community.</p><ul><li><strong>Vivek Bharathan</strong> is a member of the No Desert Data Center Coalition in Tucson, Arizona.</li><li><strong>Steven Renderos</strong> is executive director of MediaJustice, an advocacy organization that just released a report titled <a href="https://mediajustice.org/resource/the-people-say-no-report/" rel="noopener noreferrer" target="_blank"><em>The People Say No: Resisting Data Centers in the South</em></a>.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/across-the-us-activists-are-organizing-to-oppose-data-centers]]></link><guid isPermaLink="false">df36b0ff-da60-4271-a426-8127a90ccb44</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 14 Sep 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/df36b0ff-da60-4271-a426-8127a90ccb44.mp3" length="53607488" type="audio/mpeg"/><itunes:duration>44:40</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Through to Thriving: Centering Young People with Vaishnavi J</title><itunes:title>Through to Thriving: Centering Young People with Vaishnavi J</itunes:title><description><![CDATA[<p>For the latest episode in her <a href="https://www.techpolicy.press/category/through-to-thriving-a-special-podcast-series-hosted-by-anika-collier-navaroli/" rel="noopener noreferrer" target="_blank">series of podcast discussions</a>, <em>Through to Thriving</em>, Tech Policy Press fellow <strong>Anika Collier Navaroli </strong>spoke to <strong>Vaishnavi J</strong>, founder and principal of Vyanams Strategies (VYS), a trust and safety advisory firm focusing on 
youth safety, and a former safety leader at Meta, Twitter, and Google. Anika and Vaishnavi discussed a range of issues on the theme of how to center the views and needs of young people in trust and safety and tech policy development. They considered the importance of protecting the human rights of children, the debates around recent age assurance and age verification regulations, the trade-offs between safety and privacy, and the implications of what Vaishnavi called an “asymmetry” of knowledge across the tech policy community.</p>]]></description><content:encoded><![CDATA[<p>For the latest episode in her <a href="https://www.techpolicy.press/category/through-to-thriving-a-special-podcast-series-hosted-by-anika-collier-navaroli/" rel="noopener noreferrer" target="_blank">series of podcast discussions</a>, <em>Through to Thriving</em>, Tech Policy Press fellow <strong>Anika Collier Navaroli </strong>spoke to <strong>Vaishnavi J</strong>, founder and principal of Vyanams Strategies (VYS), a trust and safety advisory firm focusing on youth safety, and a former safety leader at Meta, Twitter, and Google. Anika and Vaishnavi discussed a range of issues on the theme of how to center the views and needs of young people in trust and safety and tech policy development. 
They considered the importance of protecting the human rights of children, the debates around recent age assurance and age verification regulations, the trade-offs between safety and privacy, and the implications of what Vaishnavi called an “asymmetry” of knowledge across the tech policy community.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/through-to-thriving-centering-young-people-with-vaishnavi-j]]></link><guid isPermaLink="false">d5bb6685-de26-46ef-b1fc-f6af8a84c92a</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 07 Sep 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/d5bb6685-de26-46ef-b1fc-f6af8a84c92a.mp3" length="46796622" type="audio/mpeg"/><itunes:duration>48:45</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Seeing Like a Platform</title><itunes:title>Seeing Like a Platform</itunes:title><description><![CDATA[<p>Today’s guest is <strong>Petter Törnberg</strong>, who with <strong>Justus Uitermark</strong> is one of the authors of a new book, titled <a href="https://www.taylorfrancis.com/books/oa-edit/10.4324/9781003326861/seeing-like-platform-justus-uitermark-petter-t%C3%B6rnberg?_gl=1*2z5x02*_gcl_au*MTgzMjA5OTU4MC4xNzU2MzA2NDUy*_ga*NDUxOTczMzI3LjE3NTYzMDY0NTI.*_ga_0HYE8YG0M6*czE3NTYzMDY0NTEkbzEkZzEkdDE3NTYzMDY0NTIkajU5JGwwJGgw" rel="noopener noreferrer" target="_blank"><em>Seeing Like a Platform: An Inquiry into the Condition of Digital Modernity</em></a><em>, </em>that sets out to address the “entanglement of epistemology, technology, and politics in digital modernity,” and what studying that entanglement can tell us about the workings of power. 
The book is part of a series of research monographs intended to encourage social scientists to embrace a “complex systems approach to studying the social world.” </p>]]></description><content:encoded><![CDATA[<p>Today’s guest is <strong>Petter Törnberg</strong>, who with <strong>Justus Uitermark</strong> is one of the authors of a new book, titled <a href="https://www.taylorfrancis.com/books/oa-edit/10.4324/9781003326861/seeing-like-platform-justus-uitermark-petter-t%C3%B6rnberg?_gl=1*2z5x02*_gcl_au*MTgzMjA5OTU4MC4xNzU2MzA2NDUy*_ga*NDUxOTczMzI3LjE3NTYzMDY0NTI.*_ga_0HYE8YG0M6*czE3NTYzMDY0NTEkbzEkZzEkdDE3NTYzMDY0NTIkajU5JGwwJGgw" rel="noopener noreferrer" target="_blank"><em>Seeing Like a Platform: An Inquiry into the Condition of Digital Modernity</em></a><em>, </em>that sets out to address the “entanglement of epistemology, technology, and politics in digital modernity,” and what studying that entanglement can tell us about the workings of power. The book is part of a series of research monographs intended to encourage social scientists to embrace a “complex systems approach to studying the social world.” </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/seeing-like-a-platform]]></link><guid isPermaLink="false">629584cb-d974-4d4b-9a21-121498cf9ded</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 31 Aug 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/629584cb-d974-4d4b-9a21-121498cf9ded.mp3" length="48597788" type="audio/mpeg"/><itunes:duration>40:30</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Inside the Lobbying Blitz Over Colorado&apos;s AI Law</title><itunes:title>Inside the Lobbying Blitz Over Colorado&apos;s AI Law</itunes:title><description><![CDATA[<p>Last year, Colorado signed a first-of-its-kind artificial intelligence 
measure into law. The Colorado AI Act would require developers of high-risk AI systems to take reasonable steps to prevent harms to consumers, such as algorithmic discrimination, including by conducting impact assessments on their tools.</p><p>But last week, the state kicked off a special session where lawmakers held frenzied negotiations over whether to expand or dilute its protections. The chapter unfolded amid fierce lobbying by industry groups and consumer advocates. Ultimately, the state legislature punted on amending the law but agreed to delay its implementation from February to June of next year. The move likely tees up another round of contentious talks over one of the nation’s most sprawling AI statutes.</p><p>This week, Tech Policy Press associate editor <strong>Cristiano Lima-Strong</strong> spoke to two local reporters who have been <a href="https://coloradosun.com/2025/08/25/colorado-ai-law-tweak-dies/" rel="noopener noreferrer" target="_blank">closely tracking</a> the saga for the <em>Colorado Sun</em>: political reporter and editor <strong>Jesse Paul</strong> and politics and policy reporter <strong>Taylor Dolven.</strong></p>]]></description><content:encoded><![CDATA[<p>Last year, Colorado signed a first-of-its-kind artificial intelligence measure into law. The Colorado AI Act would require developers of high-risk AI systems to take reasonable steps to prevent harms to consumers, such as algorithmic discrimination, including by conducting impact assessments on their tools.</p><p>But last week, the state kicked off a special session where lawmakers held frenzied negotiations over whether to expand or dilute its protections. The chapter unfolded amid fierce lobbying by industry groups and consumer advocates. Ultimately, the state legislature punted on amending the law but agreed to delay its implementation from February to June of next year. 
The move likely tees up another round of contentious talks over one of the nation’s most sprawling AI statutes.</p><p>This week, Tech Policy Press associate editor <strong>Cristiano Lima-Strong</strong> spoke to two local reporters who have been <a href="https://coloradosun.com/2025/08/25/colorado-ai-law-tweak-dies/" rel="noopener noreferrer" target="_blank">closely tracking</a> the saga for the <em>Colorado Sun</em>: political reporter and editor <strong>Jesse Paul</strong> and politics and policy reporter <strong>Taylor Dolven.</strong></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/inside-the-lobbying-blitz-over-colorados-ai-law]]></link><guid isPermaLink="false">3ee12497-59c7-4768-8006-a16ad2868a4b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 29 Aug 2025 08:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/3ee12497-59c7-4768-8006-a16ad2868a4b.mp3" length="27077068" type="audio/mpeg"/><itunes:duration>22:34</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>New Insights on Tech and the Crisis of Democracy</title><itunes:title>New Insights on Tech and the Crisis of Democracy</itunes:title><description><![CDATA[<p>On this podcast, we’ve come back again and again to questions around mis- and disinformation, propaganda, rumors, and the role that digital platforms play in anti-democratic phenomena. 
In a new book published this summer by Oxford University Press called <a href="https://global.oup.com/academic/product/connective-action-and-the-rise-of-the-far-right-9780197794937?cc=us&amp;lang=en&amp;" rel="noopener noreferrer" target="_blank"><em>Connective Action and the Rise of the Far-Right: Platforms, Politics, and the Crisis of Democracy</em></a>, a group of scholars from varied research traditions set out to find new ways to marry more traditional political science with computational social science approaches to understand the phenomenon of democratic backsliding and to bring some clarity to the present moment, particularly in the United States. </p><p><strong>Justin Hendrix</strong> had the chance to speak to two of the volume’s editors and two of its authors:</p><ul><li><strong>Steven Livingston</strong>,  a professor and founding director of the Institute for Data Democracy and Politics at the George Washington University;</li><li><strong>Michael Miller</strong>,  managing director of the Moynihan Center at the City College of New York;</li><li><strong>Kate Starbird</strong>,  a professor at the University of Washington and a co-founder of the Center for an Informed Public; and</li><li><strong>Josephine Lukito</strong>,  assistant professor at the University of Texas at Austin and senior faculty research associate at the Center for Media Engagement. </li></ul><br/>]]></description><content:encoded><![CDATA[<p>On this podcast, we’ve come back again and again to questions around mis- and disinformation, propaganda, rumors, and the role that digital platforms play in anti-democratic phenomena. 
In a new book published this summer by Oxford University Press called <a href="https://global.oup.com/academic/product/connective-action-and-the-rise-of-the-far-right-9780197794937?cc=us&amp;lang=en&amp;" rel="noopener noreferrer" target="_blank"><em>Connective Action and the Rise of the Far-Right: Platforms, Politics, and the Crisis of Democracy</em></a>, a group of scholars from varied research traditions set out to find new ways to marry more traditional political science with computational social science approaches to understand the phenomenon of democratic backsliding and to bring some clarity to the present moment, particularly in the United States. </p><p><strong>Justin Hendrix</strong> had the chance to speak to two of the volume’s editors and two of its authors:</p><ul><li><strong>Steven Livingston</strong>,  a professor and founding director of the Institute for Data Democracy and Politics at the George Washington University;</li><li><strong>Michael Miller</strong>,  managing director of the Moynihan Center at the City College of New York;</li><li><strong>Kate Starbird</strong>,  a professor at the University of Washington and a co-founder of the Center for an Informed Public; and</li><li><strong>Josephine Lukito</strong>,  assistant professor at the University of Texas at Austin and senior faculty research associate at the Center for Media Engagement. 
</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/new-insights-on-tech-and-the-crisis-of-democracy]]></link><guid isPermaLink="false">b7442bb2-17b9-4036-93ce-d5e5b8b1cc1a</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 24 Aug 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/b7442bb2-17b9-4036-93ce-d5e5b8b1cc1a.mp3" length="60770958" type="audio/mpeg"/><itunes:duration>50:39</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Through to Thriving: Pursuing The Truth with Dr. Jasmine McNealy and Naomi Nix</title><itunes:title>Through to Thriving: Pursuing The Truth with Dr. Jasmine McNealy and Naomi Nix</itunes:title><description><![CDATA[<p>In the latest installment in her <a href="https://www.techpolicy.press/category/through-to-thriving-a-special-podcast-series-hosted-by-anika-collier-navaroli/" rel="noopener noreferrer" target="_blank">series of podcasts</a> called <em>Through to Thriving,</em> Tech Policy Press fellow <strong>Anika Collier Navaroli</strong> speaks with <strong>Dr. Jasmine McNealy</strong>, an attorney, critical public interest technologist, and professor in the Department of Media Production, Management, and Technology at the University of Florida, and <strong>Naomi Nix</strong>, a staff writer for <em style="font-size: 1.125rem;">The Washington Post,</em> where she reports on technology and social media companies. 
</p><p>They discuss how they found themselves on the path through journalism and into a focus on tech and tech policy, the distinctions between truth and facts and whether there has ever been such a thing as a singular truth, how communities of color have historically seen and filled the gaps in mainstream media coverage, the rise of news influencers, and how journalists can regain the trust of the public.</p>]]></description><content:encoded><![CDATA[<p>In the latest installment in her <a href="https://www.techpolicy.press/category/through-to-thriving-a-special-podcast-series-hosted-by-anika-collier-navaroli/" rel="noopener noreferrer" target="_blank">series of podcasts</a> called <em>Through to Thriving,</em> Tech Policy Press fellow <strong>Anika Collier Navaroli</strong> speaks with <strong>Dr. Jasmine McNealy</strong>, an attorney, critical public interest technologist, and professor in the Department of Media Production, Management, and Technology at the University of Florida, and <strong>Naomi Nix</strong>, a staff writer for <em style="font-size: 1.125rem;">The Washington Post,</em> where she reports on technology and social media companies. 
</p><p>They discuss how they found themselves on the path through journalism and into a focus on tech and tech policy, the distinctions between truth and facts and whether there has ever been such a thing as a singular truth, how communities of color have historically seen and filled the gaps in mainstream media coverage, the rise of news influencers, and how journalists can regain the trust of the public.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/through-to-thriving-pursuing-the-truth-with-dr-jasmine-mcnealy-and-naomi-nix]]></link><guid isPermaLink="false">a3fa1d6e-e5f4-4419-b480-3faad696dd9e</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 17 Aug 2025 09:15:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/a3fa1d6e-e5f4-4419-b480-3faad696dd9e.mp3" length="49238628" type="audio/mpeg"/><itunes:duration>51:17</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Technology and Democracy in the New India</title><itunes:title>Technology and Democracy in the New India</itunes:title><description><![CDATA[<p>Today’s guest, journalist <strong>Rahul Bhatia</strong>, has written a book that is part journalistic account, part history, and part memoir titled <a href="https://www.goodreads.com/book/show/209473262-the-new-india" rel="noopener noreferrer" target="_blank"><em>The New India: The Unmaking of the World's Largest Democracy</em></a>.</p><p>Reviewing the book in <a href="https://www.theguardian.com/books/article/2024/jul/31/the-new-india-by-rahul-bhatia-review-how-nationalism-changed-a-country" rel="noopener noreferrer" target="_blank"><em>The Guardian</em></a>, <strong>Salil Tripathi</strong> writes that “Bhatia’s remarkable book is an absorbing account of&nbsp;India’s transformation from the world’s largest democracy to something more like the world’s most populous 
country that regularly holds elections.”&nbsp;</p><p>Bhatia considers the role of technology, including taking a close look at Aadhaar—India’s national biometric identification program—in order to consider the role it plays in the modern state and what the motivations behind it reveal.</p>]]></description><content:encoded><![CDATA[<p>Today’s guest, journalist <strong>Rahul Bhatia</strong>, has written a book that is part journalistic account, part history, and part memoir titled <a href="https://www.goodreads.com/book/show/209473262-the-new-india" rel="noopener noreferrer" target="_blank"><em>The New India: The Unmaking of the World's Largest Democracy</em></a>.</p><p>Reviewing the book in <a href="https://www.theguardian.com/books/article/2024/jul/31/the-new-india-by-rahul-bhatia-review-how-nationalism-changed-a-country" rel="noopener noreferrer" target="_blank"><em>The Guardian</em></a>, <strong>Salil Tripathi</strong> writes that “Bhatia’s remarkable book is an absorbing account of&nbsp;India’s transformation from the world’s largest democracy to something more like the world’s most populous country that regularly holds elections.”&nbsp;</p><p>Bhatia considers the role of technology, including taking a close look at Aadhaar—India’s national biometric identification program—in order to consider the role it plays in the modern state and what the motivations behind it reveal.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/technology-and-democracy-in-the-new-india]]></link><guid isPermaLink="false">b823f212-eb5f-4bcc-ae6a-9205c5954cce</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 17 Aug 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/b823f212-eb5f-4bcc-ae6a-9205c5954cce.mp3" length="45027699" 
type="audio/mpeg"/><itunes:duration>46:54</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Conversation with Jeff Horwitz on Meta&apos;s Flawed Rules for AI Chatbots</title><itunes:title>A Conversation with Jeff Horwitz on Meta&apos;s Flawed Rules for AI Chatbots</itunes:title><description><![CDATA[<p>On Thursday, Reuters tech reporter <strong>Jeff Horwitz</strong>, who broke the story of the Facebook Papers back in 2021&nbsp;when he was at the <em>Wall Street Journal</em>, published two pieces, both detailing new revelations about Meta’s approach to AI chatbots. </p><p>In a Reuters special report, Horwitz <a href="https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/" rel="noopener noreferrer" target="_blank">tells the story</a> of a man with a cognitive impairment who died while attempting to travel to meet a chatbot character he believed was real. And in <a href="https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/" rel="noopener noreferrer" target="_blank">a related article</a>, Horwitz reports on an internal Meta policy document that appears to endorse its chatbots engaging with children “in conversations that are romantic or sensual,” as well as other concerning behaviors. </p><p>Earlier today, <strong>Justin Hendrix</strong> caught up with Horwitz about the reports and what they tell us about Silicon Valley’s no-holds-barred pursuit of AI, even at the expense of the safety of vulnerable people and children.</p>]]></description><content:encoded><![CDATA[<p>On Thursday, Reuters tech reporter <strong>Jeff Horwitz</strong>, who broke the story of the Facebook Papers back in 2021&nbsp;when he was at the <em>Wall Street Journal</em>, published two pieces, both detailing new revelations about Meta’s approach to AI chatbots. 
</p><p>In a Reuters special report, Horwitz <a href="https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/" rel="noopener noreferrer" target="_blank">tells the story</a> of a man with a cognitive impairment who died while attempting to travel to meet a chatbot character he believed was real. And in <a href="https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/" rel="noopener noreferrer" target="_blank">a related article</a>, Horwitz reports on an internal Meta policy document that appears to endorse its chatbots engaging with children “in conversations that are romantic or sensual,” as well as other concerning behaviors. </p><p>Earlier today, <strong>Justin Hendrix</strong> caught up with Horwitz about the reports and what they tell us about Silicon Valley’s no-holds-barred pursuit of AI, even at the expense of the safety of vulnerable people and children.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-conversation-with-jeff-horwitz-on-metas-flawed-rules-for-ai-chatbots]]></link><guid isPermaLink="false">a4ce41e3-0d5a-45bc-92fa-e7be925183de</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 14 Aug 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/a4ce41e3-0d5a-45bc-92fa-e7be925183de.mp3" length="29026324" type="audio/mpeg"/><itunes:duration>24:11</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Daniel Solove on Privacy, Technology, and the Rule of Law</title><itunes:title>Daniel Solove on Privacy, Technology, and the Rule of Law</itunes:title><description><![CDATA[<p><strong>Daniel J. Solove</strong> is the Eugene L. and Barbara A. 
Bernard Professor of Intellectual Property and Technology Law at the George Washington University Law School.&nbsp;The project of his latest book, <a href="https://global.oup.com/academic/product/on-privacy-and-technology-9780197771686" rel="noopener noreferrer" target="_blank"><em>On Privacy and Technology</em></a>, is to synthesize twenty-five years of thinking about privacy into a “succinct and accessible” volume and to help the reader understand “the relationship between law, technology, and privacy” in a rapidly changing world.&nbsp;<strong>Justin Hendrix</strong> spoke to him about the book and how recent events in the United States relate to his areas of concern.</p>]]></description><content:encoded><![CDATA[<p><strong>Daniel J. Solove</strong> is the Eugene L. and Barbara A. Bernard Professor of Intellectual Property and Technology Law at the George Washington University Law School.&nbsp;The project of his latest book, <a href="https://global.oup.com/academic/product/on-privacy-and-technology-9780197771686" rel="noopener noreferrer" target="_blank"><em>On Privacy and Technology</em></a>, is to synthesize twenty-five years of thinking about privacy into a “succinct and accessible” volume and to help the reader understand “the relationship between law, technology, and privacy” in a rapidly changing world.&nbsp;<strong>Justin Hendrix</strong> spoke to him about the book and how recent events in the United States relate to his areas of concern.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/daniel-solove-on-privacy-technology-and-the-rule-of-law]]></link><guid isPermaLink="false">8b8503a3-223d-440d-9caf-7883230006cd</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 10 Aug 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/8b8503a3-223d-440d-9caf-7883230006cd.mp3" length="46116558" 
type="audio/mpeg"/><itunes:duration>48:02</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Through to Thriving: Advocating for Change with Nora Benavidez</title><itunes:title>Through to Thriving: Advocating for Change with Nora Benavidez</itunes:title><description><![CDATA[<p><em>Through To Thriving</em> is a special series of podcast episodes hosted by Tech Policy Press fellow <strong>Anika Collier Navaroli</strong>. With her guests, Anika is imagining futures beyond our current moment. For this episode, she spoke with <strong>Nora Benavidez</strong>, senior counsel and director of digital justice and civil rights at the nonprofit Free Press.&nbsp;Anika and Nora discussed the past and present state of platform accountability advocacy, the steps of building a campaign, the possibility of forming a creative agency to support advocates, and what to make of so-called “woke AI.”</p><p>This episode and conversation about advocating for change are dedicated to the memory and life of our former colleague and tech accountability researcher and advocate Brandi Collins-Dexter. </p><p><br></p>]]></description><content:encoded><![CDATA[<p><em>Through To Thriving</em> is a special series of podcast episodes hosted by Tech Policy Press fellow <strong>Anika Collier Navaroli</strong>. With her guests, Anika is imagining futures beyond our current moment. 
For this episode, she spoke with <strong>Nora Benavidez</strong>, senior counsel and director of digital justice and civil rights at the nonprofit Free Press.&nbsp;Anika and Nora discussed the past and present state of platform accountability advocacy, the steps of building a campaign, the possibility of forming a creative agency to support advocates, and what to make of so-called “woke AI.”</p><p>This episode and conversation about advocating for change are dedicated to the memory and life of our former colleague and tech accountability researcher and advocate Brandi Collins-Dexter. </p><p><br></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/through-to-thriving-advocating-for-change-with-nora-benavidez]]></link><guid isPermaLink="false">b336d2df-c2b6-418c-9c0a-8af053b444a1</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 03 Aug 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/b336d2df-c2b6-418c-9c0a-8af053b444a1.mp3" length="46752209" type="audio/mpeg"/><itunes:duration>48:42</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Unpacking China&apos;s Global AI Governance Plan</title><itunes:title>Unpacking China&apos;s Global AI Governance Plan</itunes:title><description><![CDATA[<p>On Saturday, July 26, three days after&nbsp;the Trump administration published its&nbsp;<a href="https://www.techpolicy.press/unpacking-trumps-ai-action-plan-gutting-rules-and-speeding-rollout/" rel="noopener noreferrer" target="_blank">AI action plan</a>, China’s foreign ministry released that country’s <a href="https://www.reuters.com/world/china/china-proposes-new-global-ai-cooperation-organisation-2025-07-26/" rel="noopener noreferrer" target="_blank">action plan for&nbsp;global AI governance</a>. As the US pursues “global dominance,” China is communicating a different posture. 
What should we know about China’s <a href="https://www.gov.cn/yaowen/liebiao/202507/content_7033929.htm" rel="noopener noreferrer" target="_blank">plan</a>, and how does it contrast with the US plan? What's at stake in the competition between the two superpowers?</p><p>To answer these questions, <strong>Justin Hendrix</strong> reached out to a close observer of China's tech policy.  <a href="https://digichina.stanford.edu/people/graham-webster/?page=1&amp;sort_order=desc&amp;sort_by=iso_date" rel="noopener noreferrer" target="_blank"><strong>Graham Webster</strong></a> is a lecturer and research scholar at Stanford University in the Program on Geopolitics, Technology, and Governance, and he is the Editor-in-Chief of the <a href="https://digichina.stanford.edu/about/" rel="noopener noreferrer" target="_blank">DigiChina Project</a>, a "collaborative effort to analyze and understand Chinese technology policy developments through direct engagement with primary sources, providing analysis, context, translation, and expert opinion." Webster attended the World Artificial Intelligence Conference in Shanghai.</p>]]></description><content:encoded><![CDATA[<p>On Saturday, July 26, three days after&nbsp;the Trump administration published its&nbsp;<a href="https://www.techpolicy.press/unpacking-trumps-ai-action-plan-gutting-rules-and-speeding-rollout/" rel="noopener noreferrer" target="_blank">AI action plan</a>, China’s foreign ministry released that country’s <a href="https://www.reuters.com/world/china/china-proposes-new-global-ai-cooperation-organisation-2025-07-26/" rel="noopener noreferrer" target="_blank">action plan for&nbsp;global AI governance</a>. As the US pursues “global dominance,” China is communicating a different posture. What should we know about China’s <a href="https://www.gov.cn/yaowen/liebiao/202507/content_7033929.htm" rel="noopener noreferrer" target="_blank">plan</a>, and how does it contrast with the US plan? 
What's at stake in the competition between the two superpowers?</p><p>To answer these questions, <strong>Justin Hendrix</strong> reached out to a close observer of China's tech policy.  <a href="https://digichina.stanford.edu/people/graham-webster/?page=1&amp;sort_order=desc&amp;sort_by=iso_date" rel="noopener noreferrer" target="_blank"><strong>Graham Webster</strong></a> is a lecturer and research scholar at Stanford University in the Program on Geopolitics, Technology, and Governance, and he is the Editor-in-Chief of the <a href="https://digichina.stanford.edu/about/" rel="noopener noreferrer" target="_blank">DigiChina Project</a>, a "collaborative effort to analyze and understand Chinese technology policy developments through direct engagement with primary sources, providing analysis, context, translation, and expert opinion." Webster attended the World Artificial Intelligence Conference in Shanghai.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/unpacking-chinas-global-ai-governance-plan]]></link><guid isPermaLink="false">b441b6de-9c7c-4e30-bb99-733f69a44a07</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 02 Aug 2025 08:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/b441b6de-9c7c-4e30-bb99-733f69a44a07.mp3" length="58991917" type="audio/mpeg"/><itunes:duration>49:10</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Considering Trump’s AI Plan and the Future It Portends</title><itunes:title>Considering Trump’s AI Plan and the Future It Portends</itunes:title><description><![CDATA[<p>Yesterday, United States <strong>President Donald Trump</strong> took to the stage at the "Winning the AI Race Summit" to promote the administration's <a href="https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf" rel="noopener noreferrer" 
target="_blank">AI Action Plan</a>. Shortly after it was published, Tech Policy Press editor <strong>Justin Hendrix </strong>sat down with <strong>Sarah Myers West</strong>, the co-director of the AI Now Institute; <strong>Maia Woluchem</strong>, the program director of the Trustworthy Infrastructures team at Data and Society; and <strong>Ryan Gerety</strong>, the director of the Athena Coalition, to discuss the plan and what it portends for the future.</p>]]></description><content:encoded><![CDATA[<p>Yesterday, United States <strong>President Donald Trump</strong> took to the stage at the "Winning the AI Race Summit" to promote the administration's <a href="https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf" rel="noopener noreferrer" target="_blank">AI Action Plan</a>. Shortly after it was published, Tech Policy Press editor <strong>Justin Hendrix </strong>sat down with <strong>Sarah Myers West</strong>, the co-director of the AI Now Institute; <strong>Maia Woluchem</strong>, the program director of the Trustworthy Infrastructures team at Data and Society; and <strong>Ryan Gerety</strong>, the director of the Athena Coalition, to discuss the plan and what it portends for the future.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/considering-trumps-ai-plan-and-the-future-it-portends]]></link><guid isPermaLink="false">36869689-82a2-41e2-8f5b-a8620f2bfcdd</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 24 Jul 2025 13:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/36869689-82a2-41e2-8f5b-a8620f2bfcdd.mp3" length="51795277" type="audio/mpeg"/><itunes:duration>53:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Centering Disability Rights in US Tech Policy 35 Years After ADA</title><itunes:title>Centering Disability Rights in US Tech 
Policy 35 Years After ADA</itunes:title><description><![CDATA[<p>This weekend, the Americans with Disabilities Act (ADA) turns 35. Signed into law on July 26, 1990, the law provides broad anti-discrimination protections for people with disabilities in the US, and has impacted how people with disabilities interact with various technologies. To discuss how the law has aged and what the fight for equity and inclusion looks like going forward, Tech Policy Press fellow <strong>Ariana Aboulafia</strong> spoke with three leaders working at the intersection of disability and technology:</p><ul><li><strong>Maitreya Shah</strong> is the tech policy director at the American Association of People with Disabilities.</li><li><strong>Blake Reid </strong>is a professor at the University of Colorado.</li><li><strong>Cynthia Bennett</strong> is a senior research scientist at Google.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>This weekend, the Americans with Disabilities Act (ADA) turns 35. Signed into law on July 26, 1990, the law provides broad anti-discrimination protections for people with disabilities in the US, and has impacted how people with disabilities interact with various technologies. 
To discuss how the law has aged and what the fight for equity and inclusion looks like going forward, Tech Policy Press fellow <strong>Ariana Aboulafia</strong> spoke with three leaders working at the intersection of disability and technology:</p><ul><li><strong>Maitreya Shah</strong> is the tech policy director at the American Association of People with Disabilities.</li><li><strong>Blake Reid </strong>is a professor at the University of Colorado.</li><li><strong>Cynthia Bennett</strong> is a senior research scientist at Google.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/centering-disability-rights-in-us-tech-policy-35-years-after-ada]]></link><guid isPermaLink="false">afe7816f-3da8-48a5-82b8-390f1a0fa47b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 24 Jul 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/afe7816f-3da8-48a5-82b8-390f1a0fa47b.mp3" length="44575061" type="audio/mpeg"/><itunes:duration>46:26</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Through to Thriving: Finding Balance and Resilience in the Trust &amp; Safety Field</title><itunes:title>Through to Thriving: Finding Balance and Resilience in the Trust &amp; Safety Field</itunes:title><description><![CDATA[<p>Tech Policy Press fellow <strong>Anika Collier Navaroli</strong> is the host of&nbsp;<a href="https://www.techpolicy.press/category/through-to-thriving-a-special-podcast-series-hosted-by-anika-collier-navaroli/" rel="noopener noreferrer" target="_blank"><em>Through to Thriving</em></a>, a special podcast series where she talks with technology policy practitioners to explore futures beyond our current moment. 
For this episode, Anika spoke with two experts on Trust &amp; Safety about balance and resilience in a notoriously difficult field.&nbsp;</p><ul><li><strong>Alice Hunsberger</strong> is the head of Trust &amp; Safety at Musubi, a firm that sells AI content moderation solutions. </li><li><strong>Jerrel Peterson</strong> is the director of content policy at Spotify. </li></ul><br/><p>Hunsberger and Peterson discussed how they broke into the field, their observations about the current state of the industry, how to better the working relationship between civil society and industry, and their advice for the next generation of practitioners. </p>]]></description><content:encoded><![CDATA[<p>Tech Policy Press fellow <strong>Anika Collier Navaroli</strong> is the host of&nbsp;<a href="https://www.techpolicy.press/category/through-to-thriving-a-special-podcast-series-hosted-by-anika-collier-navaroli/" rel="noopener noreferrer" target="_blank"><em>Through to Thriving</em></a>, a special podcast series where she talks with technology policy practitioners to explore futures beyond our current moment. For this episode, Anika spoke with two experts on Trust &amp; Safety about balance and resilience in a notoriously difficult field.&nbsp;</p><ul><li><strong>Alice Hunsberger</strong> is the head of Trust &amp; Safety at Musubi, a firm that sells AI content moderation solutions. </li><li><strong>Jerrel Peterson</strong> is the director of content policy at Spotify. </li></ul><br/><p>Hunsberger and Peterson discussed how they broke into the field, their observations about the current state of the industry, how to better the working relationship between civil society and industry, and their advice for the next generation of practitioners. 
</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/through-to-thriving-finding-balance-and-resilience-in-the-trust-safety-field]]></link><guid isPermaLink="false">9f979300-e4ef-41f9-aa44-e8a3e4a92b17</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 20 Jul 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/9f979300-e4ef-41f9-aa44-e8a3e4a92b17.mp3" length="52565185" type="audio/mpeg"/><itunes:duration>54:45</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How the EU&apos;s Voluntary AI Code is Testing Industry and Regulators Alike</title><itunes:title>How the EU&apos;s Voluntary AI Code is Testing Industry and Regulators Alike</itunes:title><description><![CDATA[<p>Last week, following months of negotiation and just weeks before the first legal deadlines under the EU AI Act take effect, the European Commission published the final Code of Practice on General-Purpose AI. The Code is voluntary and intended to help companies demonstrate compliance with the AI Act. It sets out detailed expectations around transparency, copyright, and measures to mitigate systemic risks. Signatories will need to publish summaries of training data, avoid unauthorized use of copyrighted content, and establish internal frameworks to monitor risks. Companies that sign on will see a “reduced administrative burden” and greater legal clarity, the Commission said. 
At the same time, both European and American tech companies have raised concerns about the AI Act’s implementation timeline, with some calling to “stop the clock” on the AI Act’s rollout.</p><p>To learn more, Tech Policy Press associate editor <strong>Ramsha Jahangir</strong> spoke to <strong>Luca Bertuzzi</strong>, senior AI correspondent at MLex, to unpack the final Code of Practice on GPAI, why it matters, and how it fits into the broader rollout of the AI Act.</p>]]></description><content:encoded><![CDATA[<p>Last week, following months of negotiation and just weeks before the first legal deadlines under the EU AI Act take effect, the European Commission published the final Code of Practice on General-Purpose AI. The Code is voluntary and intended to help companies demonstrate compliance with the AI Act. It sets out detailed expectations around transparency, copyright, and measures to mitigate systemic risks. Signatories will need to publish summaries of training data, avoid unauthorized use of copyrighted content, and establish internal frameworks to monitor risks. Companies that sign on will see a “reduced administrative burden” and greater legal clarity, the Commission said. 
At the same time, both European and American tech companies have raised concerns about the AI Act’s implementation timeline, with some calling to “stop the clock” on the AI Act’s rollout.</p><p>To learn more, Tech Policy Press associate editor <strong>Ramsha Jahangir</strong> spoke to <strong>Luca Bertuzzi</strong>, senior AI correspondent at MLex, to unpack the final Code of Practice on GPAI, why it matters, and how it fits into the broader rollout of the AI Act.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-the-eus-voluntary-ai-code-is-testing-industry-and-regulators-alike]]></link><guid isPermaLink="false">165115c2-fa4f-4880-ba91-ab36ed8f0273</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 13 Jul 2025 09:15:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/165115c2-fa4f-4880-ba91-ab36ed8f0273.mp3" length="25986318" type="audio/mpeg"/><itunes:duration>21:39</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How US States Are Shaping AI Policy Amid Federal Debate and Industry Pushback</title><itunes:title>How US States Are Shaping AI Policy Amid Federal Debate and Industry Pushback</itunes:title><description><![CDATA[<p>In the United States, state legislatures are key players in shaping artificial intelligence policy, as lawmakers attempt to navigate a thicket of politics surrounding complex issues ranging from AI safety, deepfakes, and algorithmic discrimination to workplace automation and government use of AI. 
The decision by the US Senate to exclude a moratorium on the enforcement of state AI laws from the budget reconciliation package passed by Congress and signed by President Donald Trump over the July 4 weekend leaves the door open for more significant state-level AI policymaking.</p><p>To take stock of where things stand on state AI policymaking, Tech Policy Press associate editor <strong>Cristiano Lima-Strong</strong> spoke to two experts:</p><ul><li><strong>Scott Babwah Brennen</strong>, director of NYU’s Center on Technology Policy, and </li><li><strong>Hayley Tsukayama</strong>, associate director of legislative activism at the Electronic Frontier Foundation (EFF).</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In the United States, state legislatures are key players in shaping artificial intelligence policy, as lawmakers attempt to navigate a thicket of politics surrounding complex issues ranging from AI safety, deepfakes, and algorithmic discrimination to workplace automation and government use of AI. 
The decision by the US Senate to exclude a moratorium on the enforcement of state AI laws from the budget reconciliation package passed by Congress and signed by President Donald Trump over the July 4 weekend leaves the door open for more significant state-level AI policymaking.</p><p>To take stock of where things stand on state AI policymaking, Tech Policy Press associate editor <strong>Cristiano Lima-Strong</strong> spoke to two experts:</p><ul><li><strong>Scott Babwah Brennen</strong>, director of NYU’s Center on Technology Policy, and </li><li><strong>Hayley Tsukayama</strong>, associate director of legislative activism at the Electronic Frontier Foundation (EFF).</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-us-states-are-shaping-ai-policy-amid-federal-debate-and-industry-pushback]]></link><guid isPermaLink="false">8dbaca84-c27a-46a1-beee-92d91d741bc5</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 13 Jul 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/8dbaca84-c27a-46a1-beee-92d91d741bc5.mp3" length="29072093" type="audio/mpeg"/><itunes:duration>30:17</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Protecting Privacy and Dissent in an Age of Authoritarianism and AI</title><itunes:title>Protecting Privacy and Dissent in an Age of Authoritarianism and AI</itunes:title><description><![CDATA[<p><strong>Helen Nissenbaum</strong>, a philosopher, is a professor at Cornell Tech and in the Information Science Department at Cornell University. She is director of the&nbsp;<a href="https://www.dli.tech.cornell.edu/" rel="noopener noreferrer" target="_blank">Digital Life Initiative</a> at Cornell Tech, which was launched in 2017 to explore&nbsp;societal perspectives&nbsp;surrounding the development and application of digital technology. 
Her work on contextual privacy, trust, accountability, security, and values in technology design led her to work with collaborators on projects such as TrackMeNot, a tool to mask a user's real search history by sending search engines a cloud of ‘ghost’ queries, and AdNauseam, a browser extension that obfuscates a user’s browsing data to protect from tracking by advertising networks. </p><p>Building on such projects, in 2015, she coauthored a book with <strong>Finn Brunton</strong> called<em>&nbsp;</em><a href="https://mitpress.mit.edu/books/obfuscation" rel="noopener noreferrer" target="_blank"><em>Obfuscation: A User’s Guide for Privacy and Protest</em></a>. The book detailed ideas on mitigating and defeating digital surveillance. With concerns about surveillance surging in a time of rising authoritarianism and the advent of powerful artificial intelligence technologies, <strong>Justin Hendrix</strong> reached out to Professor Nissenbaum to find out what she’s thinking in this moment, and how her ideas can be applied to present day phenomena.</p>]]></description><content:encoded><![CDATA[<p><strong>Helen Nissenbaum</strong>, a philosopher, is a professor at Cornell Tech and in the Information Science Department at Cornell University. She is director of the&nbsp;<a href="https://www.dli.tech.cornell.edu/" rel="noopener noreferrer" target="_blank">Digital Life Initiative</a> at Cornell Tech, which was launched in 2017 to explore&nbsp;societal perspectives&nbsp;surrounding the development and application of digital technology. Her work on contextual privacy, trust, accountability, security, and values in technology design led her to work with collaborators on projects such as TrackMeNot, a tool to mask a user's real search history by sending search engines a cloud of ‘ghost’ queries, and AdNauseam, a browser extension that obfuscates a user’s browsing data to protect from tracking by advertising networks. 
</p><p>Building on such projects, in 2015, she coauthored a book with <strong>Finn Brunton</strong> called<em>&nbsp;</em><a href="https://mitpress.mit.edu/books/obfuscation" rel="noopener noreferrer" target="_blank"><em>Obfuscation: A User’s Guide for Privacy and Protest</em></a>. The book detailed ideas on mitigating and defeating digital surveillance. With concerns about surveillance surging in a time of rising authoritarianism and the advent of powerful artificial intelligence technologies, <strong>Justin Hendrix</strong> reached out to Professor Nissenbaum to find out what she’s thinking in this moment, and how her ideas can be applied to present day phenomena.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/protecting-privacy-and-dissent-in-an-age-of-authoritarianism-and-ai]]></link><guid isPermaLink="false">a5116b64-c173-4c2d-b075-c0955da2c403</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 06 Jul 2025 09:15:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/a5116b64-c173-4c2d-b075-c0955da2c403.mp3" length="44108650" type="audio/mpeg"/><itunes:duration>45:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Considering the Human Rights Impacts of LLM Content Moderation</title><itunes:title>Considering the Human Rights Impacts of LLM Content Moderation</itunes:title><description><![CDATA[<p>At Tech Policy Press we’ve been tracking the emerging application of generative AI systems in content moderation. 
Recently, the European Center for Not-for-Profit Law (ECNL) released a comprehensive report titled <a href="https://ecnl.org/sites/default/files/2025-04/ECNL_LLM_CM_Excecutive%20Summary_2025.pdf" rel="noopener noreferrer" target="_blank"><em>Algorithmic Gatekeepers: The Human Rights Impacts of LLM Content Moderation</em></a>, which looks at the opportunities and challenges of using generative AI in content moderation systems at scale.&nbsp;<strong>Justin Hendrix </strong>spoke to its primary author, ECNL senior legal manager <strong>Marlena Wisniak</strong>.</p>]]></description><content:encoded><![CDATA[<p>At Tech Policy Press we’ve been tracking the emerging application of generative AI systems in content moderation. Recently, the European Center for Not-for-Profit Law (ECNL) released a comprehensive report titled <a href="https://ecnl.org/sites/default/files/2025-04/ECNL_LLM_CM_Excecutive%20Summary_2025.pdf" rel="noopener noreferrer" target="_blank"><em>Algorithmic Gatekeepers: The Human Rights Impacts of LLM Content Moderation</em></a>, which looks at the opportunities and challenges of using generative AI in content moderation systems at scale.&nbsp;<strong>Justin Hendrix </strong>spoke to its primary author, ECNL senior legal manager <strong>Marlena Wisniak</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/considering-the-human-rights-impacts-of-llm-content-moderation]]></link><guid isPermaLink="false">9498004b-9b44-4855-bf5f-6e6cb0321410</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 06 Jul 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/9498004b-9b44-4855-bf5f-6e6cb0321410.mp3" length="40096974" type="audio/mpeg"/><itunes:duration>41:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Interrogating Tech Power and Democratic 
Crisis</title><itunes:title>Interrogating Tech Power and Democratic Crisis</itunes:title><description><![CDATA[<p>If you’ve been reading <strong>Tech Policy Press</strong> closely over the last three weeks, you may have come across one or more posts from a collaboration with <strong>Data &amp; Society</strong> called “<a href="https://www.techpolicy.press/tech-power-and-the-crisis-of-democracy/" rel="noopener noreferrer" target="_blank"><em>Ideologies of Control: A Series on&nbsp;Tech&nbsp;Power and Democratic Crisis</em></a>.” The articles in the series examine how powerful tech billionaires and authoritarian leaders and thinkers are leveraging AI and digital infrastructure to advance anti-democratic agendas, consolidate control, and reshape society in ways that threaten privacy, labor rights, environmental sustainability, and democratic governance. </p><p>For this episode, <strong>Justin Hendrix</strong> spoke to four of the authors who made contributions to the series, including:</p><ul><li><strong>Jacob Metcalf</strong>, program director of the AI On the Ground Initiative at Data &amp; Society;</li><li><strong>Tamara Kneese,</strong> program director of the Climate, Technology and Justice program at Data &amp; Society;</li><li><strong>Reem Suleiman</strong>, outgoing US advocacy lead at the Mozilla Foundation and member of the city of Oakland's Privacy Advisory Commission; and </li><li><strong>Kevin De Liban</strong>, founder of TechTonic Justice.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>If you’ve been reading <strong>Tech Policy Press</strong> closely over the last three weeks, you may have come across one or more posts from a collaboration with <strong>Data &amp; Society</strong> called “<a href="https://www.techpolicy.press/tech-power-and-the-crisis-of-democracy/" rel="noopener noreferrer" target="_blank"><em>Ideologies of Control: A Series on&nbsp;Tech&nbsp;Power and Democratic Crisis</em></a>.” The articles in the series examine how 
powerful tech billionaires and authoritarian leaders and thinkers are leveraging AI and digital infrastructure to advance anti-democratic agendas, consolidate control, and reshape society in ways that threaten privacy, labor rights, environmental sustainability, and democratic governance. </p><p>For this episode, <strong>Justin Hendrix</strong> spoke to four of the authors who made contributions to the series, including:</p><ul><li><strong>Jacob Metcalf</strong>,  program director of the AI On the Ground Initiative at Data &amp; Society;</li><li><strong>Tamara Kneese,</strong>  program director of the Climate, Technology and Justice program at Data &amp; Society;</li><li><strong>Reem Suleiman</strong>,  outgoing US advocacy lead at the Mozilla Foundation and  member of the city of Oakland's Privacy Advisory Commission; and </li><li><strong>Kevin De Liban</strong>, founder of TechTonic Justice.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/interrogating-tech-power-and-democratic-crisis]]></link><guid isPermaLink="false">3db74747-ab3f-4dd8-8874-9024a2562f13</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 29 Jun 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/3db74747-ab3f-4dd8-8874-9024a2562f13.mp3" length="34418626" type="audio/mpeg"/><itunes:duration>35:51</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Through to Thriving: Honoring Our Elders with Dr. Timnit Gebru</title><itunes:title>Through to Thriving: Honoring Our Elders with Dr. 
Timnit Gebru</itunes:title><description><![CDATA[<p>For a special series of episodes dubbed Through to Thriving that will air throughout the year, Tech Policy Press fellow <strong>Anika Collier Navaroli</strong> is hosting discussions intended to help us imagine possible futures—for tech and tech policy, for democracy, and society—beyond the moment we are in.</p><p>The third episode in the series features her conversation with <strong>Dr. Timnit Gebru</strong>, the founder and executive director of the Distributed Artificial Intelligence Research Institute. Last year, Dr. Gebru wrote a <em>New York Times </em><a href="https://www.nytimes.com/2024/12/05/special-series/tech-protest-gaza-artificial-intelligence.html" rel="noopener noreferrer" target="_blank">opinion essay</a> that asked, “Who Is Tech Really For?” In the piece, she also asked, “what would an internet that served my elders look like?”&nbsp;</p><p>This year, DAIR has continued to ask these questions by hosting an <a href="https://www.eventbrite.com/e/dair-presents-imagining-possible-futures-tickets-1295350711849" rel="noopener noreferrer" target="_blank">event</a> and a <a href="https://www.dair-institute.org/blog/" rel="noopener noreferrer" target="_blank">blog</a> called Possible Futures that imagines “what the world can look like when we design and deploy technology that centers the needs of our communities.” In one of these pieces, Dr. Gebru, along with her colleagues <strong>Asmelash Teka Hadgu</strong> and <strong>Dr. 
Alex Hanna</strong> describe “An Internet for Our Elders.”</p>]]></description><content:encoded><![CDATA[<p>For a special series of episodes dubbed Through to Thriving that will air throughout the year, Tech Policy Press fellow <strong>Anika Collier Navaroli</strong> is hosting discussions intended to help us imagine possible futures—for tech and tech policy, for democracy, and society—beyond the moment we are in.</p><p>The third episode in the series features her conversation with <strong>Dr. Timnit Gebru</strong>, the founder and executive director of the Distributed Artificial Intelligence Research Institute. Last year, Dr. Gebru wrote a <em>New York Times </em><a href="https://www.nytimes.com/2024/12/05/special-series/tech-protest-gaza-artificial-intelligence.html" rel="noopener noreferrer" target="_blank">opinion essay</a> that asked, “Who Is Tech Really For?” In the piece, she also asked, “what would an internet that served my elders look like?”&nbsp;</p><p>This year, DAIR has continued to ask these questions by hosting an <a href="https://www.eventbrite.com/e/dair-presents-imagining-possible-futures-tickets-1295350711849" rel="noopener noreferrer" target="_blank">event</a> and a <a href="https://www.dair-institute.org/blog/" rel="noopener noreferrer" target="_blank">blog</a> called Possible Futures that imagines “what the world can look like when we design and deploy technology that centers the needs of our communities.” In one of these pieces, Dr. Gebru, along with her colleagues <strong>Asmelash Teka Hadgu</strong> and <strong>Dr. 
Alex Hanna</strong> describe “An Internet for Our Elders.”</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/through-to-thriving-honoring-our-elders-with-dr-timnit-gebru]]></link><guid isPermaLink="false">b2288e91-882d-435e-9be4-32f892513615</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 22 Jun 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/b2288e91-882d-435e-9be4-32f892513615.mp3" length="50833998" type="audio/mpeg"/><itunes:duration>52:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>AI Companions and the Law</title><itunes:title>AI Companions and the Law</itunes:title><description><![CDATA[<p>Concerns about AI chatbots delivering harmful, even profoundly dangerous advice or instructions to users are growing. There is deep concern over the effects of these interactions on children, and a growing number of stories—and lawsuits—about when things go wrong, particularly for teens. </p><p>In this conversation, <strong>Justin Hendrix</strong> is joined by three legal experts who are thinking deeply about how to address questions related to chatbots, and about the need for substantially more research on human-AI interaction:</p><ul><li><strong>Clare Huntington</strong>, Barbara Aronstein Black Professor of Law at Columbia Law School;</li><li><strong>Meetali Jain</strong>, founder and director of the Tech Justice Law Project; and </li><li><strong>Robert Mahari,</strong> associate director of Stanford's CodeX Center. </li></ul><br/>]]></description><content:encoded><![CDATA[<p>Concerns about AI chatbots delivering harmful, even profoundly dangerous advice or instructions to users are growing. There is deep concern over the effects of these interactions on children, and a growing number of stories—and lawsuits—about when things go wrong, particularly for teens. 
</p><p>In this conversation, <strong>Justin Hendrix</strong> is joined by three legal experts who are thinking deeply about how to address questions related to chatbots, and about the need for substantially more research on human-AI interaction:</p><ul><li><strong> Clare Huntington</strong>, Barbara Aronstein Black Professor of Law at Columbia Law School;</li><li><strong>Meetali Jain</strong>, founder and director of the Tech Justice Law Project; and </li><li><strong>Robert Mahari,</strong> associate director of Stanford's CodeX Center. </li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/ai-companions-and-the-law]]></link><guid isPermaLink="false">fd85c3ab-6b51-4fcb-8c9f-dc77a14263bd</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 15 Jun 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/fd85c3ab-6b51-4fcb-8c9f-dc77a14263bd.mp3" length="52745298" type="audio/mpeg"/><itunes:duration>54:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Addressing Questions Over Europe&apos;s AI Act, Digital Sovereignty, and More</title><itunes:title>Addressing Questions Over Europe&apos;s AI Act, Digital Sovereignty, and More</itunes:title><description><![CDATA[<p>In Europe, the digital regulatory landscape is in flux. Over the past few years, the EU has positioned itself as a global leader in tech regulation, rolling out landmark laws like the AI Act. But now, as the much-anticipated AI Act approaches implementation, the path forward is looking anything but smooth. Reports suggest the European Commission is considering a delay to the AI Act’s rollout due to mounting pressure from industry, difficulties in finalizing technical standards, and geopolitical tensions—including pushback from the US government. 
At the same time, a broader movement for Europe to reduce its dependence on American tech is gaining momentum: What does this push for digital sovereignty actually mean? </p><p>To help us unpack all of this, Tech Policy Press associate editor <strong>Ramsha Jahangir</strong> spoke to&nbsp;<strong>Kai Zenner, </strong>Head of Office and Digital Policy Advisor to German MEP <strong>Axel Voss</strong>, and one of the more influential voices shaping the future of EU digital policy.</p>]]></description><content:encoded><![CDATA[<p>In Europe, the digital regulatory landscape is in flux. Over the past few years, the EU has positioned itself as a global leader in tech regulation, rolling out landmark laws like the AI Act. But now, as the much-anticipated AI Act approaches implementation, the path forward is looking anything but smooth. Reports suggest the European Commission is considering a delay to the AI Act’s rollout due to mounting pressure from industry, difficulties in finalizing technical standards, and geopolitical tensions—including pushback from the US government. At the same time, a broader movement for Europe to reduce its dependence on American tech is gaining momentum: What does this push for digital sovereignty actually mean? 
</p><p>To help us unpack all of this, Tech Policy Press associate editor <strong>Ramsha Jahangir</strong> spoke to&nbsp;<strong>Kai Zenner, </strong>Head of Office and Digital Policy Advisor to German MEP <strong>Axel Voss</strong>, and one of the more influential voices shaping the future of EU digital policy.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/addressing-questions-over-europes-ai-act-digital-sovereignty-and-more]]></link><guid isPermaLink="false">0800c4fd-877e-4140-9924-8de69e07ecca</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 15 Jun 2025 08:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/0800c4fd-877e-4140-9924-8de69e07ecca.mp3" length="53029761" type="audio/mpeg"/><itunes:duration>44:11</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Technology, Labor Rights, and Political Power in Kenya and Across Africa</title><itunes:title>Technology, Labor Rights, and Political Power in Kenya and Across Africa</itunes:title><description><![CDATA[<p>In this episode, <strong>Justin Hendrix </strong>speaks with <strong>Nerima Wako-Ojiwa</strong>, director of <a href="https://siasaplace.com/" rel="noopener noreferrer" target="_blank">Siasa Place</a>, and <strong>Odanga Madung</strong>, a tech and society <a href="https://www.mozillafoundation.org/en/research/browse-authors/odanga-madung-380/" rel="noopener noreferrer" target="_blank">researcher</a> and <a href="https://x.com/odangaring?lang=en" rel="noopener noreferrer" target="_blank">journalist</a>, about the intersection of technology, labor rights, and political power in Kenya and across Africa. 
The conversation explores the ongoing struggles of content moderators and AI data annotators, who face exploitative working conditions while performing essential labor for major tech companies; the failure of platforms to address harmful biases and disinformation that particularly affect African contexts; the ways in which governments increasingly use platform failures as justification for internet censorship and surveillance; and the promise of youth and labor movements that point to a more just and democratic future.</p>]]></description><content:encoded><![CDATA[<p>In this episode, <strong>Justin Hendrix </strong>speaks with <strong>Nerima Wako-Ojiwa</strong>, director of <a href="https://siasaplace.com/" rel="noopener noreferrer" target="_blank">Siasa Place</a>, and <strong>Odanga Madung</strong>, a tech and society <a href="https://www.mozillafoundation.org/en/research/browse-authors/odanga-madung-380/" rel="noopener noreferrer" target="_blank">researcher</a> and <a href="https://x.com/odangaring?lang=en" rel="noopener noreferrer" target="_blank">journalist</a>, about the intersection of technology, labor rights, and political power in Kenya and across Africa. 
The conversation explores the ongoing struggles of content moderators and AI data annotators, who face exploitative working conditions while performing essential labor for major tech companies; the failure of platforms to address harmful biases and disinformation that particularly affect African contexts; the ways in which governments increasingly use platform failures as justification for internet censorship and surveillance; and the promise of youth and labor movements that point to a more just and democratic future.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/technology-labor-rights-and-political-power-in-kenya-and-across-africa]]></link><guid isPermaLink="false">6bbf790c-1724-42dc-a707-51fbdd187116</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 08 Jun 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/6bbf790c-1724-42dc-a707-51fbdd187116.mp3" length="43300260" type="audio/mpeg"/><itunes:duration>45:06</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Through to Thriving: Journeying to Joy with Dr. Desmond Upton Patton</title><itunes:title>Through to Thriving: Journeying to Joy with Dr. Desmond Upton Patton</itunes:title><description><![CDATA[<p>For a special series of episodes dubbed <em>Through to Thriving </em>that will air throughout the year, Tech Policy Press fellow <strong>Anika Collier Navaroli </strong>is hosting discussions intended to help us imagine possible futures—for tech and tech policy, for democracy, and society—beyond the moment we are in. </p><p>The second episode in the series features her conversation with <strong>Dr. 
Desmond Upton Patton</strong>, who has <a href="https://scholar.google.com/citations?user=Il-GLPIAAAAJ&amp;hl=en" rel="noopener noreferrer" target="_blank">long studied</a> the intersection of technology and social issues and <a href="https://www.asc.upenn.edu/people/faculty/desmond-upton-patton-msw-phd" rel="noopener noreferrer" target="_blank">advised companies</a> developing technologies and policies for social media and AI. Dr. Patton is the Brian and Randi Schwartz University Professor and Penn Integrates Knowledge University Professor at the University of Pennsylvania, and he serves on the board of Tech Policy Press.</p><p>Recently, Dr. Patton has been teaching a class within Annenberg and the School of Social Policy &amp; Practice called "<a href="https://www.asc.upenn.edu/news-events/news/journey-joy" rel="noopener noreferrer" target="_blank"><em>Journey to Joy: Designing a Happier Life</em></a>." In this episode, he discusses his personal and intellectual journey, and what the concept of joy has to do with technology and how we imagine the future.</p>]]></description><content:encoded><![CDATA[<p>For a special series of episodes dubbed <em>Through to Thriving </em>that will air throughout the year, Tech Policy Press fellow <strong>Anika Collier Navaroli </strong>is hosting discussions intended to help us imagine possible futures—for tech and tech policy, for democracy, and society—beyond the moment we are in. </p><p>The second episode in the series features her conversation with <strong>Dr. Desmond Upton Patton</strong>, who has <a href="https://scholar.google.com/citations?user=Il-GLPIAAAAJ&amp;hl=en" rel="noopener noreferrer" target="_blank">long studied</a> the intersection of technology and social issues and <a href="https://www.asc.upenn.edu/people/faculty/desmond-upton-patton-msw-phd" rel="noopener noreferrer" target="_blank">advised companies</a> developing technologies and policies for social media and AI. Dr. 
Patton is the Brian and Randi Schwartz University Professor and Penn Integrates Knowledge University Professor at the University of Pennsylvania, and he serves on the board of Tech Policy Press.</p><p>Recently, Dr. Patton has been teaching a class within Annenberg and the School of Social Policy &amp; Practice called "<a href="https://www.asc.upenn.edu/news-events/news/journey-joy" rel="noopener noreferrer" target="_blank"><em>Journey to Joy: Designing a Happier Life</em></a>." In this episode, he discusses his personal and intellectual journey, and what the concept of joy has to do with technology and how we imagine the future.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/through-to-thriving-journeying-to-joy-with-dr-desmond-upton-patton]]></link><guid isPermaLink="false">b3fdd10e-103a-42f7-a130-d3b5e9dd6419</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 08 Jun 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/b3fdd10e-103a-42f7-a130-d3b5e9dd6419.mp3" length="39566628" type="audio/mpeg"/><itunes:duration>41:13</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Canada&apos;s Post-Election Outlook on Tech Policy</title><itunes:title>Canada&apos;s Post-Election Outlook on Tech Policy</itunes:title><description><![CDATA[<p>Canadian political leaders are in a precarious moment. 
Fresh off the resignation of former Prime Minister <strong>Justin Trudeau</strong> and the ascendancy of his successor, new Prime Minister and Liberal Party leader <strong>Mark Carney</strong>, the nation faces a brewing trade war with the United States and a deteriorating relationship with its president, <strong>Donald Trump</strong>.</p><p>In addition to managing those global tensions, Canadian leaders have a long to-do list on tech policy, including figuring out the nation’s approach to artificial intelligence and online harms. How will the new Carney-led government in Canada navigate those issues?</p><p>Tech Policy Press associate editor <strong>Cristiano Lima-Strong</strong> spoke to three experts to get a sense:</p><ul><li><strong>Renee Black</strong> is founder of goodbot, where she works on preventing harmful disinformation and bias, and establishing frameworks that protect digital rights.</li><li><strong>Maroussia Lévesque</strong> is a doctoral candidate and lecturer at Harvard Law School, an affiliate at the Berkman Klein Center, and a senior fellow at the Center for International Governance Innovation.</li><li><strong>Vass Bednar</strong> is a public policy entrepreneur working at the intersection of technology and public policy.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Canadian political leaders are in a precarious moment. Fresh off the resignation of former Prime Minister <strong>Justin Trudeau</strong> and the ascendancy of his successor, new Prime Minister and Liberal Party leader <strong>Mark Carney</strong>, the nation faces a brewing trade war with the United States and a deteriorating relationship with its president, <strong>Donald Trump</strong>.</p><p>In addition to managing those global tensions, Canadian leaders have a long to-do list on tech policy, including figuring out the nation’s approach to artificial intelligence and online harms. 
How will the new Carney-led government in Canada navigate those issues?</p><p>Tech Policy Press associate editor <strong>Cristiano Lima-Strong</strong> spoke to three experts to get a sense:</p><ul><li><strong>Renee Black</strong> is founder of goodbot, where she works on preventing harmful disinformation and bias, and establishing frameworks that protect digital rights.</li><li><strong>Maroussia Lévesque</strong> is a doctoral candidate and lecturer at Harvard Law School, an affiliate at the Berkman Klein Center, and a senior fellow at the Center for International Governance Innovation.</li><li><strong>Vass Bednar</strong> is a public policy entrepreneur working at the intersection of technology and public policy.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/canadas-post-election-outlook-on-tech-policy]]></link><guid isPermaLink="false">f01dd2aa-4bca-4575-b56e-0387cc92f815</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 06 Jun 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/f01dd2aa-4bca-4575-b56e-0387cc92f815.mp3" length="38645028" type="audio/mpeg"/><itunes:duration>40:15</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Taking on the AI Con</title><itunes:title>Taking on the AI Con</itunes:title><description><![CDATA[<p><strong>Emily M. Bender</strong> and <strong>Alex Hanna </strong>are the authors of a new book that <em>The Guardian</em> calls “refreshingly sarcastic” and <em>Business Insider</em> calls a “funny and irreverent deconstruction of AI.” They are also occasional contributors to Tech Policy Press. 
<strong>Justin Hendrix</strong> spoke to them about their new book, <a href="https://thecon.ai/" rel="noopener noreferrer" target="_blank"><em>The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want</em></a><em>,</em> just out from Harper Collins.&nbsp;</p>]]></description><content:encoded><![CDATA[<p><strong>Emily M. Bender</strong> and <strong>Alex Hanna </strong>are the authors of a new book that <em>The Guardian</em> calls “refreshingly sarcastic” and <em>Business Insider</em> calls a “funny and irreverent deconstruction of AI.” They are also occasional contributors to Tech Policy Press. <strong>Justin Hendrix</strong> spoke to them about their new book, <a href="https://thecon.ai/" rel="noopener noreferrer" target="_blank"><em>The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want</em></a><em>,</em> just out from Harper Collins.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/taking-on-the-ai-con]]></link><guid isPermaLink="false">f1384818-2c89-48b2-8d8f-65ccb142c548</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 01 Jun 2025 09:01:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/f1384818-2c89-48b2-8d8f-65ccb142c548.mp3" length="35220713" type="audio/mpeg"/><itunes:duration>36:41</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Assessing the Relationship Between Information Ecosystems and Democracy&apos;s Woes</title><itunes:title>Assessing the Relationship Between Information Ecosystems and Democracy&apos;s Woes</itunes:title><description><![CDATA[<p>Earlier this year, an entity called the Observatory on Information and Democracy released a major report called <a href="https://observatory.informationdemocracy.org/report/information-ecosystem-and-troubled-democracy/" rel="noopener noreferrer" target="_blank"><em>INFORMATION 
ECOSYSTEMS AND TROUBLED DEMOCRACY: A Global Synthesis of the State of Knowledge on News Media, AI and Data Governance</em></a>. The report is the product of three research assessment panels (over 60 volunteer researchers, coordinated by six rapporteurs and led by a scientific director) that together considered over 1,600 sources on topics at the intersection of technology, media, and democracy, ranging from trust in news to how mis- and disinformation is linked to societal and political polarization. <strong>Justin Hendrix</strong> spoke to that scientific director, <strong>Robin Mansell,</strong> and one of the other individuals involved in the project as chair of its steering committee, <strong>Courtney Radsch</strong>, who is also on the board of Tech Policy Press.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>Earlier this year, an entity called the Observatory on Information and Democracy released a major report called <a href="https://observatory.informationdemocracy.org/report/information-ecosystem-and-troubled-democracy/" rel="noopener noreferrer" target="_blank"><em>INFORMATION ECOSYSTEMS AND TROUBLED DEMOCRACY: A Global Synthesis of the State of Knowledge on News Media, AI and Data Governance</em></a>. The report is the product of three research assessment panels (over 60 volunteer researchers, coordinated by six rapporteurs and led by a scientific director) that together considered over 1,600 sources on topics at the intersection of technology, media, and democracy, ranging from trust in news to how mis- and disinformation is linked to societal and political polarization. 
<strong>Justin Hendrix</strong> spoke to that scientific director, <strong>Robin Mansell,</strong> and one of the other individuals involved in the project as chair of its steering committee, <strong>Courtney Radsch</strong>, who is also on the board of Tech Policy Press.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/assessing-the-relationship-between-information-ecosystems-and-democracys-woes]]></link><guid isPermaLink="false">146bcce5-1ddf-4bdd-8997-f5991c583482</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 01 Jun 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/146bcce5-1ddf-4bdd-8997-f5991c583482.mp3" length="50182786" type="audio/mpeg"/><itunes:duration>52:16</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>An Interview with California&apos;s New State Chief Technology Innovation Officer</title><itunes:title>An Interview with California&apos;s New State Chief Technology Innovation Officer</itunes:title><description><![CDATA[<p>In February, California Governor <strong>Gavin Newsom</strong> <a href="https://www.gov.ca.gov/2025/02/13/governor-newsom-announces-appointments-2-13-25/" rel="noopener noreferrer" target="_blank">appointed</a> <strong>Vera Zakem</strong> as California’s State Chief Technology Innovation Officer at the&nbsp;<strong>California&nbsp;</strong>Department of Technology. Zakem brings deep experience from national security, democracy and human rights, and technology policy. Most recently, under former President Joe Biden, she served as the Chief Digital Democracy and Rights Officer at USAID, where she led global efforts to align emerging technologies with democratic values.&nbsp;Zakem assumes the role as California, like many governments, is accelerating its embrace of artificial intelligence. 
</p><p><strong>Justin Hendrix</strong> spoke with Zakem about the promise of state-led innovation and how to avoid its perils, what responsible AI governance might mean in practice, and how California might chart a course that’s both ambitious and accountable to its citizens.</p>]]></description><content:encoded><![CDATA[<p>In February, California Governor <strong>Gavin Newsom</strong> <a href="https://www.gov.ca.gov/2025/02/13/governor-newsom-announces-appointments-2-13-25/" rel="noopener noreferrer" target="_blank">appointed</a> <strong>Vera Zakem</strong> as California’s State Chief Technology Innovation Officer at the&nbsp;<strong>California&nbsp;</strong>Department of Technology. Zakem brings deep experience from national security, democracy and human rights, and technology policy. Most recently, under former President Joe Biden, she served as the Chief Digital Democracy and Rights Officer at USAID, where she led global efforts to align emerging technologies with democratic values.&nbsp;Zakem assumes the role as California, like many governments, is accelerating its embrace of artificial intelligence. 
</p><p><strong>Justin Hendrix</strong> spoke with Zakem about the promise of state-led innovation and how to avoid its perils, what responsible AI governance might mean in practice, and how California might chart a course that’s both ambitious and accountable to its citizens.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/an-interview-with-californias-new-state-chief-technology-and-innovation-officer]]></link><guid isPermaLink="false">f3a192a6-ca48-4251-b031-33032ae32846</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 29 May 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/f3a192a6-ca48-4251-b031-33032ae32846.mp3" length="26923377" type="audio/mpeg"/><itunes:duration>28:03</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Considering a New &apos;Civil Rights Approach to AI&apos;</title><itunes:title>Considering a New &apos;Civil Rights Approach to AI&apos;</itunes:title><description><![CDATA[<p>On May 29, the&nbsp;Center for Civil Rights and Technology at <em>The Leadership Conference on Civil and Human Rights</em> released its&nbsp;<a href="http://email.media.civilrights.org/c/eJwUyjFyhSAQANDTQOksuyBYUKTxHrAsXxK_OkjM5PaZ9K_EMptUSUs03gaHCy1Ob9FaJkoJfDVI2eW5zmJyZvEsyZSkW0RABw4XY2kmO_kCYLBwDlwpu6AsvKW0NHF72t7baxv3dPaX3uM2xnUr-lC4KlzbcZxPGu08ak9v-Tn7179TuOoeP7_v0Q5lYQhv17k3_p2uLvetR0whgV8qsLFLzsb6XBxQsYTCwUPRI1bGDBaRgwMQYm8EKhDBLAFTYP1E_AsAAP__SG5OPg" rel="noopener noreferrer" target="_blank">Innovation Framework</a>, which it calls a “new guiding document for companies that invest in, create, and use artificial intelligence (AI), to ensure that their AI systems protect and promote civil rights and are fair, trusted, and safe for all of us, especially communities historically pushed to the margins.” </p><p><strong>Justin Hendrix</strong> spoke to the 
Center’s&nbsp;senior policy advisor on Civil Rights and Technology, <strong>Frank Torres</strong>, about the framework, the ideas that informed it, and the Center’s interactions with industry.</p>]]></description><content:encoded><![CDATA[<p>On May 29, the&nbsp;Center for Civil Rights and Technology at <em>The Leadership Conference on Civil and Human Rights</em> released its&nbsp;<a href="http://email.media.civilrights.org/c/eJwUyjFyhSAQANDTQOksuyBYUKTxHrAsXxK_OkjM5PaZ9K_EMptUSUs03gaHCy1Ob9FaJkoJfDVI2eW5zmJyZvEsyZSkW0RABw4XY2kmO_kCYLBwDlwpu6AsvKW0NHF72t7baxv3dPaX3uM2xnUr-lC4KlzbcZxPGu08ak9v-Tn7179TuOoeP7_v0Q5lYQhv17k3_p2uLvetR0whgV8qsLFLzsb6XBxQsYTCwUPRI1bGDBaRgwMQYm8EKhDBLAFTYP1E_AsAAP__SG5OPg" rel="noopener noreferrer" target="_blank">Innovation Framework</a>, which it calls a “new guiding document for companies that invest in, create, and use artificial intelligence (AI), to ensure that their AI systems protect and promote civil rights and are fair, trusted, and safe for all of us, especially communities historically pushed to the margins.” </p><p><strong>Justin Hendrix</strong> spoke to the Center’s&nbsp;senior policy advisor on Civil Rights and Technology, <strong>Frank Torres</strong>, about the framework, the ideas that informed it, and the Center’s interactions with industry.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/considering-a-new-civil-rights-approach-to-ai]]></link><guid isPermaLink="false">6c3078c4-a3cb-443b-9466-a6f80d8576ea</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 29 May 2025 08:45:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/6c3078c4-a3cb-443b-9466-a6f80d8576ea.mp3" length="21744867" type="audio/mpeg"/><itunes:duration>22:39</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A 10-Year Moratorium on Enforcing State AI 
Laws?</title><itunes:title>A 10-Year Moratorium on Enforcing State AI Laws?</itunes:title><description><![CDATA[<p>On Thursday, May 22, the United States House of Representatives <a href="https://www.techpolicy.press/us-house-passes-10year-moratorium-on-state-ai-laws/" rel="noopener noreferrer" target="_blank">narrowly advanced</a> a <a href="https://cdn.sanity.io/files/3tzzh18d/production/972234b764c6f2b3c3f5e3eabfd7ec63ae2fb81b.pdf" rel="noopener noreferrer" target="_blank">budget bill</a> that included the "Artificial Intelligence and Information Technology Modernization Initiative," which includes a 10-year moratorium on the enforcement of state AI laws. Tech Policy Press editor <strong>Justin Hendrix </strong>and associate editor <strong>Cristiano Lima-Strong</strong> discussed the moratorium, the contours of the debate around it, and its prospects in the Senate.</p>]]></description><content:encoded><![CDATA[<p>On Thursday, May 22, the United States House of Representatives <a href="https://www.techpolicy.press/us-house-passes-10year-moratorium-on-state-ai-laws/" rel="noopener noreferrer" target="_blank">narrowly advanced</a> a <a href="https://cdn.sanity.io/files/3tzzh18d/production/972234b764c6f2b3c3f5e3eabfd7ec63ae2fb81b.pdf" rel="noopener noreferrer" target="_blank">budget bill</a> that included the "Artificial Intelligence and Information Technology Modernization Initiative," which includes a 10-year moratorium on the enforcement of state AI laws. 
Tech Policy Press editor <strong>Justin Hendrix </strong>and associate editor <strong>Cristiano Lima-Strong</strong> discussed the moratorium, the contours of the debate around it, and its prospects in the Senate.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-10-year-moratorium-on-enforcing-state-ai-laws]]></link><guid isPermaLink="false">c2d39322-9a98-460a-985d-a7f99c38ad14</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 25 May 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/c2d39322-9a98-460a-985d-a7f99c38ad14.mp3" length="18996354" type="audio/mpeg"/><itunes:duration>19:47</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Decolonizing the Future: Karen Hao on Resisting the Empire of AI</title><itunes:title>Decolonizing the Future: Karen Hao on Resisting the Empire of AI</itunes:title><description><![CDATA[<p>In his <em>New York Times </em>review of the book, Columbia Law School professor and former White House official <strong>Tim Wu</strong> calls journalist <strong>Karen Hao’s</strong> new book, <a href="https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/" rel="noopener noreferrer" target="_blank"><em>Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI</em></a>, “a corrective to tech journalism that rarely leaves Silicon Valley.” </p><p>Hao has appeared on this podcast before, to <a href="https://www.techpolicy.press/the-sunday-show-information-disorder-and-who-profits-from-it/" rel="noopener noreferrer" target="_blank">help us understand</a> how the business model of social media platforms incentivizes the deterioration of information ecosystems, the <a href="https://www.techpolicy.press/the-saga-at-openai-lessons-for-policymakers/" rel="noopener noreferrer" target="_blank">series of events</a> around OpenAI CEO 
<strong>Sam Altman’s</strong> abrupt firing in 2023, and the <a href="https://www.techpolicy.press/deepseek-prompts-a-rethink/" rel="noopener noreferrer" target="_blank">furor around</a> the launch of DeepSeek last year. This week, <strong>Justin Hendrix </strong>spoke with Hao about the book, and what she imagines for the future.</p>]]></description><content:encoded><![CDATA[<p>In his <em>New York Times </em>review of the book, Columbia Law School professor and former White House official <strong>Tim Wu</strong> calls journalist <strong>Karen Hao’s</strong> new book, <a href="https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/" rel="noopener noreferrer" target="_blank"><em>Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI</em></a>, “a corrective to tech journalism that rarely leaves Silicon Valley.” </p><p>Hao has appeared on this podcast before, to <a href="https://www.techpolicy.press/the-sunday-show-information-disorder-and-who-profits-from-it/" rel="noopener noreferrer" target="_blank">help us understand</a> how the business model of social media platforms incentivizes the deterioration of information ecosystems, the <a href="https://www.techpolicy.press/the-saga-at-openai-lessons-for-policymakers/" rel="noopener noreferrer" target="_blank">series of events</a> around OpenAI CEO <strong>Sam Altman’s</strong> abrupt firing in 2023, and the <a href="https://www.techpolicy.press/deepseek-prompts-a-rethink/" rel="noopener noreferrer" target="_blank">furor around</a> the launch of DeepSeek last year. 
This week, <strong>Justin Hendrix </strong>spoke with Hao about the book, and what she imagines for the future.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/decolonizing-the-future-karen-hao-on-resisting-the-empire-of-ai]]></link><guid isPermaLink="false">faf8b11b-46b0-41d8-b75b-9325c779a96c</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 23 May 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/faf8b11b-46b0-41d8-b75b-9325c779a96c.mp3" length="42747726" type="audio/mpeg"/><itunes:duration>44:32</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What the History of Internet Governance Tells Us About the Future of Tech Policy</title><itunes:title>What the History of Internet Governance Tells Us About the Future of Tech Policy</itunes:title><description><![CDATA[<p>Today’s guest is <strong>Milton L. Mueller</strong>,  a professor at the Georgia Institute of Technology in the School of Public Policy and the head of an advocacy policy analysis group called the Internet Governance Project. Mueller has long walked the halls and sat in the rooms where internet governance is discussed and debated, and has played a role in shaping global Internet policies and institutions. He’s the author of a new book called <a href="https://www.penguinrandomhouse.com/books/777542/declaring-independence-in-cyberspace-by-milton-l-mueller/" rel="noopener noreferrer" target="_blank"><em>Declaring Independence in Cyberspace: Internet Self-Governance and the End of US Control of ICANN</em></a>, which takes us into those rooms, telling the story of how and why the US government gave up its control of ICANN, a key internet governance institution responsible for internet names, numbers, and protocols. 
That history tells us a lot about where we are today when it comes to the broader geopolitics and governance of technology, and it has implications for the governance fights ahead, including over artificial intelligence.</p>]]></description><content:encoded><![CDATA[<p>Today’s guest is <strong>Milton L. Mueller</strong>,  a professor at the Georgia Institute of Technology in the School of Public Policy and the head of an advocacy policy analysis group called the Internet Governance Project. Mueller has long walked the halls and sat in the rooms where internet governance is discussed and debated, and has played a role in shaping global Internet policies and institutions. He’s the author of a new book called <a href="https://www.penguinrandomhouse.com/books/777542/declaring-independence-in-cyberspace-by-milton-l-mueller/" rel="noopener noreferrer" target="_blank"><em>Declaring Independence in Cyberspace: Internet Self-Governance and the End of US Control of ICANN</em></a>, which takes us into those rooms, telling the story of how and why the US government gave up its control of ICANN, a key internet governance institution responsible for internet names, numbers, and protocols. 
That history tells us a lot about where we are today when it comes to the broader geopolitics and governance of technology, and it has implications for the governance fights ahead, including over artificial intelligence.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-the-history-of-internet-governance-tells-us-about-the-future-of-tech-policy]]></link><guid isPermaLink="false">8e0311ad-23fc-4b0b-8467-376e3e35e01e</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 18 May 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/8e0311ad-23fc-4b0b-8467-376e3e35e01e.mp3" length="45799672" type="audio/mpeg"/><itunes:duration>47:42</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Between Borders and Lies: Fact-Checkers on Navigating the India-Pakistan Conflict</title><itunes:title>Between Borders and Lies: Fact-Checkers on Navigating the India-Pakistan Conflict</itunes:title><description><![CDATA[<p>In the wake of the most intense India-Pakistan escalation in two decades, experts are still trying to make sense of the role that the information war played in the physical one. 
In this episode, Tech Policy Press Associate Editor <strong>Ramsha Jahangir </strong>speaks to two experts from India and Pakistan who tirelessly&nbsp;navigated the deluge of rumor and disinformation during the crisis, and who came away with&nbsp;thoughts about the role of social media platforms and the incentives they create, particularly in times of conflict:</p><ul><li><strong>Pratik Sinha</strong>, co-founder and editor at&nbsp;<a href="https://www.altnews.in/" rel="noopener noreferrer" target="_blank">Alt News</a>—one of India’s major fact-checking websites, and<strong>&nbsp;</strong></li><li><strong>Asad Baig</strong>, founder of&nbsp;<a href="https://mediamatters.pk/" rel="noopener noreferrer" target="_blank">Media Matters for Democracy</a>—a non-profit focused on media literacy and development in Pakistan.</li></ul><br/><p>Sinha and Baig reflect on how the India-Pakistan conflict played out across digital platforms—and how it revealed a deeper, more dangerous dysfunction in the information ecosystem.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>In the wake of the most intense India-Pakistan escalation in two decades, experts are still trying to make sense of the role that the information war played in the physical one. 
In this episode, Tech Policy Press Associate Editor <strong>Ramsha Jahangir </strong>speaks to two experts from India and Pakistan who tirelessly&nbsp;navigated the deluge of rumor and disinformation during the crisis, and who came away with&nbsp;thoughts about the role of social media platforms and the incentives they create, particularly in times of conflict:</p><ul><li><strong>Pratik Sinha</strong>, co-founder and editor at&nbsp;<a href="https://www.altnews.in/" rel="noopener noreferrer" target="_blank">Alt News</a>—one of India’s major fact-checking websites, and<strong>&nbsp;</strong></li><li><strong>Asad Baig</strong>, founder of&nbsp;<a href="https://mediamatters.pk/" rel="noopener noreferrer" target="_blank">Media Matters for Democracy</a>—a non-profit focused on media literacy and development in Pakistan.</li></ul><br/><p>Sinha and Baig reflect on how the India-Pakistan conflict played out across digital platforms—and how it revealed a deeper, more dangerous dysfunction in the information ecosystem.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/between-borders-and-lies-fact-checkers-on-navigating-the-india-pakistan-conflict]]></link><guid isPermaLink="false">ad4174b2-ca4f-47d5-ba48-332b2c8eeaf6</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 13 May 2025 08:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/ad4174b2-ca4f-47d5-ba48-332b2c8eeaf6.mp3" length="26488752" type="audio/mpeg"/><itunes:duration>27:36</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Taking Stock of the Google Search Remedies Trial</title><itunes:title>Taking Stock of the Google Search Remedies Trial</itunes:title><description><![CDATA[<p> Last year, a United States federal judge <a href="https://www.courtlistener.com/docket/18552824/1033/united-states-of-america-v-google-llc/" 
rel="noopener noreferrer" target="_blank">ruled</a> that Google is a monopolist in the market for online search. For the past three weeks, the company and the Justice Department have been in court to hash out what remedies might look like. Tech Policy Press associate editor <strong>Cristiano Lima-Strong</strong> spoke to two experts who are following the case closely, including <strong>Karina Montoya, </strong>a senior reporter and analyst for the Center for Journalism and Liberty at the Open Markets Institute, and <strong>Joseph Coniglio</strong>, the director of antitrust and innovation at the Information Technology and Innovation Foundation (ITIF).</p>]]></description><content:encoded><![CDATA[<p> Last year, a United States federal judge <a href="https://www.courtlistener.com/docket/18552824/1033/united-states-of-america-v-google-llc/" rel="noopener noreferrer" target="_blank">ruled</a> that Google is a monopolist in the market for online search. For the past three weeks, the company and the Justice Department have been in court to hash out what remedies might look like.
Tech Policy Press associate editor <strong>Cristiano Lima-Strong</strong> spoke to two experts who are following the case closely, including <strong>Karina Montoya, </strong>a senior reporter and analyst for the Center for Journalism and Liberty at the Open Markets Institute, and <strong>Joseph Coniglio</strong>, the director of antitrust and innovation at the Information Technology and Innovation Foundation (ITIF).</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/taking-stock-of-the-google-search-remedies-trial]]></link><guid isPermaLink="false">e6182102-6d6d-408c-83e1-7557df99b706</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 11 May 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/e6182102-6d6d-408c-83e1-7557df99b706.mp3" length="24469568" type="audio/mpeg"/><itunes:duration>33:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>xAI&apos;s Memphis Neighbors Push for Facts and Fairness</title><itunes:title>xAI&apos;s Memphis Neighbors Push for Facts and Fairness</itunes:title><description><![CDATA[<p>Last year, <strong>Elon Musk's</strong> xAI set up its "Colossus" supercomputer in an old Electrolux manufacturing facility in Memphis, Tennessee.
Now, the residents of nearby neighborhoods are pushing for facts and fair treatment as the company looks to expand its footprint amid questions about its environmental impact. <strong>Justin Hendrix</strong> considers the state of play with <strong>Dara Kerr</strong>, a reporter for The Guardian; <strong>Amber Sherman</strong>, a Memphis activist; and artifacts from local media reporting over the past year.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/xais-memphis-neighbors-push-for-facts-and-fairness]]></link><guid isPermaLink="false">81b1ebc9-838e-46ca-beb3-e866f4e50f95</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 06 May 2025 08:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/81b1ebc9-838e-46ca-beb3-e866f4e50f95.mp3" length="21557866" type="audio/mpeg"/><itunes:duration>25:40</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How Venture Capital Warps the World</title><itunes:title>How Venture Capital Warps the World</itunes:title><description><![CDATA[<p><strong>Catherine Bracy</strong>&nbsp;is a civic technologist and community organizer whose work focuses on the intersection of technology and political and economic inequality. Justin Hendrix spoke with her about her new book, <a href="https://www.penguinrandomhouse.com/books/723091/world-eaters-by-catherine-bracy/" rel="noopener noreferrer" target="_blank"><em>World Eaters: How Venture Capital is Cannibalizing the Economy</em></a><em>. </em>In it, she suggests how the venture capital industry must be reformed to deliver true innovation that advances society rather than merely outsized returns for an increasingly monolithic set of investors. 
</p>]]></description><content:encoded><![CDATA[<p><strong>Catherine Bracy</strong>&nbsp;is a civic technologist and community organizer whose work focuses on the intersection of technology and political and economic inequality. Justin Hendrix spoke with her about her new book, <a href="https://www.penguinrandomhouse.com/books/723091/world-eaters-by-catherine-bracy/" rel="noopener noreferrer" target="_blank"><em>World Eaters: How Venture Capital is Cannibalizing the Economy</em></a><em>. </em>In it, she suggests how the venture capital industry must be reformed to deliver true innovation that advances society rather than merely outsized returns for an increasingly monolithic set of investors. </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-venture-capital-warps-the-world]]></link><guid isPermaLink="false">5862d6da-1a88-4dd8-bff0-5ddb878de432</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 04 May 2025 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/5862d6da-1a88-4dd8-bff0-5ddb878de432.mp3" length="29806914" type="audio/mpeg"/><itunes:duration>35:29</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Adam Becker Takes Aim at Silicon Valley Nonsense</title><itunes:title>Adam Becker Takes Aim at Silicon Valley Nonsense</itunes:title><description><![CDATA[<p>From visions of AI paradise to the project to defeat death, many dangerous and unscientific ideas are driving Silicon Valley leaders. 
<strong>Justin Hendrix </strong>spoke to <strong>Adam Becker</strong>, a science journalist and author of <a href="https://www.hachettebookgroup.com/titles/adam-becker/more-everything-forever/9781541619593/?lens=basic-books" rel="noopener noreferrer" target="_blank"><em>MORE EVERYTHING FOREVER:&nbsp;AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity</em></a>, just out from Basic Books.</p>]]></description><content:encoded><![CDATA[<p>From visions of AI paradise to the project to defeat death, many dangerous and unscientific ideas are driving Silicon Valley leaders. <strong>Justin Hendrix </strong>spoke to <strong>Adam Becker</strong>, a science journalist and author of <a href="https://www.hachettebookgroup.com/titles/adam-becker/more-everything-forever/9781541619593/?lens=basic-books" rel="noopener noreferrer" target="_blank"><em>MORE EVERYTHING FOREVER:&nbsp;AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity</em></a>, just out from Basic Books.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/adam-becker-takes-aim-at-silicon-valley-nonsense]]></link><guid isPermaLink="false">3ef518d4-a636-4595-bb45-968540b00b67</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 27 Apr 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b8632c7d-9a00-461b-b352-2037a36399ca/TPP340-converted.mp3" length="33869632" type="audio/mpeg"/><itunes:duration>40:19</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Through to Thriving: Building Community with Ellen Pao</title><itunes:title>Through to Thriving: Building Community with Ellen Pao</itunes:title><description><![CDATA[<p>For a special series of episodes that will air throughout the year, Tech Policy Press fellow <strong>Anika Collier Navaroli </strong>is 
hosting a series of discussions intended to help us imagine possible futures—for tech and tech policy, for democracy, and society—beyond the moment we are in. Dubbed <em>Through to Thriving</em>, the first episode in the series features a discussion on how to build community and solidarity with <strong>Ellen Pao</strong>, currently the co-founder of a nonprofit called <a href="https://projectinclude.org/" rel="noopener noreferrer" target="_blank">Project Include</a>, which focuses on advancing diversity and inclusion in the tech sector. Previously, Pao was the interim CEO of Reddit and a venture capitalist. </p><p><br></p>]]></description><content:encoded><![CDATA[<p>For a special series of episodes that will air throughout the year, Tech Policy Press fellow <strong>Anika Collier Navaroli </strong>is hosting a series of discussions intended to help us imagine possible futures—for tech and tech policy, for democracy, and society—beyond the moment we are in. Dubbed <em>Through to Thriving</em>, the first episode in the series features a discussion on how to build community and solidarity with <strong>Ellen Pao</strong>, currently the co-founder of a nonprofit called <a href="https://projectinclude.org/" rel="noopener noreferrer" target="_blank">Project Include</a>, which focuses on advancing diversity and inclusion in the tech sector. Previously, Pao was the interim CEO of Reddit and a venture capitalist. 
</p><p><br></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/through-to-thriving-building-community-with-ellen-pao]]></link><guid isPermaLink="false">acd994a5-e204-4948-b6ed-905d5e006638</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 20 Apr 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/242773d5-106a-405b-bd27-b2c48ae6b9af/TPP339-converted.mp3" length="36089836" type="audio/mpeg"/><itunes:duration>50:07</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Researchers Defend the Scientific Consensus on Bias and Discrimination in AI</title><itunes:title>Researchers Defend the Scientific Consensus on Bias and Discrimination in AI</itunes:title><description><![CDATA[<p>Last month, a group of researchers <a href="https://www.aibiasconsensus.org/" rel="noopener noreferrer" target="_blank">published a letter</a> “Affirming the Scientific Consensus on Bias and Discrimination in AI.” The letter, published at a time when the Trump administration is rolling back policies and threatening research aimed at protecting people from bias and discrimination in AI, carries the signatures of more than 200 experts. </p><p>To learn more about their goals, <strong>Justin Hendrix</strong> spoke to three of the signatories:</p><ul><li><strong>J. 
Nathan Matias</strong>, an Assistant Professor in the Department of Communication and Information Science at Cornell University.</li><li><strong>Emma Pierson</strong>, an Assistant Professor of Computer Science at the University of California, Berkeley.</li><li><strong>Suresh Venkatasubramanian,</strong> a Professor of Computer Science and Data Science at Brown University.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Last month, a group of researchers <a href="https://www.aibiasconsensus.org/" rel="noopener noreferrer" target="_blank">published a letter</a> “Affirming the Scientific Consensus on Bias and Discrimination in AI.” The letter, published at a time when the Trump administration is rolling back policies and threatening research aimed at protecting people from bias and discrimination in AI, carries the signatures of more than 200 experts. </p><p>To learn more about their goals, <strong>Justin Hendrix</strong> spoke to three of the signatories:</p><ul><li><strong>J. 
Nathan Matias</strong>, an Assistant Professor in the Department of Communication and Information Science at Cornell University.</li><li><strong>Emma Pierson</strong>, an Assistant Professor of Computer Science at the University of California, Berkeley.</li><li><strong>Suresh Venkatasubramanian,</strong> a Professor of Computer Science and Data Science at Brown University.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/researchers-defend-the-scientific-consensus-on-bias-and-discrimination-in-ai]]></link><guid isPermaLink="false">d6d5d4c6-714d-47e3-b0f6-14b11ce2bab0</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Wed, 16 Apr 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d007afa6-1916-4c75-b348-381d6b0cd17c/TPP338-converted.mp3" length="15848702" type="audio/mpeg"/><itunes:duration>18:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Guide to the FTC&apos;s Case Against Meta</title><itunes:title>A Guide to the FTC&apos;s Case Against Meta</itunes:title><description><![CDATA[<p>On Monday, April 14, the US Federal Trade Commission (FTC) will <a href="https://www.techpolicy.press/ftc-heads-to-trial-to-break-up-meta-what-you-need-to-know/" rel="noopener noreferrer" target="_blank">kick off its trial against Meta</a>. In process for years, the case is over whether <strong>Mark Zuckerberg’s</strong> company has an illegal monopoly over social media and whether it should be forced to spin off Instagram and WhatsApp.</p><p>To prepare to cover the arguments, Tech Policy Press Associate Editor <strong>Cristiano Lima-Strong</strong> spoke to two experts to better understand the issues at play.</p><ul><li><strong>William (Bill) Kovacic</strong> is a Professor of Law and Policy and Director of the Competition Law Center at the George Washington School of Law. 
From January 2006 to October 2011, he was a member of the Federal Trade Commission and chaired the agency from March 2008 to March 2009. And for nearly a decade, Professor Kovacic served as a Non-Executive Director with the United Kingdom's Competition and Markets Authority.</li><li><strong>Gene Kimmelman&nbsp;</strong>is a senior policy fellow at Yale’s Tobin Center for Economic Policy. He was the Justice Department’s deputy associate attorney general during the Biden administration, and he has served as chief counsel to the head of the DOJ Antitrust Division and the Senate Antitrust Subcommittee.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>On Monday, April 14, the US Federal Trade Commission (FTC) will <a href="https://www.techpolicy.press/ftc-heads-to-trial-to-break-up-meta-what-you-need-to-know/" rel="noopener noreferrer" target="_blank">kick off its trial against Meta</a>. In process for years, the case is over whether <strong>Mark Zuckerberg’s</strong> company has an illegal monopoly over social media and whether it should be forced to spin off Instagram and WhatsApp.</p><p>To prepare to cover the arguments, Tech Policy Press Associate Editor <strong>Cristiano Lima-Strong</strong> spoke to two experts to better understand the issues at play.</p><ul><li><strong>William (Bill) Kovacic</strong> is a Professor of Law and Policy and Director of the Competition Law Center at the George Washington School of Law. From January 2006 to October 2011, he was a member of the Federal Trade Commission and chaired the agency from March 2008 to March 2009. And for nearly a decade, Professor Kovacic served as a Non-Executive Director with the United Kingdom's Competition and Markets Authority.</li><li><strong>Gene Kimmelman&nbsp;</strong>is a senior policy fellow at Yale’s Tobin Center for Economic Policy. 
He was the Justice Department’s deputy associate attorney general during the Biden administration, and he has served as chief counsel to the head of the DOJ Antitrust Division and the Senate Antitrust Subcommittee.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-guide-to-the-ftcs-case-against-meta]]></link><guid isPermaLink="false">cb46c05b-ef77-4be2-a3e1-ea4219f81017</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 13 Apr 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d3939cff-32a5-4938-a75c-fd72bcbec370/TPP337-converted.mp3" length="25103695" type="audio/mpeg"/><itunes:duration>34:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What We Don&apos;t Know About DSA Enforcement</title><itunes:title>What We Don&apos;t Know About DSA Enforcement</itunes:title><description><![CDATA[<p>On April 4, <em>The New York Times</em> reported that the European Commission is considering fining X, formerly Twitter, as part of its ongoing DSA investigation, which began in 2023. Tech Policy Press has discussed at length the extent and quality of transparency from platforms under the DSA, but there is limited insight into how the Commission is conducting its investigations into large online platforms and search engines. In most cases, the publicly available documents on cases are just press releases, while enforcement strategies and methods are not spelled out. </p><p>To delve into the challenges this lack of transparency presents and how it impacts the public's understanding of the DSA, Tech Policy Press Associate Editor <strong>Ramsha Jahangir</strong> spoke to two researchers:</p><ul><li><strong>Jacob van de Kerkhof, </strong>a PhD researcher at Utrecht University.
His research is focused on the DSA and freedom of expression.</li><li><strong>Matteo Fabbri, </strong>a PhD candidate at IMT School for Advanced Studies in Lucca, Italy. Fabbri is also a visiting scholar at the Institute for Information Law at the University of Amsterdam. He recently published a <a href="https://www.researchgate.net/publication/389799746_The_Role_of_Requests_for_Information_in_Governing_Digital_Platforms_Under_the_Digital_Services_Act_The_Case_of_X" rel="noopener noreferrer" target="_blank">research article</a> titled "The Role of Requests for Information in Governing Digital Platforms Under the Digital Services Act: The Case of X."</li></ul><br/>]]></description><content:encoded><![CDATA[<p>On April 4, <em>The New York Times</em> reported that the European Commission is considering fining X, formerly Twitter, as part of its ongoing DSA investigation, which began in 2023. Tech Policy Press has discussed at length the extent and quality of transparency from platforms under the DSA, but there is limited insight into how the Commission is conducting its investigations into large online platforms and search engines. In most cases, the publicly available documents on cases are just press releases, while enforcement strategies and methods are not spelled out. </p><p>To delve into the challenges this lack of transparency presents and how it impacts the public's understanding of the DSA, Tech Policy Press Associate Editor <strong>Ramsha Jahangir</strong> spoke to two researchers:</p><ul><li><strong>Jacob van de Kerkhof, </strong>a PhD researcher at Utrecht University. His research is focused on the DSA and freedom of expression.</li><li><strong>Matteo Fabbri, </strong>a PhD candidate at IMT School for Advanced Studies in Lucca, Italy. Fabbri is also a visiting scholar at the Institute for Information Law at the University of Amsterdam.
He recently published a <a href="https://www.researchgate.net/publication/389799746_The_Role_of_Requests_for_Information_in_Governing_Digital_Platforms_Under_the_Digital_Services_Act_The_Case_of_X" rel="noopener noreferrer" target="_blank">research article</a> titled "The Role of Requests for Information in Governing Digital Platforms Under the Digital Services Act: The Case of X."</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-we-dont-know-about-dsa-enforcement]]></link><guid isPermaLink="false">39c031ed-fb83-4be7-ad8f-5cd1df83a9fd</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 08 Apr 2025 01:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/1867ee1d-d7f4-4a4e-beb8-bbfe9ee72ebc/TPP336-converted.mp3" length="24979870" type="audio/mpeg"/><itunes:duration>29:44</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>DOGE and the United States of AI</title><itunes:title>DOGE and the United States of AI</itunes:title><description><![CDATA[<p>Across the United States and in some cities abroad yesterday, protestors took to the streets to resist the policies of US President <strong>Donald Trump</strong>. Dubbed the "Hands Off" protests, over 1,400 events took place, including in New York City, where protestors called for billionaire <strong>Elon Musk</strong> to be ousted from his role in government and for an end to the Department of Government Efficiency (DOGE), which has gutted government agencies and programs and sought to install artificial intelligence systems to purportedly identify wasteful spending and reduce the federal workforce.</p><p>In this conversation, <strong>Justin Hendrix</strong> is joined by four individuals who are following DOGE closely. 
The conversation touches on the broader context and history of attempts to use technology to streamline and improve government services, the apparent ideology behind DOGE and its conception of AI, and what the future may look like after DOGE. Guests include:</p><ul><li><strong>Eryk Salvaggio</strong>, a visiting professor at the Rochester Institute of Technology and a fellow at Tech Policy Press;</li><li><strong>Rebecca Williams</strong>, a senior strategist in the Privacy and Data Governance Unit at ACLU;</li><li><strong>Emily Tavoulareas</strong>, who teaches and conducts research at Georgetown's McCourt School for Public Policy and is leading a project to document the founding of the US Digital Service; and </li><li><strong>Matthew Kirschenbaum</strong>, Distinguished University Professor in the Department of English at the University of Maryland.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Across the United States and in some cities abroad yesterday, protestors took to the streets to resist the policies of US President <strong>Donald Trump</strong>. Dubbed the "Hands Off" protests, over 1,400 events took place, including in New York City, where protestors called for billionaire <strong>Elon Musk</strong> to be ousted from his role in government and for an end to the Department of Government Efficiency (DOGE), which has gutted government agencies and programs and sought to install artificial intelligence systems to purportedly identify wasteful spending and reduce the federal workforce.</p><p>In this conversation, <strong>Justin Hendrix</strong> is joined by four individuals who are following DOGE closely. The conversation touches on the broader context and history of attempts to use technology to streamline and improve government services, the apparent ideology behind DOGE and its conception of AI, and what the future may look like after DOGE. 
Guests include:</p><ul><li><strong>Eryk Salvaggio</strong>, a visiting professor at the Rochester Institute of Technology and a fellow at Tech Policy Press;</li><li><strong>Rebecca Williams</strong>, a senior strategist in the Privacy and Data Governance Unit at ACLU;</li><li><strong>Emily Tavoulareas</strong>, who teaches and conducts research at Georgetown's McCourt School for Public Policy and is leading a project to document the founding of the US Digital Service; and </li><li><strong>Matthew Kirschenbaum</strong>, Distinguished University Professor in the Department of English at the University of Maryland.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/doge-and-the-united-states-of-ai]]></link><guid isPermaLink="false">68c0256a-71ee-466c-a6f3-d48cda1d44c3</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 06 Apr 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b81b69e3-b531-49ee-83f6-e98c933e6e1d/TPP335-converted.mp3" length="45332599" type="audio/mpeg"/><itunes:duration>53:58</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Part 2: Technology, Democracy, and Power—Journalism’s Role in a Time of Crisis</title><itunes:title>Part 2: Technology, Democracy, and Power—Journalism’s Role in a Time of Crisis</itunes:title><description><![CDATA[<p>On Tuesday, March 25th, Tech Policy Press hosted a webinar discussion to talk shop with others on the tech and democracy beat. We gathered seven colleagues from around the world to explore how tech journalists are grappling with the current political moment in the United States and beyond. 
In this episode, you'll hear the second session of the day, which features Tech Policy Press Associate Editor Ramsha Jahangir in discussion with <strong>Rina Chandran</strong>, Rest of World; <strong>Natalia Antelava</strong>, Coda Story; <strong>Anupriya Datta</strong>, Euractiv; and <strong>Anisha Dutta</strong>, an award-winning investigative reporter.</p><p>This discussion delved into the global implications of these developments and key lessons from reporting in various political contexts.&nbsp;Questions included:</p><ul><li>What key narratives are emerging globally from recent shifts in US policy?</li><li>How is the rise of a tech oligarchy shaping technology coverage outside the US?</li><li>What practical lessons can journalists learn from reporting on technology and politics in non-Western contexts?</li></ul><br/>]]></description><content:encoded><![CDATA[<p>On Tuesday, March 25th, Tech Policy Press hosted a webinar discussion to talk shop with others on the tech and democracy beat. We gathered seven colleagues from around the world to explore how tech journalists are grappling with the current political moment in the United States and beyond.
In this episode, you'll hear the second session of the day, which features Tech Policy Press Associate Editor Ramsha Jahangir in discussion with <strong>Rina Chandran</strong>, Rest of World; <strong>Natalia Antelava</strong>, Coda Story; <strong>Anupriya Datta</strong>, Euractiv; and <strong>Anisha Dutta</strong>, an award-winning investigative reporter.</p><p>This discussion delved into the global implications of these developments and key lessons from reporting in various political contexts.&nbsp;Questions included:</p><ul><li>What key narratives are emerging globally from recent shifts in US policy?</li><li>How is the rise of a tech oligarchy shaping technology coverage outside the US?</li><li>What practical lessons can journalists learn from reporting on technology and politics in non-Western contexts?</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/part-2-technology-democracy-and-powerjournalisms-role-in-a-time-of-crisis]]></link><guid isPermaLink="false">50e16181-d50e-42e5-8ec4-844da83b06df</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 30 Mar 2025 09:15:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9bd6c6d1-3b1f-4190-b64c-6b3c6c82e9e1/TPP334-converted.mp3" length="26563127" type="audio/mpeg"/><itunes:duration>36:54</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Part 1: Technology, Democracy, and Power—Journalism’s Role in a Time of Crisis</title><itunes:title>Part 1: Technology, Democracy, and Power—Journalism’s Role in a Time of Crisis</itunes:title><description><![CDATA[<p> On Tuesday, March 25th, Tech Policy Press hosted a webinar discussion to talk shop with others on the tech and democracy beat.
We gathered seven colleagues from around the world to explore how tech journalists are grappling with the current political moment in the United States and beyond. In this episode, you'll hear the first session of the day, which features a discussion with <strong>Michael Masnick</strong> from Techdirt, <strong>Vittoria Elliott</strong> from <em>Wired</em>, and <strong>Emmanuel Maiberg</strong> from 404 Media.</p><p>This session explored the intersection of technology and the current political situation in the US. Key questions included: </p><ul><li>How are tech journalists addressing the current situation, and why is their perspective so crucial? </li><li>What critical questions are journalists covering the intersection of tech and democracy currently asking? </li><li>How does the field approach reporting on anti-democratic phenomena and the challenges journalists face in this work?</li></ul><br/>]]></description><content:encoded><![CDATA[<p> On Tuesday, March 25th, Tech Policy Press hosted a webinar discussion to talk shop with others on the tech and democracy beat. We gathered seven colleagues from around the world to explore how tech journalists are grappling with the current political moment in the United States and beyond. In this episode, you'll hear the first session of the day, which features a discussion with <strong>Michael Masnick</strong> from Techdirt, <strong>Vittoria Elliott</strong> from <em>Wired</em>, and <strong>Emmanuel Maiberg</strong> from 404 Media.</p><p>This session explored the intersection of technology and the current political situation in the US. Key questions included: </p><ul><li>How are tech journalists addressing the current situation, and why is their perspective so crucial? </li><li>What critical questions are journalists covering the intersection of tech and democracy currently asking? 
</li><li>How does the field approach reporting on anti-democratic phenomena and the challenges journalists face in this work?</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/part-1-technology-democracy-and-powerjournalisms-role-in-a-time-of-crisis]]></link><guid isPermaLink="false">ed52ba94-6533-464f-8247-5216cd76fd41</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 30 Mar 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/5c77e5ab-bd75-4a21-be0e-773429429e88/TPP333-converted.mp3" length="38849991" type="audio/mpeg"/><itunes:duration>46:15</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>About that Signal Chat</title><itunes:title>About that Signal Chat</itunes:title><description><![CDATA[<p> Every now and again, a story that has a significant technology element really breaks through and drives the news cycle. This week, the Trump administration is reeling after <em>The Atlantic </em>magazine's <strong>Jeffrey Goldberg</strong> revealed that he was on the receiving end of Yemen strike plans in a Signal group chat between US Secretary of Defense <strong>Pete Hegseth </strong>and other top US national security officials. User behavior, a common failure point, appears to be to blame in this scenario. But what are the broader contours and questions that emerge from this scandal? To learn more, <strong>Justin Hendrix</strong> spoke to:</p><ul><li><strong>Ryan Goodman</strong> is the Anne and Joel Ehrenkranz Professor of Law at New York University School of Law and co-editor-in-chief of <em>Just Security</em>. He served as special counsel to the general counsel of the Department of Defense (2015-16).</li><li><strong>Cooper Quintin&nbsp;</strong>is a senior staff technologist at the Electronic Frontier Foundation (EFF). 
He has worked on projects including Privacy Badger, Canary Watch, and analysis of state-sponsored malware campaigns such as Dark Caracal.</li></ul><br/>]]></description><content:encoded><![CDATA[<p> Every now and again, a story that has a significant technology element really breaks through and drives the news cycle. This week, the Trump administration is reeling after <em>The Atlantic </em>magazine's <strong>Jeffrey Goldberg</strong> revealed that he was on the receiving end of Yemen strike plans in a Signal group chat between US Secretary of Defense <strong>Pete Hegseth </strong>and other top US national security officials. User behavior, a common failure point, appears to be to blame in this scenario. But what are the broader contours and questions that emerge from this scandal? To learn more, <strong>Justin Hendrix</strong> spoke to:</p><ul><li><strong>Ryan Goodman</strong> is the Anne and Joel Ehrenkranz Professor of Law at New York University School of Law and co-editor-in-chief of <em>Just Security</em>. He served as special counsel to the general counsel of the Department of Defense (2015-16).</li><li><strong>Cooper Quintin&nbsp;</strong>is a senior staff technologist at the Electronic Frontier Foundation (EFF). 
He has worked on projects including Privacy Badger, Canary Watch, and analysis of state-sponsored malware campaigns such as Dark Caracal.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/about-that-signal-chat]]></link><guid isPermaLink="false">22cc4e84-b6b9-4073-8e4b-a7a68587795f</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 27 Mar 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/92df19e9-ae34-40eb-baad-a3bb2e1210e1/TPP332-converted.mp3" length="23319168" type="audio/mpeg"/><itunes:duration>27:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Conversation with Alvaro Bedoya on Trump&apos;s FTC Firings</title><itunes:title>A Conversation with Alvaro Bedoya on Trump&apos;s FTC Firings</itunes:title><description><![CDATA[<p>Last week, <strong>President Donald Trump</strong> ordered the firing of two Democratic members of the Federal Trade Commission, an independent agency that enforces federal consumer protection and competition laws and that, under former <strong>President Joe Biden</strong>, turned up its scrutiny of the tech sector's biggest companies. 
The two commissioners, <strong>Alvaro Bedoya</strong> and <strong>Rebecca Kelly Slaughter</strong>, plan to challenge Trump's firing, which they said will only benefit billionaire tech moguls like <strong>Mark Zuckerberg</strong> and <strong>Jeff Bezos</strong>.</p><p>Tech Policy Press Associate Editor <strong>Cristiano Lima-Strong </strong>spoke to Bedoya on Monday, March 24.</p>]]></description><content:encoded><![CDATA[<p>Last week, <strong>President Donald Trump</strong> ordered the firing of two Democratic members of the Federal Trade Commission, an independent agency that enforces federal consumer protection and competition laws and that, under former <strong>President Joe Biden</strong>, turned up its scrutiny of the tech sector's biggest companies. The two commissioners, <strong>Alvaro Bedoya</strong> and <strong>Rebecca Kelly Slaughter</strong>, plan to challenge Trump's firing, which they said will only benefit billionaire tech moguls like <strong>Mark Zuckerberg</strong> and <strong>Jeff Bezos</strong>.</p><p>Tech Policy Press Associate Editor <strong>Cristiano Lima-Strong </strong>spoke to Bedoya on Monday, March 24.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-conversation-with-alvaro-bedoya-on-trumps-ftc-firings]]></link><guid isPermaLink="false">d4f3675e-63e1-4e5a-bae1-9dc4c9d54d76</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 25 Mar 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/cdbcf23d-dc88-4cb9-9b3e-d019c17c8a40/TPP331-converted.mp3" length="33286323" type="audio/mpeg"/><itunes:duration>39:38</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Is an Anti-Fascist Approach to Artificial Intelligence Possible?</title><itunes:title>Is an Anti-Fascist Approach to Artificial Intelligence Possible?</itunes:title><description><![CDATA[<p>What 
is necessary to develop a future that is less hospitable to authoritarianism and, indeed, to fascism? How do we build collective power against authoritarian forms of corporate and state power? Is an alternative form of computing possible? <strong>Dan McQuillan</strong> is the author of <a href="https://bristoluniversitypress.co.uk/resisting-ai" rel="noopener noreferrer" target="_blank"><em>Resisting AI: An Anti-fascist Approach to Artificial Intelligence</em></a>, published in 2022 by Bristol University Press.</p>]]></description><content:encoded><![CDATA[<p>What is necessary to develop a future that is less hospitable to authoritarianism and, indeed, to fascism? How do we build collective power against authoritarian forms of corporate and state power? Is an alternative form of computing possible? <strong>Dan McQuillan</strong> is the author of <a href="https://bristoluniversitypress.co.uk/resisting-ai" rel="noopener noreferrer" target="_blank"><em>Resisting AI: An Anti-fascist Approach to Artificial Intelligence</em></a>, published in 2022 by Bristol University Press.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/is-an-anti-fascist-approach-to-artificial-intelligence-possible]]></link><guid isPermaLink="false">5dcb9599-ebee-410a-a674-c8eba2e25b88</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 23 Mar 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/4e11889b-bc77-4745-9b66-e9c765300897/TPP330-converted.mp3" length="37997627" type="audio/mpeg"/><itunes:duration>52:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Conversation with Dr. Alondra Nelson on AI and Democracy</title><itunes:title>A Conversation with Dr. Alondra Nelson on AI and Democracy</itunes:title><description><![CDATA[<p><strong>Dr. Alondra Nelson</strong> holds the Harold F. 
Linder Chair and leads the&nbsp;<a href="https://www.ias.edu/stsv-lab" rel="noopener noreferrer" target="_blank">Science, Technology, and Social Values Lab</a>&nbsp;at the Institute for Advanced Study, where she has served on the faculty since 2019. From 2021 to 2023, she was deputy assistant to President Joe Biden and acting director and principal deputy director for science and society of the White House Office of Science and Technology Policy. She was deeply involved in the Biden administration’s approach to artificial intelligence. She led the development of the White House “Blueprint for an AI Bill of Rights,” which informed President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.&nbsp;</p><p>To say the Trump administration has taken a different approach to AI and how to think about its role in government and in society would be an understatement. President Trump rescinded President Biden’s executive order and is at work developing a new approach to AI policy. At the Paris AI Action Summit in February, Vice President JD Vance promoted a vision of American dominance and challenged other nations that would seek to regulate American AI firms. And then there is DOGE, which is at work gutting federal agencies with the stated intent of replacing key government functions with AI systems and using AI to root out supposed fraud and waste.</p><p>This week, <strong>Justin Hendrix </strong>had the chance to speak with Dr. Nelson about how she’s thinking about these phenomena and the work to be done in the years ahead to secure a more just, democratic, and sustainable future.&nbsp;</p>]]></description><content:encoded><![CDATA[<p><strong>Dr. Alondra Nelson</strong> holds the Harold F. 
Linder Chair and leads the&nbsp;<a href="https://www.ias.edu/stsv-lab" rel="noopener noreferrer" target="_blank">Science, Technology, and Social Values Lab</a>&nbsp;at the Institute for Advanced Study, where she has served on the faculty since 2019. From 2021 to 2023, she was deputy assistant to President Joe Biden and acting director and principal deputy director for science and society of the White House Office of Science and Technology Policy. She was deeply involved in the Biden administration’s approach to artificial intelligence. She led the development of the White House “Blueprint for an AI Bill of Rights,” which informed President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.&nbsp;</p><p>To say the Trump administration has taken a different approach to AI and how to think about its role in government and in society would be an understatement. President Trump rescinded President Biden’s executive order and is at work developing a new approach to AI policy. At the Paris AI Action Summit in February, Vice President JD Vance promoted a vision of American dominance and challenged other nations that would seek to regulate American AI firms. And then there is DOGE, which is at work gutting federal agencies with the stated intent of replacing key government functions with AI systems and using AI to root out supposed fraud and waste.</p><p>This week, <strong>Justin Hendrix </strong>had the chance to speak with Dr. 
Nelson about how she’s thinking about these phenomena and the work to be done in the years ahead to secure a more just, democratic, and sustainable future.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-conversation-with-dr-alondra-nelson-on-ai-and-democracy]]></link><guid isPermaLink="false">8a6dc903-b120-4f2d-b840-0c3b01215e8d</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 16 Mar 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/0baf24ef-2452-4551-9686-bb4cc0a74359/TPP329-converted.mp3" length="25670340" type="audio/mpeg"/><itunes:duration>30:34</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Should AGI Really Be the Goal of Artificial Intelligence Research?</title><itunes:title>Should AGI Really Be the Goal of Artificial Intelligence Research?</itunes:title><description><![CDATA[<p>The goal of achieving "artificial general intelligence," or AGI, is shared by many in the AI field. OpenAI’s <a href="https://openai.com/charter/" rel="noopener noreferrer" target="_blank">charter</a> defines AGI as "highly autonomous systems that outperform humans at most economically valuable work,” and last summer, the company <a href="https://www.bloomberg.com/news/articles/2024-07-11/openai-sets-levels-to-track-progress-toward-superintelligent-ai" rel="noopener noreferrer" target="_blank">announced</a>&nbsp;its plan to achieve AGI within five years. While other experts at companies like Meta and Anthropic quibble with the term, many AI researchers recognize AGI as either an explicit or implicit goal. 
Google DeepMind went so far as to set out "<a href="https://deepmind.google/research/publications/66938/" rel="noopener noreferrer" target="_blank">Levels of AGI</a>,” identifying key principles&nbsp;and definitions of the term.&nbsp;</p><p>Today’s guests are among the authors of <a href="https://arxiv.org/abs/2502.03689" rel="noopener noreferrer" target="_blank">a new paper</a> that argues the field should stop treating AGI as the north-star goal of AI research. They include:</p><ul><li><strong>Eryk Salvaggio,</strong> a visiting professor in the Humanities Computing and Design department at the Rochester Institute of Technology and a Tech Policy Press fellow;</li><li><strong>Borhane Blili-Hamelin, </strong>an independent AI researcher and currently a data scientist at the Canadian bank TD; and</li><li><strong>Margaret Mitchell</strong>, chief ethics scientist at Hugging Face.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>The goal of achieving "artificial general intelligence," or AGI, is shared by many in the AI field. OpenAI’s <a href="https://openai.com/charter/" rel="noopener noreferrer" target="_blank">charter</a> defines AGI as "highly autonomous systems that outperform humans at most economically valuable work,” and last summer, the company <a href="https://www.bloomberg.com/news/articles/2024-07-11/openai-sets-levels-to-track-progress-toward-superintelligent-ai" rel="noopener noreferrer" target="_blank">announced</a>&nbsp;its plan to achieve AGI within five years. While other experts at companies like Meta and Anthropic quibble with the term, many AI researchers recognize AGI as either an explicit or implicit goal. 
Google DeepMind went so far as to set out "<a href="https://deepmind.google/research/publications/66938/" rel="noopener noreferrer" target="_blank">Levels of AGI</a>,” identifying key principles&nbsp;and definitions of the term.&nbsp;</p><p>Today’s guests are among the authors of <a href="https://arxiv.org/abs/2502.03689" rel="noopener noreferrer" target="_blank">a new paper</a> that argues the field should stop treating AGI as the north-star goal of AI research. They include:</p><ul><li><strong>Eryk Salvaggio,</strong> a visiting professor in the Humanities Computing and Design department at the Rochester Institute of Technology and a Tech Policy Press fellow;</li><li><strong>Borhane Blili-Hamelin, </strong>an independent AI researcher and currently a data scientist at the Canadian bank TD; and</li><li><strong>Margaret Mitchell</strong>, chief ethics scientist at Hugging Face.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/should-agi-really-be-the-goal-of-artificial-intelligence-research]]></link><guid isPermaLink="false">4d12522c-aea7-475d-89f7-0f2e3ba1765a</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 09 Mar 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/12aeb771-2daf-4047-9485-c0ebf29c36db/TPP328-converted.mp3" length="30672154" type="audio/mpeg"/><itunes:duration>42:36</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Assessing Europe&apos;s Digital Markets Act One Year In</title><itunes:title>Assessing Europe&apos;s Digital Markets Act One Year In</itunes:title><description><![CDATA[<p>A year ago, Europe’s Digital Markets Act—the DMA—went into effect. 
The European Commission <a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_24_1342" rel="noopener noreferrer" target="_blank">says</a> the purpose of the regulation is to make “digital markets in the EU more contestable and fairer.” In particular,&nbsp;the DMA regulates gatekeepers, the large digital platforms whose position gives them greater leverage over the digital economy. One year in, how has the DMA performed? Do Europeans enjoy more choice and competition? And what are the new politics of the DMA as European regulations are contested by the Trump administration and its supporters in US industry?&nbsp;</p><p>To answer these questions and more, Tech Policy Press contributing editor <strong>Dean Jackson</strong> spoke to a set of experts following a <a href="https://kgi.georgetown.edu/events/dma-and-beyond-conference/" rel="noopener noreferrer" target="_blank">conference</a> hosted by the Knight-Georgetown Institute titled “DMA and Beyond.” His guests include:</p><ul><li><strong>Alissa Cooper</strong>, Executive Director of the Knight-Georgetown Institute&nbsp;(KGI)</li><li><strong>Anu Bradford</strong>, Henry L. Moses Professor of Law and International Organization at Columbia Law School</li><li><strong>Haeyoon Kim</strong>, a Non-Resident Fellow at the Korea Economic Institute (KEI), and</li><li><strong>Gunn Jiravuttipong,</strong> a&nbsp;JSD Candidate and Miller Fellow at Berkeley Law School.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>A year ago, Europe’s Digital Markets Act—the DMA—went into effect. The European Commission <a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_24_1342" rel="noopener noreferrer" target="_blank">says</a> the purpose of the regulation is to make “digital markets in the EU more contestable and fairer.” In particular,&nbsp;the DMA regulates gatekeepers, the large digital platforms whose position gives them greater leverage over the digital economy. 
One year in, how has the DMA performed? Do Europeans enjoy more choice and competition? And what are the new politics of the DMA as European regulations are contested by the Trump administration and its supporters in US industry?&nbsp;</p><p>To answer these questions and more, Tech Policy Press contributing editor <strong>Dean Jackson</strong> spoke to a set of experts following a <a href="https://kgi.georgetown.edu/events/dma-and-beyond-conference/" rel="noopener noreferrer" target="_blank">conference</a> hosted by the Knight-Georgetown Institute titled “DMA and Beyond.” His guests include:</p><ul><li><strong>Alissa Cooper</strong>, Executive Director of the Knight-Georgetown Institute&nbsp;(KGI)</li><li><strong>Anu Bradford</strong>, Henry L. Moses Professor of Law and International Organization at Columbia Law School</li><li><strong>Haeyoon Kim</strong>, a Non-Resident Fellow at the Korea Economic Institute (KEI), and</li><li><strong>Gunn Jiravuttipong,</strong> a&nbsp;JSD Candidate and Miller Fellow at Berkeley Law School.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/assessing-europes-digital-markets-act-one-year-in]]></link><guid isPermaLink="false">373d8f5b-10ed-4462-b050-a443cc221cc9</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 09 Mar 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/95ce4edd-5d66-4618-88fc-7a6b696fad4b/TPP327-converted.mp3" length="49566822" type="audio/mpeg"/><itunes:duration>59:00</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Promising Opportunities, Distinct Risks: AI and Digital Public Squares</title><itunes:title>Promising Opportunities, Distinct Risks: AI and Digital Public Squares</itunes:title><description><![CDATA[<p>Could AI help design better, more democratic platforms and online environments for public 
discourse? What are the opportunities, challenges, and risks of deploying AI in contexts where people are engaged in political discussion? Today’s guests are among the more than two dozen authors of <a href="https://arxiv.org/abs/2412.09988" rel="noopener noreferrer" target="_blank">a new paper</a> on AI and the future of digital public squares:</p><ul><li><strong>Audrey Tang</strong>, Taiwan's Cyber Ambassador and former Digital Minister</li><li><strong>Ravi Iyer</strong>, managing director of the USC Marshall School's Neely Center for Ethical Leadership and Decision Making</li><li><strong>Beth Goldberg</strong>, head of R&amp;D at Jigsaw and a lecturer at Yale School of Public Policy</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Could AI help design better, more democratic platforms and online environments for public discourse? What are the opportunities, challenges, and risks of deploying AI in contexts where people are engaged in political discussion? Today’s guests are among the more than two dozen authors of <a href="https://arxiv.org/abs/2412.09988" rel="noopener noreferrer" target="_blank">a new paper</a> on AI and the future of digital public squares:</p><ul><li><strong>Audrey Tang</strong>, Taiwan's Cyber Ambassador and former Digital Minister</li><li><strong>Ravi Iyer</strong>, managing director of the USC Marshall School's Neely Center for Ethical Leadership and Decision Making</li><li><strong>Beth Goldberg</strong>, head of R&amp;D at Jigsaw and a lecturer at Yale School of Public Policy</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/promising-opportunities-distinct-risks-ai-and-digital-public-squares]]></link><guid isPermaLink="false">077d8f45-a60d-4a6d-89a7-b76cc591e21d</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 06 Mar 2025 09:00:00 -0400</pubDate><enclosure 
url="https://podcasts.captivate.fm/media/27e2360a-7f05-4e8a-892a-b50474337df8/TPP326-converted.mp3" length="42518076" type="audio/mpeg"/><itunes:duration>50:37</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Building Middleware for Bluesky: A Conversation with Blacksky Founder Rudy Fraser</title><itunes:title>Building Middleware for Bluesky: A Conversation with Blacksky Founder Rudy Fraser</itunes:title><description><![CDATA[<p>On this podcast, we regularly engage with questions about redesigning social media networks to make them more democratic, pluralist, and prosocial. One hypothesis people have about how to do that is through the decentralization of platforms and the introduction of middleware—tools built to give users more control over their social media experience and, thus, more autonomy in how they engage in public discourse. In this episode, you’ll hear a discussion with one entrepreneur building middleware for Bluesky: <strong>Rudy Fraser</strong>, the founder of <a href="https://www.blackskyweb.xyz/" rel="noopener noreferrer" target="_blank">Blacksky Algorithms</a> and a fellow at the Berkman Klein Center for Internet &amp; Society at Harvard University.</p>]]></description><content:encoded><![CDATA[<p>On this podcast, we regularly engage with questions about redesigning social media networks to make them more democratic, pluralist, and prosocial. One hypothesis people have about how to do that is through the decentralization of platforms and the introduction of middleware—tools built to give users more control over their social media experience and, thus, more autonomy in how they engage in public discourse. 
In this episode, you’ll hear a discussion with one entrepreneur building middleware for Bluesky: <strong>Rudy Fraser</strong>, the founder of <a href="https://www.blackskyweb.xyz/" rel="noopener noreferrer" target="_blank">Blacksky Algorithms</a> and a fellow at the Berkman Klein Center for Internet &amp; Society at Harvard University.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/building-middleware-for-bluesky-a-conversation-with-blacksky-founder-rudy-fraser]]></link><guid isPermaLink="false">fd6e3170-0661-4c34-a7c8-a19f4e0a65bd</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Mon, 03 Mar 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/8f7173dc-800d-4cd0-afb8-f77997abc706/TPP325-converted.mp3" length="31896988" type="audio/mpeg"/><itunes:duration>37:58</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Inocencia en Juego: An Investigation into Groups Targeting Children on Facebook</title><itunes:title>Inocencia en Juego: An Investigation into Groups Targeting Children on Facebook</itunes:title><description><![CDATA[<p>Last week, Tech Policy Press joined the&nbsp;<a href="https://www.elclip.org/" rel="noopener noreferrer" target="_blank">Latin American Center for Investigative Journalism</a>&nbsp;(EL CLIP) in publishing <a href="https://www.techpolicy.press/latin-americas-children-at-risk-on-facebook-predators-stalk-children-in-celebrity-fan-groups/" rel="noopener noreferrer" target="_blank">a report</a> and <a href="https://www.techpolicy.press/inocencia-en-juego-an-investigation-into-groups-targeting-children-on-facebook/" rel="noopener noreferrer" target="_blank">series of articles</a> documenting how adult users use public Facebook groups to identify and target accounts that appear to belong to children for sexual exploitation. 
</p><p>The “Innocence at Risk (Inocencia en Juego)” project, coordinated by EL CLIP with participation from Chequeado, includes a report from <strong>Lara Putnam</strong>, a professor of Latin American history and Director of the Civic Resilience Initiative of the Institute for Cyber Law, Policy, and Security at the University of Pittsburgh, and independent reports from journalists across Latin America investigating a pattern of behavior on the platform’s public groups in Colombia, Venezuela, and Argentina. They published their reports in <a href="https://www.elclip.org/" rel="noopener noreferrer" target="_blank">EL CLIP</a>, Chequeado, Crónica Uno, El Espectador, and Factchequeado. </p><p>This episode features a discussion with <strong>Lara Putnam</strong> and <strong>Pablo Medina Uribe</strong>, who led the project at EL CLIP.</p>]]></description><content:encoded><![CDATA[<p>Last week, Tech Policy Press joined the&nbsp;<a href="https://www.elclip.org/" rel="noopener noreferrer" target="_blank">Latin American Center for Investigative Journalism</a>&nbsp;(EL CLIP) in publishing <a href="https://www.techpolicy.press/latin-americas-children-at-risk-on-facebook-predators-stalk-children-in-celebrity-fan-groups/" rel="noopener noreferrer" target="_blank">a report</a> and <a href="https://www.techpolicy.press/inocencia-en-juego-an-investigation-into-groups-targeting-children-on-facebook/" rel="noopener noreferrer" target="_blank">series of articles</a> documenting how adult users use public Facebook groups to identify and target accounts that appear to belong to children for sexual exploitation. 
</p><p>The “Innocence at Risk (Inocencia en Juego)” project, coordinated by EL CLIP with participation from Chequeado, includes a report from <strong>Lara Putnam</strong>, a professor of Latin American history and Director of the Civic Resilience Initiative of the Institute for Cyber Law, Policy, and Security at the University of Pittsburgh, and independent reports from journalists across Latin America investigating a pattern of behavior on the platform’s public groups in Colombia, Venezuela, and Argentina. They published their reports in <a href="https://www.elclip.org/" rel="noopener noreferrer" target="_blank">EL CLIP</a>, Chequeado, Crónica Uno, El Espectador, and Factchequeado. </p><p>This episode features a discussion with <strong>Lara Putnam</strong> and <strong>Pablo Medina Uribe</strong>, who led the project at EL CLIP.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/inocencia-en-juego-an-investigation-into-groups-targeting-children-on-facebook]]></link><guid isPermaLink="false">a3c6a109-5c67-4d41-b864-af83bb1d8a8b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 02 Mar 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/40513b6c-ffde-4d92-b352-c8ab275bba35/TPP324-converted.mp3" length="26424587" type="audio/mpeg"/><itunes:duration>31:27</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Watching the Watchers: The Future of the Privacy and Civil Liberties Oversight Board</title><itunes:title>Watching the Watchers: The Future of the Privacy and Civil Liberties Oversight Board</itunes:title><description><![CDATA[<p>On January 22, <strong>President Donald Trump</strong> terminated all three Democratic members of the Privacy and Civil Liberties Oversight Board (PCLOB), an intelligence watchdog charged with monitoring the United States government's compliance 
with procedural safeguards on surveillance activities. The PCLOB's independence is also of concern to the European Commission, which relies on its reports in its assessment of whether US intelligence practices are aligned with EU-US Data Privacy Framework standards. </p><p>On February 24, two of the three terminated members filed suit against the government, arguing they were wrongfully terminated and must be reinstated. The outcome could determine the independence and effectiveness of the PCLOB going forward.</p><p>This episode explores what's at stake in this matter, and it features three segments, including:</p><ul><li>Excerpts from remarks by the remaining PCLOB board member, Republican <strong>Beth Williams</strong>, at the annual State of the Net conference on February 11 in Washington, DC;</li><li>An interview with former board member <strong>Travis LeBlanc</strong> conducted just days before he filed suit against the government;</li><li>An interview with <strong>Greg Nojeim</strong>, Senior Counsel and Director of the Security and Surveillance Project&nbsp;at the Center for Democracy &amp; Technology.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>On January 22, <strong>President Donald Trump</strong> terminated all three Democratic members of the Privacy and Civil Liberties Oversight Board (PCLOB), an intelligence watchdog charged with monitoring the United States government's compliance with procedural safeguards on surveillance activities. The PCLOB's independence is also of concern to the European Commission, which relies on its reports in its assessment of whether US intelligence practices are aligned with EU-US Data Privacy Framework standards. </p><p>On February 24, two of the three terminated members filed suit against the government, arguing they were wrongfully terminated and must be reinstated. 
The outcome could determine the independence and effectiveness of the PCLOB going forward.</p><p>This episode explores what's at stake in this matter, and it features three segments, including:</p><ul><li>Excerpts from remarks by the remaining PCLOB board member, Republican <strong>Beth Williams</strong>, at the annual State of the Net conference on February 11 in Washington, DC;</li><li>An interview with former board member <strong>Travis LeBlanc</strong> conducted just days before he filed suit against the government;</li><li>An interview with <strong>Greg Nojeim</strong>, Senior Counsel and Director of the Security and Surveillance Project&nbsp;at the Center for Democracy &amp; Technology.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/watching-the-watchers-the-future-of-the-privacy-and-civil-liberties-oversight-board]]></link><guid isPermaLink="false">36ac3d8b-0c7a-42ca-b323-e3f1cac68809</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 28 Feb 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/41b6658e-59dc-43ee-b664-70b36b1b5c8f/TPP323-converted.mp3" length="21899075" type="audio/mpeg"/><itunes:duration>30:25</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Evaluating the First Systemic Risk and Audit Reports Under the Digital Services Act</title><itunes:title>Evaluating the First Systemic Risk and Audit Reports Under the Digital Services Act</itunes:title><description><![CDATA[<p>Tech Policy Press Associate Editor <strong>Ramsha Jahangir</strong> hosts a roundtable discussion on the first systemic risk assessments and independent audit reports from Very Large Online Platforms and Search Engines produced in compliance with the European Union's Digital Services Act. 
Ramsha is joined by:</p><ul><li><strong>Hillary Ross</strong>, program lead at the Global Network Initiative (GNI);</li><li><strong>Magdalena Jozwiak</strong>, associate researcher at the DSA Observatory; and</li><li><strong>Svea Windwehr</strong>, the assistant director of EU policy at the Electronic Frontier Foundation (EFF).</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Tech Policy Press Associate Editor <strong>Ramsha Jahangir</strong> hosts a roundtable discussion on the first systemic risk assessments and independent audit reports from Very Large Online Platforms and Search Engines produced in compliance with the European Union's Digital Services Act. Ramsha is joined by:</p><ul><li><strong>Hillary Ross</strong>, program lead at the Global Network Initiative (GNI);</li><li><strong>Magdalena Jozwiak</strong>, associate researcher at the DSA Observatory; and</li><li><strong>Svea Windwehr</strong>, the assistant director of EU policy at the Electronic Frontier Foundation (EFF).</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/evaluating-the-first-systemic-risk-and-audit-reports-under-the-digital-services-act]]></link><guid isPermaLink="false">526199a5-e6bd-4923-9920-cfb7242591aa</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 23 Feb 2025 07:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/3b0bb5ae-c092-4c23-98f7-1ac694a95825/TPP322-converted.mp3" length="32590744" type="audio/mpeg"/><itunes:duration>38:48</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Digital Rights Activists in Taiwan Driven by Memory and Threat of Authoritarianism</title><itunes:title>Digital Rights Activists in Taiwan Driven by Memory and Threat of Authoritarianism</itunes:title><description><![CDATA[<p>This week, <a href="https://www.rightscon.org/" rel="noopener 
noreferrer" target="_blank">RightsCon</a>, which bills itself as "the world’s leading summit on human rights in the digital age," descends on Taipei.&nbsp;To better understand the dynamics in the civil society community working on digital rights and tech policy matters in Taiwan, <strong>Justin Hendrix</strong> spoke to three experts:</p><ul><li><strong>Liu I-Chen</strong>&nbsp;(劉以正),&nbsp;Asia Program Officer at ARTICLE 19</li><li><strong>Kuan-Ju Chou</strong>&nbsp;(周冠汝), Deputy Secretary-General of&nbsp;the Taiwan Association for Human Rights&nbsp;</li><li><strong>Grace Huang</strong>&nbsp;(黃寬心), Director for Global Justice and Digital Freedom&nbsp;at Judicial Reform Foundation&nbsp;</li></ul><br/>]]></description><content:encoded><![CDATA[<p>This week, <a href="https://www.rightscon.org/" rel="noopener noreferrer" target="_blank">RightsCon</a>, which bills itself as "the world’s leading summit on human rights in the digital age," descends on Taipei.&nbsp;To better understand the dynamics in the civil society community working on digital rights and tech policy matters in Taiwan, <strong>Justin Hendrix</strong> spoke to three experts:</p><ul><li><strong>Liu I-Chen</strong>&nbsp;(劉以正),&nbsp;Asia Program Officer at ARTICLE 19</li><li><strong>Kuan-Ju Chou</strong>&nbsp;(周冠汝), Deputy Secretary-General of&nbsp;the Taiwan Association for Human Rights&nbsp;</li><li><strong>Grace Huang</strong>&nbsp;(黃寬心), Director for Global Justice and Digital Freedom&nbsp;at Judicial Reform Foundation&nbsp;</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/digital-rights-activists-in-taiwan-driven-by-memory-and-threat-of-authoritarianism]]></link><guid isPermaLink="false">228a17b4-0272-414d-bb22-b61c3a352c91</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 23 Feb 2025 02:00:00 -0400</pubDate><enclosure 
url="https://podcasts.captivate.fm/media/aabbcea9-1940-4c51-88b1-8c95adf080d0/TPP321-converted.mp3" length="30352406" type="audio/mpeg"/><itunes:duration>42:09</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Paths Diverge at the Paris AI Summit</title><itunes:title>Paths Diverge at the Paris AI Summit</itunes:title><description><![CDATA[<p>At the Paris AI Action Summit on February 10-11, remarks by EU and US leaders indicated significant divergence on how to think about AI. But on balance, nations are moving decisively toward innovation and exploitation of this technology and away from containing it or restricting it. In this episode, <strong>Justin Hendrix</strong> surfaces voices from the Summit, as well as reactions and discussion on these matters at this year's State of the Net conference on February 11 in Washington, DC, including comments by Center for Democracy &amp; Technology vice president for policy <strong>Samir Jain</strong>, Abundance Institute head of AI policy <strong>Neil Chilson</strong>, and former Biden administration assistant director for AI policy <strong>Olivia Zhu</strong>.</p>]]></description><content:encoded><![CDATA[<p>At the Paris AI Action Summit on February 10-11, remarks by EU and US leaders indicated significant divergence on how to think about AI. 
But on balance, nations are moving decisively toward innovation and exploitation of this technology and away from containing it or restricting it. In this episode, <strong>Justin Hendrix</strong> surfaces voices from the Summit, as well as reactions and discussion on these matters at this year's State of the Net conference on February 11 in Washington, DC, including comments by Center for Democracy &amp; Technology vice president for policy <strong>Samir Jain</strong>, Abundance Institute head of AI policy <strong>Neil Chilson</strong>, and former Biden administration assistant director for AI policy <strong>Olivia Zhu</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/paths-diverge-at-the-paris-ai-summit]]></link><guid isPermaLink="false">e22c15e9-24f2-4cb0-a01a-2cf1fdaa6235</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 16 Feb 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/acc5f8f3-fc4f-4323-8e56-1158e2e841da/TPP320-converted.mp3" length="19329562" type="audio/mpeg"/><itunes:duration>23:01</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A National Heist? Evaluating Elon Musk’s March Through Washington</title><itunes:title>A National Heist? 
Evaluating Elon Musk’s March Through Washington</itunes:title><description><![CDATA[<p>As <strong>Donald Trump</strong>’s second presidency enters its third week, <strong>Elon Musk</strong> is center stage as the Department of Government Efficiency moves to gut federal agencies. In this episode,&nbsp;<strong>Justin Hendrix</strong>&nbsp;speaks with two experts who are following these events closely and thinking about what they tell us about the relationship between technology and power:</p><ul><li><strong>David Kaye</strong>, a professor of law at the University of California Irvine and formerly the UN Special Rapporteur on Freedom of Expression, and</li><li><strong>Yaël Eisenstat</strong>, director of policy impact at Cybersecurity for Democracy at New York University.</li></ul><br/><p><br></p>]]></description><content:encoded><![CDATA[<p>As <strong>Donald Trump</strong>’s second presidency enters its third week, <strong>Elon Musk</strong> is center stage as the Department of Government Efficiency moves to gut federal agencies. 
In this episode,&nbsp;<strong>Justin Hendrix</strong>&nbsp;speaks with two experts who are following these events closely and thinking about what they tell us about the relationship between technology and power:</p><ul><li><strong>David Kaye</strong>, a professor of law at the University of California Irvine and formerly the UN Special Rapporteur on Freedom of Expression, and</li><li><strong>Yaël Eisenstat</strong>, director of policy impact at Cybersecurity for Democracy at New York University.</li></ul><br/><p><br></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-national-heist-evaluating-elon-musks-march-through-washington]]></link><guid isPermaLink="false">140defe9-2c5b-4d2b-a161-00d05b6af6bb</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 09 Feb 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/1f662c91-d37b-488b-9e98-bb8d0beb5636/TPP319-converted.mp3" length="37670875" type="audio/mpeg"/><itunes:duration>44:51</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Online Lives, Space and Place: Exploring the Mobile City</title><itunes:title>Online Lives, Space and Place: Exploring the Mobile City</itunes:title><description><![CDATA[<p>Over the last two decades, as Berlin reinvented itself as a "creative city," social media both mirrored and shaped shifting social landscapes—offering new possibilities while also reinforcing inequalities. How did digital media practices reshape urban life? And what can Berlin’s story tell us about the broader relationship between technology, culture, and the places we live? Today’s guest is <strong>Jordan H. Kraemer</strong>, the author of a new book that tries to answer these questions and more. 
It's called <em>Mobile City: Emerging Media, Space, and Sociality in Contemporary Berlin</em>, published by <a href="https://www.cornellpress.cornell.edu/book/9781501778704/mobile-city/" rel="noopener noreferrer" target="_blank">Cornell University Press</a>.</p>]]></description><content:encoded><![CDATA[<p>Over the last two decades, as Berlin reinvented itself as a "creative city," social media both mirrored and shaped shifting social landscapes—offering new possibilities while also reinforcing inequalities. How did digital media practices reshape urban life? And what can Berlin’s story tell us about the broader relationship between technology, culture, and the places we live? Today’s guest is <strong>Jordan H. Kraemer</strong>, the author of a new book that tries to answer these questions and more. It's called <em>Mobile City: Emerging Media, Space, and Sociality in Contemporary Berlin</em>, published by <a href="https://www.cornellpress.cornell.edu/book/9781501778704/mobile-city/" rel="noopener noreferrer" target="_blank">Cornell University Press</a>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/online-lives-space-and-place-exploring-the-mobile-city]]></link><guid isPermaLink="false">f3910103-603d-4da7-a643-fb36d2308052</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 09 Feb 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/008c832e-dc22-4460-8fa3-2ec0108f0736/TPP318-converted.mp3" length="25729063" type="audio/mpeg"/><itunes:duration>35:44</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Dangerous Combination of Technology and Capitalism</title><itunes:title>The Dangerous Combination of Technology and Capitalism</itunes:title><description><![CDATA[<p><strong>Justin Hendrix</strong> speaks with <strong>Jathan Sadowski</strong>,  a senior lecturer in 
the Faculty of Information Technology at Monash University in Melbourne, Australia; co-host of <a href="https://www.patreon.com/thismachinekills" rel="noopener noreferrer" target="_blank">This Machine Kills</a>, a weekly podcast on technology and political economy; and author of the <a href="https://www.ucpress.edu/flier/books/the-mechanic-and-the-luddite/paper" rel="noopener noreferrer" target="_blank">new book</a><strong> </strong><em>The Mechanic and the Luddite: A Ruthless Criticism of Technology and Capitalism</em> from the University of California Press.</p>]]></description><content:encoded><![CDATA[<p><strong>Justin Hendrix</strong> speaks with <strong>Jathan Sadowski</strong>,  a senior lecturer in the Faculty of Information Technology at Monash University in Melbourne, Australia; co-host of <a href="https://www.patreon.com/thismachinekills" rel="noopener noreferrer" target="_blank">This Machine Kills</a>, a weekly podcast on technology and political economy; and author of the <a href="https://www.ucpress.edu/flier/books/the-mechanic-and-the-luddite/paper" rel="noopener noreferrer" target="_blank">new book</a><strong> </strong><em>The Mechanic and the Luddite: A Ruthless Criticism of Technology and Capitalism</em> from the University of California Press.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-dangerous-combination-of-technology-and-capitalism]]></link><guid isPermaLink="false">ec52df12-52e6-4cc6-ad58-8735691ca5f1</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 02 Feb 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/736a308d-33bc-4b16-96a4-cedecfc19447/TPP317-converted.mp3" length="42685858" type="audio/mpeg"/><itunes:duration>44:28</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>DeepSeek Prompts a Rethink</title><itunes:title>DeepSeek 
Prompts a Rethink</itunes:title><description><![CDATA[<p>If Chinese AI startup DeepSeek’s efficiency and performance achievements stand up to scrutiny, it could have big implications for the AI race. It could call into question the strategic approach that the biggest US firms appear to be taking and the wisdom of the current American policy approach to AI. </p><p>To discuss these issues, <strong>Justin Hendrix</strong> spoke to <strong>Karen Hao</strong>,  a reporter who covers AI. In recent years, she's reported on China and tech for the <em>Wall Street Journal, </em>written about AI for <em>The Atlantic</em>, and run a program for the Pulitzer Center  to teach other journalists how to report on AI. Hao has a book about OpenAI,  the AI industry, and its global impacts that will be released later this year. </p>]]></description><content:encoded><![CDATA[<p>If Chinese AI startup DeepSeek’s efficiency and performance achievements stand up to scrutiny, it could have big implications for the AI race. It could call into question the strategic approach that the biggest US firms appear to be taking and the wisdom of the current American policy approach to AI. </p><p>To discuss these issues, <strong>Justin Hendrix</strong> spoke to <strong>Karen Hao</strong>,  a reporter who covers AI. In recent years, she's reported on China and tech for the <em>Wall Street Journal, </em>written about AI for <em>The Atlantic</em>, and run a program for the Pulitzer Center  to teach other journalists how to report on AI. Hao has a book about OpenAI,  the AI industry, and its global impacts that will be released later this year. 
</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/deepseek-prompts-a-rethink]]></link><guid isPermaLink="false">539948ea-6a5e-485c-b9ad-538e69e68009</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 28 Jan 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/7a1653b7-0fdd-4313-b7da-faeeeb9f4219/TPP316-converted.mp3" length="20495467" type="audio/mpeg"/><itunes:duration>24:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Evaluating Trump&apos;s First Moves on Tech</title><itunes:title>Evaluating Trump&apos;s First Moves on Tech</itunes:title><description><![CDATA[<p>From Executive Orders on AI and cryptocurrency to "ending federal censorship," President Donald Trump had a busy first week in the White House. <strong>Justin Hendrix</strong> discussed the news with <strong>Damon Beres</strong>, a senior editor at&nbsp;<em>The Atlantic,&nbsp;</em>where he oversees the technology section. Beres <a href="https://www.theatlantic.com/technology/archive/2025/01/trump-musk-zuckerberg-silicon-valley-kisses-the-ring/681384/" rel="noopener noreferrer" target="_blank">wrote a piece</a> reflecting on Trump's inauguration titled "Billions of People in the Palm of Trump’s Hand."</p>]]></description><content:encoded><![CDATA[<p>From Executive Orders on AI and cryptocurrency to "ending federal censorship," President Donald Trump had a busy first week in the White House. <strong>Justin Hendrix</strong> discussed the news with <strong>Damon Beres</strong>, a senior editor at&nbsp;<em>The Atlantic,&nbsp;</em>where he oversees the technology section. 
Beres <a href="https://www.theatlantic.com/technology/archive/2025/01/trump-musk-zuckerberg-silicon-valley-kisses-the-ring/681384/" rel="noopener noreferrer" target="_blank">wrote a piece</a> reflecting on Trump's inauguration titled "Billions of People in the Palm of Trump’s Hand."</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/evaluating-trumps-first-moves-on-tech]]></link><guid isPermaLink="false">360dd44c-716c-41db-a41f-dd4f4448be9e</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 26 Jan 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/480b828b-3c52-472b-9698-6b0be9bec112/TPP315-converted.mp3" length="31036914" type="audio/mpeg"/><itunes:duration>32:20</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What&apos;s New at RightsCon? And How to Free Our Feeds</title><itunes:title>What&apos;s New at RightsCon? And How to Free Our Feeds</itunes:title><description><![CDATA[<p>This episode features two segments. First, we hear from&nbsp;<strong>Nikki Gladstone</strong>, director of&nbsp;<a href="https://www.rightscon.org/" target="_blank">RightsCon</a>, the annual conference organized by Access Now on issues at the intersection of human rights and technology. And in the second, you’ll hear from <strong>Robin Berjon</strong> and <strong>Sean McDonald</strong>, two of the folks behind <a href="https://freeourfeeds.com/" rel="noopener noreferrer" target="_blank">Free Our Feeds</a>, a new effort to raise a public interest foundation that will work to support making Bluesky’s underlying tech (the AT Protocol) resistant to billionaire capture.</p>]]></description><content:encoded><![CDATA[<p>This episode features two segments. 
First, we hear from <strong>Nikki Gladstone</strong>, director of&nbsp;<a href="https://www.rightscon.org/" target="_blank">RightsCon</a>, the annual conference organized by Access Now on issues at the intersection of human rights and technology. And in the second, you’ll hear from <strong>Robin Berjon</strong> and <strong>Sean McDonald</strong>, two of the folks behind <a href="https://freeourfeeds.com/" rel="noopener noreferrer" target="_blank">Free Our Feeds</a>, a new effort to raise a public interest foundation that will work to support making Bluesky’s underlying tech (the AT Protocol) resistant to billionaire capture.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/whats-new-at-rightscon-and-how-to-free-our-feeds]]></link><guid isPermaLink="false">a11b8011-5b65-4460-a93a-07d7d68a56a6</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 19 Jan 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/111b28a7-6777-4e38-8648-3c34f38f5e67/TPP314-converted.mp3" length="26028720" type="audio/mpeg"/><itunes:duration>30:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Dumbest Timeline: The Supreme Court Rules on TikTok</title><itunes:title>The Dumbest Timeline: The Supreme Court Rules on TikTok</itunes:title><description><![CDATA[<p>Today, Friday, January 17, 2025, the US Supreme Court <a href="https://www.techpolicy.press/us-supreme-court-upholds-constitutionality-of-law-that-may-result-in-ban-of-tiktok/" rel="noopener noreferrer" target="_blank">delivered its&nbsp;order</a>&nbsp;upholding the constitutionality of the Protecting Americans from Foreign Adversary Controlled Applications Act, a law passed by Congress and signed by President Joe Biden in April 2024. 
The Court found that the Act, which effectively bans TikTok in the US unless its Chinese parent company, ByteDance, sells it, does not violate the First Amendment rights of TikTok, its users, or creators.</p><p>The decision clears the way for a ban to go into effect on January 19, 2025. Late this evening, TikTok issued a <a href="https://x.com/TikTokPolicy/status/1880424906820608180" rel="noopener noreferrer" target="_blank">statement</a> saying that “Unless the Biden Administration immediately provides a definitive statement to satisfy the most critical service providers assuring non-enforcement, unfortunately TikTok will be forced to go dark on January 19.” The White House had previously announced&nbsp;it <a href="https://apnews.com/article/tiktok-ban-trump-executive-order-1e95d9836bf6f8c0c245ed1c3234d968" rel="noopener noreferrer" target="_blank">would not enforce</a>&nbsp;the ban before President Biden leaves office on Monday. Unless Biden takes action, this may set President-elect <strong>Donald Trump</strong> up to somehow come to TikTok’s rescue.&nbsp;</p><p>To learn more about the ruling and what may happen next, <strong>Justin Hendrix</strong> spoke to <strong>Kate Klonick</strong>, an associate professor of law at St. John's University and a fellow at Brookings, Harvard's Berkman Klein Center, and the Yale Information Society Project. 
The conversation also touches on recent moves by Meta’s founder and CEO, <strong>Mark Zuckerberg</strong>, to ingratiate himself to the incoming Trump administration.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>Today, Friday, January 17, 2025, the US Supreme Court <a href="https://www.techpolicy.press/us-supreme-court-upholds-constitutionality-of-law-that-may-result-in-ban-of-tiktok/" rel="noopener noreferrer" target="_blank">delivered its&nbsp;order</a>&nbsp;upholding the constitutionality of the Protecting Americans from Foreign Adversary Controlled Applications Act, a law passed by Congress and signed by President Joe Biden in April 2024. The Court found that the Act, which effectively bans TikTok in the US unless its Chinese parent company, ByteDance, sells it, does not violate the First Amendment rights of TikTok, its users, or creators.</p><p>The decision clears the way for a ban to go into effect on January 19, 2025. Late this evening, TikTok issued a <a href="https://x.com/TikTokPolicy/status/1880424906820608180" rel="noopener noreferrer" target="_blank">statement</a> saying that “Unless the Biden Administration immediately provides a definitive statement to satisfy the most critical service providers assuring non-enforcement, unfortunately TikTok will be forced to go dark on January 19.” The White House had previously announced&nbsp;it <a href="https://apnews.com/article/tiktok-ban-trump-executive-order-1e95d9836bf6f8c0c245ed1c3234d968" rel="noopener noreferrer" target="_blank">would not enforce</a>&nbsp;the ban before President Biden leaves office on Monday. Unless Biden takes action, this may set President-elect <strong>Donald Trump</strong> up to somehow come to TikTok’s rescue.&nbsp;</p><p>To learn more about the ruling and what may happen next, <strong>Justin Hendrix</strong> spoke to <strong>Kate Klonick</strong>, an associate professor of law at St. 
John's University and a fellow at Brookings, Harvard's Berkman Klein Center, and the Yale Information Society Project. The conversation also touches on recent moves by Meta’s founder and CEO, <strong>Mark Zuckerberg</strong>, to ingratiate himself to the incoming Trump administration.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-dumbest-timeline-the-supreme-court-rules-on-tiktok]]></link><guid isPermaLink="false">c5d8eee3-6fb1-40a2-b1a3-f2ea9ac6e814</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 17 Jan 2025 21:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/503bfae8-39ab-4d57-a507-4b6603cd6295/TPP313-converted.mp3" length="30673290" type="audio/mpeg"/><itunes:duration>36:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Addressing the &quot;Cursed Equilibrium&quot; of Social Media Algorithms</title><itunes:title>Addressing the &quot;Cursed Equilibrium&quot; of Social Media Algorithms</itunes:title><description><![CDATA[<p>Last fall, Cornell University PhD candidate <strong>Cristiana Firullo</strong> gave a presentation at the <a href="https://conferences.law.stanford.edu/tsrc/wp-content/uploads/sites/160/2024/09/TSRC-Agenda-9.11.2024.pdf" rel="noopener noreferrer" target="_blank">Trust and Safety Research Conference</a> at Stanford University during a session on understanding algorithms and online environments. Titled "The Cursed Equilibrium of Algorithmic Traumatization," the talk focused on the work Firullo is doing with her colleagues at Cornell to try to understand why social media recommendation systems may produce harmful effects on users. 
Audio reporter <strong>Rebecca Rand</strong> spoke to Firullo about their hypotheses.</p>]]></description><content:encoded><![CDATA[<p>Last fall, Cornell University PhD candidate <strong>Cristiana Firullo</strong> gave a presentation at the <a href="https://conferences.law.stanford.edu/tsrc/wp-content/uploads/sites/160/2024/09/TSRC-Agenda-9.11.2024.pdf" rel="noopener noreferrer" target="_blank">Trust and Safety Research Conference</a> at Stanford University during a session on understanding algorithms and online environments. Titled "The Cursed Equilibrium of Algorithmic Traumatization," the talk focused on the work Firullo is doing with her colleagues at Cornell to try to understand why social media recommendation systems may produce harmful effects on users. Audio reporter <strong>Rebecca Rand</strong> spoke to Firullo about their hypotheses.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/addressing-the-cursed-equilibrium-of-social-media-algorithms]]></link><guid isPermaLink="false">eab9d5c2-ad12-4e94-b752-1a24607401f3</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 12 Jan 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e4bc672c-60c2-42c1-8276-710bcf87524f/TPP312-converted.mp3" length="9128693" type="audio/mpeg"/><itunes:duration>10:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What to Watch on US State Tech Policy in 2025</title><itunes:title>What to Watch on US State Tech Policy in 2025</itunes:title><description><![CDATA[<p>Even as the new year ushers in a new administration and Congress in the US at the federal level, dozens of states are kicking off new legislative sessions and are expected to pursue various tech policy goals. 
<strong>Justin Hendrix</strong> spoke to three experts to get a sense of the trends unfolding across the states on the regulation of AI, privacy, child online safety, and related issues:</p><ul><li><strong> Keir Lamont</strong>, senior director at the Future of Privacy Forum (FPF) and author of <a href="https://www.linkedin.com/newsletters/the-patchwork-dispatch-7009554885507436546/" rel="noopener noreferrer" target="_blank">The Patchwork Dispatch</a>, a newsletter on state tech policy issues;</li><li><strong> Caitriona Fitzgerald</strong>, deputy director at the Electronic Privacy Information Center (EPIC), which runs a <a href="https://epic.org/issues/privacy-laws/state-laws/" rel="noopener noreferrer" target="_blank">state privacy policy project</a> and <a href="https://epic.org/aiscorecard/" rel="noopener noreferrer" target="_blank">scores</a> AI legislation;</li><li><strong> Scott Babwah Brennen</strong>, director of the Center on Technology Policy at New York University and an author of a <a href="https://techpolicynyu.org/wp-content/uploads/2024/12/state-tech-policy_2024_CTP_CSMaP_final.pdf" rel="noopener noreferrer" target="_blank">recent report</a> on trends in state tech policy.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Even as the new year ushers in a new administration and Congress in the US at the federal level, dozens of states are kicking off new legislative sessions and are expected to pursue various tech policy goals. 
<strong>Justin Hendrix</strong> spoke to three experts to get a sense of the trends unfolding across the states on the regulation of AI, privacy, child online safety, and related issues:</p><ul><li><strong> Keir Lamont</strong>, senior director at the Future of Privacy Forum (FPF) and author of <a href="https://www.linkedin.com/newsletters/the-patchwork-dispatch-7009554885507436546/" rel="noopener noreferrer" target="_blank">The Patchwork Dispatch</a>, a newsletter on state tech policy issues;</li><li><strong> Caitriona Fitzgerald</strong>, deputy director at the Electronic Privacy Information Center (EPIC), which runs a <a href="https://epic.org/issues/privacy-laws/state-laws/" rel="noopener noreferrer" target="_blank">state privacy policy project</a> and <a href="https://epic.org/aiscorecard/" rel="noopener noreferrer" target="_blank">scores</a> AI legislation;</li><li><strong> Scott Babwah Brennen</strong>, director of the Center on Technology Policy at New York University and an author of a <a href="https://techpolicynyu.org/wp-content/uploads/2024/12/state-tech-policy_2024_CTP_CSMaP_final.pdf" rel="noopener noreferrer" target="_blank">recent report</a> on trends in state tech policy.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-to-watch-on-us-state-tech-policy-in-2025]]></link><guid isPermaLink="false">7e912e82-12e7-499b-8843-34aa5bcb0cf0</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 05 Jan 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/85e00cb2-cc0e-4c72-8b14-87b38564a47c/TPP311-converted.mp3" length="34090172" type="audio/mpeg"/><itunes:duration>40:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Imagining 2025 and Beyond with Dr. Ruha Benjamin</title><itunes:title>Imagining 2025 and Beyond with Dr. 
Ruha Benjamin</itunes:title><description><![CDATA[<p>This week’s guest is <strong>Dr. Ruha Benjamin</strong>, Alexander Stewart 1886 Professor of African American Studies at Princeton University and Founding Director of the <a href="https://www.thejustdatalab.com/" rel="noopener noreferrer" target="_blank">IDA B. WELLS Just Data Lab</a>. Benjamin was recently named a <a href="https://www.macfound.org/fellows/class-of-2024/ruha-benjamin" rel="noopener noreferrer" target="_blank">2024 MacArthur Fellow</a>, and she’s written and edited multiple books, including 2019’s <a href="https://www.wiley.com/en-us/Race+After+Technology:+Abolitionist+Tools+for+the+New+Jim+Code-p-9781509526437" rel="noopener noreferrer" target="_blank"><em>Race After Technology</em></a> and 2022’s <a href="https://bookshop.org/books/viral-justice-how-we-grow-the-world-we-want/9780691222882" rel="noopener noreferrer" target="_blank"><em>Viral Justice</em></a><em>. </em>Last week she joined <strong>Justin Hendrix</strong> to discuss her latest book, <a href="https://bookshop.org/p/books/imagination-a-manifesto-ruha-benjamin/20074537?ean=9781324020974" rel="noopener noreferrer" target="_blank"><em>Imagination: A Manifesto</em></a>, published this year by WW Norton &amp; Company.</p>]]></description><content:encoded><![CDATA[<p>This week’s guest is <strong>Dr. Ruha Benjamin</strong>, Alexander Stewart 1886 Professor of African American Studies at Princeton University and Founding Director of the <a href="https://www.thejustdatalab.com/" rel="noopener noreferrer" target="_blank">IDA B. WELLS Just Data Lab</a>. 
Benjamin was recently named a <a href="https://www.macfound.org/fellows/class-of-2024/ruha-benjamin" rel="noopener noreferrer" target="_blank">2024 MacArthur Fellow</a>, and she’s written and edited multiple books, including 2019’s <a href="https://www.wiley.com/en-us/Race+After+Technology:+Abolitionist+Tools+for+the+New+Jim+Code-p-9781509526437" rel="noopener noreferrer" target="_blank"><em>Race After Technology</em></a> and 2022’s <a href="https://bookshop.org/books/viral-justice-how-we-grow-the-world-we-want/9780691222882" rel="noopener noreferrer" target="_blank"><em>Viral Justice</em></a><em>. </em>Last week she joined <strong>Justin Hendrix</strong> to discuss her latest book, <a href="https://bookshop.org/p/books/imagination-a-manifesto-ruha-benjamin/20074537?ean=9781324020974" rel="noopener noreferrer" target="_blank"><em>Imagination: A Manifesto</em></a>, published this year by WW Norton &amp; Company.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/imagining-2025-and-beyond-with-dr-ruha-benjamin]]></link><guid isPermaLink="false">847c6fe7-5320-47dc-b816-c93db11c6f88</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 22 Dec 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c33fc0c3-a6b2-4b0a-a6eb-a5e485febaad/TPP310-converted.mp3" length="29004505" type="audio/mpeg"/><itunes:duration>40:17</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How to Remedy Google&apos;s Search Monopoly</title><itunes:title>How to Remedy Google&apos;s Search Monopoly</itunes:title><description><![CDATA[<p>This close to the end of 2024, it’s clear that one of the most significant tech stories of the year was the outcome of the Google search antitrust case. It will also make headlines next year and beyond as the remedies phase gets worked out in the courts. 
For this episode, <strong>Justin Hendrix</strong> turns the host duties over to someone who has <a href="https://kgi.georgetown.edu/research-and-commentary/considerations-for-effective-search-competition-remedies/" rel="noopener noreferrer" target="_blank">looked closely</a> at this issue: <strong>Alissa Cooper</strong>, the Executive Director of the&nbsp;<a href="https://kgi.georgetown.edu/" rel="noopener noreferrer" target="_blank">Knight-Georgetown Institute</a> (KGI). Alissa hosted a conversation with three individuals who are following the remedies phase with an expert eye, including:</p><ul><li><strong>Cristina Caffarra</strong> is&nbsp;a competition economist and an honorary Professor at University College London, and cofounder of the Competition Research Policy Network at CEPR (Centre for Economic Policy Research), London.</li><li><strong>Kate Brennan</strong> is associate director at the AI Now Institute; and</li><li><strong>David Dinielli</strong> is an attorney and a visiting clinical lecturer and senior research scholar at Yale Law School.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>This close to the end of 2024, it’s clear that one of the most significant tech stories of the year was the outcome of the Google search antitrust case. It will also make headlines next year and beyond as the remedies phase gets worked out in the courts. For this episode, <strong>Justin Hendrix</strong> turns the host duties over to someone who has <a href="https://kgi.georgetown.edu/research-and-commentary/considerations-for-effective-search-competition-remedies/" rel="noopener noreferrer" target="_blank">looked closely</a> at this issue: <strong>Alissa Cooper</strong>, the Executive Director of the&nbsp;<a href="https://kgi.georgetown.edu/" rel="noopener noreferrer" target="_blank">Knight-Georgetown Institute</a> (KGI). 
Alissa hosted a conversation with three individuals who are following the remedies phase with an expert eye, including:</p><ul><li><strong>Cristina Caffarra</strong> is&nbsp;a competition economist and an honorary Professor at University College London, and cofounder of the Competition Research Policy Network at CEPR (Centre for Economic Policy Research), London.</li><li><strong>Kate Brennan</strong> is associate director at the AI Now Institute; and</li><li><strong>David Dinielli</strong> is an attorney and a visiting clinical lecturer and senior research scholar at Yale Law School.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-to-remedy-googles-search-monopoly]]></link><guid isPermaLink="false">ff4a3dcf-6a44-47b1-bba0-39787289df3a</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 15 Dec 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/342e0ab8-4768-410a-8c32-f0296a45f69c/TPP309-converted.mp3" length="41519485" type="audio/mpeg"/><itunes:duration>57:40</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Towards Resilience: A Conversation with Kate Starbird About the Future of Online Elections Discourse</title><itunes:title>Towards Resilience: A Conversation with Kate Starbird About the Future of Online Elections Discourse</itunes:title><description><![CDATA[<p><strong>Kate Starbird </strong>is a professor in the Department of Human Centered Design &amp; Engineering and director of the Emerging Capacities of Mass Participation Laboratory at the University of Washington, and co-founder of the University of Washington's Center for an Informed Public. 
<strong>Justin Hendrix</strong> interviewed her about her team’s ongoing efforts to study online rumors, including during the 2024 US election; the differences between the left and right media ecosystems in the US; and how she believes the research field is changing.&nbsp;</p>]]></description><content:encoded><![CDATA[<p><strong>Kate Starbird </strong>is a professor in the Department of Human Centered Design &amp; Engineering and director of the Emerging Capacities of Mass Participation Laboratory at the University of Washington, and co-founder of the University of Washington's Center for an Informed Public. <strong>Justin Hendrix</strong> interviewed her about her team’s ongoing efforts to study online rumors, including during the 2024 US election; the differences between the left and right media ecosystems in the US; and how she believes the research field is changing.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/towards-resilience-a-conversation-with-kate-starbird-about-the-future-of-online-elections-discourse]]></link><guid isPermaLink="false">235a227f-7948-412d-8fa7-35664aa7cfa9</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 08 Dec 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/ca9d3f78-f997-45df-b15b-287d01321856/TPP308-converted.mp3" length="23837570" type="audio/mpeg"/><itunes:duration>33:06</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Petra Molnar on Migration in the Age of Artificial Intelligence</title><itunes:title>Petra Molnar on Migration in the Age of Artificial Intelligence</itunes:title><description><![CDATA[<p>Mass migration presents a challenge to democracy in multiple ways. Chief among them is that anti-immigrant sentiment often plays a major role in the advance of illiberal and anti-democratic politics. 
We've seen this play out in the United States, where President-elect Donald Trump has promised a dramatic crackdown on immigration and the mass deportation of millions. </p><p>But the scale of today's migration may be dwarfed by what's to come. How has the movement of people affected the politics driving the development of surveillance, biometrics, big data and artificial intelligence technologies? And how do these technologies employed at borders and in governments themselves drive policy and change the way we think about the movement of people?</p><p>Today's guest has spent years traveling the world to study how technology is being deployed in border regions and conflict zones, and she's written a book about it. <a href="https://www.petramolnar.com/" rel="noopener noreferrer" target="_blank">Petra Molnar</a> is a lawyer and an anthropologist and the author of <a href="https://thenewpress.com/books/walls-have-eyes" rel="noopener noreferrer" target="_blank"><em>The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence</em></a>.  </p>]]></description><content:encoded><![CDATA[<p>Mass migration presents a challenge to democracy in multiple ways. Chief among them is that anti-immigrant sentiment often plays a major role in the advance of illiberal and anti-democratic politics. We've seen this play out in the United States, where President-elect Donald Trump has promised a dramatic crackdown on immigration and the mass deportation of millions. </p><p>But the scale of today's migration may be dwarfed by what's to come. How has the movement of people affected the politics driving the development of surveillance, biometrics, big data and artificial intelligence technologies? 
And how do these technologies employed at borders and in governments themselves drive policy and change the way we think about the movement of people?</p><p>Today's guest has spent years traveling the world to study how technology is being deployed in border regions and conflict zones, and she's written a book about it. <a href="https://www.petramolnar.com/" rel="noopener noreferrer" target="_blank">Petra Molnar</a> is a lawyer and an anthropologist and the author of <a href="https://thenewpress.com/books/walls-have-eyes" rel="noopener noreferrer" target="_blank"><em>The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence</em></a>.  </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/petra-molnar-on-migration-in-the-age-of-artificial-intelligence]]></link><guid isPermaLink="false">102d32d7-1a9a-48e3-94b0-3326dd2dbf2f</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 08 Dec 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/66e078a6-891d-4d2e-9514-b6ff9e0d1f3c/TPP307-converted.mp3" length="23625683" type="audio/mpeg"/><itunes:duration>32:49</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Robert Gorwa Tackles the Politics of Platform Regulation</title><itunes:title>Robert Gorwa Tackles the Politics of Platform Regulation</itunes:title><description><![CDATA[<p><strong style="color: var(--bs-accordion-color);">Robert Gorwa </strong><span style="font-family: var(--bs-font-sans-serif); font-size: 1.125rem; color: var(--bs-accordion-color);">is the author of a new book titled </span><a href="https://global.oup.com/academic/product/the-politics-of-platform-regulation-9780197692868?cc=us&amp;lang=en&amp;" rel="noopener noreferrer" target="_blank" style="color: var(--bs-accordion-color);"><em>The Politics of Platform Regulation: How Governments Shape 
Online Content Moderation</em></a><span style="font-family: var(--bs-font-sans-serif); font-size: 1.125rem; color: var(--bs-accordion-color);">, published by Oxford University Press. (The book is available open access- </span><a href="https://fdslive.oup.com/www.oup.com/academic/pdf/openaccess/9780197692868.pdf" rel="noopener noreferrer" target="_blank" style="color: var(--bs-accordion-color);">download a free copy here</a><span style="font-family: var(--bs-font-sans-serif); font-size: 1.125rem; color: var(--bs-accordion-color);">.) It is an analysis of how and why governments around the world engage in platform regulation. The lessons he draws from case studies of key regulatory developments in Europe, the United States, New Zealand, and Australia help explain the adoption of different regulatory strategies by these governments and the underlying politics that shape their approach. </span></p>]]></description><content:encoded><![CDATA[<p><strong style="color: var(--bs-accordion-color);">Robert Gorwa </strong><span style="font-family: var(--bs-font-sans-serif); font-size: 1.125rem; color: var(--bs-accordion-color);">is the author of a new book titled </span><a href="https://global.oup.com/academic/product/the-politics-of-platform-regulation-9780197692868?cc=us&amp;lang=en&amp;" rel="noopener noreferrer" target="_blank" style="color: var(--bs-accordion-color);"><em>The Politics of Platform Regulation: How Governments Shape Online Content Moderation</em></a><span style="font-family: var(--bs-font-sans-serif); font-size: 1.125rem; color: var(--bs-accordion-color);">, published by Oxford University Press. (The book is available open access- </span><a href="https://fdslive.oup.com/www.oup.com/academic/pdf/openaccess/9780197692868.pdf" rel="noopener noreferrer" target="_blank" style="color: var(--bs-accordion-color);">download a free copy here</a><span style="font-family: var(--bs-font-sans-serif); font-size: 1.125rem; color: var(--bs-accordion-color);">.) 
It is an analysis of how and why governments around the world engage in platform regulation. The lessons he draws from case studies of key regulatory developments in Europe, the United States, New Zealand, and Australia help explain the adoption of different regulatory strategies by these governments and the underlying politics that shape their approach. </span></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/robert-gorwa-tackles-the-politics-of-platform-regulation]]></link><guid isPermaLink="false">c821a051-3097-416a-9edf-76560824a886</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 01 Dec 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b7d47d5f-22ca-43a7-a082-27c0def12577/TPP306-converted.mp3" length="31141425" type="audio/mpeg"/><itunes:duration>43:15</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Evan Greer Asks the Tech Accountability Movement to Draw a Line</title><itunes:title>Evan Greer Asks the Tech Accountability Movement to Draw a Line</itunes:title><description><![CDATA[<p>At its November 21st "Summit of the Future of the Internet," billionaire <strong>Frank McCourt's</strong> Project Liberty <a href="https://www.projectliberty.io/news/charlamagne-tha-god-to-kick-off-day-one-of-project-libertys-summit-on-the-future-of-the-internet/" rel="noopener noreferrer" target="_blank">hosted</a> a panel discussion featuring <strong>Congresswoman Nancy Mace</strong>, a Republican from South Carolina, and <strong>Congressman Ro Khanna</strong>, a Democrat from California, moderated by the media personality <strong>Charlamagne tha God</strong>. 
Last month, Congresswoman Mace led an effort to ban transgender women from using female bathrooms at the US Capitol in response to the election of <strong>Sarah McBride</strong>, who is set to be the first openly transgender person in Congress representing voters in Delaware. <strong>Evan Greer</strong>, director of Fight for the Future, a tech advocacy organization, <a href="https://www.nbcnews.com/tech/internet/rep-nancy-mace-confronted-trans-activist-anti-trans-bathroom-rhetoric-rcna181408" rel="noopener noreferrer" target="_blank">took the opportunity</a> to confront Congresswoman Mace's bigotry during the Project Liberty conference. </p><p><strong>Justin Hendrix</strong> spoke to Evan last week about the incident, where she believes the tech accountability and digital rights movement should draw the line when it comes to engaging with far-right politicians, and how we can go about building spaces where we can imagine a different future that is truly just and liberatory. </p>]]></description><content:encoded><![CDATA[<p>At its November 21st "Summit of the Future of the Internet," billionaire <strong>Frank McCourt's</strong> Project Liberty <a href="https://www.projectliberty.io/news/charlamagne-tha-god-to-kick-off-day-one-of-project-libertys-summit-on-the-future-of-the-internet/" rel="noopener noreferrer" target="_blank">hosted</a> a panel discussion featuring <strong>Congresswoman Nancy Mace</strong>, a Republican from South Carolina, and <strong>Congressman Ro Khanna</strong>, a Democrat from California, moderated by the media personality <strong>Charlamagne tha God</strong>. Last month, Congresswoman Mace led an effort to ban transgender women from using female bathrooms at the US Capitol in response to the election of <strong>Sarah McBride</strong>, who is set to be the first openly transgender person in Congress representing voters in Delaware. 
<strong>Evan Greer</strong>, director of Fight for the Future, a tech advocacy organization, <a href="https://www.nbcnews.com/tech/internet/rep-nancy-mace-confronted-trans-activist-anti-trans-bathroom-rhetoric-rcna181408" rel="noopener noreferrer" target="_blank">took the opportunity</a> to confront Congresswoman Mace's bigotry during the Project Liberty conference. </p><p><strong>Justin Hendrix</strong> spoke to Evan last week about the incident, where she believes the tech accountability and digital rights movement should draw the line when it comes to engaging with far-right politicians, and how we can go about building spaces where we can imagine a different future that is truly just and liberatory. </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/evan-greer-asks-the-tech-accountability-movement-to-draw-a-line]]></link><guid isPermaLink="false">4cb0eeca-de4c-46e0-b1a3-7f276bf6afb4</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 01 Dec 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/96762eb0-00b5-44d2-9191-e8951b18e3b2/TPP305-converted.mp3" length="20165273" type="audio/mpeg"/><itunes:duration>28:00</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Documenting the Assault on Disinformation and Hate Speech Research</title><itunes:title>Documenting the Assault on Disinformation and Hate Speech Research</itunes:title><description><![CDATA[<p>During his recent campaign, President-elect <strong>Donald Trump</strong> made various promises consistent with the ongoing effort by <strong>Elon Musk</strong> and MAGA Republicans to target researchers and civil society groups that study issues such as propaganda and mis- and disinformation. 
</p><p>Today's guest has looked deeply at this effort, conducting an analysis of over 1,800 pages of primary documents to identify the strategic approaches employed by these parties, including the House Judiciary Select Subcommittee on the Weaponization of the Federal Government, and the outcomes and broader democratic implications of the campaign. <strong>Philip M. Napoli</strong> is the James R. Shepley Professor of Public Policy, the Director of the DeWitt Wallace Center for Media &amp; Democracy, and Senior Associate Dean for Faculty and Research for the Sanford School at Duke University. His findings are published in a <a href="https://www.tandfonline.com/doi/abs/10.1080/01972243.2024.2419105" rel="noopener noreferrer" target="_blank">new paper</a> in <em>The Information Society</em> titled "In pursuit of ignorance: The institutional assault on disinformation and hate speech research."</p>]]></description><content:encoded><![CDATA[<p>During his recent campaign, President-elect <strong>Donald Trump</strong> made various promises consistent with the ongoing effort by <strong>Elon Musk</strong> and MAGA Republicans to target researchers and civil society groups that study issues such as propaganda and mis- and disinformation. </p><p>Today's guest has looked deeply at this effort, conducting an analysis of over 1,800 pages of primary documents to identify the strategic approaches employed by these parties, including the House Judiciary Select Subcommittee on the Weaponization of the Federal Government, and the outcomes and broader democratic implications of the campaign. <strong>Philip M. Napoli</strong> is the James R. Shepley Professor of Public Policy, the Director of the DeWitt Wallace Center for Media &amp; Democracy, and Senior Associate Dean for Faculty and Research for the Sanford School at Duke University. 
His findings are published in a <a href="https://www.tandfonline.com/doi/abs/10.1080/01972243.2024.2419105" rel="noopener noreferrer" target="_blank">new paper</a> in <em>The Information Society</em> titled "In pursuit of ignorance: The institutional assault on disinformation and hate speech research."</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/documenting-the-assault-on-disinformation-and-hate-speech-research]]></link><guid isPermaLink="false">9646265a-6c7e-4148-8479-11f6e5f9fc51</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 24 Nov 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/0b0240ef-68ff-44fb-a7f3-ef4162c4964e/TPP304-converted.mp3" length="21253971" type="audio/mpeg"/><itunes:duration>29:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Race for AI Supremacy</title><itunes:title>The Race for AI Supremacy</itunes:title><description><![CDATA[<p><strong>Parmy Olson</strong> is a Bloomberg Opinion columnist covering technology regulation, artificial intelligence, and social media. Her <a href="https://us.macmillan.com/books/9781250361622/supremacy" rel="noopener noreferrer" target="_blank">new book</a>, <em>Supremacy: AI, ChatGPT, and the Race that Will Change the World</em>, tells a tale of rivalry and ambition as it chronicles the rush to exploit artificial intelligence. The book explores the trajectories of <strong>Sam Altman</strong> and <strong>Demis Hassabis</strong> and their roles in advancing artificial intelligence, the challenges posed by corporate power, and the extraordinary economic stakes of the current race to achieve technological supremacy.</p>]]></description><content:encoded><![CDATA[<p><strong>Parmy Olson</strong> is a Bloomberg Opinion columnist covering technology regulation, artificial intelligence, and social media. 
Her <a href="https://us.macmillan.com/books/9781250361622/supremacy" rel="noopener noreferrer" target="_blank">new book</a>, <em>Supremacy: AI, ChatGPT, and the Race that Will Change the World</em>, tells a tale of rivalry and ambition as it chronicles the rush to exploit artificial intelligence. The book explores the trajectories of <strong>Sam Altman</strong> and <strong>Demis Hassabis</strong> and their roles in advancing artificial intelligence, the challenges posed by corporate power, and the extraordinary economic stakes of the current race to achieve technological supremacy.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-race-for-ai-supremacy]]></link><guid isPermaLink="false">cbecf2ea-bb27-4257-9daf-6d8b00a425c8</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 17 Nov 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/18ccb799-a1a3-457f-93ce-c50d692386d8/TPP303-converted.mp3" length="28393219" type="audio/mpeg"/><itunes:duration>39:26</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Salvation, Abundance, Apocalypse: Is Technology the World&apos;s Most Powerful Religion?</title><itunes:title>Salvation, Abundance, Apocalypse: Is Technology the World&apos;s Most Powerful Religion?</itunes:title><description><![CDATA[<p>These days, if you see someone with their head bowed, you’re much more likely observing them staring into their phone than in prayer. But from digital rituals to the promises of abundance from Silicon Valley elites, has technology become the world’s most powerful religion? What kinds of promises of salvation and abundance are its leaders making? 
And how can thinking about technology in this way help us generate ways to reform our approach to it, particularly if we aim to restore humanist principles?</p><p>Today’s guest is <strong>Greg Epstein</strong>, who drew on lessons from his vocation as a humanist chaplain at Harvard and MIT to write a new book, <a href="https://mitpress.mit.edu/9780262049207/tech-agnostic/" rel="noopener noreferrer" target="_blank">just out from MIT Press</a>, called <em>Tech Agnostic: How Technology Became the World's Most Powerful Religion, and Why It Desperately Needs a Reformation. </em></p>]]></description><content:encoded><![CDATA[<p>These days, if you see someone with their head bowed, you’re much more likely observing them staring into their phone than in prayer. But from digital rituals to the promises of abundance from Silicon Valley elites, has technology become the world’s most powerful religion? What kinds of promises of salvation and abundance are its leaders making? And how can thinking about technology in this way help us generate ways to reform our approach to it, particularly if we aim to restore humanist principles?</p><p>Today’s guest is <strong>Greg Epstein</strong>, who drew on lessons from his vocation as a humanist chaplain at Harvard and MIT to write a new book, <a href="https://mitpress.mit.edu/9780262049207/tech-agnostic/" rel="noopener noreferrer" target="_blank">just out from MIT Press</a>, called <em>Tech Agnostic: How Technology Became the World's Most Powerful Religion, and Why It Desperately Needs a Reformation. 
</em></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/salvation-abundance-apocalypse-is-technology-the-worlds-most-powerful-religion]]></link><guid isPermaLink="false">fcaa9cad-910f-48e7-84d8-921c55cf727c</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 10 Nov 2024 07:30:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/de7c5ccd-8e56-4db1-9a7a-18c7177cbd06/TPP372-converted.mp3" length="34274849" type="audio/mpeg"/><itunes:duration>47:36</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What Kafka Can Teach Us About Privacy in the Age of AI</title><itunes:title>What Kafka Can Teach Us About Privacy in the Age of AI</itunes:title><description><![CDATA[<p>Today’s guest is Boston University School of Law professor&nbsp;<strong>Woodrow Hartzog</strong>, who, with&nbsp;the George Washington University Law School's <strong>Daniel Solove,</strong>&nbsp;is one of the authors of <a href="https://scholarship.law.bu.edu/faculty_scholarship/3820/" rel="noopener noreferrer" target="_blank">a recent paper</a> that explored the novelist <strong>Franz Kafka’s</strong> worldview as a vehicle to arrive at key insights for regulating privacy in the age of AI. The conversation explores why privacy-as-control models, which rely on individual consent and choice, fail in the digital age, especially with the advent of AI systems. Hartzog argues for a "societal structure model" of privacy protection that would impose substantive obligations on companies and set baseline protections for everyone rather than relying on individual consent. 
Kafka's work is a lens to examine how people often make choices against their own interests when confronted with complex technological systems, and how AI is amplifying these existing privacy and control problems.</p>]]></description><content:encoded><![CDATA[<p>Today’s guest is Boston University School of Law professor&nbsp;<strong>Woodrow Hartzog</strong>, who, with&nbsp;the George Washington University Law School's <strong>Daniel Solove,</strong>&nbsp;is one of the authors of <a href="https://scholarship.law.bu.edu/faculty_scholarship/3820/" rel="noopener noreferrer" target="_blank">a recent paper</a> that explored the novelist <strong>Franz Kafka’s</strong> worldview as a vehicle to arrive at key insights for regulating privacy in the age of AI. The conversation explores why privacy-as-control models, which rely on individual consent and choice, fail in the digital age, especially with the advent of AI systems. Hartzog argues for a "societal structure model" of privacy protection that would impose substantive obligations on companies and set baseline protections for everyone rather than relying on individual consent. 
Kafka's work is a lens to examine how people often make choices against their own interests when confronted with complex technological systems, and how AI is amplifying these existing privacy and control problems.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-kafka-can-teach-us-about-privacy-in-the-age-of-ai]]></link><guid isPermaLink="false">6ba9c07a-f293-4f5c-b34f-d91d6987a01e</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 03 Nov 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/611a3fe3-673a-45f7-a64d-9f077d579b26/TPP301-converted.mp3" length="27208327" type="audio/mpeg"/><itunes:duration>37:47</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Are Platforms Prepared for the Post-Election Period?</title><itunes:title>Are Platforms Prepared for the Post-Election Period?</itunes:title><description><![CDATA[<p>On Tuesday, November 5th, the final ballots will be cast in the 2024 US presidential election. But the process is far from over. How prepared are social media platforms for the post-election period? What should we make of characters like Elon Musk, who is actively advancing conspiracy theories and false claims about the integrity of the election? And what can we do going forward to support election workers and administrators on the frontlines facing threats and disinformation? 
To help answer these questions, <strong>Justin Hendrix</strong> spoke with three experts: </p><ul><li><strong>Katie Harbath</strong>, CEO of Anchor Change and chief global affairs officer at Duco Experts;</li><li><strong>Nicole Schneidman</strong>, technology policy strategist at Protect Democracy; and</li><li><strong>Dean Jackson</strong>, principal of Public Circle LLC and a reporting fellow at Tech Policy Press.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>On Tuesday, November 5th, the final ballots will be cast in the 2024 US presidential election. But the process is far from over. How prepared are social media platforms for the post-election period? What should we make of characters like Elon Musk, who is actively advancing conspiracy theories and false claims about the integrity of the election? And what can we do going forward to support election workers and administrators on the frontlines facing threats and disinformation? To help answer these questions, <strong>Justin Hendrix</strong> spoke with three experts: </p><ul><li><strong>Katie Harbath</strong>, CEO of Anchor Change and chief global affairs officer at Duco Experts;</li><li><strong>Nicole Schneidman</strong>, technology policy strategist at Protect Democracy; and</li><li><strong>Dean Jackson</strong>, principal of Public Circle LLC and a reporting fellow at Tech Policy Press.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/are-platforms-prepared-for-the-post-election-period]]></link><guid isPermaLink="false">e77c324d-6012-4d84-915c-5c546968a33f</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 02 Nov 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c499bd8e-d20f-4990-8a5e-7b4ed61beda2/TPP299-converted.mp3" length="32639792" 
type="audio/mpeg"/><itunes:duration>45:20</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What Role Might Elon Musk Play in the Post-Election Period?</title><itunes:title>What Role Might Elon Musk Play in the Post-Election Period?</itunes:title><description><![CDATA[<p>If you’re trying to game out the potential role of technology in the post-election period in the US, there is a significant "X" factor. When he purchased the social media platform formerly known as Twitter, “Elon Musk didn’t just get a social network—he got a political weapon.” So <a href="https://www.theatlantic.com/technology/archive/2024/10/elon-musk-x-political-weapon/680463/" rel="noopener noreferrer" target="_blank">says</a> today’s guest, a journalist who is one of the keenest observers of phenomena on the internet: <strong>Charlie Warzel</strong>, a staff writer at <a href="https://www.theatlantic.com/author/charlie-warzel/" rel="noopener noreferrer" target="_blank"><em>The Atlantic</em></a> and the author of its newsletter <a href="https://www.theatlantic.com/newsletters/sign-up/galaxy-brain/" rel="noopener noreferrer" target="_blank">Galaxy Brain</a>. <strong>Justin Hendrix</strong> caught up with him about what to make of Musk and the broader health of the information environment.</p>]]></description><content:encoded><![CDATA[<p>If you’re trying to game out the potential role of technology in the post-election period in the US, there is a significant "X" factor. 
When he purchased the social media platform formerly known as Twitter, “Elon Musk didn’t just get a social network—he got a political weapon.” So <a href="https://www.theatlantic.com/technology/archive/2024/10/elon-musk-x-political-weapon/680463/" rel="noopener noreferrer" target="_blank">says</a> today’s guest, a journalist who is one of the keenest observers of phenomena on the internet: <strong>Charlie Warzel</strong>, a staff writer at <a href="https://www.theatlantic.com/author/charlie-warzel/" rel="noopener noreferrer" target="_blank"><em>The Atlantic</em></a> and the author of its newsletter <a href="https://www.theatlantic.com/newsletters/sign-up/galaxy-brain/" rel="noopener noreferrer" target="_blank">Galaxy Brain</a>. <strong>Justin Hendrix</strong> caught up with him about what to make of Musk and the broader health of the information environment.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-role-might-elon-musk-play-in-the-post-election-period]]></link><guid isPermaLink="false">4fa4a875-92c3-4555-a143-a7d60036ea5a</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 02 Nov 2024 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/05e0a1db-f384-442d-a4a4-75fd58fb0e55/TPP300-converted.mp3" length="38449633" type="audio/mpeg"/><itunes:duration>53:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Three Perspectives on Generative AI and Elections</title><itunes:title>Three Perspectives on Generative AI and Elections</itunes:title><description><![CDATA[<p>In this episode, <strong>Justin Hendrix</strong> speaks with three researchers who recently published projects looking at the intersection of generative AI with elections around the world, including:</p><ul><li><strong>Samuel Woolley, </strong>Dietrich Chair of Disinformation Studies at the University of 
Pittsburgh and one of the authors of a set of studies titled <a href="https://mediaengagement.org/research/generative-artificial-intelligence-and-elections/" rel="noopener noreferrer" target="_blank">Generative Artificial Intelligence and Elections</a>;</li><li><strong>Lindsay Gorman, </strong>Managing Director and Senior Fellow of the Technology Program at the German Marshall Fund of the United States and an author of a report and online resource titled <a href="https://www.gmfus.org/spitting-images-tracking-deepfakes-and-generative-ai-elections" rel="noopener noreferrer" target="_blank">Spitting Images:&nbsp;Tracking Deepfakes and Generative AI in Elections</a>; and</li><li><strong>Scott Babwah Brennen</strong>, Director of the NYU Center on Technology Policy and one of the authors of a deep dive into the literature on the <a href="https://techpolicynyu.org/wp-content/uploads/2024/10/CTP_Will-AI-content-labels-work_final.pdf" rel="noopener noreferrer" target="_blank">effectiveness of AI content labels</a> and another on the <a href="https://techpolicynyu.org/wp-content/uploads/2024/10/CTP_In-Disclaimers-We-Trust.pdf" rel="noopener noreferrer" target="_blank">efficacy of recent US state legislation</a> requiring labels on political ads that use generative AI.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In this episode, <strong>Justin Hendrix</strong> speaks with three researchers who recently published projects looking at the intersection of generative AI with elections around the world, including:</p><ul><li><strong>Samuel Woolley, </strong>Dietrich Chair of Disinformation Studies at the University of Pittsburgh and one of the authors of a set of studies titled <a href="https://mediaengagement.org/research/generative-artificial-intelligence-and-elections/" rel="noopener noreferrer" target="_blank">Generative Artificial Intelligence and Elections</a>;</li><li><strong>Lindsay Gorman, </strong>Managing Director and Senior Fellow of the Technology 
Program at the German Marshall Fund of the United States and an author of a report and online resource titled <a href="https://www.gmfus.org/spitting-images-tracking-deepfakes-and-generative-ai-elections" rel="noopener noreferrer" target="_blank">Spitting Images:&nbsp;Tracking Deepfakes and Generative AI in Elections</a>; and</li><li><strong>Scott Babwah Brennen</strong>, Director of the NYU Center on Technology Policy and one of the authors of a deep dive into the literature on the <a href="https://techpolicynyu.org/wp-content/uploads/2024/10/CTP_Will-AI-content-labels-work_final.pdf" rel="noopener noreferrer" target="_blank">effectiveness of AI content labels</a> and another on the <a href="https://techpolicynyu.org/wp-content/uploads/2024/10/CTP_In-Disclaimers-We-Trust.pdf" rel="noopener noreferrer" target="_blank">efficacy of recent US state legislation</a> requiring labels on political ads that use generative AI.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/three-perspectives-on-generative-ai-and-elections]]></link><guid isPermaLink="false">70334e06-3ec2-458a-ba92-1ca8ca7912b7</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 27 Oct 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/1cf79b7e-8123-4f10-b858-f287c17baaca/TPP298-converted.mp3" length="28660624" type="audio/mpeg"/><itunes:duration>39:48</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Unpacking the Principles of the Digital Services Act with Martin Husovec</title><itunes:title>Unpacking the Principles of the Digital Services Act with Martin Husovec</itunes:title><description><![CDATA[<p><strong>Martin Husovec&nbsp;</strong>is an associate law professor at the&nbsp;London School of Economics and Political Science (LSE). 
He works on questions at the intersection of technology and digital liberties, particularly platform regulation, intellectual property and freedom of expression. He's the author of <a href="https://husovec.eu/book/" rel="noopener noreferrer" target="_blank"><em>Principles of the Digital Services Act</em></a><em>,</em> just out from Oxford University Press. <strong>Justin Hendrix</strong> spoke to him about the rollout of the DSA, what to make of progress on trusted flaggers and out-of-court dispute resolution bodies, how transparency and reporting on things like 'systemic risk' is playing out, and whether the DSA is up to the ambitious goals policymakers set for it. </p>]]></description><content:encoded><![CDATA[<p><strong>Martin Husovec&nbsp;</strong>is an associate law professor at the&nbsp;London School of Economics and Political Science (LSE). He works on questions at the intersection of technology and digital liberties, particularly platform regulation, intellectual property and freedom of expression. He's the author of <a href="https://husovec.eu/book/" rel="noopener noreferrer" target="_blank"><em>Principles of the Digital Services Act</em></a><em>,</em> just out from Oxford University Press. <strong>Justin Hendrix</strong> spoke to him about the rollout of the DSA, what to make of progress on trusted flaggers and out-of-court dispute resolution bodies, how transparency and reporting on things like 'systemic risk' is playing out, and whether the DSA is up to the ambitious goals policymakers set for it. 
</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/unpacking-the-principles-of-the-digital-services-act-with-martin-husovec]]></link><guid isPermaLink="false">42108c0a-28f6-4c91-8011-3050239c0cad</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 27 Oct 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/efaaa899-5de2-4758-a92a-b01995bebbce/TPP297-converted.mp3" length="34607752" type="audio/mpeg"/><itunes:duration>48:04</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Mary Anne Franks Challenges First Amendment Orthodoxy</title><itunes:title>Mary Anne Franks Challenges First Amendment Orthodoxy</itunes:title><description><![CDATA[<p>In her <a href="https://www.hachettebookgroup.com/titles/mary-anne-franks/fearless-speech/9781645030539/?lens=bold-type-books" rel="noopener noreferrer" target="_blank">new book</a>, <em>Fearless Speech: Breaking Free from the First Amendment</em>, <strong>Dr. Mary Anne Franks</strong> challenges First Amendment orthodoxy and critiques “reckless speech,” which endangers vulnerable groups and protects corporate interests, in order to advance “fearless speech,” which seeks to advance equality and democracy.</p>]]></description><content:encoded><![CDATA[<p>In her <a href="https://www.hachettebookgroup.com/titles/mary-anne-franks/fearless-speech/9781645030539/?lens=bold-type-books" rel="noopener noreferrer" target="_blank">new book</a>, <em>Fearless Speech: Breaking Free from the First Amendment</em>, <strong>Dr. 
Mary Anne Franks</strong> challenges First Amendment orthodoxy and critiques “reckless speech,” which endangers vulnerable groups and protects corporate interests, in order to advance “fearless speech,” which seeks to advance equality and democracy.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/mary-anne-franks-takes-on-first-amendment-orthodoxy]]></link><guid isPermaLink="false">cf6fec2f-0b65-44a3-8997-4b0434de5d41</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 20 Oct 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/ed905a30-f38d-44a4-8898-63d830ef8c5a/TPP296-converted.mp3" length="33734742" type="audio/mpeg"/><itunes:duration>46:51</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Secure Messaging Apps and Election Integrity</title><itunes:title>Secure Messaging Apps and Election Integrity</itunes:title><description><![CDATA[<p>Along with <strong>Sam Woolley</strong>, <strong>Mariana Olaizola Rosenblat</strong> and <strong>Inga K. Trauthig</strong> are authors of a <a href="https://bhr.stern.nyu.edu/publication/safeguarding-encrypted-messaging-platforms/" rel="noopener noreferrer" target="_blank">new report</a> from the NYU Stern Center for Business and Human Rights and the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin titled "Covert Campaigns: Safeguarding Encrypted Messaging Platforms from Voter Manipulation." 
<strong>Justin Hendrix </strong>caught up with them to learn more about how political propagandists are exploiting the features of encrypted messaging platforms to manipulate voters, and what can be done about it without breaking the promise of encryption for all users.</p>]]></description><content:encoded><![CDATA[<p>Along with <strong>Sam Woolley</strong>, <strong>Mariana Olaizola Rosenblat</strong> and <strong>Inga K. Trauthig</strong> are authors of a <a href="https://bhr.stern.nyu.edu/publication/safeguarding-encrypted-messaging-platforms/" rel="noopener noreferrer" target="_blank">new report</a> from the NYU Stern Center for Business and Human Rights and the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin titled "Covert Campaigns: Safeguarding Encrypted Messaging Platforms from Voter Manipulation." <strong>Justin Hendrix </strong>caught up with them to learn more about how political propagandists are exploiting the features of encrypted messaging platforms to manipulate voters, and what can be done about it without breaking the promise of encryption for all users.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/secure-messaging-apps-and-election-integrity]]></link><guid isPermaLink="false">2826648e-b984-43f2-a8f9-e557c928ad50</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 20 Oct 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c5b252ad-eb91-4852-ae20-d2c19563ac4e/TPP295-converted.mp3" length="27341236" type="audio/mpeg"/><itunes:duration>37:58</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Governing the Fediverse: A Field Study</title><itunes:title>Governing the Fediverse: A Field Study</itunes:title><description><![CDATA[<p>A lot of folks frustrated with major social media 
platforms are migrating to alternatives like Mastodon and Bluesky, which operate on decentralized protocols. This summer, <strong>Erin Kissane</strong>&nbsp;and&nbsp;<strong>Darius Kazemi</strong>&nbsp;<a href="https://fediverse-governance.github.io/" rel="noopener noreferrer" target="_blank">released a report</a> on the governance of fediverse microblogging servers and the moderation practices of the people who run them. <strong>Justin Hendrix</strong> caught up with <strong>Erin Kissane </strong>about their findings, including the emerging forms of diplomacy between different server operators, the types of political and policy decisions moderators must make, and the need for more resources and tooling to enable better governance across the fediverse.</p>]]></description><content:encoded><![CDATA[<p>A lot of folks frustrated with major social media platforms are migrating to alternatives like Mastodon and Bluesky, which operate on decentralized protocols. This summer, <strong>Erin Kissane</strong>&nbsp;and&nbsp;<strong>Darius Kazemi</strong>&nbsp;<a href="https://fediverse-governance.github.io/" rel="noopener noreferrer" target="_blank">released a report</a> on the governance of fediverse microblogging servers and the moderation practices of the people who run them. 
<strong>Justin Hendrix</strong> caught up with <strong>Erin Kissane </strong>about their findings, including the emerging forms of diplomacy between different server operators, the types of political and policy decisions moderators must make, and the need for more resources and tooling to enable better governance across the fediverse.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/governing-the-fediverse-a-field-study]]></link><guid isPermaLink="false">aaf1f847-89df-4a72-8bef-9c828876b2d0</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 20 Oct 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d321c4cd-a24f-447b-a916-ca229bf799c1/TPP294-converted.mp3" length="33445084" type="audio/mpeg"/><itunes:duration>46:27</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Election Meddling, Censorship, and More Bad News in 2024 Freedom on the Net Report</title><itunes:title>Election Meddling, Censorship, and More Bad News in 2024 Freedom on the Net Report</itunes:title><description><![CDATA[<p>The results in this year’s installment of the Freedom House <a href="https://freedomhouse.org/report/freedom-net/2024/struggle-trust-online" rel="noopener noreferrer" target="_blank"><em>Freedom on the Net</em></a> report generally follow the same distressing trajectory as prior reports, marking a 14th consecutive year of decline in internet freedom around the world. But in this year of elections, the Freedom House analysts also identified a set of concerning phenomena related to this most fundamental act of democracy and how governments are asserting themselves, for better or worse. 
<strong>Justin Hendrix</strong> spoke to report authors <strong>Allie Funk</strong> and <strong>Kian Vesteinsson</strong> about their findings.</p>]]></description><content:encoded><![CDATA[<p>The results in this year’s installment of the Freedom House <a href="https://freedomhouse.org/report/freedom-net/2024/struggle-trust-online" rel="noopener noreferrer" target="_blank"><em>Freedom on the Net</em></a> report generally follow the same distressing trajectory as prior reports, marking a 14th consecutive year of decline in internet freedom around the world. But in this year of elections, the Freedom House analysts also identified a set of concerning phenomena related to this most fundamental act of democracy and how governments are asserting themselves, for better or worse. <strong>Justin Hendrix</strong> spoke to report authors <strong>Allie Funk</strong> and <strong>Kian Vesteinsson</strong> about their findings.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/election-meddling-censorship-and-more-bad-news-in-2024-freedom-on-the-net-report]]></link><guid isPermaLink="false">4ac2ed23-f134-4350-985a-f033c87d8535</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 19 Oct 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/935cbc4e-7557-4d5c-92ea-5daada2ccf4e/TPP292-converted.mp3" length="23373660" type="audio/mpeg"/><itunes:duration>32:28</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Independent Researchers and Journalists Mourn the Loss of CrowdTangle</title><itunes:title>Independent Researchers and Journalists Mourn the Loss of CrowdTangle</itunes:title><description><![CDATA[<p>In this episode, we're crashing a funeral... for <strong>CrowdTangle</strong>, a piece of software that allowed journalists and independent researchers to get insights into social media. 
Not our usual material, but this particular loss is a huge blow to the ongoing fight for public access to data from the platforms, and underscores why we need to continue to fight for transparency. And the folks convened by the <strong>Knight-Georgetown Institute</strong>&nbsp;and the <strong>Coalition for Independent Technology Research</strong> refused to let it go unmarked.</p>]]></description><content:encoded><![CDATA[<p>In this episode, we're crashing a funeral... for <strong>CrowdTangle</strong>, a piece of software that allowed journalists and independent researchers to get insights into social media. Not our usual material, but this particular loss is a huge blow to the ongoing fight for public access to data from the platforms, and underscores why we need to continue to fight for transparency. And the folks convened by the <strong>Knight-Georgetown Institute</strong>&nbsp;and the <strong>Coalition for Independent Technology Research</strong> refused to let it go unmarked.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/independent-researchers-and-journalists-mourn-the-loss-of-crowdtangle]]></link><guid isPermaLink="false">cfc1aff7-780d-40d2-bb76-861e2d7b4fe0</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 18 Oct 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/59942c3a-2869-465c-8ed8-d7fa8d41898f/TPP293-converted.mp3" length="19035851" type="audio/mpeg"/><itunes:duration>26:26</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>From King James to Google: Barry Lynn on the Antitrust Revolution</title><itunes:title>From King James to Google: Barry Lynn on the Antitrust Revolution</itunes:title><description><![CDATA[<p><strong>Barry Lynn</strong> is the executive director of the Open Markets Institute in Washington DC and the author of this month's
cover essay in Harper's titled "<a href="https://harpers.org/archive/2024/10/the-antitrust-revolution-big-tech-barry-c-lynn/" rel="noopener noreferrer" target="_blank">The Antitrust Revolution: Liberal democracy’s last stand against Big Tech</a>." <strong>Justin Hendrix</strong> spoke to him about his essay, about the remedy framework proposed by the US Department of Justice following the ruling in the Google search antitrust trial, and about what to anticipate for the antitrust movement following the 2024 US presidential election.</p>]]></description><content:encoded><![CDATA[<p><strong>Barry Lynn</strong> is the executive director of the Open Markets Institute in Washington DC and the author of this month's cover essay in Harper's titled "<a href="https://harpers.org/archive/2024/10/the-antitrust-revolution-big-tech-barry-c-lynn/" rel="noopener noreferrer" target="_blank">The Antitrust Revolution: Liberal democracy’s last stand against Big Tech</a>." <strong>Justin Hendrix</strong> spoke to him about his essay, about the remedy framework proposed by the US Department of Justice following the ruling in the Google search antitrust trial, and about what to anticipate for the antitrust movement following the 2024 US presidential election.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/from-king-james-to-google-barry-lynn-on-the-antitrust-revolution]]></link><guid isPermaLink="false">cce70318-b6c5-4593-b2ea-e17061500ade</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 13 Oct 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9e6ff6fa-6d2b-45d5-8f02-3149a4671507/TPP291-converted.mp3" length="25460724" type="audio/mpeg"/><itunes:duration>35:22</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Evolution of Online Political Advertising: A Conversation with Who 
Targets Me&apos;s Sam Jeffers</title><itunes:title>The Evolution of Online Political Advertising: A Conversation with Who Targets Me&apos;s Sam Jeffers</itunes:title><description><![CDATA[<p>Today’s guest is <strong>Sam Jeffers</strong>, cofounder and executive director of <a href="https://whotargets.me/en/" rel="noopener noreferrer" target="_blank">Who Targets Me</a>. Jeffers has spent several years building a suite of capabilities to make political advertising more transparent,&nbsp;including <a href="https://whotargets.me/a-guide-to-the-who-targets-me-browser-extension/" rel="noopener noreferrer" target="_blank">tools for individuals</a> and&nbsp;<a href="https://whotargets.me/data" rel="noopener noreferrer" target="_blank">data and support for academics, researchers and journalists</a>. His organization also advocates for&nbsp;<a href="https://whotargets.me/category/policy" rel="noopener noreferrer" target="_blank">better policy</a>&nbsp;from platforms, regulators and governments. (You can <a href="https://chromewebstore.google.com/detail/who-targets-me/fcejbjalmgocomoinikjejnkimlnoljp" rel="noopener noreferrer" target="_blank">download</a> the Who Targets Me browser extension to contribute your data to the project.)</p>]]></description><content:encoded><![CDATA[<p>Today’s guest is <strong>Sam Jeffers</strong>, cofounder and executive director of <a href="https://whotargets.me/en/" rel="noopener noreferrer" target="_blank">Who Targets Me</a>. Jeffers has spent several years building a suite of capabilities to make political advertising more transparent,&nbsp;including <a href="https://whotargets.me/a-guide-to-the-who-targets-me-browser-extension/" rel="noopener noreferrer" target="_blank">tools for individuals</a> and&nbsp;<a href="https://whotargets.me/data" rel="noopener noreferrer" target="_blank">data and support for academics, researchers and journalists</a>.
His organization also advocates for&nbsp;<a href="https://whotargets.me/category/policy" rel="noopener noreferrer" target="_blank">better policy</a>&nbsp;from platforms, regulators and governments. (You can <a href="https://chromewebstore.google.com/detail/who-targets-me/fcejbjalmgocomoinikjejnkimlnoljp" rel="noopener noreferrer" target="_blank">download</a> the Who Targets Me browser extension to contribute your data to the project.)</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-evolution-of-online-political-advertising-a-conversation-with-who-targets-mes-sam-jeffers]]></link><guid isPermaLink="false">3ca5fe8f-1c5b-4b64-94c6-8f50f2494721</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 11 Oct 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9b248595-3f09-48f1-bfe3-e4441daa30a8/TPP290-converted.mp3" length="21240805" type="audio/mpeg"/><itunes:duration>29:30</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Assessing Systemic Risk Under the Digital Services Act</title><itunes:title>Assessing Systemic Risk Under the Digital Services Act</itunes:title><description><![CDATA[<p>One of the most significant concepts in Europe’s Digital Services Act is that of “systemic risk,” which relates to the spread of illegal content, or content that might have foreseeable negative effects on the exercise of fundamental rights or on civic discourse, electoral processes, public security and so forth. The DSA requires companies to carry out risk assessments to detail whether they are adequately addressing such risks on their platforms. What exactly amounts to systemic risk and how exactly to go about assessing it is still up in the air in these early days of the DSA’s implementation.
</p><p>In today’s episode, Tech Policy Press Staff Writer <strong>Gabby Miller </strong>speaks with three experts <a href="https://globalnetworkinitiative.org/wp-content/uploads/GNI-DTSP-Forum-Summary.pdf" rel="noopener noreferrer" target="_blank">involved in conversations</a> aimed at establishing best practices:</p><ul><li><strong>Jason Pielemeier</strong>, Executive Director of the Global Network Initiative;</li><li><strong>David Sullivan</strong>, Executive Director of the Digital Trust &amp; Safety Partnership; and</li><li><strong>Chantal Joris</strong>, Senior Legal Officer at Article 19</li></ul><br/>]]></description><content:encoded><![CDATA[<p>One of the most significant concepts in Europe’s Digital Services Act is that of “systemic risk,” which relates to the spread of illegal content, or content that might have foreseeable negative effects on the exercise of fundamental rights or on civic discourse, electoral processes, public security and so forth. The DSA requires companies to carry out risk assessments to detail whether they are adequately addressing such risks on their platforms. What exactly amounts to systemic risk and how exactly to go about assessing it is still up in the air in these early days of the DSA’s implementation.
</p><p>In today’s episode, Tech Policy Press Staff Writer <strong>Gabby Miller </strong>speaks with three experts <a href="https://globalnetworkinitiative.org/wp-content/uploads/GNI-DTSP-Forum-Summary.pdf" rel="noopener noreferrer" target="_blank">involved in conversations</a> aimed at establishing best practices:</p><ul><li><strong>Jason Pielemeier</strong>, Executive Director of the Global Network Initiative;</li><li><strong>David Sullivan</strong>, Executive Director of the Digital Trust &amp; Safety Partnership; and</li><li><strong>Chantal Joris</strong>, Senior Legal Officer at Article 19</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/assessing-systemic-risk-under-the-digital-services-act]]></link><guid isPermaLink="false">16bee0a4-7121-4a6b-8e15-247cdf06ae41</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 06 Oct 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/1487c458-bebe-4f1e-bb98-8dd44adca775/TPP289-converted.mp3" length="33202163" type="audio/mpeg"/><itunes:duration>46:07</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Unpacking New Mexico&apos;s Complaint Against Snap Inc.</title><itunes:title>Unpacking New Mexico&apos;s Complaint Against Snap Inc.</itunes:title><description><![CDATA[<p>Last week, <em>Wall Street Journal</em> technology reporter <strong>Jeff Horwitz</strong> <a href="https://www.wsj.com/tech/snap-failed-to-warn-users-about-sextortion-risks-state-lawsuit-alleges-0b170fc7" rel="noopener noreferrer" target="_blank">first reported</a> on details of an <a href="https://nmdoj.gov/wp-content/uploads/2024-10-01-SNAP-NM-Amended-Complaint_Redacted.pdf" rel="noopener noreferrer" target="_blank">unredacted version</a> of a complaint against Snap brought by New Mexico Attorney General <strong>Raúl Torrez.
</strong>Tech Policy Press editor <strong>Justin Hendrix </strong>spoke to Horwitz about its details, and the questions it leaves unanswered.</p>]]></description><content:encoded><![CDATA[<p>Last week, <em>Wall Street Journal</em> technology reporter <strong>Jeff Horwitz</strong> <a href="https://www.wsj.com/tech/snap-failed-to-warn-users-about-sextortion-risks-state-lawsuit-alleges-0b170fc7" rel="noopener noreferrer" target="_blank">first reported</a> on details of an <a href="https://nmdoj.gov/wp-content/uploads/2024-10-01-SNAP-NM-Amended-Complaint_Redacted.pdf" rel="noopener noreferrer" target="_blank">unredacted version</a> of a complaint against Snap brought by New Mexico Attorney General <strong>Raúl Torrez. </strong>Tech Policy Press editor <strong>Justin Hendrix </strong>spoke to Horwitz about its details, and the questions it leaves unanswered.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/unpacking-new-mexicos-complaint-against-snap-inc-]]></link><guid isPermaLink="false">da60ab6f-b487-4da1-8819-f6ccbd5310c5</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 06 Oct 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/195c864a-93df-48e0-8c88-ecbb8840fa55/Horwitz-Snap-converted.mp3" length="24221883" type="audio/mpeg"/><itunes:duration>33:38</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>AI Snake Oil: Separating Hype from Reality</title><itunes:title>AI Snake Oil: Separating Hype from Reality</itunes:title><description><![CDATA[<p><strong>Arvind Narayanan</strong> and <strong>Sayash Kapoor</strong> are the authors of <a href="https://press.princeton.edu/books/hardcover/9780691249131/ai-snake-oil?srsltid=AfmBOoo6HzLpa-hybtg4CRhp-eNsrD9B8JfyqLYQXa1PbJoQzwFvr4pK" rel="noopener noreferrer" target="_blank"><em>AI Snake Oil: What Artificial Intelligence Can Do, What
It Can’t, and How to Tell the Difference</em>,</a> published September 24 by Princeton University Press. In this conversation, <strong>Justin Hendrix </strong>focuses in particular on the book's Chapter 6, "Why Can't AI Fix Social Media?"</p>]]></description><content:encoded><![CDATA[<p><strong>Arvind Narayanan</strong> and <strong>Sayash Kapoor</strong> are the authors of <a href="https://press.princeton.edu/books/hardcover/9780691249131/ai-snake-oil?srsltid=AfmBOoo6HzLpa-hybtg4CRhp-eNsrD9B8JfyqLYQXa1PbJoQzwFvr4pK" rel="noopener noreferrer" target="_blank"><em>AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference</em>,</a> published September 24 by Princeton University Press. In this conversation, <strong>Justin Hendrix </strong>focuses in particular on the book's Chapter 6, "Why Can't AI Fix Social Media?"</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/ai-snake-oil-separating-hype-from-reality]]></link><guid isPermaLink="false">19b788f4-0fd6-4f2e-aacd-0f40024ae495</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 29 Sep 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/a507925d-be24-483a-be18-658737017b2c/TPP287-converted.mp3" length="30015376" type="audio/mpeg"/><itunes:duration>35:44</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Assessing Platform Preparedness for the 2024 US Election</title><itunes:title>Assessing Platform Preparedness for the 2024 US Election</itunes:title><description><![CDATA[<p>The Institute for Strategic Dialogue (ISD) recently <a href="https://www.isdglobal.org/isd-publications/us-election-platform-preparedness/" rel="noopener noreferrer" target="_blank">assessed</a> social media platforms’ policies, public commitments, and product interventions related to election integrity across six 
major issue areas: platform integrity, violent extremism and hate speech, internal and external resourcing, transparency, political advertising and state-affiliated media.&nbsp;<strong>Justin Hendrix</strong> spoke to two of the report's authors: ISD's Director of Technology &amp; Society, <strong>Isabelle Frances-Wright,</strong> and its Senior US Digital Policy Manager, <strong>Ellen Jacobs</strong>. ISD's assessment included Snap, Facebook, Instagram, TikTok, YouTube, and X.</p>]]></description><content:encoded><![CDATA[<p>The Institute for Strategic Dialogue (ISD) recently <a href="https://www.isdglobal.org/isd-publications/us-election-platform-preparedness/" rel="noopener noreferrer" target="_blank">assessed</a> social media platforms’ policies, public commitments, and product interventions related to election integrity across six major issue areas: platform integrity, violent extremism and hate speech, internal and external resourcing, transparency, political advertising and state-affiliated media.&nbsp;<strong>Justin Hendrix</strong> spoke to two of the report's authors: ISD's Director of Technology &amp; Society, <strong>Isabelle Frances-Wright,</strong> and its Senior US Digital Policy Manager, <strong>Ellen Jacobs</strong>.
ISD's assessment included Snap, Facebook, Instagram, TikTok, YouTube, and X.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/assessing-platform-preparedness-for-the-2024-us-election]]></link><guid isPermaLink="false">f4dfa2e2-4b21-4368-b679-b2c2e802ed06</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Wed, 25 Sep 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/0d2ddbf1-0f8f-4129-b0f6-d398811c19d9/TPP286-converted.mp3" length="23677710" type="audio/mpeg"/><itunes:duration>32:53</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Gary Marcus Wants to Tame Silicon Valley</title><itunes:title>Gary Marcus Wants to Tame Silicon Valley</itunes:title><description><![CDATA[<p><strong>Gary Marcus</strong> writes that the companies developing artificial intelligence systems want the citizens of democracies “to absorb all the&nbsp;negative externalities” that might arise from their products, “such as the damage to democracy from Generative AI–produced misinformation, or cybercrime and kidnapping schemes using deepfaked voice clones—without them paying a nickel.” And, he says, we need to fight back.&nbsp;His new book is called <a href="https://mitpress.mit.edu/9780262551069/taming-silicon-valley/" rel="noopener noreferrer" target="_blank"><em>Taming Silicon Valley: How We Can Ensure That AI Works for Us</em></a>, published by MIT Press on September 17, 2024.</p>]]></description><content:encoded><![CDATA[<p><strong>Gary Marcus</strong> writes that the companies developing artificial intelligence systems want the citizens of democracies “to absorb all the&nbsp;negative externalities” that might arise from their products, “such as the damage to democracy from Generative AI–produced misinformation, or cybercrime and kidnapping schemes using deepfaked
voice clones—without them paying a nickel.” And, he says, we need to fight back.&nbsp;His new book is called <a href="https://mitpress.mit.edu/9780262551069/taming-silicon-valley/" rel="noopener noreferrer" target="_blank"><em>Taming Silicon Valley: How We Can Ensure That AI Works for Us</em></a>, published by MIT Press on September 17, 2024.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/gary-marcus-wants-to-tame-silicon-valley]]></link><guid isPermaLink="false">1e3d6545-54b5-444b-ab10-7c6bcbc231c8</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 22 Sep 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/08ddba46-1982-49a5-adea-a58ea3a195f9/TPP285-converted.mp3" length="32053914" type="audio/mpeg"/><itunes:duration>44:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Resisting the Tech Coup: A Conversation with Marietje Schaake</title><itunes:title>Resisting the Tech Coup: A Conversation with Marietje Schaake</itunes:title><description><![CDATA[<p><strong>Marietje Schaake</strong> is the author of <a href="https://press.princeton.edu/books/hardcover/9780691241173/the-tech-coup?srsltid=AfmBOoopbhd11mLFy9LiUtT8tUU1a7aADDsygDHG7N4XurvoIgKdAfSJ" rel="noopener noreferrer" target="_blank"><em>The Tech Coup: How to Save Democracy from Silicon Valley</em></a>. <strong>Dr.
Alondra Nelson,</strong> a Professor&nbsp;at the&nbsp;Institute for Advanced Study, who served as deputy assistant to <strong>President Joe Biden</strong> and&nbsp;Acting Director&nbsp;of the&nbsp;White House Office of Science and Technology Policy&nbsp;(OSTP), calls Schaake “a twenty-first century Tocqueville” who “looks at Silicon Valley and its impact on democratic society with an outsider’s gimlet eye.” Nobel Prize winner <strong>Maria Ressa </strong>says Schaake's new book “exposes the unchecked, corrosive power that is undermining democracy, human rights, and our global order.” And author and activist <strong>Cory Doctorow</strong> says the book offers “A thorough and necessary explanation of the parade of policy failures that enshittified the internet—and a sound prescription for its disenshittification.” <strong>Justin Hendrix</strong> spoke to Schaake just before the book's publication on September 24, 2024.</p>]]></description><content:encoded><![CDATA[<p><strong>Marietje Schaake</strong> is the author of <a href="https://press.princeton.edu/books/hardcover/9780691241173/the-tech-coup?srsltid=AfmBOoopbhd11mLFy9LiUtT8tUU1a7aADDsygDHG7N4XurvoIgKdAfSJ" rel="noopener noreferrer" target="_blank"><em>The Tech Coup: How to Save Democracy from Silicon Valley</em></a>. <strong>Dr.
Alondra Nelson,</strong> a Professor&nbsp;at the&nbsp;Institute for Advanced Study, who served as deputy assistant to <strong>President Joe Biden</strong> and&nbsp;Acting Director&nbsp;of the&nbsp;White House Office of Science and Technology Policy&nbsp;(OSTP), calls Schaake “a twenty-first century Tocqueville” who “looks at Silicon Valley and its impact on democratic society with an outsider’s gimlet eye.” Nobel Prize winner <strong>Maria Ressa </strong>says Schaake's new book “exposes the unchecked, corrosive power that is undermining democracy, human rights, and our global order.” And author and activist <strong>Cory Doctorow</strong> says the book offers “A thorough and necessary explanation of the parade of policy failures that enshittified the internet—and a sound prescription for its disenshittification.” <strong>Justin Hendrix</strong> spoke to Schaake just before the book's publication on September 24, 2024.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/resisting-the-tech-coup-a-conversation-with-marietje-schaake]]></link><guid isPermaLink="false">cd16e1a6-1eb2-4ff5-8af7-27effd3840b4</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 22 Sep 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/5265f3cc-70e8-4520-9416-ae80901672e2/TPP284-converted.mp3" length="26604567" type="audio/mpeg"/><itunes:duration>36:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Thierry Breton Resigns- What Does it Mean for European Tech Regulation?</title><itunes:title>Thierry Breton Resigns- What Does it Mean for European Tech Regulation?</itunes:title><description><![CDATA[<p>In 2019, <strong>Thierry Breton</strong>, a French business executive who served as France’s Minister of Finance from 2005 to 2007, was nominated by President&nbsp;<strong>Emmanuel
Macron</strong>&nbsp;to become a member of the&nbsp;European Commission&nbsp;for the&nbsp;Internal Market.&nbsp;In that role his name and face were closely associated with Europe’s push to regulate digital markets and the passage of legislation such as the Digital Services Act and the EU’s AI Act.&nbsp;</p><p>On Monday, September 16 - in <a href="https://x.com/ThierryBreton/status/1835565206639972734" rel="noopener noreferrer" target="_blank">a letter</a> that called into question EU Commission President <strong>Ursula von der Leyen’s</strong> governance - Breton resigned his post. While certain tech executives may be happy to see him go - <strong>Elon Musk</strong> <a href="https://x.com/elonmusk/status/1836340621851885869" rel="noopener noreferrer" target="_blank">posted</a> “bon voyage” to the news - his departure spells change for Europe’s approach to tech going forward. To learn more, <strong>Justin Hendrix</strong> reached out to a European journalist who is <a href="https://www.linkedin.com/posts/luca-bertuzzi-186729130_now-that-the-dust-has-settled-after-two-extremely-activity-7242140181607104513-pj5j?utm_source=share&amp;utm_medium=member_desktop" rel="noopener noreferrer" target="_blank">covering</a> these matters closely, and who has been kind enough to share his reporting on the EU AI Act with Tech Policy Press in the past: MLex Senior AI Correspondent <strong>Luca Bertuzzi</strong>.</p>]]></description><content:encoded><![CDATA[<p>In 2019, <strong>Thierry Breton</strong>, a French business executive who served as France’s Minister of Finance from 2005 to 2007, was nominated by President&nbsp;<strong>Emmanuel Macron</strong>&nbsp;to become a member of the&nbsp;European Commission&nbsp;for the&nbsp;Internal Market.&nbsp;In that role his name and face were closely associated with Europe’s push to regulate digital markets and the passage of legislation such as the Digital Services Act and the EU’s AI Act.&nbsp;</p><p>On Monday, September 16 - in <a
href="https://x.com/ThierryBreton/status/1835565206639972734" rel="noopener noreferrer" target="_blank">a letter</a> that called into question EU Commission President <strong>Ursula von der Leyen’s</strong> governance - Breton resigned his post. While certain tech executives may be happy to see him go - <strong>Elon Musk</strong> <a href="https://x.com/elonmusk/status/1836340621851885869" rel="noopener noreferrer" target="_blank">posted</a> “bon voyage” to the news - his departure spells change for Europe’s approach to tech going forward. To learn more, <strong>Justin Hendrix</strong> reached out to a European journalist who is <a href="https://www.linkedin.com/posts/luca-bertuzzi-186729130_now-that-the-dust-has-settled-after-two-extremely-activity-7242140181607104513-pj5j?utm_source=share&amp;utm_medium=member_desktop" rel="noopener noreferrer" target="_blank">covering</a> these matters closely, and who has been kind enough to share his reporting on the EU AI Act with Tech Policy Press in the past: MLex Senior AI Correspondent <strong>Luca Bertuzzi</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/thierry-breton-resigns-what-does-it-mean-for-european-tech-regulation]]></link><guid isPermaLink="false">e2fe06ca-270a-42eb-948a-f3ea2d425a71</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 20 Sep 2024 22:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/ba05e01d-99d8-4b7a-8ec3-87173a0717ae/TPP283-converted.mp3" length="14930024" type="audio/mpeg"/><itunes:duration>20:44</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Free Speech vs. Sovereignty?</title><itunes:title>Free Speech vs.
Sovereignty?</itunes:title><description><![CDATA[<p><strong>Paris Marx</strong>, a Canadian tech critic, recently authored <a href="https://disconnect.blog/pavel-durov-and-elon-musk-are-not-free-speech-champions/" rel="noopener noreferrer" target="_blank">a post under the headline</a> "Pavel Durov and Elon Musk are not free speech champions: The actions against Telegram and Twitter/X are about sovereignty, not speech." <strong>Justin Hendrix</strong> spoke to Paris about his assessment of these matters, and why those making claims in defense of free speech in the wake of Brazil’s ban on X and Telegram founder and CEO Pavel Durov’s arrest in France may in fact be undermining free expression and internet freedoms in the long run.&nbsp;</p>]]></description><content:encoded><![CDATA[<p><strong>Paris Marx</strong>, a Canadian tech critic, recently authored <a href="https://disconnect.blog/pavel-durov-and-elon-musk-are-not-free-speech-champions/" rel="noopener noreferrer" target="_blank">a post under the headline</a> "Pavel Durov and Elon Musk are not free speech champions: The actions against Telegram and Twitter/X are about sovereignty, not speech." 
<strong>Justin Hendrix</strong> spoke to Paris about his assessment of these matters, and why those making claims in defense of free speech in the wake of Brazil’s ban on X and Telegram founder and CEO Pavel Durov’s arrest in France may in fact be undermining free expression and internet freedoms in the long run.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/free-speech-vs-sovereignty]]></link><guid isPermaLink="false">8ea8ae6c-219d-42d0-9cb5-47309aa04fb8</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 15 Sep 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d9dc5bea-5b48-4447-81df-a83e9cbbd5dd/TPP282-converted.mp3" length="37275535" type="audio/mpeg"/><itunes:duration>44:23</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Understanding Systemic Risks under the Digital Services Act</title><itunes:title>Understanding Systemic Risks under the Digital Services Act</itunes:title><description><![CDATA[<p>At Tech Policy Press, we’re closely following the implementation of the Digital Services Act, the European Union law designed to regulate online platforms and services. One of the DSA’s key objectives is to identify and mitigate systemic risks. But how do we gauge what rises to the level of a systemic risk? How do we get the sort of information we need from platforms to identify and mitigate systemic risk, and how do we create the kinds of collaborations between regulators and the research community that are necessary to answer complex questions?</p><p><strong>Ramsha Jahangir</strong>, a reporting fellow at Tech Policy Press, recently discussed these questions with <strong>Dr.
Oliver Marsh</strong>, who is head of tech research at Algorithm Watch, an NGO with offices in Berlin and Zurich that works on issues at the intersection of technology and society. Dr. Marsh has been leading research on systemic risks and the DSA’s approach, and just put out a <a href="https://algorithmwatch.org/en/wp-content/uploads/2024/08/AlgorithmWatch-Researching-Systemic-Risks-under-the-DSA-240726_v2.pdf" rel="noopener noreferrer" target="_blank">detailed summary</a> of his work.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>At Tech Policy Press, we’re closely following the implementation of the Digital Services Act, the European Union law designed to regulate online platforms and services. One of the DSA’s key objectives is to identify and mitigate systemic risks. But how do we gauge what rises to the level of a systemic risk? How do we get the sort of information we need from platforms to identify and mitigate systemic risk, and how do we create the kinds of collaborations between regulators and the research community that are necessary to answer complex questions?</p><p><strong>Ramsha Jahangir</strong>, a reporting fellow at Tech Policy Press, recently discussed these questions with <strong>Dr.
Marsh has been leading research on systemic risks and the DSA’s approach, and just put out a <a href="https://algorithmwatch.org/en/wp-content/uploads/2024/08/AlgorithmWatch-Researching-Systemic-Risks-under-the-DSA-240726_v2.pdf" rel="noopener noreferrer" target="_blank">detailed summary</a> of his work.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/understanding-systemic-risks-under-the-digital-services-act]]></link><guid isPermaLink="false">d2bf2770-a751-4a74-a074-7fac4d53301e</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 15 Sep 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/a6a03f5a-6fc5-489b-a10a-c01701d64258/TPP281-converted.mp3" length="17616931" type="audio/mpeg"/><itunes:duration>20:58</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Google Online Advertising Antitrust Trial Kicks Off In a DC Court</title><itunes:title>Google Online Advertising Antitrust Trial Kicks Off In a DC Court</itunes:title><description><![CDATA[<p>Today is Monday, September 9th, and <strong>Judge Leonie Brinkema</strong> of the US District Court for the Eastern District of Virginia is presiding over the start of a trial in which the United States Department of Justice accuses Google of violating antitrust law by abusing its power in the market for online advertising. Google <a href="https://blog.google/outreach-initiatives/public-policy/google-ad-tech-sept-2024/" rel="noopener noreferrer" target="_blank">contests</a> the allegations against it.
</p><p>To get a bit more detail on what to expect, <strong>Justin Hendrix</strong> spoke to two individuals covering the case closely who take a critical view of Google, the government’s allegations about its power in the online advertising market, and the company’s effect on journalism and the overall media and information ecosystem:</p><ul><li><strong>Sarah Kay Wiley</strong>, director of policy at <a href="https://checkmyads.org/" rel="noopener noreferrer" target="_blank">Check My Ads</a>, which is running a <a href="https://www.usvgoogleads.com/" rel="noopener noreferrer" target="_blank">comprehensive tracker</a> on the case;</li><li><strong>Karina Montoya</strong>, a senior reporter and policy analyst at the <a href="https://www.journalismliberty.org/publications/google-adtech-trials-starts-today" rel="noopener noreferrer" target="_blank">Center for Journalism and Liberty</a>, a program of the Open Markets Institute, who has covered the case extensively <a href="https://www.techpolicy.press/author/karina-montoya/" rel="noopener noreferrer" target="_blank">for Tech Policy Press</a>.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Today is Monday, September 9th. Today <strong>Judge Leonie Brinkema</strong> of the US District Court for the Eastern District of Virginia is presiding over the start of a trial in which the United States Department of Justice accuses Google of violating antitrust law, abusing its power in the market for online advertising. Google <a href="https://blog.google/outreach-initiatives/public-policy/google-ad-tech-sept-2024/" rel="noopener noreferrer" target="_blank">contests</a> the allegations against it. 
</p><p>To get a bit more detail on what to expect, <strong>Justin Hendrix</strong> spoke to two individuals covering the case closely who take a critical view of Google, the government’s allegations about its power in the online advertising market, and the company’s effect on journalism and the overall media and information ecosystem:</p><ul><li><strong>Sarah Kay Wiley</strong>, director of policy at <a href="https://checkmyads.org/" rel="noopener noreferrer" target="_blank">Check My Ads</a>, which is running a <a href="https://www.usvgoogleads.com/" rel="noopener noreferrer" target="_blank">comprehensive tracker</a> on the case;</li><li><strong>Karina Montoya</strong>, a senior reporter and policy analyst at the <a href="https://www.journalismliberty.org/publications/google-adtech-trials-starts-today" rel="noopener noreferrer" target="_blank">Center for Journalism and Liberty</a>, a program of the Open Markets Institute, who has covered the case extensively <a href="https://www.techpolicy.press/author/karina-montoya/" rel="noopener noreferrer" target="_blank">for Tech Policy Press</a>.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/google-online-advertising-antitrust-trial-kicks-off-in-a-dc-court]]></link><guid isPermaLink="false">37ff743f-c803-423b-8e90-c2c19000fbf9</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Mon, 09 Sep 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/4a9e69db-ba26-445e-88c3-8a885613a107/TPP280-converted.mp3" length="20962850" type="audio/mpeg"/><itunes:duration>34:56</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What&apos;s Going On In California?</title><itunes:title>What&apos;s Going On In California?</itunes:title><description><![CDATA[<p>Thirty tech bills went through the lawmaking sausage grinder in California this 
past session, and now <strong>Governor Gavin Newsom</strong> is about to decide the fate of 19 that passed the state legislature. The Governor now has until the end of September to sign or veto the bills, or to permit them to become law without his signature.&nbsp;</p><p>To learn a little more about some of the key pieces of legislation and the overall atmosphere around tech regulation in California, <strong>Justin Hendrix </strong>spoke to two journalists who live and work in the state and cover these issues regularly:</p><ul><li><strong>Jesús Alvarado</strong>, a reporting fellow at Tech Policy Press and author of <a href="https://www.techpolicy.press/a-look-at-californias-sweeping-ai-safety-bill/" rel="noopener noreferrer" target="_blank">a recent post on SB 1047</a>, a key piece of the California legislation; </li><li><strong>Khari Johnson</strong>, a technology reporter at CalMatters, a fellow in the Digital Technology for Democracy Lab at the Karsh Institute for Democracy at the University of Virginia, and the author of a <a href="https://calmatters.org/economy/technology/2024/09/california-ai-safety-regulations-bills/" rel="noopener noreferrer" target="_blank">recent article</a> on the California legislation.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Thirty tech bills went through the lawmaking sausage grinder in California this past session, and now <strong>Governor Gavin Newsom</strong> is about to decide the fate of 19 that passed the state legislature. 
The Governor now has until the end of September to sign or veto the bills, or to permit them to become law without his signature.&nbsp;</p><p>To learn a little more about some of the key pieces of legislation and the overall atmosphere around tech regulation in California, <strong>Justin Hendrix </strong>spoke to two journalists who live and work in the state and cover these issues regularly:</p><ul><li><strong>Jesús Alvarado</strong>, a reporting fellow at Tech Policy Press and author of <a href="https://www.techpolicy.press/a-look-at-californias-sweeping-ai-safety-bill/" rel="noopener noreferrer" target="_blank">a recent post on SB 1047</a>, a key piece of the California legislation; </li><li><strong>Khari Johnson</strong>, a technology reporter at CalMatters, a fellow in the Digital Technology for Democracy Lab at the Karsh Institute for Democracy at the University of Virginia, and the author of a <a href="https://calmatters.org/economy/technology/2024/09/california-ai-safety-regulations-bills/" rel="noopener noreferrer" target="_blank">recent article</a> on the California legislation.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/whats-going-on-in-california]]></link><guid isPermaLink="false">730ff00f-ac3f-435a-8f27-216b18689eda</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 08 Sep 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/2a1a102f-3f47-4857-ac33-723db465e827/TPP279-converted.mp3" length="24136309" type="audio/mpeg"/><itunes:duration>33:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Platforms and Elections: the Global State of Play</title><itunes:title>Platforms and Elections: the Global State of Play</itunes:title><description><![CDATA[<p>On August 26th, <strong>Justin Hendrix</strong> moderated <a 
href="https://www.ssrc.org/events/platforms-and-elections-the-global-state-of-play/" rel="noopener noreferrer" target="_blank">a panel</a> convened by the Social Science Research Council at its offices in Brooklyn, New York. The panel was titled “Platforms and Elections: the Global State of Play,” and it featured:</p><ul><li><strong>Dr. Shannon McGregor</strong>, associate professor at the UNC Hussman School of Journalism and Media and a principal investigator with the Center for Information Technology in Public Life (CITAP);</li><li><strong>Dr. Jonathan Corpus Ong</strong>, professor of global digital media at the University of Massachusetts at Amherst, inaugural director of the Global Technology for Social Justice Lab; and</li><li><strong>Dr. Chris Tenove</strong>, research associate and instructor at the School of Public Policy and Global Affairs and Assistant Director of the Center for the Study of Democratic Institutions, the University of British Columbia.</li></ul><br/><p>This episode features a lightly edited recording of the conversation, which touches on topics ranging from the role of civil society and independent researchers in engaging with efforts to protect the integrity of elections and mitigate the spread of misinformation to current questions about how generative AI may impact politics.</p>]]></description><content:encoded><![CDATA[<p>On August 26th, <strong>Justin Hendrix</strong> moderated <a href="https://www.ssrc.org/events/platforms-and-elections-the-global-state-of-play/" rel="noopener noreferrer" target="_blank">a panel</a> convened by the Social Science Research Council at its offices in Brooklyn, New York. The panel was titled “Platforms and Elections: the Global State of Play,” and it featured:</p><ul><li><strong>Dr. Shannon McGregor</strong>, associate professor at the UNC Hussman School of Journalism and Media and a principal investigator with the Center for Information Technology in Public Life (CITAP);</li><li><strong>Dr. 
Jonathan Corpus Ong</strong>, professor of global digital media at the University of Massachusetts at Amherst, inaugural director of the Global Technology for Social Justice Lab; and</li><li><strong>Dr. Chris Tenove</strong>, research associate and instructor at the School of Public Policy and Global Affairs and Assistant Director of the Center for the Study of Democratic Institutions, the University of British Columbia.</li></ul><br/><p>This episode features a lightly edited recording of the conversation, which touches on topics ranging from the role of civil society and independent researchers in engaging with efforts to protect the integrity of elections and mitigate the spread of misinformation to current questions about how generative AI may impact politics.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/platforms-and-elections-the-global-state-of-play]]></link><guid isPermaLink="false">06f71604-1492-43e6-a09a-06956e4d8902</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 08 Sep 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/1714a91b-a670-4d5f-8bc7-0f948b417c53/TPP278-converted.mp3" length="40945466" type="audio/mpeg"/><itunes:duration>56:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Understanding the People Who Turn Lies Into Reality</title><itunes:title>Understanding the People Who Turn Lies Into Reality</itunes:title><description><![CDATA[<p><strong>Renée DiResta</strong>, who serves on the board of Tech Policy Press and has been an occasional contributor, is the author of <a href="https://hachettebookgroup.com/titles/renee-diresta/invisible-rulers/9781541703377/" rel="noopener noreferrer" target="_blank"><em>Invisible Rulers: The People Who Turn Lies Into Reality</em></a>, published by Hachette Book Group in June. 
<strong>Justin Hendrix</strong> had a chance to catch up with DiResta last week to discuss some of the key ideas in the book, and how she sees them playing out in the current moment heading into the 2024 US election.</p>]]></description><content:encoded><![CDATA[<p><strong>Renée DiResta</strong>, who serves on the board of Tech Policy Press and has been an occasional contributor, is the author of <a href="https://hachettebookgroup.com/titles/renee-diresta/invisible-rulers/9781541703377/" rel="noopener noreferrer" target="_blank"><em>Invisible Rulers: The People Who Turn Lies Into Reality</em></a>, published by Hachette Book Group in June. <strong>Justin Hendrix</strong> had a chance to catch up with DiResta last week to discuss some of the key ideas in the book, and how she sees them playing out in the current moment heading into the 2024 US election.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/understanding-the-people-who-turn-lies-into-reality]]></link><guid isPermaLink="false">befa6a9e-db50-4393-89c7-781bd19850a1</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 01 Sep 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/fc5422dd-a133-422c-a6ad-23747086006d/Rene-e-DiResta-Invisible-Rulers-converted.mp3" length="34226568" type="audio/mpeg"/><itunes:duration>47:32</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Brazilian Judge Orders the Suspension of Elon Musk&apos;s X</title><itunes:title>Brazilian Judge Orders the Suspension of Elon Musk&apos;s X</itunes:title><description><![CDATA[<p>The billionaire owner of the social media platform X, <strong>Elon Musk</strong>, has been in a prolonged dispute with a Supreme Court Judge in Brazil regarding X’s content moderation practices. 
Earlier this year, <strong>Judge Alexandre de Moraes</strong> launched an investigation into X after Musk defied a court order to block accounts that supported former right-wing president <strong>Jair Bolsonaro</strong> and were accused of spreading misinformation and hate speech.</p><p>On Friday afternoon, August 30, following a standoff over an order requiring X to appoint a new legal representative in Brazil, the Judge <a href="https://nucleo.jor.br/curtas/2024-08-30-bloqueio-x-autorizado-stf/" rel="noopener noreferrer" target="_blank">issued an order to suspend X</a> in the country. </p><p><strong>Justin Hendrix</strong> spoke to three people following the situation closely in Brazil: <strong>Laís Martins</strong>, a journalist at <a href="https://www.intercept.com.br/equipe/lais-martins/" rel="noopener noreferrer" target="_blank">The Intercept in Brazil</a>; <strong>Sérgio Spagnuolo, </strong>executive director &amp; founder of the data-driven tech news organization <a href="https://nucleo.jor.br/" rel="noopener noreferrer" target="_blank">Nucleo Journalism</a>; and <strong>Dr. Ivar Alberto Hartmann</strong>, an associate professor at the <a href="https://www.insper.edu.br/pt/docentes/ivar-alberto-glasherster-martins-lange-hartmann" rel="noopener noreferrer" target="_blank">Insper Institute of Education and Research in Brazil</a>.</p>]]></description><content:encoded><![CDATA[<p>The billionaire owner of the social media platform X, <strong>Elon Musk</strong>, has been in a prolonged dispute with a Supreme Court Judge in Brazil regarding X’s content moderation practices. 
Earlier this year, <strong>Judge Alexandre de Moraes</strong> launched an investigation into X after Musk defied a court order to block accounts that supported former right-wing president <strong>Jair Bolsonaro</strong> and were accused of spreading misinformation and hate speech.</p><p>On Friday afternoon, August 30, following a standoff over an order requiring X to appoint a new legal representative in Brazil, the Judge <a href="https://nucleo.jor.br/curtas/2024-08-30-bloqueio-x-autorizado-stf/" rel="noopener noreferrer" target="_blank">issued an order to suspend X</a> in the country. </p><p><strong>Justin Hendrix</strong> spoke to three people following the situation closely in Brazil: <strong>Laís Martins</strong>, a journalist at <a href="https://www.intercept.com.br/equipe/lais-martins/" rel="noopener noreferrer" target="_blank">The Intercept in Brazil</a>; <strong>Sérgio Spagnuolo, </strong>executive director &amp; founder of the data-driven tech news organization <a href="https://nucleo.jor.br/" rel="noopener noreferrer" target="_blank">Nucleo Journalism</a>; and <strong>Dr. 
Ivar Alberto Hartmann</strong>, an associate professor at the <a href="https://www.insper.edu.br/pt/docentes/ivar-alberto-glasherster-martins-lange-hartmann" rel="noopener noreferrer" target="_blank">Insper Institute of Education and Research in Brazil</a>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/brazilian-judge-orders-the-suspension-of-elon-musks-x]]></link><guid isPermaLink="false">0eef64f2-154f-4bfb-b718-c44fc3878907</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 30 Aug 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/a3865ec9-6bc7-4a94-a945-5c1fb9414f7a/TPP276-converted.mp3" length="37993529" type="audio/mpeg"/><itunes:duration>52:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Conversation with Mark Surman, President of Mozilla</title><itunes:title>A Conversation with Mark Surman, President of Mozilla</itunes:title><description><![CDATA[<p><strong>Justin Hendrix </strong>speaks with <strong>Mark Surman</strong>, President of Mozilla, about Mozilla’s work promoting open source AI, the importance of competition in the tech sector, and the regulatory challenges facing the industry. Surman discusses Mozilla's initiatives in AI investment and development, and reflects on what the recent ruling in the Google search case might mean for the future of Mozilla and the tech economy. And Surman shares his hopes for the future: that we can arrive at a tech economy that is not purely extractive, but rather one that respects people’s values and dignity.&nbsp;</p>]]></description><content:encoded><![CDATA[<p><strong>Justin Hendrix </strong>speaks with <strong>Mark Surman</strong>, President of Mozilla, about Mozilla’s work promoting open source AI, the importance of competition in the tech sector, and the regulatory challenges facing the industry. 
Surman discusses Mozilla's initiatives in AI investment and development, and reflects on what the recent ruling in the Google search case might mean for the future of Mozilla and the tech economy. And Surman shares his hopes for the future: that we can arrive at a tech economy that is not purely extractive, but rather one that respects people’s values and dignity.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-conversation-with-mark-surman-president-of-mozilla]]></link><guid isPermaLink="false">e1cfddfc-53d4-4206-a79f-461df51c4e01</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 25 Aug 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/13911f9e-cbc7-4ca8-ac65-96d88c9083a4/TPP275-converted.mp3" length="17650623" type="audio/mpeg"/><itunes:duration>24:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Design Codes and the Courts</title><itunes:title>Design Codes and the Courts</itunes:title><description><![CDATA[<p>On Friday, August 16, the United States Ninth Circuit Court of Appeals&nbsp;<a href="https://cdn.sanity.io/files/3tzzh18d/production/9e19c8974ba93d881c6b8629801d0bab7aea18ce.pdf" rel="noopener noreferrer" target="_blank">issued a ruling</a>&nbsp;in&nbsp;<a href="https://www.techpolicy.press/tracker/california-age-appropriate-design-code/" rel="noopener noreferrer" target="_blank"><em>NetChoice v. Bonta</em></a>, partially upholding and partially vacating a preliminary injunction against California's Age-Appropriate Design Code Act. 
The <a href="https://www.techpolicy.press/federal-appeals-court-narrows-injunction-of-california-ageappropriate-design-code/" rel="noopener noreferrer" target="_blank">court affirmed</a> that certain provisions of the law are likely to violate the First Amendment by compelling online businesses to assess and mitigate potential harms to children, but it vacated the broader injunction, remanding the case to the district court for further consideration of other parts of the statute, including restrictions on the collection and use of children's data.&nbsp;</p><p>In this episode, <strong>Justin Hendrix</strong> recounts the basics of the Ninth Circuit ruling. And in a second segment that was recorded just days before Friday's ruling, Tech Policy Press fellow <strong>Dean Jackson</strong> is joined by Tech Justice Law Project executive director <strong>Meetali Jain</strong> and USC Marshall School Neely Center managing director <strong>Ravi Iyer</strong> for a discussion on key questions that were before the Ninth Circuit and their implications for future efforts at tech regulation.</p>]]></description><content:encoded><![CDATA[<p>On Friday, August 16, the United States Ninth Circuit Court of Appeals&nbsp;<a href="https://cdn.sanity.io/files/3tzzh18d/production/9e19c8974ba93d881c6b8629801d0bab7aea18ce.pdf" rel="noopener noreferrer" target="_blank">issued a ruling</a>&nbsp;in&nbsp;<a href="https://www.techpolicy.press/tracker/california-age-appropriate-design-code/" rel="noopener noreferrer" target="_blank"><em>NetChoice v. Bonta</em></a>, partially upholding and partially vacating a preliminary injunction against California's Age-Appropriate Design Code Act. 
The <a href="https://www.techpolicy.press/federal-appeals-court-narrows-injunction-of-california-ageappropriate-design-code/" rel="noopener noreferrer" target="_blank">court affirmed</a> that certain provisions of the law are likely to violate the First Amendment by compelling online businesses to assess and mitigate potential harms to children, but it vacated the broader injunction, remanding the case to the district court for further consideration of other parts of the statute, including restrictions on the collection and use of children's data.&nbsp;</p><p>In this episode, <strong>Justin Hendrix</strong> recounts the basics of the Ninth Circuit ruling. And in a second segment that was recorded just days before Friday's ruling, Tech Policy Press fellow <strong>Dean Jackson</strong> is joined by Tech Justice Law Project executive director <strong>Meetali Jain</strong> and USC Marshall School Neely Center managing director <strong>Ravi Iyer</strong> for a discussion on key questions that were before the Ninth Circuit and their implications for future efforts at tech regulation.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/design-codes-and-the-courts]]></link><guid isPermaLink="false">6e425b10-9ba1-4531-989b-0b877bc8834f</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 18 Aug 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/41d75c26-72b4-4107-9659-47a67c1b403e/TPP274-converted.mp3" length="20069047" type="audio/mpeg"/><itunes:duration>27:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>New Mexico Attorney General Raúl Torrez on His Lawsuit Against Meta</title><itunes:title>New Mexico Attorney General Raúl Torrez on His Lawsuit Against Meta</itunes:title><description><![CDATA[<p><strong>Raúl Torrez</strong> was sworn in as New Mexico’s 32nd Attorney 
General in January 2023. Last December, <strong>Attorney General Torrez</strong> <a href="https://www.techpolicy.press/tracker/new-mexico-v-meta-inc-and-mark-zuckerberg/" rel="noopener noreferrer" target="_blank">filed a lawsuit</a> against Meta for allegedly failing to protect children from sexual abuse, online solicitation, and human trafficking. The outcome of this case could have broader implications for how online platforms are regulated and held accountable for user safety in the future, including through litigation. </p><p><strong>Justin Hendrix</strong> spoke to <strong>Attorney General Torrez</strong> in advance of a panel discussion he participated in alongside the Attorney General of Virginia at the 2024 Coalition to End Exploitation Global Summit on Wednesday, August 7, 2024 in Washington DC. </p>]]></description><content:encoded><![CDATA[<p><strong>Raúl Torrez</strong> was sworn in as New Mexico’s 32nd Attorney General in January 2023. Last December, <strong>Attorney General Torrez</strong> <a href="https://www.techpolicy.press/tracker/new-mexico-v-meta-inc-and-mark-zuckerberg/" rel="noopener noreferrer" target="_blank">filed a lawsuit</a> against Meta for allegedly failing to protect children from sexual abuse, online solicitation, and human trafficking. The outcome of this case could have broader implications for how online platforms are regulated and held accountable for user safety in the future, including through litigation. </p><p><strong>Justin Hendrix</strong> spoke to <strong>Attorney General Torrez</strong> in advance of a panel discussion he participated in alongside the Attorney General of Virginia at the 2024 Coalition to End Exploitation Global Summit on Wednesday, August 7, 2024 in Washington DC. 
</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/new-mexico-attorney-general-raul-torrez-on-his-lawsuit-against-meta]]></link><guid isPermaLink="false">13336670-e582-4023-b2a5-defdfd59fa34</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 11 Aug 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/6a7c8063-3cb9-4b1c-88a3-5362246d5e90/Rau-l-Torrez-converted.mp3" length="21129821" type="audio/mpeg"/><itunes:duration>29:21</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Using AI to Engage People about Conspiracy Beliefs</title><itunes:title>Using AI to Engage People about Conspiracy Beliefs</itunes:title><description><![CDATA[<p>In May, <strong>Justin Hendrix</strong> moderated a discussion with <strong>David Rand</strong>, who is a professor of Management Science and Brain and Cognitive Sciences at MIT, the director of the Applied Cooperation Initiative, and an affiliate of the MIT Institute of Data, Systems, and Society and the Initiative on the Digital Economy. David's work cuts across fields such as cognitive science, behavioral economics, and social psychology, and with his collaborators he's done a substantial amount of work on the psychological underpinnings of belief in misinformation and conspiracy theories.</p><p>David is one of the authors, with <strong>Thomas Costello</strong> and <strong>Gordon Pennycook</strong>, of a paper published this spring titled "<a href="https://osf.io/preprints/psyarxiv/xcwdn" rel="noopener noreferrer" target="_blank">Durably reducing conspiracy beliefs through dialogues with AI.</a>" The paper considers the potential for people to enter into dialogues with LLMs and whether such exchanges can change the minds of conspiracy theory believers. 
According to the study, dialogues with GPT-4 Turbo reduced belief in various conspiracy theories, with effects lasting many months. Even more intriguingly, these dialogues seemed to have a spillover effect, reducing belief in unrelated conspiracies and influencing conspiracy-related behaviors.</p><p>While these findings are certainly promising, the experiment raises a variety of questions. Some are specific to the premise of the experiment, such as how compelling and tailored does the counter-evidence need to be, and how well do the LLMs perform? What happens if and when they make mistakes or hallucinate? And some of the questions are bigger picture: are there ethical implications in using AI in this manner? Can these results be replicated and scaled in real-world applications, such as on social media platforms, and is that a good idea? Is an internet where various AI agents and systems are poking and prodding us and trying to shape or change our beliefs a good thing? This episode contains an edited recording of the discussion, which was <a href="https://www.betaworks.com/event/the-role-of-ai-in-conspiracy-culture-a-conversation-with-justin-hendrix-and-david-g-rand" rel="noopener noreferrer" target="_blank">hosted at Betaworks</a>.</p>]]></description><content:encoded><![CDATA[<p>In May, <strong>Justin Hendrix</strong> moderated a discussion with <strong>David Rand</strong>, who is a professor of Management Science and Brain and Cognitive Sciences at MIT, the director of the Applied Cooperation Initiative, and an affiliate of the MIT Institute of Data, Systems, and Society and the Initiative on the Digital Economy. 
David's work cuts across fields such as cognitive science, behavioral economics, and social psychology, and with his collaborators he's done a substantial amount of work on the psychological underpinnings of belief in misinformation and conspiracy theories.</p><p>David is one of the authors, with <strong>Thomas Costello</strong> and <strong>Gordon Pennycook</strong>, of a paper published this spring titled "<a href="https://osf.io/preprints/psyarxiv/xcwdn" rel="noopener noreferrer" target="_blank">Durably reducing conspiracy beliefs through dialogues with AI.</a>" The paper considers the potential for people to enter into dialogues with LLMs and whether such exchanges can change the minds of conspiracy theory believers. According to the study, dialogues with GPT-4 Turbo reduced belief in various conspiracy theories, with effects lasting many months. Even more intriguingly, these dialogues seemed to have a spillover effect, reducing belief in unrelated conspiracies and influencing conspiracy-related behaviors.</p><p>While these findings are certainly promising, the experiment raises a variety of questions. Some are specific to the premise of the experiment, such as how compelling and tailored does the counter-evidence need to be, and how well do the LLMs perform? What happens if and when they make mistakes or hallucinate? And some of the questions are bigger picture: are there ethical implications in using AI in this manner? Can these results be replicated and scaled in real-world applications, such as on social media platforms, and is that a good idea? Is an internet where various AI agents and systems are poking and prodding us and trying to shape or change our beliefs a good thing? 
This episode contains an edited recording of the discussion, which was <a href="https://www.betaworks.com/event/the-role-of-ai-in-conspiracy-culture-a-conversation-with-justin-hendrix-and-david-g-rand" rel="noopener noreferrer" target="_blank">hosted at Betaworks</a>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/using-ai-to-engage-people-about-conspiracy-beliefs]]></link><guid isPermaLink="false">ee140f3b-38c0-4687-9ffd-ca1fdfb75921</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 04 Aug 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/55437a25-48d5-4a97-878d-70b33ecfcb75/TPP271-converted.mp3" length="25835619" type="audio/mpeg"/><itunes:duration>35:53</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Data Workers, In Their Own Words</title><itunes:title>Data Workers, In Their Own Words</itunes:title><description><![CDATA[<p>The Distributed AI Research Institute, or DAIR—which seeks to conduct community-rooted AI research that is independent from the technology industry—has launched a new project called the <a href="https://data-workers.org/" rel="noopener noreferrer" target="_blank">Data Workers' Inquiry</a> to invite data workers to create their own research and recount their experiences. 
The project is supported by DAIR, the Weizenbaum Institute, and TU Berlin.&nbsp;For this episode, journalist and audio producer <strong>Rebecca Rand</strong> parsed some of the ideas and experiences discussed at a virtual launch event for the inquiry that took place earlier this month.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>The Distributed AI Research Institute, or DAIR—which seeks to conduct community-rooted AI research that is independent from the technology industry—has launched a new project called the <a href="https://data-workers.org/" rel="noopener noreferrer" target="_blank">Data Workers' Inquiry</a> to invite data workers to create their own research and recount their experiences. The project is supported by DAIR, the Weizenbaum Institute, and TU Berlin.&nbsp;For this episode, journalist and audio producer <strong>Rebecca Rand</strong> parsed some of the ideas and experiences discussed at a virtual launch event for the inquiry that took place earlier this month.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/data-workers-in-their-own-words]]></link><guid isPermaLink="false">f6639216-b352-4dfd-9624-c1330d3a0a04</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 28 Jul 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/478509cd-5e42-487a-a2a2-0e0a31c7d786/TPP270-converted.mp3" length="20963051" type="audio/mpeg"/><itunes:duration>29:07</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Silicon Valley Leaders Cast Their Lot 
with Donald Trump</title><itunes:title>Silicon Valley Leaders Cast Their Lot with Donald Trump</itunes:title><description><![CDATA[<p>In the past week, multiple Silicon Valley billionaires announced endorsements of former President and 2024 Republican nominee Donald Trump. To dig a bit deeper into their motivations to support Trump and his new running mate, Ohio Senator and former venture capitalist J.D. Vance, Justin Hendrix invited on three sharp observers of politics and technology, including:</p><ul><li><strong>Henry Farrell</strong>, a professor of international affairs and democracy at Johns Hopkins University and co-author, with Abraham Newman, of the recent book <em>Underground Empire: How America Weaponized the World Economy</em>.</li><li><strong>Elizabeth Spiers</strong>, a writer and digital strategist and contributing writer for the <em>New York Times</em>, and co-host of the Slate Money Podcast.</li><li><strong>Dave Karpf</strong>, an associate professor at George Washington University in the School of Media and Public Affairs.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In the past week, multiple Silicon Valley billionaires announced endorsements of former President and 2024 Republican nominee Donald Trump. To dig a bit deeper into their motivations to support Trump and his new running mate, Ohio Senator and former venture capitalist J.D. 
Vance, Justin Hendrix invited on three sharp observers of politics and technology, including:</p><ul><li><strong>Henry Farrell</strong>, a professor of international affairs and democracy at Johns Hopkins University and co-author, with Abraham Newman, of the recent book <em>Underground Empire: How America Weaponized the World Economy</em>.</li><li><strong>Elizabeth Spiers</strong>, a writer and digital strategist and contributing writer for the <em>New York Times</em>, and co-host of the Slate Money Podcast.</li><li><strong>Dave Karpf</strong>, an associate professor at George Washington University in the School of Media and Public Affairs.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/silicon-valley-leaders-cast-their-lot-with-donald-trump]]></link><guid isPermaLink="false">1edad128-4cf6-4e69-8a10-018af884aa0e</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 21 Jul 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/144c8641-bf88-43a3-a3a9-4c78b43de1e0/TPP269-converted.mp3" length="37964938" type="audio/mpeg"/><itunes:duration>45:12</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Future of Privacy in the Age of AI</title><itunes:title>The Future of Privacy in the Age of AI</itunes:title><description><![CDATA[<p>It goes without saying that privacy and the creation of laws and regulations around it are fundamental to determining how we will live and work with technology, and whether technology operates in service of democratic societies or only in service of governments and corporations. A couple of weeks ago, <strong>Justin Hendrix</strong> had a chance to speak with two leaders from the Future of Privacy Forum (FPF): <strong>Jules Polonetsky</strong>, its CEO, and <strong>Anne J. Flanagan</strong>, the head of its new Center on AI. 
They discussed the recent US Supreme Court decision to overturn the Chevron doctrine and its implications for privacy legislation in the United States, the fierce battle over privacy laws in the US, and potential conflicts between Europe's General Data Protection Regulation (GDPR) and the new AI Act. And, they talked about how the 15-year-old Future of Privacy Forum envisions its role in the age of artificial intelligence.</p>]]></description><content:encoded><![CDATA[<p>It goes without saying that privacy and the creation of laws and regulations around it are fundamental to determining how we will live and work with technology, and whether technology operates in service of democratic societies or only in service of governments and corporations. A couple of weeks ago, <strong>Justin Hendrix</strong> had a chance to speak with two leaders from the Future of Privacy Forum (FPF): <strong>Jules Polonetsky</strong>, its CEO, and <strong>Anne J. Flanagan</strong>, the head of its new Center on AI. They discussed the recent US Supreme Court decision to overturn the Chevron doctrine and its implications for privacy legislation in the United States, the fierce battle over privacy laws in the US, and potential conflicts between Europe's General Data Protection Regulation (GDPR) and the new AI Act. 
And, they talked about how the 15-year-old Future of Privacy Forum envisions its role in the age of artificial intelligence.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-future-of-privacy-in-the-age-of-ai]]></link><guid isPermaLink="false">ebcb0db8-9158-4c28-be50-f8d7b5aac054</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 21 Jul 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/10d99640-bd90-49d6-9289-2413e9e49c11/TPP268-converted.mp3" length="32089328" type="audio/mpeg"/><itunes:duration>44:34</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What Comes After Murthy v Missouri</title><itunes:title>What Comes After Murthy v Missouri</itunes:title><description><![CDATA[<p>On June 26, the US Supreme Court <a href="https://www.techpolicy.press/supreme-court-rejects-the-conspiracy-theory-behind-murthy-v-missouri/" rel="noopener noreferrer" target="_blank">issued</a> a 6-3 ruling in <em>Murthy v Missouri</em>, a case that considered whether the Biden administration violated the First Amendment in its efforts to address COVID-19 mis- and disinformation on social media. 
Tech Policy Press fellow <strong>Dean Jackson</strong>, who studied the case <a href="https://www.techpolicy.press/first-amendment-defenders-and-the-supreme-court-should-reject-the-jawboning-bogeyman/" rel="noopener noreferrer" target="_blank">closely</a>, discussed the outcome and what it means for the future with three experts:</p><ul><li><strong>Olga Belogolova</strong>, director of the Emerging Technologies Initiative at the Johns Hopkins School of Advanced International Studies (SAIS);</li><li><strong>Mayze Teitler, </strong>a legal fellow at the Knight First Amendment Institute; and</li><li><strong>Nina Jankowicz</strong>, co-founder and CEO of the American Sunlight Project.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>On June 26, the US Supreme Court <a href="https://www.techpolicy.press/supreme-court-rejects-the-conspiracy-theory-behind-murthy-v-missouri/" rel="noopener noreferrer" target="_blank">issued</a> a 6-3 ruling in <em>Murthy v Missouri</em>, a case that considered whether the Biden administration violated the First Amendment in its efforts to address COVID-19 mis- and disinformation on social media. 
Tech Policy Press fellow <strong>Dean Jackson</strong>, who studied the case <a href="https://www.techpolicy.press/first-amendment-defenders-and-the-supreme-court-should-reject-the-jawboning-bogeyman/" rel="noopener noreferrer" target="_blank">closely</a>, discussed the outcome and what it means for the future with three experts:</p><ul><li><strong>Olga Belogolova</strong>, director of the Emerging Technologies Initiative at the Johns Hopkins School of Advanced International Studies (SAIS);</li><li><strong>Mayze Teitler, </strong>a legal fellow at the Knight First Amendment Institute; and</li><li><strong>Nina Jankowicz</strong>, co-founder and CEO of the American Sunlight Project.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-comes-after-murthy-v-missouri]]></link><guid isPermaLink="false">88a792fb-bb21-40f6-a29d-b79b9819bb86</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 14 Jul 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e26b096e-f56e-4943-9533-14eb4385a3db/TPP267-converted.mp3" length="48438962" type="audio/mpeg"/><itunes:duration>57:40</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Data Rights in the Age of AI</title><itunes:title>Data Rights in the Age of AI</itunes:title><description><![CDATA[<p>In this episode, <strong>David Carroll</strong>, an associate professor of media design in the MFA Design and Technology graduate program at the School of Art, Media and Technology at Parsons School of Design&nbsp;at The New School, speaks to <strong>Ravi Naik</strong>, legal director at AWO, a consultancy with offices in London, Brussels, and Paris that works on a range of data protection and tech policy issues. 
Their discussion delves into the evolution of data protection from the Cambridge Analytica scandal to current questions provoked by generative AI, with a focus on a GDPR complaint against OpenAI brought by Noyb, the non-profit founded by Austrian activist <strong>Max Schrems</strong>.</p>]]></description><content:encoded><![CDATA[<p>In this episode, <strong>David Carroll</strong>, an associate professor of media design in the MFA Design and Technology graduate program at the School of Art, Media and Technology at Parsons School of Design&nbsp;at The New School, speaks to <strong>Ravi Naik</strong>, legal director at AWO, a consultancy with offices in London, Brussels, and Paris that works on a range of data protection and tech policy issues. Their discussion delves into the evolution of data protection from the Cambridge Analytica scandal to current questions provoked by generative AI, with a focus on a GDPR complaint against OpenAI brought by Noyb, the non-profit founded by Austrian activist <strong>Max Schrems</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/data-rights-in-the-age-of-ai]]></link><guid isPermaLink="false">18564de1-e449-4e44-b32f-ce8cf59dab4f</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 14 Jul 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f97ac094-56c5-4182-b4b3-be8ec3e29f3e/TPP266-converted.mp3" length="30794084" type="audio/mpeg"/><itunes:duration>42:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Considering the Ethics of AI Assistants</title><itunes:title>Considering the Ethics of AI Assistants</itunes:title><description><![CDATA[<p>In April, Google DeepMind <a href="https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/ethics-of-advanced-ai-assistants/the-ethics-of-advanced-ai-assistants-2024-i.pdf" 
rel="noopener noreferrer" target="_blank">published a paper</a> that boasts 57 authors, including experts from a range of disciplines in different parts of Google, including DeepMind, Jigsaw, and Google Research, as well as researchers from academic institutions such as Oxford, University College London, Delft University of Technology, University of Edinburgh, and a think tank at Georgetown, the Center for Security and Emerging Technology. The paper speculates about the ethical and societal risks posed by the types of AI assistants Google and other tech firms want to build, which the authors say are “likely to have a profound impact on our individual and collective lives.” </p><p><strong>Justin Hendrix</strong> had the chance to speak to two of the paper's authors about some of these issues:</p><ul><li><strong>Shannon Vallor,</strong> a professor of AI and data ethics at the University of Edinburgh and director of the Center for Technomoral Futures in the Edinburgh Futures Institute; and</li><li><strong>Iason Gabriel</strong>, a research scientist at Google DeepMind on its ethics research team.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In April, Google DeepMind <a href="https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/ethics-of-advanced-ai-assistants/the-ethics-of-advanced-ai-assistants-2024-i.pdf" rel="noopener noreferrer" target="_blank">published a paper</a> that boasts 57 authors, including experts from a range of disciplines in different parts of Google, including DeepMind, Jigsaw, and Google Research, as well as researchers from academic institutions such as Oxford, University College London, Delft University of Technology, University of Edinburgh, and a think tank at Georgetown, the Center for Security and Emerging Technology. 
The paper speculates about the ethical and societal risks posed by the types of AI assistants Google and other tech firms want to build, which the authors say are “likely to have a profound impact on our individual and collective lives.” </p><p><strong>Justin Hendrix</strong> had the chance to speak to two of the paper's authors about some of these issues:</p><ul><li><strong>Shannon Vallor,</strong> a professor of AI and data ethics at the University of Edinburgh and director of the Center for Technomoral Futures in the Edinburgh Futures Institute; and</li><li><strong>Iason Gabriel</strong>, a research scientist at Google DeepMind on its ethics research team.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/considering-the-ethics-of-ai-assistants]]></link><guid isPermaLink="false">d073bb69-2d69-453a-bcda-78f4cf9f308a</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 07 Jul 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b27f8478-3f1d-4f18-bbc9-ce86d1e0f59d/TPP265-converted.mp3" length="45007108" type="audio/mpeg"/><itunes:duration>53:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Big Tech and the News</title><itunes:title>Big Tech and the News</itunes:title><description><![CDATA[<p>News and journalism organizations and dominant tech companies are in a years-long battle over content, clicks and revenue, and the tech companies are winning. What are policy options that encourage both the sustainability and quality of news content on popular online platforms? 
In this episode, <strong>Rebecca Rand</strong> explores perspectives on the subject, drawing on a conversation hosted by <strong>Justin Hendrix</strong> with experts <strong>Anya Schiffrin</strong> and <strong>Cory Doctorow</strong> at the Knight Foundation's <a href="https://knightfoundation.org/events/informed-and-engaged/informed-conference-2024/#agenda" rel="noopener noreferrer" target="_blank">INFORMED conference</a> earlier this year. </p>]]></description><content:encoded><![CDATA[<p>News and journalism organizations and dominant tech companies are in a years-long battle over content, clicks and revenue, and the tech companies are winning. What are policy options that encourage both the sustainability and quality of news content on popular online platforms? In this episode, <strong>Rebecca Rand</strong> explores perspectives on the subject, drawing on a conversation hosted by <strong>Justin Hendrix</strong> with experts <strong>Anya Schiffrin</strong> and <strong>Cory Doctorow</strong> at the Knight Foundation's <a href="https://knightfoundation.org/events/informed-and-engaged/informed-conference-2024/#agenda" rel="noopener noreferrer" target="_blank">INFORMED conference</a> earlier this year. 
</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/big-tech-and-the-news]]></link><guid isPermaLink="false">75574639-1165-46ad-84ac-6b19e071ec93</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 30 Jun 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/2e2aa585-f545-4bc9-a77d-239762c31e65/TPP264-converted.mp3" length="39876909" type="audio/mpeg"/><itunes:duration>41:32</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Understanding the Digital Silk Road</title><itunes:title>Understanding the Digital Silk Road</itunes:title><description><![CDATA[<p>In October 2023, during the third Belt and Road Forum in Beijing, China's leader Xi Jinping signaled a shift in focus from more grandiose physical infrastructure projects to 'small yet smart' initiatives. This shift underscores the need to understand China's ambitions to reshape global digital governance, moving away from an open and free internet towards a model rooted in government control and mass surveillance. </p><p>The advocacy group Article 19 documents this shift in a recent report titled "<a href="https://www.article19.org/resources/china-the-rise-of-digital-repression-in-the-indo-pacific/" rel="noopener noreferrer" target="_blank">The Digital Silk Road: China and the Rise of Digital Repression in the Indo-Pacific</a>," examining China's influence on digital infrastructure and governance in Cambodia, Malaysia, Nepal, and Thailand. As the Indo-Pacific remains strategically significant for China in deploying next-generation technologies, the report argues that assessing China’s regional partnerships and their implications for digital repression is crucial for understanding its broader ambitions to reshape global digital norms. 
</p><p>To discuss these issues in more depth, <strong>Justin Hendrix</strong> is joined by:</p><ul><li><strong>Michael Caster</strong>, Asia Digital Program Manager at&nbsp;ARTICLE 19; and</li><li><strong>Catherine Tai, </strong>the deputy director for the Asia and the Pacific team at the Center for International Private Enterprise (CIPE).</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In October 2023, during the third Belt and Road Forum in Beijing, China's leader Xi Jinping signaled a shift in focus from more grandiose physical infrastructure projects to 'small yet smart' initiatives. This shift underscores the need to understand China's ambitions to reshape global digital governance, moving away from an open and free internet towards a model rooted in government control and mass surveillance. </p><p>The advocacy group Article 19 documents this shift in a recent report titled "<a href="https://www.article19.org/resources/china-the-rise-of-digital-repression-in-the-indo-pacific/" rel="noopener noreferrer" target="_blank">The Digital Silk Road: China and the Rise of Digital Repression in the Indo-Pacific</a>," examining China's influence on digital infrastructure and governance in Cambodia, Malaysia, Nepal, and Thailand. As the Indo-Pacific remains strategically significant for China in deploying next-generation technologies, the report argues that assessing China’s regional partnerships and their implications for digital repression is crucial for understanding its broader ambitions to reshape global digital norms. 
</p><p>To discuss these issues in more depth, <strong>Justin Hendrix</strong> is joined by:</p><ul><li><strong>Michael Caster</strong>, Asia Digital Program Manager at&nbsp;ARTICLE 19; and</li><li><strong>Catherine Tai, </strong>the deputy director for the Asia and the Pacific team at the Center for International Private Enterprise (CIPE).</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/understanding-the-digital-silk-road]]></link><guid isPermaLink="false">a0a85f83-6adb-43a6-bad6-0ee06e8ad866</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 23 Jun 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/ed291f5c-3dd3-44d7-b3db-cf2e77e557c1/TPP263-converted.mp3" length="35802391" type="audio/mpeg"/><itunes:duration>49:44</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Internet Governance Is At A Crossroads</title><itunes:title>Internet Governance Is At A Crossroads</itunes:title><description><![CDATA[<p>In this episode, we explore a topic that sits at the heart of global digital policy: the contrasting visions of internet governance championed by the United States and its Western allies versus those promoted by China and nations in its orbit. This debate is playing out across various international venues and has profound implications for the future of digital rights, privacy, and the open internet. 
<strong>Justin Hendrix</strong> is joined by experts at the Atlantic Council who study these issues from a variety of angles and across multiple geographies, including:</p><ul><li><strong>Rose Jackson</strong>, the director of the Democracy + Tech Initiative within the&nbsp;Atlantic Council&nbsp;Technology Programs;</li><li><strong>Konstantinos Komaitis, </strong>a nonresident fellow with the Democracy + Tech Initiative of the&nbsp;Atlantic Council's&nbsp;Digital Forensic Research Lab;</li><li><strong>Kenton Thibaut</strong>, a senior resident China fellow at the&nbsp;Atlantic Council's&nbsp;Digital Forensic Research Lab; and</li><li><strong>Iria Puyosa</strong>, a senior research fellow at the Atlantic Council’s Digital Forensic Research Lab.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In this episode, we explore a topic that sits at the heart of global digital policy: the contrasting visions of internet governance championed by the United States and its Western allies versus those promoted by China and nations in its orbit. This debate is playing out across various international venues and has profound implications for the future of digital rights, privacy, and the open internet. 
<strong>Justin Hendrix</strong> is joined by experts at the Atlantic Council who study these issues from a variety of angles and across multiple geographies, including:</p><ul><li><strong>Rose Jackson</strong>, the director of the Democracy + Tech Initiative within the&nbsp;Atlantic Council&nbsp;Technology Programs;</li><li><strong>Konstantinos Komaitis, </strong>a nonresident fellow with the Democracy + Tech Initiative of the&nbsp;Atlantic Council's&nbsp;Digital Forensic Research Lab;</li><li><strong>Kenton Thibaut</strong>, a senior resident China fellow at the&nbsp;Atlantic Council's&nbsp;Digital Forensic Research Lab; and</li><li><strong>Iria Puyosa</strong>, a senior research fellow at the Atlantic Council’s Digital Forensic Research Lab.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/internet-governance-is-at-a-crossroads]]></link><guid isPermaLink="false">9fcd550e-dd9a-44fa-a019-c6c1239f83fe</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 23 Jun 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9e706a60-a408-445a-b4ed-7377b64ad683/TPP262-converted.mp3" length="36539980" type="audio/mpeg"/><itunes:duration>50:45</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How China Regulates Tech</title><itunes:title>How China Regulates Tech</itunes:title><description><![CDATA[<p>Angela Zhang is the author of <a href="https://global.oup.com/academic/product/high-wire-9780197682258" rel="noopener noreferrer" target="_blank"><em>High Wire: How China Regulates Big Tech and Governs Its Economy</em></a>, published this year by Oxford University Press. With a career in the practice of law and in teaching it, Zhang has held roles at King’s College London and at New York University School of Law, and most recently served as Director of&nbsp;the Philip K. H. 
Wong Center for Chinese Law at the University of Hong Kong. She will join the University of Southern California as a Professor of Law in fall 2024.</p>]]></description><content:encoded><![CDATA[<p>Angela Zhang is the author of <a href="https://global.oup.com/academic/product/high-wire-9780197682258" rel="noopener noreferrer" target="_blank"><em>High Wire: How China Regulates Big Tech and Governs Its Economy</em></a>, published this year by Oxford University Press. With a career in the practice of law and in teaching it, Zhang has held roles at King’s College London and at New York University School of Law, and most recently served as Director of&nbsp;the Philip K. H. Wong Center for Chinese Law at the University of Hong Kong. She will join the University of Southern California as a Professor of Law in fall 2024.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-china-regulates-tech]]></link><guid isPermaLink="false">cceb9208-a1c4-4175-8fc3-7f64be460f30</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 23 Jun 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d52025b4-cb62-45a7-b594-58f123623b76/TPP261-converted.mp3" length="28370650" type="audio/mpeg"/><itunes:duration>39:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Demise of CrowdTangle and What It Means for Independent Technology Research</title><itunes:title>The Demise of CrowdTangle and What It Means for Independent Technology Research</itunes:title><description><![CDATA[<p>A topic we have returned to often on this podcast is the dire need for independent technology researchers to have access to platform data. Without it, we cannot understand the extent of the harms and effects of social media on people and on society, and we cannot understand the limits of those harms. 
This makes it difficult to respond in acute moments such as elections, and to understand issues such as the relationship between tech platforms and social cohesion, or mental health, or any number of the other issues policymakers care about.  </p><p>In this episode, <strong>Justin Hendrix </strong>speaks with two people on the front lines of the fight to secure access to data, including <a href="https://foundation.mozilla.org/en/campaigns/open-letter-to-meta-support-crowdtangle-through-2024-and-maintain-crowdtangle-approach/" rel="noopener noreferrer" target="_blank">advocating</a> for Meta to do better in light of the impending deprecation of CrowdTangle, a tool used by researchers to study Meta's products, including Facebook and Instagram. They are:</p><ul><li><strong>Brandi Geurkink</strong>, the executive director of the <a href="https://independenttechresearch.org/" rel="noopener noreferrer" target="_blank">Coalition for Independent Technology Research</a>, and</li><li><strong>Claire Pershan</strong>, EU advocacy lead at the <a href="https://foundation.mozilla.org/en/?gad_source=1" rel="noopener noreferrer" target="_blank">Mozilla Foundation</a>.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>A topic we have returned to often on this podcast is the dire need for independent technology researchers to have access to platform data. Without it, we cannot understand the extent of the harms and effects of social media on people and on society, and we cannot understand the limits of those harms. This makes it difficult to respond in acute moments such as elections, and to understand issues such as the relationship between tech platforms and social cohesion, or mental health, or any number of the other issues policymakers care about.  
</p><p>In this episode, <strong>Justin Hendrix </strong>speaks with two people on the front lines of the fight to secure access to data, including <a href="https://foundation.mozilla.org/en/campaigns/open-letter-to-meta-support-crowdtangle-through-2024-and-maintain-crowdtangle-approach/" rel="noopener noreferrer" target="_blank">advocating</a> for Meta to do better in light of the impending deprecation of CrowdTangle, a tool used by researchers to study Meta's products, including Facebook and Instagram. They are:</p><ul><li><strong>Brandi Geurkink</strong>, the executive director of the <a href="https://independenttechresearch.org/" rel="noopener noreferrer" target="_blank">Coalition for Independent Technology Research</a>, and</li><li><strong>Claire Pershan</strong>, EU advocacy lead at the <a href="https://foundation.mozilla.org/en/?gad_source=1" rel="noopener noreferrer" target="_blank">Mozilla Foundation</a>.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-demise-of-crowdtangle-and-what-it-means-for-independent-technology-research]]></link><guid isPermaLink="false">c956f085-1614-48da-a79c-dc8c6fb01bce</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 21 Jun 2024 00:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/1a47a908-c90f-4cc3-8620-5026c7ccadc6/TPP260-converted.mp3" length="19809812" type="audio/mpeg"/><itunes:duration>27:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Finding the Humanity in an Automated World</title><itunes:title>Finding the Humanity in an Automated World</itunes:title><description><![CDATA[<p><strong>Madhumita Murgia</strong>, AI editor at the <em>Financial Times, </em>is the author of a new book called <a href="https://us.macmillan.com/books/9781250867391/codedependent" rel="noopener noreferrer" target="_blank"><em>Code 
Dependent: Living in the Shadow of AI</em></a>. The book combines reporting and research to provide a look at the role that AI and automated decision-making are playing in reshaping our lives, our politics, and our economies across the world. </p>]]></description><content:encoded><![CDATA[<p><strong>Madhumita Murgia</strong>, AI editor at the <em>Financial Times, </em>is the author of a new book called <a href="https://us.macmillan.com/books/9781250867391/codedependent" rel="noopener noreferrer" target="_blank"><em>Code Dependent: Living in the Shadow of AI</em></a>. The book combines reporting and research to provide a look at the role that AI and automated decision-making are playing in reshaping our lives, our politics, and our economies across the world. </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/finding-the-humanity-in-an-automated-world]]></link><guid isPermaLink="false">02674990-2d64-44bb-be64-f2647ed8c5fa</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 18 Jun 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/a94918d9-ddbd-4583-a29a-3514c6210db5/TPP259mp3-converted.mp3" length="24621560" type="audio/mpeg"/><itunes:duration>34:12</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Conversation with White House Office of Science and Technology Policy Director Arati Prabhakar</title><itunes:title>A Conversation with White House Office of Science and Technology Policy Director Arati Prabhakar</itunes:title><description><![CDATA[<p><strong>Dr. Arati Prabhakar </strong>is the Director of the White House Office of Science and Technology Policy and Science Advisor to <strong>President Joe Biden</strong>. 
This week, she hosted an event in Washington DC called "AI&nbsp;Aspirations: R&amp;D for Public Missions." Speakers included executive branch officials and agency leaders, from the Secretary of Education to the Food and Drug Administration Commissioner, as well as lawmakers such as Senators <strong>Amy Klobuchar</strong> and <strong>Mark Warner</strong>, and Representative <strong>Don Beyer</strong>. Prior to the event, <strong>Justin Hendrix</strong> spoke to Dr. Prabhakar about OSTP's priorities.</p>]]></description><content:encoded><![CDATA[<p><strong>Dr. Arati Prabhakar </strong>is the Director of the White House Office of Science and Technology Policy and Science Advisor to <strong>President Joe Biden</strong>. This week, she hosted an event in Washington DC called "AI&nbsp;Aspirations: R&amp;D for Public Missions." Speakers included executive branch officials and agency leaders, from the Secretary of Education to the Food and Drug Administration Commissioner, as well as lawmakers such as Senators <strong>Amy Klobuchar</strong> and <strong>Mark Warner</strong>, and Representative <strong>Don Beyer</strong>. Prior to the event, <strong>Justin Hendrix</strong> spoke to Dr. 
Prabhakar about OSTP's priorities.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-conversation-with-white-house-office-of-science-and-technology-policy-director-arati-prabhakar]]></link><guid isPermaLink="false">b0f0d2ce-9072-4dae-b26b-6af1f65b0a77</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 16 Jun 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/bfc6e487-e1a0-4d6c-b91a-5328d2cbdb76/TPP251-converted.mp3" length="29639053" type="audio/mpeg"/><itunes:duration>35:17</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>AI and Epistemic Risk: A Coming Crisis?</title><itunes:title>AI and Epistemic Risk: A Coming Crisis?</itunes:title><description><![CDATA[<p>What are the risks to democracy as AI is incorporated more and more into the systems and platforms we use to find and share information and engage in communication? In this episode, <strong>Justin Hendrix</strong> speaks with <strong>Elise Silva</strong>, a postdoctoral associate at the University of Pittsburgh Cyber Institute for Law, Policy, and Security, and <strong>John Wihbey</strong>, an associate professor at Northeastern University in the College of Arts, Media, and Design. 
Silva is the author of a <a href="https://www.techpolicy.press/ai-powered-search-and-the-rise-of-googles-concierge-wikipedia/" rel="noopener noreferrer" target="_blank">recent piece in Tech Policy Press</a> titled "AI-Powered Search and the Rise of Google’s 'Concierge Wikipedia.'” Wihbey is the author of a paper <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4805026" rel="noopener noreferrer" target="_blank">published last month</a> titled "AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge?"</p>]]></description><content:encoded><![CDATA[<p>What are the risks to democracy as AI is incorporated more and more into the systems and platforms we use to find and share information and engage in communication? In this episode, <strong>Justin Hendrix</strong> speaks with <strong>Elise Silva</strong>, a postdoctoral associate at the University of Pittsburgh Cyber Institute for Law, Policy, and Security, and <strong>John Wihbey</strong>, an associate professor at Northeastern University in the College of Arts, Media, and Design.
Silva is the author of a <a href="https://www.techpolicy.press/ai-powered-search-and-the-rise-of-googles-concierge-wikipedia/" rel="noopener noreferrer" target="_blank">recent piece in Tech Policy Press</a> titled "AI-Powered Search and the Rise of Google’s 'Concierge Wikipedia.'” Wihbey is the author of a paper <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4805026" rel="noopener noreferrer" target="_blank">published last month</a> titled "AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge?"</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/ai-and-epistemic-risk-a-coming-crisis]]></link><guid isPermaLink="false">6124e9ca-5c0a-49e1-bf61-daeeaa4567af</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Mon, 10 Jun 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9680bc7d-9521-4bf4-bd9f-acaffc31e70c/TPP250-converted.mp3" length="33061415" type="audio/mpeg"/><itunes:duration>45:55</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What&apos;s Next for Tech Policy in India After the Elections</title><itunes:title>What&apos;s Next for Tech Policy in India After the Elections</itunes:title><description><![CDATA[<p>What role did technology play in India's elections, and what impact will the outcome have on tech policy in the country? Joining <strong>Justin Hendrix</strong> are three experts: <strong>Amber Sinha</strong> and <strong>Vandinika Shukla</strong>, both fellows at Tech Policy Press, and <strong>Prateek Waghre</strong>, the executive director at the Internet Freedom Foundation. Plus, Tech Policy Press program manager <strong>Prithvi Iyer</strong> sums up the election result.
</p>]]></description><content:encoded><![CDATA[<p>What role did technology play in India's elections, and what impact will the outcome have on tech policy in the country? Joining <strong>Justin Hendrix</strong> are three experts: <strong>Amber Sinha</strong> and <strong>Vandinika Shukla</strong>, both fellows at Tech Policy Press, and <strong>Prateek Waghre</strong>, the executive director at the Internet Freedom Foundation. Plus, Tech Policy Press program manager <strong>Prithvi Iyer</strong> sums up the election result. </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/whats-next-for-tech-policy-in-india-after-the-elections]]></link><guid isPermaLink="false">48f5736b-0428-4f4e-a5e1-4fa78200960b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 09 Jun 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/db51e5fc-c75b-4640-8c88-c28769362b69/TPP249-converted.mp3" length="34482669" type="audio/mpeg"/><itunes:duration>47:54</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How Are Political Campaigners in the US Using Generative AI?</title><itunes:title>How Are Political Campaigners in the US Using Generative AI?</itunes:title><description><![CDATA[<p>The guests in this episode are authors of <a href="https://mediaengagement.org/research/generative-ai-elections-and-beyond/" rel="noopener noreferrer" target="_blank">a new study</a> titled <em>Political Machines: Understanding the Role of AI in the US 2024 Elections and Beyond</em>. The study is based on interviews with a variety of individuals who are currently grappling with how generative AI tools and systems will change the way they work.
</p><p>In a series of field interviews, the authors spoke with three vendors of political generative AI tools, a political candidate, a legal expert, a technology expert, an extremism expert, a digital organizer, a trust and safety industry professional, four Republican campaign consultants, and eight Democratic campaign consultants. Joining <strong>Justin Hendrix</strong> to discuss the results are:</p><ul><li><strong>Dean Jackson, </strong>the principal at Public Circle LLC and a reporting fellow with Tech Policy Press;</li><li><strong>Zelly Martin</strong>, a PhD candidate at the University of Texas at Austin and a senior research fellow at the Propaganda Research Lab at the Center for Media Engagement; and </li><li><strong>Inga Trauthig</strong>, head of research at the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>The guests in this episode are authors of <a href="https://mediaengagement.org/research/generative-ai-elections-and-beyond/" rel="noopener noreferrer" target="_blank">a new study</a> titled <em>Political Machines: Understanding the Role of AI in the US 2024 Elections and Beyond</em>. The study is based on interviews with a variety of individuals who are currently grappling with how generative AI tools and systems will change the way they work. </p><p>In a series of field interviews, the authors spoke with three vendors of political generative AI tools, a political candidate, a legal expert, a technology expert, an extremism expert, a digital organizer, a trust and safety industry professional, four Republican campaign consultants, and eight Democratic campaign consultants.
Joining <strong>Justin Hendrix</strong> to discuss the results are:</p><ul><li><strong>Dean Jackson, </strong>the principal at Public Circle LLC and a reporting fellow with Tech Policy Press;</li><li><strong>Zelly Martin</strong>, a PhD candidate at the University of Texas at Austin and a senior research fellow at the Propaganda Research Lab at the Center for Media Engagement; and </li><li><strong>Inga Trauthig</strong>, head of research at the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-are-political-campaigners-in-the-us-using-generative-ai]]></link><guid isPermaLink="false">697ec960-a1b4-40de-80b2-64eb22d0b6d5</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 06 Jun 2024 07:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/478ae304-005d-495d-b5c2-3501bfc78f1d/TPP248-converted.mp3" length="40895002" type="audio/mpeg"/><itunes:duration>48:41</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Role of Shareholder Activism in Tech Accountability</title><itunes:title>The Role of Shareholder Activism in Tech Accountability</itunes:title><description><![CDATA[<p>This episode focuses on the role of shareholder activism in pursuing transparency and accountability from tech firms. 
In a week where board resolutions are up for a vote at Meta and Alphabet related to each company's development and deployment of artificial intelligence, <strong>Justin Hendrix </strong>spoke to five individuals working at the intersection of sustainable investing and tech accountability:</p><ul><li><strong>Michael Connor</strong>, Executive Director of Open MIC</li><li><strong>Jessica Dheere</strong>, Advocacy Director at Open MIC</li><li><strong>Natasha Lamb</strong>, Chief Investment Officer at Arjuna Capital</li><li><strong>Jonas Kron</strong>, Chief Advocacy Officer at Trillium Asset Management</li><li><strong>Christina O'Connell</strong>, Senior Manager for Shareholder Engagement and Investments at Ekō</li></ul><br/>]]></description><content:encoded><![CDATA[<p>This episode focuses on the role of shareholder activism in pursuing transparency and accountability from tech firms. In a week where board resolutions are up for a vote at Meta and Alphabet related to each company's development and deployment of artificial intelligence, <strong>Justin Hendrix </strong>spoke to five individuals working at the intersection of sustainable investing and tech accountability:</p><ul><li><strong>Michael Connor</strong>, Executive Director of Open MIC</li><li><strong>Jessica Dheere</strong>, Advocacy Director at Open MIC</li><li><strong>Natasha Lamb</strong>, Chief Investment Officer at Arjuna Capital</li><li><strong>Jonas Kron</strong>, Chief Advocacy Officer at Trillium Asset Management</li><li><strong>Christina O'Connell</strong>, Senior Manager for Shareholder Engagement and Investments at Ekō</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-role-of-shareholder-activism-in-tech-accountability]]></link><guid isPermaLink="false">d179576d-84ff-49ca-8f37-509a03ebeac6</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 02 Jun 2024 09:00:00 -0400</pubDate><enclosure
url="https://podcasts.captivate.fm/media/8f5b5f29-42c3-4421-9ad5-e9b768d44ae5/TPP247-converted.mp3" length="30120444" type="audio/mpeg"/><itunes:duration>41:50</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Shadow Report on AI Addresses What the US Senate Missed</title><itunes:title>Shadow Report on AI Addresses What the US Senate Missed</itunes:title><description><![CDATA[<p>As we documented in Tech Policy Press, when the US Senate AI working group released its roadmap on policy on May 17th, many outside organizations were underwhelmed at best, and some were fiercely critical of the closed door process that produced it. In the days after the report was announced,  a group of nonprofit and academic organizations put out what they call a "<a href="https://senateshadowreport.com/" rel="noopener noreferrer" target="_blank">shadow report</a>" to the US Senate AI policy roadmap. The shadow report is intended as a complement or counterpoint to the Senate working group's product. It collects a bibliography of research and proposals from civil society and academia and addresses several issues the Senators largely passed over. To learn more, <strong>Justin Hendrix </strong>spoke to some of the report's authors, including:</p><ul><li><strong>Sarah West</strong>, co-executive director of the AI Now Institute</li><li><strong>Nasser Eledroos</strong>, policy lead on technology at Color of Change</li><li><strong>Paramita Shah</strong>, executive director of Just Futures Law</li><li><strong>Cynthia Conti-Cook</strong>, director of research and policy at the Surveillance Resistance Lab</li></ul><br/>]]></description><content:encoded><![CDATA[<p>As we documented in Tech Policy Press, when the US Senate AI working group released its roadmap on policy on May 17th, many outside organizations were underwhelmed at best, and some were fiercely critical of the closed door process that produced it. 
In the days after the report was announced, a group of nonprofit and academic organizations put out what they call a "<a href="https://senateshadowreport.com/" rel="noopener noreferrer" target="_blank">shadow report</a>" to the US Senate AI policy roadmap. The shadow report is intended as a complement or counterpoint to the Senate working group's product. It collects a bibliography of research and proposals from civil society and academia and addresses several issues the Senators largely passed over. To learn more, <strong>Justin Hendrix </strong>spoke to some of the report's authors, including:</p><ul><li><strong>Sarah West</strong>, co-executive director of the AI Now Institute</li><li><strong>Nasser Eledroos</strong>, policy lead on technology at Color of Change</li><li><strong>Paramita Shah</strong>, executive director of Just Futures Law</li><li><strong>Cynthia Conti-Cook</strong>, director of research and policy at the Surveillance Resistance Lab</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/shadow-report-on-ai-addresses-what-the-us-senate-missed]]></link><guid isPermaLink="false">00410281-a85a-441c-b1d9-65fa153de734</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 26 May 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f459e3fc-ccb4-4f86-b20f-0016115ccf15/TPP245-converted.mp3" length="27664094" type="audio/mpeg"/><itunes:duration>38:25</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Perspective on Meta&apos;s Moderation of Palestinian Voices</title><itunes:title>A Perspective on Meta&apos;s Moderation of Palestinian Voices</itunes:title><description><![CDATA[<p>A conversation with <strong>Marwa Fatafta</strong>, who serves as policy and advocacy director for the nonprofit Access Now, which has worked on digital civil rights,
connectivity and censorship issues for the past 15 years. Along with other groups, Access Now has <a href="https://www.accessnow.org/press-release/meta-must-take-immediate-action/" rel="noopener noreferrer" target="_blank">engaged Meta in recent months</a> over what it says is the “systematic censorship of Palestinian voices” amidst the Israel-Hamas war in Gaza.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>A conversation with <strong>Marwa Fatafta</strong>, who serves as policy and advocacy director for the nonprofit Access Now, which has worked on digital civil rights, connectivity and censorship issues for the past 15 years. Along with other groups, Access Now has <a href="https://www.accessnow.org/press-release/meta-must-take-immediate-action/" rel="noopener noreferrer" target="_blank">engaged Meta in recent months</a> over what it says is the “systematic censorship of Palestinian voices” amidst the Israel-Hamas war in Gaza.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-perspective-on-metas-moderation-of-palestinian-voices]]></link><guid isPermaLink="false">2138bc1c-afee-44e4-b2e4-131e2ba05996</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 26 May 2024 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b1305aa3-d93b-42d6-b1ee-500b9c53d583/TPP246-converted.mp3" length="29197583" type="audio/mpeg"/><itunes:duration>40:33</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>AI: Past, Present, and Future with Chris Stokel-Walker</title><itunes:title>AI: Past, Present, and Future with Chris Stokel-Walker</itunes:title><description><![CDATA[<p>One tech journalist whose byline always draws me in is <strong>Chris Stokel-Walker</strong>.&nbsp;He writes for multiple publications including&nbsp;<em>The New York Times</em>, <em>The Washington Post</em>,
<em>The Economist</em>, <em>Wired</em>, <em>Fast Company</em>, and <em>New Scientist</em>. Now, he’s got a new book out: <a href="https://chaptersbookstore.com/products/how-ai-ate-the-world-a-brief-history-of-artificial-intelligence-and-its-long-future" rel="noopener noreferrer" target="_blank"><em>How AI Ate the World: A Brief History of Artificial Intelligence - And Its Long Future</em></a>. Last week, I had the chance to speak with him about it, and about how he covers technology and tech policy generally.</p>]]></description><content:encoded><![CDATA[<p>One tech journalist whose byline always draws me in is <strong>Chris Stokel-Walker</strong>.&nbsp;He writes for multiple publications including&nbsp;<em>The New York Times</em>, <em>The Washington Post</em>, <em>The Economist</em>, <em>Wired</em>, <em>Fast Company</em>, and <em>New Scientist</em>. Now, he’s got a new book out: <a href="https://chaptersbookstore.com/products/how-ai-ate-the-world-a-brief-history-of-artificial-intelligence-and-its-long-future" rel="noopener noreferrer" target="_blank"><em>How AI Ate the World: A Brief History of Artificial Intelligence - And Its Long Future</em></a>. Last week, I had the chance to speak with him about it, and about how he covers technology and tech policy generally.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/ai-past-present-and-future-with-chris-stokel-walker]]></link><guid isPermaLink="false">f26223fe-a53b-43b5-be43-77762aa3c373</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 19 May 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/ec9c9f93-7503-413d-9b19-d8ffad4adc88/TPP244-converted.mp3" length="30594783" type="audio/mpeg"/><itunes:duration>36:25</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Prioritizing Civil Rights in US AI Policy: Claudia Ruiz and Alejandra Montoya-Boyer</title><itunes:title>Prioritizing Civil Rights in US AI Policy: Claudia Ruiz and Alejandra Montoya-Boyer</itunes:title><description><![CDATA[<p>On Wednesday, May 15, 2024, a bipartisan US Senate working group led by Majority Leader Sen. Chuck Schumer (D-NY)&nbsp;released a report&nbsp;titled "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate." Just hours after the report was released, <strong>Justin Hendrix</strong> spoke to two civil rights advocates who are working on AI policy about the good and the bad of the Senate report, and more broadly about how to set AI policy priorities that ensure a brighter future for all:</p><ul><li><strong>Alejandra Montoya-Boyer</strong>, Senior Director at the Center for Civil Rights &amp; Tech at the Leadership Conference on Civil and Human Rights</li><li><strong>Claudia Ruiz</strong>, Senior Civil Rights Policy Analyst at&nbsp;UnidosUS</li></ul><br/>]]></description><content:encoded><![CDATA[<p>On Wednesday, May 15, 2024, a bipartisan US Senate working group led by Majority Leader Sen.
Chuck Schumer (D-NY)&nbsp;released a report&nbsp;titled "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate." Just hours after the report was released, <strong>Justin Hendrix</strong> spoke to two civil rights advocates who are working on AI policy about the good and the bad of the Senate report, and more broadly about how to set AI policy priorities that ensure a brighter future for all:</p><ul><li><strong>Alejandra Montoya-Boyer</strong>,<strong> </strong>Senior Director at the Center for Civil Rights &amp; Tech at the Leadership Conference on Civil and Human Rights</li><li><strong>Claudia Ruiz</strong>, Senior Civil Rights Policy Analyst at&nbsp;UnidosUS</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/prioritizing-civil-rights-in-us-ai-policy-claudia-ruiz-and-alejandra-montoya-boyer]]></link><guid isPermaLink="false">5cd32f28-9c75-41ac-ba5d-3e7a64df593f</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 19 May 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/6552cb6d-e9de-4f9d-a55a-cb63b7505216/TPP243-converted.mp3" length="25501614" type="audio/mpeg"/><itunes:duration>35:25</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What We&apos;re Talking About When We Talk About Rural AI</title><itunes:title>What We&apos;re Talking About When We Talk About Rural AI</itunes:title><description><![CDATA[<p>Last October, <strong>Dr. 
Jasmine McNealy</strong>, an associate professor at the&nbsp;University of Florida, a Senior Fellow in Tech Policy with the&nbsp;Mozilla Foundation, and a Faculty Associate at the&nbsp;Berkman Klein Center for Internet &amp; Society&nbsp;at Harvard University, <a href="https://www.techpolicy.press/we-need-a-policy-agenda-for-rural-ai/" rel="noopener noreferrer" target="_blank">wrote in Tech Policy Press</a> about the need for a policy agenda for "Rural AI." “Rural communities matter,” she wrote. “And that means they should matter when it comes to the development of policies on artificial intelligence.” </p><p>The piece was a preview of sorts to a two-day workshop Dr. McNealy organized at the University of Florida in Gainesville that touched on topics ranging from connectivity to bias and discrimination in algorithmic systems to the connection between AI and natural resources. <strong>Justin Hendrix </strong>attended the workshop, and recently he checked in with Dr. McNealy and three of the other attendees he met there:</p><ul><li><strong>Michaela Henley</strong>, program director and curriculum writer at Black Tech Futures and a senior research fellow representing Black Tech Futures at the Siegel Family Endowment;</li><li><strong>Dr. Dominique Harrison</strong>, founding principal of Equity Innovation Ventures; and</li><li><strong>Dr. Theodora Dryer</strong>, who is director of the Water Justice and Technology Studio, founder of the Critical Carbon Computing Collective, and teaches on technology and environmental justice at New York University.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Last October, <strong>Dr.
Jasmine McNealy</strong>, an associate professor at the&nbsp;University of Florida, a Senior Fellow in Tech Policy with the&nbsp;Mozilla Foundation, and a Faculty Associate at the&nbsp;Berkman Klein Center for Internet &amp; Society&nbsp;at Harvard University, <a href="https://www.techpolicy.press/we-need-a-policy-agenda-for-rural-ai/" rel="noopener noreferrer" target="_blank">wrote in Tech Policy Press</a> about the need for a policy agenda for "Rural AI." “Rural communities matter,” she wrote. “And that means they should matter when it comes to the development of policies on artificial intelligence.” </p><p>The piece was a preview of sorts to a two-day workshop Dr. McNealy organized at the University of Florida in Gainesville that touched on topics ranging from connectivity to bias and discrimination in algorithmic systems to the connection between AI and natural resources. <strong>Justin Hendrix </strong>attended the workshop, and recently he checked in with Dr. McNealy and three of the other attendees he met there:</p><ul><li><strong>Michaela Henley</strong>, program director and curriculum writer at Black Tech Futures and a senior research fellow representing Black Tech Futures at the Siegel Family Endowment;</li><li><strong>Dr. Dominique Harrison</strong>, founding principal of Equity Innovation Ventures; and</li><li><strong>Dr.
Theodora Dryer</strong>, who is director of the Water Justice and Technology Studio, founder of the Critical Carbon Computing Collective, and teaches on technology and environmental justice at New York University.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-were-talking-about-when-we-talk-about-rural-ai]]></link><guid isPermaLink="false">c985f86a-bfd0-47b4-b504-dc4f80fba82d</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 12 May 2024 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b7bf1bc3-29a5-45f6-a5a1-b205b09422b6/TPP242-converted.mp3" length="30475401" type="audio/mpeg"/><itunes:duration>42:20</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Hippocratic Oath for AI? A Conversation with Chinmayi Sharma</title><itunes:title>A Hippocratic Oath for AI? A Conversation with Chinmayi Sharma</itunes:title><description><![CDATA[<p>The Hippocratic oath, named for a Greek physician who lived ~2,500 years ago whom some call the father of modern medicine, is one of the earliest examples of an expression of professional ethics. It is a symbol of a profession that has built in a number of protections for patient interests, with ethical frameworks and requirements that seek to ensure they are maintained.</p><p>Today’s guest is <strong>Chinmayi Sharma</strong>, an Associate Professor at Fordham Law School.
Sharma thinks there should be a similar professional ethics framework in place for the developers of AI systems, and she’s written a <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4759742" rel="noopener noreferrer" target="_blank">substantial paper</a> on the 'why' and the 'how' of her proposal.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>The Hippocratic oath, named for a Greek physician who lived ~2,500 years ago whom some call the father of modern medicine, is one of the earliest examples of an expression of professional ethics. It is a symbol of a profession that has built in a number of protections for patient interests, with ethical frameworks and requirements that seek to ensure they are maintained.</p><p>Today’s guest is <strong>Chinmayi Sharma</strong>, an Associate Professor at Fordham Law School. Sharma thinks there should be a similar professional ethics framework in place for the developers of AI systems, and she’s written a <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4759742" rel="noopener noreferrer" target="_blank">substantial paper</a> on the 'why' and the 'how' of her proposal.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-hippocratic-oath-for-ai-a-conversation-with-chinmayi-sharma]]></link><guid isPermaLink="false">2b1b3aec-1c64-496a-8676-fe7be37f5657</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 11 May 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f8c2f669-6410-401e-9f85-e9d76cd20b67/TPP241-converted.mp3" length="33327356" type="audio/mpeg"/><itunes:duration>46:17</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Don&apos;t Hype Disinfo, Say Disinfo Experts</title><itunes:title>Don&apos;t Hype Disinfo, Say Disinfo Experts</itunes:title><description><![CDATA[<p>One topic we come back to
again and again on this podcast is disinformation. In many episodes, we’ve discussed various phenomena related to this ambiguous term, and we’ve tried to use science to guide the way.</p><p>But the guests in this episode suggest that in the broader political discourse, the term is more than overused. Often, they say, lawmakers and other elites who employ it are crossing the line into hyping the effects of disinformation, which they say only helps propagandists and diminishes trust in society. To learn more, <strong>Justin Hendrix</strong> spoke with <strong>Gavin Wilde</strong>, <strong>Thomas Rid</strong>, and <strong>Olga Belogolova</strong>, who with <strong>Lee Foster</strong> are the authors of an essay in the publication <em>Foreign Affairs</em> titled "<a href="https://www.foreignaffairs.com/russian-federation/dont-hype-disinformation-threat" rel="noopener noreferrer" target="_blank">Don’t Hype the Disinformation: Downplaying the Risk Helps Foreign Propagandists, But So Does Exaggerating It</a>."&nbsp;</p>]]></description><content:encoded><![CDATA[<p>One topic we come back to again and again on this podcast is disinformation. In many episodes, we’ve discussed various phenomena related to this ambiguous term, and we’ve tried to use science to guide the way.</p><p>But the guests in this episode suggest that in the broader political discourse, the term is more than overused. Often, they say, lawmakers and other elites who employ it are crossing the line into hyping the effects of disinformation, which they say only helps propagandists and diminishes trust in society.
To learn more, <strong>Justin Hendrix</strong> spoke with <strong>Gavin Wilde</strong>, <strong>Thomas Rid</strong>, and <strong>Olga Belogolova</strong>, who with <strong>Lee Foster</strong> are the authors of an essay in the publication <em>Foreign Affairs</em> titled "<a href="https://www.foreignaffairs.com/russian-federation/dont-hype-disinformation-threat" rel="noopener noreferrer" target="_blank">Don’t Hype the Disinformation: Downplaying the Risk Helps Foreign Propagandists, But So Does Exaggerating It</a>."&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/dont-hype-disinfo-say-disinfo-experts]]></link><guid isPermaLink="false">1277f757-e176-47f9-ab03-318ae719a816</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 05 May 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/94498f86-5f37-49e4-9ba1-b8c1faa8e92b/TPP240-converted.mp3" length="31383196" type="audio/mpeg"/><itunes:duration>43:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Resisting AI and the Consolidation of Power</title><itunes:title>Resisting AI and the Consolidation of Power</itunes:title><description><![CDATA[<p>In an introduction to a <a href="https://firstmonday.org/ojs/index.php/fm" rel="noopener noreferrer" target="_blank">special issue</a> of the journal <em>First Monday</em> on topics related to AI and&nbsp;power, <strong>Jenna Burrell</strong> and <strong>Jacob Metcalf</strong> argue that "what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science."
The papers in the journal go on to interrogate the epistemic culture of AI safety, the promise of utopia through artificial general intelligence, how to debunk robot rights, and more. </p><p>To learn more about some of the ideas in the special issue, <strong>Justin Hendrix</strong> spoke to Burrell, Metcalf, and two of the other authors of papers included in it: <strong>Shazeda Ahmed</strong> and <strong>Émile P. Torres</strong>.</p>]]></description><content:encoded><![CDATA[<p>In an introduction to a <a href="https://firstmonday.org/ojs/index.php/fm" rel="noopener noreferrer" target="_blank">special issue</a> of the journal <em>First Monday</em> on topics related to AI and&nbsp;power, <strong>Jenna Burrell</strong> and <strong>Jacob Metcalf</strong> argue that "what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science." The papers in the journal go on to interrogate the epistemic culture of AI safety, the promise of utopia through artificial general intelligence, how to debunk robot rights, and more. </p><p>To learn more about some of the ideas in the special issue, <strong>Justin Hendrix</strong> spoke to Burrell, Metcalf, and two of the other authors of papers included in it: <strong>Shazeda Ahmed</strong> and <strong>Émile P. 
Torres</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/resisting-ai-and-the-consolidation-of-power]]></link><guid isPermaLink="false">c1adc7e1-3e46-41cb-871d-d7031be21f5b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 04 May 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f9ea7dc1-6e1f-4112-8720-7325edb1bd75/TPP239-converted.mp3" length="38389868" type="audio/mpeg"/><itunes:duration>53:19</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What&apos;s Next for TikTok, and US Tech Policy</title><itunes:title>What&apos;s Next for TikTok, and US Tech Policy</itunes:title><description><![CDATA[<p>Last week, President Joe Biden signed into law a measure that would force the Chinese firm ByteDance to divest its ownership of TikTok, or risk the app being banned in the US. The measure also included restrictions on the sale of personal data to foreign entities. What are the implications of these moves for US and global tech policy going forward? What will the inevitable legal challenges look like?</p><p>To learn more, <strong>Justin Hendrix</strong> spoke with <strong>Anupam Chander</strong>, law professor at Georgetown and a visiting scholar at the Institute for Rebooting Social Media at Harvard University; <strong>Rose Jackson</strong>, the director of the Democracy and Tech Initiative at the Atlantic Council; and <strong>Justin Sherman</strong>, CEO of Global Cyber Strategies and adjunct professor at Duke University.</p>]]></description><content:encoded><![CDATA[<p>Last week, President Joe Biden signed into law a measure that would force the Chinese firm ByteDance to divest its ownership of TikTok, or risk the app being banned in the US. The measure also included restrictions on the sale of personal data to foreign entities. 
What are the implications of these moves for US and global tech policy going forward? What will the inevitable legal challenges look like?</p><p>To learn more, <strong>Justin Hendrix</strong> spoke with <strong>Anupam Chander</strong>, law professor at Georgetown and a visiting scholar at the Institute for Rebooting Social Media at Harvard University; <strong>Rose Jackson</strong>, the director of the Democracy and Tech Initiative at the Atlantic Council; and <strong>Justin Sherman</strong>, CEO of Global Cyber Strategies and adjunct professor at Duke University.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/whats-next-for-tiktok-and-us-tech-policy]]></link><guid isPermaLink="false">810fd941-b073-4839-a37e-41586512d842</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 28 Apr 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/da4dbd8f-321f-43d4-815f-e93c692dd679/TPP238-converted.mp3" length="35671773" type="audio/mpeg"/><itunes:duration>49:33</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Securing Privacy Rights to Advance Civil Rights</title><itunes:title>Securing Privacy Rights to Advance Civil Rights</itunes:title><description><![CDATA[<p>The US House Subcommittee on Innovation, Data, and Commerce held a hearing: “Legislative Solutions to Protect Kids Online and Ensure Americans’ Data Privacy Rights.” Between the Kids Online Safety Act (KOSA) and the American Privacy Rights Act (APRA), both of which have bipartisan and bicameral support, Congress may be closer to acting on the issues than it has been in recent memory.</p><p>One of the witnesses at the hearing was <strong>David Brody</strong>, who is managing attorney of the Digital Justice Initiative of the Lawyers' Committee for Civil Rights Under Law. 
<strong>Justin Hendrix</strong> caught up with Brody the day after the hearing to talk about the challenges of advancing the American Privacy Rights Act, and why he connects fundamental data privacy rights to so many of the other issues that the Lawyers' Committee cares about, including voting rights and how to counter disinformation that targets communities of color.</p>]]></description><content:encoded><![CDATA[<p>The US House Subcommittee on Innovation, Data, and Commerce held a hearing: “Legislative Solutions to Protect Kids Online and Ensure Americans’ Data Privacy Rights.” Between the Kids Online Safety Act (KOSA) and the American Privacy Rights Act (APRA), both of which have bipartisan and bicameral support, Congress may be closer to acting on the issues than it has been in recent memory.</p><p>One of the witnesses at the hearing was <strong>David Brody</strong>, who is managing attorney of the Digital Justice Initiative of the Lawyers' Committee for Civil Rights Under Law. <strong>Justin Hendrix</strong> caught up with Brody the day after the hearing to talk about the challenges of advancing the American Privacy Rights Act, and why he connects fundamental data privacy rights to so many of the other issues that the Lawyers' Committee cares about, including voting rights and how to counter disinformation that targets communities of color.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/securing-privacy-rights-to-advance-civil-rights]]></link><guid isPermaLink="false">ac2b3a66-cbdb-4981-9201-60727cd1aa2e</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 21 Apr 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/3da7bad7-7202-44a5-8b33-83d4c73f8c0d/TPP237-converted.mp3" length="19908343" 
type="audio/mpeg"/><itunes:duration>27:39</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Societal Impacts of Foundation Models, and Access to Data for Researchers</title><itunes:title>The Societal Impacts of Foundation Models, and Access to Data for Researchers</itunes:title><description><![CDATA[<p>This episode features two conversations. Both relate to efforts to better understand the impact of technology on society. </p><p>In the first, we’ll hear from <strong>Sayash Kapoor</strong>, a PhD candidate at the Department of Computer Science and the Center for Information Technology Policy at Princeton University, and&nbsp;<strong>Rishi Bommasani</strong>, the society lead at the Stanford Center for Research on Foundation Models. They are two of the authors of a recent paper titled <a href="https://arxiv.org/abs/2403.07918" rel="noopener noreferrer" target="_blank">On the Societal Impact of Open Foundation Models</a><strong>. </strong></p><p>And in the second, we’ll hear from Politico Chief Technology Correspondent <strong>Mark Scott</strong> about the US-EU Trade and Technology Council (TTC) meeting, and what he’s learned about the question of access to social media platform data by <a href="https://www.techpolicy.press/survey-new-laws-mandate-access-to-social-media-data-but-obstacles-remain/" rel="noopener noreferrer" target="_blank">interviewing over 50 stakeholders</a>, including regulators, researchers, and platform executives.</p>]]></description><content:encoded><![CDATA[<p>This episode features two conversations. Both relate to efforts to better understand the impact of technology on society. 
</p><p>In the first, we’ll hear from <strong>Sayash Kapoor</strong>, a PhD candidate at the Department of Computer Science and the Center for Information Technology Policy at Princeton University, and&nbsp;<strong>Rishi Bommasani</strong>, the society lead at the Stanford Center for Research on Foundation Models. They are two of the authors of a recent paper titled <a href="https://arxiv.org/abs/2403.07918" rel="noopener noreferrer" target="_blank">On the Societal Impact of Open Foundation Models</a><strong>. </strong></p><p>And in the second, we’ll hear from Politico Chief Technology Correspondent <strong>Mark Scott</strong> about the US-EU Trade and Technology Council (TTC) meeting, and what he’s learned about the question of access to social media platform data by <a href="https://www.techpolicy.press/survey-new-laws-mandate-access-to-social-media-data-but-obstacles-remain/" rel="noopener noreferrer" target="_blank">interviewing over 50 stakeholders</a>, including regulators, researchers, and platform executives.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-societal-impacts-of-foundation-models-and-access-to-data-for-researchers]]></link><guid isPermaLink="false">46254d1a-19f7-4da7-8890-e54d62a1d286</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 14 Apr 2024 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/193392f7-9f41-4c93-96b9-773d779104ce/TPP236-converted.mp3" length="41274464" type="audio/mpeg"/><itunes:duration>57:20</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Elon Musk&apos;s X Loses in Court: Why It Matters for Independent Technology Research</title><itunes:title>Elon Musk&apos;s X Loses in Court: Why It Matters for Independent Technology Research</itunes:title><description><![CDATA[<p>Last week, a federal judge granted a <a 
href="https://www.pacermonitor.com/view/DL7CJ7Y/X_Corp_v_Center_for_Countering_Digital__candce-23-03836__0075.0.pdf?mcid=tGE3TEOA" rel="noopener noreferrer" target="_blank">motion to dismiss and strike</a> a lawsuit brought by X Corp, formerly known as Twitter, against a nonprofit research outfit called The Center for Countering Digital Hate (CCDH).&nbsp;&nbsp;To learn more about why the ruling matters, <strong>Justin Hendrix</strong> spoke to <strong>Alex Abdo</strong>, the litigation director at the Knight First Amendment Institute at Columbia University;  <strong>Imran Ahmed</strong>, the CEO and founder of the Center for Countering Digital Hate; and <strong>Roberta Kaplan</strong>, a partner at the law firm of Kaplan, Hecker, and Fink, which represented CCDH in this matter.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>Last week, a federal judge granted a <a href="https://www.pacermonitor.com/view/DL7CJ7Y/X_Corp_v_Center_for_Countering_Digital__candce-23-03836__0075.0.pdf?mcid=tGE3TEOA" rel="noopener noreferrer" target="_blank">motion to dismiss and strike</a> a lawsuit brought by X Corp, formerly known as Twitter, against a nonprofit research outfit called The Center for Countering Digital Hate (CCDH).&nbsp;&nbsp;To learn more about why the ruling matters, <strong>Justin Hendrix</strong> spoke to <strong>Alex Abdo</strong>, the litigation director at the Knight First Amendment Institute at Columbia University;  <strong>Imran Ahmed</strong>, the CEO and founder of the Center for Countering Digital Hate; and <strong>Roberta Kaplan</strong>, a partner at the law firm of Kaplan, Hecker, and Fink, which represented CCDH in this matter.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/elon-musks-x-loses-in-court-why-it-matters-for-independent-technology-research]]></link><guid isPermaLink="false">4af1f72f-6d15-4d8a-af39-7f35eb29dcaf</guid><itunes:image 
href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 07 Apr 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/66dbf403-9892-4532-a336-c2632a3dd577/TPP235-converted.mp3" length="39454760" type="audio/mpeg"/><itunes:duration>54:48</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Nathan Schneider on Democratic Design for Online Life</title><itunes:title>Nathan Schneider on Democratic Design for Online Life</itunes:title><description><![CDATA[<p>On this show, when we talk about technology and democracy, guests are often talking about the relationship between technology and existing democratic systems. Today's guest wants us to think more expansively about what doing democracy means and the role technology can play in it. <strong>Nathan Schneider</strong>, an assistant professor of media studies at the University of Colorado Boulder, is the author of <a href="https://www.ucpress.edu/book/9780520393943/governable-spaces" rel="noopener noreferrer" target="_blank"><em>Governable Spaces: Democratic Design for Online Life</em></a>.</p>]]></description><content:encoded><![CDATA[<p>On this show, when we talk about technology and democracy, guests are often talking about the relationship between technology and existing democratic systems. Today's guest wants us to think more expansively about what doing democracy means and the role technology can play in it. 
<strong>Nathan Schneider</strong>, an assistant professor of media studies at the University of Colorado Boulder, is the author of <a href="https://www.ucpress.edu/book/9780520393943/governable-spaces" rel="noopener noreferrer" target="_blank"><em>Governable Spaces: Democratic Design for Online Life</em></a>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/nathan-schneider-on-democratic-design-for-online-life]]></link><guid isPermaLink="false">cf103708-56c6-4c34-b8d2-33d1fe2101b8</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 06 Apr 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/642e3627-cdb3-4d9a-9a2c-8584fbf1b014/TPP234-converted.mp3" length="28195226" type="audio/mpeg"/><itunes:duration>39:10</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Reforming Tech Amidst a Global Backlash Against Women&apos;s Rights</title><itunes:title>Reforming Tech Amidst a Global Backlash Against Women&apos;s Rights</itunes:title><description><![CDATA[<p>Last year, researchers at Human Rights Watch wrote about the global backlash against women’s rights. In multiple countries, they say, hard-won progress has been reversed amidst a wave of anti-feminist rhetoric and policies, and it may take decades to reverse the trajectory. 
It’s against that backdrop that today’s guest pursues concerns at the intersection of tech and digital rights with women’s human rights.&nbsp;<strong>Justin Hendrix</strong> speaks with <strong>Lucy Purdon, </strong>the founder of <a href="https://www.courage-everywhere.com/" rel="noopener noreferrer" target="_blank">Courage Everywhere</a> and author of a <a href="https://foundation.mozilla.org/en/blog/fellow-research-reform-exploitative-digital-advertising-to-safeguard-womens-health-and-digital-rights-from-deteriorating/" rel="noopener noreferrer" target="_blank">recent report</a> for the Mozilla Foundation titled "Unfinished Business: Incorporating a Gender Perspective into Digital Advertising Reform in the UK and EU."</p>]]></description><content:encoded><![CDATA[<p>Last year, researchers at Human Rights Watch wrote about the global backlash against women’s rights. In multiple countries, they say, hard-won progress has been reversed amidst a wave of anti-feminist rhetoric and policies, and it may take decades to reverse the trajectory. 
It’s against that backdrop that today’s guest pursues concerns at the intersection of tech and digital rights with women’s human rights.&nbsp;<strong>Justin Hendrix</strong> speaks with <strong>Lucy Purdon, </strong>the founder of <a href="https://www.courage-everywhere.com/" rel="noopener noreferrer" target="_blank">Courage Everywhere</a> and author of a <a href="https://foundation.mozilla.org/en/blog/fellow-research-reform-exploitative-digital-advertising-to-safeguard-womens-health-and-digital-rights-from-deteriorating/" rel="noopener noreferrer" target="_blank">recent report</a> for the Mozilla Foundation titled "Unfinished Business: Incorporating a Gender Perspective into Digital Advertising Reform in the UK and EU."</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/reforming-tech-amidst-a-global-backlash-against-womens-rights]]></link><guid isPermaLink="false">b29c7805-2222-47b1-96de-5d538f9e3dc2</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 31 Mar 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/744c728b-64e5-4f8d-839e-f7f4df633197/TPP234-converted.mp3" length="25212887" type="audio/mpeg"/><itunes:duration>35:01</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Unpacking the Oral Argument in Murthy v Missouri</title><itunes:title>Unpacking the Oral Argument in Murthy v Missouri</itunes:title><description><![CDATA[<p>On Monday, March 18, the US Supreme Court heard oral argument in <a href="https://www.techpolicy.press/backgrounder-supreme-court-oral-argument-murthy-v-missouri/" rel="noopener noreferrer" target="_blank">Murthy v Missouri</a>. In this episode, Tech Policy Press reporting fellow <strong>Dean Jackson</strong> is joined by two experts, St. John's University School of Law associate professor <strong>Kate Klonick</strong> and UNC Center on Technology Policy director <strong>Matt Perault</strong>, to digest the oral argument, what it tells us about which way the Court might go, and what more should be done to create good policy on government interactions with social media platforms when it comes to content moderation and speech.</p>]]></description><content:encoded><![CDATA[<p>On Monday, March 18, the US Supreme Court heard oral argument in <a href="https://www.techpolicy.press/backgrounder-supreme-court-oral-argument-murthy-v-missouri/" rel="noopener noreferrer" target="_blank">Murthy v Missouri</a>. In this episode, Tech Policy Press reporting fellow <strong>Dean Jackson</strong> is joined by two experts, St. John's University School of Law associate professor <strong>Kate Klonick</strong> and UNC Center on Technology Policy director <strong>Matt Perault</strong>, to digest the oral argument, what it tells us about which way the Court might go, and what more should be done to create good policy on government interactions with social media platforms when it comes to content moderation and speech.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/unpacking-the-oral-argument-in-murthy-v-missouri]]></link><guid isPermaLink="false">b8a7af53-e56a-48c4-b74c-44817f6577a3</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 24 Mar 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/4c1947fd-4491-4b8b-b0c0-2672945d53ee/TPP233-converted.mp3" length="43433593" type="audio/mpeg"/><itunes:duration>51:42</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What&apos;s at Stake in Murthy v Missouri?</title><itunes:title>What&apos;s at Stake in Murthy v Missouri?</itunes:title><description><![CDATA[<p>On March 18, the US 
Supreme Court will hear oral argument in Murthy v Missouri, a case that asks the justices to consider whether the government coerced or “significantly encouraged” social media executives to remove disfavored speech in violation of the First Amendment during the COVID-19 pandemic. </p><p>Tech Policy Press reporting fellow <strong>Dean Jackson</strong> speaks to experts including the Knight First Amendment Institute at Columbia University's <strong>Mayze Teitler</strong> and <strong>Jennifer Jones</strong>, and the Tech Justice Law Project's <strong>Meetali Jain</strong>.</p>]]></description><content:encoded><![CDATA[<p>On March 18, the US Supreme Court will hear oral argument in Murthy v Missouri, a case that asks the justices to consider whether the government coerced or “significantly encouraged” social media executives to remove disfavored speech in violation of the First Amendment during the COVID-19 pandemic. </p><p>Tech Policy Press reporting fellow <strong>Dean Jackson</strong> speaks to experts including the Knight First Amendment Institute at Columbia University's <strong>Mayze Teitler</strong> and <strong>Jennifer Jones</strong>, and the Tech Justice Law Project's <strong>Meetali Jain</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/whats-at-stake-in-murthy-v-missouri]]></link><guid isPermaLink="false">5d94aad4-8bdd-433d-8441-b8e8095664f0</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 17 Mar 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f98ec0b8-0f30-4bfe-be93-9daee5c7056c/TPP232-converted.mp3" length="69938691" type="audio/mpeg"/><itunes:duration>01:23:16</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Exploring the Intersection of Information Integrity, Race, and US Elections</title><itunes:title>Exploring the Intersection of 
Information Integrity, Race, and US Elections</itunes:title><description><![CDATA[<p>At INFORMED 2024, a conference hosted by the Knight Foundation in January, one panel focused on the subject of information integrity, race, and US elections. The conversation was compelling, and the panelists agreed to reprise it for this podcast. So today we're turning over the mic to <strong>Spencer Overton</strong>, a Professor of Law at the George Washington University, and the director of the GW Law School's Multiracial Democracy Project.</p><p>He's joined by three other experts, including: </p><ul><li><strong>Brandi Collins-Dexter</strong>, a media and technology fellow at Harvard's Shorenstein Center, a fellow at the National Center on Race and Digital Justice, and the author of the recent book, <em>Black Skinhead: Reflections on Blackness and Our Political Future</em>. Brandi is developing a podcast of her own with MediaJustice that explores 1980s-era media, racialized conspiracism, and politics in Chicago;</li><li><strong>Dr. Danielle Brown</strong>, a social movement and media researcher who holds the 1855 Community and Urban Journalism professorship at Michigan State and is the founding director of the LIFT project, which is focused on mapping, networking, and resourcing trusted messengers to dismantle mis- and disinformation narratives that circulate in Black communities and about Black communities; and</li><li><strong>Kathryn Peters</strong>, who was the inaugural executive director of the University of North Carolina's Center for Information, Technology, and Public Life and was the co-founder of Democracy Works, where she built programs to help more Americans navigate how to vote. 
These days, she's working on a variety of projects to empower voters and address election mis- and disinformation.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>At INFORMED 2024, a conference hosted by the Knight Foundation in January, one panel focused on the subject of information integrity, race, and US elections. The conversation was compelling, and the panelists agreed to reprise it for this podcast. So today we're turning over the mic to <strong>Spencer Overton</strong>, a Professor of Law at the George Washington University, and the director of the GW Law School's Multiracial Democracy Project.</p><p>He's joined by three other experts, including: </p><ul><li><strong>Brandi Collins-Dexter</strong>, a media and technology fellow at Harvard's Shorenstein Center, a fellow at the National Center on Race and Digital Justice, and the author of the recent book, <em>Black Skinhead: Reflections on Blackness and Our Political Future</em>. Brandi is developing a podcast of her own with MediaJustice that explores 1980s-era media, racialized conspiracism, and politics in Chicago;</li><li><strong>Dr. Danielle Brown</strong>, a social movement and media researcher who holds the 1855 Community and Urban Journalism professorship at Michigan State and is the founding director of the LIFT project, which is focused on mapping, networking, and resourcing trusted messengers to dismantle mis- and disinformation narratives that circulate in Black communities and about Black communities; and</li><li><strong>Kathryn Peters</strong>, who was the inaugural executive director of the University of North Carolina's Center for Information, Technology, and Public Life and was the co-founder of Democracy Works, where she built programs to help more Americans navigate how to vote. 
These days, she's working on a variety of projects to empower voters and address election mis- and disinformation.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/exploring-the-intersection-of-information-integrity-race-and-us-elections]]></link><guid isPermaLink="false">9f46d443-04ab-4c64-8b8a-8f25cf18276b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 10 Mar 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c4cbfb9f-d735-44bf-9794-5cfe720f8aa4/TPP231-converted.mp3" length="35530745" type="audio/mpeg"/><itunes:duration>49:21</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>US Supreme Court Considers Florida and Texas Social Media Laws</title><itunes:title>US Supreme Court Considers Florida and Texas Social Media Laws</itunes:title><description><![CDATA[<p>On Monday, Feb. 26, 2024, the US Supreme Court heard oral arguments for&nbsp;<em>Moody v. NetChoice, LLC</em>&nbsp;and&nbsp;<em>NetChoice, LLC v. Paxton</em>. The cases are on similar but distinct state laws in Florida and Texas that would restrict social media companies’ ability to moderate content on their platforms. <strong>Justin Hendrix</strong> speaks with Tech Policy Press staff writer <strong>Gabby Miller </strong>and contributing editor <strong>Ben Lennett</strong> about key highlights from the discussion.</p>]]></description><content:encoded><![CDATA[<p>On Monday, Feb. 26, 2024, the US Supreme Court heard oral arguments for&nbsp;<em>Moody v. NetChoice, LLC</em>&nbsp;and&nbsp;<em>NetChoice, LLC v. Paxton</em>. The cases are on similar but distinct state laws in Florida and Texas that would restrict social media companies’ ability to moderate content on their platforms. 
<strong>Justin Hendrix</strong> speaks with Tech Policy Press staff writer <strong>Gabby Miller </strong>and contributing editor <strong>Ben Lennett</strong> about key highlights from the discussion.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/us-supreme-court-considers-florida-and-texas-social-media-laws]]></link><guid isPermaLink="false">a9fa409e-38a2-4663-b691-a094dc69bd7e</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 03 Mar 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f6677eb7-3ceb-4aaf-a489-63951decefad/TPP230-converted.mp3" length="23804575" type="audio/mpeg"/><itunes:duration>28:20</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What Leverage Remains to Preserve Free Expression in Hong Kong?</title><itunes:title>What Leverage Remains to Preserve Free Expression in Hong Kong?</itunes:title><description><![CDATA[<p>This week, a public consultation period ended for a new Hong Kong national security law, known as Article 23. Article 23 ostensibly targets a wide array of crimes, including treason, theft of state secrets, espionage, sabotage, sedition, and "external interference" from foreign governments. 
The Hong Kong legislature, dominated by pro-Beijing lawmakers, is expected to approve it, even as its critics argue that the law criminalizes basic human rights, such as the freedom of expression, signaling a further erosion of the liberties once enjoyed by the residents of Hong Kong.</p><p>To learn more about what is happening in Hong Kong and what role tech firms and other outside voices could play in preserving freedoms for the people of Hong Kong, <strong>Justin Hendrix</strong> spoke to three experts who are following developments there closely:</p><ul><li><strong>Chung Ching Kwong</strong>, senior analyst at the Inter-Parliamentary Alliance on China,</li><li><strong>Lokman Tsui</strong>, a fellow at the Citizen Lab at the University of Toronto, and</li><li><strong>Michael Caster</strong>, the Asia Digital Program Manager with Article 19.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>This week, a public consultation period ended for a new Hong Kong national security law, known as Article 23. Article 23 ostensibly targets a wide array of crimes, including treason, theft of state secrets, espionage, sabotage, sedition, and "external interference" from foreign governments. 
The Hong Kong legislature, dominated by pro-Beijing lawmakers, is expected to approve it, even as its critics argue that the law criminalizes basic human rights, such as the freedom of expression, signaling a further erosion of the liberties once enjoyed by the residents of Hong Kong.</p><p>To learn more about what is happening in Hong Kong and what role tech firms and other outside voices could play in preserving freedoms for the people of Hong Kong, <strong>Justin Hendrix</strong> spoke to three experts who are following developments there closely:</p><ul><li><strong>Chung Ching Kwong</strong>, senior analyst at the Inter-Parliamentary Alliance on China,</li><li><strong>Lokman Tsui</strong>, a fellow at the Citizen Lab at the University of Toronto, and</li><li><strong>Michael Caster</strong>, the Asia Digital Program Manager with Article 19.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-leverage-remains-to-preserve-free-expression-in-hong-kong]]></link><guid isPermaLink="false">33e7d13a-c66c-41d6-8c9b-97804ae5cac6</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 29 Feb 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d3f9cf8c-870f-4ebd-a5e1-639a8d34dfc3/TPP229-converted.mp3" length="32799975" type="audio/mpeg"/><itunes:duration>45:33</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How to Counter Disinformation Based on Science</title><itunes:title>How to Counter Disinformation Based on Science</itunes:title><description><![CDATA[<p>If you’ve been listening to this podcast for a while, you know we’ve spent countless hours together talking about the problems of mis- and disinformation, and what to do about them. 
And, we’ve tried to focus on the science, on empirical research that can inform efforts to design a better media and technology environment that helps rather than hurts democracy and social cohesion.&nbsp;</p><p>Today’s guests are <strong>Jon Bateman</strong> and <strong>Dean Jackson</strong>. The two have just <a href="https://carnegieendowment.org/2024/01/31/countering-disinformation-effectively-evidence-based-policy-guide-pub-91476" rel="noopener noreferrer" target="_blank">produced a report</a> for the Carnegie Endowment for International Peace that looks at what is known about a variety of interventions against disinformation, and provides evidence that should guide policy in governments and at technology platforms. </p>]]></description><content:encoded><![CDATA[<p>If you’ve been listening to this podcast for a while, you know we’ve spent countless hours together talking about the problems of mis- and disinformation, and what to do about them. And, we’ve tried to focus on the science, on empirical research that can inform efforts to design a better media and technology environment that helps rather than hurts democracy and social cohesion.&nbsp;</p><p>Today’s guests are <strong>Jon Bateman</strong> and <strong>Dean Jackson</strong>. The two have just <a href="https://carnegieendowment.org/2024/01/31/countering-disinformation-effectively-evidence-based-policy-guide-pub-91476" rel="noopener noreferrer" target="_blank">produced a report</a> for the Carnegie Endowment for International Peace that looks at what is known about a variety of interventions against disinformation, and provides evidence that should guide policy in governments and at technology platforms. 
</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-to-counter-disinformation-based-on-science]]></link><guid isPermaLink="false">63058f73-13d2-467e-bf38-afd07a5ac8c9</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 25 Feb 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d0c6090b-de0b-4063-84bd-a41aa8e40a3c/TPP228-converted.mp3" length="40282180" type="audio/mpeg"/><itunes:duration>47:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Evaluating the Role of Media in the January 6 Attack on the US Capitol</title><itunes:title>Evaluating the Role of Media in the January 6 Attack on the US Capitol</itunes:title><description><![CDATA[<p>A <a href="https://global.oup.com/academic/product/media-and-january-6th-9780197758533?lang=en&amp;cc=us#" rel="noopener noreferrer" target="_blank">new book</a> that ships this week from Oxford University Press titled simply <em>Media and January 6th</em> assembles a varied collection of experts that aim to shed light on the interplay between the media and the bloody coup attempt that then President <strong>Donald Trump</strong> led to try to hang on to power after he lost the 2020 election to <strong>Joe Biden</strong>. It delves into the reasons behind the occurrence of January 6th and highlights the pivotal role of media in this context. </p><p>The book is structured to explore three essential inquiries: What is our interpretation of January 6, 2021? How should research evolve post-January 6, 2021? And what measures can be taken to avert a similar incident in the future? <strong>Justin Hendrix </strong>spoke to three of the book's four editors: <strong>Khadijah Costley White</strong>, <strong>Daniel Kreiss</strong>, and <strong>Shannon C. 
McGregor</strong>.</p>]]></description><content:encoded><![CDATA[<p>A <a href="https://global.oup.com/academic/product/media-and-january-6th-9780197758533?lang=en&amp;cc=us#" rel="noopener noreferrer" target="_blank">new book</a> that ships this week from Oxford University Press titled simply <em>Media and January 6th</em> assembles a varied collection of experts that aim to shed light on the interplay between the media and the bloody coup attempt that then President <strong>Donald Trump</strong> led to try to hang on to power after he lost the 2020 election to <strong>Joe Biden</strong>. It delves into the reasons behind the occurrence of January 6th and highlights the pivotal role of media in this context. </p><p>The book is structured to explore three essential inquiries: What is our interpretation of January 6, 2021? How should research evolve post-January 6, 2021? And what measures can be taken to avert a similar incident in the future? <strong>Justin Hendrix </strong>spoke to three of the book's four editors: <strong>Khadijah Costley White</strong>, <strong>Daniel Kreiss</strong>, and <strong>Shannon C. 
McGregor</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/evaluating-the-role-of-media-in-the-january-6-attack-on-the-us-capitol]]></link><guid isPermaLink="false">57ac196d-22cf-47af-a601-170541895e62</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 25 Feb 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/4d14fddd-79c7-4611-9fff-f06ed31fc5cf/TPP227-converted.mp3" length="34132948" type="audio/mpeg"/><itunes:duration>47:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Pakistan and the Intersection of Tech &amp; Elections</title><itunes:title>Pakistan and the Intersection of Tech &amp; Elections</itunes:title><description><![CDATA[<p>It's become trite to say there are a lot of elections taking place this year. But of course, technology is playing a role in them all. </p><p>At Tech Policy Press, we're lucky to have a group of seven fellows this year who are based on four continents. They are paying close attention to elections in the nations they know best. To learn more about the recent election in Pakistan, its chaotic aftermath, and the unique role of technology and events there, I spoke to one of our fellows last week: <strong>Ramsha Jahangir</strong>, a Pakistani journalist currently based in the Netherlands.</p>]]></description><content:encoded><![CDATA[<p>It's become trite to say there are a lot of elections taking place this year. But of course, technology is playing a role in them all. </p><p>At Tech Policy Press, we're lucky to have a group of seven fellows this year who are based on four continents. They are paying close attention to elections in the nations they know best. 
To learn more about the recent election in Pakistan, its chaotic aftermath, and the unique role of technology and events there, I spoke to one of our fellows last week: <strong>Ramsha Jahangir</strong>, a Pakistani journalist currently based in the Netherlands.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/pakistan-and-the-intersection-of-tech-elections]]></link><guid isPermaLink="false">e0d33efb-fde5-404d-9f4e-384214f3b55d</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 24 Feb 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/2206c6de-d2b3-426c-8199-fb0e37d2662c/TPP226-converted.mp3" length="12870958" type="audio/mpeg"/><itunes:duration>17:53</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Ranking Content On Signals Other Than User Engagement</title><itunes:title>Ranking Content On Signals Other Than User Engagement</itunes:title><description><![CDATA[<p>Today's guests are <strong>Jonathan Stray</strong>, a senior scientist at the Center for Human Compatible AI at the University of California Berkeley, and <strong>Ravi Iyer</strong>, managing director of the Neely Center at the University of Southern California's Marshall School. Both are keenly interested in what happens when platforms optimize for variables other than engagement, and whether they can in fact optimize for prosocial outcomes. With several coauthors, they recently <a href="https://arxiv.org/pdf/2402.06831.pdf" rel="noopener noreferrer" target="_blank">published a paper</a> based in large part on discussion at an 8-hour working group session featuring representatives from seven major content-ranking platforms and former employees of another major platform, as well as university and independent researchers. The authors say "there is much unrealized potential in using non-engagement signals. 
These signals can improve outcomes both for platforms and for society as a whole."</p>]]></description><content:encoded><![CDATA[<p>Today's guests are <strong>Jonathan Stray</strong>, a senior scientist at the Center for Human Compatible AI at the University of California Berkeley, and <strong>Ravi Iyer</strong>, managing director of the Neely Center at the University of Southern California's Marshall School. Both are keenly interested in what happens when platforms optimize for variables other than engagement, and whether they can in fact optimize for prosocial outcomes. With several coauthors, they recently <a href="https://arxiv.org/pdf/2402.06831.pdf" rel="noopener noreferrer" target="_blank">published a paper</a> based in large part on discussion at an 8-hour working group session featuring representatives from seven major content-ranking platforms and former employees of another major platform, as well as university and independent researchers. The authors say "there is much unrealized potential in using non-engagement signals. 
These signals can improve outcomes both for platforms and for society as a whole."</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/ranking-content-on-signals-other-than-user-engagement]]></link><guid isPermaLink="false">bd7e4d00-e91d-488d-82cc-01e750cfa419</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 18 Feb 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/cdc30a1c-e960-4443-8ce8-b59729581d2b/TPP225-converted.mp3" length="29085113" type="audio/mpeg"/><itunes:duration>34:37</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>FTC Commissioner Alvaro Bedoya on Algorithmic Fairness, Voice Cloning, and the Future</title><itunes:title>FTC Commissioner Alvaro Bedoya on Algorithmic Fairness, Voice Cloning, and the Future</itunes:title><description><![CDATA[<p>In May 2022, <strong>Alvaro Bedoya</strong> was sworn in as a Commissioner of the US Federal Trade Commission following his nomination by President Joe Biden and confirmation in the Senate. In this conversation, Commissioner Bedoya discusses a <a href="https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without" rel="noopener noreferrer" target="_blank">recent settlement</a> over the commercial use of facial recognition technologies and what it should signal to other businesses, voice cloning and the <a href="https://www.ftc.gov/news-events/news/press-releases/2024/02/ftc-proposes-new-protections-combat-ai-impersonation-individuals" rel="noopener noreferrer" target="_blank">growing problem</a> of impersonations utilizing AI, and how he thinks about the future.  
</p>]]></description><content:encoded><![CDATA[<p>In May 2022, <strong>Alvaro Bedoya</strong> was sworn in as a Commissioner of the US Federal Trade Commission following his nomination by President Joe Biden and confirmation in the Senate. In this conversation, Commissioner Bedoya discusses a <a href="https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without" rel="noopener noreferrer" target="_blank">recent settlement</a> over the commercial use of facial recognition technologies and what it should signal to other businesses, voice cloning and the <a href="https://www.ftc.gov/news-events/news/press-releases/2024/02/ftc-proposes-new-protections-combat-ai-impersonation-individuals" rel="noopener noreferrer" target="_blank">growing problem</a> of impersonations utilizing AI, and how he thinks about the future.  </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/ftc-commissioner-alvaro-bedoya-on-algorithmic-fairness-voice-cloning-and-the-future]]></link><guid isPermaLink="false">4af9921a-41ed-4a69-80ce-92bf891f6b3e</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 18 Feb 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/0eaae577-9772-46e0-bd86-338f77f7818b/TPP224-converted.mp3" length="24021612" type="audio/mpeg"/><itunes:duration>33:22</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Imagining AI Countergovernance</title><itunes:title>Imagining AI Countergovernance</itunes:title><description><![CDATA[<p>Multiple past episodes of this podcast have focused on the topic of AI governance. 
But today’s guest, <strong>Blair Attard-Frost</strong>, has <a href="https://www.midnightsunmag.ca/ai-countergovernance/" rel="noopener noreferrer" target="_blank">put forward</a> a set of ideas they term "AI countergovernance." These are alternative mechanisms for community-led and worker-led governance that serve as means for resisting or contesting power, particularly as it manifests in AI systems and the companies and governments that advance them.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>Multiple past episodes of this podcast have focused on the topic of AI governance. But today’s guest, <strong>Blair Attard-Frost</strong>, has <a href="https://www.midnightsunmag.ca/ai-countergovernance/" rel="noopener noreferrer" target="_blank">put forward</a> a set of ideas they term "AI countergovernance." These are alternative mechanisms for community-led and worker-led governance that serve as means for resisting or contesting power, particularly as it manifests in AI systems and the companies and governments that advance them.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/imagining-ai-countergovernance]]></link><guid isPermaLink="false">6ab1dfb7-16af-44cf-80a9-ae0b351bce95</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 11 Feb 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/428c13c5-d535-49be-b2e5-79080575053c/TPP223-converted.mp3" length="27701802" type="audio/mpeg"/><itunes:duration>38:28</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Tech CEOs Face the US Senate on Child Safety</title><itunes:title>Tech CEOs Face the US Senate on Child Safety</itunes:title><description><![CDATA[<p>On Wednesday, January 31st, the US Senate Judiciary Committee hosted a hearing titled "Big Tech and the Online Child Sexual Exploitation Crisis." 
The CEOs of Meta, TikTok, X, Discord and Snap were called to the Capitol to answer questions from lawmakers on their efforts to protect children from sexual exploitation, drug trafficking, dangerous content, and other online harms. Gabby Miller reported on the hearing from New York, and Haajrah Gilani reported from Washington D.C.</p>]]></description><content:encoded><![CDATA[<p>On Wednesday, January 31st, the US Senate Judiciary Committee hosted a hearing titled "Big Tech and the Online Child Sexual Exploitation Crisis." The CEOs of Meta, TikTok, X, Discord and Snap were called to the Capitol to answer questions from lawmakers on their efforts to protect children from sexual exploitation, drug trafficking, dangerous content, and other online harms. Gabby Miller reported on the hearing from New York, and Haajrah Gilani reported from Washington D.C.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/tech-ceos-face-the-us-senate-on-child-safety]]></link><guid isPermaLink="false">a870825e-5e62-4433-88ce-5f7f7f4b266f</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 04 Feb 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/69d090a9-d22a-4fde-a267-1e9cf9577587/TPP222-converted.mp3" length="15480582" type="audio/mpeg"/><itunes:duration>21:30</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How to Assess AI Governance Tools</title><itunes:title>How to Assess AI Governance Tools</itunes:title><description><![CDATA[<p>Last year, the <strong>World Privacy Forum</strong>, a nonprofit research organization, conducted an <a href="https://www.worldprivacyforum.org/wp-content/uploads/2023/12/WPF_Risky_Analysis_December_2023_fs.pdf" rel="noopener noreferrer" target="_blank">international review of AI governance tools</a>. 
The organization analyzed various documents, frameworks, and technical material related to AI governance from around the world. Importantly, the review found that a significant percentage of the AI governance tools include faulty AI fixes that could ultimately undermine the fairness and explainability of AI systems.&nbsp;</p><p><strong>Justin Hendrix</strong> talked to <strong>Kate Kaye</strong>, one of the report’s authors, about a range of issues it covers, from the involvement of large tech companies in shaping AI governance tools, to the role of organizations like the OECD in developing AI governance tools, to the need to consult people and communities that are often overlooked when making decisions about how to think about AI.</p>]]></description><content:encoded><![CDATA[<p>Last year, the <strong>World Privacy Forum</strong>, a nonprofit research organization, conducted an <a href="https://www.worldprivacyforum.org/wp-content/uploads/2023/12/WPF_Risky_Analysis_December_2023_fs.pdf" rel="noopener noreferrer" target="_blank">international review of AI governance tools</a>. The organization analyzed various documents, frameworks, and technical material related to AI governance from around the world. 
Importantly, the review found that a significant percentage of the AI governance tools include faulty AI fixes that could ultimately undermine the fairness and explainability of AI systems.&nbsp;</p><p><strong>Justin Hendrix</strong> talked to <strong>Kate Kaye</strong>, one of the report’s authors, about a range of issues it covers, from the involvement of large tech companies in shaping AI governance tools, to the role of organizations like the OECD in developing AI governance tools, to the need to consult people and communities that are often overlooked when making decisions about how to think about AI.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-to-assess-ai-governance-tools]]></link><guid isPermaLink="false">58988eba-9f5f-41b2-ace7-75ed83a37267</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 28 Jan 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b721420c-c174-46b7-b760-0b3ad850cc78/TPP221-converted.mp3" length="26137907" type="audio/mpeg"/><itunes:duration>36:18</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How to Defend Independent Technology Research from Corporate and Political Opposition</title><itunes:title>How to Defend Independent Technology Research from Corporate and Political Opposition</itunes:title><description><![CDATA[<p>In October 2022, a group of researchers published <a href="https://independenttechresearch.org/manifesto-the-coalition-for-independent-technology-research/" rel="noopener noreferrer" target="_blank">a manifesto</a> establishing a Coalition for Independent Technology Research. </p><p>“Society needs trustworthy, independent research to relieve the harms of digital technologies and advance the common good,” they wrote. 
“Research can help us understand ourselves more clearly, identify problems, hold power accountable, imagine the world we want, and test ideas for change. In a democracy, this knowledge comes from academics, journalists, civil society, and community scientists, among others. Because independent research on digital technologies is a powerful force for the common good, it also faces powerful opposition.”</p><p>In the months since that document was published, that opposition has grown. From investigations in Congress to lawsuits aimed at specific researchers, there is a backlash particularly against those who study communications and media, especially where the subjects of that research are often those most interested in advancing false and misleading claims about issues including elections and public health.&nbsp;</p><p><strong>Justin Hendrix, </strong>who is a member of the coalition, caught up with <strong>Brandi Geurkink</strong>, who <a href="https://independenttechresearch.org/welcoming-brandi-geurkink-executive-director/" rel="noopener noreferrer" target="_blank">was hired</a> as the coalition's first Executive Director in December 2023, to discuss its priorities.</p>]]></description><content:encoded><![CDATA[<p>In October 2022, a group of researchers published <a href="https://independenttechresearch.org/manifesto-the-coalition-for-independent-technology-research/" rel="noopener noreferrer" target="_blank">a manifesto</a> establishing a Coalition for Independent Technology Research. </p><p>“Society needs trustworthy, independent research to relieve the harms of digital technologies and advance the common good,” they wrote. “Research can help us understand ourselves more clearly, identify problems, hold power accountable, imagine the world we want, and test ideas for change. In a democracy, this knowledge comes from academics, journalists, civil society, and community scientists, among others. 
Because independent research on digital technologies is a powerful force for the common good, it also faces powerful opposition.”</p><p>In the months since that document was published, that opposition has grown. From investigations in Congress to lawsuits aimed at specific researchers, there is a backlash particularly against those who study communications and media, especially where the subjects of that research are often those most interested in advancing false and misleading claims about issues including elections and public health.&nbsp;</p><p><strong>Justin Hendrix, </strong>who is a member of the coalition, caught up with <strong>Brandi Geurkink</strong>, who <a href="https://independenttechresearch.org/welcoming-brandi-geurkink-executive-director/" rel="noopener noreferrer" target="_blank">was hired</a> as the coalition's first Executive Director in December 2023, to discuss its priorities.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-to-defend-independent-technology-research-from-corporate-and-political-opposition]]></link><guid isPermaLink="false">2c5e7c12-5f9d-4914-a00c-8df490c1de39</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 21 Jan 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/78424c60-d2dd-44f4-bbed-f1642e796453/TPP220-converted.mp3" length="29867304" type="audio/mpeg"/><itunes:duration>41:29</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Questioning OpenAI&apos;s Nonprofit Status</title><itunes:title>Questioning OpenAI&apos;s Nonprofit Status</itunes:title><description><![CDATA[<p>Today’s guest is <strong>Robert Weissman</strong>, president of the nonprofit consumer advocacy organization Public Citizen. 
He is the author of <a href="https://www.citizen.org/article/letter-to-california-attorney-general-on-openais-nonprofit-status/" rel="noopener noreferrer" target="_blank">a letter</a> addressed to the California Attorney General that raises significant concerns about OpenAI’s 501(c)(3) nonprofit status. The letter questions whether OpenAI has deviated from its nonprofit purposes, alleging that it may be acting under the control of its for-profit subsidiary, potentially violating its nonprofit mission. The letter raises broader issues about the future of AI and how it will be governed.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>Today’s guest is <strong>Robert Weissman</strong>, president of the nonprofit consumer advocacy organization Public Citizen. He is the author of <a href="https://www.citizen.org/article/letter-to-california-attorney-general-on-openais-nonprofit-status/" rel="noopener noreferrer" target="_blank">a letter</a> addressed to the California Attorney General that raises significant concerns about OpenAI’s 501(c)(3) nonprofit status. The letter questions whether OpenAI has deviated from its nonprofit purposes, alleging that it may be acting under the control of its for-profit subsidiary, potentially violating its nonprofit mission. 
The letter raises broader issues about the future of AI and how it will be governed.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/questioning-openais-nonprofit-status]]></link><guid isPermaLink="false">e93da0bf-b1ea-485c-97e8-ef2509ea609d</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 14 Jan 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f21661d7-d48a-4e9d-af69-afbfbc0d3a2c/TPP219-converted.mp3" length="14279050" type="audio/mpeg"/><itunes:duration>19:50</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Evaluating Social Media&apos;s Role in the Israel-Hamas War</title><itunes:title>Evaluating Social Media&apos;s Role in the Israel-Hamas War</itunes:title><description><![CDATA[<p>Today is the three month anniversary of the vicious Hamas attack and abduction of hostages that ignited the current war in Gaza. Just before the New Year, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) published a report titled “<a href="https://view.atlanticcouncil.org/social-media-gaza/p/1" rel="noopener noreferrer" target="_blank">Distortion by Design: How Social Media Platforms Shaped Our Initial Understanding of the Israel-Hamas Conflict.</a>” This week, <strong>Justin Hendrix</strong> spoke to the report’s authors—<strong> Emerson T. 
Brooking</strong>, <strong>Layla Mashkoor</strong>, and <strong>Jacqueline Malaret</strong>— about their observations of the role that platforms operated by X, Meta, Telegram, and TikTok have played in shaping perceptions of the initial attack and the brutal ongoing Israeli siege of Gaza, which now continues into its fourth month.&nbsp;</p><p>“Evident across all platforms,” they write, “is the intertwined nature of content moderation and political expression—and the critical role that social media will play in preserving the historical record.”</p>]]></description><content:encoded><![CDATA[<p>Today is the three month anniversary of the vicious Hamas attack and abduction of hostages that ignited the current war in Gaza. Just before the New Year, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) published a report titled “<a href="https://view.atlanticcouncil.org/social-media-gaza/p/1" rel="noopener noreferrer" target="_blank">Distortion by Design: How Social Media Platforms Shaped Our Initial Understanding of the Israel-Hamas Conflict.</a>” This week, <strong>Justin Hendrix</strong> spoke to the report’s authors—<strong> Emerson T. 
Brooking</strong>, <strong>Layla Mashkoor</strong>, and <strong>Jacqueline Malaret</strong>— about their observations of the role that platforms operated by X, Meta, Telegram, and TikTok have played in shaping perceptions of the initial attack and the brutal ongoing Israeli siege of Gaza, which now continues into its fourth month.&nbsp;</p><p>“Evident across all platforms,” they write, “is the intertwined nature of content moderation and political expression—and the critical role that social media will play in preserving the historical record.”</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/evaluating-social-medias-role-in-the-israel-hamas-war]]></link><guid isPermaLink="false">59fdf833-539c-480a-ad56-45a940e8794a</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 07 Jan 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/95689878-eab2-48d4-ac8a-b621bfec0b45/TPP218-converted.mp3" length="33264968" type="audio/mpeg"/><itunes:duration>46:12</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Exposing the Rotten Reality of AI Training Data</title><itunes:title>Exposing the Rotten Reality of AI Training Data</itunes:title><description><![CDATA[<p>In a <a href="https://stacks.stanford.edu/file/druid:kh752sm9123/ml_training_data_csam_report-2023-12-23.pdf" rel="noopener noreferrer" target="_blank">report</a> released December 20, 2023, the Stanford Internet Observatory said it had detected more than 1,000 instances of verified child sexual abuse imagery in a significant dataset utilized for training generative AI systems such as Stable Diffusion 1.5. 
</p><p>This troubling discovery builds on prior research into the “<a href="https://arxiv.org/pdf/2110.01963.pdf" rel="noopener noreferrer" target="_blank">dubious curation</a>” of large-scale datasets used to train AI systems, and raises concerns that such content may have contributed to the capability of AI image generators in producing realistic counterfeit images of child sexual exploitation, in addition to other harmful and biased material.&nbsp;</p><p><strong>Justin Hendrix</strong> spoke to the report’s author, Stanford Internet Observatory Chief Technologist <strong>David Thiel</strong>.</p>]]></description><content:encoded><![CDATA[<p>In a <a href="https://stacks.stanford.edu/file/druid:kh752sm9123/ml_training_data_csam_report-2023-12-23.pdf" rel="noopener noreferrer" target="_blank">report</a> released December 20, 2023, the Stanford Internet Observatory said it had detected more than 1,000 instances of verified child sexual abuse imagery in a significant dataset utilized for training generative AI systems such as Stable Diffusion 1.5. 
</p><p>This troubling discovery builds on prior research into the “<a href="https://arxiv.org/pdf/2110.01963.pdf" rel="noopener noreferrer" target="_blank">dubious curation</a>” of large-scale datasets used to train AI systems, and raises concerns that such content may have contributed to the capability of AI image generators in producing realistic counterfeit images of child sexual exploitation, in addition to other harmful and biased material.&nbsp;</p><p><strong>Justin Hendrix</strong> spoke to the report’s author, Stanford Internet Observatory Chief Technologist <strong>David Thiel</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/exposing-the-rotten-reality-of-ai-training-data]]></link><guid isPermaLink="false">bd40eb74-0337-43db-bf1c-409dfd8ab916</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 31 Dec 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/4f3739f4-1b1b-48de-9592-71fb725d1dc9/TPP217-converted.mp3" length="28998015" type="audio/mpeg"/><itunes:duration>40:16</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>An FDA for AI?</title><itunes:title>An FDA for AI?</itunes:title><description><![CDATA[<p>If you’ve listened to some of the dialogue in hearings on Capitol Hill about how to regulate AI, you’ve heard various folks suggest the need for a regulatory agency to govern, in particular, general purpose AI systems that can be deployed across a wide range of applications. One existing agency is often mentioned as a potential model: the Food and Drug Administration (FDA). But how would applying the FDA model work in practice? Where does the model break down when it comes to AI and related technologies, which are different in many ways from the types of things the FDA looks at day to day? 
To answer these questions, <strong>Justin Hendrix</strong> spoke to <strong>Merlin Stein</strong> and <strong>Connor Dunlop</strong>, the authors of a <a href="https://www.adalovelaceinstitute.org/report/safe-before-sale/" rel="noopener noreferrer" target="_blank">new report</a> published by the Ada Lovelace Institute titled <em>Safe before sale: Learnings from the FDA’s model of life sciences oversight for foundation models.</em></p>]]></description><content:encoded><![CDATA[<p>If you’ve listened to some of the dialogue in hearings on Capitol Hill about how to regulate AI, you’ve heard various folks suggest the need for a regulatory agency to govern, in particular, general purpose AI systems that can be deployed across a wide range of applications. One existing agency is often mentioned as a potential model: the Food and Drug Administration (FDA). But how would applying the FDA model work in practice? Where does the model break down when it comes to AI and related technologies, which are different in many ways from the types of things the FDA looks at day to day? 
To answer these questions, <strong>Justin Hendrix</strong> spoke to <strong>Merlin Stein</strong> and <strong>Connor Dunlop</strong>, the authors of a <a href="https://www.adalovelaceinstitute.org/report/safe-before-sale/" rel="noopener noreferrer" target="_blank">new report</a> published by the Ada Lovelace Institute titled <em>Safe before sale: Learnings from the FDA’s model of life sciences oversight for foundation models.</em></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/an-fda-for-ai]]></link><guid isPermaLink="false">165b6ce1-de4a-44d2-bce8-6f9a55323514</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 24 Dec 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b1348f35-39fb-449d-8a79-ea7a7a1c494a/TPP216-converted.mp3" length="24754862" type="audio/mpeg"/><itunes:duration>34:23</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What Are We Building, and Why?</title><itunes:title>What Are We Building, and Why?</itunes:title><description><![CDATA[<p>At the end of this year in which the hype around artificial intelligence seemed to increase in volume with each passing week, it’s worth stepping back and asking whether we need to slow down and put just as much effort into questions about what it is we are building and why. </p><p>In today’s episode, we’re going to hear from two researchers at two different points in their careers who spend their days grappling with questions about how we can develop systems and modes of thinking about systems that lead to more just and equitable outcomes, and that preserve our humanity and the planet:</p><ul><li><strong>Dr. Batya Friedman</strong>&nbsp;is a Professor in the Information School and holds adjunct appointments in the Paul G. 
Allen School of Computer Science &amp; Engineering, the School of Law, and the Department of Human Centered Design and Engineering at the University of Washington, where she co-directs the Value Sensitive Design Lab and the UW Tech Policy Lab.</li><li><strong>Dr. Aylin Caliskan </strong>is an Assistant Professor in the Information School at the Paul G. Allen School of Computer Science &amp; Engineering, an affiliate of the UW Tech Policy Lab, and part of the Responsible AI Systems and Experiences Center, the NLP Group, and the Value Sensitive Design Lab. She is also co-director-elect of the Tech Policy Lab, a role she will assume when Dr. Friedman retires from the university.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>At the end of this year in which the hype around artificial intelligence seemed to increase in volume with each passing week, it’s worth stepping back and asking whether we need to slow down and put just as much effort into questions about what it is we are building and why. </p><p>In today’s episode, we’re going to hear from two researchers at two different points in their careers who spend their days grappling with questions about how we can develop systems and modes of thinking about systems that lead to more just and equitable outcomes, and that preserve our humanity and the planet:</p><ul><li><strong>Dr. Batya Friedman</strong>&nbsp;is a Professor in the Information School and holds adjunct appointments in the Paul G. Allen School of Computer Science &amp; Engineering, the School of Law, and the Department of Human Centered Design and Engineering at the University of Washington, where she co-directs the Value Sensitive Design Lab and the UW Tech Policy Lab.</li><li><strong>Dr. Aylin Caliskan </strong>is an Assistant Professor in the Information School at the Paul G. 
Allen School of Computer Science &amp; Engineering, an affiliate of the UW Tech Policy Lab, and part of the Responsible AI Systems and Experiences Center, the NLP Group, and the Value Sensitive Design Lab. She is also co-director-elect of the Tech Policy Lab, a role she will assume when Dr. Friedman retires from the university.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-are-we-building-and-why]]></link><guid isPermaLink="false">c8281152-1dd6-4ee3-be1e-d06b149d9e11</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 17 Dec 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/32060e42-d20e-4480-a9b3-44b741007779/TPP215-converted.mp3" length="35109397" type="audio/mpeg"/><itunes:duration>48:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Europe Advances Its AI Act</title><itunes:title>Europe Advances Its AI Act</itunes:title><description><![CDATA[<p>In April 2021, the European Commission introduced the first regulatory framework for AI within the EU. This Friday, after a marathon set of negotiations, EU policymakers reached a political consensus on the details of the legislation. This AI Act represents the most significant comprehensive effort in the world’s democracies to regulate a technology that promises major social and economic impact. While the AI Act will still have to go through a few final procedural steps before its enactment, the contours of it are now set. To find out more about what was decided, <strong>Justin Hendrix</strong> spoke to one journalist who reported directly on the negotiations in Brussels: <strong>Luca Bertuzzi</strong>, technology editor at EURACTIV.</p>]]></description><content:encoded><![CDATA[<p>In April 2021, the European Commission introduced the first regulatory framework for AI within the EU. 
This Friday, after a marathon set of negotiations, EU policymakers reached a political consensus on the details of the legislation. This AI Act represents the most significant comprehensive effort in the world’s democracies to regulate a technology that promises major social and economic impact. While the AI Act will still have to go through a few final procedural steps before its enactment, the contours of it are now set. To find out more about what was decided, <strong>Justin Hendrix</strong> spoke to one journalist who reported directly on the negotiations in Brussels: <strong>Luca Bertuzzi</strong>, technology editor at EURACTIV.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/europe-advances-its-ai-act]]></link><guid isPermaLink="false">e4d83cd2-fb5c-4119-8291-118c575acbde</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 10 Dec 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/539c9cc9-3240-4c0c-b73b-ac85fbb39579/TPP214-converted.mp3" length="19981674" type="audio/mpeg"/><itunes:duration>27:45</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Tracking Oversight of Surveillance in the US and EU</title><itunes:title>Tracking Oversight of Surveillance in the US and EU</itunes:title><description><![CDATA[<p>In both the US and Europe, policymakers are making important decisions about the governance of the bulk collection of communications and data for intelligence purposes. 
In the US, some of these questions are at the fore as Congress considers how to extend the Foreign Intelligence Surveillance Act's Section 702 program, which is set to expire at the start of 2024.&nbsp;To get a sense of how the broader policy debate around government surveillance is advancing in both the US and Europe, <strong>Justin Hendrix</strong> spoke to two experts on the subject who happened to be meeting together in Washington DC last week: <strong>Dr. Thorsten Wetzling,</strong> head of the Digital Rights, Surveillance and Democracy research unit of the Berlin think tank Stiftung Neue Verantwortung (SNV), and <strong>Greg Nojeim</strong>, Director of the Security and Surveillance Project at the Center for Democracy and Technology (CDT).</p>]]></description><content:encoded><![CDATA[<p>In both the US and Europe, policymakers are making important decisions about the governance of the bulk collection of communications and data for intelligence purposes. In the US, some of these questions are at the fore as Congress considers how to extend the Foreign Intelligence Surveillance Act's Section 702 program, which is set to expire at the start of 2024.&nbsp;To get a sense of how the broader policy debate around government surveillance is advancing in both the US and Europe, <strong>Justin Hendrix</strong> spoke to two experts on the subject who happened to be meeting together in Washington DC last week: <strong>Dr. 
Thorsten Wetzling,</strong> head of the Digital Rights, Surveillance and Democracy research unit of the Berlin think tank Stiftung Neue Verantwortung (SNV), and <strong>Greg Nojeim</strong>, Director of the Security and Surveillance Project at the Center for Democracy and Technology (CDT).</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/tracking-oversight-of-surveillance-in-the-us-and-eu]]></link><guid isPermaLink="false">e9dba459-7015-422b-832d-68c1f3ad7283</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 10 Dec 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d062a899-3f8f-4e52-9b64-157b71b6b3f9/TPP213-converted.mp3" length="27119388" type="audio/mpeg"/><itunes:duration>37:40</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Checking on the Progress of Content Moderators in Africa</title><itunes:title>Checking on the Progress of Content Moderators in Africa</itunes:title><description><![CDATA[<p>For the past two years, there has been a steady stream of news out of Kenya about the relationships between major tech firms – including <a href="https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/" rel="noopener noreferrer" target="_blank">Meta</a>, <a href="https://time.com/6293271/tiktok-bytedance-kenya-moderator-lawsuit/" rel="noopener noreferrer" target="_blank">TikTok</a> and <a href="https://time.com/6247678/openai-chatgpt-kenya-workers/" rel="noopener noreferrer" target="_blank">OpenAI</a> – and outsourcing firms like Sama and Majorel that have employed content moderators on their behalf. 
In the spring of this year, more than 150 moderators <a href="https://time.com/6275995/chatgpt-facebook-african-workers-union/" rel="noopener noreferrer" target="_blank">announced</a> the formation of the African Content Moderators Union, which advocates for better pay and working conditions, and a <a href="https://www.theguardian.com/technology/2023/oct/16/meta-settlement-talks-with-kenyan-content-moderators-break-down-facebook" rel="noopener noreferrer" target="_blank">lawsuit</a> against Meta is working its way through Kenya’s courts. This month will see an important ruling in that case. </p><p>To learn more about the situation on the ground and what it’s been like for the individuals involved in this fight while the legal process unfolds, <strong>Justin Hendrix</strong> spoke to <strong>Njenga Kimani</strong>, a researcher at Siasa Place, a youth-led, pro-democracy NGO based in Nairobi, and three moderators who’ve worked on platforms including TikTok, Meta, and OpenAI: <strong>James Oyange Odhiambo, Sonia Kgomo, </strong>and<strong> Richard Mathenge</strong>.</p>]]></description><content:encoded><![CDATA[<p>For the past two years, there has been a steady stream of news out of Kenya about the relationships between major tech firms – including <a href="https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/" rel="noopener noreferrer" target="_blank">Meta</a>, <a href="https://time.com/6293271/tiktok-bytedance-kenya-moderator-lawsuit/" rel="noopener noreferrer" target="_blank">TikTok</a> and <a href="https://time.com/6247678/openai-chatgpt-kenya-workers/" rel="noopener noreferrer" target="_blank">OpenAI</a> – and outsourcing firms like Sama and Majorel that have employed content moderators on their behalf. 
In the spring of this year, more than 150 moderators <a href="https://time.com/6275995/chatgpt-facebook-african-workers-union/" rel="noopener noreferrer" target="_blank">announced</a> the formation of the African Content Moderators Union, which advocates for better pay and working conditions, and a <a href="https://www.theguardian.com/technology/2023/oct/16/meta-settlement-talks-with-kenyan-content-moderators-break-down-facebook" rel="noopener noreferrer" target="_blank">lawsuit</a> against Meta is working its way through Kenya’s courts. This month will see an important ruling in that case. </p><p>To learn more about the situation on the ground and what it’s been like for the individuals involved in this fight while the legal process unfolds, <strong>Justin Hendrix</strong> spoke to <strong>Njenga Kimani</strong>, a researcher at Siasa Place, a youth-led, pro-democracy NGO based in Nairobi, and three moderators who’ve worked on platforms including TikTok, Meta, and OpenAI: <strong>James Oyange Odhiambo, Sonia Kgomo, </strong>and<strong> Richard Mathenge</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/checking-on-the-progress-of-content-moderators-in-africa]]></link><guid isPermaLink="false">be62b5ae-2b1f-4b94-946a-4935b1164479</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 03 Dec 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/28e4c7bb-2113-4dc7-a230-5bb5ced1c0c7/TPP212-converted.mp3" length="30204568" type="audio/mpeg"/><itunes:duration>41:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Saga at OpenAI: Lessons for Policymakers</title><itunes:title>The Saga at OpenAI: Lessons for Policymakers</itunes:title><description><![CDATA[<p>To learn more about the recent leadership crisis at OpenAI and what lessons policymakers should take from it, 
<strong>Justin Hendrix</strong> spoke to <strong>Karen Hao</strong>, a contributing writer at <em>The Atlantic</em> who is currently working on a book about OpenAI. With staff writer <strong>Charlie Warzel</strong>, Hao wrote a piece for <em>The Atlantic </em>under the headline "<a href="https://www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050/" rel="noopener noreferrer" target="_blank">Inside the Chaos at OpenAI</a>," drawing on conversations with current and former employees of the company.</p>]]></description><content:encoded><![CDATA[<p>To learn more about the recent leadership crisis at OpenAI and what lessons policymakers should take from it, <strong>Justin Hendrix</strong> spoke to <strong>Karen Hao</strong>, a contributing writer at <em>The Atlantic</em> who is currently working on a book about OpenAI. With staff writer <strong>Charlie Warzel</strong>, Hao wrote a piece for <em>The Atlantic </em>under the headline "<a href="https://www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050/" rel="noopener noreferrer" target="_blank">Inside the Chaos at OpenAI</a>," drawing on conversations with current and former employees of the company.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-saga-at-openai-lessons-for-policymakers]]></link><guid isPermaLink="false">a3ca8003-4e7e-4a6f-aa01-1e99e70b5252</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 26 Nov 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/cd8dad2b-b0b1-4eda-a14a-f6e307057355/TPP212-converted.mp3" length="25652873" type="audio/mpeg"/><itunes:duration>30:32</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>AI and Harms to Artists and Creators</title><itunes:title>AI and Harms to Artists and 
Creators</itunes:title><description><![CDATA[<p>On November 15, the Open Markets Institute and the AI Now Institute <a href="https://www.openmarketsinstitute.org/publications/event-ai-and-the-public-interest" rel="noopener noreferrer" target="_blank">hosted an event</a> in Washington D.C. featuring discussion on how to understand the promise, threats, and practical regulatory challenges presented by artificial intelligence. </p><p><strong>Justin Hendrix</strong> moderated a discussion on harms to artists and creators, exploring questions around copyright and fair use, the ways in which AI is shaping the entire incentive structure for creative labor, and the economic impacts of the "junkification" of online content. The panelists included <strong>Liz Pelly</strong>, a freelance journalist specializing in the music industry; <strong>Ashley Irwin</strong>, President of the Society of Composers &amp; Lyricists; and <strong>Jen Jacobsen, </strong>Executive Director of the Artist Rights Alliance.</p>]]></description><content:encoded><![CDATA[<p>On November 15, the Open Markets Institute and the AI Now Institute <a href="https://www.openmarketsinstitute.org/publications/event-ai-and-the-public-interest" rel="noopener noreferrer" target="_blank">hosted an event</a> in Washington D.C. featuring discussion on how to understand the promise, threats, and practical regulatory challenges presented by artificial intelligence. </p><p><strong>Justin Hendrix</strong> moderated a discussion on harms to artists and creators, exploring questions around copyright and fair use, the ways in which AI is shaping the entire incentive structure for creative labor, and the economic impacts of the "junkification" of online content. 
The panelists included <strong>Liz Pelly</strong>, a freelance journalist specializing in the music industry; <strong>Ashley Irwin</strong>, President of the Society of Composers &amp; Lyricists; and <strong>Jen Jacobsen, </strong>Executive Director of the Artist Rights Alliance.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/ai-and-harms-to-artists-and-creators]]></link><guid isPermaLink="false">8e0017e4-8888-47f0-95b2-5b1b6f280bcc</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 19 Nov 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/ff6422e2-e66b-42b6-8b27-c9e50c9353c8/TPP211-converted.mp3" length="26180855" type="audio/mpeg"/><itunes:duration>36:22</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Broken Code: A Conversation with Jeff Horwitz</title><itunes:title>Broken Code: A Conversation with Jeff Horwitz</itunes:title><description><![CDATA[<p>This episode explores <a href="https://www.penguinrandomhouse.com/books/712678/broken-code-by-jeff-horwitz/" rel="noopener noreferrer" target="_blank"><em>Broken Code: Inside Facebook and the Fight to Expose its Harmful Secrets</em></a>, a new book by <em>Wall Street Journal</em> technology reporter <strong>Jeff Horwitz</strong>. His relentless coverage of Meta, including first reporting on the documents brought forward by whistleblower <strong>Frances Haugen</strong> in the fall of 2021, has been pivotal in shedding light on the complex interplay between social media platforms, society, and democracy. 
<strong>Justin Hendrix</strong> talks to him about his journey, new details revealed in the book, and the impact his reporting has had in driving platform accountability both in the United States and internationally.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>This episode explores <a href="https://www.penguinrandomhouse.com/books/712678/broken-code-by-jeff-horwitz/" rel="noopener noreferrer" target="_blank"><em>Broken Code: Inside Facebook and the Fight to Expose its Harmful Secrets</em></a>, a new book by <em>Wall Street Journal</em> technology reporter <strong>Jeff Horwitz</strong>. His relentless coverage of Meta, including first reporting on the documents brought forward by whistleblower <strong>Frances Haugen</strong> in the fall of 2021, has been pivotal in shedding light on the complex interplay between social media platforms, society, and democracy. <strong>Justin Hendrix</strong> talks to him about his journey, new details revealed in the book, and the impact his reporting has had in driving platform accountability both in the United States and internationally.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/broken-code-a-conversation-with-jeff-horwitz]]></link><guid isPermaLink="false">b7558ae1-a4e8-4d88-a5f1-cc1f1e89223b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 14 Nov 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/a7d0e9e9-1270-4539-94b9-3b097a13c38f/TPP210-converted.mp3" length="32687757" type="audio/mpeg"/><itunes:duration>38:55</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Policing the City: A Conversation with Matthew Guariglia</title><itunes:title>Policing the City: A Conversation with Matthew Guariglia</itunes:title><description><![CDATA[<p>Today's guest is <a 
href="https://www.eff.org/about/staff/dr-matthew-guariglia-0" rel="noopener noreferrer" target="_blank">Dr. Matthew Guariglia</a>, a senior policy analyst for the Electronic Frontier Foundation and author of the new book, <a href="https://www.dukeupress.edu/police-and-the-empire-city" rel="noopener noreferrer" target="_blank"><em>Police and the Empire City: Race and the Origins of Modern Policing in New York</em></a>, just out from Duke University Press. Guariglia says we're really living in a world of police surveillance built in the early 20th century, even as police departments wield powers that only a few years ago we thought might only be in the hands of federal intelligence agencies. </p>]]></description><content:encoded><![CDATA[<p>Today's guest is <a href="https://www.eff.org/about/staff/dr-matthew-guariglia-0" rel="noopener noreferrer" target="_blank">Dr. Matthew Guariglia</a>, a senior policy analyst for the Electronic Frontier Foundation and author of the new book, <a href="https://www.dukeupress.edu/police-and-the-empire-city" rel="noopener noreferrer" target="_blank"><em>Police and the Empire City: Race and the Origins of Modern Policing in New York</em></a>, just out from Duke University Press. Guariglia says we're really living in a world of police surveillance built in the early 20th century, even as police departments wield powers that only a few years ago we thought might only be in the hands of federal intelligence agencies. 
</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/policing-the-city-a-conversation-with-matthew-guariglia]]></link><guid isPermaLink="false">884a49e1-b572-4982-87a2-e7ca3e2d78c6</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 12 Nov 2023 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/fa6c6cfb-cde8-4486-abf6-745d5de38e16/TPP209-converted.mp3" length="23724842" type="audio/mpeg"/><itunes:duration>32:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Artificial Intelligence and Your Voice</title><itunes:title>Artificial Intelligence and Your Voice</itunes:title><description><![CDATA[<p>Today’s guest is <a href="https://wiebketoussaint.com/" rel="noopener noreferrer" target="_blank"><strong>Wiebke Hutiri</strong></a>, a researcher with a particular expertise in&nbsp;design patterns for detecting and mitigating bias&nbsp;in AI systems. Her recent work has focused on voice biometrics, including work on an <a href="https://www.faireva.org/home" rel="noopener noreferrer" target="_blank">open source project called Fair EVA</a> that gathers resources for researchers and developers to audit bias and discrimination in voice technology. <strong>Justin Hendrix</strong> spoke to Hutiri about voice biometrics, voice synthesis, and a range of issues and concerns these technologies present alongside their benefits.</p>]]></description><content:encoded><![CDATA[<p>Today’s guest is <a href="https://wiebketoussaint.com/" rel="noopener noreferrer" target="_blank"><strong>Wiebke Hutiri</strong></a>, a researcher with a particular expertise in&nbsp;design patterns for detecting and mitigating bias&nbsp;in AI systems. 
Her recent work has focused on voice biometrics, including work on an <a href="https://www.faireva.org/home" rel="noopener noreferrer" target="_blank">open source project called Fair EVA</a> that gathers resources for researchers and developers to audit bias and discrimination in voice technology. <strong>Justin Hendrix</strong> spoke to Hutiri about voice biometrics, voice synthesis, and a range of issues and concerns these technologies present alongside their benefits.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/artificial-intelligence-and-your-voice]]></link><guid isPermaLink="false">a9458e89-4a5e-481a-9bfd-626c44fcde85</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 05 Nov 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b058eddf-5ad6-43b2-9819-ed339e790907/TPP208-converted.mp3" length="29239274" type="audio/mpeg"/><itunes:duration>40:37</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Design Code for Big Tech</title><itunes:title>A Design Code for Big Tech</itunes:title><description><![CDATA[<p>Today’s guest is <strong>Ravi Iyer</strong>, a data scientist and moral psychologist at the Psychology of Technology Institute, which is a project of the University of Southern California Marshall School’s Neely Center for Ethical Leadership and Decision Making and the University of California-Berkeley’s Haas School of Business. He is also a former Facebook executive, and at the company he worked on a variety of civic integrity issues. </p><p>The Neely Center has developed a <a href="https://uscneelycenter.substack.com/p/introducing-the-neely-center-design" rel="noopener noreferrer" target="_blank">design code</a> that seeks to address a number of concerns about the harms of social media, including issues related to child online safety. 
It is endorsed by individuals and organizations ranging from academics at NYU and USC to the Tech Justice Law Project and New Public, as well as technologists who have worked at platforms such as Twitter, Facebook, and Google.&nbsp;<strong>Justin Hendrix</strong> spoke to Iyer about the details of the proposed code, and in particular how they relate to the debate over child online safety.</p>]]></description><content:encoded><![CDATA[<p>Today’s guest is <strong>Ravi Iyer</strong>, a data scientist and moral psychologist at the Psychology of Technology Institute, which is a project of the University of Southern California Marshall School’s Neely Center for Ethical Leadership and Decision Making and the University of California-Berkeley’s Haas School of Business. He is also a former Facebook executive, and at the company he worked on a variety of civic integrity issues. </p><p>The Neely Center has developed a <a href="https://uscneelycenter.substack.com/p/introducing-the-neely-center-design" rel="noopener noreferrer" target="_blank">design code</a> that seeks to address a number of concerns about the harms of social media, including issues related to child online safety. 
It is endorsed by individuals and organizations ranging from academics at NYU and USC to the Tech Justice Law Project and New Public, as well as technologists who have worked at platforms such as Twitter, Facebook, and Google.&nbsp;<strong>Justin Hendrix</strong> spoke to Iyer about the details of the proposed code, and in particular how they relate to the debate over child online safety.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-design-code-for-big-tech]]></link><guid isPermaLink="false">3acea954-e2dc-4c2f-8efd-0ebba4d2e43b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 29 Oct 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/fefc18ad-fdf0-4569-a46e-7bc8b6c591c5/TPP207-converted.mp3" length="23742993" type="audio/mpeg"/><itunes:duration>32:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Unpacking the Bangalore Ideology</title><itunes:title>Unpacking the Bangalore Ideology</itunes:title><description><![CDATA[<p>At the September G20 summit in Delhi, the government of prime minister Narendra Modi promoted the country’s digital public infrastructure (DPI) as a model for the world for how to develop digital systems that enable countries to deliver social services and provide access to infrastructure and economic opportunities to residents. Other world leaders were enthusiastic about the pitch, endorsing a common framework for DPI systems. </p><p>But even as an Indian vision for DPI appears to be attractive beyond that country’s borders, what are the ideas and events that shaped India’s approach? 
Today's guest is <strong>Mila Samdub</strong>, a researcher at the Information Society Project at Yale Law School who recently published an essay titled “<a href="https://TheBangaloreIdeology:HowanamoraltechnocracypowersModi’sIndia" rel="noopener noreferrer" target="_blank">The Bangalore Ideology: How an amoral technocracy powers Modi’s India</a>,” looking at histories of technocratic ideas in India, and how they have combined with Modi’s particular brand of populism.</p>]]></description><content:encoded><![CDATA[<p>At the September G20 summit in Delhi, the government of prime minister Narendra Modi promoted the country’s digital public infrastructure (DPI) as a model for the world for how to develop digital systems that enable countries to deliver social services and provide access to infrastructure and economic opportunities to residents. Other world leaders were enthusiastic about the pitch, endorsing a common framework for DPI systems. </p><p>But even as an Indian vision for DPI appears to be attractive beyond that country’s borders, what are the ideas and events that shaped India’s approach? 
Today's guest is <strong>Mila Samdub</strong>, a researcher at the Information Society Project at Yale Law School who recently published an essay titled “<a href="https://TheBangaloreIdeology:HowanamoraltechnocracypowersModi’sIndia" rel="noopener noreferrer" target="_blank">The Bangalore Ideology: How an amoral technocracy powers Modi’s India</a>,” looking at histories of technocratic ideas in India, and how they have combined with Modi’s particular brand of populism.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/unpacking-the-bangalore-ideology]]></link><guid isPermaLink="false">5ef6ce63-fa64-4ce2-8c65-1c5a03d79d56</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 22 Oct 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/fc6e71c1-4db7-4610-9858-00b7a1d66195/TPP206-converted.mp3" length="29056932" type="audio/mpeg"/><itunes:duration>34:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How to Control Our Appetite for Misinformation</title><itunes:title>How to Control Our Appetite for Misinformation</itunes:title><description><![CDATA[<p>A lot is written about the supply side of mis- and disinformation, including how propagandists and political leaders are using messages and platforms to impact public opinion. But less is written about the demand side. When it comes to false beliefs that each of us adopt and harbor to help us understand the world and events in it, what are the incentives and social dimensions that each of us as individuals and as members of the community are responding to that drive our appetite for misinformation? 
</p><p>Today’s guest has devoted her research to this subject, and has just published a book that serves as a very accessible entry point to the latest scholarship on this question.&nbsp;<strong>Dannagal Young</strong> is a Professor of Communication and Political Science at the University of Delaware and the author of <a href="https://www.press.jhu.edu/books/title/12834/wrong#book__details" rel="noopener noreferrer" target="_blank"><em>Wrong: How Media, Politics, and Identity Drive our Appetite for Misinformation</em></a>.</p>]]></description><content:encoded><![CDATA[<p>A lot is written about the supply side of mis- and disinformation, including how propagandists and political leaders are using messages and platforms to impact public opinion. But less is written about the demand side. When it comes to false beliefs that each of us adopt and harbor to help us understand the world and events in it, what are the incentives and social dimensions that each of us as individuals and as members of the community are responding to that drive our appetite for misinformation? 
</p><p>Today’s guest has devoted her research to this subject, and has just published a book that serves as a very accessible entry point to the latest scholarship on this question.&nbsp;<strong>Dannagal Young</strong> is a Professor of Communication and Political Science at the University of Delaware and the author of <a href="https://www.press.jhu.edu/books/title/12834/wrong#book__details" rel="noopener noreferrer" target="_blank"><em>Wrong: How Media, Politics, and Identity Drive our Appetite for Misinformation</em></a>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/how-to-control-our-appetite-for-misinformation]]></link><guid isPermaLink="false">b5b786e7-fdc4-414f-9aaa-3fa7c44b7dbd</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 15 Oct 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/dea29a3a-b1c0-4f54-8dc1-f55f2f00a612/TPP205-converted.mp3" length="31184779" type="audio/mpeg"/><itunes:duration>43:19</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Digital Empires: A Conversation with Anu Bradford</title><itunes:title>Digital Empires: A Conversation with Anu Bradford</itunes:title><description><![CDATA[<p>There is a&nbsp;term&nbsp;you've likely heard on the <em>Tech Policy Press</em> podcast in the past: the&nbsp;<em>Brussels Effect</em>. The term is meant to describe the&nbsp;European&nbsp;Union’s outsized influence&nbsp;on&nbsp;global markets through its regulations. You may not know that the term was first coined by <strong>Anu Bradford</strong>, a professor at Columbia Law School. She wrote a book about it called&nbsp;<em>The Brussels Effect: How the European Union Rules the World</em>. 
</p><p>Now, she has a new book, just out from Oxford University Press, called&nbsp;<a href="https://global.oup.com/academic/product/digital-empires-9780197649268?cc=us&amp;lang=en&amp;" rel="noopener noreferrer" target="_blank"><em>Digital&nbsp;Empires: The&nbsp;Global&nbsp;Battle&nbsp;to&nbsp;Regulate&nbsp;Technology</em></a>. The book describes the geopolitical competition to establish digital governance models between the US, the EU, and China. <strong>Justin Hendrix</strong> had the opportunity to speak to Bradford about the book, and why she thinks the US government, by failing to regulate its tech companies, may ultimately imperil not only the US model but internet freedom more broadly.</p>]]></description><content:encoded><![CDATA[<p>There is a&nbsp;term&nbsp;you've likely heard on the <em>Tech Policy Press</em> podcast in the past: the&nbsp;<em>Brussels Effect</em>. The term is meant to describe the&nbsp;European&nbsp;Union’s outsized influence&nbsp;on&nbsp;global markets through its regulations. You may not know that the term was first coined by <strong>Anu Bradford</strong>, a professor at Columbia Law School. She wrote a book about it called&nbsp;<em>The Brussels Effect: How the European Union Rules the World</em>. </p><p>Now, she has a new book, just out from Oxford University Press, called&nbsp;<a href="https://global.oup.com/academic/product/digital-empires-9780197649268?cc=us&amp;lang=en&amp;" rel="noopener noreferrer" target="_blank"><em>Digital&nbsp;Empires: The&nbsp;Global&nbsp;Battle&nbsp;to&nbsp;Regulate&nbsp;Technology</em></a>. The book describes the geopolitical competition to establish digital governance models between the US, the EU, and China. 
<strong>Justin Hendrix</strong> had the opportunity to speak to Bradford about the book, and why she thinks the US government, by failing to regulate its tech companies, may ultimately imperil not only the US model but internet freedom more broadly.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/digital-empires-a-conversation-with-anu-bradford]]></link><guid isPermaLink="false">fbac72f8-c0c3-461b-9dc7-6e70daafbebc</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 08 Oct 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d91c2741-1e0c-4c35-8ef4-4819644e1da7/TPP204-converted.mp3" length="33042399" type="audio/mpeg"/><itunes:duration>45:54</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Artificial Intelligence as a Tool of Repression</title><itunes:title>Artificial Intelligence as a Tool of Repression</itunes:title><description><![CDATA[<p>The 13th installment of the <a href="https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence" rel="noopener noreferrer" target="_blank">Freedom on the Net report from Freedom House</a> finds that "while advances in artificial intelligence offer benefits for society, they have also been used to increase the scale and efficiency of digital repression." 
<strong>Justin Hendrix</strong> spoke with two of the report's authors, <strong>Allie Funk</strong> and <strong>Kian Vesteinsson</strong>, about their findings, which unfortunately do not represent a change of trajectory from prior years.</p>]]></description><content:encoded><![CDATA[<p>The 13th installment of the <a href="https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence" rel="noopener noreferrer" target="_blank">Freedom on the Net report from Freedom House</a> finds that "while advances in artificial intelligence offer benefits for society, they have also been used to increase the scale and efficiency of digital repression." <strong>Justin Hendrix</strong> spoke with two of the report's authors, <strong>Allie Funk</strong> and <strong>Kian Vesteinsson</strong>, about their findings, which unfortunately do not represent a change of trajectory from prior years.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/artificial-intelligence-as-a-tool-of-repression]]></link><guid isPermaLink="false">74b3948a-c664-49c7-bc00-703a36d43e5b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Wed, 04 Oct 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/4f7902a4-d2a8-4981-b415-52f50081fea1/TPP203-converted.mp3" length="31317689" type="audio/mpeg"/><itunes:duration>43:30</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The EU AI Act Enters Final Negotiations</title><itunes:title>The EU AI Act Enters Final Negotiations</itunes:title><description><![CDATA[<p>While US Senators are busy holding hearings and forums and posing for pictures with the CEOs of AI companies, the European Union is just months away from passing sweeping regulation of artificial intelligence.&nbsp;</p><p>As negotiations continue between the European Parliament, Council, and 
Commission, <strong>Justin Hendrix</strong> spoke to one observer who is paying close attention to every detail: the Ada Lovelace Institute's European Public Policy Lead, <strong>Connor Dunlop</strong>. Connor recently <a href="https://www.adalovelaceinstitute.org/policy-briefing/eu-ai-act-trilogues/" rel="noopener noreferrer" target="_blank">published a briefing</a> on five areas of focus for the trilogue negotiations that recommence next week.</p>]]></description><content:encoded><![CDATA[<p>While US Senators are busy holding hearings and forums and posing for pictures with the CEOs of AI companies, the European Union is just months away from passing sweeping regulation of artificial intelligence.&nbsp;</p><p>As negotiations continue between the European Parliament, Council, and Commission, <strong>Justin Hendrix</strong> spoke to one observer who is paying close attention to every detail: the Ada Lovelace Institute's European Public Policy Lead, <strong>Connor Dunlop</strong>. Connor recently <a href="https://www.adalovelaceinstitute.org/policy-briefing/eu-ai-act-trilogues/" rel="noopener noreferrer" target="_blank">published a briefing</a> on five areas of focus for the trilogue negotiations that recommence next week.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-eu-ai-act-enters-final-negotiations]]></link><guid isPermaLink="false">3570fd43-4229-40f3-a0bb-89b2a73aa1bf</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 01 Oct 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/6c41ef36-1898-4700-a556-5ce869653846/TPP202-converted.mp3" length="21703522" type="audio/mpeg"/><itunes:duration>25:50</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Luddites and Lessons for the Next Rebellion</title><itunes:title>The Luddites and Lessons for the Next 
Rebellion</itunes:title><description><![CDATA[<p>In <a href="https://www.hachettebookgroup.com/titles/brian-merchant/blood-in-the-machine/9780316487740/?lens=little-brown" rel="noopener noreferrer" target="_blank"><em>Blood in the Machine: The Origins of the Rebellion Against Big Tech</em></a>, <em>Los Angeles Times</em> technology columnist <strong>Brian Merchant</strong> has written a new history of perhaps one of the most famous movements for worker rights and power in the face of automation. The book sets the record straight on the Luddites, and unpacks what today’s workers can learn from them.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>In <a href="https://www.hachettebookgroup.com/titles/brian-merchant/blood-in-the-machine/9780316487740/?lens=little-brown" rel="noopener noreferrer" target="_blank"><em>Blood in the Machine: The Origins of the Rebellion Against Big Tech</em></a>, <em>Los Angeles Times</em> technology columnist <strong>Brian Merchant</strong> has written a new history of perhaps one of the most famous movements for worker rights and power in the face of automation. 
The book sets the record straight on the Luddites, and unpacks what today’s workers can learn from them.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-luddites-and-lessons-for-the-next-rebellion]]></link><guid isPermaLink="false">d7162df4-0c27-4d6c-a835-fdb5e037dbb4</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Wed, 27 Sep 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/33fa075b-7de1-4556-afc7-b3fe23deffac/TPP201-converted.mp3" length="33263761" type="audio/mpeg"/><itunes:duration>39:36</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Graphic Content, Trauma and Meaning: A Conversation with Alexa Koenig and Andrea Lampros</title><itunes:title>Graphic Content, Trauma and Meaning: A Conversation with Alexa Koenig and Andrea Lampros</itunes:title><description><![CDATA[<p>The ubiquity of cameras in our phones and our environment, coupled with massive social media networks that can share images and video in an instant, means we see often graphic and disturbing images with great frequency. How are people processing such material? And how is it different for people working in newsrooms, social media companies, and human rights and social justice organizations? 
What protections might be put in place to protect people from vicarious trauma and other harms, and what is the ultimate benefit of doing this work?</p><p>In their new book, <a href="https://www.graphicthebook.com/home#press" rel="noopener noreferrer" target="_blank"><em>Graphic: Trauma and Meaning in Our Online Lives</em></a>, University of California Berkeley scholars <strong>Alexa Koenig</strong> and <strong>Andrea Lampros</strong> set out to answer those questions.</p>]]></description><content:encoded><![CDATA[<p>The ubiquity of cameras in our phones and our environment, coupled with massive social media networks that can share images and video in an instant, means we see often graphic and disturbing images with great frequency. How are people processing such material? And how is it different for people working in newsrooms, social media companies, and human rights and social justice organizations? What protections might be put in place to protect people from vicarious trauma and other harms, and what is the ultimate benefit of doing this work?</p><p>In their new book, <a href="https://www.graphicthebook.com/home#press" rel="noopener noreferrer" target="_blank"><em>Graphic: Trauma and Meaning in Our Online Lives</em></a>, University of California Berkeley scholars <strong>Alexa Koenig</strong> and <strong>Andrea Lampros</strong> set out to answer those questions.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/graphic-content-trauma-and-meaning-a-conversation-with-alexa-koenig-and-andrea-lampros]]></link><guid isPermaLink="false">f45ca473-a876-452c-a68e-2d11ea172a07</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 24 Sep 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f7ec6474-6cc0-4f62-ac5e-dda80b237e45/TPP200-converted.mp3" length="32653788" 
type="audio/mpeg"/><itunes:duration>38:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Your Face Belongs to Us: A Conversation with Kashmir Hill</title><itunes:title>Your Face Belongs to Us: A Conversation with Kashmir Hill</itunes:title><description><![CDATA[<p>In 2019, journalist <strong>Kashmir Hill</strong> had just joined <em>The New York Times</em> when she got a tip about the existence of a company called Clearview AI that claimed it could identify almost anyone with a photo. But the company was hard to contact, and people who knew about it didn’t want to talk. Hill resorted to old fashioned shoe-leather reporting, trying to track down the company and its executives. By January of 2020, the Times was ready to report what she had learned in a piece titled “<a href="https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html" rel="noopener noreferrer" target="_blank">The Secretive Company That Might End Privacy as We Know It.</a>”&nbsp;</p><p>Three years later, Hill has published a book that tells the story of Clearview AI, but with the benefit of a great deal more reporting and study on the social, political, and technological forces behind it.&nbsp;It's called <a href="https://www.penguinrandomhouse.com/books/691288/your-face-belongs-to-us-by-kashmir-hill/" rel="noopener noreferrer" target="_blank"><em>Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy As We Know It</em></a>, just out from Penguin Random House.</p>]]></description><content:encoded><![CDATA[<p>In 2019, journalist <strong>Kashmir Hill</strong> had just joined <em>The New York Times</em> when she got a tip about the existence of a company called Clearview AI that claimed it could identify almost anyone with a photo. But the company was hard to contact, and people who knew about it didn’t want to talk. 
Hill resorted to old-fashioned shoe-leather reporting, trying to track down the company and its executives. By January of 2020, the Times was ready to report what she had learned in a piece titled “<a href="https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html" rel="noopener noreferrer" target="_blank">The Secretive Company That Might End Privacy as We Know It.</a>”&nbsp;</p><p>Three years later, Hill has published a book that tells the story of Clearview AI, but with the benefit of a great deal more reporting and study on the social, political, and technological forces behind it.&nbsp;It's called <a href="https://www.penguinrandomhouse.com/books/691288/your-face-belongs-to-us-by-kashmir-hill/" rel="noopener noreferrer" target="_blank"><em>Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy As We Know It</em></a>, just out from Penguin Random House.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/your-face-belongs-to-us-a-conversation-with-kashmir-hill]]></link><guid isPermaLink="false">6ffbd4e6-c144-42f3-8ed4-7e2db672539e</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 24 Sep 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/1ae98b73-662e-40b5-ad93-a719a0fc056a/TPP199-converted.mp3" length="24546760" type="audio/mpeg"/><itunes:duration>34:06</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Problem with the &quot;Big&quot; in Big Tech</title><itunes:title>The Problem with the &quot;Big&quot; in Big Tech</itunes:title><description><![CDATA[<p>Today’s episode features two segments, both of which consider the scale of technology platforms and their power over markets and people. 
</p><p>In the first, <strong>Rebecca Rand</strong> delivers a conversation with University of Technology Sydney researcher <strong>Dr. Luis Lozano-Paredes</strong> about a community of drivers in Colombia who have hacked together a way to preserve their power alongside the adoption of ride sharing apps. </p><p>And in the second, <strong>Justin Hendrix</strong> speaks with Columbia University Law School Professor of Law, Science and Technology <strong>Tim Wu</strong>, who recently spent two years on the National Economic Council in the White House as Special Assistant to the President for Competition and Technology. The conversation touches on privacy legislation, ideas about competition and scale, and Wu's observations on the landmark antitrust trial between the Justice Department and Google, which wrapped up its first week of testimony on Friday. The conversation took place at the All Tech is Human Responsible Tech Summit, hosted with the Consulate General of Canada in New York, on September 14th.</p>]]></description><content:encoded><![CDATA[<p>Today’s episode features two segments, both of which consider the scale of technology platforms and their power over markets and people. </p><p>In the first, <strong>Rebecca Rand</strong> delivers a conversation with University of Technology Sydney researcher <strong>Dr. Luis Lozano-Paredes</strong> about a community of drivers in Colombia who have hacked together a way to preserve their power alongside the adoption of ride sharing apps. </p><p>And in the second, <strong>Justin Hendrix</strong> speaks with Columbia University Law School Professor of Law, Science and Technology <strong>Tim Wu</strong>, who recently spent two years on the National Economic Council in the White House as Special Assistant to the President for Competition and Technology. 
The conversation touches on privacy legislation, ideas about competition and scale, and Wu's observations on the landmark antitrust trial between the Justice Department and Google, which wrapped up its first week of testimony on Friday. The conversation took place at the All Tech is Human Responsible Tech Summit, hosted with the Consulate General of Canada in New York, on September 14th.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-problem-with-the-big-in-big-tech]]></link><guid isPermaLink="false">716d7e99-d60f-4828-9e73-ce705097d77a</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 17 Sep 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/18a4e79e-3d26-4487-b4b2-b1433e2bd071/TPP198-converted.mp3" length="41906895" type="audio/mpeg"/><itunes:duration>43:39</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Assessing the Problem of Disinformation</title><itunes:title>Assessing the Problem of Disinformation</itunes:title><description><![CDATA[<p>This episode features two segments on the subject of disinformation.&nbsp;</p><p>In the first, <strong>Rebecca Rand</strong> speaks with <strong>Dr. Shelby Grossman</strong>, a research scholar at the&nbsp;Stanford Internet Observatory, on <a href="https://osf.io/preprints/socarxiv/fp87b/" rel="noopener noreferrer" target="_blank">recent research</a> that looks at whether AI can write persuasive propaganda.&nbsp;</p><p>In the second segment, <strong>Justin Hendrix</strong> speaks with <strong>Dr. 
Kirsty Park</strong>, the Policy Lead at the European Media Observatory Ireland, and <strong>Stephan Mündges</strong>, the manager of the Institute of Journalism at TU Dortmund University and one of the coordinators of the German-Austrian Digital Media Observatory, about the <a href="https://techpolicy.press/wp-content/uploads/2023/09/CoP-Monitor-Report.pdf" rel="noopener noreferrer" target="_blank">report</a> they authored that looks in detail at baseline reporting from big technology platforms that are part of the EU Code of Practice on Disinformation.</p>]]></description><content:encoded><![CDATA[<p>This episode features two segments on the subject of disinformation.&nbsp;</p><p>In the first, <strong>Rebecca Rand</strong> speaks with <strong>Dr. Shelby Grossman</strong>, a research scholar at the&nbsp;Stanford Internet Observatory, on <a href="https://osf.io/preprints/socarxiv/fp87b/" rel="noopener noreferrer" target="_blank">recent research</a> that looks at whether AI can write persuasive propaganda.&nbsp;</p><p>In the second segment, <strong>Justin Hendrix</strong> speaks with <strong>Dr. 
Kirsty Park</strong>, the Policy Lead at the European Media Observatory Ireland, and <strong>Stephan Mündges</strong>, the manager of the Institute of Journalism at TU Dortmund University and one of the coordinators of the German-Austrian Digital Media Observatory, about the <a href="https://techpolicy.press/wp-content/uploads/2023/09/CoP-Monitor-Report.pdf" rel="noopener noreferrer" target="_blank">report</a> they authored that looks in detail at baseline reporting from big technology platforms that are part of the EU Code of Practice on Disinformation.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/assessing-the-problem-of-disinformation]]></link><guid isPermaLink="false">9b7976fc-8db9-4628-b0db-115f56d1f3ae</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 10 Sep 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/250deee8-e937-4bad-9537-91fbb44493d2/TPP197-converted.mp3" length="27761200" type="audio/mpeg"/><itunes:duration>33:03</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Paul Gowder on The Networked Leviathan</title><itunes:title>Paul Gowder on The Networked Leviathan</itunes:title><description><![CDATA[<p>One of the problems we come back to again and again on the <em>Tech Policy Press </em>podcast is the problem of how to govern social media platforms.&nbsp;</p><p>Today’s guest is <strong>Paul Gowder</strong>, Professor of Law and Associate Dean of Research and Intellectual Life at Northwestern University's&nbsp;Pritzker School of Law&nbsp;and a founding fellow of the&nbsp;Integrity Institute. 
Gowder is the author of <a href="https://networked-leviathan.com/" rel="noopener noreferrer" target="_blank"><em>The Networked Leviathan: For Democratic Platforms</em></a>, a book that he says takes an institutional political science approach to the problem of tech platform governance, arguing “that the goals of effective governance capacity development and of global justice” can come together, and that we can build “worldwide direct democratic institutions to exercise public authority over the operations of the big platforms.”</p>]]></description><content:encoded><![CDATA[<p>One of the problems we come back to again and again on the <em>Tech Policy Press </em>podcast is the problem of how to govern social media platforms.&nbsp;</p><p>Today’s guest is <strong>Paul Gowder</strong>, Professor of Law and Associate Dean of Research and Intellectual Life at Northwestern University's&nbsp;Pritzker School of Law&nbsp;and a founding fellow of the&nbsp;Integrity Institute. Gowder is the author of <a href="https://networked-leviathan.com/" rel="noopener noreferrer" target="_blank"><em>The Networked Leviathan: For Democratic Platforms</em></a>, a book that he says takes an institutional political science approach to the problem of tech platform governance, arguing “that the goals of effective governance capacity development and of global justice” can come together, and that we can build “worldwide direct democratic institutions to exercise public authority over the operations of the big platforms.”</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/paul-gowder-on-the-networked-leviathian]]></link><guid isPermaLink="false">c89d6c56-d24a-490e-a343-c0acc3515a64</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 03 Sep 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f3cd73ad-ee05-4838-947b-1d2bdaf9db2b/TPP196-converted.mp3" length="40217077" 
type="audio/mpeg"/><itunes:duration>55:51</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Choosing Our Words Carefully</title><itunes:title>Choosing Our Words Carefully</itunes:title><description><![CDATA[<p>This episode features two segments. In the first, <strong>Rebecca Rand</strong> speaks with <strong>Alina Leidinger</strong>, a researcher at the Institute for Logic, Language and Computation at the University of Amsterdam, about her research, with coauthor <strong>Richard Rogers</strong>, into which stereotypes are moderated and under-moderated in search engine autocompletion. In the second segment, <strong>Justin Hendrix</strong> speaks with Associated Press investigative journalist <strong>Garance Burke</strong> about a new chapter in the AP Stylebook offering guidance on how to report on artificial intelligence.</p>]]></description><content:encoded><![CDATA[<p>This episode features two segments. In the first, <strong>Rebecca Rand</strong> speaks with <strong>Alina Leidinger</strong>, a researcher at the Institute for Logic, Language and Computation at the University of Amsterdam, about her research, with coauthor <strong>Richard Rogers</strong>, into which stereotypes are moderated and under-moderated in search engine autocompletion. 
In the second segment, <strong>Justin Hendrix</strong> speaks with Associated Press investigative journalist <strong>Garance Burke</strong> about a new chapter in the AP Stylebook offering guidance on how to report on artificial intelligence.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/choosing-our-words-carefully]]></link><guid isPermaLink="false">350171be-4ccb-419a-acac-52a1a7f33820</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 27 Aug 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/1336a33a-2cd2-4dc0-8547-7187b7be2317/TPP195-converted.mp3" length="20088773" type="audio/mpeg"/><itunes:duration>27:54</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Containing Big Tech</title><itunes:title>Containing Big Tech</itunes:title><description><![CDATA[<p>This episode features two segments. In the first, <strong>Rebecca Rand</strong> considers the social consequences of "machine allocation behavior" with Cornell researchers <strong>Houston Claure</strong> and <strong>Malte Jung</strong>, authors of a <a href="https://www.seyunkim.com/assets/pdf/machineAllocation.pdf" rel="noopener noreferrer" target="_blank">recent paper</a> on the topic with coauthors <strong>Seyun Kim</strong> and <strong>René Kizilcec</strong>.</p><p>In the second segment, <strong>Justin Hendrix</strong> speaks with <strong>Tom Kemp</strong>, author of a <a href="https://www.tomkemp.ai/containing-big-tech" rel="noopener noreferrer" target="_blank">new book out August 22</a> from Fast Company Press titled <em>Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy.</em></p>]]></description><content:encoded><![CDATA[<p>This episode features two segments. 
In the first, <strong>Rebecca Rand</strong> considers the social consequences of "machine allocation behavior" with Cornell researchers <strong>Houston Claure</strong> and <strong>Malte Jung</strong>, authors of a <a href="https://www.seyunkim.com/assets/pdf/machineAllocation.pdf" rel="noopener noreferrer" target="_blank">recent paper</a> on the topic with coauthors <strong>Seyun Kim</strong> and <strong>René Kizilcec</strong>.</p><p>In the second segment, <strong>Justin Hendrix</strong> speaks with <strong>Tom Kemp</strong>, author of a <a href="https://www.tomkemp.ai/containing-big-tech" rel="noopener noreferrer" target="_blank">new book out August 22</a> from Fast Company Press titled <em>Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy.</em></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/containing-big-tech]]></link><guid isPermaLink="false">b930650d-1fd8-494c-8c73-858967739a24</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 20 Aug 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/6044cd1e-48ee-44c5-8b2f-69901c0d72ac/TPP194-converted.mp3" length="29033727" type="audio/mpeg"/><itunes:duration>40:19</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Assessing India&apos;s Digital Personal Data Protection Bill</title><itunes:title>Assessing India&apos;s Digital Personal Data Protection Bill</itunes:title><description><![CDATA[<p>This week, Indian legislators approved a data protection law that will govern the processing of data in the country. The bill creates a data protection board and gives the government new powers, including to request information from companies and to issue orders to block content. 
While there is still work to do to determine how the law will be administered, it joins a range of new tech policy laws and regulations enacted against a backdrop of the increasing centralization of power in India’s government.</p><p>To discuss the bill, <strong>Justin Hendrix</strong> is joined by <strong>Aditi Agrawal</strong>, an independent technology journalist based in New Delhi; <strong>Kamesh Shekar</strong>, a tech policy expert who leads the privacy and data governance vertical at The Dialogue, a think tank based in Delhi; and <strong>Prateek Waghre</strong>, the Policy Director at the Internet Freedom Foundation, a digital rights advocacy organization based in India.</p>]]></description><content:encoded><![CDATA[<p>This week, Indian legislators approved a data protection law that will govern the processing of data in the country. The bill creates a data protection board and gives the government new powers, including to request information from companies and to issue orders to block content. 
While there is still work to do to determine how the law will be administered, it joins a range of new tech policy laws and regulations enacted against a backdrop of the increasing centralization of power in India’s government.</p><p>To discuss the bill, <strong>Justin Hendrix</strong> is joined by <strong>Aditi Agrawal</strong>, an independent technology journalist based in New Delhi; <strong>Kamesh Shekar</strong>, a tech policy expert who leads the privacy and data governance vertical at The Dialogue, a think tank based in Delhi; and <strong>Prateek Waghre</strong>, the Policy Director at the Internet Freedom Foundation, a digital rights advocacy organization based in India.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/assessing-indias-digital-personal-data-protection-bill]]></link><guid isPermaLink="false">e19f5b5c-6839-458d-aef5-0a7206830ff3</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 13 Aug 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/3f0103cb-bb40-4669-9ea4-84921c8f9d78/TPP193-converted.mp3" length="42056825" type="audio/mpeg"/><itunes:duration>58:25</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The State of State AI Laws</title><itunes:title>The State of State AI Laws</itunes:title><description><![CDATA[<p>Lots of voices are calling for the regulation of artificial intelligence. In the US, at present it seems there is no federal legislation close to becoming law. But in 2023 legislative sessions in states across the country, there has been a surge in AI laws proposed and passed, and some have already taken effect. 
To learn more about this wave of legislation, I spoke to two people who just posted a <a href="https://epic.org/the-state-of-state-ai-laws-2023/" rel="noopener noreferrer" target="_blank">comprehensive review of AI laws</a> in US states: <strong>Katrina Zhu</strong>, a law clerk at the Electronic Privacy Information Center (EPIC) and a law student at the UCLA School of Law, and EPIC senior counsel <strong>Ben Winters</strong>.</p><p><br></p>]]></description><content:encoded><![CDATA[<p>Lots of voices are calling for the regulation of artificial intelligence. In the US, at present it seems there is no federal legislation close to becoming law. But in 2023 legislative sessions in states across the country, there has been a surge in AI laws proposed and passed, and some have already taken effect. To learn more about this wave of legislation, I spoke to two people who just posted a <a href="https://epic.org/the-state-of-state-ai-laws-2023/" rel="noopener noreferrer" target="_blank">comprehensive review of AI laws</a> in US states: <strong>Katrina Zhu</strong>, a law clerk at the Electronic Privacy Information Center (EPIC) and a law student at the UCLA School of Law, and EPIC senior counsel <strong>Ben Winters</strong>.</p><p><br></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-state-of-state-ai-laws]]></link><guid isPermaLink="false">642eeb31-e398-4a73-9005-4c5fbd250492</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 06 Aug 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9f2e7d56-ff61-4635-8752-21afcffda64b/TPP191-converted.mp3" length="20707303" type="audio/mpeg"/><itunes:duration>24:39</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Examining the Meta 2020 US Election Research Partnership</title><itunes:title>Examining the Meta 2020 US Election Research 
Partnership</itunes:title><description><![CDATA[<p>A <a href="https://about.fb.com/news/2020/08/research-impact-of-facebook-and-instagram-on-us-election/" rel="noopener noreferrer" target="_blank">unique collaboration</a> between social scientists and Meta to conduct research on Facebook and Instagram during the height of the 2020 US election has at long last produced its first work products. The release of four peer-reviewed studies last week in <em>Science</em> and <em>Nature </em>marks the first of as many as sixteen studies that promise fresh insights into the complex dynamics of social media and public discourse. </p><p>But beyond the findings of the research, the partnership between Meta and some of the most prominent researchers in the field has been held up as a model. With active discussions ongoing in multiple jurisdictions about how best to facilitate access to platform data for independent researchers, it’s worth scrutinizing the strengths and weaknesses of this partnership. And to do that, <strong>Justin Hendrix</strong> is joined by one researcher who was able to observe and evaluate nearly every detail of the process for the last three years: the project's rapporteur, <strong>Michael Wagner</strong>, who in his day job is a professor in the University of&nbsp;Wisconsin-Madison's School of Journalism and Mass Communication.</p>]]></description><content:encoded><![CDATA[<p>A <a href="https://about.fb.com/news/2020/08/research-impact-of-facebook-and-instagram-on-us-election/" rel="noopener noreferrer" target="_blank">unique collaboration</a> between social scientists and Meta to conduct research on Facebook and Instagram during the height of the 2020 US election has at long last produced its first work products. The release of four peer-reviewed studies last week in <em>Science</em> and <em>Nature </em>marks the first of as many as sixteen studies that promise fresh insights into the complex dynamics of social media and public discourse. 
</p><p>But beyond the findings of the research, the partnership between Meta and some of the most prominent researchers in the field has been held up as a model. With active discussions ongoing in multiple jurisdictions about how best to facilitate access to platform data for independent researchers, it’s worth scrutinizing the strengths and weaknesses of this partnership. And to do that, <strong>Justin Hendrix</strong> is joined by one researcher who was able to observe and evaluate nearly every detail of the process for the last three years: the project's rapporteur, <strong>Michael Wagner</strong>, who in his day job is a professor in the University of&nbsp;Wisconsin-Madison's School of Journalism and Mass Communication.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/examining-the-meta-2020-us-election-research-partnership]]></link><guid isPermaLink="false">053ccd81-d954-4690-9ee6-28e15d7c8c45</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Wed, 02 Aug 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/4291ba48-76cb-4268-8326-ecbc8d1a09a7/Examining-the-Meta-2020-US-Election-Research-Partnership-conver.mp3" length="31754830" type="audio/mpeg"/><itunes:duration>37:48</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Alex Winter on The YouTube Effect</title><itunes:title>Alex Winter on The YouTube Effect</itunes:title><description><![CDATA[<p>In today’s podcast, <strong>Justin Hendrix </strong>talks with<strong> </strong>director, writer and actor&nbsp;<strong>Alex Winter,</strong> whose new documentary, <a href="https://www.yteffect.com/" rel="noopener noreferrer" target="_blank"><em>The YouTube Effect</em></a>, is in select theaters now and will be available on streaming platforms on August 8th. 
The film's creators assert that "the story of YouTube is the great dilemma of our times; the technology revolution has made our lives easier and more enriched, while also presenting dangers and challenges that make the world a more perilous place."</p>]]></description><content:encoded><![CDATA[<p>In today’s podcast, <strong>Justin Hendrix </strong>talks with<strong> </strong>director, writer and actor&nbsp;<strong>Alex Winter,</strong> whose new documentary, <a href="https://www.yteffect.com/" rel="noopener noreferrer" target="_blank"><em>The YouTube Effect</em></a>, is in select theaters now and will be available on streaming platforms on August 8th. The film's creators assert that "the story of YouTube is the great dilemma of our times; the technology revolution has made our lives easier and more enriched, while also presenting dangers and challenges that make the world a more perilous place."</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/alex-winter-on-the-youtube-effect]]></link><guid isPermaLink="false">d34b96d9-fb5f-4104-b4be-52219981b258</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 30 Jul 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/409943f8-5866-4818-9b46-685e9bd45bec/TPP189-converted.mp3" length="23622785" type="audio/mpeg"/><itunes:duration>28:07</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Ifeoma Ajunwa on The Quantified Worker</title><itunes:title>Ifeoma Ajunwa on The Quantified Worker</itunes:title><description><![CDATA[<p>Today’s guest on the podcast is <strong>Ifeoma Ajunwa</strong>, the AI.Humanity Professor of Law and Ethics and Director of AI and the Law Program at Emory Law School, and author of the <a href="https://www.cambridge.org/core/books/quantified-worker/CDA274EFF118E3AB6E583424D95DF40D" rel="noopener noreferrer" 
target="_blank"><em>Quantified Worker: Law and Technology in the Modern Workplace</em></a> from Cambridge University Press. The book considers how data and artificial intelligence are changing the workplace, and whether the law is better equipped to help workers in this transition or to provide for the interests of employers.</p>]]></description><content:encoded><![CDATA[<p>Today’s guest on the podcast is <strong>Ifeoma Ajunwa</strong>, the AI.Humanity Professor of Law and Ethics and Director of AI and the Law Program at Emory Law School, and author of the <a href="https://www.cambridge.org/core/books/quantified-worker/CDA274EFF118E3AB6E583424D95DF40D" rel="noopener noreferrer" target="_blank"><em>Quantified Worker: Law and Technology in the Modern Workplace</em></a> from Cambridge University Press. The book considers how data and artificial intelligence are changing the workplace, and whether the law is better equipped to help workers in this transition or to provide for the interests of employers.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/ifeoma-ajunwa-on-the-quantifed-worker]]></link><guid isPermaLink="false">66c503c0-cff3-4587-93df-0d9b90c53483</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 23 Jul 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/97116a44-3a1a-4ee8-8c0e-4aa9d57ebd8a/TPP190-converted.mp3" length="29193298" type="audio/mpeg"/><itunes:duration>40:33</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Justine Bateman on AI, Labor, and the Future of Entertainment</title><itunes:title>Justine Bateman on AI, Labor, and the Future of Entertainment</itunes:title><description><![CDATA[<p>Artificial intelligence will likely impact every type of job. 
But this summer, Hollywood actors and writers have raised substantial concerns about the ways in which generative AI systems may be used to replace aspects of their human craft. The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and the Writers Guild of America (WGA) are currently joined in a dual strike, hoping to make progress on a range of labor grievances with the studios and streaming companies that employ them. </p><p>Today’s guest is <strong>Justine Bateman</strong>, a writer, director, producer, author, and member of the Directors Guild of America (DGA), the WGA, and SAG-AFTRA. Bateman has been on both sides of the camera for much of her life, and has a particularly sharp perspective on how AI may change the entertainment industry, and why it matters to all workers that the unions are standing up on these issues now.</p>]]></description><content:encoded><![CDATA[<p>Artificial intelligence will likely impact every type of job. But this summer, Hollywood actors and writers have raised substantial concerns about the ways in which generative AI systems may be used to replace aspects of their human craft. The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and the Writers Guild of America (WGA) are currently joined in a dual strike, hoping to make progress on a range of labor grievances with the studios and streaming companies that employ them. </p><p>Today’s guest is <strong>Justine Bateman</strong>, a writer, director, producer, author, and member of the Directors Guild of America (DGA), the WGA, and SAG-AFTRA. 
Bateman has been on both sides of the camera for much of her life, and has a particularly sharp perspective on how AI may change the entertainment industry, and why it matters to all workers that the unions are standing up on these issues now.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/justine-bateman-on-ai-labor-and-the-future-of-entertainment]]></link><guid isPermaLink="false">f643e373-e88d-43a3-90fe-c3b0a87fb7bd</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 23 Jul 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/5f094fa4-f44c-4ad8-aef3-b2173be0d283/TPP189-converted.mp3" length="20699241" type="audio/mpeg"/><itunes:duration>28:45</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Content Moderation, Encryption, and the Law</title><itunes:title>Content Moderation, Encryption, and the Law</itunes:title><description><![CDATA[<p>One of the most urgent debates in tech policy at the moment concerns encrypted communications. At issue in proposed legislation, such as the UK’s Online Safety Bill or the EARN It Act put forward in the US Senate, is whether such laws break the privacy promise of end to end encryption by requiring content moderation mechanisms like client-side scanning. 
But to what extent are such moderation techniques legal under existing laws that limit the monitoring and interception of communications?&nbsp;</p><p>Today’s guest is <strong>James Grimmelmann</strong>, a legal scholar with a computer science background who recently <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4457414" rel="noopener noreferrer" target="_blank">conducted a review</a> of various moderation technologies to determine how they might hold up under US federal communication privacy regimes including the Wiretap Act, the Stored Communications Act, and the Communications Assistance for Law Enforcement Act (CALEA). The conversation touches on how technologies like server-side and client-side scanning work, the extent to which the law may fail to accommodate or even contemplate such technologies, and where the encryption debate is headed as these technologies advance.</p>]]></description><content:encoded><![CDATA[<p>One of the most urgent debates in tech policy at the moment concerns encrypted communications. At issue in proposed legislation, such as the UK’s Online Safety Bill or the EARN It Act put forward in the US Senate, is whether such laws break the privacy promise of end to end encryption by requiring content moderation mechanisms like client-side scanning. But to what extent are such moderation techniques legal under existing laws that limit the monitoring and interception of communications?&nbsp;</p><p>Today’s guest is <strong>James Grimmelmann</strong>, a legal scholar with a computer science background who recently <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4457414" rel="noopener noreferrer" target="_blank">conducted a review</a> of various moderation technologies to determine how they might hold up under US federal communication privacy regimes including the Wiretap Act, the Stored Communications Act, and the Communications Assistance for Law Enforcement Act (CALEA). 
The conversation touches on how technologies like server-side and client-side scanning work, the extent to which the law may fail to accommodate or even contemplate such technologies, and where the encryption debate is headed as these technologies advance.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/content-moderation-encryption-and-the-law]]></link><guid isPermaLink="false">af6e767b-836d-463d-9513-2753df3ea6b9</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 16 Jul 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/a468d340-73c6-481c-97c0-0abffb569480/TPP188-converted.mp3" length="27691784" type="audio/mpeg"/><itunes:duration>38:28</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Extended Reality and the Law</title><itunes:title>Extended Reality and the Law</itunes:title><description><![CDATA[<p>Tomorrow's virtual worlds will be governed, at least at first, by today's legal and regulatory regimes. 
How will privacy law, torts, IP, or even criminal law apply in 'extended reality' (XR)?</p><p>Drawing from the discussion at a conference hosted earlier this year at Stanford University called "<a href="https://cyber.fsi.stanford.edu/events/existing-law-and-extended-reality" rel="noopener noreferrer" target="_blank">Existing Law and Extended Reality</a>," this episode asks what challenges will emerge from human behavior and interaction, with one another and with technology, inside XR experiences, and what choices governments and tech companies will face in addressing those challenges.</p><p>This episode of <em>The Sunday Show</em> was produced by <em>Tech Policy Press</em> audio and reporting intern <strong>Rebecca Rand</strong>, and features the voices of experts such as <strong>Brittan Heller</strong> (the organizer of the Stanford conference), <strong>Mary Anne Franks</strong>, <strong>Kent Bye</strong>, <strong>Jameson Spivack</strong>, <strong>Joseph Palmer</strong>, <strong>Eugene Volokh</strong>, <strong>Amie Stepanovich</strong>, <strong>Susan Aaronson</strong>, <strong>Florence G'Sell</strong>, and <strong>Avi Bar Zeev</strong>.</p>]]></description><content:encoded><![CDATA[<p>Tomorrow's virtual worlds will be governed, at least at first, by today's legal and regulatory regimes. 
How will privacy law, torts, IP, or even criminal law apply in 'extended reality' (XR)?</p><p>Drawing from the discussion at a conference hosted earlier this year at Stanford University called "<a href="https://cyber.fsi.stanford.edu/events/existing-law-and-extended-reality" rel="noopener noreferrer" target="_blank">Existing Law and Extended Reality</a>," this episode asks what challenges will emerge from human behavior and interaction, with one another and with technology, inside XR experiences, and what choices governments and tech companies will face in addressing those challenges.</p><p>This episode of <em>The Sunday Show</em> was produced by <em>Tech Policy Press</em> audio and reporting intern <strong>Rebecca Rand</strong>, and features the voices of experts such as <strong>Brittan Heller</strong> (the organizer of the Stanford conference), <strong>Mary Anne Franks</strong>, <strong>Kent Bye</strong>, <strong>Jameson Spivack</strong>, <strong>Joseph Palmer</strong>, <strong>Eugene Volokh</strong>, <strong>Amie Stepanovich</strong>, <strong>Susan Aaronson</strong>, <strong>Florence G'Sell</strong>, and <strong>Avi Bar Zeev</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/extended-reality-and-the-law]]></link><guid isPermaLink="false">031a900c-3dc0-4e3e-b131-b9579455e4b7</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 09 Jul 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/88fc029b-2b9f-4a6f-8e8e-230f40eac94a/TPP187-converted.mp3" length="40103644" type="audio/mpeg"/><itunes:duration>41:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Reading the Civic Information Handbook</title><itunes:title>Reading the Civic Information Handbook</itunes:title><description><![CDATA[<p>This spring, <strong>Karen Kornbluh</strong> and <strong>Adrienne 
Goldstein</strong> from the German Marshall Fund’s Digital Innovation and Democracy Initiative published a document they call the <a href="https://citap.pubpub.org/pub/zypjuy9b/release/1" rel="noopener noreferrer" target="_blank">Civic Information Handbook</a>, which they produced in collaboration with the University of North Carolina at Chapel Hill's Center for Information, Technology, and Public Life (CITAP). Civic information—“important information needed to participate in democracy”—is too often drowned out by viral falsehoods, including conspiracy theories. The Handbook is intended as a resource to help knowledge-producing organizations in the “amplification of fact-based information.” </p><p>To learn more about the handbook and the ideas on which it is based, <strong>Justin Hendrix</strong> spoke to GMF research assistant <strong>Adrienne Goldstein</strong>, as well as <strong>Kathryn Peters</strong>, executive director of UNC CITAP.</p>]]></description><content:encoded><![CDATA[<p>This spring, <strong>Karen Kornbluh</strong> and <strong>Adrienne Goldstein</strong> from the German Marshall Fund’s Digital Innovation and Democracy Initiative published a document they call the <a href="https://citap.pubpub.org/pub/zypjuy9b/release/1" rel="noopener noreferrer" target="_blank">Civic Information Handbook</a>, which they produced in collaboration with the University of North Carolina at Chapel Hill's Center for Information, Technology, and Public Life (CITAP). 
Civic information—“important information needed to participate in democracy”—is too often drowned out by viral falsehoods, including conspiracy theories. The Handbook is intended as a resource to help knowledge-producing organizations in the “amplification of fact-based information.” </p><p>To learn more about the handbook and the ideas on which it is based, <strong>Justin Hendrix</strong> spoke to GMF research assistant <strong>Adrienne Goldstein</strong>, as well as <strong>Kathryn Peters</strong>, executive director of UNC CITAP.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/reading-the-civic-information-handbook]]></link><guid isPermaLink="false">23e47147-4fea-43e0-b030-1a8ad4e051b3</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 06 Jul 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d826f9ab-26b0-4f93-8fd9-047362e24d45/TPP186-converted.mp3" length="18962843" type="audio/mpeg"/><itunes:duration>22:34</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Your Guides Through the Hellscape of AI Hype</title><itunes:title>Your Guides Through the Hellscape of AI Hype</itunes:title><description><![CDATA[<p><strong>Alex Hanna</strong>, the director of research at the Distributed AI Research Institute, and <strong>Emily M. Bender</strong>, a professor of linguistics at the University of Washington, are the hosts of <a href="https://peertube.dair-institute.org/c/mystery_ai_hype_theater/videos" rel="noopener noreferrer" target="_blank"><em>Mystery AI Hype Theater 3000</em></a>, a show that seeks to "break down the&nbsp;AI hype, separate fact from fiction, and science from bloviation." 
<strong>Justin Hendrix</strong> spoke to Alex and Emily about the show's origins, and what they hope will come of the effort to scrutinize statements about the potential of AI that are often fantastical.</p>]]></description><content:encoded><![CDATA[<p><strong>Alex Hanna</strong>, the director of research at the Distributed AI Research Institute, and <strong>Emily M. Bender</strong>, a professor of linguistics at the University of Washington, are the hosts of <a href="https://peertube.dair-institute.org/c/mystery_ai_hype_theater/videos" rel="noopener noreferrer" target="_blank"><em>Mystery AI Hype Theater 3000</em></a>, a show that seeks to "break down the&nbsp;AI hype, separate fact from fiction, and science from bloviation." <strong>Justin Hendrix</strong> spoke to Alex and Emily about the show's origins, and what they hope will come of the effort to scrutinize statements about the potential of AI that are often fantastical.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/your-guides-through-the-hellscape-of-ai-hype]]></link><guid isPermaLink="false">120adff0-0c72-4086-958a-dc21141632f0</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 02 Jul 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b59c3bd8-e953-4545-8175-ba2b8dc27c93/TPP185-converted.mp3" length="21570407" type="audio/mpeg"/><itunes:duration>25:41</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Implications of Canada&apos;s Online News Act</title><itunes:title>The Implications of Canada&apos;s Online News Act</itunes:title><description><![CDATA[<p>Last week, Canada passed the Online News Act, legislation that requires tech platforms to remunerate Canadian news outlets, and the platforms are not happy. In response, Google announced it will remove links to Canadian news outlets from its products. 
Meta&nbsp;also said it would remove Canadian news from Facebook and Instagram.&nbsp;</p><p>The Act itself has yet to be implemented; it must first go through a regulatory process to sort out how it will work. So, these moves by the platforms may be a tactic in the negotiation of the particulars. But the platforms also clearly want to send a message to other jurisdictions where similar legislation is under consideration.</p><p>For an expert opinion on the politics surrounding Canada’s Online News Act and its broader implications, <em>Tech Policy Press</em> Contributing Editor <strong>Ben Lennett</strong> spoke to one person who has been following it closely from his perch in Montreal. <strong>Taylor Owen</strong>&nbsp;is the Beaverbrook Chair in Media, Ethics and Communications, the founding director of&nbsp;The Center for Media, Technology and Democracy, and an Associate Professor in the Max Bell School of Public Policy at McGill University.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>Last week, Canada passed the Online News Act, legislation that requires tech platforms to remunerate Canadian news outlets, and the platforms are not happy. In response, Google announced it will remove links to Canadian news outlets from its products. Meta&nbsp;also said it would remove Canadian news from Facebook and Instagram.&nbsp;</p><p>The Act itself has yet to be implemented; it must first go through a regulatory process to sort out how it will work. So, these moves by the platforms may be a tactic in the negotiation of the particulars. But the platforms also clearly want to send a message to other jurisdictions where similar legislation is under consideration.</p><p>For an expert opinion on the politics surrounding Canada’s Online News Act and its broader implications, <em>Tech Policy Press</em> Contributing Editor <strong>Ben Lennett</strong> spoke to one person who has been following it closely from his perch in Montreal. 
<strong>Taylor Owen</strong>&nbsp;is the Beaverbrook Chair in Media, Ethics and Communications, the founding director of&nbsp;The Center for Media, Technology and Democracy, and an Associate Professor in the Max Bell School of Public Policy at McGill University.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-implications-of-canadas-online-news-act]]></link><guid isPermaLink="false">9902828e-da20-450a-8cf1-b4012b63412b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 30 Jun 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/54c285e9-44ef-422f-b727-7cc4943c661c/TPP184-converted.mp3" length="30467870" type="audio/mpeg"/><itunes:duration>36:16</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Exploring Global Governance of Artificial Intelligence</title><itunes:title>Exploring Global Governance of Artificial Intelligence</itunes:title><description><![CDATA[<p>Over the past few months, there have been a range of voices calling for the urgent regulation of artificial intelligence. Comparisons to the problems of nuclear proliferation abound, so perhaps it’s no surprise that some want a new international body similar to the International Atomic Energy Agency (IAEA). 
But when it comes to AI and global governance, there’s already a lot in play, from ethics councils to various schemes for industry governance, activity on standards, various international agreements, and legislation that will have international impact, such as the EU’s AI Act.&nbsp;</p><p>To help get his head around the complicated, evolving ecology of global AI governance, Justin Hendrix spoke to two of the three authors of a <a href="https://osf.io/preprints/socarxiv/ubxgk" rel="noopener noreferrer" target="_blank">recent paper</a> in the <em>Annual Review of Law and Social Science</em> that attempts to take stock of and explore the tensions between different approaches, including <strong>Michael Veale</strong>, an associate professor in the Faculty of Laws at University College London, where he works on the intersection of computer science, law, and policy; and <strong>Robert Gorwa</strong>, a postdoctoral researcher at the Berlin Social Science Center, a large publicly-funded research institute in Germany.</p>]]></description><content:encoded><![CDATA[<p>Over the past few months, there have been a range of voices calling for the urgent regulation of artificial intelligence. Comparisons to the problems of nuclear proliferation abound, so perhaps it’s no surprise that some want a new international body similar to the International Atomic Energy Agency (IAEA). 
But when it comes to AI and global governance, there’s already a lot in play, from ethics councils to various schemes for industry governance, activity on standards, various international agreements, and legislation that will have international impact, such as the EU’s AI Act.&nbsp;</p><p>To help get his head around the complicated, evolving ecology of global AI governance, Justin Hendrix spoke to two of the three authors of a <a href="https://osf.io/preprints/socarxiv/ubxgk" rel="noopener noreferrer" target="_blank">recent paper</a> in the <em>Annual Review of Law and Social Science</em> that attempts to take stock of and explore the tensions between different approaches, including <strong>Michael Veale</strong>, an associate professor in the Faculty of Laws at University College London, where he works on the intersection of computer science, law, and policy; and <strong>Robert Gorwa</strong>, a postdoctoral researcher at the Berlin Social Science Center, a large publicly-funded research institute in Germany.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/exploring-global-governance-of-artificial-intelligence]]></link><guid isPermaLink="false">f22c6635-0469-46fb-a8cb-3da4e7681e71</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 25 Jun 2023 07:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/a0c6b16c-639a-4494-8b11-e7abfb262648/TPP182-converted.mp3" length="39858691" type="audio/mpeg"/><itunes:duration>47:27</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Conversation with Meredith Whittaker, President of Signal</title><itunes:title>A Conversation with Meredith Whittaker, President of Signal</itunes:title><description><![CDATA[<p>Earlier this month, <strong>Justin Hendrix</strong> traveled to RightsCon, the big gathering of individuals and organizations concerned 
with human rights and technology organized by Access Now. The sprawling event had hundreds of sessions on a wide range of themes, but one topic discussed across multiple tracks was the importance of encrypted communications, especially to groups such as political dissidents and journalists. </p><p>A key panel at RightsCon featured Signal President <strong>Meredith Whittaker</strong>, who spoke out about policies proposed in legislatures around the world that threaten the promise of end-to-end encryption to preserve the privacy of messages sent between individuals and groups. Leaders of encrypted apps have pulled together of late to speak out against the proposed UK Online Safety Bill, signing letters and appearing at events. Shortly after RightsCon, Hendrix connected with Whittaker to learn more about Signal’s posture against such legislation, why she sees encrypted communications as so crucial to freedom and human rights, and how the company thinks about safety and its role in the broader digital ecosystem.</p>]]></description><content:encoded><![CDATA[<p>Earlier this month, <strong>Justin Hendrix</strong> traveled to RightsCon, the big gathering of individuals and organizations concerned with human rights and technology organized by Access Now. The sprawling event had hundreds of sessions on a wide range of themes, but one topic discussed across multiple tracks was the importance of encrypted communications, especially to groups such as political dissidents and journalists. </p><p>A key panel at RightsCon featured Signal President <strong>Meredith Whittaker</strong>, who spoke out about policies proposed in legislatures around the world that threaten the promise of end-to-end encryption to preserve the privacy of messages sent between individuals and groups. Leaders of encrypted apps have pulled together of late to speak out against the proposed UK Online Safety Bill, signing letters and appearing at events. 
Shortly after RightsCon, Hendrix connected with Whittaker to learn more about Signal’s posture against such legislation, why she sees encrypted communications as so crucial to freedom and human rights, and how the company thinks about safety and its role in the broader digital ecosystem.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-conversation-with-meredith-whittaker-president-of-signal]]></link><guid isPermaLink="false">625cde22-6b27-4605-b18a-ab3b70f32388</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 18 Jun 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/77be7cb4-4ee1-4deb-bac1-7ddc6db28f0c/TPP181-converted.mp3" length="27899639" type="audio/mpeg"/><itunes:duration>38:45</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Recoding America: A Conversation with Jennifer Pahlka</title><itunes:title>Recoding America: A Conversation with Jennifer Pahlka</itunes:title><description><![CDATA[<p>In the United States, it’s fair to say that federal, state and local governments have struggled in the era of digitalization. Decades into that era, there is still a gap between the policy outcomes we seek and what citizens often get when they engage with government agencies and services online. 
At its worst, this gap means people aren’t receiving critical services that sustain their lives; at the very least, it reduces faith in government’s ability to solve problems right at the moment when it’s clear the collective challenges we face will demand more of it.&nbsp;</p><p><strong>Jennifer Pahlka</strong>, who served in President <strong>Barack Obama</strong>’s administration as deputy chief technology officer and founded the nonprofit Code for America, has written a book that asks us to reexamine how government works, and how it should work, in the digital age.&nbsp;It's called <a href="https://us.macmillan.com/books/9781250266774/recodingamerica" rel="noopener noreferrer" target="_blank"><em>Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better</em></a><em>, </em>and it's the subject of the podcast today.</p>]]></description><content:encoded><![CDATA[<p>In the United States, it’s fair to say that federal, state and local governments have struggled in the era of digitalization. Decades into that era, there is still a gap between the policy outcomes we seek and what citizens often get when they engage with government agencies and services online. 
At its worst, this gap means people aren’t receiving critical services that sustain their lives; at the very least, it reduces faith in government’s ability to solve problems right at the moment when it’s clear the collective challenges we face will demand more of it.&nbsp;</p><p><strong>Jennifer Pahlka</strong>, who served in President <strong>Barack Obama</strong>’s administration as deputy chief technology officer and founded the nonprofit Code for America, has written a book that asks us to reexamine how government works, and how it should work, in the digital age.&nbsp;It's called <a href="https://us.macmillan.com/books/9781250266774/recodingamerica" rel="noopener noreferrer" target="_blank"><em>Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better</em></a><em>, </em>and it's the subject of the podcast today.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/recoding-america-a-conversation-with-jennifer-pahlka]]></link><guid isPermaLink="false">dd2eb274-5705-4c33-bc24-d57099cf0d17</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 11 Jun 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/99119934-5d4f-42eb-8f85-acc39f2f1e60/TPP180-converted.mp3" length="32352681" type="audio/mpeg"/><itunes:duration>44:56</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Recap of the US-EU Trade and Technology Council Meeting with Mark Scott</title><itunes:title>A Recap of the US-EU Trade and Technology Council Meeting with Mark Scott</itunes:title><description><![CDATA[<p>Last week, a group of very important people, including the U.S. Secretaries of State and Commerce and trade representatives from President Joe Biden’s administration, met with top European Union officials in the heart of the Swedish Lapland for the fourth Ministerial meeting of the 
U.S.-EU Trade and Technology Council, or “TTC”. Pressing needs were tackled, new initiatives were launched, commitments were made, and cooperation was deepened on a range of tech policy issues, at least according to the press releases. </p><p>To hear an unvarnished view from someone who was at the meeting about what might actually come of it all, <strong>Justin Hendrix</strong> invited on a journalist who is, in my opinion, one of the best tech policy reporters in the world: <strong>Mark Scott</strong>, Chief Technology Correspondent for <em>Politico.</em> </p>]]></description><content:encoded><![CDATA[<p>Last week, a group of very important people, including the U.S. Secretaries of State and Commerce and trade representatives from President Joe Biden’s administration, met with top European Union officials in the heart of the Swedish Lapland for the fourth Ministerial meeting of the U.S.-EU Trade and Technology Council, or “TTC”. Pressing needs were tackled, new initiatives were launched, commitments were made, and cooperation was deepened on a range of tech policy issues, at least according to the press releases. 
</p><p>To hear an unvarnished view from someone who was at the meeting about what might actually come of it all, <strong>Justin Hendrix</strong> invited on a journalist who is, in my opinion, one of the best tech policy reporters in the world: <strong>Mark Scott</strong>, Chief Technology Correspondent for <em>Politico.</em> </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-recap-of-the-us-eu-trade-and-technology-council-meeting-with-mark-scott]]></link><guid isPermaLink="false">9a49af9c-c5bf-43c2-9ebd-a4bd0576f020</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 04 Jun 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/3ebdadb3-8d26-44ea-8fc0-bc5abab0722c/TPP179-converted.mp3" length="16111660" type="audio/mpeg"/><itunes:duration>26:51</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Responsible Release and Accountability for Generative AI Systems</title><itunes:title>Responsible Release and Accountability for Generative AI Systems</itunes:title><description><![CDATA[<p>Today’s show has two segments both focused on generative AI. In the first segment, <strong>Justin Hendrix</strong> speaks with <strong>Irene Solaiman</strong>, a researcher who has put a lot of thought into <a href="https://www.arxiv-vanity.com/papers/2302.04844/" rel="noopener noreferrer" target="_blank">evaluating the release strategies</a> for generative AI systems. Organizations big and small have pursued different methods for release of these systems, some holding their models and details about them very close, and some pursuing a more open approach. 
</p><p>And in the second segment, <strong>Justin Hendrix</strong> speaks with <strong>Calli Schroeder</strong> and <strong>Ben Winters</strong> at the Electronic Privacy Information Center about a <a href="https://epic.org/new-epic-report-sheds-light-on-generative-a-i-harms/" rel="noopener noreferrer" target="_blank">new report</a> they helped write about the harms of generative AI, and what to do about them.</p>]]></description><content:encoded><![CDATA[<p>Today’s show has two segments both focused on generative AI. In the first segment, <strong>Justin Hendrix</strong> speaks with <strong>Irene Solaiman</strong>, a researcher who has put a lot of thought into <a href="https://www.arxiv-vanity.com/papers/2302.04844/" rel="noopener noreferrer" target="_blank">evaluating the release strategies</a> for generative AI systems. Organizations big and small have pursued different methods for release of these systems, some holding their models and details about them very close, and some pursuing a more open approach. 
</p><p>And in the second segment, <strong>Justin Hendrix</strong> speaks with <strong>Calli Schroeder</strong> and <strong>Ben Winters</strong> at the Electronic Privacy Information Center about a <a href="https://epic.org/new-epic-report-sheds-light-on-generative-a-i-harms/" rel="noopener noreferrer" target="_blank">new report</a> they helped write about the harms of generative AI, and what to do about them.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/responsible-release-and-accountability-for-generative-ai-systems]]></link><guid isPermaLink="false">b5cbbc93-b477-468c-a0d1-294bfba2e0a5</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 28 May 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f3b3b824-fd2a-4b83-a2db-0b715acc7835/TPP178-converted.mp3" length="34995236" type="audio/mpeg"/><itunes:duration>48:36</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Supreme Court Decides: A Final Word on Gonzalez v. Google and Twitter v. Taamneh with Anupam Chander</title><itunes:title>The Supreme Court Decides: A Final Word on Gonzalez v. Google and Twitter v. Taamneh with Anupam Chander</itunes:title><description><![CDATA[<p>Last week, the Supreme Court released decisions in <em>Gonzalez v. Google</em>, <em>LLC</em>, and <em>Twitter, Inc. v. Taamneh</em>. In this episode we’ll discuss what it tells us about how the Court is thinking about social media and intermediary liability, and what it might tell us about future cases the Court may hear. I’m joined by an expert who follows these issues closely, and has shared his expertise with us on this podcast before: <strong>Anupam Chander</strong>, a law professor at Georgetown University.</p>]]></description><content:encoded><![CDATA[<p>Last week, the Supreme Court released decisions in <em>Gonzalez v. 
Google</em>, <em>LLC</em>, and <em>Twitter, Inc. v. Taamneh</em>. In this episode we’ll discuss what it tells us about how the Court is thinking about social media and intermediary liability, and what it might tell us about future cases the Court may hear. I’m joined by an expert who follows these issues closely, and has shared his expertise with us on this podcast before: <strong>Anupam Chander</strong>, a law professor at Georgetown University.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-supreme-court-decides-a-final-word-on-gonzalez-v-google-and-twitter-v-taamneh-with-anupam-chander]]></link><guid isPermaLink="false">6130cc75-2de0-4577-8592-98739bf5d52e</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 21 May 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/723900ca-1a11-4313-9506-c0bcfba5359c/TPP177-converted.mp3" length="14749596" type="audio/mpeg"/><itunes:duration>20:29</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Nick Seaver on Computing Taste: Algorithms and the Makers of Music Recommendation</title><itunes:title>Nick Seaver on Computing Taste: Algorithms and the Makers of Music Recommendation</itunes:title><description><![CDATA[<p>Today’s episode features a discussion with <strong>Nick Seaver</strong>, a professor at Tufts University and the author of <a href="https://press.uchicago.edu/ucp/books/book/chicago/C/bo183892298.html" rel="noopener noreferrer" target="_blank"><em>Computing Taste: Algorithms and the Makers of Music Recommendation</em></a> from the University of Chicago Press. Nick is an anthropologist who studies how people use technology to make sense of cultural things. 
His book is the product of ethnographic observation and conversations with developers working on music recommendation algorithms and other systems designed to understand and cater to user preferences. His research gives us a better understanding of the motivations of the executives and engineers designing systems to command our attention, which he considers to be “a currency, a capacity, a filter, a spotlight, and a moral responsibility.”</p>]]></description><content:encoded><![CDATA[<p>Today’s episode features a discussion with <strong>Nick Seaver</strong>, a professor at Tufts University and the author of <a href="https://press.uchicago.edu/ucp/books/book/chicago/C/bo183892298.html" rel="noopener noreferrer" target="_blank"><em>Computing Taste: Algorithms and the Makers of Music Recommendation</em></a> from the University of Chicago Press. Nick is an anthropologist who studies how people use technology to make sense of cultural things. His book is the product of ethnographic observation and conversations with developers working on music recommendation algorithms and other systems designed to understand and cater to user preferences. 
His research gives us a better understanding of the motivations of the executives and engineers designing systems to command our attention, which he considers to be “a currency, a capacity, a filter, a spotlight, and a moral responsibility.”</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/nick-seaver-on-computing-taste-algorithms-and-the-makers-of-music-recommendation]]></link><guid isPermaLink="false">475bec04-5394-4d9b-96ae-2903bef2af42</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 14 May 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/88149fda-e5c0-48c1-ac0f-e4f5be333da4/TPP176-converted.mp3" length="35993605" type="audio/mpeg"/><itunes:duration>49:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Malcolm Harris on Palo Alto and the Project of Silicon Valley</title><itunes:title>Malcolm Harris on Palo Alto and the Project of Silicon Valley</itunes:title><description><![CDATA[<p><strong>Justin Hendrix</strong> speaks to writer <strong>Malcolm Harris</strong> about his book, <a href="https://www.hachettebookgroup.com/titles/malcolm-harris/palo-alto/9780316592031/?lens=little-brown" rel="noopener noreferrer" target="_blank"><em>PALO ALTO: A HISTORY OF CALIFORNIA, CAPITALISM, AND THE WORLD</em></a>, which considers the historical antecedents for the project of Silicon Valley.</p>]]></description><content:encoded><![CDATA[<p><strong>Justin Hendrix</strong> speaks to writer <strong>Malcolm Harris</strong> about his book, <a href="https://www.hachettebookgroup.com/titles/malcolm-harris/palo-alto/9780316592031/?lens=little-brown" rel="noopener noreferrer" target="_blank"><em>PALO ALTO: A HISTORY OF CALIFORNIA, CAPITALISM, AND THE WORLD</em></a>, which considers the historical antecedents for the project of Silicon 
Valley.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/malcolm-harris-on-palo-alto-and-the-project-of-silicon-valley]]></link><guid isPermaLink="false">d84795ad-0c2c-4221-a014-b2e9ba3f3a56</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 07 May 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/3ca60466-6e48-4989-890c-934f78d2aea8/TPP175-converted.mp3" length="29497169" type="audio/mpeg"/><itunes:duration>49:10</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Gus Hurwitz on Technology and the Law</title><itunes:title>Gus Hurwitz on Technology and the Law</itunes:title><description><![CDATA[<p>Recently <strong>Justin Hendrix</strong> caught up with <strong>Gus Hurwitz</strong>, a professor of law at the University of Nebraska and the director of the Governance and Technology Center. He’s also the Director of Law and Economics Programs at the International Center for Law and Economics, a Portland-based think tank that focuses on antitrust law and economics policy issues. Hurwitz told Hendrix he’s leaving Nebraska at the end of the semester for a new position that is soon to be announced.&nbsp;</p><p>The conversation covered a range of topics, from how to think about the relationship between technology and the law, how to get engineers to engage with ethical and legal concepts, the view of the coastal tech policy discourse from Hurwitz’s vantage in the middle of the country, the role and politics of the Federal Trade Commission, and why he finds some inspiration in Frank Herbert’s <em>Dune</em>.</p>]]></description><content:encoded><![CDATA[<p>Recently <strong>Justin Hendrix</strong> caught up with <strong>Gus Hurwitz</strong>, a professor of law at the University of Nebraska and the director of the Governance and Technology Center. 
He’s also the Director of Law and Economics Programs at the International Center for Law and Economics, a Portland-based think tank that focuses on antitrust law and economics policy issues. Hurwitz told Hendrix he’s leaving Nebraska at the end of the semester for a new position that is soon to be announced.&nbsp;</p><p>The conversation covered a range of topics, from how to think about the relationship between technology and the law, how to get engineers to engage with ethical and legal concepts, the view of the coastal tech policy discourse from Hurwitz’s vantage in the middle of the country, the role and politics of the Federal Trade Commission, and why he finds some inspiration in Frank Herbert’s <em>Dune</em>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/gus-hurwitz-on-technology-and-the-law]]></link><guid isPermaLink="false">4ff7c1fa-13bb-44d3-acd2-52c9ebfeebb8</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Wed, 03 May 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/dc36561d-4ecb-4b29-9b93-36c59b8376b1/TPP171-converted.mp3" length="36019913" type="audio/mpeg"/><itunes:duration>50:02</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Twitter Whistleblower Anika Collier Navaroli Looks Forward</title><itunes:title>Twitter Whistleblower Anika Collier Navaroli Looks Forward</itunes:title><description><![CDATA[<p>In the course of its investigation into the insurrection at the US Capitol, the House Select Committee on January 6th spoke to hundreds of witnesses, including social media executives with insight into the role that platforms played in propagating the false claims that motivated violence that day, and in connecting and facilitating the movement and organization of people that sought to overthrow the election.</p><p>One of the individuals that testified 
to the Select Committee was a former Twitter official, <strong>Anika Collier Navaroli</strong>. <strong>Justin Hendrix</strong> had a chance to speak with Anika earlier this month, to hear how her thinking has evolved in this time under the spotlight, and what she’s hoping to do next to continue her journey as an intellectual and an activist working at the intersection of tech, media and democracy.</p>]]></description><content:encoded><![CDATA[<p>In the course of its investigation into the insurrection at the US Capitol, the House Select Committee on January 6th spoke to hundreds of witnesses, including social media executives with insight into the role that platforms played in propagating the false claims that motivated violence that day, and in connecting and facilitating the movement and organization of people that sought to overthrow the election.</p><p>One of the individuals that testified to the Select Committee was a former Twitter official, <strong>Anika Collier Navaroli</strong>. 
<strong>Justin Hendrix</strong> had a chance to speak with Anika earlier this month, to hear how her thinking has evolved in this time under the spotlight, and what she’s hoping to do next to continue her journey as an intellectual and an activist working at the intersection of tech, media and democracy.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/twitter-whistleblower-anika-collier-navaroli-looks-forward]]></link><guid isPermaLink="false">f2f2835c-f9e6-4799-a85d-d8ab96b8552d</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 30 Apr 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e881f2d3-7cab-4ce8-a482-3a7d8522442c/TPP174-converted.mp3" length="23843366" type="audio/mpeg"/><itunes:duration>39:44</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Conversation with Baroness Beeban Kidron on Child Online Safety</title><itunes:title>A Conversation with Baroness Beeban Kidron on Child Online Safety</itunes:title><description><![CDATA[<p><em>Tech Policy Press </em>editor <strong>Justin Hendrix</strong> is joined by a UK lawmaker and advocate who has been influential in the global push for more protections for children online. <strong>Baroness Beeban Kidron</strong> OBE&nbsp;is a Crossbench member of the House of Lords and sits on the Democracy and Digital Technologies Committee, and she’s a Commissioner for UNESCO's Broadband Commission for Sustainable Development, where she is a member of the Working Group on Child Online Safety. 
She’s the Founder and Chair of 5Rights Foundation, which seeks to ensure children and young people are afforded the right to participate in the digital world “creatively, knowledgeably and fearlessly.”&nbsp;</p><p>5Rights played a key role in advancing the UK Children’s Code, as well as the California Age Appropriate Design Code Act, passed last year. Baroness Kidron discussed the broad trajectory of efforts to address online child safety, what she thinks about the legal challenge to the California law and some of the harsher provisions of child safety laws in other parts of the country, and where she believes the fight for child digital safety is headed in the future.&nbsp;</p>]]></description><content:encoded><![CDATA[<p><em>Tech Policy Press </em>editor <strong>Justin Hendrix</strong> is joined by a UK lawmaker and advocate who has been influential in the global push for more protections for children online. <strong>Baroness Beeban Kidron</strong> OBE&nbsp;is a Crossbench member of the House of Lords and sits on the Democracy and Digital Technologies Committee, and she’s a Commissioner for UNESCO's Broadband Commission for Sustainable Development, where she is a member of the Working Group on Child Online Safety. She’s the Founder and Chair of 5Rights Foundation, which seeks to ensure children and young people are afforded the right to participate in the digital world “creatively, knowledgeably and fearlessly.”&nbsp;</p><p>5Rights played a key role in advancing the UK Children’s Code, as well as the California Age Appropriate Design Code Act, passed last year. 
Baroness Kidron discussed the broad trajectory of efforts to address online child safety, what she thinks about the legal challenge to the California law and some of the harsher provisions of child safety laws in other parts of the country, and where she believes the fight for child digital safety is headed in the future.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-conversation-with-baroness-beeban-kidron-on-child-online-safety]]></link><guid isPermaLink="false">0c8df21a-5d0c-4e63-92fd-347938807c55</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 27 Apr 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/a2978797-e236-4584-9b2b-7677a9828647/TPP173-converted.mp3" length="27349413" type="audio/mpeg"/><itunes:duration>45:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Conversation with Denmark&apos;s Tech Ambassador, Anne Marie Engtoft Larsen</title><itunes:title>A Conversation with Denmark&apos;s Tech Ambassador, Anne Marie Engtoft Larsen</itunes:title><description><![CDATA[<p>In this episode, <em>Tech Policy Press </em>board member and UCLA School of Law postdoctoral research fellow <strong>Courtney Radsch</strong> interviews <strong>Anne Marie Engtoft Larsen</strong>, Denmark’s Tech Ambassador, who represents the Danish Government to the global tech industry and in global governance forums on emerging technologies.&nbsp;The discussion focuses on the role of tech in society, how to regulate artificial intelligence, how to accommodate non-English and indigenous languages in a tech ecosystem focused on scale, and how to capitalize journalism in the age of social media. 
</p>]]></description><content:encoded><![CDATA[<p>In this episode, <em>Tech Policy Press </em>board member and UCLA School of Law postdoctoral research fellow <strong>Courtney Radsch</strong> interviews <strong>Anne Marie Engtoft Larsen</strong>, Denmark’s Tech Ambassador, who represents the Danish Government to the global tech industry and in global governance forums on emerging technologies.&nbsp;The discussion focuses on the role of tech in society, how to regulate artificial intelligence, how to accommodate non-English and indigenous languages in a tech ecosystem focused on scale, and how to capitalize journalism in the age of social media. </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-conversation-with-denmarks-tech-ambassador-anne-marie-engtoft-larsen]]></link><guid isPermaLink="false">2af0f1ff-9f3b-4528-bd9c-60f43424cfb4</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 23 Apr 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c77beb6f-fb8b-4941-af9d-c89857764641/TPP172-converted.mp3" length="26084742" type="audio/mpeg"/><itunes:duration>43:28</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>AI Accountability and the Risks of Social Interfaces</title><itunes:title>AI Accountability and the Risks of Social Interfaces</itunes:title><description><![CDATA[<p>This episode features two segments. We’ll hear from <strong>Ellen P. Goodman</strong>, Senior Advisor for Algorithmic Justice at the U.S. 
National Telecommunications and Information Administration (NTIA), which just&nbsp;<a href="https://ntia.gov/issues/artificial-intelligence/request-for-comments" rel="noopener noreferrer" target="_blank">launched an inquiry</a>&nbsp;seeking comment on “what policies will help businesses, government, and the public be able to trust that Artificial Intelligence (AI) systems work as claimed – and without causing harm.”&nbsp;</p><p>And, we’ll speak with <strong>Dr. Michal Luria</strong>, a Research Fellow at the Center for Democracy &amp; Technology who had a column in Wired this month under the headline,<em> </em><a href="https://www.wired.com/story/chatgpt-social-roles-psychology/" rel="noopener noreferrer" target="_blank"><em>Your ChatGPT Relationship Status Shouldn’t Be Complicated.</em></a><strong><em> </em></strong>She says the way people talk to each other is influenced by their social roles, but ChatGPT is blurring the lines of communication.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>This episode features two segments. We’ll hear from <strong>Ellen P. Goodman</strong>, Senior Advisor for Algorithmic Justice at the U.S. National Telecommunications and Information Administration (NTIA), which just&nbsp;<a href="https://ntia.gov/issues/artificial-intelligence/request-for-comments" rel="noopener noreferrer" target="_blank">launched an inquiry</a>&nbsp;seeking comment on “what policies will help businesses, government, and the public be able to trust that Artificial Intelligence (AI) systems work as claimed – and without causing harm.”&nbsp;</p><p>And, we’ll speak with <strong>Dr. 
Michal Luria</strong>, a Research Fellow at the Center for Democracy &amp; Technology who had a column in Wired this month under the headline,<em> </em><a href="https://www.wired.com/story/chatgpt-social-roles-psychology/" rel="noopener noreferrer" target="_blank"><em>Your ChatGPT Relationship Status Shouldn’t Be Complicated.</em></a><strong><em> </em></strong>She says the way people talk to each other is influenced by their social roles, but ChatGPT is blurring the lines of communication.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/ai-accountability-and-the-risks-of-social-interfaces]]></link><guid isPermaLink="false">e8585b84-54bc-4540-85d1-e97621f15e67</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 21 Apr 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c7c8575d-1983-4f44-a884-129569706201/TPP170-converted.mp3" length="27014456" type="audio/mpeg"/><itunes:duration>37:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Is OpenAI Cultivating Fear to Sell AI?</title><itunes:title>Is OpenAI Cultivating Fear to Sell AI?</itunes:title><description><![CDATA[<p>In this episode, <strong>Justin Hendrix</strong> is joined by a columnist and author who’s spent the last few years thinking about a past era of automation, a process that yielded him a valuable perspective when considering this moment in time. <em>Los Angeles Times</em> technology columnist <strong>Brian Merchant</strong> is the author of a recent column under the headline, "<a href="https://www.latimes.com/business/technology/story/2023-03-31/column-afraid-of-ai-the-startups-selling-it-want-you-to-be" rel="noopener noreferrer" target="_blank">Afraid of AI? 
The startups selling it want you to be</a>," and the forthcoming book <a href="https://www.hachettebookgroup.com/titles/brian-merchant/blood-in-the-machine/9780316487740/?lens=little-brown" rel="noopener noreferrer" target="_blank"><em>Blood in the Machine: The Origins of the Rebellion Against Big Tech</em></a>, which tells the story of the 19th century Luddite movement.</p>]]></description><content:encoded><![CDATA[<p>In this episode, <strong>Justin Hendrix</strong> is joined by a columnist and author who’s spent the last few years thinking about a past era of automation, a process that yielded him a valuable perspective when considering this moment in time. <em>Los Angeles Times</em> technology columnist <strong>Brian Merchant</strong> is the author of a recent column under the headline, "<a href="https://www.latimes.com/business/technology/story/2023-03-31/column-afraid-of-ai-the-startups-selling-it-want-you-to-be" rel="noopener noreferrer" target="_blank">Afraid of AI? The startups selling it want you to be</a>," and the forthcoming book <a href="https://www.hachettebookgroup.com/titles/brian-merchant/blood-in-the-machine/9780316487740/?lens=little-brown" rel="noopener noreferrer" target="_blank"><em>Blood in the Machine: The Origins of the Rebellion Against Big Tech</em></a>, which tells the story of the 19th century Luddite movement.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/is-openai-cultivating-fear-to-sell-ai]]></link><guid isPermaLink="false">8e2f3da7-42db-4611-8c47-0778d2e290e3</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 18 Apr 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e76265b3-b704-4411-8304-4eea53592189/TPP169-converted.mp3" length="23282826" 
type="audio/mpeg"/><itunes:duration>32:20</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Cambridge Analytica Scandal, Five Years Later: Part 2</title><itunes:title>The Cambridge Analytica Scandal, Five Years Later: Part 2</itunes:title><description><![CDATA[<p>This is Part 2 of two episodes looking back on the Cambridge Analytica scandal, which arguably kicked off five years ago when the <em>New York Times</em> and the <em>Guardian</em> published articles on March 17, 2018. The <em>Times</em> headline was “<a href="https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html" rel="noopener noreferrer" target="_blank">How Trump Consultants Exploited the Data of Millions</a>,” while the <em>Guardian</em> went with “<a href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election" rel="noopener noreferrer" target="_blank">Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach</a>.”</p><p>That number, and the scale of the scandal, would only grow in the weeks and months ahead. It served as a major catalyzing moment for privacy concerns in the social media age. In these two episodes we’ll look back on what has happened since, the extent to which perceptions of what happened have changed or been challenged, and what unresolved questions that emerged from the scandal mean for the future.</p><p>In this second episode, we’ll hear a panel discussion hosted by the Bipartisan Policy Center that I helped moderate at the end of March. 
The panel featured <strong>Katie Harbath</strong>, a former Facebook executive who is now a Fellow in the Digital Democracy Project at the Bipartisan Policy Center; <strong>Alex Lundry,</strong> co-founder of Tunnl and Deep Root Analytics; and <strong>Matthew Rosenberg, </strong>a Washington-based Correspondent for the <em>New York Times</em> and one of the individuals on the byline of that first story on Cambridge Analytica.</p>]]></description><content:encoded><![CDATA[<p>This is Part 2 of two episodes looking back on the Cambridge Analytica scandal, which arguably kicked off five years ago when the <em>New York Times</em> and the <em>Guardian</em> published articles on March 17, 2018. The <em>Times</em> headline was “<a href="https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html" rel="noopener noreferrer" target="_blank">How Trump Consultants Exploited the Data of Millions</a>,” while the <em>Guardian</em> went with “<a href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election" rel="noopener noreferrer" target="_blank">Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach</a>.”</p><p>That number, and the scale of the scandal, would only grow in the weeks and months ahead. It served as a major catalyzing moment for privacy concerns in the social media age. In these two episodes we’ll look back on what has happened since, the extent to which perceptions of what happened have changed or been challenged, and what unresolved questions that emerged from the scandal mean for the future.</p><p>In this second episode, we’ll hear a panel discussion hosted by the Bipartisan Policy Center that I helped moderate at the end of March. 
The panel featured <strong>Katie Harbath</strong>, a former Facebook executive who is now a Fellow in the Digital Democracy Project at the Bipartisan Policy Center; <strong>Alex Lundry,</strong> co-founder of Tunnl and Deep Root Analytics; and <strong>Matthew Rosenberg, </strong>a Washington-based Correspondent for the <em>New York Times</em> and one of the individuals on the byline of that first story on Cambridge Analytica.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-cambridge-analytica-scandal-five-years-later-part-2]]></link><guid isPermaLink="false">b4eca88a-4e2d-42df-bdd8-22f422efd80a</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 16 Apr 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/0a2e1a00-af8e-4a32-818b-b6ca65b94d07/TPP168-converted.mp3" length="44169922" type="audio/mpeg"/><itunes:duration>01:01:21</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Cambridge Analytica Scandal, Five Years Later: Part 1</title><itunes:title>The Cambridge Analytica Scandal, Five Years Later: Part 1</itunes:title><description><![CDATA[<p>This is Part 1 of two episodes looking back on the Cambridge Analytica scandal, which arguably kicked off five years ago when the <em>New York Times</em> and the <em>Guardian</em> published articles on March 17, 2018. 
The <em>Times</em> headline was “<a href="https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html" rel="noopener noreferrer" target="_blank">How Trump Consultants Exploited the Data of Millions</a>,” while the <em>Guardian</em> went with “<a href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election" rel="noopener noreferrer" target="_blank">Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach</a>.”</p><p>That number, and the scale of the scandal, would only grow in the weeks and months ahead. It served as a major catalyzing moment for privacy concerns in the social media age. In these two episodes we’ll look back on what has happened since, the extent to which perceptions of what happened have changed or been challenged, and what unresolved questions that emerged from the scandal mean for the future.</p><p>In this first episode, <strong>Justin Hendrix</strong> speaks with <strong>David Carroll</strong>, a professor of media design in the MFA Design and Technology graduate program at the School of Art, Media and Technology at Parsons School of Design at The New School. Carroll legally challenged Cambridge Analytica in the UK courts to recapture his 2016 voter profile using European data protection law, events that were chronicled in the 2019 Netflix documentary <em>The Great Hack</em>. </p>]]></description><content:encoded><![CDATA[<p>This is Part 1 of two episodes looking back on the Cambridge Analytica scandal, which arguably kicked off five years ago when the <em>New York Times</em> and the <em>Guardian</em> published articles on March 17, 2018. 
The <em>Times</em> headline was “<a href="https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html" rel="noopener noreferrer" target="_blank">How Trump Consultants Exploited the Data of Millions</a>,” while the <em>Guardian</em> went with “<a href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election" rel="noopener noreferrer" target="_blank">Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach</a>.”</p><p>That number, and the scale of the scandal, would only grow in the weeks and months ahead. It served as a major catalyzing moment for privacy concerns in the social media age. In these two episodes we’ll look back on what has happened since, the extent to which perceptions of what happened have changed or been challenged, and what unresolved questions that emerged from the scandal mean for the future.</p><p>In this first episode, <strong>Justin Hendrix</strong> speaks with <strong>David Carroll</strong>, a professor of media design in the MFA Design and Technology graduate program at the School of Art, Media and Technology at Parsons School of Design at The New School. Carroll legally challenged Cambridge Analytica in the UK courts to recapture his 2016 voter profile using European data protection law, events that were chronicled in the 2019 Netflix documentary <em>The Great Hack</em>. 
</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-cambridge-analytica-scandal-five-years-later-part-1]]></link><guid isPermaLink="false">4addbf66-90bd-42e7-a8fd-16601efd0d4f</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 16 Apr 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b6ee672f-a0cc-434a-997e-d385fa1c7402/TPP167-converted.mp3" length="28078909" type="audio/mpeg"/><itunes:duration>39:00</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Behind the Mic with Quinta Jurecic, Bridget Todd &amp; Justin Hendrix</title><itunes:title>Behind the Mic with Quinta Jurecic, Bridget Todd &amp; Justin Hendrix</itunes:title><description><![CDATA[<p>Two weeks ago, <em>Tech Policy Press</em> editor <strong>Justin Hendrix</strong> participated in <a href="https://techandsociety.georgetown.edu/tech-and-society-week/" rel="noopener noreferrer" target="_blank">Tech and Society</a> week, a series of events across Georgetown’s campus hosted by <strong>Emily Tavoulareas</strong>, Managing Chair of the Georgetown Initiative on Tech &amp; Society.&nbsp;The panel featured a discussion between three podcast hosts focused on tech and tech policy, including Hendrix and:</p><ul><li><strong>Bridget Todd, </strong>director of public communications for Ultraviolet, a gender justice organization trying to build a more feminist, anti-racist internet and the creator and host of the iHeartRadio tech and culture podcast <a href="https://www.tangoti.com/" rel="noopener noreferrer" target="_blank"><em>There Are No Girls on the Internet</em></a></li><li><strong>Quinta Jurecic, </strong>a fellow in governance studies at the Brookings Institution, a senior editor at <em>Lawfare</em>, and a contributing writer at <em>The Atlantic</em>. 
Jurecic is one of an array of hosts on the <em>Lawfare</em> podcast, and she’s the co-host of a long-running series called <a href="https://www.lawfareblog.com/topic/arbiters-truth" rel="noopener noreferrer" target="_blank"><em>Arbiters of Truth</em></a> that focuses on the information ecosystem.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Two weeks ago, <em>Tech Policy Press</em> editor <strong>Justin Hendrix</strong> participated in <a href="https://techandsociety.georgetown.edu/tech-and-society-week/" rel="noopener noreferrer" target="_blank">Tech and Society</a> week, a series of events across Georgetown’s campus hosted by <strong>Emily Tavoulareas</strong>, Managing Chair of the Georgetown Initiative on Tech &amp; Society.&nbsp;The panel featured a discussion between three podcast hosts focused on tech and tech policy, including Hendrix and:</p><ul><li><strong>Bridget Todd, </strong>director of public communications for Ultraviolet, a gender justice organization trying to build a more feminist, anti-racist internet and the creator and host of the iHeartRadio tech and culture podcast <a href="https://www.tangoti.com/" rel="noopener noreferrer" target="_blank"><em>There Are No Girls on the Internet</em></a></li><li><strong>Quinta Jurecic, </strong>a fellow in governance studies at the Brookings Institution, a senior editor at <em>Lawfare</em>, and a contributing writer at <em>The Atlantic</em>. 
Jurecic is one of an array of hosts on the <em>Lawfare</em> podcast, and she’s the co-host of a long-running series called <a href="https://www.lawfareblog.com/topic/arbiters-truth" rel="noopener noreferrer" target="_blank"><em>Arbiters of Truth</em></a> that focuses on the information ecosystem.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/behind-the-mic-with-quinta-jurecic-bridget-todd-justin-hendrix]]></link><guid isPermaLink="false">358b758d-a6ed-49d5-bcc0-3f88f83b2cce</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 09 Apr 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/0c166667-3dea-4cb2-a9dd-f2e703e96be3/TPP166-converted.mp3" length="34356144" type="audio/mpeg"/><itunes:duration>47:43</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Gaia Bernstein on Gaining Control over Addictive Technologies</title><itunes:title>Gaia Bernstein on Gaining Control over Addictive Technologies</itunes:title><description><![CDATA[<p>Across the United States, there is a growing number of lawsuits that seek to hold tech firms accountable for various alleged harms. My guest today is tracking such suits closely. <strong>Gaia Bernstein</strong> is a Law Professor, Co-Director of the Institute for Privacy Protection and Co-Director of the Gibbons Institute for Law Science and Technology at the Seton Hall University School of Law. 
She writes, teaches, and lectures at the intersection of law, technology, health, and privacy, and she is the author of a new book on the subject, just out from Cambridge University Press, titled <a href="https://www.amazon.com/Unwired-Gaining-Control-Addictive-Technologies/dp/1009257935" rel="noopener noreferrer" target="_blank"><em>Unwired: Gaining Control over Addictive Technologies</em></a><em>.</em></p>]]></description><content:encoded><![CDATA[<p>Across the United States, there is a growing number of lawsuits that seek to hold tech firms accountable for various alleged harms. My guest today is tracking such suits closely. <strong>Gaia Bernstein</strong> is a Law Professor, Co-Director of the Institute for Privacy Protection and Co-Director of the Gibbons Institute for Law Science and Technology at the Seton Hall University School of Law. She writes, teaches, and lectures at the intersection of law, technology, health, and privacy, and she is the author of a new book on the subject, just out from Cambridge University Press, titled <a href="https://www.amazon.com/Unwired-Gaining-Control-Addictive-Technologies/dp/1009257935" rel="noopener noreferrer" target="_blank"><em>Unwired: Gaining Control over Addictive Technologies</em></a><em>.</em></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/gaia-bernstein-on-gaining-control-over-addictive-technologies]]></link><guid isPermaLink="false">ba7f8ff4-b8e2-4d2a-af59-0ae2adc6cbf9</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 02 Apr 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/efd51d87-aafc-4290-99e0-514492d54717/TPP165-converted.mp3" length="22192769" type="audio/mpeg"/><itunes:duration>36:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>More Than a Glitch: A Conversation with Meredith 
Broussard</title><itunes:title>More Than a Glitch: A Conversation with Meredith Broussard</itunes:title><description><![CDATA[<p>Is technology ultimately neutral? Are the biases we discover in the systems we interact with today just bugs or defects that we can trust will be addressed in version 2.0 or 3.0 of the system? Or is there something inherently wrong with the tech industry’s approach to developing algorithms and software?&nbsp;</p><p>In today’s podcast, we speak to the author of a new book that takes on this question. In <a href="https://mitpress.mit.edu/9780262047654/more-than-a-glitch/" rel="noopener noreferrer" target="_blank"><em>More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech</em></a><em>, </em>data scientist and journalist <strong>Meredith Broussard</strong> considers the ways in which racism, sexism, and ableism are coded into systems, and what we must do to ensure a more inclusive future. </p>]]></description><content:encoded><![CDATA[<p>Is technology ultimately neutral? Are the biases we discover in the systems we interact with today just bugs or defects that we can trust will be addressed in version 2.0 or 3.0 of the system? Or is there something inherently wrong with the tech industry’s approach to developing algorithms and software?&nbsp;</p><p>In today’s podcast, we speak to the author of a new book that takes on this question. In <a href="https://mitpress.mit.edu/9780262047654/more-than-a-glitch/" rel="noopener noreferrer" target="_blank"><em>More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech</em></a><em>, </em>data scientist and journalist <strong>Meredith Broussard</strong> considers the ways in which racism, sexism, and ableism are coded into systems, and what we must do to ensure a more inclusive future. 
</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/more-than-a-glitch-a-conversation-with-meredith-broussard]]></link><guid isPermaLink="false">735c3878-db2e-4841-af42-6960a3cc8ea1</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 26 Mar 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/46ffe9f8-cf8f-43fd-bba6-aaed864ece38/TPP164-converted.mp3" length="25165502" type="audio/mpeg"/><itunes:duration>34:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Generative AI, Section 230 and Liability: Assessing the Questions</title><itunes:title>Generative AI, Section 230 and Liability: Assessing the Questions</itunes:title><description><![CDATA[<p>In this episode of the podcast, we hear three perspectives on generative AI systems and the extent to which their makers may be exposed to potential liability. I spoke to three experts, each with their own views on questions such as whether Section 230 of the Communications Decency Act-- which has provided broad immunity to internet platforms that host third party content-- will apply to systems like ChatGPT. 
</p><p>Guests, in order of appearance, include:&nbsp;</p><ul><li><strong>Jess Miers,</strong> legal advocacy counsel at the Chamber of Progress, an industry coalition whose partners include Meta, Apple, Google, Amazon, and others;</li><li><strong>James Grimmelmann</strong>, a law professor at Cornell with appointments at Cornell Tech and Cornell Law School;</li><li><strong>Hany Farid</strong>, a professor at the University of California Berkeley with a joint appointment in the computer and information science departments.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In this episode of the podcast, we hear three perspectives on generative AI systems and the extent to which their makers may be exposed to potential liability. I spoke to three experts, each with their own views on questions such as whether Section 230 of the Communications Decency Act-- which has provided broad immunity to internet platforms that host third party content-- will apply to systems like ChatGPT. </p><p>Guests, in order of appearance, include:&nbsp;</p><ul><li><strong>Jess Miers,</strong> legal advocacy counsel at the Chamber of Progress, an industry coalition whose partners include Meta, Apple, Google, Amazon, and others;</li><li><strong>James Grimmelmann</strong>, a law professor at Cornell with appointments at Cornell Tech and Cornell Law School;</li><li><strong>Hany Farid</strong>, a professor at the University of California Berkeley with a joint appointment in the computer and information science departments.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/generative-ai-section-230-and-liability-assessing-the-questions]]></link><guid isPermaLink="false">33b42694-a4cb-408c-93c1-475c2398de32</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 23 Mar 2023 09:00:00 -0400</pubDate><enclosure 
url="https://podcasts.captivate.fm/media/46c0d734-6e0c-458b-860c-03e738b6fac1/TPP163-converted.mp3" length="54562678" type="audio/mpeg"/><itunes:duration>01:15:47</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A History of Data from the Age of Reason to the Age of Algorithms</title><itunes:title>A History of Data from the Age of Reason to the Age of Algorithms</itunes:title><description><![CDATA[<p>At Columbia University, data scientist <strong>Chris Wiggins </strong>and historian <strong>Matthew Jones</strong> teach a course called <em>Data: Past, Present and Future</em>. Out of this collaboration has come a book, <a href="https://wwnorton.com/books/how-data-happened" rel="noopener noreferrer" target="_blank"><em>How Data Happened: A History from the Age of Reason to the Age of Algorithms</em></a>, to be published on Tuesday, March 21st by W.W. Norton. It should be required reading for anyone working with data of any sort to solve problems. The book promises a sweeping history of data and its technical, political, and ethical impact on people and power. </p>]]></description><content:encoded><![CDATA[<p>At Columbia University, data scientist <strong>Chris Wiggins </strong>and historian <strong>Matthew Jones</strong> teach a course called <em>Data: Past, Present and Future</em>. Out of this collaboration has come a book, <a href="https://wwnorton.com/books/how-data-happened" rel="noopener noreferrer" target="_blank"><em>How Data Happened: A History from the Age of Reason to the Age of Algorithms</em></a>, to be published on Tuesday, March 21st by W.W. Norton. It should be required reading for anyone working with data of any sort to solve problems. The book promises a sweeping history of data and its technical, political, and ethical impact on people and power. 
</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-history-of-data-from-the-age-of-reason-to-the-age-of-algorithms]]></link><guid isPermaLink="false">27809e9e-de18-4aef-88a0-5335408df518</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 19 Mar 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/05388d49-a128-47d0-b784-957e0ca22f3a/TPP162-converted.mp3" length="27867471" type="audio/mpeg"/><itunes:duration>46:27</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Conversation with Tobias Bacherle</title><itunes:title>A Conversation with Tobias Bacherle</itunes:title><description><![CDATA[<p>Answers on how best to regulate technology differ depending on the values and politics of any particular jurisdiction. Yet it’s worth looking for points of consensus. In general these days, we in the United States have a lot to learn from lawmakers and regulators in Europe, who are further down the path in their regulatory experiments. </p><p>In this episode, <strong>Justin Hendrix</strong> speaks with one German lawmaker, <strong>Tobias Bacherle</strong>, who was elected to the Bundestag in 2021 representing Alliance 90/The Greens.&nbsp;The conversation touches on issues including encryption, the Digital Services Act, the US-EU Trade and Technology Council, and the relationship between tech and the environment.</p>]]></description><content:encoded><![CDATA[<p>Answers on how best to regulate technology differ depending on the values and politics of any particular jurisdiction. Yet it’s worth looking for points of consensus. In general these days, we in the United States have a lot to learn from lawmakers and regulators in Europe, who are further down the path in their regulatory experiments. 
</p><p>In this episode, <strong>Justin Hendrix</strong> speaks with one German lawmaker, <strong>Tobias Bacherle</strong>, who was elected to the Bundestag in 2021 representing Alliance 90/The Greens.&nbsp;The conversation touches on issues including encryption, the Digital Services Act, the US-EU Trade and Technology Council, and the relationship between tech and the environment.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-conversation-with-tobias-bacherle]]></link><guid isPermaLink="false">ffcb7966-190b-46eb-be41-f39983bd351b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 17 Mar 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f8b8cb84-f807-4142-84f5-a4abb162a450/TPP161-converted.mp3" length="26063463" type="audio/mpeg"/><itunes:duration>43:26</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Peter Pomerantzev on Tech, Media and Democracy</title><itunes:title>Peter Pomerantzev on Tech, Media and Democracy</itunes:title><description><![CDATA[<p>In the spring, <em>Tech Policy Press</em> editor <strong>Justin Hendrix</strong> teaches a course called Tech, Media and Democracy that is a partnership of faculty at NYU, Cornell Tech, CUNY’s Queens College, The New School and Columbia Journalism School. The course hosts a range of expert speakers on issues at the intersection of those topics, and graduate students in journalism, information science, computer science, media studies and design collaborate to produce prototypes and investigations of key issues. </p><p>A recent guest speaker was <strong>Peter Pomerantzev</strong>, an author and researcher who is concerned with propaganda, polarization and how we come to understand the world around us. 
<strong>Emily Bell</strong>, director of the Tow Center at Columbia and one of the faculty on the course, led the discussion, which ranges from the information component of the war in Ukraine to the tension between democracy and authoritarianism to the role of journalism and technology in shaping public discourse.</p>]]></description><content:encoded><![CDATA[<p>In the spring, <em>Tech Policy Press</em> editor <strong>Justin Hendrix</strong> teaches a course called Tech, Media and Democracy that is a partnership of faculty at NYU, Cornell Tech, CUNY’s Queens College, The New School and Columbia Journalism School. The course hosts a range of expert speakers on issues at the intersection of those topics, and graduate students in journalism, information science, computer science, media studies and design collaborate to produce prototypes and investigations of key issues. </p><p>A recent guest speaker was <strong>Peter Pomerantzev</strong>, an author and researcher who is concerned with propaganda, polarization and how we come to understand the world around us. 
<strong>Emily Bell</strong>, director of the Tow Center at Columbia and one of the faculty on the course, led the discussion, which ranges from the information component of the war in Ukraine to the tension between democracy and authoritarianism to the role of journalism and technology in shaping public discourse.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/peter-pomerantzev-on-tech-media-and-democracy]]></link><guid isPermaLink="false">0869d067-6bdd-47a4-b380-a0c1865421c9</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 12 Mar 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/93839639-52bf-494f-a65b-d3bf0b93a818/TPP160-converted.mp3" length="42603458" type="audio/mpeg"/><itunes:duration>44:23</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Mitigating the Ethical and Legal Risks of Synthetic Media and Generative AI</title><itunes:title>Mitigating the Ethical and Legal Risks of Synthetic Media and Generative AI</itunes:title><description><![CDATA[<p>In this episode we look at questions around ethical, legal and business risks surrounding so-called generative AI and synthetic media, and the opportunity that exists if they are employed responsibly. </p><p>The first segment features <strong>Matthew Ferraro</strong>, an attorney at the firm WilmerHale who counsels clients about such risks and, with his colleagues, recently wrote a piece for <em>Tech Policy Press</em> on the "<a href="https://techpolicy.press/ten-legal-and-business-risks-of-chatbots-and-generative-ai/" rel="noopener noreferrer" target="_blank">Ten Legal and Business Risks of Chatbots and Generative AI</a>." 
</p><p>And the second segment features <strong>Claire Leibowicz </strong>from the Partnership on AI and <strong>Sam Gregory</strong> from the human rights organization WITNESS, who worked together with other partners to develop a set of <a href="https://syntheticmedia.partnershiponai.org/" rel="noopener noreferrer" target="_blank">Responsible Practices for Synthetic Media</a>.</p>]]></description><content:encoded><![CDATA[<p>In this episode we look at questions around ethical, legal and business risks surrounding so-called generative AI and synthetic media, and the opportunity that exists if they are employed responsibly. </p><p>The first segment features <strong>Matthew Ferraro</strong>, an attorney at the firm WilmerHale who counsels clients about such risks and, with his colleagues, recently wrote a piece for <em>Tech Policy Press</em> on the "<a href="https://techpolicy.press/ten-legal-and-business-risks-of-chatbots-and-generative-ai/" rel="noopener noreferrer" target="_blank">Ten Legal and Business Risks of Chatbots and Generative AI</a>." 
</p><p>And the second segment features <strong>Claire Leibowicz </strong>from the Partnership on AI and <strong>Sam Gregory</strong> from the human rights organization WITNESS, who worked together with other partners to develop a set of <a href="https://syntheticmedia.partnershiponai.org/" rel="noopener noreferrer" target="_blank">Responsible Practices for Synthetic Media</a>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/mitigating-the-ethical-and-legal-risks-of-synthetic-media-and-generative-ai]]></link><guid isPermaLink="false">1b04b103-e388-488f-93dd-9002d92c7290</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 05 Mar 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/2bae1d49-a97d-483a-83be-05b65b2bebfc/TPP159-converted.mp3" length="43792063" type="audio/mpeg"/><itunes:duration>01:00:49</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Of Legislators and Large Language Models</title><itunes:title>Of Legislators and Large Language Models</itunes:title><description><![CDATA[<p>How will so-called "generative AI" tools such as OpenAI's ChatGPT change our politics, and change the way we interact with our representatives in democratic government? This episode features three segments, with:</p><ul><li><strong>Kadia Goba</strong>, a politics reporter at <em>Semafor </em>and author of a <a href="https://www.semafor.com/article/02/13/2023/the-members-of-congress-trying-to-prevent-ai-pocalypse" rel="noopener noreferrer" target="_blank">recent report</a> on the AI Caucus in the U.S. 
House of Representatives;</li><li><strong>Micah Sifry</strong>, an expert observer of the relationship between tech and politics and the author of <em>The Connector</em>, a Substack newsletter on&nbsp;democracy, organizing, movements and tech, where he <a href="https://theconnector.substack.com/p/how-chatgpt-3-will-transform-politics" rel="noopener noreferrer" target="_blank">recently wrote</a> about ChatGPT and politics;</li><li><strong>Zach Graves</strong>, executive director of Lincoln Network, and <strong>Marci Harris</strong>, CEO and co-founder of PopVox.com, co-authors with <strong>Daniel Schuman</strong> at DemandProgress of a <a href="https://techpolicy.press/bots-in-congress-the-risks-and-benefits-of-emerging-ai-tools-in-the-legislative-branch/" rel="noopener noreferrer" target="_blank">recent essay</a> in <em>Tech Policy Press</em> on the risks and benefits of emerging AI tools in the legislative branch. </li></ul><br/>]]></description><content:encoded><![CDATA[<p>How will so-called "generative AI" tools such as OpenAI's ChatGPT change our politics, and change the way we interact with our representatives in democratic government? This episode features three segments, with:</p><ul><li><strong>Kadia Goba</strong>, a politics reporter at <em>Semafor </em>and author of a <a href="https://www.semafor.com/article/02/13/2023/the-members-of-congress-trying-to-prevent-ai-pocalypse" rel="noopener noreferrer" target="_blank">recent report</a> on the AI Caucus in the U.S. 
House of Representatives;</li><li><strong>Micah Sifry</strong>, an expert observer of the relationship between tech and politics and the author of <em>The Connector</em>, a Substack newsletter on&nbsp;democracy, organizing, movements and tech, where he <a href="https://theconnector.substack.com/p/how-chatgpt-3-will-transform-politics" rel="noopener noreferrer" target="_blank">recently wrote</a> about ChatGPT and politics;</li><li><strong>Zach Graves</strong>, executive director of Lincoln Network, and <strong>Marci Harris</strong>, CEO and co-founder of PopVox.com, co-authors with <strong>Daniel Schuman</strong> at DemandProgress of a <a href="https://techpolicy.press/bots-in-congress-the-risks-and-benefits-of-emerging-ai-tools-in-the-legislative-branch/" rel="noopener noreferrer" target="_blank">recent essay</a> in <em>Tech Policy Press</em> on the risks and benefits of emerging AI tools in the legislative branch. </li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/of-legislators-and-large-language-models]]></link><guid isPermaLink="false">eb8ad944-0c57-4844-bc6d-570c46cbbac8</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 04 Mar 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/99d28290-28a5-477e-af05-362ce08285e2/TPP158-converted.mp3" length="34150148" type="audio/mpeg"/><itunes:duration>56:55</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>An Exit Interview with a Hill Staffer</title><itunes:title>An Exit Interview with a Hill Staffer</itunes:title><description><![CDATA[<p>The past few years have seen a number of high-profile&nbsp;hearings on Capitol Hill, with lawmakers expressing&nbsp;concern and even outrage at tech CEOs, often for their failure to enforce even their own policies. 
And there have been high-profile investigations by certain committees, including the investigation of competition in digital markets in the House Judiciary Committee and its Subcommittee on Antitrust, Commercial and Administrative Law. But when it comes to passing laws, Congress has made little progress in the domain of tech policy.&nbsp;</p><p>An academic and a tech policy expert, today’s guest played an active role in the investigations and legislative proposals led by Democrats over the last few years. <strong>Anna Lenhart</strong> served as a staffer on the House Judiciary Committee Antitrust Subcommittee under then-<strong>Chairman David Cicilline</strong> (D-RI), where she supported tech oversight and investigations. She was also senior technology policy advisor to <strong>Representative&nbsp;Lori Trahan</strong> (D-MA), who serves on the Energy and Commerce Committee. I caught up with Anna for a kind of exit interview, as she recently left Congress to return to academia and a handful of projects focused on some of the issues she cared most about in her time on the Hill.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>The past few years have seen a number of high-profile&nbsp;hearings on Capitol Hill, with lawmakers expressing&nbsp;concern and even outrage at tech CEOs, often for their failure to enforce even their own policies. And there have been high-profile investigations by certain committees, including the investigation of competition in digital markets in the House Judiciary Committee and its Subcommittee on Antitrust, Commercial and Administrative Law. But when it comes to passing laws, Congress has made little progress in the domain of tech policy.&nbsp;</p><p>An academic and a tech policy expert, today’s guest played an active role in the investigations and legislative proposals led by Democrats over the last few years. 
<strong>Anna Lenhart</strong> served as a staffer on the House Judiciary Committee Antitrust Subcommittee under then-<strong>Chairman David Cicilline</strong> (D-RI), where she supported tech oversight and investigations. She was also senior technology policy advisor to <strong>Representative&nbsp;Lori Trahan</strong> (D-MA), who serves on the Energy and Commerce Committee. I caught up with Anna for a kind of exit interview, as she recently left Congress to return to academia and a handful of projects focused on some of the issues she cared most about in her time on the Hill.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/an-exit-interview-with-a-hill-staffer]]></link><guid isPermaLink="false">b392ccd6-37e1-4365-9ff6-0517010c12df</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 26 Feb 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/fdde4351-e9db-4237-9cda-8c23cd0195b0/TPP156-converted.mp3" length="31416953" type="audio/mpeg"/><itunes:duration>43:38</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The People Powering Amazon&apos;s Trickle-Down Monopoly</title><itunes:title>The People Powering Amazon&apos;s Trickle-Down Monopoly</itunes:title><description><![CDATA[<p>Amazon is one of the world’s largest and most powerful companies. Yet one of the engines of its might is largely invisible to customers: its vast network of millions of third-party sellers. 
In today’s episode we talk with <strong>Moira Weigel</strong>, an Assistant Professor of Communications Studies at Northeastern University and the author of a recent report for Data &amp; Society, <a href="https://datasociety.net/library/amazons-trickle-down-monopoly/" rel="noopener noreferrer" target="_blank"><em>Amazon's Trickle Down Monopoly: Third Party Sellers and the Transformation of Small Businesses</em></a>. For the report, Weigel spent a good amount of time trying to understand the experience of the people operating the small businesses that power Amazon’s global expansion.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>Amazon is one of the world’s largest and most powerful companies. Yet one of the engines of its might is largely invisible to customers: its vast network of millions of third-party sellers. In today’s episode we talk with <strong>Moira Weigel</strong>, an Assistant Professor of Communications Studies at Northeastern University and the author of a recent report for Data &amp; Society, <a href="https://datasociety.net/library/amazons-trickle-down-monopoly/" rel="noopener noreferrer" target="_blank"><em>Amazon's Trickle Down Monopoly: Third Party Sellers and the Transformation of Small Businesses</em></a>. 
For the report, Weigel spent a good amount of time trying to understand the experience of the people operating the small businesses that power Amazon’s global expansion.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-people-powering-amazons-trickle-down-monopoly]]></link><guid isPermaLink="false">fa1ad101-f9b1-45d4-a0bd-abde04e7c5ac</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 26 Feb 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/7daf9b31-8fba-415b-8546-f845032302c1/TPP157-converted.mp3" length="18024318" type="audio/mpeg"/><itunes:duration>30:02</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Deep Dive Into Gonzalez v. Google</title><itunes:title>A Deep Dive Into Gonzalez v. Google</itunes:title><description><![CDATA[<p>This episode features four segments that dive into <em>Gonzalez v. Google</em>, a case before the Supreme Court that could have major implications for platform liability for online speech. First, we get a primer on the basics of the case itself; then, three separate perspectives on it. </p><p>Asking the questions is <strong>Ben Lennett</strong>, a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy. 
He has worked in various research and advocacy roles for the past decade, including serving as the Editor in Chief of Recoding.tech and as policy director for the Open Technology Institute at the New America Foundation.</p><p>Ben’s first interview is with two student editors at the publication <em>Just Security</em>, <strong>Aaron Fisher</strong> and <strong>Justin Cole</strong>, whom <em>Tech Policy Press</em> worked with this week to co-publish a <a href="https://techpolicy.press/mapping-the-key-arguments-in-supreme-court-amicus-briefs-in-gonzalez-v-google/" rel="noopener noreferrer" target="_blank">review of key arguments in the amicus briefs</a> filed with the Court on the Gonzalez case. Then, we’ll hear three successive interviews, with <strong>Mary McCord</strong>, Executive Director of the Institute for Constitutional Advocacy and Protection (ICAP) and a Visiting Professor of Law at Georgetown University Law Center;&nbsp;<strong>Anupam Chander</strong>, a Professor of Law and Technology at Georgetown Law; and <strong>David Brody</strong>, Managing Attorney of the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights Under Law.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>This episode features four segments that dive into <em>Gonzalez v. Google</em>, a case before the Supreme Court that could have major implications for platform liability for online speech. First, we get a primer on the basics of the case itself; then, three separate perspectives on it. </p><p>Asking the questions is <strong>Ben Lennett</strong>, a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy. 
He has worked in various research and advocacy roles for the past decade, including serving as the Editor in Chief of Recoding.tech and as policy director for the Open Technology Institute at the New America Foundation.</p><p>Ben’s first interview is with two student editors at the publication <em>Just Security</em>, <strong>Aaron Fisher</strong> and <strong>Justin Cole</strong>, whom <em>Tech Policy Press</em> worked with this week to co-publish a <a href="https://techpolicy.press/mapping-the-key-arguments-in-supreme-court-amicus-briefs-in-gonzalez-v-google/" rel="noopener noreferrer" target="_blank">review of key arguments in the amicus briefs</a> filed with the Court on the Gonzalez case. Then, we’ll hear three successive interviews, with <strong>Mary McCord</strong>, Executive Director of the Institute for Constitutional Advocacy and Protection (ICAP) and a Visiting Professor of Law at Georgetown University Law Center;&nbsp;<strong>Anupam Chander</strong>, a Professor of Law and Technology at Georgetown Law; and <strong>David Brody</strong>, Managing Attorney of the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights Under Law.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-deep-dive-into-gonzalez-v-google]]></link><guid isPermaLink="false">2c8a51c0-e567-4d45-8962-24bb566a3050</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 19 Feb 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f8bb15d9-2859-4263-9f3e-cf0e96d6a0ca/TPP155-converted.mp3" length="73508387" type="audio/mpeg"/><itunes:duration>01:27:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Evaluating Cries of Censorship on Capitol Hill</title><itunes:title>Evaluating Cries of Censorship on Capitol Hill</itunes:title><description><![CDATA[<p>Elon Musk, the platform’s 
new owner, says that Twitter is both a social media company and a "crime scene." The crime he appears most concerned about is purported censorship by the tech firms, which he says has occurred at the U.S. government’s direction. Musk, who claims he is leading a “revolution” against such practices, has given a small number of people access to internal Twitter documents, the so-called Twitter Files, including emails and internal message board communications that, in their selective release, demonstrate executives at the firm engaging with politicians and federal agencies on a range of issues, from COVID-19 to election disinformation.&nbsp;</p><p>This week, there were two hearings in the House of Representatives on this subject, including a Committee on Oversight and Accountability hearing titled “Protecting Speech from Government Interference and Social Media Bias, Part 1: Twitter’s Role in Suppressing the Biden Laptop Story,” and a hearing of the new Select Subcommittee on the Weaponization of the Federal Government that was intended to “discuss the politicization of the FBI and DOJ&nbsp;and attacks on&nbsp;American civil liberties.”</p><p>If we look past the conspiracy theories and legal gibberish, is there any there there? Should we pursue reforms and require greater transparency around the interaction between platforms and government? 
In this episode, we hear from three experts:</p><ul><li><a href="https://www.rstreet.org/team/shoshana-weissmann/" rel="noopener noreferrer" target="_blank">Shoshana Weissmann</a>, Digital Director and Fellow at the R Street Institute</li><li><a href="https://www.clemson.edu/cbshs/about/profiles/index.html?userid=darrenl" rel="noopener noreferrer" target="_blank">Darren Linvill</a>, Associate Professor, Clemson University Media Forensics Hub </li><li><a href="https://www.techdirt.com/user/mmasnick/" rel="noopener noreferrer" target="_blank">Mike Masnick</a>, Founder of <em>TechDirt</em> and CEO of the Copia Institute</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Elon Musk, the platform’s new owner, says that Twitter is both a social media company and a "crime scene." The crime he appears most concerned about is purported censorship by the tech firms, which he says has occurred at the U.S. government’s direction. Musk, who claims he is leading a “revolution” against such practices, has given a small number of people access to internal Twitter documents, the so-called Twitter Files, including emails and internal message board communications that, in their selective release, demonstrate executives at the firm engaging with politicians and federal agencies on a range of issues, from COVID-19 to election disinformation.&nbsp;</p><p>This week, there were two hearings in the House of Representatives on this subject, including a Committee on Oversight and Accountability hearing titled “Protecting Speech from Government Interference and Social Media Bias, Part 1: Twitter’s Role in Suppressing the Biden Laptop Story,” and a hearing of the new Select Subcommittee on the Weaponization of the Federal Government that was intended to “discuss the politicization of the FBI and DOJ&nbsp;and attacks on&nbsp;American civil liberties.”</p><p>If we look past the conspiracy theories and legal gibberish, is there any there there? 
Should we pursue reforms and require greater transparency around the interaction between platforms and government? In this episode, we hear from three experts:</p><ul><li><a href="https://www.rstreet.org/team/shoshana-weissmann/" rel="noopener noreferrer" target="_blank">Shoshana Weissmann</a>, Digital Director and Fellow at the R Street Institute</li><li><a href="https://www.clemson.edu/cbshs/about/profiles/index.html?userid=darrenl" rel="noopener noreferrer" target="_blank">Darren Linvill</a>, Associate Professor, Clemson University Media Forensics Hub </li><li><a href="https://www.techdirt.com/user/mmasnick/" rel="noopener noreferrer" target="_blank">Mike Masnick</a>, Founder of <em>TechDirt</em> and CEO of the Copia Institute</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/evaluating-cries-of-censorship-on-capitol-hill]]></link><guid isPermaLink="false">2f322c41-3417-4f04-a728-028387413cd3</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 12 Feb 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/4f70201b-46e7-4985-b47c-407e1b8c5efe/TPP154-converted.mp3" length="35883410" type="audio/mpeg"/><itunes:duration>49:50</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Voices in the Code: Algorithms, People, and Values</title><itunes:title>Voices in the Code: Algorithms, People, and Values</itunes:title><description><![CDATA[<p>Today, we’re going to listen in on a panel discussion that took place at the end of last year, hosted by the Knight First Amendment Institute at Columbia University. The Institute’s Research Director, <strong>Katy Glenn Bass</strong>, hosted a&nbsp;conversation based on themes from the scholar <strong>David G. Robinson</strong>’s first book&nbsp;<em>Voices in the Code</em>. 
The book contains the story of how a group of patients, doctors, data scientists, and advocates worked together to develop a new way to match kidney donations for transplants, with the goal of making the process fair and open. The book offers insights into how algorithmic systems that are often heavily freighted with moral and political complexity can and should be developed with care to avoid excluding the voices of non-technical stakeholders from the outcome, and is a guide for policymakers concerned with questions around transparency, safety and equity in such systems.&nbsp;Panelists included Robinson, as well as scholars <strong>Deborah Raji</strong> and <strong>J. Nathan Matias.</strong></p>]]></description><content:encoded><![CDATA[<p>Today, we’re going to listen in on a panel discussion that took place at the end of last year, hosted by the Knight First Amendment Institute at Columbia University. The Institute’s Research Director, <strong>Katy Glenn Bass</strong>, hosted a&nbsp;conversation based on themes from the scholar <strong>David G. Robinson</strong>’s first book&nbsp;<em>Voices in the Code</em>. The book contains the story of how a group of patients, doctors, data scientists, and advocates worked together to develop a new way to match kidney donations for transplants, with the goal of making the process fair and open. The book offers insights into how algorithmic systems that are often heavily freighted with moral and political complexity can and should be developed with care to avoid excluding the voices of non-technical stakeholders from the outcome, and is a guide for policymakers concerned with questions around transparency, safety and equity in such systems.&nbsp;Panelists included Robinson, as well as scholars <strong>Deborah Raji</strong> and <strong>J. 
Nathan Matias.</strong></p>]]></content:encoded><link><![CDATA[https://techpolicy.press/voices-in-the-code-algorithms-people-and-values]]></link><guid isPermaLink="false">00731756-bc6c-4560-b30a-3d25f6970d7b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 05 Feb 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/cad3374b-e875-4b56-87e5-170e925cc3b4/TPP153-converted.mp3" length="35261479" type="audio/mpeg"/><itunes:duration>58:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Samuel Woolley on Manufacturing Consensus: Understanding Propaganda in the Age of Automation and Anonymity</title><itunes:title>Samuel Woolley on Manufacturing Consensus: Understanding Propaganda in the Age of Automation and Anonymity</itunes:title><description><![CDATA[<p>Frequently on this podcast we come back to questions around information, misinformation, and disinformation. In this age of digital communications, the metaphorical flora and fauna of the information ecosystem are closely studied by scientists from a range of disciplines. We're joined in this episode by one such scientist who uses observation and ethnography as his method, bringing a particularly sharp eye to the study of propaganda, media manipulation, and how those in power— and those who seek power— use such tactics. </p><p>Samuel Woolley is the author of <a href="https://yalebooks.yale.edu/book/9780300251234/manufacturing-consensus/" rel="noopener noreferrer" target="_blank"><em>Manufacturing Consensus: Understanding Propaganda in the Age of Automation and Anonymity</em></a>, just out this week from Yale University Press. 
He’s also the author of&nbsp;<a href="https://www.publicaffairsbooks.com/titles/samuel-woolley/the-reality-game/9781541768253/" rel="noopener noreferrer" target="_blank"><em>The Reality Game: How the Next Wave of Technology Will Break the Truth</em></a>;&nbsp;co-author, with Nick Monaco, of&nbsp;a book on <a href="https://www.wiley.com/en-us/Bots-p-9781509543601" rel="noopener noreferrer" target="_blank"><em>Bots</em></a>; and co-editor, with Dr. Philip N. Howard, of a book on&nbsp;<a href="https://global.oup.com/academic/product/computational-propaganda-9780190931414?cc=us&amp;lang=en&amp;" rel="noopener noreferrer" target="_blank"><em>Computational Propaganda</em></a>.</p>]]></description><content:encoded><![CDATA[<p>Frequently on this podcast we come back to questions around information, misinformation, and disinformation. In this age of digital communications, the metaphorical flora and fauna of the information ecosystem are closely studied by scientists from a range of disciplines. We're joined in this episode by one such scientist who uses observation and ethnography as his method, bringing a particularly sharp eye to the study of propaganda, media manipulation, and how those in power— and those who seek power— use such tactics. </p><p>Samuel Woolley is the author of <a href="https://yalebooks.yale.edu/book/9780300251234/manufacturing-consensus/" rel="noopener noreferrer" target="_blank"><em>Manufacturing Consensus: Understanding Propaganda in the Age of Automation and Anonymity</em></a>, just out this week from Yale University Press. 
He’s also the author of&nbsp;<a href="https://www.publicaffairsbooks.com/titles/samuel-woolley/the-reality-game/9781541768253/" rel="noopener noreferrer" target="_blank"><em>The Reality Game: How the Next Wave of Technology Will Break the Truth</em></a>;&nbsp;co-author, with Nick Monaco, of&nbsp;a book on <a href="https://www.wiley.com/en-us/Bots-p-9781509543601" rel="noopener noreferrer" target="_blank"><em>Bots</em></a>; and co-editor, with Dr. Philip N. Howard, of a book on&nbsp;<a href="https://global.oup.com/academic/product/computational-propaganda-9780190931414?cc=us&amp;lang=en&amp;" rel="noopener noreferrer" target="_blank"><em>Computational Propaganda</em></a>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/samuel-woolley-on-manufacturing-consensus-understanding-propaganda-in-the-age-of-automation-and-anonymity]]></link><guid isPermaLink="false">b0aca4c6-e5d1-41e0-87a0-a79028fa6885</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 31 Jan 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/1b5d9489-ec2a-46a6-b37d-9c05df2223dd/TPP152-converted.mp3" length="30659726" type="audio/mpeg"/><itunes:duration>42:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>An Indigenous Perspective on Generative AI</title><itunes:title>An Indigenous Perspective on Generative AI</itunes:title><description><![CDATA[<p>Earlier this month, Getty&nbsp;Images, one of the world’s most prominent suppliers of editorial photography, stock images, and other forms of media, <a href="https://www.theverge.com/2023/1/17/23558516/ai-art-copyright-stable-diffusion-getty-images-lawsuit" rel="noopener noreferrer" target="_blank">announced</a> that it had <a href="https://newsroom.gettyimages.com/en/getty-images/getty-images-statement" rel="noopener noreferrer" target="_blank">commenced legal 
proceedings</a> in the High Court of Justice in London against Stability AI, a British startup firm that says it builds AI solutions using "collective intelligence," claiming Stability AI infringed on Getty’s intellectual property rights by including content owned or represented by Getty&nbsp;Images in its training data. Getty says Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty&nbsp;Images without a license, which the company says is to the detriment of the content’s creators. The notion at the heart of Getty’s assertion (that generative AI tools like Stable Diffusion and OpenAI’s DALL-E 2 are in fact exploiting the creators of the images their models are trained on) could have significant implications for the field.&nbsp;</p><p>Earlier this month I attended a <a href="https://sites.google.com/stanford.edu/xr-2023/home" rel="noopener noreferrer" target="_blank">symposium</a> on Existing Law and Extended Reality, hosted at Stanford Law School. There, I met today’s guest, <a href="https://www.linkedin.com/in/runningwolf/?originalSubdomain=ca" rel="noopener noreferrer" target="_blank">Michael Running Wolf</a>, who brings a unique perspective to questions related to AI and ownership, as a former Amazon software engineer, a PhD student in computer science at McGill University, and as a Northern Cheyenne man intent on preserving the language and culture of native people. 
</p>]]></description><content:encoded><![CDATA[<p>Earlier this month, Getty&nbsp;Images, one of the world’s most prominent suppliers of editorial photography, stock images, and other forms of media, <a href="https://www.theverge.com/2023/1/17/23558516/ai-art-copyright-stable-diffusion-getty-images-lawsuit" rel="noopener noreferrer" target="_blank">announced</a> that it had <a href="https://newsroom.gettyimages.com/en/getty-images/getty-images-statement" rel="noopener noreferrer" target="_blank">commenced legal proceedings</a> in the High Court of Justice in London against Stability AI, a British startup firm that says it builds AI solutions using "collective intelligence," claiming Stability AI infringed on Getty’s intellectual property rights by including content owned or represented by Getty&nbsp;Images in its training data. Getty says Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty&nbsp;Images without a license, which the company says is to the detriment of the content’s creators. The notion at the heart of Getty’s assertion (that generative AI tools like Stable Diffusion and OpenAI’s DALL-E 2 are in fact exploiting the creators of the images their models are trained on) could have significant implications for the field.&nbsp;</p><p>Earlier this month I attended a <a href="https://sites.google.com/stanford.edu/xr-2023/home" rel="noopener noreferrer" target="_blank">symposium</a> on Existing Law and Extended Reality, hosted at Stanford Law School. 
There, I met today’s guest, <a href="https://www.linkedin.com/in/runningwolf/?originalSubdomain=ca" rel="noopener noreferrer" target="_blank">Michael Running Wolf</a>, who brings a unique perspective to questions related to AI and ownership, as a former Amazon software engineer, a PhD student in computer science at McGill University, and as a Northern Cheyenne man intent on preserving the language and culture of native people. </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/an-indigenous-perspective-on-generative-ai]]></link><guid isPermaLink="false">cbd30194-bea3-4f02-b352-8343309b5ef6</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 29 Jan 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/fe19b9a9-0765-42f0-816d-f2826f834eaa/TPP151-converted.mp3" length="33146686" type="audio/mpeg"/><itunes:duration>46:02</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Causal Link Between Facebook and Mental Health</title><itunes:title>A Causal Link Between Facebook and Mental Health</itunes:title><description><![CDATA[<p>In 2004, Mark Zuckerberg launched “TheFacebook” at Harvard University before rolling the social networking site out to other students at Dartmouth, Columbia, and Yale. Soon, it was available on hundreds of college and university campuses, and thereafter the rollout included high schools.&nbsp;</p><p>Now, there are nearly 3 billion monthly active users of the site, and it is readily apparent that it has had a significant impact on society in a variety of ways. One such impact is on mental health. Researchers have found that Facebook use is associated with multiple mental health issues, ranging from anxiety, insomnia, depression and addiction to body image and eating disorders, alcohol use,&nbsp;and more. 
But while much of the <a href="https://www.sciencedirect.com/science/article/pii/S0747563217304685?casa_token=9RSqop52hLsAAAAA:v0XOBA4wIiVof5K3pt8B2tV-iVhabOlpYuA-suc7JP9l5VYanBPDtkDwGd9P0Aek361iTil2" rel="noopener noreferrer" target="_blank">evidence</a> <a href="https://www.sciencedirect.com/science/article/pii/S0165032719311139?casa_token=dpmA_jeokesAAAAA:tI8N2qlp03A01w9cnP1M0j46wZBBpN0SL5E3Y-zu-h88x9koj73n_5JY5EgoRbjUiIPuxny9" rel="noopener noreferrer" target="_blank">collected</a> <a href="https://www.sciencedirect.com/science/article/pii/S0165032717307012?casa_token=0nbS89rPrWwAAAAA:pYEHNbSLNY5Pj47MA5Z99Hyo_SPbvYOaXTUah1pUnfdabJ8_VLICaELOgG_zb-Jpp9sfMjpa" rel="noopener noreferrer" target="_blank">is concerning</a>, most such studies have not identified a solid causal connection between Facebook and negative mental health, and many skeptics remain.&nbsp;</p><p>But in today’s episode, we’re going to discuss <a href="https://www.aeaweb.org/articles?id=10.1257/aer.20211218" rel="noopener noreferrer" target="_blank">one study</a> that does appear to draw a causal connection between the use of Facebook and poor mental health with two of its authors: <strong>Luca Braghieri</strong>, an Assistant Professor in the department of Decision Sciences at Bocconi University in Italy; and <strong>Alexey Makarin</strong>, an Assistant Professor in the Applied Economics group at the MIT Sloan School of Management.</p>
Researchers have found that Facebook use is associated with multiple mental health issues, ranging from anxiety, insomnia, depression and addiction to body image and eating disorders, alcohol use,&nbsp;and more. But while much of the <a href="https://www.sciencedirect.com/science/article/pii/S0747563217304685?casa_token=9RSqop52hLsAAAAA:v0XOBA4wIiVof5K3pt8B2tV-iVhabOlpYuA-suc7JP9l5VYanBPDtkDwGd9P0Aek361iTil2" rel="noopener noreferrer" target="_blank">evidence</a> <a href="https://www.sciencedirect.com/science/article/pii/S0165032719311139?casa_token=dpmA_jeokesAAAAA:tI8N2qlp03A01w9cnP1M0j46wZBBpN0SL5E3Y-zu-h88x9koj73n_5JY5EgoRbjUiIPuxny9" rel="noopener noreferrer" target="_blank">collected</a> <a href="https://www.sciencedirect.com/science/article/pii/S0165032717307012?casa_token=0nbS89rPrWwAAAAA:pYEHNbSLNY5Pj47MA5Z99Hyo_SPbvYOaXTUah1pUnfdabJ8_VLICaELOgG_zb-Jpp9sfMjpa" rel="noopener noreferrer" target="_blank">is concerning</a>, most such studies have not identified a solid causal connection between Facebook and negative mental health, and many skeptics remain.&nbsp;</p><p>But in today’s episode, we’re going to discuss <a href="https://www.aeaweb.org/articles?id=10.1257/aer.20211218" rel="noopener noreferrer" target="_blank">one study</a> that does appear to draw a causal connection between the use of Facebook and poor mental health with two of its authors: <strong>Luca Braghieri</strong>, an Assistant Professor in the department of Decision Sciences at Bocconi University in Italy; and <strong>Alexey Makarin</strong>, an Assistant Professor in the Applied Economics group at the MIT Sloan School of Management.</p>
-0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f268e250-1724-4faa-ac7b-1cc02e74fc81/TPP150-converted.mp3" length="16577116" type="audio/mpeg"/><itunes:duration>27:38</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Examining the Impact of Internet Research Agency Tweets in the 2016 U.S. Election</title><itunes:title>Examining the Impact of Internet Research Agency Tweets in the 2016 U.S. Election</itunes:title><description><![CDATA[<p>In the years following the 2016 U.S. presidential election, much effort has been put into understanding foreign influence campaigns, and into disrupting efforts by Russia and other countries, such as China and Iran, to interfere in U.S. elections. Political and other computational social scientists continue to whittle at questions as to how much influence such campaigns have on domestic politics. One such question is how much the Russian Internet Research Agency's (IRA) tweets, specifically, affected voting preferences and political polarization in the United States. </p><p>A <a href="https://www.nature.com/articles/s41467-022-35576-9#MOESM2" rel="noopener noreferrer" target="_blank">new paper</a> in the journal <em>Nature Communications</em> provides an answer to that specific question. Titled <em>Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior</em>, the paper matches Twitter data with survey data to study the impact of the IRA's tweets. </p><p>To learn more about the paper, <strong>Justin Hendrix</strong> spoke with one of its authors, <strong>Joshua Tucker</strong>, a professor of politics at NYU, where he also serves as the director of the Jordan Center for the Advanced Study of Russia and the co-director of the NYU Center for Social Media and Politics (CSMaP).
Hendrix and Tucker talked about the study, as well as what can and cannot be understood about the impact of the broader campaign of the IRA, or certainly the broader Russian effort to interfere in the U.S. election, from its results. </p>]]></description><content:encoded><![CDATA[<p>In the years following the 2016 U.S. presidential election, much effort has been put into understanding foreign influence campaigns, and into disrupting efforts by Russia and other countries, such as China and Iran, to interfere in U.S. elections. Political and other computational social scientists continue to whittle at questions as to how much influence such campaigns have on domestic politics. One such question is how much the Russian Internet Research Agency's (IRA) tweets, specifically, affected voting preferences and political polarization in the United States. </p><p>A <a href="https://www.nature.com/articles/s41467-022-35576-9#MOESM2" rel="noopener noreferrer" target="_blank">new paper</a> in the journal <em>Nature Communications</em> provides an answer to that specific question. Titled <em>Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior</em>, the paper matches Twitter data with survey data to study the impact of the IRA's tweets. </p><p>To learn more about the paper, <strong>Justin Hendrix</strong> spoke with one of its authors, <strong>Joshua Tucker</strong>, a professor of politics at NYU, where he also serves as the director of the Jordan Center for the Advanced Study of Russia and the co-director of the NYU Center for Social Media and Politics (CSMaP). Hendrix and Tucker talked about the study, as well as what can and cannot be understood about the impact of the broader campaign of the IRA, or certainly the broader Russian effort to interfere in the U.S. election, from its results. 
</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/examining-the-impact-of-internet-research-agency-tweets-in-the-2016-u-s-election]]></link><guid isPermaLink="false">b21dcfbd-0ee6-4035-8c45-13f016aa27ec</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 15 Jan 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/8ce88dff-5d04-4922-926d-036e4bd95347/TPP149-converted.mp3" length="38029189" type="audio/mpeg"/><itunes:duration>52:49</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Election Disinformation and the Violence in Brazil</title><itunes:title>Election Disinformation and the Violence in Brazil</itunes:title><description><![CDATA[<p>To learn more about the events on January 8th, 2023, when supporters of former far-right Brazilian President Jair Bolsonaro stormed the country's capital, and the connection between U.S. and Brazilian election disinformation, Justin Hendrix spoke with a prominent Brazilian journalist who has been covering these issues for years: <strong>Patrícia Campos Mello, </strong>a reporter at large and columnist at the newspaper <em>Folha de São Paulo</em>. They discussed the role of social media in Brazilian politics, as well as the possibility that the attacks may spur new regulations. </p>]]></description><content:encoded><![CDATA[<p>To learn more about the events on January 8th, 2023, when supporters of former far-right Brazilian President Jair Bolsonaro stormed the country's capital, and the connection between U.S. and Brazilian election disinformation, Justin Hendrix spoke with a prominent Brazilian journalist who has been covering these issues for years: <strong>Patrícia Campos Mello, </strong>a reporter at large and columnist at the newspaper <em>Folha de São Paulo</em>. 
They discussed the role of social media in Brazilian politics, as well as the possibility that the attacks may spur new regulations. </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/election-disinformation-and-the-violence-in-brazil]]></link><guid isPermaLink="false">85ffa87b-ac82-4e2e-a5b9-c90c416c143d</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 14 Jan 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/24625c61-f036-4fc6-ac93-0235224161c0/TPP148-converted.mp3" length="19560078" type="audio/mpeg"/><itunes:duration>32:36</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Shedding Light on Google&apos;s Dark Side</title><itunes:title>Shedding Light on Google&apos;s Dark Side</itunes:title><description><![CDATA[<p>Imagine a company that hides who it works with and where billions of dollars flow around the world. A company that earns its profits financing a global network of piracy, porn, fraud, and disinformation, even doing business with figures sanctioned by the U.S. Treasury, including Russian companies that may access and store data about people browsing websites and apps in Ukraine, potentially opening a mechanism for Russian intelligence to target individuals there. A company that tells the public it doesn’t make money from guns, yet nevertheless does business with the maker of the AR-15, the weapon used in so many horrific mass killings, including the recent massacre of teachers and students in Uvalde, Texas.&nbsp;</p><p>Is this some organized crime syndicate or shady offshore shell company?
No, it’s Google, one of the biggest and most prominent technology companies on the planet.&nbsp;</p><p>This episode features a conversation with <strong>Craig Silverman</strong>, a journalist who has spent years uncovering fraud in the opaque world of digital advertising and media manipulation. With his colleagues at ProPublica, in a <a href="https://www.propublica.org/article/google-display-ads-piracy-porn-fraud" rel="noopener noreferrer" target="_blank">recent</a> <a href="https://www.propublica.org/article/google-guns-ads-firearms-alphabet-advertising" rel="noopener noreferrer" target="_blank">series</a> <a href="https://www.propublica.org/article/google-russia-rutarget-sberbank-sanctions-ukraine" rel="noopener noreferrer" target="_blank">of</a> <a href="https://www.propublica.org/article/google-alphabet-ads-fund-disinformation-covid-elections" rel="noopener noreferrer" target="_blank">articles</a>, Silverman employed a unique investigative approach to uncover exactly how Google operates in a shadowy realm of deceit and disinformation.</p>]]></description><content:encoded><![CDATA[<p>Imagine a company that hides who it works with and where billions of dollars flow around the world. A company that earns its profits financing a global network of piracy, porn, fraud, and disinformation, even doing business with figures sanctioned by the U.S. Treasury, including Russian companies that may access and store data about people browsing websites and apps in Ukraine, potentially opening a mechanism for Russian intelligence to target individuals there. A company that tells the public it doesn’t make money from guns, yet nevertheless does business with the maker of the AR-15, the weapon used in so many horrific mass killings, including the recent massacre of teachers and students in Uvalde, Texas.&nbsp;</p><p>Is this some organized crime syndicate or shady offshore shell company?
No, it’s Google, one of the biggest and most prominent technology companies on the planet.&nbsp;</p><p>This episode features a conversation with <strong>Craig Silverman</strong>, a journalist who has spent years uncovering fraud in the opaque world of digital advertising and media manipulation. With his colleagues at ProPublica, in a <a href="https://www.propublica.org/article/google-display-ads-piracy-porn-fraud" rel="noopener noreferrer" target="_blank">recent</a> <a href="https://www.propublica.org/article/google-guns-ads-firearms-alphabet-advertising" rel="noopener noreferrer" target="_blank">series</a> <a href="https://www.propublica.org/article/google-russia-rutarget-sberbank-sanctions-ukraine" rel="noopener noreferrer" target="_blank">of</a> <a href="https://www.propublica.org/article/google-alphabet-ads-fund-disinformation-covid-elections" rel="noopener noreferrer" target="_blank">articles</a>, Silverman employed a unique investigative approach to uncover exactly how Google operates in a shadowy realm of deceit and disinformation.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/shedding-light-on-googles-dark-side]]></link><guid isPermaLink="false">5a99d1fa-0f8d-43eb-b4bd-04b1928900b6</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 08 Jan 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/770dcceb-b3da-4986-bc9d-5f8a315164cf/TPP147-converted.mp3" length="24208600" type="audio/mpeg"/><itunes:duration>33:37</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Results of the January 6th Committee&apos;s Social Media Investigation</title><itunes:title>Results of the January 6th Committee&apos;s Social Media Investigation</itunes:title><description><![CDATA[<p>According to the legislation that established the January 6th Committee, the members were&nbsp;<a 
href="https://www.justsecurity.org/wp-content/uploads/2021/06/Jan-6-Clearinghouse-House-Resolution-503-June-28-2021.pdf" rel="noopener noreferrer" target="_blank">mandated</a>&nbsp;to examine “how technology, including online platforms” such as Facebook, YouTube, Twitter, Reddit, and others “may have factored into the motivation, organization, and execution” of the insurrection. </p><p>When the Committee <a href="https://techpolicy.press/january-6-committee-issues-subpoenas-to-social-media-platforms/" rel="noopener noreferrer" target="_blank">issued subpoenas</a> to platforms a year ago, Chairman Bennie Thompson (D-MS) said, “Two key questions for the Select Committee are how the spread of misinformation and violent extremism contributed to the violent attack on our democracy, and what steps—if any—social media companies took to prevent their platforms from being breeding grounds for radicalizing people to violence.” </p><p>In order to learn what came of this particular aspect of the Committee’s sprawling, 18-month investigation, in this episode I’m joined by four individuals who helped conduct it, including staffing the depositions of social media executives, message board operators, far-right online influencers, militia members, extremists, and others who gave testimony to the Committee:</p><ul><li><strong>Meghan Conroy</strong> is the U.S. Research Fellow with the Digital Forensic Research Lab (DFRLab) and a co-founder of the Accelerationism Research Consortium (ARC), and was an Investigator with the Select Committee to Investigate the January 6th Attack on the U.S. 
Capitol.</li><li><strong>Dean Jackson</strong> is Project Manager of the Influence Operations Researchers’ Guild at the Carnegie Endowment for International Peace, and was formerly an Investigative Analyst with the Select Committee.&nbsp;</li><li><strong>Alex Newhouse</strong> is the Deputy Director at the Center on Terrorism, Extremism, and Counterterrorism and the Director of Technical Research at the Accelerationism Research Consortium (ARC), and&nbsp;served as an Investigative Analyst for the Select Committee.</li><li><strong>Jacob Glick</strong> is Policy Counsel at Georgetown’s Institute for Constitutional Advocacy and Protection, and served as an Investigative Counsel on the Select Committee.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>According to the legislation that established the January 6th Committee, the members were&nbsp;<a href="https://www.justsecurity.org/wp-content/uploads/2021/06/Jan-6-Clearinghouse-House-Resolution-503-June-28-2021.pdf" rel="noopener noreferrer" target="_blank">mandated</a>&nbsp;to examine “how technology, including online platforms” such as Facebook, YouTube, Twitter, Reddit, and others “may have factored into the motivation, organization, and execution” of the insurrection. 
</p><p>When the Committee <a href="https://techpolicy.press/january-6-committee-issues-subpoenas-to-social-media-platforms/" rel="noopener noreferrer" target="_blank">issued subpoenas</a> to platforms a year ago, Chairman Bennie Thompson (D-MS) said, “Two key questions for the Select Committee are how the spread of misinformation and violent extremism contributed to the violent attack on our democracy, and what steps—if any—social media companies took to prevent their platforms from being breeding grounds for radicalizing people to violence.” </p><p>In order to learn what came of this particular aspect of the Committee’s sprawling, 18-month investigation, in this episode I’m joined by four individuals who helped conduct it, including staffing the depositions of social media executives, message board operators, far-right online influencers, militia members, extremists, and others who gave testimony to the Committee:</p><ul><li><strong>Meghan Conroy</strong> is the U.S. Research Fellow with the Digital Forensic Research Lab (DFRLab) and a co-founder of the Accelerationism Research Consortium (ARC), and was an Investigator with the Select Committee to Investigate the January 6th Attack on the U.S. 
Capitol.</li><li><strong>Dean Jackson</strong> is Project Manager of the Influence Operations Researchers’ Guild at the Carnegie Endowment for International Peace, and was formerly an Investigative Analyst with the Select Committee.&nbsp;</li><li><strong>Alex Newhouse</strong> is the Deputy Director at the Center on Terrorism, Extremism, and Counterterrorism and the Director of Technical Research at the Accelerationism Research Consortium (ARC), and&nbsp;served as an Investigative Analyst for the Select Committee.</li><li><strong>Jacob Glick</strong> is Policy Counsel at Georgetown’s Institute for Constitutional Advocacy and Protection, and served as an Investigative Counsel on the Select Committee.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/results-of-the-january-6th-committees-social-media-investigation]]></link><guid isPermaLink="false">2dc0e55c-5e07-40f8-b15b-31c92b7ca0b4</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 06 Jan 2023 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/21b7ab24-0e78-4e57-9c0b-3ce80c5f9806/TPP146-converted.mp3" length="58406181" type="audio/mpeg"/><itunes:duration>01:21:07</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>A Conversation with Avi Asher-Schapiro</title><itunes:title>A Conversation with Avi Asher-Schapiro</itunes:title><description><![CDATA[<p><a href="https://news.trust.org/profile/?id=0033z00002g6HmAAAU" rel="noopener noreferrer" target="_blank">Avi Asher-Schapiro</a> is a journalist covering digital rights and technology for the Thomson Reuters Foundation. 
For the final <em>Tech Policy Press</em> podcast of 2022, Justin Hendrix spoke to Asher-Schapiro about some of the most significant stories he and his colleagues covered in 2022, as well as what may make headlines in 2023 at the intersection of technology and society, delving into topics ranging from surveillance and crypto to social media and tech policy.&nbsp;</p>]]></description><content:encoded><![CDATA[<p><a href="https://news.trust.org/profile/?id=0033z00002g6HmAAAU" rel="noopener noreferrer" target="_blank">Avi Asher-Schapiro</a> is a journalist covering digital rights and technology for the Thomson Reuters Foundation. For the final <em>Tech Policy Press</em> podcast of 2022, Justin Hendrix spoke to Asher-Schapiro about some of the most significant stories he and his colleagues covered in 2022, as well as what may make headlines in 2023 at the intersection of technology and society, delving into topics ranging from surveillance and crypto to social media and tech policy.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/a-conversation-with-avi-asher-schapiro]]></link><guid isPermaLink="false">8ca3f7b3-d3d8-4a43-a5f1-374b50805b52</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Wed, 28 Dec 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/5db10f1d-0ad1-45ba-8d6e-e61988c16395/TPP145-converted.mp3" length="27589877" type="audio/mpeg"/><itunes:duration>38:19</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Confronting Hate and Extremism in Online Games</title><itunes:title>Confronting Hate and Extremism in Online Games</itunes:title><description><![CDATA[<p>On Friday,&nbsp;Congresswoman <strong>Lori Trahan</strong>, a member of the House Energy and Commerce Committee, led a group of Democrats including Senator Ron Wyden and Representatives Katie Porter, 
Stephen Lynch, Susan Wild, Mondaire Jones, Kathy Castor, Adam Schiff, and Elissa Slotkin to sign letters&nbsp;requesting&nbsp;information from gaming companies about their efforts to combat hate, harassment, and extremism in online games. The letters were sent to companies including Activision Blizzard, Take-Two Interactive, Riot Games, Epic Games, Valve, Microsoft, Sony, and Roblox.&nbsp;</p><p>The letters followed a <a href="https://www.adl.org/resources/report/hate-no-game-hate-and-harassment-online-games-2022" rel="noopener noreferrer" target="_blank">report</a> issued by the Anti-Defamation League (ADL) earlier this month that found that 77 percent of adults and 66 percent of teens have reported experiences of harassment while playing online games over the past year, and identified a number of other concerns about social gaming environments.&nbsp;</p><p>Today, I’m joined by one of the authors of that report, ADL&nbsp;Center for Technology and Society Director of Strategy and Operations <strong>Daniel Kelley</strong>; as well as by Queen’s University professor <strong>Amarnath Amarasingam</strong>, coauthor of a <a href="https://techpolicy.press/un-report-examines-gaming-and-violent-extremism/" rel="noopener noreferrer" target="_blank">report</a> commissioned by the United Nations Office of Counter-Terrorism on the intersection of gaming and violent extremism that was released in October.&nbsp;</p>
The letters were sent to companies including Activision Blizzard, Take-Two Interactive, Riot Games, Epic Games, Valve, Microsoft, Sony, and Roblox.&nbsp;</p><p>The letters followed a <a href="https://www.adl.org/resources/report/hate-no-game-hate-and-harassment-online-games-2022" rel="noopener noreferrer" target="_blank">report</a> issued by the Anti-Defamation League (ADL) earlier this month that found that 77 percent of adults and 66 percent of teens have reported experiences of harassment while playing online games over the past year, and identified a number of other concerns about social gaming environments.&nbsp;</p><p>Today, I’m joined by one of the authors of that report, ADL&nbsp;Center for Technology and Society Director of Strategy and Operations <strong>Daniel Kelley</strong>; as well as by Queen’s University professor <strong>Amarnath Amarasingam</strong>, coauthor of a <a href="https://techpolicy.press/un-report-examines-gaming-and-violent-extremism/" rel="noopener noreferrer" target="_blank">report</a> commissioned by the United Nations Office of Counter-Terrorism on the intersection of gaming and violent extremism that was released in October.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/confronting-hate-and-extremism-in-online-games]]></link><guid isPermaLink="false">b1ba8dff-9cad-404d-bd4d-3b5f9aee5bb2</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 18 Dec 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/a74fe2fa-46a7-4cdd-95c1-1a777f171e07/TPP144-converted.mp3" length="21779906" type="audio/mpeg"/><itunes:duration>30:15</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Examining Meta’s Cross-Check Program</title><itunes:title>Examining Meta’s Cross-Check Program</itunes:title><description><![CDATA[<p>A little more than a year ago, in 
the first <a href="https://www.wsj.com/articles/facebook-files-xcheck-zuckerberg-elite-rules-11631541353?mod=article_inline" rel="noopener noreferrer" target="_blank">article</a> announcing the release of the Facebook Files, the documents brought out of the company by whistleblower Frances Haugen, the <em>Wall Street Journal’s</em> Jeff Horwitz reported on Cross Check, a Facebook system that “exempted high-profile users from some or all” of the platform’s rules. The program shields millions of elites from normal content moderation enforcement. While the existence of such a program was known, its scale was and perhaps still is shocking.</p><p>Following the <em>Journal</em>’s reporting and subsequent public concern, Facebook (now Meta) President of Global Affairs Nick Clegg <a href="https://about.fb.com/news/2021/09/requesting-oversight-board-guidance-cross-check-system/" rel="noopener noreferrer" target="_blank">announced</a> the company would request a policy advisory opinion from its independent Oversight Board. 
Fourteen months later, the Oversight Board has completed its review and <a href="https://oversightboard.com/news/501654971916288-oversight-board-publishes-policy-advisory-opinion-on-meta-s-cross-check-program/" rel="noopener noreferrer" target="_blank">published</a> its opinion.&nbsp;</p><p>To talk more about the opinion, the Cross Check system, and the problem of content moderation more generally, I’m joined by one member of the Oversight Board, <strong>Nighat Dad</strong>, a lawyer from Pakistan and founder of the Digital Rights Foundation; and one outside observer who answered the board’s call for opinions about the Cross Check system, R Street Institute senior fellow and University of Pennsylvania Annenberg Public Policy Center distinguished research fellow <strong>Chris Riley</strong>.</p>]]></description><content:encoded><![CDATA[<p>A little more than a year ago, in the first <a href="https://www.wsj.com/articles/facebook-files-xcheck-zuckerberg-elite-rules-11631541353?mod=article_inline" rel="noopener noreferrer" target="_blank">article</a> announcing the release of the Facebook Files, the documents brought out of the company by whistleblower Frances Haugen, the <em>Wall Street Journal’s</em> Jeff Horwitz reported on Cross Check, a Facebook system that “exempted high-profile users from some or all” of the platform’s rules. The program shields millions of elites from normal content moderation enforcement. While the existence of such a program was known, its scale was and perhaps still is shocking.</p><p>Following the <em>Journal</em>’s reporting and subsequent public concern, Facebook (now Meta) President of Global Affairs Nick Clegg <a href="https://about.fb.com/news/2021/09/requesting-oversight-board-guidance-cross-check-system/" rel="noopener noreferrer" target="_blank">announced</a> the company would request a policy advisory opinion from its independent Oversight Board. 
Fourteen months later, the Oversight Board has completed its review and <a href="https://oversightboard.com/news/501654971916288-oversight-board-publishes-policy-advisory-opinion-on-meta-s-cross-check-program/" rel="noopener noreferrer" target="_blank">published</a> its opinion.&nbsp;</p><p>To talk more about the opinion, the Cross Check system, and the problem of content moderation more generally, I’m joined by one member of the Oversight Board, <strong>Nighat Dad</strong>, a lawyer from Pakistan and founder of the Digital Rights Foundation; and one outside observer who answered the board’s call for opinions about the Cross Check system, R Street Institute senior fellow and University of Pennsylvania Annenberg Public Policy Center distinguished research fellow <strong>Chris Riley</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/examining-metas-cross-check-program]]></link><guid isPermaLink="false">d59ce4ad-6c37-4790-b22a-97a23a0d9393</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Wed, 14 Dec 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/5972aa2f-a988-4a5e-af76-b9fb1ea73e46/TPP143-converted.mp3" length="23171370" type="audio/mpeg"/><itunes:duration>32:11</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Chinese Censorship and Surveillance in a Moment of Unrest: Part 2</title><itunes:title>Chinese Censorship and Surveillance in a Moment of Unrest: Part 2</itunes:title><description><![CDATA[<p>Last week, the Chinese government under President Xi Jinping took steps to finally move away from its zero-COVID policy, following two weeks of protests in multiple cities. The unrest and anti-government sentiment was perhaps the most pronounced since the 1989 Tiananmen Square crackdown. 
And while these events gave Western observers an opportunity to grapple with the complexity of Chinese politics, generational and regional differences in the views of the population, and ultimately how the authoritarian government responds to public pressure, it also gave us a chance to see how the Chinese censorship and surveillance apparatus operates.&nbsp;</p><p>This week’s Tech Policy Press podcast comes in two parts. In both, we’ll hear from reporters covering the intersection of China and technology. This is the second part, and it features a conversation with two individuals covering China for the <em>New York Times</em>, <strong>Paul Mozur</strong> and <strong>Muyi Xiao</strong>. In their collaborative coverage, they have mixed open-source visual investigation methods with traditional reporting to get a sense of the protests and the state’s response.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>Last week, the Chinese government under President Xi Jinping took steps to finally move away from its zero-COVID policy, following two weeks of protests in multiple cities. The unrest and anti-government sentiment was perhaps the most pronounced since the 1989 Tiananmen Square crackdown. And while these events gave Western observers an opportunity to grapple with the complexity of Chinese politics, generational and regional differences in the views of the population, and ultimately how the authoritarian government responds to public pressure, it also gave us a chance to see how the Chinese censorship and surveillance apparatus operates.&nbsp;</p><p>This week’s Tech Policy Press podcast comes in two parts. In both, we’ll hear from reporters covering the intersection of China and technology. This is the second part, and it features a conversation with two individuals covering China for the <em>New York Times</em>, <strong>Paul Mozur</strong> and <strong>Muyi Xiao</strong>. 
In their collaborative coverage, they have mixed open-source visual investigation methods with traditional reporting to get a sense of the protests and the state’s response.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/chinese-censorship-and-surveillance-in-a-moment-of-unrest-part-2]]></link><guid isPermaLink="false">8801708a-4b88-4d25-9f58-d6737c9de560</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 11 Dec 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/eccd105f-525d-4150-bae7-8463eaaa4c67/TPP142-converted.mp3" length="20066181" type="audio/mpeg"/><itunes:duration>27:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Chinese Censorship and Surveillance in a Moment of Unrest: Part 1</title><itunes:title>Chinese Censorship and Surveillance in a Moment of Unrest: Part 1</itunes:title><description><![CDATA[<p>Last week, the Chinese government under President Xi Jinping took steps to finally move away from its zero-COVID policy, following two weeks of protests in multiple cities. The unrest and anti-government sentiment were perhaps the most pronounced since the 1989 Tiananmen Square crackdown. And while these events gave Western observers an opportunity to grapple with the complexity of Chinese politics, generational and regional differences in the views of the population, and ultimately how the authoritarian government responds to public pressure, they also gave us a chance to see how the Chinese censorship and surveillance apparatus operates.&nbsp;</p><p>This week’s <em>Tech Policy Press</em> podcast comes in two parts. In both, we’ll hear from reporters covering the intersection of China and technology. This is the first part, and it features a conversation with Liza Lin, a reporter at <em>The Wall Street Journal</em>.
She covers Asia technology news for the <em>Journal</em> from Singapore. Before that she was the paper’s China correspondent, based in Shanghai. She was part of a team at the <em>Journal</em> named as Pulitzer finalists in the International Reporting category in 2021 for coverage of Chinese leader Xi Jinping, and with other <em>Journal</em> reporters won the Gerald Loeb Award for International Reporting in 2018 for a series of stories on China's surveillance state. She’s co-author of a book on that subject titled <a href="https://www.amazon.com/Surveillance-State-Inside-Chinas-Control-ebook/dp/B08R2K1D36/ref=nodl_?dplnkId=c7f8a2e1-32a6-4a9a-be86-5ebcdcff64fb#aw-udpv3-customer-reviews_feature_div" rel="noopener noreferrer" target="_blank"><em>Surveillance State: Inside China's Quest to Launch a New Era of Social Control</em></a>, with Josh Chin.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>Last week, the Chinese government under President Xi Jinping took steps to finally move away from its zero-COVID policy, following two weeks of protests in multiple cities. The unrest and anti-government sentiment were perhaps the most pronounced since the 1989 Tiananmen Square crackdown. And while these events gave Western observers an opportunity to grapple with the complexity of Chinese politics, generational and regional differences in the views of the population, and ultimately how the authoritarian government responds to public pressure, they also gave us a chance to see how the Chinese censorship and surveillance apparatus operates.&nbsp;</p><p>This week’s <em>Tech Policy Press</em> podcast comes in two parts. In both, we’ll hear from reporters covering the intersection of China and technology. This is the first part, and it features a conversation with Liza Lin, a reporter at <em>The Wall Street Journal</em>. She covers Asia technology news for the <em>Journal</em> from Singapore. Before that she was the paper’s China correspondent, based in Shanghai.
She was part of a team at the <em>Journal</em> named as Pulitzer finalists in the International Reporting category in 2021 for coverage of Chinese leader Xi Jinping, and with other <em>Journal</em> reporters won the Gerald Loeb Award for International Reporting in 2018 for a series of stories on China's surveillance state. She’s co-author of a book on that subject titled <a href="https://www.amazon.com/Surveillance-State-Inside-Chinas-Control-ebook/dp/B08R2K1D36/ref=nodl_?dplnkId=c7f8a2e1-32a6-4a9a-be86-5ebcdcff64fb#aw-udpv3-customer-reviews_feature_div" rel="noopener noreferrer" target="_blank"><em>Surveillance State: Inside China's Quest to Launch a New Era of Social Control</em></a>, with Josh Chin.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/chinese-censorship-and-surveillance-in-a-moment-of-unrest-part-1]]></link><guid isPermaLink="false">fbf68c0a-8405-420a-98ce-5f2a3eb3f21b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 10 Dec 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c942f9e9-ebe4-4a73-a98a-420ae5277734/TPP141-converted.mp3" length="22283638" type="audio/mpeg"/><itunes:duration>30:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Scrutinizing &quot;The Twitter Files&quot;</title><itunes:title>Scrutinizing &quot;The Twitter Files&quot;</itunes:title><description><![CDATA[<p>On Friday, Elon Musk announced via tweet that documents related to Twitter’s decision to intervene in the propagation of an October 2020 story in the <em>New York Post</em> about then-candidate Joe Biden’s son, Hunter Biden, would be made public. The incident caused a furor at the time, with some Republicans and supporters of former President Donald Trump insinuating that it was proof that social media firms are biased against conservative interests.
Some even maintain that the actions of Twitter and Facebook with regard to this particular <em>New York Post</em> story may have had some impact on the outcome of the election, as far-fetched as that might be.&nbsp;</p><p>Today, we’ll hear two voices on the disclosures. The first is <strong>David Ingram</strong>, who covers tech for <em>NBC News</em> and will walk us through what happened. And the second is <strong>Mike Masnick</strong>, the editor of the influential site <em>Techdirt</em>, who offers his first thoughts on the disclosures, and what they portend for the future of Twitter under Elon Musk.</p>]]></description><content:encoded><![CDATA[<p>On Friday, Elon Musk announced via tweet that documents related to Twitter’s decision to intervene in the propagation of an October 2020 story in the <em>New York Post</em> about then-candidate Joe Biden’s son, Hunter Biden, would be made public. The incident caused a furor at the time, with some Republicans and supporters of former President Donald Trump insinuating that it was proof that social media firms are biased against conservative interests. Some even maintain that the actions of Twitter and Facebook with regard to this particular <em>New York Post</em> story may have had some impact on the outcome of the election, as far-fetched as that might be.&nbsp;</p><p>Today, we’ll hear two voices on the disclosures. The first is <strong>David Ingram</strong>, who covers tech for <em>NBC News</em> and will walk us through what happened.
And the second is <strong>Mike Masnick</strong>, the editor of the influential site <em>Techdirt</em>, who offers his first thoughts on the disclosures, and what they portend for the future of Twitter under Elon Musk.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/scrutinizing-the-twitter-files]]></link><guid isPermaLink="false">a03d6663-cfb9-4205-b2ef-55b15fa694cf</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 04 Dec 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/ae2595aa-0229-46dd-87a2-6687e1f2f9c8/TPP140-converted.mp3" length="37357380" type="audio/mpeg"/><itunes:duration>51:53</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Dissecting Tech Manifestos</title><itunes:title>Dissecting Tech Manifestos</itunes:title><description><![CDATA[<p>For this episode of the <em>Tech Policy Press</em> podcast, I had the chance to speak to <a href="https://www.cwanderson.org/about" rel="noopener noreferrer" target="_blank">Chris Anderson</a>, Ph.D., a professor of sociology at the University of Milan who is leading a course on tech manifestos and their evolution, inviting his students to dissect the language for what it can tell us about politics and power.
</p><p>Documents such as <a href="https://www.eff.org/cyberspace-independence" rel="noopener noreferrer" target="_blank"><em>A Declaration of the Independence of Cyberspace</em></a> and <a href="https://monoskop.org/images/4/4c/Haraway_Donna_1985_A_Manifesto_for_Cyborgs_Science_Technology_and_Socialist_Feminism_in_the_1980s.pdf" rel="noopener noreferrer" target="_blank"><em>A Manifesto for Cyborgs</em></a> have given way to more vacuous statements from billionaires, such as Mark Zuckerberg's Facebook manifesto, <a href="https://www.facebook.com/notes/mark-zuckerberg/building-global-community/10154544292806634/" rel="noopener noreferrer" target="_blank"><em>Building Global Community</em></a>. These days a lot of Silicon Valley’s leaders don’t have much in the way of ideas, but they do have a lot of money, so either way they can push whatever agenda they may have on the rest of us. From promises of abundance delivered by artificial intelligence, to a 'global community' convened on social media platforms, to reimagined economies or even a new world order built on the blockchain,&nbsp;tech manifestos remain important, since they often signify that large amounts of capital are about to be deployed to try to manifest someone's new vision.</p>]]></description><content:encoded><![CDATA[<p>For this episode of the <em>Tech Policy Press</em> podcast, I had the chance to speak to <a href="https://www.cwanderson.org/about" rel="noopener noreferrer" target="_blank">Chris Anderson</a>, Ph.D., a professor of sociology at the University of Milan who is leading a course on tech manifestos and their evolution, inviting his students to dissect the language for what it can tell us about politics and power.
</p><p>Documents such as <a href="https://www.eff.org/cyberspace-independence" rel="noopener noreferrer" target="_blank"><em>A Declaration of the Independence of Cyberspace</em></a> and <a href="https://monoskop.org/images/4/4c/Haraway_Donna_1985_A_Manifesto_for_Cyborgs_Science_Technology_and_Socialist_Feminism_in_the_1980s.pdf" rel="noopener noreferrer" target="_blank"><em>A Manifesto for Cyborgs</em></a> have given way to more vacuous statements from billionaires, such as Mark Zuckerberg's Facebook manifesto, <a href="https://www.facebook.com/notes/mark-zuckerberg/building-global-community/10154544292806634/" rel="noopener noreferrer" target="_blank"><em>Building Global Community</em></a>. These days a lot of Silicon Valley’s leaders don’t have much in the way of ideas, but they do have a lot of money, so either way they can push whatever agenda they may have on the rest of us. From promises of abundance delivered by artificial intelligence, to a 'global community' convened on social media platforms, to reimagined economies or even a new world order built on the blockchain,&nbsp;tech manifestos remain important, since they often signify that large amounts of capital are about to be deployed to try to manifest someone's new vision.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/dissecting-tech-manifestos]]></link><guid isPermaLink="false">522be22d-bba6-430e-8312-8a5454b8f7b5</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 27 Nov 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c495fc7e-79a5-4e90-8ebb-62dd6367bdd3/TPP139-converted.mp3" length="24748590" type="audio/mpeg"/><itunes:duration>34:22</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Whiteness of Mastodon</title><itunes:title>The Whiteness of Mastodon</itunes:title><description><![CDATA[<p>By all
accounts, Elon Musk’s acquisition of Twitter is not going well. And yet many have a real sense that something important may be lost if the platform collapses, or if there is a substantial migration away from it to alternatives like Mastodon, the open-source, decentralized platform that has grown from three hundred thousand monthly active users to nearly two million since Musk bought Twitter. </p><p>In this episode, <em>Tech Policy Press </em>editor <strong>Justin Hendrix</strong> had the chance to discuss Musk’s takeover with <strong>Dr. Johnathan Flowers</strong>, and to learn more about some of the exclusive norms he’s observed that may create obstacles for communities of color contemplating the switch to Mastodon.</p>]]></description><content:encoded><![CDATA[<p>By all accounts, Elon Musk’s acquisition of Twitter is not going well. And yet many have a real sense that something important may be lost if the platform collapses, or if there is a substantial migration away from it to alternatives like Mastodon, the open-source, decentralized platform that has grown from three hundred thousand monthly active users to nearly two million since Musk bought Twitter. </p><p>In this episode, <em>Tech Policy Press </em>editor <strong>Justin Hendrix</strong> had the chance to discuss Musk’s takeover with <strong>Dr.
Johnathan Flowers</strong>, and to learn more about some of the exclusive norms he’s observed that may create obstacles for communities of color contemplating the switch to Mastodon.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-whiteness-of-mastodon]]></link><guid isPermaLink="false">6e9556b3-fcba-4dff-af91-adad3054da44</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Wed, 23 Nov 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/4d8d4324-7aa8-4a68-8130-ad43a43a457a/TPP138-converted.mp3" length="40227149" type="audio/mpeg"/><itunes:duration>55:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>You Are Not Expected to Understand This</title><itunes:title>You Are Not Expected to Understand This</itunes:title><description><![CDATA[<p>Today we’re going to hear from the editor of, and two authors included in, a book of essays about how particular bits of software have changed the world in different ways, the just-published <a href="https://press.princeton.edu/books/paperback/9780691208480/you-are-not-expected-to-understand-this" rel="noopener noreferrer" target="_blank"><em>"You Are Not Expected to Understand This": How 26 Lines of Code Changed the World</em></a> from Princeton University Press. The book is at once delightful and enlightening, revealing how technology interacts with people and society in both good and bad ways, and how important and long-lasting the effects of the decisions we make when designing software and systems can be. </p><p>This episode features:</p><ul><li><strong>Torie Bosch</strong>, the editor of Future Tense, a collaborative project of&nbsp;<em>Slate&nbsp;</em>magazine, New America, and Arizona State University, and the editor of the book;</li><li><strong>Meredith Broussard</strong>, an associate professor at the Arthur L.
Carter Journalism Institute of New York University and research director at the&nbsp;NYU Alliance for Public Interest Technology;</li><li><strong>Charlton McIlwain</strong>, Vice Provost for Faculty Engagement and Development at New York University and Professor of Media, Culture, and Communication at NYU’s Steinhardt School of Culture, Education, and Human Development.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Today we’re going to hear from the editor of, and two authors included in, a book of essays about how particular bits of software have changed the world in different ways, the just-published <a href="https://press.princeton.edu/books/paperback/9780691208480/you-are-not-expected-to-understand-this" rel="noopener noreferrer" target="_blank"><em>"You Are Not Expected to Understand This": How 26 Lines of Code Changed the World</em></a> from Princeton University Press. The book is at once delightful and enlightening, revealing how technology interacts with people and society in both good and bad ways, and how important and long-lasting the effects of the decisions we make when designing software and systems can be. </p><p>This episode features:</p><ul><li><strong>Torie Bosch</strong>, the editor of Future Tense, a collaborative project of&nbsp;<em>Slate&nbsp;</em>magazine, New America, and Arizona State University, and the editor of the book;</li><li><strong>Meredith Broussard</strong>, an associate professor at the Arthur L.
Carter Journalism Institute of New York University and research director at the&nbsp;NYU Alliance for Public Interest Technology;</li><li><strong>Charlton McIlwain</strong>, Vice Provost for Faculty Engagement and Development at New York University and Professor of Media, Culture, and Communication at NYU’s Steinhardt School of Culture, Education, and Human Development.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/you-are-not-expected-to-understand-this]]></link><guid isPermaLink="false">06d2ff90-bfa0-4f89-9940-e97e3fcb87b0</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 20 Nov 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/ab70e131-bee2-4f15-887d-47ee741ccd3d/TPP137-converted.mp3" length="18940147" type="audio/mpeg"/><itunes:duration>31:34</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What is Lost if Twitter Fails?</title><itunes:title>What is Lost if Twitter Fails?</itunes:title><description><![CDATA[<p>Media reports suggest that large swathes of employees at Twitter have resigned after the platform’s new owner, Elon Musk, issued a kind of ultimatum asking them to commit to "long hours at high intensity"&nbsp;to build “Twitter 2.0.” Last night, <a href="https://www.cnn.com/2022/11/17/tech/twitter-employees-ultimatum-deadline" rel="noopener noreferrer" target="_blank">according to</a> an internal Twitter email shared with CNN, employees who decided to stay at the company received an email that said the company's offices would be temporarily closed and badge access would be restricted through Monday.
Whether the platform will remain functional with so many core engineering and other crucial teams decimated is an open question.&nbsp;</p><p>To talk more about Twitter, Musk, and what is potentially lost, Justin Hendrix spoke to <a href="https://www.meredithdclark.com/aboutmdc" rel="noopener noreferrer" target="_blank">Dr. Meredith Clark</a>, whose research focuses on the intersections of race, media, and power. She’s leading a project to <a href="https://www.meredithdclark.com/archivingblacktwitter" rel="noopener noreferrer" target="_blank">archive</a> Black Twitter, as part of a larger project to archive the Black web. And she’s the author of a forthcoming book on Black Twitter. </p>]]></description><content:encoded><![CDATA[<p>Media reports suggest that large swathes of employees at Twitter have resigned after the platform’s new owner, Elon Musk, issued a kind of ultimatum asking them to commit to "long hours at high intensity"&nbsp;to build “Twitter 2.0.” Last night, <a href="https://www.cnn.com/2022/11/17/tech/twitter-employees-ultimatum-deadline" rel="noopener noreferrer" target="_blank">according to</a> an internal Twitter email shared with CNN, employees who decided to stay at the company received an email that said the company's offices would be temporarily closed and badge access would be restricted through Monday. Whether the platform will remain functional with so many core engineering and other crucial teams decimated is an open question.&nbsp;</p><p>To talk more about Twitter, Musk, and what is potentially lost, Justin Hendrix spoke to <a href="https://www.meredithdclark.com/aboutmdc" rel="noopener noreferrer" target="_blank">Dr. Meredith Clark</a>, whose research focuses on the intersections of race, media, and power. She’s leading a project to <a href="https://www.meredithdclark.com/archivingblacktwitter" rel="noopener noreferrer" target="_blank">archive</a> Black Twitter, as part of a larger project to archive the Black web.
And she’s the author of a forthcoming book on Black Twitter. </p>]]></content:encoded><link><![CDATA[https://techpolicy.press/what-is-lost-if-twitter-fails]]></link><guid isPermaLink="false">12bc6aa5-f617-46ea-8425-cd4d61009bb5</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Fri, 18 Nov 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e20c7061-a4ef-447c-bdbf-010f1f74baae/TPP136-converted.mp3" length="18990898" type="audio/mpeg"/><itunes:duration>26:23</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Internet Shutdowns and Censorship, in Iran and Beyond</title><itunes:title>Internet Shutdowns and Censorship, in Iran and Beyond</itunes:title><description><![CDATA[<p>According <a href="https://www.bbc.com/news/world-middle-east-63648629" rel="noopener noreferrer" target="_blank">to the BBC</a>, to date at least 348 Iranian protesters have been killed and nearly 16,000 arrested in women-led protests that erupted three months ago after the death of Mahsa Amini, a 22-year-old woman who died in custody after being detained by morality police for allegedly breaking the strict rules on the wearing of hijabs.</p><p>One way the regime has responded to these anti-government protests is to block access to the internet, independent news sites, and social media and communication platforms.
To talk more about how these tactics are being applied in Iran and around the world, and what policymakers in democratic countries can do to help dissidents on the ground, I spoke to two experts on digital and human rights:</p><ul><li><strong>Yasmin Green</strong>, CEO of Jigsaw and author of a recent <a href="https://www.wired.com/story/iran-mahsa-amini-internet-shutdown/" rel="noopener noreferrer" target="_blank">piece in <em>Wired</em></a> on Iran's internet blackouts</li><li><strong>Kian Vesteinsson</strong>, Senior Research Analyst for Technology and Democracy at Freedom House, one of the authors of the 12th annual <a href="https://freedomhouse.org/report/freedom-net/2022/countering-authoritarian-overhaul-internet#Resilient" rel="noopener noreferrer" target="_blank">Internet Freedom Report</a></li></ul><br/>]]></description><content:encoded><![CDATA[<p>According <a href="https://www.bbc.com/news/world-middle-east-63648629" rel="noopener noreferrer" target="_blank">to the BBC</a>, to date at least 348 Iranian protesters have been killed and nearly 16,000 arrested in women-led protests that erupted three months ago after the death of Mahsa Amini, a 22-year-old woman who died in custody after being detained by morality police for allegedly breaking the strict rules on the wearing of hijabs.</p><p>One way the regime has responded to these anti-government protests is to block access to the internet, independent news sites, and social media and communication platforms.
To talk more about how these tactics are being applied in Iran and around the world, and what policymakers in democratic countries can do to help dissidents on the ground, I spoke to two experts on digital and human rights:</p><ul><li><strong>Yasmin Green</strong>, CEO of Jigsaw and author of a recent <a href="https://www.wired.com/story/iran-mahsa-amini-internet-shutdown/" rel="noopener noreferrer" target="_blank">piece in <em>Wired</em></a> on Iran's internet blackouts</li><li><strong>Kian Vesteinsson</strong>, Senior Research Analyst for Technology and Democracy at Freedom House, one of the authors of the 12th annual <a href="https://freedomhouse.org/report/freedom-net/2022/countering-authoritarian-overhaul-internet#Resilient" rel="noopener noreferrer" target="_blank">Internet Freedom Report</a></li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/internet-shutdowns-and-censorship-in-iran-and-beyond]]></link><guid isPermaLink="false">a1356a73-e5c6-4bc6-be59-ae629807c14d</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Wed, 16 Nov 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e391331d-6fb5-4cad-bd9b-69b0827fe7be/TPP135-converted.mp3" length="18146361" type="audio/mpeg"/><itunes:duration>30:15</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Impact of the U.S. Midterm Elections on Tech Policy</title><itunes:title>The Impact of the U.S. Midterm Elections on Tech Policy</itunes:title><description><![CDATA[<p>Voting in the U.S. midterm elections closed on Tuesday, and as of Sunday morning, November 13, Democrats secured another majority in the Senate. 
But ballots are still being counted in key races that will determine which party controls the House. It is clear, however, that the margins determining leadership in both chambers will be extremely small. To explore how the elections may impact the legislative debate over tech policy issues, <em>Tech Policy Press </em>editor <strong>Justin Hendrix</strong> spoke with three experts from civil society groups that regularly engage with lawmakers to learn what scenarios and considerations are front of mind, even as we wait for the final tally:</p><ul><li><strong>Emma Llansó</strong>, Director of the Free Expression Project, Center for Democracy and Technology</li><li><strong>Yosef Getachew</strong>, Director of the Media and Democracy Program, Common Cause</li><li><strong>Matt Wood</strong>, Vice President of Policy and General Counsel, Free Press</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Voting in the U.S. midterm elections closed on Tuesday, and as of Sunday morning, November 13, Democrats secured another majority in the Senate. But ballots are still being counted in key races that will determine which party controls the House. It is clear, however, that the margins determining leadership in both chambers will be extremely small.
To explore how the elections may impact the legislative debate over tech policy issues, <em>Tech Policy Press </em>editor <strong>Justin Hendrix</strong> spoke with three experts from civil society groups that regularly engage with lawmakers to learn what scenarios and considerations are front of mind, even as we wait for the final tally:</p><ul><li><strong>Emma Llansó</strong>, Director of the Free Expression Project, Center for Democracy and Technology</li><li><strong>Yosef Getachew</strong>, Director of the Media and Democracy Program, Common Cause</li><li><strong>Matt Wood</strong>, Vice President of Policy and General Counsel, Free Press</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-impact-of-the-u-s-midterm-elections-on-tech-policy]]></link><guid isPermaLink="false">1729cff3-a8b0-43e5-9063-1f930b0570ce</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 13 Nov 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/7b03bdd0-6091-43a3-9bdf-439569cfd82d/TPP134-converted.mp3" length="30819506" type="audio/mpeg"/><itunes:duration>42:48</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Black Skinhead: A Conversation with Brandi Collins-Dexter</title><itunes:title>Black Skinhead: A Conversation with Brandi Collins-Dexter</itunes:title><description><![CDATA[<p>This episode features a discussion with Brandi Collins-Dexter, the author of the new book <a href="https://us.macmillan.com/books/9781250824110/blackskinhead" rel="noopener noreferrer" target="_blank"><em>BLACK SKINHEAD: Reflections on Blackness and Our Political Future</em></a>. Brandi is both an academic and a civil rights activist in the fight for media and tech justice, and her book is a rollercoaster ride through those issues by way of culture, music, and politics.
Part media and cultural criticism, part memoir, and part warning, the book takes us to the fringes of Black communities and tries to make sense of our political moment.</p>]]></description><content:encoded><![CDATA[<p>This episode features a discussion with Brandi Collins-Dexter, the author of the new book <a href="https://us.macmillan.com/books/9781250824110/blackskinhead" rel="noopener noreferrer" target="_blank"><em>BLACK SKINHEAD: Reflections on Blackness and Our Political Future</em></a>. Brandi is both an academic and a civil rights activist in the fight for media and tech justice, and her book is a rollercoaster ride through those issues by way of culture, music, and politics. Part media and cultural criticism, part memoir, and part warning, the book takes us to the fringes of Black communities and tries to make sense of our political moment.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/black-skinhead-a-conversation-with-brandi-collins-dexter]]></link><guid isPermaLink="false">b87d83d2-8d04-41d2-946b-f01f14b17f0b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 06 Nov 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/1a80d343-a91a-4245-9df1-e647bccc8e99/TPP133-converted.mp3" length="36636829" type="audio/mpeg"/><itunes:duration>50:53</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Examining Programmatic Political Advertising in the United States</title><itunes:title>Examining Programmatic Political Advertising in the United States</itunes:title><description><![CDATA[<p>As the U.S. midterm elections approach next week, there is a renewed focus on understanding the spending on and claims made in political advertising in digital channels, particularly on social media. But what is going on across the web, beyond the social media platforms?
</p><p>A <a href="https://techpolicy.unc.edu/wp-content/uploads/2022/09/UNC_CTP_Programmed-Political-Speech_final_corrected.pdf" rel="noopener noreferrer" target="_blank">recent report</a> from the University of North Carolina at Chapel Hill Center on Technology Policy found that as a result of restrictions on political ads instituted by major platforms ahead of the 2020 elections, political advertisers are increasingly turning to other platforms. Programmatic advertising accounts for a substantial and increasing share of political advertising, they say, and more attention needs to be paid to this complex and confusing ecosystem of companies, large and small, that serve up ads on websites, apps, streaming services, and other digitally connected devices. This episode features a discussion with the report's authors, <strong>J. Scott Babwah Brennen</strong> &amp; <strong>Matt Perault</strong>.</p>]]></description><content:encoded><![CDATA[<p>As the U.S. midterm elections approach next week, there is a renewed focus on understanding the spending on and claims made in political advertising in digital channels, particularly on social media. But what is going on across the web, beyond the social media platforms? </p><p>A <a href="https://techpolicy.unc.edu/wp-content/uploads/2022/09/UNC_CTP_Programmed-Political-Speech_final_corrected.pdf" rel="noopener noreferrer" target="_blank">recent report</a> from the University of North Carolina at Chapel Hill Center on Technology Policy found that as a result of restrictions on political ads instituted by major platforms ahead of the 2020 elections, political advertisers are increasingly turning to other platforms.
Programmatic advertising accounts for a substantial and increasing share of political advertising, they say, and more attention needs to be paid to this complex and confusing ecosystem of companies- large and small- that serve up ads on websites, apps, streaming services, and other digitally connected devices. This episode features a discussion with the report's authors, <strong>J. Scott Babwah Brennen</strong> &amp;<strong> Matt Perault</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/examining-programmatic-political-advertising-in-the-united-states]]></link><guid isPermaLink="false">3e027bea-ae27-435b-98c1-3aa52d952f5d</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 01 Nov 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/0453b392-0307-4199-9b74-b8f8282428b6/TPP132-converted.mp3" length="21982579" type="audio/mpeg"/><itunes:duration>36:38</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Danielle Citron on The Fight for Privacy</title><itunes:title>Danielle Citron on The Fight for Privacy</itunes:title><description><![CDATA[<p>Danielle Citron is the inaugural Jefferson Scholars Foundation Schenck Distinguished Professor in Law at the University of Virginia School of Law, where she teaches and writes about information privacy, free expression and civil rights. She is the vice president of the Cyber Civil Rights Initiative, a nonprofit devoted to fighting for civil rights and liberties in the digital age, and&nbsp;in 2019 she was named a MacArthur Fellow for her work on cyberstalking and intimate privacy. 
Her latest book, <a href="https://www.daniellecitron.com/the-fight-for-privacy-protecting-dignity-identity-and-love-in-our-digital-age/" rel="noopener noreferrer" target="_blank"><em>The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age</em></a>, published by W.W. Norton and Penguin Vintage UK, was released this month.</p>]]></description><content:encoded><![CDATA[<p>Danielle Citron is the inaugural Jefferson Scholars Foundation Schenck Distinguished Professor in Law at the University of Virginia School of Law, where she teaches and writes about information privacy, free expression and civil rights. She is the vice president of the Cyber Civil Rights Initiative, a nonprofit devoted to fighting for civil rights and liberties in the digital age, and&nbsp;in 2019 she was named a MacArthur Fellow for her work on cyberstalking and intimate privacy. Her latest book, <a href="https://www.daniellecitron.com/the-fight-for-privacy-protecting-dignity-identity-and-love-in-our-digital-age/" rel="noopener noreferrer" target="_blank"><em>The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age</em></a>, published by W.W. Norton and Penguin Vintage UK, was released this month.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/danielle-citron-on-the-fight-for-privacy]]></link><guid isPermaLink="false">3ae6a72c-900b-48de-9e36-8b33c8f2f9e9</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 30 Oct 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f6068e9c-3ca4-49ce-b44a-d40db36f5cf4/TPP131-converted.mp3" length="38676284" type="audio/mpeg"/><itunes:duration>53:43</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Elections, Misinformation, and Political Discourse in U.S. 
Latino Communities</title><itunes:title>Elections, Misinformation, and Political Discourse in U.S. Latino Communities</itunes:title><description><![CDATA[<p>In this episode of the podcast, we present two segments that explore how the combination of media, platforms, politics and people plays out in Latino communities in the U.S., particularly at crucial moments for democracy, such as at election time. The first segment is with individuals who are leading efforts to understand and confront mis- and disinformation targeting Latino communities:</p><ul><li><strong>Roberta Braga</strong>, Director of Counter-Disinformation Strategies at Equis</li><li><strong>Jaime Longoria</strong>, Manager of Research and Training for the Disinfo Defense League at Media Democracy Fund.</li></ul><br/><p>And the second segment is a discussion with two researchers at the University of Texas at Austin who spent the summer talking specifically to Latino users of WhatsApp about how the political discourse plays out in their communities on that widely used messaging app, <a href="https://techpolicy.press/whatsapp-misinformation-and-latino-political-discourse-in-the-u-s/" rel="noopener noreferrer" target="_blank">and wrote about it for <em>Tech Policy Press</em></a> as part of a special series of essays on race, ethnicity, technology and elections:</p><ul><li><strong>Inga Kristina Trauthig</strong>, Ph.D., Research Manager of the Propaganda Research Lab at the Center for Media Engagement at The University of Texas at Austin</li><li><strong>Kayo Mimizuka</strong>, Graduate Research Assistant at the Center for Media Engagement and a Ph.D. 
student in the School of Journalism and Media at The University of Texas at Austin.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>In this episode of the podcast, we present two segments that explore how the combination of media, platforms, politics and people plays out in Latino communities in the U.S., particularly at crucial moments for democracy, such as at election time. The first segment is with individuals who are leading efforts to understand and confront mis- and disinformation targeting Latino communities:</p><ul><li><strong>Roberta Braga</strong>, Director of Counter-Disinformation Strategies at Equis</li><li><strong>Jaime Longoria</strong>, Manager of Research and Training for the Disinfo Defense League at Media Democracy Fund.</li></ul><br/><p>And the second segment is a discussion with two researchers at the University of Texas at Austin who spent the summer talking specifically to Latino users of WhatsApp about how the political discourse plays out in their communities on that widely used messaging app, <a href="https://techpolicy.press/whatsapp-misinformation-and-latino-political-discourse-in-the-u-s/" rel="noopener noreferrer" target="_blank">and wrote about it for <em>Tech Policy Press</em></a> as part of a special series of essays on race, ethnicity, technology and elections:</p><ul><li><strong>Inga Kristina Trauthig</strong>, Ph.D., Research Manager of the Propaganda Research Lab at the Center for Media Engagement at The University of Texas at Austin</li><li><strong>Kayo Mimizuka</strong>, Graduate Research Assistant at the Center for Media Engagement and a Ph.D. 
student in the School of Journalism and Media at The University of Texas at Austin.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/elections-misinformation-and-political-discourse-in-u-s-latino-communities]]></link><guid isPermaLink="false">0880b398-be14-41f3-ae47-337fd1342726</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 25 Oct 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/cd5fe186-3724-4b85-875f-988093ba00d1/TPP130-converted.mp3" length="46099069" type="audio/mpeg"/><itunes:duration>01:04:02</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Platform Election Policies, Now and Then</title><itunes:title>Platform Election Policies, Now and Then</itunes:title><description><![CDATA[<p>In recent episodes of this podcast we’ve explored the policies and practices of the social media platforms with regard to elections. In this week’s episode, we’ll hear two segments on this theme. First, an interview with <strong>Daniel Kreiss</strong>, an Associate Professor in the Hussman School of Journalism and Media at the University of North Carolina at Chapel Hill and a principal researcher at the UNC Center for Information, Technology, and Public Life. With Ph.D. candidate Erik Brooks, Daniel is the author of <a href="https://techpolicy.press/looking-to-the-midterms-the-state-of-platform-policies-on-u-s-political-speech/" rel="noopener noreferrer" target="_blank"><strong><em>Looking to the Midterms: The State of Platform Policies on U.S. 
Political Speech</em></strong></a>, a recent post at Tech Policy Press.</p><p>In the second segment, we zoom out and discuss the trajectory of tech company policies on elections over the last twenty-six years with <strong>Katie Harbath</strong> and <strong>Collier Fernekes</strong>, authors of a <a href="https://bipartisanpolicy.org/report/history-tech-elections/" rel="noopener noreferrer" target="_blank"><strong>recent report</strong></a> for the Bipartisan Policy Center that was based on an archive of public announcements made by the firms.&nbsp;Katie is a former Facebook public policy director and now leads Anchor Change, a consultancy she started after leaving the tech company. Collier is a research analyst at the Bipartisan Policy Center.</p>]]></description><content:encoded><![CDATA[<p>In recent episodes of this podcast we’ve explored the policies and practices of the social media platforms with regard to elections. In this week’s episode, we’ll hear two segments on this theme. First, an interview with <strong>Daniel Kreiss</strong>, an Associate Professor in the Hussman School of Journalism and Media at the University of North Carolina at Chapel Hill and a principal researcher at the UNC Center for Information, Technology, and Public Life. With Ph.D. candidate Erik Brooks, Daniel is the author of <a href="https://techpolicy.press/looking-to-the-midterms-the-state-of-platform-policies-on-u-s-political-speech/" rel="noopener noreferrer" target="_blank"><strong><em>Looking to the Midterms: The State of Platform Policies on U.S. 
Political Speech</em></strong></a>, a recent post at Tech Policy Press.</p><p>In the second segment, we zoom out and discuss the trajectory of tech company policies on elections over the last twenty-six years with <strong>Katie Harbath</strong> and <strong>Collier Fernekes</strong>, authors of a <a href="https://bipartisanpolicy.org/report/history-tech-elections/" rel="noopener noreferrer" target="_blank"><strong>recent report</strong></a> for the Bipartisan Policy Center that was based on an archive of public announcements made by the firms.&nbsp;Katie is a former Facebook public policy director and now leads Anchor Change, a consultancy she started after leaving the tech company. Collier is a research analyst at the Bipartisan Policy Center.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/platform-election-policies-now-and-then]]></link><guid isPermaLink="false">ad747e36-7b9b-45f9-b5a9-0c055c2d15af</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 23 Oct 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/8c5dec8f-70bf-429e-b6a2-c8eae6dce3ac/TPP129-converted.mp3" length="34336268" type="audio/mpeg"/><itunes:duration>57:14</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Contending with Spyware and Oppression in Thailand</title><itunes:title>Contending with Spyware and Oppression in Thailand</itunes:title><description><![CDATA[<p>Earlier this year, an investigation published in the <em>New Yorker</em> by Ronan Farrow suggested that commercial spyware called Pegasus, developed by the Israeli firm NSO Group, is being used by governments in at least 45 countries around the world, including by U.S. and European intelligence and law enforcement services. 
The technology permits government agents to gain access to the contents of cell phones by exploiting flaws in device operating systems and software.&nbsp;</p><p>In this episode, we hear from three individuals in Bangkok, Thailand: pro-democracy activists who have seen their community targeted with Pegasus, part of a range of activities intended to discourage dissent and limit free expression:</p><ul><li><strong>Yingcheep Atchanont</strong>, a program manager at iLaw</li><li><strong>Ruchapong Chamjirachaikul</strong>, advocacy officer at iLaw</li><li><strong>Darika Bamrungchok</strong>, a program manager at Thai Netizen</li></ul><br/>]]></description><content:encoded><![CDATA[<p>Earlier this year, an investigation published in the <em>New Yorker</em> by Ronan Farrow suggested that commercial spyware called Pegasus, developed by the Israeli firm NSO Group, is being used by governments in at least 45 countries around the world, including by U.S. and European intelligence and law enforcement services. 
The technology permits government agents to gain access to the contents of cell phones by exploiting flaws in device operating systems and software.&nbsp;</p><p>In this episode, we hear from three individuals in Bangkok, Thailand: pro-democracy activists who have seen their community targeted with Pegasus, part of a range of activities intended to discourage dissent and limit free expression:</p><ul><li><strong>Yingcheep Atchanont</strong>, a program manager at iLaw</li><li><strong>Ruchapong Chamjirachaikul</strong>, advocacy officer at iLaw</li><li><strong>Darika Bamrungchok</strong>, a program manager at Thai Netizen</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/contending-with-spyware-and-oppression-in-thailand]]></link><guid isPermaLink="false">de5c3902-8590-44f8-9f5c-4aabe6656560</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 16 Oct 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/91c1504a-fc93-41b8-90f0-18af745e9b69/TPP128-converted.mp3" length="23691078" type="audio/mpeg"/><itunes:duration>39:29</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Model Suggests Digital Media Contributing to “Maelstrom” of Societal Division</title><itunes:title>Model Suggests Digital Media Contributing to “Maelstrom” of Societal Division</itunes:title><description><![CDATA[<p>Regular users of social media platforms are well aware that they often produce toxic discourse. Scholars continue to produce results that bring clarity to the mechanisms by which digital and social media exacerbate partisan and identity-based conflict. 
A better understanding is crucial for keying in on what platforms should be held responsible for, devising better policy, and potentially designing solutions.&nbsp;</p><p>A <a href="https://www.pnas.org/doi/10.1073/pnas.2207159119" rel="noopener noreferrer" target="_blank">new peer-reviewed paper</a> from Petter Törnberg, a researcher at the University of Amsterdam Institute for Social Science Research, contributes to this understanding by developing a computational model that “suggests that digital media polarize through partisan sorting, creating a maelstrom in which more and more identities, beliefs, and cultural preferences become drawn into an all-encompassing societal division.”&nbsp;</p>]]></description><content:encoded><![CDATA[<p>Regular users of social media platforms are well aware that they often produce toxic discourse. Scholars continue to produce results that bring clarity to the mechanisms by which digital and social media exacerbate partisan and identity-based conflict. 
A better understanding is crucial for keying in on what platforms should be held responsible for, devising better policy, and potentially designing solutions.&nbsp;</p><p>A <a href="https://www.pnas.org/doi/10.1073/pnas.2207159119" rel="noopener noreferrer" target="_blank">new peer-reviewed paper</a> from Petter Törnberg, a researcher at the University of Amsterdam Institute for Social Science Research, contributes to this understanding by developing a computational model that “suggests that digital media polarize through partisan sorting, creating a maelstrom in which more and more identities, beliefs, and cultural preferences become drawn into an all-encompassing societal division.”&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/model-suggests-digital-media-contributing-to-maelstrom-of-societal-division]]></link><guid isPermaLink="false">3c692b54-926e-42e7-a3ea-5d4d9f5a17fd</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Thu, 13 Oct 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f6f729b6-ee25-4b5d-9778-df9f731538a9/TPP127-converted.mp3" length="23103157" type="audio/mpeg"/><itunes:duration>32:05</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Unpacking the Blueprint for an AI Bill of Rights</title><itunes:title>Unpacking the Blueprint for an AI Bill of Rights</itunes:title><description><![CDATA[<p>Last week, President <strong>Joe Biden</strong>’s White House published a 73-page document produced by the Office of Science and Technology Policy titled <a href="https://www.whitehouse.gov/ostp/ai-bill-of-rights/" rel="noopener noreferrer" target="_blank"><em>Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People</em></a>. 
</p><p>The White House says that “among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public.” The Blueprint, then, is “a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values.”</p><p>To discuss the blueprint and the broader context into which it was introduced, <em>Tech Policy Press</em> spoke to one expert who had a hand in writing it, and one external observer who follows these issues closely. Joining the discussion are <strong>Suresh Venkatasubramanian</strong>, a professor of computer science and data science and director of the Data Science Initiative at Brown University, who recently completed a 15-month appointment as an advisor to the White House Office of Science and Technology Policy; and <strong>Alex Engler</strong>, a fellow at the Brookings Institution, where he researches algorithms and policy.</p>]]></description><content:encoded><![CDATA[<p>Last week, President <strong>Joe Biden</strong>’s White House published a 73-page document produced by the Office of Science and Technology Policy titled <a href="https://www.whitehouse.gov/ostp/ai-bill-of-rights/" rel="noopener noreferrer" target="_blank"><em>Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People</em></a>. </p><p>The White House says that “among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public.” The Blueprint, then, is “a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values.”</p><p>To discuss the blueprint and the broader context into which it was introduced, <em>Tech Policy Press</em> spoke to one expert who had a hand in writing it, and one external observer who follows these issues closely. 
Joining the discussion are <strong>Suresh Venkatasubramanian</strong>, a professor of computer science and data science and director of the Data Science Initiative at Brown University, who recently completed a 15-month appointment as an advisor to the White House Office of Science and Technology Policy; and <strong>Alex Engler</strong>, a fellow at the Brookings Institution, where he researches algorithms and policy.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/unpacking-the-blueprint-for-an-ai-bill-of-rights]]></link><guid isPermaLink="false">f28600a2-fc2f-4bf9-a1a3-cb978b99df87</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 11 Oct 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d77abd08-028d-4816-948f-3a8f6131ec70/TPP126-converted.mp3" length="29489956" type="audio/mpeg"/><itunes:duration>49:09</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Debate Over Content Moderation Heads to the Supreme Court</title><itunes:title>Debate Over Content Moderation Heads to the Supreme Court</itunes:title><description><![CDATA[<p>Some of the most controversial debates over speech and content moderation on social media platforms are now due for consideration in the Supreme Court. Last month, Florida’s attorney general asked the Court to decide whether states have the right to regulate how social media companies moderate content on their services, after Florida and Texas passed laws that challenge practices of tech firms that lawmakers there regard as anti-democratic. 
And this month, the Supreme Court decided to hear two cases that will have bearing on interpretation of Section 230 of the Communications Decency Act, which generally provides platforms with immunity from legal liability for user generated content.&nbsp;</p><p>To talk about these various developments, <strong>Justin Hendrix</strong> spoke to three people covering these issues closely. Guests include:</p><ul><li><strong>Brandie Nonnecke</strong>, Director of the CITRIS Policy Lab at UC Berkeley and the Director of Our Better Web</li><li><strong>Jameel Jaffer</strong>, Director of the Knight First Amendment Institute at Columbia University</li><li><strong>Will Oremus</strong>, a news analysis writer focused on tech and society at <em>The Washington Post</em></li></ul><br/><p>The guests also made time to discuss <strong>Elon Musk</strong>’s on-again, off-again pursuit of Twitter, which appears to be on-again, and how his potential acquisition of the company relates to the broader debate around speech and moderation issues.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>Some of the most controversial debates over speech and content moderation on social media platforms are now due for consideration in the Supreme Court. Last month, Florida’s attorney general asked the Court to decide whether states have the right to regulate how social media companies moderate content on their services, after Florida and Texas passed laws that challenge practices of tech firms that lawmakers there regard as anti-democratic. And this month, the Supreme Court decided to hear two cases that will have bearing on interpretation of Section 230 of the Communications Decency Act, which generally provides platforms with immunity from legal liability for user generated content.&nbsp;</p><p>To talk about these various developments, <strong>Justin Hendrix</strong> spoke to three people covering these issues closely. 
Guests include:</p><ul><li><strong>Brandie Nonnecke</strong>, Director of the CITRIS Policy Lab at UC Berkeley and the Director of Our Better Web</li><li><strong>Jameel Jaffer</strong>, Director of the Knight First Amendment Institute at Columbia University</li><li><strong>Will Oremus</strong>, a news analysis writer focused on tech and society at <em>The Washington Post</em></li></ul><br/><p>The guests also made time to discuss <strong>Elon Musk</strong>’s on-again, off-again pursuit of Twitter, which appears to be on-again, and how his potential acquisition of the company relates to the broader debate around speech and moderation issues.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/debate-over-content-moderation-heads-to-the-supreme-court]]></link><guid isPermaLink="false">9363609f-46a5-4c2f-afb5-07e3088f066b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 09 Oct 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/adc333b1-f375-4fef-a54e-5f95cab4a5e8/TPP125-converted.mp3" length="33243469" type="audio/mpeg"/><itunes:duration>46:10</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Digital Governance and the State of Democracy: Why Does it Matter?</title><itunes:title>Digital Governance and the State of Democracy: Why Does it Matter?</itunes:title><description><![CDATA[<p>On September 21, <strong>Justin Hendrix</strong> moderated a panel discussion for the McCourt Institute at a pre-conference spotlight session on digital governance ahead of <em>Unfinished Live</em>, a conference on tech and society issues hosted at The Shed in New York City.&nbsp;The topic given by the organizers was <em>Digital Governance and the State of Democracy: Why Does it Matter?</em>&nbsp;</p><p>Panelists included: &nbsp;</p><ul><li><strong>Erik Brynjolfsson</strong>, the Jerry 
Yang and Akiko Yamazaki Professor and Senior Fellow, Stanford Institute for Human-Centered AI (HAI) and Director of the Stanford Digital Economy Lab</li><li><strong>Maggie Little</strong>, Director of the Ethics Lab at Georgetown University</li><li><strong>Eli Pariser</strong>, Co-Director of New_Public, an initiative focused on developing better digital public spaces; and</li><li><strong>Eric Salobir</strong>, the Chair of the Executive Committee, Human Technology Foundation, a research and action network placing the human being at the heart of technology development</li></ul><br/>]]></description><content:encoded><![CDATA[<p>On September 21, <strong>Justin Hendrix</strong> moderated a panel discussion for the McCourt Institute at a pre-conference spotlight session on digital governance ahead of <em>Unfinished Live</em>, a conference on tech and society issues hosted at The Shed in New York City.&nbsp;The topic given by the organizers was <em>Digital Governance and the State of Democracy: Why Does it Matter?</em>&nbsp;</p><p>Panelists included: &nbsp;</p><ul><li><strong>Erik Brynjolfsson</strong>, the Jerry Yang and Akiko Yamazaki Professor and Senior Fellow, Stanford Institute for Human-Centered AI (HAI) and Director of the Stanford Digital Economy Lab</li><li><strong>Maggie Little</strong>, Director of the Ethics Lab at Georgetown University</li><li><strong>Eli Pariser</strong>, Co-Director of New_Public, an initiative focused on developing better digital public spaces; and</li><li><strong>Eric Salobir</strong>, the Chair of the Executive Committee, Human Technology Foundation, a research and action network placing the human being at the heart of technology development</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/digital-governance-and-the-state-of-democracy-why-does-it-matter]]></link><guid isPermaLink="false">abef7fdf-fd87-47c1-ae1c-ae8324c2bc51</guid><itunes:image 
href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sat, 08 Oct 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/37f1d6f5-d87d-4d50-8a3d-722c8687baf0/TPP123-converted.mp3" length="39565223" type="audio/mpeg"/><itunes:duration>47:06</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Supreme Court Takes Up Two Cases That Could Transform the Internet</title><itunes:title>The Supreme Court Takes Up Two Cases That Could Transform the Internet</itunes:title><description><![CDATA[<p>On Monday, the U.S. Supreme Court agreed to hear two cases that concern whether tech platforms can be held liable for user generated content, as well as for content that users see because of a platform’s algorithmic systems.&nbsp;In deciding to hear <em>Gonzalez et al vs. Google</em> and <em>Taamneh, Mehier et al vs Twitter et al</em>, the Court will broach the question of whether Section 230 of the Communications Decency Act should be narrowed, and whether it still immunizes the owners of websites when they algorithmically “recommend” third-party content into a user’s feed.</p><p>To learn more about these cases and the potential implications of the Court’s decision, <em>Tech Policy Press</em> spoke to an expert on tech and internet law: <strong>Anupam Chander</strong>, the Scott K. Ginsberg Professor of Law and Technology at Georgetown University.</p>]]></description><content:encoded><![CDATA[<p>On Monday, the U.S. Supreme Court agreed to hear two cases that concern whether tech platforms can be held liable for user generated content, as well as for content that users see because of a platform’s algorithmic systems.&nbsp;In deciding to hear <em>Gonzalez et al vs. 
Google</em> and <em>Taamneh, Mehier et al vs Twitter et al</em>, the Court will broach the question of whether Section 230 of the Communications Decency Act should be narrowed, and whether it still immunizes the owners of websites when they algorithmically “recommend” third-party content into a user’s feed.</p><p>To learn more about these cases and the potential implications of the Court’s decision, <em>Tech Policy Press</em> spoke to an expert on tech and internet law: <strong>Anupam Chander</strong>, the Scott K. Ginsberg Professor of Law and Technology at Georgetown University.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/the-supreme-court-takes-up-two-cases-that-could-transform-the-internet]]></link><guid isPermaLink="false">4d73af73-c549-42c1-9f31-ae1ad3568471</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 04 Oct 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/3466bf2b-0b32-4f31-b34b-abd873baad39/TPP122-converted.mp3" length="17040818" type="audio/mpeg"/><itunes:duration>28:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Election Misinformation Thrives on Major Social Media Platforms</title><itunes:title>Election Misinformation Thrives on Major Social Media Platforms</itunes:title><description><![CDATA[<p>The former President and his supporters continue to sow doubt in the outcome of the 2020 election, and in the election system more generally. Now, with the 2022 midterm elections just a month away, a number of observers are perplexed at the posture of large social media platforms, where false claims continue to fester and efforts to mitigate misinformation always seem puny compared to the scale of the problem. 
</p><p>This week we hear from three experts who are following these issues closely:&nbsp;</p><ul><li><strong>Nora Benavidez</strong>, Senior Counsel and Director of Digital Justice and Civil Rights, Free Press</li><li><strong>Paul Barrett</strong>, Deputy Director, Center for Business &amp; Human Rights, NYU Stern School of Business</li><li><strong>Mike Caulfield</strong>, Research Scientist at the Center for an Informed Public, University of Washington</li></ul><br/>]]></description><content:encoded><![CDATA[<p>The former President and his supporters continue to sow doubt in the outcome of the 2020 election, and in the election system more generally. Now, with the 2022 midterm elections just a month away, a number of observers are perplexed at the posture of large social media platforms, where false claims continue to fester and efforts to mitigate misinformation always seem puny compared to the scale of the problem. </p><p>This week we hear from three experts who are following these issues closely:&nbsp;</p><ul><li><strong>Nora Benavidez</strong>, Senior Counsel and Director of Digital Justice and Civil Rights, Free Press</li><li><strong>Paul Barrett</strong>, Deputy Director, Center for Business &amp; Human Rights, NYU Stern School of Business</li><li><strong>Mike Caulfield</strong>, Research Scientist at the Center for an Informed Public, University of Washington</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/election-misinformation-thrives-on-major-social-media-platforms]]></link><guid isPermaLink="false">08b45081-5876-425f-8771-f6b4d30ed2ab</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 02 Oct 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9db3386f-2cdc-4c57-89ea-12b21a5612e5/TPP121-converted.mp3" length="30012331" 
type="audio/mpeg"/><itunes:duration>50:01</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Contemplating the &quot;Uselessness&quot; of AI Ethics</title><itunes:title>Contemplating the &quot;Uselessness&quot; of AI Ethics</itunes:title><description><![CDATA[<p>In a new paper, "<a href="https://link.springer.com/article/10.1007/s43681-022-00209-w" rel="noopener noreferrer" target="_blank">The uselessness of AI Ethics</a>," published in the online edition of the journal <em>AI and Ethics</em>, <strong>Luke Munn</strong> points to over 80 lists of AI ethical principles produced by governments, corporations, research groups and professional societies. In his paper, he expresses concern that most of these ethics statements deal in vague terms and lack any kind of actual enforcement. But in critiquing attempts at defining an ethical code for AI, he is not suggesting we let the technology develop in a technical vacuum. On the contrary, he wants us to think more deeply about the potential problems in deploying AI. </p><p>In this episode of the podcast, <strong>Mark Hansen</strong>, Director of the Brown&nbsp;Institute for Media Innovation and a professor at Columbia Journalism School, speaks with Munn about his ideas, which are part of a growing movement that sees the problems with AI less in purely computational terms and more as an area of social science.</p>]]></description><content:encoded><![CDATA[<p>In a new paper, "<a href="https://link.springer.com/article/10.1007/s43681-022-00209-w" rel="noopener noreferrer" target="_blank">The uselessness of AI Ethics</a>," published in the online edition of the journal <em>AI and Ethics</em>, <strong>Luke Munn</strong> points to over 80 lists of AI ethical principles produced by governments, corporations, research groups and professional societies. 
In his paper, he expresses concern that most of these ethics statements deal in vague terms and lack any kind of actual enforcement. But in critiquing attempts at defining an ethical code for AI, he is not suggesting we let the technology develop in a technical vacuum. On the contrary, he wants us to think more deeply about the potential problems in deploying AI. </p><p>In this episode of the podcast, <strong>Mark Hansen</strong>, Director of the Brown&nbsp;Institute for Media Innovation and a professor at Columbia Journalism School, speaks with Munn about his ideas, which are part of a growing movement that sees the problems with AI less in purely computational terms and more as an area of social science.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/contemplating-the-uselessness-of-ai-ethics]]></link><guid isPermaLink="false">0781e5b8-3498-431c-8bf8-1308fe4574a8</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 27 Sep 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/2cd0f73e-25c4-43ba-b79f-a27e69c73964/TPP119-converted.mp3" length="28190233" type="audio/mpeg"/><itunes:duration>46:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Trust and Safety Comes of Age?</title><itunes:title>Trust and Safety Comes of Age?</itunes:title><description><![CDATA[<p>As content moderation and other trust and safety issues have been, to put it mildly, at the fore of tech concerns over the last few years, it’s interesting to take a step back and look at the various conferences, professional organizations and research communities that have emerged to address this broad and challenging set of subjects. 
To get a sense of where trust and safety is as a field at this moment in time, <em>Tech Policy Press</em> spoke to three individuals involved in it, each coming from different perspectives:</p><p><strong>Shelby Grossman</strong>, a research scholar at the Stanford Internet Observatory and a leader in the community of academic researchers studying trust and safety issues as co-editor of the recently launched <a href="https://tsjournal.org/index.php/jots" rel="noopener noreferrer" target="_blank"><em>Journal of Online Trust and Safety</em></a>;</p><p><strong>David Sullivan</strong>, the leader of the <a href="https://dtspartnership.org/" rel="noopener noreferrer" target="_blank">Digital Trust and Safety Partnership</a>, an industry-funded consortium focused on developing best practices for the field; and</p><p><strong>Jeff Allen</strong>, co-founder and chief research officer of an independent membership organization of trust and safety professionals, the <a href="https://integrityinstitute.org/" rel="noopener noreferrer" target="_blank">Integrity Institute</a>.</p>]]></description><content:encoded><![CDATA[<p>As content moderation and other trust and safety issues have been, to put it mildly, at the fore of tech concerns over the last few years, it’s interesting to take a step back and look at the various conferences, professional organizations and research communities that have emerged to address this broad and challenging set of subjects. 
To get a sense of where trust and safety is as a field at this moment in time, <em>Tech Policy Press</em> spoke to three individuals involved in it, each coming from different perspectives:</p><p><strong>Shelby Grossman</strong>, a research scholar at the Stanford Internet Observatory and a leader in the community of academic researchers studying trust and safety issues as co-editor of the recently launched <a href="https://tsjournal.org/index.php/jots" rel="noopener noreferrer" target="_blank"><em>Journal of Online Trust and Safety</em></a>;</p><p><strong>David Sullivan</strong>, the leader of the <a href="https://dtspartnership.org/" rel="noopener noreferrer" target="_blank">Digital Trust and Safety Partnership</a>, an industry-funded consortium focused on developing best practices for the field; and</p><p><strong>Jeff Allen</strong>, co-founder and chief research officer of an independent membership organization of trust and safety professionals, the <a href="https://integrityinstitute.org/" rel="noopener noreferrer" target="_blank">Integrity Institute</a>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/trust-and-safety-comes-of-age]]></link><guid isPermaLink="false">c5fe54df-1bba-4447-bf0f-c544d745dd09</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 25 Sep 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/bb8b7542-c270-4ffb-a4ac-7869664c4af5/TPP120-converted.mp3" length="36011541" type="audio/mpeg"/><itunes:duration>50:01</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Can Big Tech Platforms Operate Responsibly on a Global Scale?</title><itunes:title>Can Big Tech Platforms Operate Responsibly on a Global Scale?</itunes:title><description><![CDATA[<p>A <a href="https://www.article19.org/bridging-the-gap-local-voices-in-content-moderation/" 
rel="noopener noreferrer" target="_blank">series of reports</a> published this summer by Article 19, working with UNESCO and with funding from the European Union, takes an in-depth look at how social media platforms operate in a global context, documenting a lack of understanding of cultural nuances and local languages, insufficient mechanisms for users and civil society groups to engage on moderation, a lack of transparency, and a power asymmetry that leaves local actors feeling powerless.</p><p>To learn more about <a href="https://www.article19.org/wp-content/uploads/2022/06/Summary-report-social-media-for-peace.pdf" rel="noopener noreferrer" target="_blank">the project and its recommendations</a>, in this episode we hear from four individuals involved in the drafting of the reports:</p><ul><li><strong>Pierre François Docquir, </strong>Head of Media Freedom, ARTICLE 19, who led the project globally;</li><li><strong>Roberta Taveri</strong>, an ARTICLE 19 program officer who played a role in delivering the research on Bosnia and Herzegovina;</li><li><strong>Catherine Muya</strong> from ARTICLE 19 East Africa, who focused on Kenya; and</li><li><strong>Sherly Haristya</strong>, PhD, an independent researcher who conducted the research on Indonesia.</li></ul><br/>]]></description><content:encoded><![CDATA[<p>A <a href="https://www.article19.org/bridging-the-gap-local-voices-in-content-moderation/" rel="noopener noreferrer" target="_blank">series of reports</a> published this summer by Article 19, working with UNESCO and with funding from the European Union, takes an in-depth look at how social media platforms operate in a global context, documenting a lack of understanding of cultural nuances and local languages, insufficient mechanisms for users and civil society groups to engage on moderation, a lack of transparency, and a power asymmetry that leaves local actors feeling powerless.</p><p>To learn more about <a 
href="https://www.article19.org/wp-content/uploads/2022/06/Summary-report-social-media-for-peace.pdf" rel="noopener noreferrer" target="_blank">the project and its recommendations</a>, in this episode we hear from four individuals involved in the drafting of the reports:</p><ul><li><strong>Pierre François Docquir, </strong>Head of Media Freedom, ARTICLE 19, who led the project globally;</li><li><strong>Roberta Taveri</strong>, an ARTICLE 19 program officer who played a role in delivering the research on Bosnia and Herzegovina;</li><li><strong>Catherine Muya</strong> from ARTICLE 19 East Africa, who focused on Kenya; and</li><li><strong>Sherly Haristya</strong>, PhD, an independent researcher who conducted the research on Indonesia.</li></ul><br/>]]></content:encoded><link><![CDATA[https://techpolicy.press/can-big-tech-platforms-operate-responsibly-on-a-global-scale]]></link><guid isPermaLink="false">07b0070a-ca81-4bc7-b91d-0f73b4ac620c</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 18 Sep 2022 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b41c2da8-2b4f-4398-9c90-a3e1e1221e86/TPP118-converted.mp3" length="32353985" type="audio/mpeg"/><itunes:duration>44:56</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Understanding Digital Dragnets: Surveillance in the Age of Smartphones</title><itunes:title>Understanding Digital Dragnets: Surveillance in the Age of Smartphones</itunes:title><description><![CDATA[<p>In this episode of the <em>Tech Policy Press</em> podcast, we’re going to explore how law enforcement and other government agencies in the United States acquire data drawn from commercial data brokers for investigative purposes, and the questions raised by these practices.</p><p>This is an issue that is still in question in the nation’s courts and is under active discussion on 
Capitol Hill. For instance, this summer the House Judiciary Committee <a href="https://techpolicy.press/bipartisan-support-for-fourth-amendment-is-not-for-sale-act-at-house-judiciary-hearing/" rel="noopener noreferrer" target="_blank">hosted a hearing</a> it titled <a href="https://judiciary.house.gov/calendar/eventsingle.aspx?EventID=4983" rel="noopener noreferrer" target="_blank"><em>Digital Dragnets: Examining the Government's Access to Your Personal Data</em></a>.&nbsp;At the hearing, expert witnesses testified that government agencies at all levels, including federal agencies such as the Department of Homeland Security (DHS), Central Intelligence Agency (CIA), Internal Revenue Service (IRS), the Department of Defense (DOD), as well as state and local law enforcement, are collecting a massive amount of personal data on American citizens, sidestepping constitutional protections against unwarranted search and seizure provided in the Fourth Amendment. The hearing included discussion of the proposed <a href="https://techpolicy.press/wp-content/uploads/2022/07/BILLS-117hr2738ih.pdf" rel="noopener noreferrer" target="_blank">Fourth Amendment is Not For Sale Act</a>, which would restrict government entities from engaging in such practices.</p><p>But while the courts and Congress deliberate, government agencies are acquiring this information from software providers, including one such firm that was the subject of a recent investigative report from the Associated Press titled <a href="https://apnews.com/article/technology-police-government-surveillance-d395409ef5a8c6c3f6cdab5b1d0e27ef" rel="noopener noreferrer" target="_blank"><em>Tech tool offers police ‘mass surveillance on a budget’</em></a>. 
Today, I’m joined by the two reporters who spent months trying to understand how a little-known company in Virginia goes about acquiring commercially available data and selling it to police departments across the country: global investigative journalist <strong>Garance Burke</strong> and national investigative reporter <strong>Jason Dearen</strong>.</p>]]></description><content:encoded><![CDATA[<p>In this episode of the <em>Tech Policy Press</em> podcast, we’re going to explore how law enforcement and other government agencies in the United States acquire data drawn from commercial data brokers for investigative purposes, and the questions raised by these practices.</p><p>This is an issue that is still in question in the nation’s courts and is under active discussion on Capitol Hill. For instance, this summer the House Judiciary Committee <a href="https://techpolicy.press/bipartisan-support-for-fourth-amendment-is-not-for-sale-act-at-house-judiciary-hearing/" rel="noopener noreferrer" target="_blank">hosted a hearing</a> it titled <a href="https://judiciary.house.gov/calendar/eventsingle.aspx?EventID=4983" rel="noopener noreferrer" target="_blank"><em>Digital Dragnets: Examining the Government's Access to Your Personal Data</em></a>.&nbsp;At the hearing, expert witnesses testified that government agencies at all levels, including federal agencies such as the Department of Homeland Security (DHS), Central Intelligence Agency (CIA), Internal Revenue Service (IRS), the Department of Defense (DOD), as well as state and local law enforcement, are collecting a massive amount of personal data on American citizens, sidestepping constitutional protections against unwarranted search and seizure provided in the Fourth Amendment. 
The hearing included discussion of the proposed <a href="https://techpolicy.press/wp-content/uploads/2022/07/BILLS-117hr2738ih.pdf" rel="noopener noreferrer" target="_blank">Fourth Amendment is Not For Sale Act</a>, which would restrict government entities from engaging in such practices.</p><p>But while the courts and Congress deliberate, government agencies are acquiring this information from software providers, including one such firm that was the subject of a recent investigative report from the Associated Press titled <a href="https://apnews.com/article/technology-police-government-surveillance-d395409ef5a8c6c3f6cdab5b1d0e27ef" rel="noopener noreferrer" target="_blank"><em>Tech tool offers police ‘mass surveillance on a budget’</em></a>. Today, I’m joined by the two reporters who spent months trying to understand how a little-known company in Virginia goes about acquiring commercially available data and selling it to police departments across the country: global investigative journalist <strong>Garance Burke</strong> and national investigative reporter <strong>Jason Dearen</strong>.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/understanding-digital-dragnets-surveillance-in-the-age-of-smartphones]]></link><guid isPermaLink="false">2d091c0a-f107-4abf-8029-83fe6c06c23b</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Wed, 14 Sep 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/5caab426-90af-4b7d-983d-19195f20d807/TPP117-converted.mp3" length="24661370" type="audio/mpeg"/><itunes:duration>34:15</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Mitigating Election Disinformation in Brazil</title><itunes:title>Mitigating Election Disinformation in Brazil</itunes:title><description><![CDATA[<p>It is well understood that for all the shortcomings of the tech 
platforms’ approach to elections in this country, it’s much worse abroad, where language and cultural barriers often combine with fewer political and business incentives for firms such as Meta, Twitter, YouTube and TikTok to properly resource elections. </p><p>Now, just weeks before a general election in Brazil that will decide that country’s next President, there are signs that disinformation is rife on the platforms, with many observers concerned about the potential for violence. To learn more, <strong>Justin Hendrix </strong>spoke to two experts involved in efforts to identify and mitigate disinformation in Brazil: <strong>João Brant</strong>, coordinator of <strong>desinformante, </strong>an initiative of the nonprofit Ponteio Comunicação, Information and Culture and the Instituto Cultura e Democracia in Brazil, and <strong>Flora Rebello Arduini</strong>, Campaigns Director at <strong>SumOfUs</strong>, a global activist community that seeks to curb the growing power of corporations.</p>]]></description><content:encoded><![CDATA[<p>It is well understood that for all the shortcomings of the tech platforms’ approach to elections in this country, it’s much worse abroad, where language and cultural barriers often combine with fewer political and business incentives for firms such as Meta, Twitter, YouTube and TikTok to properly resource elections. 
To learn more, <strong>Justin Hendrix </strong>spoke to two experts involved in efforts to identify and mitigate disinformation in Brazil: <strong>João Brant</strong>, coordinator of <strong>desinformante, </strong>an initiative of the nonprofit Ponteio Comunicação, Information and Culture and the Instituto Cultura e Democracia in Brazil, and <strong>Flora Rebello Arduini</strong>, Campaigns Director at <strong>SumOfUs</strong>, a global activist community that seeks to curb the growing power of corporations.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/mitigating-election-disinformation-in-brazil]]></link><guid isPermaLink="false">6b4aa63d-e516-488d-867e-e6ac10806cee</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 11 Sep 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b7bf0934-de1b-4348-b954-694a1a664881/TPP116-converted.mp3" length="23451681" type="audio/mpeg"/><itunes:duration>39:05</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Douglas Rushkoff, the Survival of the Richest and... the Battle of Endor?</title><itunes:title>Douglas Rushkoff, the Survival of the Richest and... the Battle of Endor?</itunes:title><description><![CDATA[<p>A common theme on this podcast is the future, and the visions of the future that a certain set of Silicon Valley tech and venture accelerationists are working hard to advance. 
Today we’re going to hear from author and scholar Douglas Rushkoff about his latest book, <a href="https://wwnorton.com/books/survival-of-the-richest" rel="noopener noreferrer" target="_blank"><em>Survival of the Richest: Escape Fantasies of the Tech Billionaires</em></a>, which lampoons and deflates these characters, offering instead a humanist approach to defining the future by how we comport ourselves in the present.</p>]]></description><content:encoded><![CDATA[<p>A common theme on this podcast is the future, and the visions of the future that a certain set of Silicon Valley tech and venture accelerationists are working hard to advance. Today we’re going to hear from author and scholar Douglas Rushkoff about his latest book, <a href="https://wwnorton.com/books/survival-of-the-richest" rel="noopener noreferrer" target="_blank"><em>Survival of the Richest: Escape Fantasies of the Tech Billionaires</em></a>, which lampoons and deflates these characters, offering instead a humanist approach to defining the future by how we comport ourselves in the present.</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/douglas-ruskhkoff-the-survival-of-the-richest-and-the-battle-of-endor]]></link><guid isPermaLink="false">a9333469-e734-45ec-b513-8c3e5d8c1a8d</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Tue, 06 Sep 2022 07:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e2723f45-0fa2-4a0b-afab-22f6630b907b/TPP115-converted.mp3" length="26381742" type="audio/mpeg"/><itunes:duration>43:58</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Contemplating YouTube&apos;s Rise: A Conversation with Author Mark Bergen</title><itunes:title>Contemplating YouTube&apos;s Rise: A Conversation with Author Mark Bergen</itunes:title><description><![CDATA[<p>This episode features a conversation with 
Bloomberg journalist Mark Bergen. He’s the author of <a href="https://www.penguinrandomhouse.com/books/653248/like-comment-subscribe-by-mark-bergen/" rel="noopener noreferrer" target="_blank"><em>Like, Comment, Subscribe: Inside YouTube’s Chaotic Rise to World Domination</em></a>, from Viking.&nbsp;</p><p>This is a business book, a history, and a contemplation of YouTube’s role in society all in one. Bergen explores how the company evolved into the massive juggernaut it is today, and along the way gives insight into concerning phenomena that we’ve discussed on this podcast in the past, such as the relationship between YouTube and violent extremism, misogyny, racism, white nationalism and a variety of other ills. </p><p>The book pulls the curtain back on the internal dynamics and decisions that bring us to today. And it asks us to contemplate whether anyone, from Google’s leadership to regulators in any of the world’s governments, can truly get their heads or hands around YouTube.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>This episode features a conversation with Bloomberg journalist Mark Bergen. He’s the author of <a href="https://www.penguinrandomhouse.com/books/653248/like-comment-subscribe-by-mark-bergen/" rel="noopener noreferrer" target="_blank"><em>Like, Comment, Subscribe: Inside YouTube’s Chaotic Rise to World Domination</em></a>, from Viking.&nbsp;</p><p>This is a business book, a history, and a contemplation of YouTube’s role in society all in one. Bergen explores how the company evolved into the massive juggernaut it is today, and along the way gives insight into concerning phenomena that we’ve discussed on this podcast in the past, such as the relationship between YouTube and violent extremism, misogyny, racism, white nationalism and a variety of other ills. </p><p>The book pulls the curtain back on the internal dynamics and decisions that bring us to today. 
And it asks us to contemplate whether anyone, from Google’s leadership to regulators in any of the world’s governments, can truly get their heads or hands around YouTube.&nbsp;</p>]]></content:encoded><link><![CDATA[https://techpolicy.press/contemplating-youtubes-rise-a-conversation-with-author-mark-bergen]]></link><guid isPermaLink="false">21177419-5275-4943-ab8e-aa84700a27ab</guid><itunes:image href="https://artwork.captivate.fm/13a4e094-bfb8-4b58-b59d-02b8f292a6f2/BgtfDWriiMrD_96Bs7jeTwKM.png"/><pubDate>Sun, 04 Sep 2022 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9541dddd-9d8d-4d1b-b979-99fc3af3d132/TPP114-converted.mp3" length="35339537" type="audio/mpeg"/><itunes:duration>49:05</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item></channel></rss>