<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="https://feeds.captivate.fm/style.xsl" type="text/xsl"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><atom:link href="https://feeds.captivate.fm/gradient-dissent/" rel="self" type="application/rss+xml"/><title><![CDATA[Gradient Dissent: Conversations on AI]]></title><podcast:guid>9963e433-741d-5f81-af32-7ee21283f95f</podcast:guid><lastBuildDate>Tue, 31 Mar 2026 13:23:14 +0000</lastBuildDate><generator>Captivate.fm</generator><language><![CDATA[en]]></language><copyright><![CDATA[All rights reserved]]></copyright><managingEditor>Lukas Biewald</managingEditor><itunes:summary><![CDATA[Join Lukas Biewald on Gradient Dissent, an AI-focused podcast brought to you by Weights & Biases. Dive into fascinating conversations with industry giants from NVIDIA, Meta, Google, Lyft, OpenAI, and more. Explore the cutting edge of AI and learn the intricacies of bringing models into production.]]></itunes:summary><image><url>https://artwork.captivate.fm/25fd1181-b46e-459b-85a5-d397eec4cdcf/JDLDW81K-wlJoAWL7ZnxLdTp.jpg</url><title>Gradient Dissent: Conversations on AI</title><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link></image><itunes:image href="https://artwork.captivate.fm/25fd1181-b46e-459b-85a5-d397eec4cdcf/JDLDW81K-wlJoAWL7ZnxLdTp.jpg"/><itunes:owner><itunes:name>Lukas Biewald</itunes:name></itunes:owner><itunes:author>Lukas Biewald</itunes:author><description>Join Lukas Biewald on Gradient Dissent, an AI-focused podcast brought to you by Weights &amp; Biases. 
Dive into fascinating conversations with industry giants from NVIDIA, Meta, Google, Lyft, OpenAI, and more. Explore the cutting edge of AI and learn the intricacies of bringing models into production.</description><link>https://wandb.ai/site/resources/podcast</link><atom:link href="https://pubsubhubbub.appspot.com" rel="hub"/><itunes:subtitle><![CDATA[Stories from AI experts solving real-world problems]]></itunes:subtitle><itunes:explicit>false</itunes:explicit><itunes:type>episodic</itunes:type><itunes:category text="Technology"></itunes:category><itunes:category text="Business"></itunes:category><itunes:new-feed-url>https://feeds.captivate.fm/gradient-dissent/</itunes:new-feed-url><podcast:locked>no</podcast:locked><podcast:medium>podcast</podcast:medium><item><title>Why Netflix, Uber, and Spotify Never Lag: The Database Nobody Talks About | Aaron Katz</title><itunes:title>Why Netflix, Uber, and Spotify Never Lag: The Database Nobody Talks About | Aaron Katz</itunes:title><description><![CDATA[<p>"Companies designing for agents, not humans, are going to get a lot of lift."</p><p></p><p>ClickHouse started as an internal tool at Yandex. Today it's the database Anthropic, OpenAI, Meta and Tesla all run on.</p><p></p><p>In this episode, CEO Aaron Katz joins Lukas Biewald to talk about how he turned an open source project into a $15B company, why he acquired LangFuse knowing it could cost him customers, and what he's actually building for the agent era.</p><p></p><p>Snowflake, Datadog and Databricks all come up. 
He doesn't shy away.</p><p></p><p></p><p></p><p>Connect with us here:</p><p>Aaron Katz: https://www.linkedin.com/in/aaron-katz-5762094</p><p>ClickHouse: https://www.linkedin.com/company/clickhouseinc/</p><p>Lukas Biewald: https://www.linkedin.com/in/lbiewald/</p><p>Weights and Biases: https://www.linkedin.com/company/wandb/</p><p></p><p></p><p></p><p>00:00 Trailer</p><p>00:57 The Origin Story: From Yandex to ClickHouse Inc.</p><p>04:43 Building ClickHouse Cloud &amp; Raising $300M</p><p>10:36 Growing Up Around Xerox PARC</p><p>12:51 Salesforce, Marc Benioff &amp; the Dot-Com Bust</p><p>15:32 Cloud Skeptics vs. AI Skeptics | History Repeating</p><p>18:05 Building a Modern Go-To-Market Playbook</p><p>21:57 The SaaS Crash, Agents &amp; the Future of Infrastructure</p><p>27:09 The Datadog Love-Hate Story</p><p>35:21 Hardest Moments: Russia, SVB &amp; Sleepless Nights</p><p>43:16 Outro</p>]]></description><content:encoded><![CDATA[<p>"Companies designing for agents, not humans, are going to get a lot of lift."</p><p></p><p>ClickHouse started as an internal tool at Yandex. Today it's the database Anthropic, OpenAI, Meta and Tesla all run on.</p><p></p><p>In this episode, CEO Aaron Katz joins Lukas Biewald to talk about how he turned an open source project into a $15B company, why he acquired LangFuse knowing it could cost him customers, and what he's actually building for the agent era.</p><p></p><p>Snowflake, Datadog and Databricks all come up. 
He doesn't shy away.</p><p></p><p></p><p></p><p>Connect with us here:</p><p>Aaron Katz: https://www.linkedin.com/in/aaron-katz-5762094</p><p>ClickHouse: https://www.linkedin.com/company/clickhouseinc/</p><p>Lukas Biewald: https://www.linkedin.com/in/lbiewald/</p><p>Weights and Biases: https://www.linkedin.com/company/wandb/</p><p></p><p></p><p></p><p>00:00 Trailer</p><p>00:57 The Origin Story: From Yandex to ClickHouse Inc.</p><p>04:43 Building ClickHouse Cloud &amp; Raising $300M</p><p>10:36 Growing Up Around Xerox PARC</p><p>12:51 Salesforce, Marc Benioff &amp; the Dot-Com Bust</p><p>15:32 Cloud Skeptics vs. AI Skeptics | History Repeating</p><p>18:05 Building a Modern Go-To-Market Playbook</p><p>21:57 The SaaS Crash, Agents &amp; the Future of Infrastructure</p><p>27:09 The Datadog Love-Hate Story</p><p>35:21 Hardest Moments: Russia, SVB &amp; Sleepless Nights</p><p>43:16 Outro</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">f776d7d6-e08f-42cf-b657-f9533b6d6e78</guid><itunes:image href="https://artwork.captivate.fm/1497023c-5696-4478-a19d-9fc375426fdd/rss-clickhouse.jpg"/><pubDate>Tue, 31 Mar 2026 05:45:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/f776d7d6-e08f-42cf-b657-f9533b6d6e78.mp3" length="62891199" type="audio/mpeg"/><itunes:duration>43:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The $64M Bet on an AI That Has to Be Right | Carina Hong, CEO of Axiom</title><itunes:title>The $64M Bet on an AI That Has to Be Right | Carina Hong, CEO of Axiom</itunes:title><description><![CDATA[<p>Formal verification already consumes years of human effort.</p><p>In this episode, Lukas Biewald talks with Carina Hong, Founder &amp; CEO of Axiom, about why verification is becoming the real bottleneck in high-stakes AI systems.</p><p>They discuss how Axiom uses AI to take on the tedious checking that 
stretches verification cycles across years, starting with formal mathematics and extending to hardware and software.</p><p>Carina also explains why Axiom’s approach to auto-formalization mirrors spec-driven models like Kiro from AWS.</p><p>Connect with us here:</p><p>Carina Hong: <a href="https://www.linkedin.com/in/carina-hong/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/carina-hong/</a></p><p>Axiom: <a href="https://www.linkedin.com/company/axiommath/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/axiommath/</a></p><p>Lukas Biewald: <a href="https://www.linkedin.com/in/lbiewald/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/lbiewald/</a></p><p>Weights &amp; Biases: <a href="https://www.linkedin.com/company/wandb/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/wandb/</a></p>]]></description><content:encoded><![CDATA[<p>Formal verification already consumes years of human effort.</p><p>In this episode, Lukas Biewald talks with Carina Hong, Founder &amp; CEO of Axiom, about why verification is becoming the real bottleneck in high-stakes AI systems.</p><p>They discuss how Axiom uses AI to take on the tedious checking that stretches verification cycles across years, starting with formal mathematics and extending to hardware and software.</p><p>Carina also explains why Axiom’s approach to auto-formalization mirrors spec-driven models like Kiro from AWS.</p><p>Connect with us here:</p><p>Carina Hong: <a href="https://www.linkedin.com/in/carina-hong/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/carina-hong/</a></p><p>Axiom: <a href="https://www.linkedin.com/company/axiommath/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/axiommath/</a></p><p>Lukas Biewald: <a href="https://www.linkedin.com/in/lbiewald/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/lbiewald/</a></p><p>Weights &amp; 
Biases: <a href="https://www.linkedin.com/company/wandb/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/wandb/</a></p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">d8dca4ed-165a-4b82-8e9e-3634835493aa</guid><itunes:image href="https://artwork.captivate.fm/78ea907b-8e12-4a14-af64-ab8c0b338f36/Untitled-1.jpg"/><pubDate>Thu, 05 Feb 2026 06:46:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/d8dca4ed-165a-4b82-8e9e-3634835493aa.mp3" length="73276355" type="audio/mpeg"/><itunes:duration>50:40</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What a $42B Software Co. Really Spends on AI Tools</title><itunes:title>What a $42B Software Co. Really Spends on AI Tools</itunes:title><description><![CDATA[<p>“I don't worry about being replaced by AI. I worry about being replaced by someone who's really good at using AI.”</p><p>Atlassian has 10,000+ engineers currently split-testing the world’s top AI coding tools, from GitHub Copilot and Cursor to Claude Code. 
</p><p>In this episode, Co-Founder &amp; CEO Mike Cannon-Brookes joins Lukas Biewald to share what their data reveals about the world's best AI tools today.</p><p>Hear how 24 years of building a tech giant and a massive internal study on AI productivity have shaped Mike's vision for the future of dev jobs.</p><p>Connect with us here:</p><p>Mike Cannon-Brookes: <a href="https://www.linkedin.com/in/mcannonbrookes/?originalSubdomain=au" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/mcannonbrookes/?originalSubdomain=au</a></p><p>Atlassian: <a href="https://www.linkedin.com/company/atlassian/?viewAsMember=true" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/atlassian/?viewAsMember=true</a></p><p>Lukas Biewald: <a href="https://www.linkedin.com/in/lbiewald/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/lbiewald/</a> </p><p>Weights &amp; Biases: <a href="https://www.linkedin.com/company/wandb/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/wandb/</a></p><p><br></p><p>00:00 Trailer</p><p>01:08 Introduction</p><p>03:11 Connecting Technology and Business Teams</p><p>07:22 The Impact of AI on Business Workflows</p><p>13:26 Developer Productivity and AI</p><p>21:03 Measuring Developer Efficiency</p><p>25:41 Future of AI in Development</p><p>34:59 Legacy Technology and Code Changes</p><p>39:29 AI's Role in Developer Productivity</p><p>47:40 AI and Junior Developers</p><p>52:30 Product-Led Growth and Business Strategy</p><p>01:00:29 Core Metrics for Sustainable Growth</p><p>01:06:56 Staying Creative in the Tech Industry</p>]]></description><content:encoded><![CDATA[<p>“I don't worry about being replaced by AI. I worry about being replaced by someone who's really good at using AI.”</p><p>Atlassian has 10,000+ engineers currently split-testing the world’s top AI coding tools, from GitHub Copilot and Cursor to Claude Code. 
</p><p>In this episode, Co-Founder &amp; CEO Mike Cannon-Brookes joins Lukas Biewald to share what their data reveals about the world's best AI tools today.</p><p>Hear how 24 years of building a tech giant and a massive internal study on AI productivity have shaped Mike's vision for the future of dev jobs.</p><p>Connect with us here:</p><p>Mike Cannon-Brookes: <a href="https://www.linkedin.com/in/mcannonbrookes/?originalSubdomain=au" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/mcannonbrookes/?originalSubdomain=au</a></p><p>Atlassian: <a href="https://www.linkedin.com/company/atlassian/?viewAsMember=true" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/atlassian/?viewAsMember=true</a></p><p>Lukas Biewald: <a href="https://www.linkedin.com/in/lbiewald/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/lbiewald/</a> </p><p>Weights &amp; Biases: <a href="https://www.linkedin.com/company/wandb/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/wandb/</a></p><p><br></p><p>00:00 Trailer</p><p>01:08 Introduction</p><p>03:11 Connecting Technology and Business Teams</p><p>07:22 The Impact of AI on Business Workflows</p><p>13:26 Developer Productivity and AI</p><p>21:03 Measuring Developer Efficiency</p><p>25:41 Future of AI in Development</p><p>34:59 Legacy Technology and Code Changes</p><p>39:29 AI's Role in Developer Productivity</p><p>47:40 AI and Junior Developers</p><p>52:30 Product-Led Growth and Business Strategy</p><p>01:00:29 Core Metrics for Sustainable Growth</p><p>01:06:56 Staying Creative in the Tech Industry</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">6c4a814f-9e41-489d-b1b3-5d3e07230a23</guid><itunes:image href="https://artwork.captivate.fm/d33b3d9b-f56c-4f6c-a335-0143bbdd1070/mike-cannon.jpg"/><pubDate>Tue, 20 Jan 2026 05:05:00 -0400</pubDate><enclosure 
url="https://episodes.captivate.fm/episode/6c4a814f-9e41-489d-b1b3-5d3e07230a23.mp3" length="98042774" type="audio/mpeg"/><itunes:duration>01:07:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Inside the $41B AI Cloud Challenging Big Tech | CoreWeave SVP</title><itunes:title>Inside the $41B AI Cloud Challenging Big Tech | CoreWeave SVP</itunes:title><description><![CDATA[<p>The future of AI training is shaped by one constraint: keeping GPUs fed.</p><p>In this episode, Lukas Biewald talks with CoreWeave SVP Corey Sanders about why general-purpose clouds start to break down under large-scale AI workloads.</p><p>According to Corey, the industry is shifting toward a "Neo Cloud" model to handle the unique demands of modern models.</p><p>They dive into the hardware and software stack required to maximize GPU utilization and achieve high goodput.</p><p>Corey’s conclusion is clear: AI demands specialization.</p><p><br></p><p><strong>Connect with us here:</strong></p><p>Corey Sanders: <a href="https://www.linkedin.com/in/corey-sanders-842b72/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/corey-sanders-842b72/ </a></p><p>CoreWeave: <a href="https://www.linkedin.com/company/coreweave/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/coreweave/ </a></p><p>Lukas Biewald: <a href="https://www.linkedin.com/in/lbiewald/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/lbiewald/ </a></p><p>Weights &amp; Biases: <a href="https://www.linkedin.com/company/wandb/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/wandb/</a></p><p><br></p><p>(00:00) Trailer</p><p>(00:57) Introduction</p><p>(02:51) The Evolution of AI Workloads</p><p>(06:22) CoreWeave's Technological Innovations</p><p>(13:58) Customer Engagement and Future Prospects</p><p>(28:49) Comparing Cloud Approaches</p><p>(33:50) Balancing Executive Roles 
and Hands-On Projects</p><p>(46:44) Product Development and Customer Feedback</p>]]></description><content:encoded><![CDATA[<p>The future of AI training is shaped by one constraint: keeping GPUs fed.</p><p>In this episode, Lukas Biewald talks with CoreWeave SVP Corey Sanders about why general-purpose clouds start to break down under large-scale AI workloads.</p><p>According to Corey, the industry is shifting toward a "Neo Cloud" model to handle the unique demands of modern models.</p><p>They dive into the hardware and software stack required to maximize GPU utilization and achieve high goodput.</p><p>Corey’s conclusion is clear: AI demands specialization.</p><p><br></p><p><strong>Connect with us here:</strong></p><p>Corey Sanders: <a href="https://www.linkedin.com/in/corey-sanders-842b72/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/corey-sanders-842b72/ </a></p><p>CoreWeave: <a href="https://www.linkedin.com/company/coreweave/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/coreweave/ </a></p><p>Lukas Biewald: <a href="https://www.linkedin.com/in/lbiewald/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/lbiewald/ </a></p><p>Weights &amp; Biases: <a href="https://www.linkedin.com/company/wandb/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/wandb/</a></p><p><br></p><p>(00:00) Trailer</p><p>(00:57) Introduction</p><p>(02:51) The Evolution of AI Workloads</p><p>(06:22) CoreWeave's Technological Innovations</p><p>(13:58) Customer Engagement and Future Prospects</p><p>(28:49) Comparing Cloud Approaches</p><p>(33:50) Balancing Executive Roles and Hands-On Projects</p><p>(46:44) Product Development and Customer Feedback</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">167db7d2-6e2f-406e-b26b-19e329f838c8</guid><itunes:image 
href="https://artwork.captivate.fm/6174e033-9b94-4a2f-a6f0-2077c05d3cff/new-hairline.jpg"/><pubDate>Tue, 06 Jan 2026 09:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/167db7d2-6e2f-406e-b26b-19e329f838c8.mp3" length="76885004" type="audio/mpeg"/><itunes:duration>53:19</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Why Physical AI Needed a Completely New Data Stack</title><itunes:title>Why Physical AI Needed a Completely New Data Stack</itunes:title><description><![CDATA[<p>The future of AI is physical.&nbsp;</p><p>In this episode, Lukas Biewald talks to Nikolaus West, CEO of Rerun, about why the breakthrough required to get AI out of the lab and into the messy real world is blocked by poor data tooling.&nbsp;</p><p>Nikolaus explains how Rerun solved this by adopting an Entity Component System (ECS), a data model built for games, to handle complex, multimodal, time-aware sensor data. This is the technology that makes solving previously impossible tasks, like flexible manipulation, suddenly feel "boring."&nbsp;</p><p>Connect with us here:&nbsp;</p><p>Nikolaus West: https://www.linkedin.com/in/nikolauswest/</p><p>Rerun: https://www.linkedin.com/company/rerun-io/</p><p>Lukas Biewald: https://www.linkedin.com/in/lbiewald/</p><p>Weights &amp; Biases: https://www.linkedin.com/company/wandb/</p>]]></description><content:encoded><![CDATA[<p>The future of AI is physical.&nbsp;</p><p>In this episode, Lukas Biewald talks to Nikolaus West, CEO of Rerun, about why the breakthrough required to get AI out of the lab and into the messy real world is blocked by poor data tooling.&nbsp;</p><p>Nikolaus explains how Rerun solved this by adopting an Entity Component System (ECS), a data model built for games, to handle complex, multimodal, time-aware sensor data. 
This is the technology that makes solving previously impossible tasks, like flexible manipulation, suddenly feel "boring."&nbsp;</p><p>Connect with us here:&nbsp;</p><p>Nikolaus West: https://www.linkedin.com/in/nikolauswest/</p><p>Rerun: https://www.linkedin.com/company/rerun-io/</p><p>Lukas Biewald: https://www.linkedin.com/in/lbiewald/</p><p>Weights &amp; Biases: https://www.linkedin.com/company/wandb/</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">22e1eade-d0e3-4331-832a-c908d197c62c</guid><itunes:image href="https://artwork.captivate.fm/6c2250f5-447d-4913-8372-b3dc0580f7e8/rss-final-final.jpg"/><pubDate>Tue, 16 Dec 2025 09:30:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/22e1eade-d0e3-4331-832a-c908d197c62c.mp3" length="87900482" type="audio/mpeg"/><itunes:duration>01:00:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><podcast:chapters url="https://transcripts.captivate.fm/chapter-7c33af68-9ab6-4438-8ebe-f2b194af591a.json" type="application/json+chapters"/></item><item><title>The Engineering Behind the World’s Most Advanced Video AI</title><itunes:title>The Engineering Behind the World’s Most Advanced Video AI</itunes:title><description><![CDATA[<p>Is video AI a viable path toward AGI?&nbsp;</p><p>Runway ML founder Cristóbal Valenzuela joins Lukas Biewald just after Gen 4.5 reached the #1 position on the Video Arena Leaderboard, according to community voting on <a href="https://artificialanalysis.ai/video/leaderboard/text-to-video" rel="noopener noreferrer" target="_blank">Artificial Analysis</a>.&nbsp;</p><p>Lukas examines how a focused research team at Runway outpaced much larger organizations like Google and Meta in one of the most compute-intensive areas of machine learning.</p><p><br></p><p>Cristóbal breaks down the architecture behind Gen 4.5 and explains the role of “taste” in model development. 
He details the engineering improvements in motion and camera control that solve long-standing issues like the restrictive “tripod look,” and shares why video models are starting to function as simulation engines with applications beyond media generation.</p><p><br></p><p><strong>Connect with us here:</strong></p><ul><li>Cristóbal Valenzuela: <a href="https://www.linkedin.com/in/cvalenzuelab" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/cvalenzuelab</a></li><li>Runway: <a href="https://www.linkedin.com/company/runwayml/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/runwayml/</a></li><li>Lukas Biewald: <a href="https://www.linkedin.com/in/lbiewald/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/lbiewald/</a></li><li>Weights &amp; Biases: <a href="https://www.linkedin.com/company/wandb/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/wandb/</a></li></ul><br/>]]></description><content:encoded><![CDATA[<p>Is video AI a viable path toward AGI?&nbsp;</p><p>Runway ML founder Cristóbal Valenzuela joins Lukas Biewald just after Gen 4.5 reached the #1 position on the Video Arena Leaderboard, according to community voting on <a href="https://artificialanalysis.ai/video/leaderboard/text-to-video" rel="noopener noreferrer" target="_blank">Artificial Analysis</a>.&nbsp;</p><p>Lukas examines how a focused research team at Runway outpaced much larger organizations like Google and Meta in one of the most compute-intensive areas of machine learning.</p><p><br></p><p>Cristóbal breaks down the architecture behind Gen 4.5 and explains the role of “taste” in model development. 
He details the engineering improvements in motion and camera control that solve long-standing issues like the restrictive “tripod look,” and shares why video models are starting to function as simulation engines with applications beyond media generation.</p><p><br></p><p><strong>Connect with us here:</strong></p><ul><li>Cristóbal Valenzuela: <a href="https://www.linkedin.com/in/cvalenzuelab" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/cvalenzuelab</a></li><li>Runway: <a href="https://www.linkedin.com/company/runwayml/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/runwayml/</a></li><li>Lukas Biewald: <a href="https://www.linkedin.com/in/lbiewald/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/lbiewald/</a></li><li>Weights &amp; Biases: <a href="https://www.linkedin.com/company/wandb/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/wandb/</a></li></ul><br/>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">b5bab0b9-2533-4ad2-ba37-7496ca641d95</guid><itunes:image href="https://artwork.captivate.fm/196cea9e-7901-4a4e-9e20-cbc1c5bd223d/3000.jpg"/><pubDate>Mon, 01 Dec 2025 10:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/b5bab0b9-2533-4ad2-ba37-7496ca641d95.mp3" length="21444183" type="audio/mpeg"/><itunes:duration>14:50</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The CEO Behind the Fastest-Growing AI Inference Company | Tuhin Srivastava</title><itunes:title>The CEO Behind the Fastest-Growing AI Inference Company | Tuhin Srivastava</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Lukas Biewald talks with Tuhin Srivastava, CEO and founder of Baseten, one of the fastest-growing companies in the AI inference ecosystem. 
Tuhin shares the real story behind Baseten’s rise and how the market finally aligned with the infrastructure they’d spent years building.</p><p>They get into the core challenges of modern inference, including why dedicated deployments matter, how runtime and infrastructure bottlenecks stack up, and what makes serving large models fundamentally different from smaller ones.</p><p>Tuhin also explains how vLLM, TensorRT-LLM, and SGLang differ in practice, what it takes to tune workloads for new chips like the B200, and why reliability becomes harder as systems scale.&nbsp;</p><p>The conversation dives into company-building, from killing product lines to avoiding premature scaling while navigating a market that shifts every few weeks.</p><p>Connect with us here:&nbsp;</p><p>Tuhin Srivastava: <a href="https://www.linkedin.com/in/tuhin-srivastava/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/tuhin-srivastava/</a>&nbsp;</p><p>Lukas Biewald: <a href="https://www.linkedin.com/in/lbiewald/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/lbiewald/</a></p><p>Weights &amp; Biases: <a href="https://www.linkedin.com/company/wandb/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/wandb/</a></p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Lukas Biewald talks with Tuhin Srivastava, CEO and founder of Baseten, one of the fastest-growing companies in the AI inference ecosystem. 
Tuhin shares the real story behind Baseten’s rise and how the market finally aligned with the infrastructure they’d spent years building.</p><p>They get into the core challenges of modern inference, including why dedicated deployments matter, how runtime and infrastructure bottlenecks stack up, and what makes serving large models fundamentally different from smaller ones.</p><p>Tuhin also explains how vLLM, TensorRT-LLM, and SGLang differ in practice, what it takes to tune workloads for new chips like the B200, and why reliability becomes harder as systems scale.&nbsp;</p><p>The conversation dives into company-building, from killing product lines to avoiding premature scaling while navigating a market that shifts every few weeks.</p><p>Connect with us here:&nbsp;</p><p>Tuhin Srivastava: <a href="https://www.linkedin.com/in/tuhin-srivastava/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/tuhin-srivastava/</a>&nbsp;</p><p>Lukas Biewald: <a href="https://www.linkedin.com/in/lbiewald/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/lbiewald/</a></p><p>Weights &amp; Biases: <a href="https://www.linkedin.com/company/wandb/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/wandb/</a></p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">bb7a7e75-34e4-471e-a3b8-12dfff61e22e</guid><itunes:image href="https://artwork.captivate.fm/08b2ba94-26b4-4a2c-bbe9-34840c0314ce/CaptivateThumb3.jpg"/><pubDate>Tue, 18 Nov 2025 08:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/bb7a7e75-34e4-471e-a3b8-12dfff61e22e.mp3" length="85546056" type="audio/mpeg"/><itunes:duration>59:13</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Startup Powering The Data Behind AGI</title><itunes:title>The Startup Powering The Data Behind 
AGI</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Lukas Biewald talks with the CEO &amp; founder of Surge AI, the billion-dollar company quietly powering the next generation of frontier LLMs. They discuss Surge's origin story, why traditional data labeling is broken, and how their research-focused approach is reshaping how models are trained.</p><p>You’ll hear why inter-annotator agreement fails in high-complexity tasks like poetry and math, why synthetic data is often overrated, and how Surge builds rich RL environments to stress-test agentic reasoning. They also go deep on what kinds of data will be critical to future progress in AI—from scientific discovery to multimodal reasoning and personalized alignment.</p><p><br></p><p>It’s a rare, behind-the-scenes look into the world of high-quality data generation at scale—straight from the team most frontier labs trust to get it right.</p><p><br></p><p>Timestamps: </p><p>00:00 – Intro: Who is Edwin Chen?  </p><p>03:40 – The problem with early data labeling systems  </p><p>06:20 – Search ranking, clickbait, and product principles  </p><p>10:05 – Why Surge focused on high-skill, high-quality labeling  </p><p>13:50 – From Craigslist workers to a billion-dollar business  </p><p>16:40 – Scaling without funding and avoiding Silicon Valley status games  </p><p>21:15 – Why most human data platforms lack real tech  </p><p>25:05 – Detecting cheaters, liars, and low-quality labelers  </p><p>28:30 – Why inter-annotator agreement is a flawed metric  </p><p>32:15 – What makes a great poem? 
Not checkboxes  </p><p>36:40 – Measuring subjective quality rigorously  </p><p>40:00 – What types of data are becoming more important  </p><p>44:15 – Scientific collaboration and frontier research data  </p><p>47:00 – Multimodal data, Argentinian coding, and hyper-specificity  </p><p>50:10 – What's wrong with LMSYS and benchmark hacking  </p><p>53:20 – Personalization and taste in model behavior  </p><p>56:00 – Synthetic data vs. high-quality human data  </p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases </p><p>https://www.linkedin.com/company/wandb</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Lukas Biewald talks with the CEO &amp; founder of Surge AI, the billion-dollar company quietly powering the next generation of frontier LLMs. They discuss Surge's origin story, why traditional data labeling is broken, and how their research-focused approach is reshaping how models are trained.</p><p>You’ll hear why inter-annotator agreement fails in high-complexity tasks like poetry and math, why synthetic data is often overrated, and how Surge builds rich RL environments to stress-test agentic reasoning. They also go deep on what kinds of data will be critical to future progress in AI—from scientific discovery to multimodal reasoning and personalized alignment.</p><p><br></p><p>It’s a rare, behind-the-scenes look into the world of high-quality data generation at scale—straight from the team most frontier labs trust to get it right.</p><p><br></p><p>Timestamps: </p><p>00:00 – Intro: Who is Edwin Chen?  
</p><p>03:40 – The problem with early data labeling systems  </p><p>06:20 – Search ranking, clickbait, and product principles  </p><p>10:05 – Why Surge focused on high-skill, high-quality labeling  </p><p>13:50 – From Craigslist workers to a billion-dollar business  </p><p>16:40 – Scaling without funding and avoiding Silicon Valley status games  </p><p>21:15 – Why most human data platforms lack real tech  </p><p>25:05 – Detecting cheaters, liars, and low-quality labelers  </p><p>28:30 – Why inter-annotator agreement is a flawed metric  </p><p>32:15 – What makes a great poem? Not checkboxes  </p><p>36:40 – Measuring subjective quality rigorously  </p><p>40:00 – What types of data are becoming more important  </p><p>44:15 – Scientific collaboration and frontier research data  </p><p>47:00 – Multimodal data, Argentinian coding, and hyper-specificity  </p><p>50:10 – What's wrong with LMSYS and benchmark hacking  </p><p>53:20 – Personalization and taste in model behavior  </p><p>56:00 – Synthetic data vs. 
high-quality human data  </p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases </p><p>https://www.linkedin.com/company/wandb</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">becd4fd5-189b-4644-b956-4efd1c5756c1</guid><itunes:image href="https://artwork.captivate.fm/bf9c3e65-ae6a-4381-8fd5-e0a67b3a1c30/GD038-Square-thumb.jpg"/><pubDate>Tue, 16 Sep 2025 06:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/becd4fd5-189b-4644-b956-4efd1c5756c1.mp3" length="47249120" type="audio/mpeg"/><itunes:duration>56:15</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Arvind Jain on Building Glean and the Future of Enterprise AI</title><itunes:title>Arvind Jain on Building Glean and the Future of Enterprise AI</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Lukas Biewald sits down with Arvind Jain, CEO and founder of Glean. They discuss Glean's evolution from solving enterprise search to building agentic AI tools that understand internal knowledge and workflows. Arvind shares how his early use of transformer models in 2019 laid the foundation for Glean’s success, well before the term "generative AI" was mainstream.</p><p>They explore the technical and organizational challenges behind enterprise LLMs—including security and hallucination suppression—and when it makes sense to fine-tune models. 
Arvind also reflects on his previous startup Rubrik and explains how Glean’s AI platform aims to reshape how teams operate, from personalized agents to ever-fresh internal documentation.</p><p><strong>Follow Arvind Jain:</strong> https://x.com/jainarvind</p><p><strong>Follow Weights &amp; Biases:</strong> https://x.com/weights_biases</p><p><br></p><p><strong>Timestamps:</strong>&nbsp;</p><p>[00:01:00] What Glean is and how it works&nbsp;</p><p>[00:02:39] Starting Glean before the LLM boom&nbsp;</p><p>[00:04:10] Using transformers early in enterprise search&nbsp;</p><p>[00:06:48] Semantic search vs. generative answers&nbsp;</p><p>[00:08:13] When to fine-tune vs. use out-of-box models&nbsp;</p><p>[00:12:38] The value of small, purpose-trained models&nbsp;</p><p>[00:13:04] Enterprise security and embedding risks</p><p>[00:16:31] Lessons from Rubrik and starting Glean&nbsp;</p><p>[00:19:31] The contrarian bet on enterprise search&nbsp;</p><p>[00:22:57] Culture and lessons learned from Google&nbsp;</p><p>[00:25:13] Everyone will have their own AI-powered "team"&nbsp;</p><p>[00:28:43] Using AI to keep documentation evergreen&nbsp;</p><p>[00:31:22] AI-generated churn and risk analysis&nbsp;</p><p>[00:33:55] Measuring model improvement with golden sets</p><p>[00:36:05] Suppressing hallucinations with citations&nbsp;</p><p>[00:39:22] Agents that can ping humans for help&nbsp;</p><p>[00:40:41] AI as a force multiplier, not a replacement&nbsp;</p><p>[00:42:26] The enduring value of hard work</p><p><br></p><p><br></p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Lukas Biewald sits down with Arvind Jain, CEO and founder of Glean. They discuss Glean's evolution from solving enterprise search to building agentic AI tools that understand internal knowledge and workflows. 
Arvind shares how his early use of transformer models in 2019 laid the foundation for Glean’s success, well before the term "generative AI" was mainstream.</p><p>They explore the technical and organizational challenges behind enterprise LLMs—including security and hallucination suppression—and when it makes sense to fine-tune models. Arvind also reflects on his previous startup Rubrik and explains how Glean’s AI platform aims to reshape how teams operate, from personalized agents to ever-fresh internal documentation.</p><p><strong>Follow Arvind Jain:</strong> https://x.com/jainarvind</p><p><strong>Follow Weights &amp; Biases:</strong> https://x.com/weights_biases</p><p><br></p><p><strong>Timestamps:</strong>&nbsp;</p><p>[00:01:00] What Glean is and how it works&nbsp;</p><p>[00:02:39] Starting Glean before the LLM boom&nbsp;</p><p>[00:04:10] Using transformers early in enterprise search&nbsp;</p><p>[00:06:48] Semantic search vs. generative answers&nbsp;</p><p>[00:08:13] When to fine-tune vs. 
use out-of-box models&nbsp;</p><p>[00:12:38] The value of small, purpose-trained models&nbsp;</p><p>[00:13:04] Enterprise security and embedding risks</p><p>[00:16:31] Lessons from Rubrik and starting Glean&nbsp;</p><p>[00:19:31] The contrarian bet on enterprise search&nbsp;</p><p>[00:22:57] Culture and lessons learned from Google&nbsp;</p><p>[00:25:13] Everyone will have their own AI-powered "team"&nbsp;</p><p>[00:28:43] Using AI to keep documentation evergreen&nbsp;</p><p>[00:31:22] AI-generated churn and risk analysis&nbsp;</p><p>[00:33:55] Measuring model improvement with golden sets</p><p>[00:36:05] Suppressing hallucinations with citations&nbsp;</p><p>[00:39:22] Agents that can ping humans for help&nbsp;</p><p>[00:40:41] AI as a force multiplier, not a replacement&nbsp;</p><p>[00:42:26] The enduring value of hard work</p><p><br></p><p><br></p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">1b866c83-7cf5-4181-942a-d98dfcaef7aa</guid><itunes:image href="https://artwork.captivate.fm/879cb601-8d5b-44c5-8662-a77b0431dbf9/GWsw7QgI3Hg4RNleouyYbKq1.jpg"/><pubDate>Tue, 05 Aug 2025 06:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/1b866c83-7cf5-4181-942a-d98dfcaef7aa.mp3" length="36687632" type="audio/mpeg"/><itunes:duration>43:41</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How DeepL Built a Translation Powerhouse with AI with CEO Jarek Kutylowski</title><itunes:title>How DeepL Built a Translation Powerhouse with AI with CEO Jarek Kutylowski</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Lukas Biewald talks with Jarek Kutylowski, CEO and founder of DeepL, an AI-powered translation company. 
Jarek shares DeepL’s journey from launching neural machine translation in 2017 to building custom data centers and how small teams can not only take on big players like Google Translate but win.</p><p>They dive into what makes translation so difficult for AI, why high-quality translations still require human context, and how DeepL tailors models for enterprise use cases. They also discuss the evolution of speech translation, compute infrastructure, training on curated multilingual datasets, hallucinations in models, and why DeepL avoids fine-tuning for each individual customer. It’s a fascinating behind-the-scenes look at one of the most advanced real-world applications of deep learning.</p><p><br></p><p>Timestamps: </p><p>[00:00:00] Introducing Jarek and DeepL’s mission </p><p>[00:01:46] Competing with Google Translate &amp; LLMs </p><p>[00:04:14] Pretraining vs. proprietary model strategy </p><p>[00:06:47] Building GPU data centers in 2017 </p><p>[00:08:09] The value of curated bilingual and monolingual data </p><p>[00:09:30] How DeepL measures translation quality </p><p>[00:12:27] Personalization and enterprise-specific tuning</p><p>[00:14:04] Why translation demand is growing </p><p>[00:16:16] ROI of incremental quality gains </p><p>[00:18:20] The role of human translators in the future </p><p>[00:22:48] Hallucinations in translation models </p><p>[00:24:05] DeepL’s work on speech translation </p><p>[00:28:22] The broader impact of global communication </p><p>[00:30:32] Handling smaller languages and language pairs </p><p>[00:32:25] Multi-language model consolidation </p><p>[00:35:28] Engineering infrastructure for large-scale inference </p><p>[00:39:23] Adapting to evolving LLM landscape &amp; enterprise needs</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Lukas Biewald talks with Jarek Kutylowski, CEO and founder of DeepL, an AI-powered translation company. 
Jarek shares DeepL’s journey from launching neural machine translation in 2017 to building custom data centers and how small teams can not only take on big players like Google Translate but win.</p><p>They dive into what makes translation so difficult for AI, why high-quality translations still require human context, and how DeepL tailors models for enterprise use cases. They also discuss the evolution of speech translation, compute infrastructure, training on curated multilingual datasets, hallucinations in models, and why DeepL avoids fine-tuning for each individual customer. It’s a fascinating behind-the-scenes look at one of the most advanced real-world applications of deep learning.</p><p><br></p><p>Timestamps: </p><p>[00:00:00] Introducing Jarek and DeepL’s mission </p><p>[00:01:46] Competing with Google Translate &amp; LLMs </p><p>[00:04:14] Pretraining vs. proprietary model strategy </p><p>[00:06:47] Building GPU data centers in 2017 </p><p>[00:08:09] The value of curated bilingual and monolingual data </p><p>[00:09:30] How DeepL measures translation quality </p><p>[00:12:27] Personalization and enterprise-specific tuning</p><p>[00:14:04] Why translation demand is growing </p><p>[00:16:16] ROI of incremental quality gains </p><p>[00:18:20] The role of human translators in the future </p><p>[00:22:48] Hallucinations in translation models </p><p>[00:24:05] DeepL’s work on speech translation </p><p>[00:28:22] The broader impact of global communication </p><p>[00:30:32] Handling smaller languages and language pairs </p><p>[00:32:25] Multi-language model consolidation </p><p>[00:35:28] Engineering infrastructure for large-scale inference </p><p>[00:39:23] Adapting to evolving LLM landscape &amp; enterprise needs</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">e035b36b-c164-43e4-9f60-fb70ffb11e6b</guid><itunes:image 
href="https://artwork.captivate.fm/1393be8a-b051-4d5e-9e4e-ff724df1f163/F1IzFk5AWlsK4uFuR2KVe4lX.jpg"/><pubDate>Tue, 08 Jul 2025 05:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/e035b36b-c164-43e4-9f60-fb70ffb11e6b.mp3" length="35864768" type="audio/mpeg"/><itunes:duration>42:42</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>GitHub CEO Thomas Dohmke on Copilot and the Future of Software Development</title><itunes:title>GitHub CEO Thomas Dohmke on Copilot and the Future of Software Development</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Lukas Biewald sits down with Thomas Dohmke, CEO of GitHub, to talk about the future of software engineering in the age of AI. They discuss how GitHub Copilot was built, why agents are reshaping developer workflows, and what it takes to make tools that are not only powerful but also fun.</p><p>Thomas shares his experience leading GitHub through its $7.5B acquisition by Microsoft, the unexpected ways it accelerated innovation, and why developer happiness is crucial to productivity. They explore what still makes human engineers irreplaceable and how the next generation of developers might grow up coding alongside AI.</p><p>Follow Thomas Dohmke: https://www.linkedin.com/in/ashtom/</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases </p><p>https://www.linkedin.com/company/wandb</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Lukas Biewald sits down with Thomas Dohmke, CEO of GitHub, to talk about the future of software engineering in the age of AI. 
They discuss how GitHub Copilot was built, why agents are reshaping developer workflows, and what it takes to make tools that are not only powerful but also fun.</p><p>Thomas shares his experience leading GitHub through its $7.5B acquisition by Microsoft, the unexpected ways it accelerated innovation, and why developer happiness is crucial to productivity. They explore what still makes human engineers irreplaceable and how the next generation of developers might grow up coding alongside AI.</p><p>Follow Thomas Dohmke: https://www.linkedin.com/in/ashtom/</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases </p><p>https://www.linkedin.com/company/wandb</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">96fe494f-bbae-4514-b8de-215d46a9d9d6</guid><itunes:image href="https://artwork.captivate.fm/3e883805-ed42-4e97-88d7-a971d4476514/ZsjY8ttvsbe_vlK0TN7WGDqD.jpg"/><pubDate>Tue, 10 Jun 2025 06:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/96fe494f-bbae-4514-b8de-215d46a9d9d6.mp3" length="58580384" type="audio/mpeg"/><itunes:duration>01:09:44</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>From Pharma to AGI Hype, and Developing AI in Finance: Martin Shkreli’s Journey</title><itunes:title>From Pharma to AGI Hype, and Developing AI in Finance: Martin Shkreli’s Journey</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Lukas Biewald talks with Martin Shkreli — the infamous "pharma bro" turned founder — about his path from hedge fund manager and pharma CEO to convicted felon and now software entrepreneur. 
Shkreli shares his side of the drug pricing controversy, reflects on his prison experience, and explains how he rebuilt his life and business after being "canceled."</p><p>They dive deep into AI and drug discovery, where Shkreli delivers a strong critique of mainstream approaches. He also talks about his latest venture in finance software, building Godel Terminal, “a Vim for traders,” and why he thinks the AI hype cycle is just beginning. It's a wide-ranging and candid conversation with one of the most controversial figures in tech and biotech.</p><p><strong>Follow Martin Shkreli on </strong><a href="https://x.com/martinshkreli?lang=en" rel="noopener noreferrer" target="_blank"><strong>Twitter</strong></a></p><p>Godel Terminal: https://godelterminal.com/</p><p><strong>Follow Weights &amp; Biases on </strong><a href="https://x.com/weights_biases?s=21&amp;t=ktCZSU5uZ-QmXUJakQgoeQ" rel="noopener noreferrer" target="_blank"><strong>Twitter</strong></a></p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Lukas Biewald talks with Martin Shkreli — the infamous "pharma bro" turned founder — about his path from hedge fund manager and pharma CEO to convicted felon and now software entrepreneur. Shkreli shares his side of the drug pricing controversy, reflects on his prison experience, and explains how he rebuilt his life and business after being "canceled."</p><p>They dive deep into AI and drug discovery, where Shkreli delivers a strong critique of mainstream approaches. He also talks about his latest venture in finance software, building Godel Terminal, “a Vim for traders,” and why he thinks the AI hype cycle is just beginning. 
It's a wide-ranging and candid conversation with one of the most controversial figures in tech and biotech.</p><p><strong>Follow Martin Shkreli on </strong><a href="https://x.com/martinshkreli?lang=en" rel="noopener noreferrer" target="_blank"><strong>Twitter</strong></a></p><p>Godel Terminal: https://godelterminal.com/</p><p><strong>Follow Weights &amp; Biases on </strong><a href="https://x.com/weights_biases?s=21&amp;t=ktCZSU5uZ-QmXUJakQgoeQ" rel="noopener noreferrer" target="_blank"><strong>Twitter</strong></a></p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">5c77ab41-f9df-48f5-a5fc-3184a5830dba</guid><itunes:image href="https://artwork.captivate.fm/c847c1de-52f1-4ffb-ad21-382fdee4206c/miST15iAHq-9zZ86xXXTnJzs.jpg"/><pubDate>Tue, 20 May 2025 06:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/5c77ab41-f9df-48f5-a5fc-3184a5830dba.mp3" length="130115367" type="audio/mpeg"/><itunes:duration>01:30:19</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Inside Cursor: The future of AI coding with Co-founder Sualeh Asif</title><itunes:title>Inside Cursor: The future of AI coding with Co-founder Sualeh Asif</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, host Lukas Biewald talks with Sualeh Asif, the CPO and co-founder of Cursor, one of the fastest-growing and most loved AI-powered coding platforms. Sualeh shares the story behind Cursor’s creation, the technical and design decisions that set it apart, and how AI models are changing the way we build software. 
They dive deep into infrastructure challenges, the importance of speed and user experience, and how emerging trends in agents and reasoning models are reshaping the developer workflow.</p><p>Sualeh also discusses scaling AI inference to support hundreds of millions of requests per day, building trust through product quality, and his vision for how programming will evolve in the next few years.</p><p>⏳Timestamps:</p><p>00:00 How Cursor got started and why it took off</p><p>04:50 Switching from Vim to VS Code and the rise of Copilot</p><p>08:10 Why Cursor won among competitors: product philosophy and execution</p><p>10:30 How user data and feedback loops drive Cursor’s improvements</p><p>12:20 Iterating on AI agents: what made Cursor hold back and wait</p><p>13:30 Competitive coding background: advantage or challenge?</p><p>16:30 Making coding fun again: latency, flow, and model choices</p><p>19:10 Building Cursor’s infrastructure: from GPUs to indexing billions of files</p><p>26:00 How Cursor prioritizes compute allocation for indexing</p><p>30:00 Running massive ML infrastructure: surprises and scaling lessons</p><p>34:50 Why Cursor chose DeepSeek models early</p><p>36:00 Where AI agents are heading next</p><p>40:07 Debugging and evaluating complex AI agents</p><p>42:00 How coding workflows will change over the next 2–3 years</p><p>46:20 Dream future projects: AI for reading codebases and papers</p><p><strong>🎙 Get our podcasts on these platforms:</strong></p><ul><li><strong>Apple Podcasts:</strong> <a href="https://wandb.me/apple-podcasts" target="_blank">https://wandb.me/apple-podcasts</a></li><li><strong>Spotify:</strong> <a href="https://wandb.me/spotify" target="_blank">https://wandb.me/spotify</a></li><li><strong>YouTube:</strong> <a href="https://wandb.me/youtube" 
target="_blank">https://wandb.me/youtube</a></li></ul><br/><p><br></p><p><strong>Follow Weights &amp; Biases:</strong></p><ul><li><a href="https://x.com/weights_biases" target="_blank">https://x.com/weights_biases</a></li><li><a href="https://www.linkedin.com/company/wandb" target="_blank">https://www.linkedin.com/company/wandb</a></li></ul><br/>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, host Lukas Biewald talks with Sualeh Asif, the CPO and co-founder of Cursor, one of the fastest-growing and most loved AI-powered coding platforms. Sualeh shares the story behind Cursor’s creation, the technical and design decisions that set it apart, and how AI models are changing the way we build software. They dive deep into infrastructure challenges, the importance of speed and user experience, and how emerging trends in agents and reasoning models are reshaping the developer workflow.</p><p>Sualeh also discusses scaling AI inference to support hundreds of millions of requests per day, building trust through product quality, and his vision for how programming will evolve in the next few years.</p><p>⏳Timestamps:</p><p>00:00 How Cursor got started and why it took off</p><p>04:50 Switching from Vim to VS Code and the rise of Copilot</p><p>08:10 Why Cursor won among competitors: product philosophy and execution</p><p>10:30 How user data and feedback loops drive Cursor’s improvements</p><p>12:20 Iterating on AI agents: what made Cursor hold back and wait</p><p>13:30 Competitive coding background: advantage or challenge?</p><p>16:30 Making coding fun again: latency, flow, and model choices</p><p>19:10 Building Cursor’s infrastructure: from GPUs to indexing billions of files</p><p>26:00 How Cursor prioritizes compute allocation for indexing</p><p>30:00 Running massive ML infrastructure: surprises and scaling lessons</p><p>34:50 Why Cursor chose DeepSeek models early</p><p>36:00 Where AI agents are heading next</p><p>40:07 Debugging and 
evaluating complex AI agents</p><p>42:00 How coding workflows will change over the next 2–3 years</p><p>46:20 Dream future projects: AI for reading codebases and papers</p><p><strong>🎙 Get our podcasts on these platforms:</strong></p><ul><li><strong>Apple Podcasts:</strong> <a href="https://wandb.me/apple-podcasts" target="_blank">https://wandb.me/apple-podcasts</a></li><li><strong>Spotify:</strong> <a href="https://wandb.me/spotify" target="_blank">https://wandb.me/spotify</a></li><li><strong>YouTube:</strong> <a href="https://wandb.me/youtube" target="_blank">https://wandb.me/youtube</a></li></ul><br/><p><br></p><p><strong>Follow Weights &amp; Biases:</strong></p><ul><li><a href="https://x.com/weights_biases" target="_blank">https://x.com/weights_biases</a></li><li><a href="https://www.linkedin.com/company/wandb" target="_blank">https://www.linkedin.com/company/wandb</a></li></ul><br/>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">b129d63a-821a-488d-a80d-070d88ef45ab</guid><itunes:image href="https://artwork.captivate.fm/e01d9d02-4efe-4da1-8ab3-c74cf62f28e4/qL1UKt92SO2hww9R94cEcOK8.jpg"/><pubDate>Tue, 29 Apr 2025 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/01201111-4b98-4b40-a73a-a620e4d8a646/GD031-pod.mp3" length="41670848" type="audio/mpeg"/><itunes:duration>49:36</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Inside the Dark Web, AI and Cybersecurity with Christopher Ahlberg, CEO of Recorded Future</title><itunes:title>Inside the Dark Web, AI and Cybersecurity with Christopher Ahlberg, CEO of Recorded Future</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, host Lukas Biewald talks with Christopher Ahlberg, CEO of Recorded Future, a pioneering 
cybersecurity company leveraging AI to provide intelligence insights. Christopher shares his fascinating journey from founding data visualization startup Spotfire to building Recorded Future into an industry leader, eventually leading to its acquisition by Mastercard.</p><p>They dive into gripping stories of cyber espionage, including how Recorded Future intercepted a hacker selling access to the U.S. Election Assistance Commission. Christopher also explains why the criminal underworld has shifted to platforms like Telegram, how AI is transforming both cyber threats and defenses, and the real-world implications of becoming an "undesirable enemy" of the Russian state.</p><p>This episode offers unique insights into cybersecurity, AI-driven intelligence, entrepreneurship lessons from a two-time founder, and what happens when geopolitical tensions intersect with cutting-edge technology. A must-listen for anyone interested in cybersecurity, artificial intelligence, or the complex dynamics shaping global security.</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: <a href="https://wandb.me/apple-podcasts" target="_blank">https://wandb.me/apple-podcasts</a></p><p>Spotify: <a href="https://wandb.me/spotify" target="_blank">https://wandb.me/spotify</a></p><p>YouTube: <a href="https://wandb.me/youtube" target="_blank">https://wandb.me/youtube</a></p><p><br></p><p>Follow Weights &amp; Biases:</p><p><a href="https://twitter.com/weights_biases" target="_blank">https://twitter.com/weights_biases</a>&nbsp;</p><p><a href="https://www.linkedin.com/company/wandb" target="_blank">https://www.linkedin.com/company/wandb</a>&nbsp;&nbsp;</p><p><br></p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, host Lukas Biewald talks with Christopher Ahlberg, CEO of Recorded Future, a pioneering cybersecurity company leveraging AI to provide intelligence insights. 
Christopher shares his fascinating journey from founding data visualization startup Spotfire to building Recorded Future into an industry leader, eventually leading to its acquisition by Mastercard.</p><p>They dive into gripping stories of cyber espionage, including how Recorded Future intercepted a hacker selling access to the U.S. Election Assistance Commission. Christopher also explains why the criminal underworld has shifted to platforms like Telegram, how AI is transforming both cyber threats and defenses, and the real-world implications of becoming an "undesirable enemy" of the Russian state.</p><p>This episode offers unique insights into cybersecurity, AI-driven intelligence, entrepreneurship lessons from a two-time founder, and what happens when geopolitical tensions intersect with cutting-edge technology. A must-listen for anyone interested in cybersecurity, artificial intelligence, or the complex dynamics shaping global security.</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: <a href="https://wandb.me/apple-podcasts" target="_blank">https://wandb.me/apple-podcasts</a></p><p>Spotify: <a href="https://wandb.me/spotify" target="_blank">https://wandb.me/spotify</a></p><p>YouTube: <a href="https://wandb.me/youtube" target="_blank">https://wandb.me/youtube</a></p><p><br></p><p>Follow Weights &amp; Biases:</p><p><a href="https://twitter.com/weights_biases" target="_blank">https://twitter.com/weights_biases</a>&nbsp;</p><p><a href="https://www.linkedin.com/company/wandb" target="_blank">https://www.linkedin.com/company/wandb</a>&nbsp;&nbsp;</p><p><br></p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">7856194c-b83d-4cdc-8e4f-a266b847e24f</guid><itunes:image href="https://artwork.captivate.fm/68aec87a-e9d3-465b-8e5c-8e6df2cfd15b/tXVBg070w-zup9-J_2I3croB.jpg"/><pubDate>Tue, 08 Apr 2025 06:00:00 -0400</pubDate><enclosure 
url="https://podcasts.captivate.fm/media/03446282-c113-4b62-a187-aa1dc6f87b90/GD032-pod.mp3" length="42210464" type="audio/mpeg"/><itunes:duration>50:15</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>AI, autonomy, and the future of naval warfare with Captain Jon Haase, United States Navy</title><itunes:title>AI, autonomy, and the future of naval warfare with Captain Jon Haase, United States Navy</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, host Lukas Biewald speaks with Captain Jon Haase, United States Navy, about real-world applications of AI and autonomy in defense. From underwater mine detection with autonomous vehicles to the ethics of lethal AI systems, this conversation dives into how the U.S. military is integrating AI into mission-critical operations — and why humans will always be at the center of warfighting.</p><p>They explore the challenges of underwater autonomy, multi-agent collaboration, cybersecurity, and the growing role of large language models like Gemini and Claude in the defense space. </p><p>Essential listening for anyone curious about military AI, defense tech, and the future of autonomous systems.</p><p>✅ *Subscribe to Weights &amp; Biases* →  https://bit.ly/45BCkYz</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases </p><p>https://www.linkedin.com/company/wandb  </p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, host Lukas Biewald speaks with Captain Jon Haase, United States Navy, about real-world applications of AI and autonomy in defense. 
From underwater mine detection with autonomous vehicles to the ethics of lethal AI systems, this conversation dives into how the U.S. military is integrating AI into mission-critical operations — and why humans will always be at the center of warfighting.</p><p>They explore the challenges of underwater autonomy, multi-agent collaboration, cybersecurity, and the growing role of large language models like Gemini and Claude in the defense space. </p><p>Essential listening for anyone curious about military AI, defense tech, and the future of autonomous systems.</p><p>✅ *Subscribe to Weights &amp; Biases* →  https://bit.ly/45BCkYz</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases </p><p>https://www.linkedin.com/company/wandb  </p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">b4e39cda-0454-486b-b587-312955d5dbeb</guid><itunes:image href="https://artwork.captivate.fm/e092e358-2b0f-48b9-b4cf-7a66a5240f41/4n2xbYkrAG-Sx5zbwyRQ8Ewp.jpg"/><pubDate>Tue, 25 Mar 2025 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/8b41a549-bdea-44b2-9826-623b56af1fd4/GD030-pod.mp3" length="51694736" type="audio/mpeg"/><itunes:duration>01:01:32</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The rise of AI agents</title><itunes:title>The rise of AI agents</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, host Lukas Biewald sits down with João Moura, CEO &amp; Founder of CrewAI, one of the leading platforms enabling AI agents for enterprise applications. 
João shares insights into how AI agents are being successfully deployed in over 40% of Fortune 500 companies, what tools these agents rely on, and how software companies are adapting to an agentic world.</p><p>They also discuss:</p><ul><li>What defines a true AI agent versus simple automation</li><li>How AI agents are transforming business processes in industries like finance, insurance, and software</li><li>The evolving business models for APIs as AI agents become the dominant software users</li><li>What the next breakthroughs in agentic AI might look like in 2025 and beyond</li></ul><br/><p>If you're curious about the cutting edge of AI automation, enterprise AI adoption, and the real impact of multi-agent systems, this episode is packed with essential insights.</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, host Lukas Biewald sits down with João Moura, CEO &amp; Founder of CrewAI, one of the leading platforms enabling AI agents for enterprise applications. 
João shares insights into how AI agents are being successfully deployed in over 40% of Fortune 500 companies, what tools these agents rely on, and how software companies are adapting to an agentic world.</p><p>They also discuss:</p><ul><li>What defines a true AI agent versus simple automation</li><li>How AI agents are transforming business processes in industries like finance, insurance, and software</li><li>The evolving business models for APIs as AI agents become the dominant software users</li><li>What the next breakthroughs in agentic AI might look like in 2025 and beyond</li></ul><br/><p>If you're curious about the cutting edge of AI automation, enterprise AI adoption, and the real impact of multi-agent systems, this episode is packed with essential insights.</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">6e5f7ae9-637e-48dc-8413-8b2f8c11faa1</guid><itunes:image href="https://artwork.captivate.fm/e45f95b9-34f9-429b-83a6-711480087be2/L97L-Ws0MTkrjRrqFdc_iQ3E.jpg"/><pubDate>Tue, 25 Feb 2025 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/16d3b857-0b1a-4fae-9533-c65980eb2961/GD029-pod.mp3" length="41285456" type="audio/mpeg"/><itunes:duration>49:09</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>R1, OpenAI’s o3, and the ARC-AGI Benchmark: Insights from Mike Knoop</title><itunes:title>R1, OpenAI’s o3, and the ARC-AGI Benchmark: Insights from Mike Knoop</itunes:title><description><![CDATA[<p>In this episode of <em>Gradient Dissent</em>, host Lukas Biewald sits down with <strong>Mike Knoop</strong>, Co-founder and CEO of <strong>Ndea</strong>, a cutting-edge AI research lab. Mike shares his journey from building Zapier into a major automation platform to diving into the frontiers of AI research. 
They discuss <strong>DeepSeek’s R1, OpenAI’s O-series models, and the ARC Prize</strong>, a competition aimed at advancing AI’s reasoning capabilities. Mike explains how <strong>program synthesis</strong> and deep learning must merge to create <strong>true AGI</strong>, and why he believes AI reliability is the biggest hurdle for automation adoption.</p><p>This conversation covers <strong>AGI timelines, research breakthroughs, and the future of intelligent systems</strong>, making it essential listening for AI enthusiasts, researchers, and entrepreneurs.</p><p>Mentioned Show Notes:</p><p>https://ndea.com</p><p>https://arcprize.org/blog/r1-zero-r1-results-analysis</p><p>https://arcprize.org/blog/oai-o3-pub-breakthrough</p><p><br></p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p><br></p><p>Connect with Mike Knoop:</p><p>@mikeknoop</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases </p><p>https://www.linkedin.com/company/wandb  </p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p><p><br></p>]]></description><content:encoded><![CDATA[<p>In this episode of <em>Gradient Dissent</em>, host Lukas Biewald sits down with <strong>Mike Knoop</strong>, Co-founder and CEO of <strong>Ndea</strong>, a cutting-edge AI research lab. Mike shares his journey from building Zapier into a major automation platform to diving into the frontiers of AI research. They discuss <strong>DeepSeek’s R1, OpenAI’s O-series models, and the ARC Prize</strong>, a competition aimed at advancing AI’s reasoning capabilities. 
Mike explains how <strong>program synthesis</strong> and deep learning must merge to create <strong>true AGI</strong>, and why he believes AI reliability is the biggest hurdle for automation adoption.</p><p>This conversation covers <strong>AGI timelines, research breakthroughs, and the future of intelligent systems</strong>, making it essential listening for AI enthusiasts, researchers, and entrepreneurs.</p><p>Mentioned Show Notes:</p><p>https://ndea.com</p><p>https://arcprize.org/blog/r1-zero-r1-results-analysis</p><p>https://arcprize.org/blog/oai-o3-pub-breakthrough</p><p><br></p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p><br></p><p>Connect with Mike Knoop:</p><p>@mikeknoop</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases </p><p>https://www.linkedin.com/company/wandb  </p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p><p><br></p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">6be37142-35bd-4a22-95bf-5d5da3bc870a</guid><itunes:image href="https://artwork.captivate.fm/7cbf2fbe-067c-46d8-bf9a-5dadb280accd/L7NlxkWbD4GKT_IYsmFkia6C.jpg"/><pubDate>Tue, 04 Feb 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/bf353c95-4f1d-449e-96d7-11be1bd1782d/GD028-pod.mp3" length="60491552" type="audio/mpeg"/><itunes:duration>01:12:01</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>DeepSeek, Stargate and AI&apos;s $600 Billion Question with Sequoia&apos;s David Cahn</title><itunes:title>DeepSeek, Stargate and AI&apos;s $600 Billion Question with Sequoia&apos;s David Cahn</itunes:title><description><![CDATA[<p>In this episode of <em>Gradient Dissent</em>, 
host Lukas Biewald sits down with David Cahn, partner at Sequoia Capital, for a compelling discussion on the dynamic world of AI investments. They dive into recent developments, including DeepSeek and Stargate, exploring their implications for the AI industry. Drawing from his articles, <em>"AI's $200 Billion Question"</em> and <em>"AI's $600 Billion Question,"</em> David unpacks the financial challenges and opportunities surrounding AI infrastructure spending and the staggering revenue required to sustain these investments. Together, they examine the competitive strategies of cloud providers, the transformative impact of AI on business models, and predictions for the next wave of AI-driven growth. This episode offers an in-depth look at the crossroads of AI innovation and financial strategy.</p><p>Mentioned Articles:</p><p><a href="https://www.sequoiacap.com/article/follow-the-gpus-perspective/" rel="noopener noreferrer" target="_blank">AI’s $200B Question</a></p><p><a href="https://www.sequoiacap.com/article/ais-600b-question/" rel="noopener noreferrer" target="_blank">AI’s $600B Question</a></p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p><br></p><p>Connect with David Cahn:</p><p>@DavidCahn6</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></description><content:encoded><![CDATA[<p>In this episode of <em>Gradient Dissent</em>, host Lukas Biewald sits down with David Cahn, partner at Sequoia Capital, for a compelling discussion on the dynamic world of AI investments. They dive into recent developments, including DeepSeek and Stargate, exploring their implications for the AI industry. 
Drawing from his articles, <em>"AI's $200 Billion Question"</em> and <em>"AI's $600 Billion Question,"</em> David unpacks the financial challenges and opportunities surrounding AI infrastructure spending and the staggering revenue required to sustain these investments. Together, they examine the competitive strategies of cloud providers, the transformative impact of AI on business models, and predictions for the next wave of AI-driven growth. This episode offers an in-depth look at the crossroads of AI innovation and financial strategy.</p><p>Mentioned Articles:</p><p><a href="https://www.sequoiacap.com/article/follow-the-gpus-perspective/" rel="noopener noreferrer" target="_blank">AI’s $200B Question</a></p><p><a href="https://www.sequoiacap.com/article/ais-600b-question/" rel="noopener noreferrer" target="_blank">AI’s $600B Question</a></p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p><br></p><p>Connect with David Cahn:</p><p>@DavidCahn6</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">cc08dc59-d1d6-4ad6-b47e-9e4ee7970cc9</guid><itunes:image href="https://artwork.captivate.fm/627286cf-d3ed-4107-9fbf-dba867c038c7/HCSX2JTr3gPLLrBg4QLIwqoz.jpg"/><pubDate>Tue, 28 Jan 2025 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/28f47480-8c9e-456a-bb82-741d7752caf1/GD027-pod.mp3" length="48940208" type="audio/mpeg"/><itunes:duration>58:16</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Building the 
future of collaborative AI development with Akshay Agrawal</title><itunes:title>Building the future of collaborative AI development with Akshay Agrawal</itunes:title><description><![CDATA[<p>In this episode of <em>Gradient Dissent</em>, Akshay Agrawal, Co-Founder of Marimo, joins host Lukas Biewald to discuss the future of collaborative AI development.&nbsp;</p><p>They dive into how Marimo is enabling developers and researchers to collaborate seamlessly on AI projects, the challenges of scaling AI tools, and the importance of fostering open ecosystems for innovation. Akshay shares insights into building a platform that empowers teams to iterate faster and solve complex AI challenges together.</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p><p><br></p><p><br></p><p><br></p>]]></description><content:encoded><![CDATA[<p>In this episode of <em>Gradient Dissent</em>, Akshay Agrawal, Co-Founder of Marimo, joins host Lukas Biewald to discuss the future of collaborative AI development.&nbsp;</p><p>They dive into how Marimo is enabling developers and researchers to collaborate seamlessly on AI projects, the challenges of scaling AI tools, and the importance of fostering open ecosystems for innovation. 
Akshay shares insights into building a platform that empowers teams to iterate faster and solve complex AI challenges together.</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p><p><br></p><p><br></p><p><br></p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">dc681fe9-1cab-425e-9761-6cff85303410</guid><itunes:image href="https://artwork.captivate.fm/08eefecc-e7de-4a7a-9e5a-89d35079fc9a/XyBiQjkRD4oc58gXh-OK7IZY.jpg"/><pubDate>Tue, 07 Jan 2025 05:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e96a7800-95f4-4eee-a825-154465bb6e54/GD026-Pod.mp3" length="34483472" type="audio/mpeg"/><itunes:duration>41:03</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Evaluating LLMs with Chatbot Arena and Joseph E. Gonzalez</title><itunes:title>Evaluating LLMs with Chatbot Arena and Joseph E. Gonzalez</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Joseph E. Gonzalez, EECS Professor at UC Berkeley and Co-Founder at RunLLM, joins host Lukas Biewald to explore innovative approaches to evaluating LLMs.</p><p>They discuss the concept of vibes-based evaluation, which examines not just accuracy but also the style and tone of model responses, and how Chatbot Arena has become a community-driven benchmark for open-source and commercial LLMs. Joseph shares insights on democratizing model evaluation, refining AI-human interactions, and leveraging human preferences to improve model performance. 
This episode provides a deep dive into the evolving landscape of LLM evaluation and its impact on AI development.</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Joseph E. Gonzalez, EECS Professor at UC Berkeley and Co-Founder at RunLLM, joins host Lukas Biewald to explore innovative approaches to evaluating LLMs.</p><p>They discuss the concept of vibes-based evaluation, which examines not just accuracy but also the style and tone of model responses, and how Chatbot Arena has become a community-driven benchmark for open-source and commercial LLMs. Joseph shares insights on democratizing model evaluation, refining AI-human interactions, and leveraging human preferences to improve model performance. 
This episode provides a deep dive into the evolving landscape of LLM evaluation and its impact on AI development.</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">80ba4fe4-8197-49f8-a152-a27149d8b94f</guid><itunes:image href="https://artwork.captivate.fm/94ad37fe-3bb2-4413-9ebb-243824dbdacd/HA8T1vLpm8IZOlOocBQSQP7n.jpg"/><pubDate>Tue, 17 Dec 2024 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/cdcbeb15-fdc4-4d45-a468-7686da005f47/GD025-Pod.mp3" length="46643648" type="audio/mpeg"/><itunes:duration>55:32</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>AI’s breakthrough in weather forecasting with Brightband’s Julian Green</title><itunes:title>AI’s breakthrough in weather forecasting with Brightband’s Julian Green</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Julian Green, Co-founder &amp; CEO of Brightband, joins host Lukas Biewald to discuss how AI is transforming weather forecasting and climate solutions.</p><p>They explore Brightband's innovative approach to using AI for extreme weather prediction, the shift from physics-based models to AI-driven forecasting, and the potential for democratizing weather data. Julian shares insights into building trust in AI for critical decisions, navigating the challenges of deep tech entrepreneurship, and the broader implications of AI in mitigating climate risks. 
This episode delves into the intersection of AI and Earth systems, highlighting its transformative impact on weather and climate decision-making.</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>Connect with Julian Green:</p><p>@juliangreensf</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Julian Green, Co-founder &amp; CEO of Brightband, joins host Lukas Biewald to discuss how AI is transforming weather forecasting and climate solutions.</p><p>They explore Brightband's innovative approach to using AI for extreme weather prediction, the shift from physics-based models to AI-driven forecasting, and the potential for democratizing weather data. Julian shares insights into building trust in AI for critical decisions, navigating the challenges of deep tech entrepreneurship, and the broader implications of AI in mitigating climate risks. 
This episode delves into the intersection of AI and Earth systems, highlighting its transformative impact on weather and climate decision-making.</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>Connect with Julian Green:</p><p>@juliangreensf</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">95f1fa6d-830c-40b8-a9d3-38af3d95aa12</guid><itunes:image href="https://artwork.captivate.fm/2c29f390-924e-4b7e-870c-f5a1b1f90a77/_jZRfGFx6Nx4xyevSfqcNiF4.jpg"/><pubDate>Tue, 26 Nov 2024 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/65744267-5c5a-4e50-b356-5eed8d1221a5/GD024-Pod-2.mp3" length="41973248" type="audio/mpeg"/><itunes:duration>49:58</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>What’s the path to AGI? A conversation with Turing Co-founder and CEO Jonathan Siddharth</title><itunes:title>What’s the path to AGI? A conversation with Turing Co-founder and CEO Jonathan Siddharth</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Jonathan Siddharth, CEO &amp; Co-Founder of Turing, joins host Lukas Biewald to discuss the path to AGI.</p><p>They explore how Turing built a "developer cloud" of 3.7 million engineers to power AGI training, providing high-quality code and reasoning data to leading AI labs. Jonathan shares insights on Turing’s journey, from building coding datasets to solving enterprise AI challenges and enabling human-in-the-loop solutions. 
This episode offers a unique perspective on the intersection of human intelligence and AGI, with an eye on the expansion of new domains beyond coding.</p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp;<a href="https://bit.ly/45BCkYz" target="_blank">https://bit.ly/45BCkYz</a>&nbsp;</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>Connect with Jonathan Siddharth:</p><p><a href="https://www.linkedin.com/in/jonsid/" target="_blank">https://www.linkedin.com/in/jonsid/</a>&nbsp;</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Jonathan Siddharth, CEO &amp; Co-Founder of Turing, joins host Lukas Biewald to discuss the path to AGI.</p><p>They explore how Turing built a "developer cloud" of 3.7 million engineers to power AGI training, providing high-quality code and reasoning data to leading AI labs. Jonathan shares insights on Turing’s journey, from building coding datasets to solving enterprise AI challenges and enabling human-in-the-loop solutions. 
This episode offers a unique perspective on the intersection of human intelligence and AGI, with an eye on the expansion of new domains beyond coding.</p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp;<a href="https://bit.ly/45BCkYz" target="_blank">https://bit.ly/45BCkYz</a>&nbsp;</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>Connect with Jonathan Siddharth:</p><p><a href="https://www.linkedin.com/in/jonsid/" target="_blank">https://www.linkedin.com/in/jonsid/</a>&nbsp;</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">10637c69-34c0-45c1-9964-6cd899d8a17d</guid><itunes:image href="https://artwork.captivate.fm/8f64a35f-8f26-4428-9477-8413fce654b3/a9Bylbw0XakRVJmGAlE91LUN.jpg"/><pubDate>Thu, 07 Nov 2024 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e2b8442f-d5bb-4169-a8c9-d96aacf9c38f/GD023-Pod.mp3" length="46029440" type="audio/mpeg"/><itunes:duration>54:48</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Vercel’s CEO &amp; Founder Guillermo Rauch on the impact of AI on Web Development and Front End Engineering</title><itunes:title>Vercel&apos;s CEO &amp; Founder Guillermo Rauch on the impact of AI on Web Development and Front End Engineering</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Guillermo Rauch, CEO &amp; Founder of Vercel, joins host Lukas Biewald for a wide ranging discussion on how AI is changing web development and front end engineering. 
They discuss how Vercel’s v0 expert AI agent is generating code and UI based on simple ChatGPT-like prompts, the importance of releasing daily for AI applications, and the changing landscape of frontier model performance between open and closed models.</p><p>Listen on Apple Podcasts: <a href="http://wandb.me/apple-podcasts" target="_blank">http://wandb.me/apple-podcasts</a></p><p>Listen on Spotify: <a href="http://wandb.me/spotify" target="_blank">http://wandb.me/spotify</a>&nbsp;</p><p>Subscribe to Weights &amp; Biases:&nbsp;<a href="https://bit.ly/45BCkYz" target="_blank">https://bit.ly/45BCkYz</a></p><p>Get our podcasts on these platforms:</p><p>Apple Podcasts: <a href="http://wandb.me/apple-podcasts" target="_blank">http://wandb.me/apple-podcasts</a></p><p>Spotify: <a href="http://wandb.me/spotify" target="_blank">http://wandb.me/spotify</a></p><p>Google: <a href="http://wandb.me/gd_google" target="_blank">http://wandb.me/gd_google</a></p><p>YouTube: http://wandb.me/youtube</p><p>Connect with Guillermo Rauch:</p><p><a href="https://www.linkedin.com/in/rauchg/" target="_blank">https://www.linkedin.com/in/rauchg/</a>&nbsp;</p><p><a href="https://x.com/rauchg" target="_blank">https://x.com/rauchg</a></p><p>Follow Weights &amp; Biases:</p><p><a href="https://twitter.com/weights_biases" target="_blank">https://twitter.com/weights_biases</a></p><p><a href="https://www.linkedin.com/company/wandb" target="_blank">https://www.linkedin.com/company/wandb</a>&nbsp;</p><p>Join the Weights &amp; Biases Discord Server:</p><p><a href="https://discord.gg/CkZKRNnaf3" target="_blank">https://discord.gg/CkZKRNnaf3</a></p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Guillermo Rauch, CEO &amp; Founder of Vercel, joins host Lukas Biewald for a wide-ranging discussion on how AI is changing web development and front-end engineering. 
They discuss how Vercel’s v0 expert AI agent is generating code and UI based on simple ChatGPT-like prompts, the importance of releasing daily for AI applications, and the changing landscape of frontier model performance between open and closed models.</p><p>Listen on Apple Podcasts: <a href="http://wandb.me/apple-podcasts" target="_blank">http://wandb.me/apple-podcasts</a></p><p>Listen on Spotify: <a href="http://wandb.me/spotify" target="_blank">http://wandb.me/spotify</a>&nbsp;</p><p>Subscribe to Weights &amp; Biases:&nbsp;<a href="https://bit.ly/45BCkYz" target="_blank">https://bit.ly/45BCkYz</a></p><p>Get our podcasts on these platforms:</p><p>Apple Podcasts: <a href="http://wandb.me/apple-podcasts" target="_blank">http://wandb.me/apple-podcasts</a></p><p>Spotify: <a href="http://wandb.me/spotify" target="_blank">http://wandb.me/spotify</a></p><p>Google: <a href="http://wandb.me/gd_google" target="_blank">http://wandb.me/gd_google</a></p><p>YouTube: http://wandb.me/youtube</p><p>Connect with Guillermo Rauch:</p><p><a href="https://www.linkedin.com/in/rauchg/" target="_blank">https://www.linkedin.com/in/rauchg/</a>&nbsp;</p><p><a href="https://x.com/rauchg" target="_blank">https://x.com/rauchg</a></p><p>Follow Weights &amp; Biases:</p><p><a href="https://twitter.com/weights_biases" target="_blank">https://twitter.com/weights_biases</a></p><p><a href="https://www.linkedin.com/company/wandb" target="_blank">https://www.linkedin.com/company/wandb</a>&nbsp;</p><p>Join the Weights &amp; Biases Discord Server:</p><p><a href="https://discord.gg/CkZKRNnaf3" target="_blank">https://discord.gg/CkZKRNnaf3</a></p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">4f6aa5f1-261e-493f-ab84-f049abdcb8d0</guid><itunes:image href="https://artwork.captivate.fm/4ac6fa51-f777-4227-b8a9-0d414adc9602/6QedpeK7YmYjgfVUbErjZ5mc.jpg"/><pubDate>Thu, 24 Oct 2024 06:00:00 -0400</pubDate><enclosure 
url="https://podcasts.captivate.fm/media/2f789fd0-291c-418d-94fd-66ec4b9c4ba0/GD022-Pod.mp3" length="47842160" type="audio/mpeg"/><itunes:duration>56:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Snowflake’s CEO Sridhar Ramaswamy on 700+ LLM enterprise use cases</title><itunes:title>Snowflake’s CEO Sridhar Ramaswamy on 700+ LLM enterprise use cases</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Snowflake CEO Sridhar Ramaswamy joins host Lukas Biewald to explore how AI is transforming enterprise data strategies.</p><p>They discuss Sridhar's journey from Google to Snowflake, diving into the evolving role of foundation models, Snowflake’s AI strategy, and the challenges of scaling AI in business. Sridhar also shares his thoughts on leadership, rapid iteration, and creating meaningful AI solutions for enterprise clients. Tune in to discover how Snowflake is driving innovation in the AI and data space.</p><p>Connect with Sridhar Ramaswamy:</p><p><a href="https://www.linkedin.com/in/sridhar-ramaswamy/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/sridhar-ramaswamy/</a>&nbsp;</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Snowflake CEO Sridhar Ramaswamy joins host Lukas Biewald to explore how AI is transforming enterprise data strategies.</p><p>They discuss Sridhar's journey from Google to Snowflake, diving into the evolving role of foundation models, Snowflake’s AI strategy, and the challenges of scaling AI in business. Sridhar also shares his thoughts on leadership, rapid iteration, and creating meaningful AI solutions for enterprise clients. 
Tune in to discover how Snowflake is driving innovation in the AI and data space.</p><p>Connect with Sridhar Ramaswamy:</p><p><a href="https://www.linkedin.com/in/sridhar-ramaswamy/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/sridhar-ramaswamy/</a>&nbsp;</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">11cc0fc0-5d7d-4bce-88d0-a5a42197cee6</guid><itunes:image href="https://artwork.captivate.fm/446dcabf-a33f-42a5-92e1-3587d7ea53a9/AeaurTCGI6OwaOgc_l_t4tIl.jpg"/><pubDate>Thu, 10 Oct 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/be34aa84-3b97-463e-8978-3e49ff4d2f7a/GD021-Pod.mp3" length="46783424" type="audio/mpeg"/><itunes:duration>55:42</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Elevating ML Infrastructure with Modal Labs CEO Erik Bernhardsson</title><itunes:title>Elevating ML Infrastructure with Modal Labs CEO Erik Bernhardsson</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Erik Bernhardsson, CEO &amp; Founder of Modal Labs, joins host Lukas Biewald to discuss the future of machine learning infrastructure. They explore how Modal is enhancing the developer experience, handling large-scale GPU workloads, and simplifying cloud execution for data teams. 
If you’re into AI, data pipelines, or building robust ML systems, this episode is packed with valuable insights!</p><p>🎙 *Listen on Apple Podcasts*: <a href="http://wandb.me/apple-podcasts" rel="noopener noreferrer" target="_blank">http://wandb.me/apple-podcasts</a></p><p>🎙 *Listen on Spotify*: <a href="http://wandb.me/spotify" rel="noopener noreferrer" target="_blank">http://wandb.me/spotify</a>&nbsp;</p><p><br></p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp; https://bit.ly/45BCkYz</p><p><br></p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p><br></p><p>Connect with Erik Bernhardsson:&nbsp;</p><p><a href="https://www.linkedin.com/in/erikbern/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/erikbern/</a>&nbsp;</p><p><a href="https://x.com/bernhardsson" rel="noopener noreferrer" target="_blank">https://x.com/bernhardsson</a>&nbsp;</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Erik Bernhardsson, CEO &amp; Founder of Modal Labs, joins host Lukas Biewald to discuss the future of machine learning infrastructure. They explore how Modal is enhancing the developer experience, handling large-scale GPU workloads, and simplifying cloud execution for data teams. 
If you’re into AI, data pipelines, or building robust ML systems, this episode is packed with valuable insights!</p><p>🎙 *Listen on Apple Podcasts*: <a href="http://wandb.me/apple-podcasts" rel="noopener noreferrer" target="_blank">http://wandb.me/apple-podcasts</a></p><p>🎙 *Listen on Spotify*: <a href="http://wandb.me/spotify" rel="noopener noreferrer" target="_blank">http://wandb.me/spotify</a>&nbsp;</p><p><br></p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp; https://bit.ly/45BCkYz</p><p><br></p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p><br></p><p>Connect with Erik Bernhardsson:&nbsp;</p><p><a href="https://www.linkedin.com/in/erikbern/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/erikbern/</a>&nbsp;</p><p><a href="https://x.com/bernhardsson" rel="noopener noreferrer" target="_blank">https://x.com/bernhardsson</a>&nbsp;</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">254b898d-10ff-4d0d-a53f-818893240187</guid><itunes:image href="https://artwork.captivate.fm/2ffb2c41-6b54-4387-bedb-d221fc2262aa/Ta5R0HrNy1oaAkTwQJmOEyAc.jpg"/><pubDate>Thu, 26 Sep 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/ec30ed62-256b-4c27-8193-e578e4bdb910/GD020-pod.mp3" length="41705120" type="audio/mpeg"/><itunes:duration>49:39</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>From No-Code to AI-Powered Apps with Airtable’s Howie Liu</title><itunes:title>From No-Code to 
AI-Powered Apps with Airtable’s Howie Liu</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Howie Liu, CEO of Airtable, joins host Lukas Biewald to dive into Airtable's transformation from a no-code app builder to a platform capable of supporting complex AI-driven workflows. They discuss the strategic decisions that propelled Airtable's growth, the challenges of scaling AI in enterprise settings, and the future of AI in business operations. Discover how Airtable is reshaping digital transformation and why flexibility and innovation are key in today's tech landscape. Tune in now to learn about the evolving role of AI in business and product development.</p><p>🎙 *Listen on Apple Podcasts*: <a href="http://wandb.me/apple-podcasts" rel="noopener noreferrer" target="_blank">http://wandb.me/apple-podcasts</a></p><p>🎙 *Listen on Spotify*: <a href="http://wandb.me/spotify" rel="noopener noreferrer" target="_blank">http://wandb.me/spotify</a>&nbsp;</p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp; https://bit.ly/45BCkYz</p><p><br></p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p><br></p><p>Connect with Howie Liu:</p><p><a href="https://www.linkedin.com/in/howieliu/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/howieliu/</a>&nbsp;</p><p><a href="https://x.com/howietl" rel="noopener noreferrer" target="_blank">https://x.com/howietl</a>&nbsp;</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p><p><br></p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Howie Liu, CEO of Airtable, joins host Lukas Biewald to dive into
Airtable's transformation from a no-code app builder to a platform capable of supporting complex AI-driven workflows. They discuss the strategic decisions that propelled Airtable's growth, the challenges of scaling AI in enterprise settings, and the future of AI in business operations. Discover how Airtable is reshaping digital transformation and why flexibility and innovation are key in today's tech landscape. Tune in now to learn about the evolving role of AI in business and product development.</p><p>🎙 *Listen on Apple Podcasts*: <a href="http://wandb.me/apple-podcasts" rel="noopener noreferrer" target="_blank">http://wandb.me/apple-podcasts</a></p><p>🎙 *Listen on Spotify*: <a href="http://wandb.me/spotify" rel="noopener noreferrer" target="_blank">http://wandb.me/spotify</a>&nbsp;</p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp; https://bit.ly/45BCkYz</p><p><br></p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p><br></p><p>Connect with Howie Liu:</p><p><a href="https://www.linkedin.com/in/howieliu/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/howieliu/</a>&nbsp;</p><p><a href="https://x.com/howietl" rel="noopener noreferrer" target="_blank">https://x.com/howietl</a>&nbsp;</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p><p><br></p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">1f3fac6e-ab95-4676-bd65-4fdc09c34f1d</guid><itunes:image href="https://artwork.captivate.fm/5bba133a-bc09-4596-bb1a-52ea81985cd8/Xa7L_pIlQ4szNYFw13XRN9Aj.jpg"/><pubDate>Thu, 12 Sep 2024 12:00:00 -0400</pubDate><enclosure 
url="https://podcasts.captivate.fm/media/65f8a203-3e91-4ba0-8539-29410303623e/GD018-pod.mp3" length="61276784" type="audio/mpeg"/><itunes:duration>01:12:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Launching the Fastest AI Inference Solution with Cerebras Systems CEO Andrew Feldman</title><itunes:title>Launching the Fastest AI Inference Solution with Cerebras Systems CEO Andrew Feldman</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Andrew Feldman, CEO of Cerebras Systems, joins host Lukas Biewald to discuss the latest advancements in AI inference technology. They explore Cerebras Systems' groundbreaking new AI inference product, examining how their wafer-scale chips are setting new benchmarks in speed, accuracy, and cost efficiency. Andrew shares insights on the architectural innovations that make this possible and discusses the broader implications for AI workloads in production. 
This episode provides a comprehensive look at the cutting edge of AI hardware and its impact on the future of machine learning.</p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp;https://bit.ly/45BCkYz</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>Connect with Andrew Feldman:</p><p><a href="https://www.linkedin.com/in/andrewdfeldman/" target="_blank">https://www.linkedin.com/in/andrewdfeldman/</a>&nbsp;</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p><p>Paper Andrew referenced (by Paul David, economic historian):&nbsp;&nbsp;</p><p><a href="https://www.jstor.org/stable/2006600" target="_blank">https://www.jstor.org/stable/2006600</a>&nbsp;</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Andrew Feldman, CEO of Cerebras Systems, joins host Lukas Biewald to discuss the latest advancements in AI inference technology. They explore Cerebras Systems' groundbreaking new AI inference product, examining how their wafer-scale chips are setting new benchmarks in speed, accuracy, and cost efficiency. Andrew shares insights on the architectural innovations that make this possible and discusses the broader implications for AI workloads in production. 
This episode provides a comprehensive look at the cutting edge of AI hardware and its impact on the future of machine learning.</p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp;https://bit.ly/45BCkYz</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>Connect with Andrew Feldman:</p><p><a href="https://www.linkedin.com/in/andrewdfeldman/" target="_blank">https://www.linkedin.com/in/andrewdfeldman/</a>&nbsp;</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p><p>Paper Andrew referenced (by Paul David, economic historian):&nbsp;&nbsp;</p><p><a href="https://www.jstor.org/stable/2006600" target="_blank">https://www.jstor.org/stable/2006600</a>&nbsp;</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">c19d57a9-70f5-4074-ab2f-91d8eb430807</guid><itunes:image href="https://artwork.captivate.fm/70d45eae-6ebe-49de-a464-ec1314889fb8/RjwhnHz5oo-u_BfXM1YuDJmP.jpg"/><pubDate>Tue, 27 Aug 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e861628d-57b2-4298-8cfd-cc5dd7d8af64/GD019-pod.mp3" length="44711312" type="audio/mpeg"/><itunes:duration>53:14</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Reinventing AI Agents with Imbue CEO Kanjun Qiu</title><itunes:title>Reinventing AI Agents with Imbue CEO Kanjun Qiu</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Kanjun Qiu, CEO and Co-founder of Imbue, joins host Lukas Biewald to discuss how AI agents are transforming code generation and software development. 
Discover the potential impact and challenges of creating autonomous AI systems that can write and verify code, and learn about the practical research involved.</p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp; https://bit.ly/45BCkYz</p><p><br></p><p>Connect with Kanjun Qiu:&nbsp;</p><p><a href="https://www.linkedin.com/in/kanjun/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/kanjun/</a>&nbsp;</p><p><a href="https://x.com/kanjun" rel="noopener noreferrer" target="_blank">https://x.com/kanjun</a></p><p><br></p><p>Generally Intelligent Podcast:&nbsp;</p><p><a href="https://imbue.com/podcast/" rel="noopener noreferrer" target="_blank">https://imbue.com/podcast/</a></p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p><p><br></p><p><br></p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Kanjun Qiu, CEO and Co-founder of Imbue, joins host Lukas Biewald to discuss how AI agents are transforming code generation and software development. 
Discover the potential impact and challenges of creating autonomous AI systems that can write and verify code, and learn about the practical research involved.</p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp; https://bit.ly/45BCkYz</p><p><br></p><p>Connect with Kanjun Qiu:&nbsp;</p><p><a href="https://www.linkedin.com/in/kanjun/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/kanjun/</a>&nbsp;</p><p><a href="https://x.com/kanjun" rel="noopener noreferrer" target="_blank">https://x.com/kanjun</a></p><p><br></p><p>Generally Intelligent Podcast:&nbsp;</p><p><a href="https://imbue.com/podcast/" rel="noopener noreferrer" target="_blank">https://imbue.com/podcast/</a></p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p><p><br></p><p><br></p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">f9a98ceb-7589-46b2-a1ab-3a6740a45e27</guid><itunes:image href="https://artwork.captivate.fm/398e6766-2011-4b18-9537-4a94253895ae/DM3y5PSP7ICUvZHRhDAK3fZn.jpg"/><pubDate>Thu, 08 Aug 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/51f68bc5-09b8-4885-af03-6935fd19b89e/GD017-YT.mp3" length="40836224" type="audio/mpeg"/><itunes:duration>48:37</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>From startup to $1.2B with Lambda’s Stephen Balaban</title><itunes:title>From startup to $1.2B with Lambda&apos;s Stephen Balaban</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Stephen Balaban, CEO of Lambda Labs, joins host Lukas Biewald to discuss the journey of scaling Lambda Labs to an impressive $400M in revenue. 
They explore the pivotal moments that shaped the company, the future of GPU technology, and the impact of AI data centers on the energy grid. Discover the challenges and triumphs of running a successful hardware and cloud business in the AI industry.</p><p>Tune in now to explore the evolving landscape of AI hardware and cloud services.</p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp;https://bit.ly/45BCkYz</p><p>Connect with Stephen Balaban:</p><p><a href="https://www.linkedin.com/in/sbalaban/" target="_blank">https://www.linkedin.com/in/sbalaban/</a>&nbsp;</p><p><a href="https://x.com/stephenbalaban" target="_blank">https://x.com/stephenbalaban</a>&nbsp;</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Stephen Balaban, CEO of Lambda Labs, joins host Lukas Biewald to discuss the journey of scaling Lambda Labs to an impressive $400M in revenue. They explore the pivotal moments that shaped the company, the future of GPU technology, and the impact of AI data centers on the energy grid. 
Discover the challenges and triumphs of running a successful hardware and cloud business in the AI industry.</p><p>Tune in now to explore the evolving landscape of AI hardware and cloud services.</p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp;https://bit.ly/45BCkYz</p><p>Connect with Stephen Balaban:</p><p><a href="https://www.linkedin.com/in/sbalaban/" target="_blank">https://www.linkedin.com/in/sbalaban/</a>&nbsp;</p><p><a href="https://x.com/stephenbalaban" target="_blank">https://x.com/stephenbalaban</a>&nbsp;</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;&nbsp;</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">42251140-efca-4372-8fdf-73ae1d9612fb</guid><itunes:image href="https://artwork.captivate.fm/2c7f35f0-28b2-4a17-91ce-03e6a7dd7282/nJobh4ivj3wbMnBJ3UvUMhRt.jpg"/><pubDate>Thu, 25 Jul 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e6494c5f-dec2-4c6d-95e3-eac76fe6d5e8/GD016-Final-Pod.mp3" length="119835968" type="audio/mpeg"/><itunes:duration>49:56</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Harnessing AI for legal practice with CoCounsel’s Jake Heller</title><itunes:title>Harnessing AI for legal practice with CoCounsel’s Jake Heller</itunes:title><description><![CDATA[In this episode of Gradient Dissent, Jake Heller, Head of Product, CoCounsel, joins host Lukas Biewald to discuss how AI is innovating legal practices and reshaping educational approaches for aspiring lawyers. From automating document review to enhancing legal research capabilities, explore the potential impact and challenges AI presents in the legal field. Whether you're a legal professional, a student, or simply curious about the future of law and technology, this conversation provides valuable insights and perspectives. 
Tune in now to explore the evolving landscape of AI in legal education.

✅ *Subscribe to Weights &amp; Biases* → https://bit.ly/45BCkYz

🎙 Get our podcasts on these platforms:

Apple Podcasts: http://wandb.me/apple-podcasts

Spotify: http://wandb.me/spotify

Google: http://wandb.me/gd_google

YouTube: http://wandb.me/youtube

Connect with Jake Heller:

https://www.linkedin.com/in/jakeheller

Follow Weights &amp; Biases:

https://twitter.com/weights_biases

https://www.linkedin.com/company/wandb

Join the Weights &amp; Biases Discord Server:

https://discord.gg/CkZKRNnaf3

&nbsp;]]></description><content:encoded><![CDATA[In this episode of Gradient Dissent, Jake Heller, Head of Product, CoCounsel, joins host Lukas Biewald to discuss how AI is innovating legal practices and reshaping educational approaches for aspiring lawyers. From automating document review to enhancing legal research capabilities, explore the potential impact and challenges AI presents in the legal field. Whether you're a legal professional, a student, or simply curious about the future of law and technology, this conversation provides valuable insights and perspectives. Tune in now to explore the evolving landscape of AI in legal education.

✅ *Subscribe to Weights &amp; Biases* → https://bit.ly/45BCkYz

🎙 Get our podcasts on these platforms:

Apple Podcasts: http://wandb.me/apple-podcasts

Spotify: http://wandb.me/spotify

Google: http://wandb.me/gd_google

YouTube: http://wandb.me/youtube

Connect with Jake Heller:

https://www.linkedin.com/in/jakeheller

Follow Weights &amp; Biases:

https://twitter.com/weights_biases

https://www.linkedin.com/company/wandb

Join the Weights &amp; Biases Discord Server:

https://discord.gg/CkZKRNnaf3

&nbsp;]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">3eb000f9-4ce8-4257-97d8-8c39d2835545</guid><itunes:image href="https://artwork.captivate.fm/44d9ddc0-470f-4368-b098-b83f27b9bbb0/wKLXUmUO6G59R_2gjQF7eBcF.jpg"/><pubDate>Thu, 11 Jul 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/137f844c-7625-462b-9594-846b4683683e/GD015-Pod.mp3" length="53990288" type="audio/mpeg"/><itunes:duration>01:04:16</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Transforming Search with Perplexity AI’s CTO Denis Yarats</title><itunes:title>Transforming Search with Perplexity AI’s CTO Denis Yarats</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Denis Yarats, CTO of Perplexity, joins host Lukas Biewald to discuss the innovative use of AI in creating high-quality, fast search engine answers.</p><p>Discover how Perplexity combines advancements in search engines and LLMs to deliver precise answers. 
Yarats shares insights on the technical challenges, the importance of speed, and the future of AI in search.</p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp;https://bit.ly/45BCkYz</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>Connect with Denis Yarats:</p><p><a href="https://www.linkedin.com/in/denisyarats/" target="_blank">https://www.linkedin.com/in/denisyarats/</a>&nbsp;</p><p><a href="https://x.com/denisyarats" target="_blank">https://x.com/denisyarats</a>&nbsp;</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Denis Yarats, CTO of Perplexity, joins host Lukas Biewald to discuss the innovative use of AI in creating high-quality, fast search engine answers.</p><p>Discover how Perplexity combines advancements in search engines and LLMs to deliver precise answers. 
Yarats shares insights on the technical challenges, the importance of speed, and the future of AI in search.</p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp;https://bit.ly/45BCkYz</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>Connect with Denis Yarats:</p><p><a href="https://www.linkedin.com/in/denisyarats/" target="_blank">https://www.linkedin.com/in/denisyarats/</a>&nbsp;</p><p><a href="https://x.com/denisyarats" target="_blank">https://x.com/denisyarats</a>&nbsp;</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">a71bcec5-15f7-495d-b79c-82fbfad42498</guid><itunes:image href="https://artwork.captivate.fm/08a2f998-7894-41b5-863e-cd1c9240761d/Jgeat7OS8uhy3Mj5aGWDsUtE.jpg"/><pubDate>Thu, 20 Jun 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b5d4098b-ced1-4c5e-ba8a-9e7141fec2b5/GD014-FinalPod.mp3" length="63266470" type="audio/mpeg"/><itunes:duration>43:54</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>AI in electronics: Quilter’s journey in PCB design</title><itunes:title>AI in electronics: Quilter’s journey in PCB design</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Sergiy Nesterenko, CEO of Quilter, joins host Lukas Biewald to discuss the groundbreaking use of reinforcement learning in PCB design.&nbsp;</p><p>Learn how Quilter automates the complex, manual process of creating PCBs, making it faster and more efficient. 
Nesterenko shares insights on the challenges and successes of integrating AI with real-world applications and discusses the future of electronic design.</p><p><br></p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp; https://bit.ly/45BCkYz</p><p><br></p><p>Connect with Sergiy Nesterenko:</p><p><a href="https://www.linkedin.com/in/sergiynesterenko/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/sergiynesterenko/</a>&nbsp;</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Sergiy Nesterenko, CEO of Quilter, joins host Lukas Biewald to discuss the groundbreaking use of reinforcement learning in PCB design.&nbsp;</p><p>Learn how Quilter automates the complex, manual process of creating PCBs, making it faster and more efficient. 
Nesterenko shares insights on the challenges and successes of integrating AI with real-world applications and discusses the future of electronic design.</p><p><br></p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp; https://bit.ly/45BCkYz</p><p><br></p><p>Connect with Sergiy Nesterenko:</p><p><a href="https://www.linkedin.com/in/sergiynesterenko/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/sergiynesterenko/</a>&nbsp;</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">00457627-c7b8-4865-93ad-87a1d80ab0e8</guid><itunes:image href="https://artwork.captivate.fm/2978921c-8ded-4c53-9d95-efaa0c73d8ff/0kgbE5E55ckGPvQXTzCmgeEO.jpg"/><pubDate>Thu, 06 Jun 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/67afd2da-d766-4028-bd3f-447fa3d1913b/GD013-Pod.mp3" length="36826064" type="audio/mpeg"/><itunes:duration>43:50</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Future of AI in Coding with Codeium CEO Varun Mohan</title><itunes:title>The Future of AI in Coding with Codeium CEO Varun Mohan</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Varun Mohan, Co-Founder &amp; CEO of Codeium, joins host Lukas Biewald to discuss the transformative power of AI in coding.&nbsp;</p><p>They explore how Codeium evolved from GPU virtualization to a widely used AI coding tool and tackled the technical challenges and future prospects of AI-assisted software development. Varun shares insights on overcoming performance and latency issues and how AI can significantly enhance engineering velocity. 
This episode offers an in-depth look at the intersection of AI and coding, highlighting both technological advancements and the potential for more efficient development processes.</p><p><br></p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp; https://bit.ly/45BCkYz</p><p><br></p><p>Connect with Varun Mohan:</p><p><a href="https://www.linkedin.com/in/varunkmohan/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/varunkmohan/</a>&nbsp;</p><p><a href="https://x.com/_mohansolo" rel="noopener noreferrer" target="_blank">https://x.com/_mohansolo</a>&nbsp;</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p><p><br></p><p><br></p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Varun Mohan, Co-Founder &amp; CEO of Codeium, joins host Lukas Biewald to discuss the transformative power of AI in coding.&nbsp;</p><p>They explore how Codeium evolved from GPU virtualization to a widely used AI coding tool and tackled the technical challenges and future prospects of AI-assisted software development. Varun shares insights on overcoming performance and latency issues and how AI can significantly enhance engineering velocity. 
This episode offers an in-depth look at the intersection of AI and coding, highlighting both technological advancements and the potential for more efficient development processes.</p><p><br></p><p>✅ *Subscribe to Weights &amp; Biases* →&nbsp; https://bit.ly/45BCkYz</p><p><br></p><p>Connect with Varun Mohan:</p><p><a href="https://www.linkedin.com/in/varunkmohan/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/varunkmohan/</a>&nbsp;</p><p><a href="https://x.com/_mohansolo" rel="noopener noreferrer" target="_blank">https://x.com/_mohansolo</a>&nbsp;</p><p><br></p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p><p><br></p><p><br></p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">84d4f348-0dd2-4925-a532-e47deaf60568</guid><itunes:image href="https://artwork.captivate.fm/a3f80409-792f-498d-a75d-45b40df24a0f/r_bXTLKif22VAgIQIKbXmz4K.jpg"/><pubDate>Thu, 23 May 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/5e1e7924-2f00-49f7-80d0-643251cbeb2c/GD012-Pod-File.mp3" length="46095632" type="audio/mpeg"/><itunes:duration>54:53</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Shaping AI Benchmarks with Together AI Co-Founder Percy Liang</title><itunes:title>Shaping AI Benchmarks with Together AI Co-Founder Percy Liang</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, Together AI co-founder and Stanford Associate Professor Percy Liang joins host, Lukas Biewald, to discuss advancements in AI benchmarking and the pivotal role that open-source plays in AI development.</p><p>He shares his development of HELM—a robust framework for evaluating language models. 
The discussion highlights how this framework improves transparency and effectiveness in AI benchmarks. Additionally, Percy shares insights on the pivotal role of open-source models in democratizing AI development and addresses the challenges of English language bias in global AI applications. This episode offers in-depth insights into how benchmarks are shaping the future of AI, highlighting both technological advancements and the push for more equitable and inclusive technologies.</p><p>✅ <strong>Subscribe to Weights &amp; Biases </strong>→&nbsp; <a href="https://wandb.me/yt_subscribe" rel="noopener noreferrer" target="_blank">http://wandb.me/yt_subscribe</a></p><p><strong>Connect with Percy Liang:</strong></p><ul><li><a href="https://www.linkedin.com/in/percy-liang-717b8a/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/percy-liang-717b8a/</a>&nbsp;</li><li><a href="https://twitter.com/percyliang" rel="noopener noreferrer" target="_blank">https://twitter.com/percyliang</a>&nbsp;</li></ul><br/><p>&nbsp;<strong>Anticipatory Music Composer:</strong></p><ul><li><a href="https://stanford.io/3y5VycN" rel="noopener noreferrer" target="_blank">https://stanford.io/3y5VycN</a>&nbsp;</li></ul><br/><p><strong>Blog Post:</strong></p><ul><li><a href="https://crfm.stanford.edu/2024/02/18/helm-instruct.html" rel="noopener noreferrer" target="_blank">https://crfm.stanford.edu/2024/02/18/helm-instruct.html</a></li></ul><br/><p><strong>Follow Weights &amp; Biases:</strong></p><ul><li><a href="https://twitter.com/weights_biases" rel="noopener noreferrer" target="_blank">https://twitter.com/weights_biases&nbsp;</a></li><li><a href="https://www.linkedin.com/company/wandb" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/wandb&nbsp;</a></li></ul><br/>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, Together AI co-founder and Stanford Associate Professor Percy Liang joins host, Lukas Biewald, to discuss 
advancements in AI benchmarking and the pivotal role that open-source plays in AI development.</p><p>He shares his development of HELM—a robust framework for evaluating language models. The discussion highlights how this framework improves transparency and effectiveness in AI benchmarks. Additionally, Percy shares insights on the pivotal role of open-source models in democratizing AI development and addresses the challenges of English language bias in global AI applications. This episode offers in-depth insights into how benchmarks are shaping the future of AI, highlighting both technological advancements and the push for more equitable and inclusive technologies.</p><p>✅ <strong>Subscribe to Weights &amp; Biases </strong>→&nbsp; <a href="https://wandb.me/yt_subscribe" rel="noopener noreferrer" target="_blank">http://wandb.me/yt_subscribe</a></p><p><strong>Connect with Percy Liang:</strong></p><ul><li><a href="https://www.linkedin.com/in/percy-liang-717b8a/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/percy-liang-717b8a/</a>&nbsp;</li><li><a href="https://twitter.com/percyliang" rel="noopener noreferrer" target="_blank">https://twitter.com/percyliang</a>&nbsp;</li></ul><br/><p>&nbsp;<strong>Anticipatory Music Composer:</strong></p><ul><li><a href="https://stanford.io/3y5VycN" rel="noopener noreferrer" target="_blank">https://stanford.io/3y5VycN</a>&nbsp;</li></ul><br/><p><strong>Blog Post:</strong></p><ul><li><a href="https://crfm.stanford.edu/2024/02/18/helm-instruct.html" rel="noopener noreferrer" target="_blank">https://crfm.stanford.edu/2024/02/18/helm-instruct.html</a></li></ul><br/><p><strong>Follow Weights &amp; Biases:</strong></p><ul><li><a href="https://twitter.com/weights_biases" rel="noopener noreferrer" target="_blank">https://twitter.com/weights_biases&nbsp;</a></li><li><a href="https://www.linkedin.com/company/wandb" rel="noopener noreferrer" 
target="_blank">https://www.linkedin.com/company/wandb&nbsp;</a></li></ul><br/>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast/episodes/shaping-ai-benchmarks-with-together-ai-co-founder-percy-lang]]></link><guid isPermaLink="false">f029bf6e-93a8-4750-95b9-cf6a95307884</guid><itunes:image href="https://artwork.captivate.fm/0e133b3b-e923-4684-9cde-88b5d52f6f10/NBShli7M_j0yQjHbjliod2hZ.jpg"/><pubDate>Thu, 09 May 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f95149f1-2e28-4ad4-a3f4-9ae566b6f3ba/GD011-Podcast-Final.mp3" length="44801696" type="audio/mpeg"/><itunes:duration>53:20</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Accelerating drug discovery with AI: Insights from Isomorphic Labs</title><itunes:title>Accelerating drug discovery with AI: Insights from Isomorphic Labs</itunes:title><description><![CDATA[<p>In this episode of Gradient Dissent, <strong>Isomorphic Labs</strong> Chief AI Officer <strong>Max Jaderberg,</strong> and Chief Technology Officer <strong>Sergei Yakneen</strong> join our host Lukas Biewald to discuss the advancements in biotech and drug discovery being unlocked with machine learning.</p><p>With backgrounds in advanced AI research at DeepMind, Max and Sergei offer their unique insights into the challenges and successes of applying AI in a complex field like biotechnology. They share their journey at Isomorphic Labs, a company dedicated to revolutionizing drug discovery with AI. 
In this episode, they discuss the transformative impact of deep learning on the drug development process and Isomorphic Labs' strategy to innovate from molecular design to clinical trials.</p><p>You’ll come away with valuable insights into the challenges of applying AI in biotech, the role of AI in streamlining the drug discovery pipeline, and the future of AI-driven solutions in healthcare.</p><p><strong>Connect with Sergei Yakneen &amp; Max Jaderberg:</strong></p><p><a href="https://www.linkedin.com/in/maxjaderberg/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/maxjaderberg/</a>&nbsp;</p><p><a href="https://www.linkedin.com/in/yakneensergei/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/yakneensergei/</a>&nbsp;</p><p><a href="https://twitter.com/SergeiIakhnin" rel="noopener noreferrer" target="_blank">https://twitter.com/SergeiIakhnin</a>&nbsp;</p><p><a href="https://twitter.com/maxjaderberg" rel="noopener noreferrer" target="_blank">https://twitter.com/maxjaderberg</a>&nbsp;</p><p><strong>Follow Weights &amp; Biases:</strong></p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p><p><br></p>]]></description><content:encoded><![CDATA[<p>In this episode of Gradient Dissent, <strong>Isomorphic Labs</strong> Chief AI Officer <strong>Max Jaderberg</strong> and Chief Technology Officer <strong>Sergei Yakneen</strong> join our host Lukas Biewald to discuss the advancements in biotech and drug discovery being unlocked with machine learning.</p><p>With backgrounds in advanced AI research at DeepMind, Max and Sergei offer their unique insights into the challenges and successes of applying AI in a complex field like biotechnology. They share their journey at Isomorphic Labs, a company dedicated to revolutionizing drug discovery with AI. 
In this episode, they discuss the transformative impact of deep learning on the drug development process and Isomorphic Labs' strategy to innovate from molecular design to clinical trials.</p><p>You’ll come away with valuable insights into the challenges of applying AI in biotech, the role of AI in streamlining the drug discovery pipeline, and the future of AI-driven solutions in healthcare.</p><p><strong>Connect with Sergei Yakneen &amp; Max Jaderberg:</strong></p><p><a href="https://www.linkedin.com/in/maxjaderberg/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/maxjaderberg/</a>&nbsp;</p><p><a href="https://www.linkedin.com/in/yakneensergei/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/yakneensergei/</a>&nbsp;</p><p><a href="https://twitter.com/SergeiIakhnin" rel="noopener noreferrer" target="_blank">https://twitter.com/SergeiIakhnin</a>&nbsp;</p><p><a href="https://twitter.com/maxjaderberg" rel="noopener noreferrer" target="_blank">https://twitter.com/maxjaderberg</a>&nbsp;</p><p><strong>Follow Weights &amp; Biases:</strong></p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p><p><br></p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">7b6292fa-8e87-423d-8011-31c7f2cb6265</guid><itunes:image href="https://artwork.captivate.fm/9c469284-2420-4767-a6ac-2666ee07fed7/iu-8a2WOSA_KCmTbkfkklaK9.jpg"/><pubDate>Thu, 25 Apr 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/36972ca8-2c29-4bf9-8ce2-0569911c6d0c/GD010-Pod.mp3" length="59128736" type="audio/mpeg"/><itunes:duration>01:10:23</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Redefining AI Hardware for Enterprise with SambaNova’s Rodrigo Liang</title><itunes:title>Redefining AI Hardware for Enterprise with SambaNova’s Rodrigo 
Liang</itunes:title><description><![CDATA[<p>🚀 Discover cutting-edge AI hardware development for enterprises in this episode of Gradient Dissent, featuring Rodrigo Liang, CEO of SambaNova Systems.&nbsp;</p><p><strong>Rodrigo Liang’s</strong> journey from Oracle to founding <strong>SambaNova</strong> is a tale of innovation and determination. In this episode, Rodrigo discusses the importance of specialized hardware in unlocking AI's potential for enterprise businesses and SambaNova's mission to deliver comprehensive AI solutions from chips to models.&nbsp;</p><p>Explore critical insights on navigating the challenges of introducing AI to executives and the evolution of AI applications within large enterprises, and get a glimpse into the future of AI in the business world.</p><p><strong>🎙 Get our podcasts on these platforms:</strong></p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p><strong>Connect with Rodrigo Liang:</strong></p><p><a href="https://www.linkedin.com/in/rodrigo-liang/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/rodrigo-liang/</a></p><p><a href="https://twitter.com/RodrigoLiang" rel="noopener noreferrer" target="_blank">https://twitter.com/RodrigoLiang</a>&nbsp;</p><p>&nbsp;</p><p><strong>Follow Weights &amp; Biases:</strong></p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></description><content:encoded><![CDATA[<p>🚀 Discover cutting-edge AI hardware development for enterprises in this episode of Gradient Dissent, featuring Rodrigo Liang, CEO of SambaNova Systems.&nbsp;</p><p><strong>Rodrigo Liang’s</strong> journey from Oracle to founding <strong>SambaNova</strong> is a tale of innovation and determination. 
In this episode, Rodrigo discusses the importance of specialized hardware in unlocking AI's potential for enterprise businesses and SambaNova's mission to deliver comprehensive AI solutions from chips to models.&nbsp;</p><p>Explore critical insights on navigating the challenges of introducing AI to executives and the evolution of AI applications within large enterprises, and get a glimpse into the future of AI in the business world.</p><p><strong>🎙 Get our podcasts on these platforms:</strong></p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p><strong>Connect with Rodrigo Liang:</strong></p><p><a href="https://www.linkedin.com/in/rodrigo-liang/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/rodrigo-liang/</a></p><p><a href="https://twitter.com/RodrigoLiang" rel="noopener noreferrer" target="_blank">https://twitter.com/RodrigoLiang</a>&nbsp;</p><p>&nbsp;</p><p><strong>Follow Weights &amp; Biases:</strong></p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">b961e74c-2338-4043-9a1a-e38e2930ee3e</guid><itunes:image href="https://artwork.captivate.fm/4d70b4cc-46a8-428a-a889-9330d48c9816/XAT4Sp14KXNpa5aQRaTdm8GW.jpg"/><pubDate>Thu, 11 Apr 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/08528294-340e-4443-ab9f-55fc534c93ce/GD009-Pod-File.mp3" length="44579264" type="audio/mpeg"/><itunes:duration>53:04</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Navigating the Vector Database Landscape with Pinecone&apos;s Edo Liberty</title><itunes:title>Navigating the Vector Database 
Landscape with Pinecone&apos;s Edo Liberty</itunes:title><description><![CDATA[<p>🚀 This episode of Gradient Dissent welcomes <strong>Edo Liberty</strong>, the mind behind <strong>Pinecone's revolutionary vector database technology</strong>.</p><p>A former leader at Amazon AI Labs and Yahoo's New York lab, Edo Liberty draws on his extensive background in AI research and development to unpack the complexities behind vector databases and their essential role in enhancing AI's capabilities.</p><p>Discover the pivotal moments and key decisions that have defined Pinecone's journey, learn about the different embedding strategies that are reshaping AI applications, and understand how Pinecone's success has had a profound impact on the technology landscape.</p><p><strong>Connect with Edo Liberty:</strong></p><p><a href="https://www.linkedin.com/in/edo-liberty-4380164/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/edo-liberty-4380164/</a>&nbsp;</p><p><a href="https://twitter.com/EdoLiberty" rel="noopener noreferrer" target="_blank">https://twitter.com/EdoLiberty</a>&nbsp;</p><p><strong>Follow Weights &amp; Biases:</strong></p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p><p><strong>Join the Weights &amp; Biases Discord Server:</strong></p><p>https://discord.gg/CkZKRNnaf3</p>]]></description><content:encoded><![CDATA[<p>🚀 This episode of Gradient Dissent welcomes <strong>Edo Liberty</strong>, the mind behind <strong>Pinecone's revolutionary vector database technology</strong>.</p><p>A former leader at Amazon AI Labs and Yahoo's New York lab, Edo Liberty draws on his extensive background in AI research and development to unpack the complexities behind vector databases and their essential role in enhancing AI's capabilities.</p><p>Discover the pivotal moments and key decisions that have defined Pinecone's journey, learn about the different embedding strategies that are reshaping AI applications, and understand how 
Pinecone's success has had a profound impact on the technology landscape.</p><p><strong>Connect with Edo Liberty:</strong></p><p><a href="https://www.linkedin.com/in/edo-liberty-4380164/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/edo-liberty-4380164/</a>&nbsp;</p><p><a href="https://twitter.com/EdoLiberty" rel="noopener noreferrer" target="_blank">https://twitter.com/EdoLiberty</a>&nbsp;</p><p><strong>Follow Weights &amp; Biases:</strong></p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p><p><strong>Join the Weights &amp; Biases Discord Server:</strong></p><p>https://discord.gg/CkZKRNnaf3</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">0eb9f8e4-17f0-44eb-9941-13a6c2491340</guid><itunes:image href="https://artwork.captivate.fm/54a8504a-af32-4dfe-996a-a9f305655d24/8qCRsG-7QvQ6syw5KBuYVXNt.jpg"/><pubDate>Thu, 28 Mar 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/97ad9945-4a78-4d11-95a3-8feb1eafa5bf/GD008-Pod.mp3" length="55511024" type="audio/mpeg"/><itunes:duration>01:06:05</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Transforming Data into Business Solutions with Salesforce AI CEO, Clara Shih</title><itunes:title>Transforming Data into Business Solutions with Salesforce AI CEO, Clara Shih</itunes:title><description><![CDATA[<p>🚀 In this episode of Gradient Dissent, we explore the revolutionary impact of AI across industries with <strong>Clara Shih, CEO of Salesforce AI and Founder of Hearsay Systems.&nbsp;</strong></p><p>Dive into <strong>Salesforce AI's</strong> cutting-edge approach to customer service through AI, the importance of a trust-first strategy, and the future of AI policies and education. 
Learn how Salesforce empowers businesses and shapes the future with AI innovations like Prompt Builder and Copilot Studio. Whether you're an AI enthusiast, a business leader, or someone curious about the future of technology, this discussion offers valuable insights into navigating the rapidly evolving world of AI.</p><p>✅ <strong>Subscribe to Weights &amp; Biases YouTube</strong> →&nbsp;https://bit.ly/45BCkYz</p><p>🎙 <strong>Get our podcasts on these platforms:</strong></p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p><strong>Connect with Clara:</strong></p><p><a href="https://www.linkedin.com/in/clarashih/" target="_blank">https://www.linkedin.com/in/clarashih/</a></p><p><a href="https://x.com/clarashih?s=20" target="_blank">https://x.com/clarashih?s=20</a>&nbsp;&nbsp;</p><p><strong>Follow Weights &amp; Biases:</strong></p><p><a href="https://twitter.com/weights_biases" target="_blank">https://twitter.com/weights_biases&nbsp;</a></p><p><a href="https://www.linkedin.com/company/wandb" target="_blank">https://www.linkedin.com/company/wandb&nbsp;</a></p>]]></description><content:encoded><![CDATA[<p>🚀 In this episode of Gradient Dissent, we explore the revolutionary impact of AI across industries with <strong>Clara Shih, CEO of Salesforce AI and Founder of Hearsay Systems.&nbsp;</strong></p><p>Dive into <strong>Salesforce AI's</strong> cutting-edge approach to customer service through AI, the importance of a trust-first strategy, and the future of AI policies and education. Learn how Salesforce empowers businesses and shapes the future with AI innovations like Prompt Builder and Copilot Studio. 
Whether you're an AI enthusiast, a business leader, or someone curious about the future of technology, this discussion offers valuable insights into navigating the rapidly evolving world of AI.</p><p>✅ <strong>Subscribe to Weights &amp; Biases YouTube</strong> →&nbsp;https://bit.ly/45BCkYz</p><p>🎙 <strong>Get our podcasts on these platforms:</strong></p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p><strong>Connect with Clara:</strong></p><p><a href="https://www.linkedin.com/in/clarashih/" target="_blank">https://www.linkedin.com/in/clarashih/</a></p><p><a href="https://x.com/clarashih?s=20" target="_blank">https://x.com/clarashih?s=20</a>&nbsp;&nbsp;</p><p><strong>Follow Weights &amp; Biases:</strong></p><p><a href="https://twitter.com/weights_biases" target="_blank">https://twitter.com/weights_biases&nbsp;</a></p><p><a href="https://www.linkedin.com/company/wandb" target="_blank">https://www.linkedin.com/company/wandb&nbsp;</a></p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">41c79e62-a4ff-4582-bf53-e7b97d27805d</guid><itunes:image href="https://artwork.captivate.fm/07f91f81-3c17-4c29-974c-84eaf70e534c/ODc6yvhBszuwK07MOo1dtClO.jpg"/><pubDate>Thu, 14 Mar 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/4c71a337-cee9-4fa4-b0e5-8619abfe1947/GD007-Final-Podcast.mp3" length="84146266" type="audio/mpeg"/><itunes:duration>58:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Upgrading Your Health: Navigating AI&apos;s Future In Healthcare with John Halamka of Mayo Clinic Platform</title><itunes:title>Upgrading Your Health: Navigating AI&apos;s Future In Healthcare with John Halamka of Mayo Clinic Platform</itunes:title><description><![CDATA[<p>In the newest episode of Gradient Dissent, we explore the intersecting 
worlds of <strong>AI and Healthcare</strong> with <strong>John Halamka, President of the Mayo Clinic Platform.</strong></p><p>Journey with us down John Halamka's remarkable path from his early tech startup days to leading innovations as the President of the Mayo Clinic Platform, one of the world's most esteemed healthcare institutions. This deep dive into AI's role in modern medicine covers the technology's evolution, its potential to redefine patient care, and the visionary work of Mayo Clinic Platform in harnessing AI responsibly.</p><p>Explore the misconceptions surrounding AI in healthcare and discover the ethical and regulatory frameworks guiding its application. Glimpse into the future with Halamka's visionary perspective on AI's potential to democratize and revolutionize healthcare across the globe. Join us for an enlightening discussion on the challenges, triumphs, and the horizon of AI in healthcare through the lens of John Halamka's pioneering experiences.</p><p><strong>🎙 Get our podcasts on these platforms:</strong></p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>✅ <strong>Follow Weights &amp; Biases:</strong></p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p>]]></description><content:encoded><![CDATA[<p>In the newest episode of Gradient Dissent, we explore the intersecting worlds of <strong>AI and Healthcare</strong> with <strong>John Halamka, President of the Mayo Clinic Platform.</strong></p><p>Journey with us down John Halamka's remarkable path from his early tech startup days to leading innovations as the President of the Mayo Clinic Platform, one of the world's most esteemed healthcare institutions. 
This deep dive into AI's role in modern medicine covers the technology's evolution, its potential to redefine patient care, and the visionary work of Mayo Clinic Platform in harnessing AI responsibly.</p><p>Explore the misconceptions surrounding AI in healthcare and discover the ethical and regulatory frameworks guiding its application. Glimpse into the future with Halamka's visionary perspective on AI's potential to democratize and revolutionize healthcare across the globe. Join us for an enlightening discussion on the challenges, triumphs, and the horizon of AI in healthcare through the lens of John Halamka's pioneering experiences.</p><p><strong>🎙 Get our podcasts on these platforms:</strong></p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>✅ <strong>Follow Weights &amp; Biases:</strong></p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">4a20c61d-78bd-4754-a3b6-8c75d0e5feb8</guid><itunes:image href="https://artwork.captivate.fm/f46d32ce-8988-44a2-af76-3a2c2e6ed730/xbHmg1PIKzYQPnroeItT3P4F.jpg"/><pubDate>Thu, 29 Feb 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/cb08e386-ade4-42c5-9fae-d230c98be88a/GD006-Pod-Mix.mp3" length="54102848" type="audio/mpeg"/><itunes:duration>01:04:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Shaping the World of Robotics with Chelsea Finn</title><itunes:title>Shaping the World of Robotics with Chelsea Finn</itunes:title><description><![CDATA[<p>In the newest episode of Gradient Dissent, Chelsea Finn, Assistant Professor at Stanford's Computer Science Department, discusses the forefront of robotics and machine learning.</p><p>Discover her 
groundbreaking work, where two-armed robots learn to cook shrimp (messes included!), and hear how robotic learning could transform student feedback in education.</p><p>We'll dive into the challenges of developing humanoid and quadruped robots, explore the limitations of simulated environments and discuss why real-world experience is key for adaptable machines. Plus, Chelsea will offer a glimpse into the future of household robotics and why it may be a few years before a robot is making your bed.</p><p>Whether you're an AI enthusiast, a robotics professional, or simply curious about the potential and future of the technology, this episode offers unique insights into the evolving world of robotics and where it's headed next.</p><p><strong>Subscribe to Weights &amp; Biases</strong> → https://bit.ly/45BCkYz</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>Connect with Chelsea Finn:</p><p>https://www.linkedin.com/in/cbfinn/ </p><p>https://twitter.com/chelseabfinn</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases </p><p>https://www.linkedin.com/company/wandb </p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></description><content:encoded><![CDATA[<p>In the newest episode of Gradient Dissent, Chelsea Finn, Assistant Professor at Stanford's Computer Science Department, discusses the forefront of robotics and machine learning.</p><p>Discover her groundbreaking work, where two-armed robots learn to cook shrimp (messes included!), and hear how robotic learning could transform student feedback in education.</p><p>We'll dive into the challenges of developing humanoid and quadruped robots, explore the limitations of simulated environments and discuss why real-world experience is key for adaptable machines. 
Plus, Chelsea will offer a glimpse into the future of household robotics and why it may be a few years before a robot is making your bed.</p><p>Whether you're an AI enthusiast, a robotics professional, or simply curious about the potential and future of the technology, this episode offers unique insights into the evolving world of robotics and where it's headed next.</p><p><strong>Subscribe to Weights &amp; Biases</strong> → https://bit.ly/45BCkYz</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>Connect with Chelsea Finn:</p><p>https://www.linkedin.com/in/cbfinn/ </p><p>https://twitter.com/chelseabfinn</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases </p><p>https://www.linkedin.com/company/wandb </p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">0f6789fd-5bf3-4329-aeaa-5c584e5681eb</guid><itunes:image href="https://artwork.captivate.fm/0cf89d3d-0fea-4f46-b2c3-d341cf0fdc6a/9o1glkydZ0XUF-UpCHduWsjo.jpg"/><pubDate>Thu, 15 Feb 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b5410bbb-a632-4d74-971f-8caaadad7706/GD005-Pod-Final-Mix.mp3" length="45157520" type="audio/mpeg"/><itunes:duration>53:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>The Power of AI in Search with You.com&apos;s Richard Socher</title><itunes:title>The Power of AI in Search with You.com&apos;s Richard Socher</itunes:title><description><![CDATA[<p>In the latest episode of Gradient Dissent, Richard Socher, CEO of You.com, shares his insights on the power of AI in search. 
The episode focuses on how advanced language models like GPT-4 are transforming search engines and changing the way we interact with digital platforms. The discussion covers the practical applications and challenges of integrating AI into search functionality, as well as the ethical considerations and future implications of AI in our digital lives. Join us for an enlightening conversation on how AI and You.com are reshaping how we access and interact with information online.</p><p><strong>Subscribe to Weights &amp; Biases</strong> →&nbsp;https://bit.ly/45BCkYz</p><p>Timestamps:</p><p>	00:00 - Introduction to Gradient Dissent Podcast</p><p>	00:48 - Richard Socher’s Journey: From Linguistic Computer Science to AI</p><p>	06:42 - The Genesis and Evolution of MetaMind</p><p>	13:30 - Exploring You.com's Approach to Enhanced Search</p><p>	18:15 - Demonstrating You.com's AI in Mortgage Calculations</p><p>	24:10 - The Power of AI in Search: A Deep Dive with You.com</p><p>	30:25 - Security Measures in Running AI-Generated Code</p><p>	35:50 - Building a Robust and Secure AI Tech Stack</p><p>	42:33 - The Role of AI in Automating and Transforming Digital Work</p><p>	48:50 - Discussing Ethical Considerations and the Societal Impact of AI</p><p>	55:15 - Envisioning the Future of AI in Daily Life and Work</p><p>	01:02:00 - Reflecting on the Evolution of AI and Its Future Prospects</p><p>	01:05:00 - Closing Remarks and Podcast Wrap-Up</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>Connect with Richard Socher:</p><p><a href="https://www.linkedin.com/in/richardsocher/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/richardsocher/</a>&nbsp;</p><p><a href="https://twitter.com/RichardSocher" rel="noopener noreferrer" target="_blank">https://twitter.com/RichardSocher</a>&nbsp;</p><p>Follow Weights &amp; 
Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p><p><br></p>]]></description><content:encoded><![CDATA[<p>In the latest episode of Gradient Dissent, Richard Socher, CEO of You.com, shares his insights on the power of AI in search. The episode focuses on how advanced language models like GPT-4 are transforming search engines and changing the way we interact with digital platforms. The discussion covers the practical applications and challenges of integrating AI into search functionality, as well as the ethical considerations and future implications of AI in our digital lives. Join us for an enlightening conversation on how AI and You.com are reshaping how we access and interact with information online.</p><p><strong>Subscribe to Weights &amp; Biases</strong> →&nbsp;https://bit.ly/45BCkYz</p><p>Timestamps:</p><p>	00:00 - Introduction to Gradient Dissent Podcast</p><p>	00:48 - Richard Socher’s Journey: From Linguistic Computer Science to AI</p><p>	06:42 - The Genesis and Evolution of MetaMind</p><p>	13:30 - Exploring You.com's Approach to Enhanced Search</p><p>	18:15 - Demonstrating You.com's AI in Mortgage Calculations</p><p>	24:10 - The Power of AI in Search: A Deep Dive with You.com</p><p>	30:25 - Security Measures in Running AI-Generated Code</p><p>	35:50 - Building a Robust and Secure AI Tech Stack</p><p>	42:33 - The Role of AI in Automating and Transforming Digital Work</p><p>	48:50 - Discussing Ethical Considerations and the Societal Impact of AI</p><p>	55:15 - Envisioning the Future of AI in Daily Life and Work</p><p>	01:02:00 - Reflecting on the Evolution of AI and Its Future Prospects</p><p>	01:05:00 - Closing Remarks and Podcast Wrap-Up</p><p>🎙 Get our podcasts on these platforms:</p><p>Apple Podcasts: http://wandb.me/apple-podcasts</p><p>Spotify: http://wandb.me/spotify</p><p>Google: 
http://wandb.me/gd_google</p><p>YouTube: http://wandb.me/youtube</p><p>Connect with Richard Socher:</p><p><a href="https://www.linkedin.com/in/richardsocher/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/richardsocher/</a>&nbsp;</p><p><a href="https://twitter.com/RichardSocher" rel="noopener noreferrer" target="_blank">https://twitter.com/RichardSocher</a>&nbsp;</p><p>Follow Weights &amp; Biases:</p><p>https://twitter.com/weights_biases&nbsp;</p><p>https://www.linkedin.com/company/wandb&nbsp;</p><p><br></p><p>Join the Weights &amp; Biases Discord Server:</p><p>https://discord.gg/CkZKRNnaf3</p><p><br></p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">c93c09cb-3fdf-4a77-bb05-d8a61f385049</guid><itunes:image href="https://artwork.captivate.fm/8e51fa40-3b9e-4823-a0b1-b64330266ce0/5mbSgE4WoUc897uVE-GKxOuf.jpg"/><pubDate>Thu, 01 Feb 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/5d190bc5-012c-43c9-90de-bf90b63a3c6b/GD004-Podcast-Final.mp3" length="57484352" type="audio/mpeg"/><itunes:duration>01:08:26</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>AI’s Future: Investment &amp; Impact with Sarah Guo and Elad Gil</title><itunes:title>AI&apos;s Future: Investment &amp; Impact with Sarah Guo and Elad Gil</itunes:title><description><![CDATA[<p>Explore the Future of Investment &amp; Impact in AI with Host Lukas Biewald and Guests Elad Gil and Sarah Guo of the No Priors podcast.</p><p>Sarah is the founder of Conviction VC, an AI-centric $100 million venture fund. 
Elad, a seasoned entrepreneur and startup investor, boasts an impressive portfolio of investments in over 40 companies, each valued at $1 billion or more, and wrote the influential "High Growth Handbook."</p><p>Join us for a deep dive into the nuanced world of AI, where we'll explore its broader industry impact, focusing on how startups can balance product-centric approaches with innovation and practical development.</p><p><strong>Subscribe to Weights &amp; Biases</strong> →&nbsp;https://bit.ly/45BCkYz</p><p>Timestamps:</p><p>0:00 - Introduction&nbsp;</p><p>5:15 - Exploring Fine-Tuning vs RAG in AI</p><p>10:30 - Evaluating AI Research for Investment</p><p>15:45 - Impact of AI Models on Product Development</p><p>20:00 - AI's Role in Evolving Job Markets</p><p>25:15 - The Balance Between AI Research and Product Development</p><p>30:00 - Code Generation Technologies in Software Engineering</p><p>35:00 - AI's Broader Industry Implications</p><p>40:00 - Importance of Product-Driven Approaches in AI Startups</p><p>45:00 - AI in Various Sectors: Beyond Software Engineering</p><p>50:00 - Open Source vs Proprietary AI Models</p><p>55:00 - AI's Impact on Traditional Roles and Industries</p><p>1:00:00 - Closing Thoughts&nbsp;</p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</p><p>Follow Weights &amp; Biases:</p><p>YouTube: <a href="http://wandb.me/youtube" target="_blank">http://wandb.me/youtube</a></p><p>Twitter: <a href="https://twitter.com/weights_biases" target="_blank">https://twitter.com/weights_biases</a>&nbsp;</p><p>LinkedIn: <a href="https://www.linkedin.com/company/wandb" target="_blank">https://www.linkedin.com/company/wandb&nbsp;</a></p><p>Join the Weights &amp; Biases Discord Server:</p><p><a href="https://discord.gg/CkZKRNnaf3" target="_blank">https://discord.gg/CkZKRNnaf3</a></p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></description><content:encoded><![CDATA[<p>Explore the Future of Investment &amp; Impact in AI with Host Lukas Biewald and Guests Elad Gil and Sarah Guo of the No Priors podcast.</p><p>Sarah is the founder of Conviction VC, an AI-centric $100 million venture fund. Elad, a seasoned entrepreneur and startup investor, boasts an impressive portfolio of investments in over 40 companies, each valued at $1 billion or more, and wrote the influential "High Growth Handbook."</p><p>Join us for a deep dive into the nuanced world of AI, where we'll explore its broader industry impact, focusing on how startups can balance product-centric approaches with innovation and practical development.</p><p><strong>Subscribe to Weights &amp; Biases</strong> →&nbsp;https://bit.ly/45BCkYz</p><p>Timestamps:</p><p>0:00 - Introduction&nbsp;</p><p>5:15 - Exploring Fine-Tuning vs RAG in AI</p><p>10:30 - Evaluating AI Research for Investment</p><p>15:45 - Impact of AI Models on Product Development</p><p>20:00 - AI's Role in Evolving Job Markets</p><p>25:15 - The Balance Between AI Research and Product Development</p><p>30:00 - Code Generation Technologies in Software Engineering</p><p>35:00 - AI's Broader Industry Implications</p><p>40:00 - Importance of Product-Driven Approaches in AI Startups</p><p>45:00 - AI in Various Sectors: Beyond Software 
Engineering</p><p>50:00 - Open Source vs Proprietary AI Models</p><p>55:00 - AI's Impact on Traditional Roles and Industries</p><p>1:00:00 - Closing Thoughts&nbsp;</p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</p><p>Follow Weights &amp; Biases:</p><p>YouTube: <a href="http://wandb.me/youtube" target="_blank">http://wandb.me/youtube</a></p><p>Twitter: <a href="https://twitter.com/weights_biases" target="_blank">https://twitter.com/weights_biases</a>&nbsp;</p><p>LinkedIn: <a href="https://www.linkedin.com/company/wandb" target="_blank">https://www.linkedin.com/company/wandb&nbsp;</a></p><p>Join the Weights &amp; Biases Discord Server:</p><p><a href="https://discord.gg/CkZKRNnaf3" target="_blank">https://discord.gg/CkZKRNnaf3</a></p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">e4572c9e-bb80-4b6d-929b-d849d7b1cacc</guid><itunes:image href="https://artwork.captivate.fm/1af94f21-b117-48aa-ad41-718b0502de5a/64wbbSBE11MiPoJxOrGoRlrx.jpg"/><pubDate>Thu, 18 Jan 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/4396b202-c54e-43bd-9d74-3f79e65d5d00/GD003-Final-Mix-for-Pod.mp3" length="53952656" type="audio/mpeg"/><itunes:duration>01:04:14</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Revolutionizing AI Data Management with Jerry Liu, CEO of LlamaIndex</title><itunes:title>Revolutionizing AI Data Management with Jerry Liu, CEO of LlamaIndex</itunes:title><description><![CDATA[<p>In the latest episode of Gradient Dissent, we explore the innovative features and impact of LlamaIndex in AI data management with <a 
href="https://www.linkedin.com/in/jerry-liu-64390071/" rel="noopener noreferrer" target="_blank">Jerry Liu</a>, CEO of <a href="https://www.llamaindex.ai/" rel="noopener noreferrer" target="_blank">LlamaIndex</a>. Jerry shares insights on how LlamaIndex integrates diverse data formats with advanced AI technologies, addressing challenges in data retrieval, analysis, and conversational memory. We also delve into the future of AI-driven systems and LlamaIndex's role in this rapidly evolving field. This episode is a must-watch for anyone interested in AI, data science, and the future of technology.</p><p>Timestamps:</p><p>0:00 - Introduction&nbsp;</p><p>4:46 - Differentiating LlamaIndex in the AI framework ecosystem.</p><p>9:00 - Discussing data analysis, search, and retrieval applications.</p><p>14:17 - Exploring Retrieval Augmented Generation (RAG) and vector databases.</p><p>19:33 - Implementing and optimizing One Bot in Discord.</p><p>24:19 - Developing and evaluating datasets for AI systems.</p><p>28:00 - Community contributions and the growth of LlamaIndex.</p><p>34:34 - Discussing embedding models and the use of vector databases.</p><p>39:33 - Addressing AI model hallucinations and fine-tuning.</p><p>44:51 - Text extraction applications and agent-based systems in AI.</p><p>49:25 - Community contributions to LlamaIndex and managing refactors.</p><p>52:00 - Interactions with big tech's corpus and AI context length.</p><p>54:59 - Final thoughts on underrated aspects of ML and challenges in AI.</p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</p><p><strong>Connect with Jerry:</strong></p><p>https://twitter.com/jerryjliu0</p><p>https://www.linkedin.com/in/jerry-liu-64390071/</p><p><strong>Follow Weights &amp; Biases:</strong></p><p>YouTube: <a href="http://wandb.me/youtube" rel="noopener noreferrer" target="_blank">http://wandb.me/youtube</a></p><p>Twitter: <a href="https://twitter.com/weights_biases" rel="noopener noreferrer" target="_blank">https://twitter.com/weights_biases</a>&nbsp;</p><p>LinkedIn: <a href="https://www.linkedin.com/company/wandb" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/wandb&nbsp;</a></p><p>Join the Weights &amp; Biases Discord Server:</p><p><a href="https://discord.gg/CkZKRNnaf3" rel="noopener noreferrer" target="_blank">https://discord.gg/CkZKRNnaf3</a></p><p><br></p><p><br></p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></description><content:encoded><![CDATA[<p>In the latest episode of Gradient Dissent, we explore the innovative features and impact of LlamaIndex in AI data management with <a href="https://www.linkedin.com/in/jerry-liu-64390071/" rel="noopener noreferrer" target="_blank">Jerry Liu</a>, CEO of <a href="https://www.llamaindex.ai/" rel="noopener noreferrer" target="_blank">LlamaIndex</a>. Jerry shares insights on how LlamaIndex integrates diverse data formats with advanced AI technologies, addressing challenges in data retrieval, analysis, and conversational memory. We also delve into the future of AI-driven systems and LlamaIndex's role in this rapidly evolving field. 
This episode is a must-watch for anyone interested in AI, data science, and the future of technology.</p><p>Timestamps:</p><p>0:00 - Introduction&nbsp;</p><p>4:46 - Differentiating LlamaIndex in the AI framework ecosystem.</p><p>9:00 - Discussing data analysis, search, and retrieval applications.</p><p>14:17 - Exploring Retrieval Augmented Generation (RAG) and vector databases.</p><p>19:33 - Implementing and optimizing One Bot in Discord.</p><p>24:19 - Developing and evaluating datasets for AI systems.</p><p>28:00 - Community contributions and the growth of LlamaIndex.</p><p>34:34 - Discussing embedding models and the use of vector databases.</p><p>39:33 - Addressing AI model hallucinations and fine-tuning.</p><p>44:51 - Text extraction applications and agent-based systems in AI.</p><p>49:25 - Community contributions to LlamaIndex and managing refactors.</p><p>52:00 - Interactions with big tech's corpus and AI context length.</p><p>54:59 - Final thoughts on underrated aspects of ML and challenges in AI.</p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</p><p><strong>Connect with Jerry:</strong></p><p>https://twitter.com/jerryjliu0</p><p>https://www.linkedin.com/in/jerry-liu-64390071/</p><p><strong>Follow Weights &amp; Biases:</strong></p><p>YouTube: <a href="http://wandb.me/youtube" rel="noopener noreferrer" target="_blank">http://wandb.me/youtube</a></p><p>Twitter: <a href="https://twitter.com/weights_biases" rel="noopener noreferrer" target="_blank">https://twitter.com/weights_biases</a>&nbsp;</p><p>LinkedIn: <a href="https://www.linkedin.com/company/wandb" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/wandb&nbsp;</a></p><p>Join the Weights &amp; Biases Discord Server:</p><p><a href="https://discord.gg/CkZKRNnaf3" rel="noopener noreferrer" target="_blank">https://discord.gg/CkZKRNnaf3</a></p><p><br></p><p><br></p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">d9ef30f2-a720-4010-ba07-f301d8332928</guid><itunes:image href="https://artwork.captivate.fm/6be2962e-1f76-4ac0-8405-49e08ff73376/u-aC4KirE0JPe8NWnNnOzt2d.jpg"/><pubDate>Thu, 04 Jan 2024 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/29337ec4-4b86-4a99-bb3a-507dd4cb9c31/GD002-Final-Pod-Mix.mp3" length="48376400" type="audio/mpeg"/><itunes:duration>57:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Bridging AI and Science: The Impact of Machine Learning on Material Innovation with Joe Spisak of Meta</title><itunes:title>Bridging AI and Science: The Impact of Machine Learning on Material Innovation with Joe Spisak of Meta</itunes:title><description><![CDATA[<p>In the latest episode of Gradient Dissent, we hear from Joseph Spisak, Product Director, Generative AI @Meta, to explore the boundless impacts of AI and its expansive role in reshaping 
various sectors.&nbsp;</p><p>We delve into the intricacies of models like GPT and Llama2, their influence on user experiences, and AI's groundbreaking contributions to fields like biology, material science, and green hydrogen production through the Open Catalyst Project. The episode also examines AI's practical business applications, from document summarization to intelligent note-taking, addressing the ethical complexities of AI deployment.&nbsp;</p><p>We wrap up with a discussion on the significance of open-source AI development, community collaboration, and AI democratization.&nbsp;</p><p>Tune in for valuable insights into the expansive world of AI, relevant to developers, business leaders, and tech enthusiasts.</p><p>We discuss:</p><ul><li>0:00 Intro</li><li>0:32 Joe is Back at Meta</li><li>3:28 What Does Meta Get Out Of Putting Out LLMs?</li><li>8:24 Measuring The Quality Of LLMs</li><li>10:55 How Do You Pick The Sizes Of Models</li><li>16:45 Advice On Choosing Which Model To Start With</li><li>24:57 The Secret Sauce In The Training</li><li>26:17 What Is Being Worked On Now</li><li>33:00 The Safety Mechanisms In Llama 2</li><li>37:00 The Datasets Llama 2 Is Trained On</li><li>38:00 On Multilingual Capabilities &amp; Tone</li><li>43:30 On The Biggest Applications Of Llama 2</li><li>47:25 On Why The Best Teams Are Built By Users</li><li>54:01 The Culture Differences Of Meta vs Open Source</li><li>57:39 The AI Learning Alliance</li><li>1:01:34 Where To Learn About Machine Learning</li><li>1:05:10 Why AI For Science Is Under-rated</li><li>1:11:36 What Are The Biggest Issues With Real-World Applications</li></ul><br/><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></description><content:encoded><![CDATA[<p>In the latest episode of Gradient Dissent, we hear from Joseph Spisak, Product Director, Generative AI @Meta, to explore the boundless impacts of AI and its expansive role in reshaping various sectors.&nbsp;</p><p>We delve into the intricacies of models like GPT and Llama2, their influence on user experiences, and AI's groundbreaking contributions to fields like biology, material science, and green hydrogen production through the Open Catalyst Project. The episode also examines AI's practical business applications, from document summarization to intelligent note-taking, addressing the ethical complexities of AI deployment.&nbsp;</p><p>We wrap up with a discussion on the significance of open-source AI development, community collaboration, and AI democratization.&nbsp;</p><p>Tune in for valuable insights into the expansive world of AI, relevant to developers, business leaders, and tech enthusiasts.</p><p>We discuss:</p><ul><li>0:00 Intro</li><li>0:32 Joe is Back at Meta</li><li>3:28 What Does Meta Get Out Of Putting Out LLMs?</li><li>8:24 Measuring The Quality Of LLMs</li><li>10:55 How Do You Pick The Sizes Of Models</li><li>16:45 Advice On Choosing Which Model To Start With</li><li>24:57 The Secret Sauce In The Training</li><li>26:17 What Is Being Worked On Now</li><li>33:00 The Safety Mechanisms In Llama 2</li><li>37:00 The Datasets Llama 2 Is Trained On</li><li>38:00 On Multilingual Capabilities &amp; Tone</li><li>43:30 On The Biggest Applications Of Llama 2</li><li>47:25 On Why The Best Teams Are Built By Users</li><li>54:01 The Culture Differences Of Meta vs Open Source</li><li>57:39 The AI Learning Alliance</li><li>1:01:34 Where To Learn About Machine Learning</li><li>1:05:10 Why AI For Science Is Under-rated</li><li>1:11:36 What Are The Biggest Issues With Real-World 
Applications</li></ul><br/><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">ce42f6d2-2060-4e21-ad0b-2227a7abe58b</guid><itunes:image href="https://artwork.captivate.fm/8b305f90-eea7-4ff5-912b-7592586ee1e3/jQVWYFboPoPhJRgqK3Ji1BPg.jpg"/><pubDate>Thu, 07 Dec 2023 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/63d66ec7-2289-491b-868d-cadb40730d7f/GD002-Final-Pod.mp3" length="62780048" type="audio/mpeg"/><itunes:duration>01:14:44</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Unlocking the Power of Language Models in Enterprise: A Deep Dive with Chris Van Pelt</title><itunes:title>Unlocking the Power of Language Models in Enterprise: A Deep Dive with Chris Van Pelt</itunes:title><description><![CDATA[<p>In the premiere episode of Gradient Dissent Business, we're joined by Weights &amp; Biases co-founder Chris Van Pelt for a deep dive into the world of large language models like GPT-3.5 and GPT-4. Chris bridges his expertise as both a tech founder and AI expert, offering key strategies for startups seeking to connect with early users, and for enterprises experimenting with AI. He highlights the melding of AI and traditional web development, sharing his insights on product evolution, leadership, and the power of customer conversations—even for the most introverted founders. He shares how personal development and authentic co-founder relationships enrich business dynamics. 
Join us for a compelling episode brimming with actionable advice for those looking to innovate with language models, all while managing the inherent complexities. Don't miss Chris Van Pelt's invaluable take on the future of AI in this thought-provoking installment of Gradient Dissent Business.</p><p>We discuss:</p><ul><li>0:00 - Intro</li><li>5:59 - Impactful relationships in Chris's life</li><li>13:15 - Advice for finding co-founders</li><li>16:25 - Chris's fascination with challenging problems</li><li>22:30 - Tech stack for AI labs</li><li>30:50 - Impactful capabilities of AI models</li><li>36:24 - How this AI era is different</li><li>47:36 - Advising large enterprises on language model integration</li><li>51:18 - Using language models for business intelligence and automation</li><li>52:13 - Closing thoughts and appreciation</li></ul><br/><p>Thanks for listening to the Gradient Dissent Business podcast, with hosts Lavanya Shukla and Caryn Marooney, brought to you by Weights &amp; Biases. Be sure to click the subscribe button below to keep your finger on the pulse of this fast-moving space and hear from other amazing guests.</p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></description><content:encoded><![CDATA[<p>In the premiere episode of Gradient Dissent Business, we're joined by Weights &amp; Biases co-founder Chris Van Pelt for a deep dive into the world of large language models like GPT-3.5 and GPT-4. Chris bridges his expertise as both a tech founder and AI expert, offering key strategies for startups seeking to connect with early users, and for enterprises experimenting with AI. He highlights the melding of AI and traditional web development, sharing his insights on product evolution, leadership, and the power of customer conversations—even for the most introverted founders. He shares how personal development and authentic co-founder relationships enrich business dynamics. 
Join us for a compelling episode brimming with actionable advice for those looking to innovate with language models, all while managing the inherent complexities. Don't miss Chris Van Pelt's invaluable take on the future of AI in this thought-provoking installment of Gradient Dissent Business.</p><p>We discuss:</p><ul><li>0:00 - Intro</li><li>5:59 - Impactful relationships in Chris's life</li><li>13:15 - Advice for finding co-founders</li><li>16:25 - Chris's fascination with challenging problems</li><li>22:30 - Tech stack for AI labs</li><li>30:50 - Impactful capabilities of AI models</li><li>36:24 - How this AI era is different</li><li>47:36 - Advising large enterprises on language model integration</li><li>51:18 - Using language models for business intelligence and automation</li><li>52:13 - Closing thoughts and appreciation</li></ul><br/><p>Thanks for listening to the Gradient Dissent Business podcast, with hosts Lavanya Shukla and Caryn Marooney, brought to you by Weights &amp; Biases. 
Be sure to click the subscribe button below to keep your finger on the pulse of this fast-moving space and hear from other amazing guests.</p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">df57c140-80d5-434d-91a7-d77d43a3a23f</guid><itunes:image href="https://artwork.captivate.fm/489953d2-c552-46b2-821f-9d51d4cd2544/fxYIBR8SACzHb38e9QVuKVHo.jpg"/><pubDate>Thu, 16 Nov 2023 12:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/8dc7ee63-a882-4193-a645-7d50e9921f8f/GDB001-Final-Podcast-File.mp3" length="100643935" type="audio/mpeg"/><itunes:duration>52:25</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Providing Greater Access to LLMs with Brandon Duderstadt, Co-Founder and CEO of Nomic AI</title><itunes:title>Providing Greater Access to LLMs with Brandon Duderstadt, Co-Founder and CEO of Nomic AI</itunes:title><description><![CDATA[<p>On this episode, we’re joined by <a href="https://www.linkedin.com/in/brandon-duderstadt-a3269112a/" rel="noopener noreferrer" target="_blank">Brandon Duderstadt</a>, Co-Founder and CEO of <a href="https://www.linkedin.com/company/nomic-ai/" rel="noopener noreferrer" target="_blank">Nomic AI</a>. 
Both of Nomic AI’s products, Atlas and GPT4All, aim to improve the explainability and accessibility of AI.</p><p>We discuss:</p><p>- (0:55) What GPT4All is and its value proposition.</p><p>- (6:56) The advantages of using smaller LLMs for specific tasks.&nbsp;</p><p>- (9:42) Brandon’s thoughts on the cost of training LLMs.&nbsp;</p><p>- (10:50) Details about the current state of fine-tuning LLMs.&nbsp;</p><p>- (12:20) What quantization is and what it does.&nbsp;</p><p>- (21:16) What Atlas is and what it allows you to do.</p><p>- (27:30) Training code models versus language models.</p><p>- (32:19) Details around evaluating different models.</p><p>- (38:34) The opportunity for smaller companies to build open-source models.&nbsp;</p><p>- (42:00) Prompt chaining versus fine-tuning models.</p><p>Resources mentioned:</p><p><a href="https://www.linkedin.com/in/brandon-duderstadt-a3269112a/" rel="noopener noreferrer" target="_blank">Brandon Duderstadt</a> - <a href="https://www.linkedin.com/in/brandon-duderstadt-a3269112a/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/brandon-duderstadt-a3269112a/</a></p><p><a href="https://www.linkedin.com/company/nomic-ai/" rel="noopener noreferrer" target="_blank">Nomic AI</a> - <a href="https://www.linkedin.com/company/nomic-ai/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/nomic-ai/</a></p><p><a href="https://home.nomic.ai/" rel="noopener noreferrer" target="_blank">Nomic AI Website</a> - <a href="https://home.nomic.ai/" rel="noopener noreferrer" target="_blank">https://home.nomic.ai/</a></p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></description><content:encoded><![CDATA[<p>On this episode, we’re joined by <a href="https://www.linkedin.com/in/brandon-duderstadt-a3269112a/" rel="noopener noreferrer" target="_blank">Brandon Duderstadt,</a> Co-Founder and CEO of <a href="https://www.linkedin.com/company/nomic-ai/" rel="noopener noreferrer" target="_blank">Nomic AI</a>. Both of Nomic AI’s products, Atlas and GPT4All, aim to improve the explainability and accessibility of AI.</p><p>We discuss:</p><p>- (0:55) What GPT4All is and its value proposition.</p><p>- (6:56) The advantages of using smaller LLMs for specific tasks.&nbsp;</p><p>- (9:42) Brandon’s thoughts on the cost of training LLMs.&nbsp;</p><p>- (10:50) Details about the current state of fine-tuning LLMs.&nbsp;</p><p>- (12:20) What quantization is and what it does.&nbsp;</p><p>- (21:16) What Atlas is and what it allows you to do.</p><p>- (27:30) Training code models versus language models.</p><p>- (32:19) Details around evaluating different models.</p><p>- (38:34) The opportunity for smaller companies to build open-source models.&nbsp;</p><p>- (42:00) Prompt chaining versus fine-tuning models.</p><p>Resources mentioned:</p><p><a href="https://www.linkedin.com/in/brandon-duderstadt-a3269112a/" rel="noopener noreferrer" target="_blank">Brandon Duderstadt</a> - <a href="https://www.linkedin.com/in/brandon-duderstadt-a3269112a/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/in/brandon-duderstadt-a3269112a/</a></p><p><a href="https://www.linkedin.com/company/nomic-ai/" rel="noopener noreferrer" target="_blank">Nomic AI</a> - <a href="https://www.linkedin.com/company/nomic-ai/" rel="noopener noreferrer" target="_blank">https://www.linkedin.com/company/nomic-ai/</a></p><p><a href="https://home.nomic.ai/" rel="noopener noreferrer" target="_blank">Nomic AI Website</a> - <a href="https://home.nomic.ai/" 
rel="noopener noreferrer" target="_blank">https://home.nomic.ai/</a></p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">3de6df71-8c08-466f-a1c8-52a52de250be</guid><itunes:image href="https://artwork.captivate.fm/298564fe-9827-4a4a-9be4-ef97fef485b6/uVhd_D8f9yYpSPNGtkke-dMy.jpg"/><pubDate>Thu, 27 Jul 2023 01:18:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/736c22d0-5df0-4621-bd01-f47c7ae49666/WEIGHTS-Brandon-Duderstadt-V1.mp3" length="58954919" type="audio/mpeg"/><itunes:duration>01:01:25</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Exploring PyTorch and Open-Source Communities with Soumith Chintala, VP/Fellow of Meta, Co-Creator of PyTorch</title><itunes:title>Exploring PyTorch and Open-Source Communities with Soumith Chintala, VP/Fellow of Meta, Co-Creator of PyTorch</itunes:title><description><![CDATA[<p>On this episode, we’re joined by <a href="https://www.linkedin.com/in/soumith/" rel="noopener noreferrer" target="_blank">Soumith Chintala</a>, VP/Fellow of <a href="https://www.linkedin.com/company/meta/" rel="noopener noreferrer" target="_blank">Meta</a> and Co-Creator of PyTorch. 
Soumith and his colleagues’ open-source framework impacted both the development process and the end-user experience of what would become PyTorch.</p><p>We discuss:</p><p>- The history of PyTorch’s development and TensorFlow’s impact on development decisions.</p><p>- How a symbolic execution model affects the implementation speed of an ML compiler.</p><p>- The strengths of different programming languages in various development stages.</p><p>- The importance of customer engagement as a measure of success instead of hard metrics.</p><p>- Why community-guided innovation offers an effective development roadmap.</p><p>- How PyTorch’s open-source nature cultivates an efficient development ecosystem.</p><p>- The role of community building in consolidating assets for more creative innovation.</p><p>- How to protect community values in an open-source development environment.</p><p>- The value of an intrinsic organizational motivation structure.</p><p>- The ongoing debate between open-source and closed-source products, especially as it relates to AI and machine learning.</p><p><br></p><p><br></p><p><strong>Resources:</strong></p><p><strong>-</strong> Soumith Chintala</p><p>https://www.linkedin.com/in/soumith/</p><p>- Meta | LinkedIn</p><p>https://www.linkedin.com/company/meta/</p><p>- Meta | Website</p><p>https://about.meta.com/</p><p>- Pytorch</p><p>https://pytorch.org/</p><p><br></p><p><br></p><p><br></p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</p><p><br></p><p><br></p><p><br></p><p><br></p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></description><content:encoded><![CDATA[<p>On this episode, we’re joined by <a href="https://www.linkedin.com/in/soumith/" rel="noopener noreferrer" target="_blank">Soumith Chintala</a>, VP/Fellow of <a href="https://www.linkedin.com/company/meta/" rel="noopener noreferrer" target="_blank">Meta</a> and Co-Creator of PyTorch. Soumith and his colleagues’ open-source framework impacted both the development process and the end-user experience of what would become PyTorch.</p><p>We discuss:</p><p>- The history of PyTorch’s development and TensorFlow’s impact on development decisions.</p><p>- How a symbolic execution model affects the implementation speed of an ML compiler.</p><p>- The strengths of different programming languages in various development stages.</p><p>- The importance of customer engagement as a measure of success instead of hard metrics.</p><p>- Why community-guided innovation offers an effective development roadmap.</p><p>- How PyTorch’s open-source nature cultivates an efficient development ecosystem.</p><p>- The role of community building in consolidating assets for more creative innovation.</p><p>- How to protect community values in an open-source development environment.</p><p>- The value of an intrinsic organizational motivation structure.</p><p>- The ongoing debate between open-source and closed-source products, especially as it relates to AI and machine learning.</p><p><br></p><p><br></p><p><strong>Resources:</strong></p><p><strong>-</strong> Soumith Chintala</p><p>https://www.linkedin.com/in/soumith/</p><p>- Meta | LinkedIn</p><p>https://www.linkedin.com/company/meta/</p><p>- Meta | Website</p><p>https://about.meta.com/</p><p>- Pytorch</p><p>https://pytorch.org/</p><p><br></p><p><br></p><p><br></p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights 
&amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</p><p><br></p><p><br></p><p><br></p><p><br></p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">3d2f7298-7181-4244-ba39-8a888de0e5f4</guid><itunes:image href="https://artwork.captivate.fm/20a2ddd6-b525-4d5f-8e08-10a6b3610cb5/kNbbtI1iqmtfarARhwcOxeM6.jpg"/><pubDate>Thu, 13 Jul 2023 13:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9a806bf5-2674-4de4-b8e5-be6b87d7e958/WEIGHTS-Soumith-Chintala-V1.mp3" length="65841631" type="audio/mpeg"/><itunes:duration>01:08:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Advanced AI Accelerators and Processors with Andrew Feldman of Cerebras Systems</title><itunes:title>Advanced AI Accelerators and Processors with Andrew Feldman of Cerebras Systems</itunes:title><description><![CDATA[<p>On this episode, we’re joined by <a href="https://www.linkedin.com/in/andrewdfeldman/" rel="noopener noreferrer" target="_blank">Andrew Feldman</a>, Founder and CEO of <a href="https://www.linkedin.com/company/cerebras-systems/" rel="noopener noreferrer" target="_blank">Cerebras Systems</a>. 
Andrew and the Cerebras team are responsible for building the largest-ever computer chip and the fastest AI-specific processor in the industry.</p><p>We discuss:</p><p>- The advantages of using large chips for AI work.</p><p>- Cerebras Systems’ process for building chips optimized for AI.</p><p>- Why traditional GPUs aren’t the optimal machines for AI work.</p><p>- Why efficiently distributing computing resources is a significant challenge for AI work.</p><p>- How much faster Cerebras Systems’ machines are than other processors on the market.</p><p>- Reasons why some ML-specific chip companies fail and what Cerebras does differently.</p><p>- Unique challenges for chip makers and hardware companies.</p><p>- Cooling and heat-transfer techniques for Cerebras machines.</p><p>- How Cerebras approaches building chips that will fit the needs of customers for years to come.</p><p>- Why the strategic vision for what data to collect for ML needs more discussion.</p><p><strong>Resources:</strong></p><p><strong>Andrew Feldman - </strong>https://www.linkedin.com/in/andrewdfeldman/</p><p>Cerebras Systems - https://www.linkedin.com/company/cerebras-systems/</p><p>Cerebras Systems | Website - <a href="https://www.cerebras.net/" rel="noopener noreferrer" target="_blank">https://www.cerebras.net/</a></p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></description><content:encoded><![CDATA[<p>On this episode, we’re joined by <a href="https://www.linkedin.com/in/andrewdfeldman/" rel="noopener noreferrer" target="_blank">Andrew Feldman</a>, Founder and CEO of <a href="https://www.linkedin.com/company/cerebras-systems/" rel="noopener noreferrer" target="_blank">Cerebras Systems</a>. 
Andrew and the Cerebras team are responsible for building the largest-ever computer chip and the fastest AI-specific processor in the industry.</p><p>We discuss:</p><p>- The advantages of using large chips for AI work.</p><p>- Cerebras Systems’ process for building chips optimized for AI.</p><p>- Why traditional GPUs aren’t the optimal machines for AI work.</p><p>- Why efficiently distributing computing resources is a significant challenge for AI work.</p><p>- How much faster Cerebras Systems’ machines are than other processors on the market.</p><p>- Reasons why some ML-specific chip companies fail and what Cerebras does differently.</p><p>- Unique challenges for chip makers and hardware companies.</p><p>- Cooling and heat-transfer techniques for Cerebras machines.</p><p>- How Cerebras approaches building chips that will fit the needs of customers for years to come.</p><p>- Why the strategic vision for what data to collect for ML needs more discussion.</p><p><strong>Resources:</strong></p><p><strong>Andrew Feldman - </strong>https://www.linkedin.com/in/andrewdfeldman/</p><p>Cerebras Systems - https://www.linkedin.com/company/cerebras-systems/</p><p>Cerebras Systems | Website - <a href="https://www.cerebras.net/" rel="noopener noreferrer" target="_blank">https://www.cerebras.net/</a></p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">f8b8e5e4-a4bb-41c6-8478-9b6492aab039</guid><itunes:image href="https://artwork.captivate.fm/290fccfd-6592-4763-99b4-0d24062da904/06B8eLtZ0bYSM_C9CgX_ejGP.jpg"/><pubDate>Thu, 22 Jun 2023 01:15:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/989fd2c7-3fa0-48c1-8b4e-9ecef1dbc59a/WEIGHTS-Andrew-Feldman-V3.mp3" length="115515671" type="audio/mpeg"/><itunes:duration>01:00:10</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Enabling LLM-Powered Applications with Harrison Chase of LangChain</title><itunes:title>Enabling LLM-Powered Applications with Harrison Chase of LangChain</itunes:title><description><![CDATA[<p>On this episode, we’re joined by <a href="https://www.linkedin.com/in/harrison-chase-961287118/" rel="noopener noreferrer" target="_blank">Harrison Chase</a>, Co-Founder and CEO of <a href="https://www.linkedin.com/company/langchain/" rel="noopener noreferrer" target="_blank">LangChain</a>. 
Harrison and his team at LangChain are on a mission to make the process of creating applications powered by LLMs as easy as possible.</p><p>We discuss:</p><p>- What LangChain is and examples of how it works.&nbsp;</p><p>- Why LangChain has gained so much attention.&nbsp;</p><p>- When LangChain started and what sparked its growth.&nbsp;</p><p>- Harrison’s approach to community-building around LangChain.&nbsp;</p><p>- Real-world use cases for LangChain.</p><p>- What parts of LangChain Harrison is proud of and which parts can be improved.</p><p>- Details around evaluating effectiveness in the ML space.</p><p>- Harrison's opinion on fine-tuning LLMs.</p><p>- The importance of detailed prompt engineering.</p><p>- Predictions for the future of LLM providers.</p><p><br></p><p><strong>Resources:</strong></p><p><br></p><p>Harrison Chase - https://www.linkedin.com/in/harrison-chase-961287118/</p><p>LangChain | LinkedIn - https://www.linkedin.com/company/langchain/</p><p>LangChain | Website - https://docs.langchain.com/docs/</p><p><br></p><p><br></p><p><br></p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</p><p><br></p><p><br></p><p><br></p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></description><content:encoded><![CDATA[<p>On this episode, we’re joined by <a href="https://www.linkedin.com/in/harrison-chase-961287118/" rel="noopener noreferrer" target="_blank">Harrison Chase</a>, Co-Founder and CEO of <a href="https://www.linkedin.com/company/langchain/" rel="noopener noreferrer" target="_blank">LangChain</a>. 
Harrison and his team at LangChain are on a mission to make the process of creating applications powered by LLMs as easy as possible.</p><p>We discuss:</p><p>- What LangChain is and examples of how it works.&nbsp;</p><p>- Why LangChain has gained so much attention.&nbsp;</p><p>- When LangChain started and what sparked its growth.&nbsp;</p><p>- Harrison’s approach to community-building around LangChain.&nbsp;</p><p>- Real-world use cases for LangChain.</p><p>- What parts of LangChain Harrison is proud of and which parts can be improved.</p><p>- Details around evaluating effectiveness in the ML space.</p><p>- Harrison's opinion on fine-tuning LLMs.</p><p>- The importance of detailed prompt engineering.</p><p>- Predictions for the future of LLM providers.</p><p><br></p><p><strong>Resources:</strong></p><p><br></p><p>Harrison Chase - https://www.linkedin.com/in/harrison-chase-961287118/</p><p>LangChain | LinkedIn - https://www.linkedin.com/company/langchain/</p><p>LangChain | Website - https://docs.langchain.com/docs/</p><p><br></p><p><br></p><p><br></p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</p><p><br></p><p><br></p><p><br></p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">d486f8e2-6ed3-44ca-a25a-26937ac1cd97</guid><itunes:image href="https://artwork.captivate.fm/e3b0f5a4-3bb5-48be-8a95-6f61e3d76bd2/kprsciP4d_t-TguAiGY1d5_1.jpg"/><pubDate>Thu, 01 Jun 2023 01:20:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f119eaf8-5c3b-44d9-8db8-cfe89a96e71a/WEIGHTS-Harrison-Chase-V2.mp3" length="49831228" type="audio/mpeg"/><itunes:duration>51:54</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Deploying Autonomous Mobile Robots with Jean Marc Alkazzi at idealworks</title><itunes:title>Deploying Autonomous Mobile Robots with Jean Marc Alkazzi at idealworks</itunes:title><description><![CDATA[<p>On this episode, we’re joined by <a href="https://www.linkedin.com/in/jeanmarcjeanazzi/" rel="noopener noreferrer" target="_blank">Jean Marc Alkazzi</a>, Applied AI at <a href="https://www.linkedin.com/company/idealworks-gmbh/" rel="noopener noreferrer" target="_blank">idealworks</a>. Jean focuses his attention on applied AI, leveraging the use of autonomous mobile robots (AMRs) to improve efficiency within factories and more.</p><p>We discuss:</p><p>- Use cases for autonomous mobile robots (AMRs) and how to manage a fleet of them.&nbsp;</p><p>- How AMRs interact with humans working in warehouses.</p><p>- The challenges of building and deploying autonomous robots.</p><p>- Computer vision vs. 
other types of localization technology for robots.</p><p>- The purpose and types of simulation environments for robotic testing.</p><p>- The importance of aligning a robotic fleet’s workflow with concrete business objectives.</p><p>- What the update process looks like for robots.</p><p>- The importance of avoiding your own biases when developing and testing AMRs.</p><p>- The challenges associated with troubleshooting ML systems.</p><p><strong>Resources:</strong>&nbsp;</p><p>Jean Marc Alkazzi - https://www.linkedin.com/in/jeanmarcjeanazzi/</p><p>idealworks | LinkedIn - https://www.linkedin.com/company/idealworks-gmbh/</p><p>idealworks | Website - https://idealworks.com/</p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></description><content:encoded><![CDATA[<p>On this episode, we’re joined by <a href="https://www.linkedin.com/in/jeanmarcjeanazzi/" rel="noopener noreferrer" target="_blank">Jean Marc Alkazzi</a>, Applied AI at <a href="https://www.linkedin.com/company/idealworks-gmbh/" rel="noopener noreferrer" target="_blank">idealworks</a>. Jean focuses his attention on applied AI, leveraging the use of autonomous mobile robots (AMRs) to improve efficiency within factories and more.</p><p>We discuss:</p><p>- Use cases for autonomous mobile robots (AMRs) and how to manage a fleet of them.&nbsp;</p><p>- How AMRs interact with humans working in warehouses.</p><p>- The challenges of building and deploying autonomous robots.</p><p>- Computer vision vs. 
other types of localization technology for robots.</p><p>- The purpose and types of simulation environments for robotic testing.</p><p>- The importance of aligning a robotic fleet’s workflow with concrete business objectives.</p><p>- What the update process looks like for robots.</p><p>- The importance of avoiding your own biases when developing and testing AMRs.</p><p>- The challenges associated with troubleshooting ML systems.</p><p><strong>Resources:</strong>&nbsp;</p><p>Jean Marc Alkazzi - https://www.linkedin.com/in/jeanmarcjeanazzi/</p><p>idealworks | LinkedIn - https://www.linkedin.com/company/idealworks-gmbh/</p><p>idealworks | Website - https://idealworks.com/</p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">84a9bb16-fe75-4e87-9a4d-c4c51a0c01bd</guid><itunes:image href="https://artwork.captivate.fm/241c922c-bf15-41d9-8f76-e789f5f9b37d/G6Susk8k4n27yDZYEp9dToIv.jpg"/><pubDate>Thu, 18 May 2023 10:45:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f2d5dd94-22ad-41f1-aa1f-01b41cbd1752/WEIGHTS-Jean-Marc-Alkazzi-V5.mp3" length="111530007" type="audio/mpeg"/><itunes:duration>58:05</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>How EleutherAI Trains and Releases LLMs: Interview with Stella Biderman</title><itunes:title>How EleutherAI Trains and Releases LLMs: Interview with Stella Biderman</itunes:title><description><![CDATA[<p>On this episode, we’re joined by <a href="https://www.linkedin.com/in/stellabiderman/" rel="noopener noreferrer" target="_blank">Stella Biderman</a>, Executive Director at <a 
href="https://www.linkedin.com/company/eleutherai/" rel="noopener noreferrer" target="_blank">EleutherAI</a> and Lead Scientist - Mathematician at Booz Allen Hamilton.</p><p>EleutherAI is a grassroots collective that enables open-source AI research and focuses on the development and interpretability of large language models (LLMs).</p><p>We discuss:</p><p>- How EleutherAI got its start and where it's headed.</p><p>- The similarities and differences between various LLMs.</p><p>- How to decide which model to use for your desired outcome.</p><p>- The benefits and challenges of reinforcement learning from human feedback.</p><p>- Details around pre-training and fine-tuning LLMs.</p><p>- Which types of GPUs are best when training LLMs.</p><p>- What separates EleutherAI from other companies training LLMs.</p><p>- Details around mechanistic interpretability.</p><p>- Why understanding what and how LLMs memorize is important.</p><p>- The importance of giving researchers and the public access to LLMs.</p><p>Stella Biderman - https://www.linkedin.com/in/stellabiderman/</p><p>EleutherAI - https://www.linkedin.com/company/eleutherai/</p><p><strong>Resources:</strong></p><p>- <a href="https://www.eleuther.ai/" rel="noopener noreferrer" target="_blank">https://www.eleuther.ai/</a></p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</p><p><br></p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></description><content:encoded><![CDATA[<p>On this episode, we’re joined by <a href="https://www.linkedin.com/in/stellabiderman/" rel="noopener noreferrer" target="_blank">Stella Biderman</a>, Executive Director at <a href="https://www.linkedin.com/company/eleutherai/" rel="noopener noreferrer" target="_blank">EleutherAI</a> and Lead Scientist - Mathematician at Booz Allen Hamilton.</p><p>EleutherAI is a grassroots collective that enables open-source AI research and focuses on the development and interpretability of large language models (LLMs).</p><p>We discuss:</p><p>- How EleutherAI got its start and where it's headed.</p><p>- The similarities and differences between various LLMs.</p><p>- How to decide which model to use for your desired outcome.</p><p>- The benefits and challenges of reinforcement learning from human feedback.</p><p>- Details around pre-training and fine-tuning LLMs.</p><p>- Which types of GPUs are best when training LLMs.</p><p>- What separates EleutherAI from other companies training LLMs.</p><p>- Details around mechanistic interpretability.</p><p>- Why understanding what and how LLMs memorize is important.</p><p>- The importance of giving researchers and the public access to LLMs.</p><p>Stella Biderman - https://www.linkedin.com/in/stellabiderman/</p><p>EleutherAI - https://www.linkedin.com/company/eleutherai/</p><p><strong>Resources:</strong></p><p>- <a href="https://www.eleuther.ai/" rel="noopener noreferrer" target="_blank">https://www.eleuther.ai/</a></p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</p><p><br></p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">1f33b906-03a8-47cf-81cb-068497d4419a</guid><itunes:image href="https://artwork.captivate.fm/c5757332-4c6f-48d1-84a8-bf373780f315/VePl-eOiftslnk0PyotqUK40.jpg"/><pubDate>Thu, 04 May 2023 00:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/7e8f1e30-7ee2-4c22-9681-d213ac27087f/WEIGHTS-Stella-Biderman-V3.mp3" length="109945940" type="audio/mpeg"/><itunes:duration>57:16</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Scaling LLMs and Accelerating Adoption with Aidan Gomez at Cohere</title><itunes:title>Scaling LLMs and Accelerating Adoption with Aidan Gomez at Cohere</itunes:title><description><![CDATA[<p>On this episode, we’re joined by <a href="https://www.linkedin.com/in/aidangomez/" rel="noopener noreferrer" target="_blank">Aidan Gomez</a>, Co-Founder and CEO at <a href="https://www.linkedin.com/company/cohere-ai/" rel="noopener noreferrer" target="_blank">Cohere</a>. 
Cohere develops and releases a range of innovative AI-powered tools and solutions for a variety of NLP use cases.</p><p>We discuss:</p><p>- What “attention” means in the context of ML.</p><p>- Aidan’s role in the “Attention Is All You Need” paper.</p><p>- What state-space models (SSMs) are, and how they could be an alternative to transformers.&nbsp;</p><p>- What it means for an ML architecture to saturate compute.</p><p>- Details around data constraints for when LLMs scale.</p><p>- Challenges of measuring LLM performance.</p><p>- How Cohere is positioned within the LLM development space.</p><p>- Insights around scaling down an LLM into a more domain-specific one.</p><p>- Concerns around synthetic content and AI changing public discourse.</p><p>- The importance of raising money at healthy milestones for AI development.</p><p>Aidan Gomez - https://www.linkedin.com/in/aidangomez/</p><p>Cohere - https://www.linkedin.com/company/cohere-ai/</p><p><br></p><p><br></p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</p><p><br></p><p><strong>Resources:</strong></p><p>- <a href="https://cohere.ai/" rel="noopener noreferrer" target="_blank">https://cohere.ai/</a></p><p>- <a href="https://research.google/pubs/pub46201/" rel="noopener noreferrer" target="_blank">“Attention Is All You Need”</a></p><p><br></p><p><br></p><p><br></p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></description><content:encoded><![CDATA[<p>On this episode, we’re joined by <a href="https://www.linkedin.com/in/aidangomez/" rel="noopener noreferrer" target="_blank">Aidan Gomez</a>, Co-Founder and CEO at <a href="https://www.linkedin.com/company/cohere-ai/" rel="noopener noreferrer" target="_blank">Cohere</a>. 
Cohere develops and releases a range of innovative AI-powered tools and solutions for a variety of NLP use cases.</p><p>We discuss:</p><p>- What “attention” means in the context of ML.</p><p>- Aidan’s role in the “Attention Is All You Need” paper.</p><p>- What state-space models (SSMs) are, and how they could be an alternative to transformers.&nbsp;</p><p>- What it means for an ML architecture to saturate compute.</p><p>- Details around data constraints for when LLMs scale.</p><p>- Challenges of measuring LLM performance.</p><p>- How Cohere is positioned within the LLM development space.</p><p>- Insights around scaling down an LLM into a more domain-specific one.</p><p>- Concerns around synthetic content and AI changing public discourse.</p><p>- The importance of raising money at healthy milestones for AI development.</p><p>Aidan Gomez - https://www.linkedin.com/in/aidangomez/</p><p>Cohere - https://www.linkedin.com/company/cohere-ai/</p><p><br></p><p><br></p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</p><p><br></p><p><strong>Resources:</strong></p><p>- <a href="https://cohere.ai/" rel="noopener noreferrer" target="_blank">https://cohere.ai/</a></p><p>- <a href="https://research.google/pubs/pub46201/" rel="noopener noreferrer" target="_blank">“Attention Is All You Need”</a></p><p><br></p><p><br></p><p><br></p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">df1a4ed4-8ed9-4ed1-b032-a0cc05925e5b</guid><itunes:image href="https://artwork.captivate.fm/dc95fed8-fa4d-450e-991b-5d274b21f294/JO5BwQtoVMJiBFlN_C4He1AN.jpg"/><pubDate>Thu, 20 Apr 2023 00:05:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d05cc323-31a7-4792-ba1e-894ad88b9438/WEIGHTS-Aidan-Gomez-V3.mp3" length="49455536" type="audio/mpeg"/><itunes:duration>51:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Neural Network Pruning and Training with Jonathan Frankle at MosaicML</title><itunes:title>Neural Network Pruning and Training with Jonathan Frankle at MosaicML</itunes:title><description><![CDATA[<p><a href="https://www.linkedin.com/in/jfrankle/" rel="noopener noreferrer" target="_blank">Jonathan Frankle</a>, Chief Scientist at <a href="https://www.linkedin.com/company/mosaicml/" rel="noopener noreferrer" target="_blank">MosaicML</a> and Assistant Professor of Computer Science at Harvard University, joins us on this episode. With comprehensive infrastructure and software tools, MosaicML aims to help businesses train complex machine-learning models using their own proprietary data.</p><p>We discuss:</p><p>- Details of Jonathan’s Ph.D. 
dissertation which explores his “Lottery Ticket Hypothesis.”</p><p>- The role of neural network pruning and how it impacts the performance of ML models.</p><p>- Why transformers will be the go-to way to train NLP models for the foreseeable future.</p><p>- Why the process of speeding up neural net learning is both scientific and artisanal.&nbsp;</p><p>- What MosaicML does, and how it approaches working with clients.</p><p>- The challenges for developing AGI.</p><p>- Details around ML training policy and ethics.</p><p>- Why data brings the magic to customized ML models.</p><p>- The many use cases for companies looking to build customized AI models.</p><p>Jonathan Frankle - https://www.linkedin.com/in/jfrankle/</p><p><strong>Resources:</strong></p><p>- <a href="https://mosaicml.com/" rel="noopener noreferrer" target="_blank">https://mosaicml.com/</a></p><p>- <a href="https://openreview.net/forum?id=rJl-b3RcF7" rel="noopener noreferrer" target="_blank">The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks</a></p><p><br></p><p><br></p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</p><p><br></p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></description><content:encoded><![CDATA[<p><a href="https://www.linkedin.com/in/jfrankle/" rel="noopener noreferrer" target="_blank">Jonathan Frankle</a>, Chief Scientist at <a href="https://www.linkedin.com/company/mosaicml/" rel="noopener noreferrer" target="_blank">MosaicML</a> and Assistant Professor of Computer Science at Harvard University, joins us on this episode. With comprehensive infrastructure and software tools, MosaicML aims to help businesses train complex machine-learning models using their own proprietary data.</p><p>We discuss:</p><p>- Details of Jonathan’s Ph.D. 
dissertation which explores his “Lottery Ticket Hypothesis.”</p><p>- The role of neural network pruning and how it impacts the performance of ML models.</p><p>- Why transformers will be the go-to way to train NLP models for the foreseeable future.</p><p>- Why the process of speeding up neural net learning is both scientific and artisanal.&nbsp;</p><p>- What MosaicML does, and how it approaches working with clients.</p><p>- The challenges for developing AGI.</p><p>- Details around ML training policy and ethics.</p><p>- Why data brings the magic to customized ML models.</p><p>- The many use cases for companies looking to build customized AI models.</p><p>Jonathan Frankle - https://www.linkedin.com/in/jfrankle/</p><p><strong>Resources:</strong></p><p>- <a href="https://mosaicml.com/" rel="noopener noreferrer" target="_blank">https://mosaicml.com/</a></p><p>- <a href="https://openreview.net/forum?id=rJl-b3RcF7" rel="noopener noreferrer" target="_blank">The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks</a></p><p><br></p><p><br></p><p>Thanks for listening to the Gradient Dissent podcast, brought to you by Weights &amp; Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</p><p><br></p><p>#OCR #DeepLearning #AI #Modeling #ML</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">90e64099-258d-4cbd-a7e7-68a89910acd7</guid><itunes:image href="https://artwork.captivate.fm/a363072e-1709-4bf1-b73c-dc1a1ec17139/2YkCeETkHdvsRRK5z3lxxPN1.jpg"/><pubDate>Tue, 04 Apr 2023 00:23:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/a616f9bd-2927-4712-945d-ca3ff22e73b8/WEIGHTS-Jonathan-Frankle-V2.mp3" length="59523739" type="audio/mpeg"/><itunes:duration>01:02:00</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Shreya Shankar — Operationalizing Machine Learning</title><itunes:title>Shreya Shankar — Operationalizing Machine Learning</itunes:title><description><![CDATA[<p><strong>About This Episode</strong></p><p>Shreya Shankar is a computer scientist, PhD student in databases at UC Berkeley, and co-author of "Operationalizing Machine Learning: An Interview Study", an ethnographic interview study with 18 machine learning engineers across a variety of industries on their experience deploying and maintaining ML pipelines in production.</p><p>Shreya explains the high-level findings of "Operationalizing Machine Learning"; variables that indicate a successful deployment (velocity, validation, and versioning), common pain points, and a grouping of the MLOps tool stack into four layers. 
Shreya and Lukas also discuss examples of data challenges in production, Jupyter Notebooks, and reproducibility.</p><p>Show notes (transcript and links): http://wandb.me/gd-shreya</p><p>---</p><p>💬 *Host:* Lukas Biewald</p><p>---</p><p>*Subscribe and listen to Gradient Dissent today!*</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></description><content:encoded><![CDATA[<p><strong>About This Episode</strong></p><p>Shreya Shankar is a computer scientist, PhD student in databases at UC Berkeley, and co-author of "Operationalizing Machine Learning: An Interview Study", an ethnographic interview study with 18 machine learning engineers across a variety of industries on their experience deploying and maintaining ML pipelines in production.</p><p>Shreya explains the high-level findings of "Operationalizing Machine Learning"; variables that indicate a successful deployment (velocity, validation, and versioning), common pain points, and a grouping of the MLOps tool stack into four layers. 
Shreya and Lukas also discuss examples of data challenges in production, Jupyter Notebooks, and reproducibility.</p><p>Show notes (transcript and links): http://wandb.me/gd-shreya</p><p>---</p><p>💬 *Host:* Lukas Biewald</p><p>---</p><p>*Subscribe and listen to Gradient Dissent today!*</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></content:encoded><link><![CDATA[https://wandb.ai/wandb_fc/gradient-dissent/reports/Shreya-Shankar-Operationalizing-Machine-Learning--VmlldzozNjg4MzUz]]></link><guid isPermaLink="false">5eafdc45-feb6-44a6-81c6-4521b03cdf7e</guid><itunes:image href="https://artwork.captivate.fm/88aa2dfa-0133-4932-bb49-43d61a4cf791/jMjJ1irBdRv0lraDm3J6k9Xp.jpg"/><pubDate>Thu, 02 Mar 2023 20:39:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/37f2c611-ce30-4734-bfab-b7328bbbe43d/out.mp3" length="52451296" type="audio/mpeg"/><itunes:duration>54:38</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Sarah Catanzaro — Remembering the Lessons of the Last AI Renaissance</title><itunes:title>Sarah Catanzaro — Remembering the Lessons of the Last AI Renaissance</itunes:title><description><![CDATA[<p>Sarah Catanzaro is a General Partner at Amplify Partners, and one of the leading investors in AI and ML. Her investments include RunwayML, OctoML, and Gantry.</p><p>Sarah and Lukas discuss lessons learned from the "AI renaissance" of the mid 2010s and compare the general perception of ML back then to now. Sarah also provides insights from her perspective as an investor, from selling into tech-forward companies vs. 
traditional enterprises, to the current state of MLOps/developer tools, to large language models and hype bubbles.</p><p>Show notes (transcript and links): http://wandb.me/gd-sarah-catanzaro</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>1:10 Lessons learned from previous AI hype cycles</p><p>11:46 Maintaining technical knowledge as an investor</p><p>19:05 Selling into tech-forward companies vs. traditional enterprises</p><p>25:09 Building point solutions vs. end-to-end platforms</p><p>36:27 LLMs, new tooling, and commoditization</p><p>44:39 Failing fast and how startups can compete with large cloud vendors</p><p>52:31 The gap between research and industry, and vice versa</p><p>1:00:01 Advice for ML practitioners during hype bubbles</p><p>1:03:17 Sarah's thoughts on Rust and bottlenecks in deployment</p><p>1:11:23 The importance of aligning technology with people</p><p>1:15:58 Outro</p><p>---</p><p>📝 Links</p><p>📍 "Operationalizing Machine Learning: An Interview Study" (Shankar et al., 2022), an interview study on deploying and maintaining ML production pipelines: https://arxiv.org/abs/2209.09125</p><p>---</p><p>Connect with Sarah:</p><p>📍 Sarah on Twitter: https://twitter.com/sarahcat21</p><p>📍 Sarah's Amplify Partners profile: https://www.amplifypartners.com/investment-team/sarah-catanzaro</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Angelica Pan</p><p>---</p><p>Subscribe and listen to Gradient Dissent today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></description><content:encoded><![CDATA[<p>Sarah Catanzaro is a General Partner at Amplify Partners, and one of the leading investors in AI and ML. Her investments include RunwayML, OctoML, and Gantry.</p><p>Sarah and Lukas discuss lessons learned from the "AI renaissance" of the mid 2010s and compare the general perception of ML back then to now. 
Sarah also provides insights from her perspective as an investor, from selling into tech-forward companies vs. traditional enterprises, to the current state of MLOps/developer tools, to large language models and hype bubbles.</p><p>Show notes (transcript and links): http://wandb.me/gd-sarah-catanzaro</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>1:10 Lessons learned from previous AI hype cycles</p><p>11:46 Maintaining technical knowledge as an investor</p><p>19:05 Selling into tech-forward companies vs. traditional enterprises</p><p>25:09 Building point solutions vs. end-to-end platforms</p><p>36:27 LLMs, new tooling, and commoditization</p><p>44:39 Failing fast and how startups can compete with large cloud vendors</p><p>52:31 The gap between research and industry, and vice versa</p><p>1:00:01 Advice for ML practitioners during hype bubbles</p><p>1:03:17 Sarah's thoughts on Rust and bottlenecks in deployment</p><p>1:11:23 The importance of aligning technology with people</p><p>1:15:58 Outro</p><p>---</p><p>📝 Links</p><p>📍 "Operationalizing Machine Learning: An Interview Study" (Shankar et al., 2022), an interview study on deploying and maintaining ML production pipelines: https://arxiv.org/abs/2209.09125</p><p>---</p><p>Connect with Sarah:</p><p>📍 Sarah on Twitter: https://twitter.com/sarahcat21</p><p>📍 Sarah's Amplify Partners profile: https://www.amplifypartners.com/investment-team/sarah-catanzaro</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Angelica Pan</p><p>---</p><p>Subscribe and listen to Gradient Dissent today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">3177ae16-6e6e-4226-a8c3-fb11fdcba525</guid><itunes:image 
href="https://artwork.captivate.fm/9feaefa8-8638-4c3d-ad0c-b2a5da6d9830/YMyeemF7cguSZVa3QQkzk9Vm.png"/><pubDate>Thu, 02 Feb 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e9421d91-e842-4332-be85-c8fe4584282a/GD-81-SarahCatanzaro-MASTER-mp3.mp3" length="110005632" type="audio/mpeg"/><itunes:duration>01:16:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Cristóbal Valenzuela — The Next Generation of Content Creation and AI</title><itunes:title>Cristóbal Valenzuela — The Next Generation of Content Creation and AI</itunes:title><description><![CDATA[<p>Cristóbal Valenzuela is co-founder and CEO of Runway ML, a startup that's building the future of AI-powered content creation tools. Runway's research areas include diffusion systems for image generation.</p><p>Cris gives a demo of Runway's video editing platform. Then, he shares how his interest in combining technology with creativity led to Runway, and where he thinks the world of computation and content might be headed to next. Cris and Lukas also discuss Runway's tech stack and research.</p><p>Show notes (transcript and links): http://wandb.me/gd-cristobal-valenzuela</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>1:06 How Runway uses ML to improve video editing</p><p>6:04 A demo of Runway’s video editing capabilities</p><p>13:36 How Cris entered the machine learning space</p><p>18:55 Cris’ thoughts on the future of ML for creative use cases</p><p>28:46 Runway’s tech stack</p><p>32:38 Creativity, and keeping humans in the loop</p><p>36:15 The potential of audio generation and new mental models</p><p>40:01 Outro</p><p>---</p><p>🎥 Runway's AI Film Festival is accepting submissions through January 23! 🎥</p><p>They are looking for art and artists that are at the forefront of AI filmmaking. 
Submissions should be between 1 and 10 minutes long, and a core component of the film should include generative content.</p><p>📍 https://aiff.runwayml.com/</p><p>--</p><p>📝 Links</p><p>📍 "High-Resolution Image Synthesis with Latent Diffusion Models" (Rombach et al., 2022), the research paper behind Stable Diffusion: https://research.runwayml.com/publications/high-resolution-image-synthesis-with-latent-diffusion-models</p><p>📍 Lexman Artificial, a 100% AI-generated podcast: https://twitter.com/lexman_ai</p><p>---</p><p>Connect with Cris and Runway:</p><p>📍 Cris on Twitter: https://twitter.com/c_valenzuelab</p><p>📍 Runway on Twitter: https://twitter.com/runwayml</p><p>📍 Careers at Runway: https://runwayml.com/careers/</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Angelica Pan</p><p>---</p><p>Subscribe and listen to Gradient Dissent today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></description><content:encoded><![CDATA[<p>Cristóbal Valenzuela is co-founder and CEO of Runway ML, a startup that's building the future of AI-powered content creation tools. Runway's research areas include diffusion systems for image generation.</p><p>Cris gives a demo of Runway's video editing platform. Then, he shares how his interest in combining technology with creativity led to Runway, and where he thinks the world of computation and content might be headed to next. 
Cris and Lukas also discuss Runway's tech stack and research.</p><p>Show notes (transcript and links): http://wandb.me/gd-cristobal-valenzuela</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>1:06 How Runway uses ML to improve video editing</p><p>6:04 A demo of Runway’s video editing capabilities</p><p>13:36 How Cris entered the machine learning space</p><p>18:55 Cris’ thoughts on the future of ML for creative use cases</p><p>28:46 Runway’s tech stack</p><p>32:38 Creativity, and keeping humans in the loop</p><p>36:15 The potential of audio generation and new mental models</p><p>40:01 Outro</p><p>---</p><p>🎥 Runway's AI Film Festival is accepting submissions through January 23! 🎥</p><p>They are looking for art and artists that are at the forefront of AI filmmaking. Submissions should be between 1 and 10 minutes long, and a core component of the film should include generative content.</p><p>📍 https://aiff.runwayml.com/</p><p>--</p><p>📝 Links</p><p>📍 "High-Resolution Image Synthesis with Latent Diffusion Models" (Rombach et al., 2022), the research paper behind Stable Diffusion: https://research.runwayml.com/publications/high-resolution-image-synthesis-with-latent-diffusion-models</p><p>📍 Lexman Artificial, a 100% AI-generated podcast: https://twitter.com/lexman_ai</p><p>---</p><p>Connect with Cris and Runway:</p><p>📍 Cris on Twitter: https://twitter.com/c_valenzuelab</p><p>📍 Runway on Twitter: https://twitter.com/runwayml</p><p>📍 Careers at Runway: https://runwayml.com/careers/</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Angelica Pan</p><p>---</p><p>Subscribe and listen to Gradient Dissent today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">96905353-206e-4540-a333-526d255c0f8d</guid><itunes:image 
href="https://artwork.captivate.fm/eaad03b5-49f7-4b9a-b638-350abb8aa977/F057f9m4a7PqhzWi_Y3Kvx45.png"/><pubDate>Thu, 19 Jan 2023 00:30:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9f0cda14-abc0-432c-8b91-c94c993c877c/GD-Cristobal-Valenzuela-v5-1.mp3" length="77434130" type="audio/mpeg"/><itunes:duration>40:26</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Jeremy Howard — The Simple but Profound Insight Behind Diffusion</title><itunes:title>Jeremy Howard — The Simple but Profound Insight Behind Diffusion</itunes:title><description><![CDATA[<p>Jeremy Howard is a co-founder of fast.ai, the non-profit research group behind the popular massive open online course "Practical Deep Learning for Coders", and the open source deep learning library "fastai".</p><p>Jeremy is also a co-founder of #Masks4All, a global volunteer organization founded in March 2020 that advocated for the public adoption of homemade face masks in order to help slow the spread of COVID-19. His Washington Post article "Simple DIY masks could help flatten the curve." went viral in late March/early April 2020, and is associated with the U.S. CDC's change in guidance a few days later to recommend wearing masks in public.</p><p>In this episode, Jeremy explains how diffusion works and how individuals with limited compute budgets can engage meaningfully with large, state-of-the-art models. Then, as our first-ever repeat guest on Gradient Dissent, Jeremy revisits a previous conversation with Lukas on Python vs. 
Julia for machine learning.</p><p>Finally, Jeremy shares his perspective on the early days of COVID-19, and what his experience as one of the earliest and most high-profile advocates for widespread mask-wearing was like.</p><p>Show notes (transcript and links): http://wandb.me/gd-jeremy-howard-2</p><p>---</p><p>⏳ Timestamps:</p><p>0:00 Intro</p><p>1:06 Diffusion and generative models</p><p>14:40 Engaging with large models meaningfully</p><p>20:30 Jeremy's thoughts on Stable Diffusion and OpenAI</p><p>26:38 Prompt engineering and large language models</p><p>32:00 Revisiting Julia vs. Python</p><p>40:22 Jeremy's science advocacy during early COVID days</p><p>1:01:03 Researching how to improve children's education</p><p>1:07:43 The importance of executive buy-in</p><p>1:11:34 Outro</p><p>1:12:02 Bonus: Weights &amp; Biases</p><p>---</p><p>📝 Links</p><p>📍 Jeremy's previous Gradient Dissent episode (8/25/2022): http://wandb.me/gd-jeremy-howard</p><p>📍 "Simple DIY masks could help flatten the curve. 
We should all wear them in public.", Jeremy's viral Washington Post article: https://www.washingtonpost.com/outlook/2020/03/28/masks-all-coronavirus/</p><p>📍 "An evidence review of face masks against COVID-19" (Howard et al., 2021), one of the first peer-reviewed papers on the effectiveness of wearing masks: https://www.pnas.org/doi/10.1073/pnas.2014564118</p><p>📍 Jeremy's Twitter thread summary of "An evidence review of face masks against COVID-19": https://twitter.com/jeremyphoward/status/1348771993949151232</p><p>📍 Read more about Jeremy's mask-wearing advocacy: https://www.smh.com.au/world/north-america/australian-expat-s-push-for-universal-mask-wearing-catches-fire-in-the-us-20200401-p54fu2.html</p><p>---</p><p>Connect with Jeremy and fast.ai:</p><p>📍 Jeremy on Twitter: https://twitter.com/jeremyphoward</p><p>📍 fast.ai on Twitter: https://twitter.com/FastDotAI</p><p>📍 Jeremy on LinkedIn: https://www.linkedin.com/in/howardjeremy/</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Angelica Pan</p>]]></description><content:encoded><![CDATA[<p>Jeremy Howard is a co-founder of fast.ai, the non-profit research group behind the popular massive open online course "Practical Deep Learning for Coders", and the open source deep learning library "fastai".</p><p>Jeremy is also a co-founder of #Masks4All, a global volunteer organization founded in March 2020 that advocated for the public adoption of homemade face masks in order to help slow the spread of COVID-19. His Washington Post article "Simple DIY masks could help flatten the curve." went viral in late March/early April 2020, and is associated with the U.S. CDC's change in guidance a few days later to recommend wearing masks in public.</p><p>In this episode, Jeremy explains how diffusion works and how individuals with limited compute budgets can engage meaningfully with large, state-of-the-art models. 
Then, as our first-ever repeat guest on Gradient Dissent, Jeremy revisits a previous conversation with Lukas on Python vs. Julia for machine learning.</p><p>Finally, Jeremy shares his perspective on the early days of COVID-19, and what his experience as one of the earliest and most high-profile advocates for widespread mask-wearing was like.</p><p>Show notes (transcript and links): http://wandb.me/gd-jeremy-howard-2</p><p>---</p><p>⏳ Timestamps:</p><p>0:00 Intro</p><p>1:06 Diffusion and generative models</p><p>14:40 Engaging with large models meaningfully</p><p>20:30 Jeremy's thoughts on Stable Diffusion and OpenAI</p><p>26:38 Prompt engineering and large language models</p><p>32:00 Revisiting Julia vs. Python</p><p>40:22 Jeremy's science advocacy during early COVID days</p><p>1:01:03 Researching how to improve children's education</p><p>1:07:43 The importance of executive buy-in</p><p>1:11:34 Outro</p><p>1:12:02 Bonus: Weights &amp; Biases</p><p>---</p><p>📝 Links</p><p>📍 Jeremy's previous Gradient Dissent episode (8/25/2022): http://wandb.me/gd-jeremy-howard</p><p>📍 "Simple DIY masks could help flatten the curve. 
We should all wear them in public.", Jeremy's viral Washington Post article: https://www.washingtonpost.com/outlook/2020/03/28/masks-all-coronavirus/</p><p>📍 "An evidence review of face masks against COVID-19" (Howard et al., 2021), one of the first peer-reviewed papers on the effectiveness of wearing masks: https://www.pnas.org/doi/10.1073/pnas.2014564118</p><p>📍 Jeremy's Twitter thread summary of "An evidence review of face masks against COVID-19": https://twitter.com/jeremyphoward/status/1348771993949151232</p><p>📍 Read more about Jeremy's mask-wearing advocacy: https://www.smh.com.au/world/north-america/australian-expat-s-push-for-universal-mask-wearing-catches-fire-in-the-us-20200401-p54fu2.html</p><p>---</p><p>Connect with Jeremy and fast.ai:</p><p>📍 Jeremy on Twitter: https://twitter.com/jeremyphoward</p><p>📍 fast.ai on Twitter: https://twitter.com/FastDotAI</p><p>📍 Jeremy on LinkedIn: https://www.linkedin.com/in/howardjeremy/</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Angelica Pan</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">f4e5761a-22e4-4bd5-a27c-1067fd3e79c5</guid><itunes:image href="https://artwork.captivate.fm/c61000ad-047c-4fff-9f6c-9c12ad38ae90/8yQBwR-DTy5n6-kpCmuB3ytC.png"/><pubDate>Thu, 05 Jan 2023 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9e0de7d8-434e-4829-8333-eeb51ef82ca8/GD-JeremyHoward-v3.mp3" length="105051456" type="audio/mpeg"/><itunes:duration>01:12:57</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Jerome Pesenti — Large Language Models, PyTorch, and Meta</title><itunes:title>Jerome Pesenti — Large Language Models, PyTorch, and Meta</itunes:title><description><![CDATA[<p>Jerome Pesenti is the former VP of AI at Meta, a tech conglomerate that includes Facebook, WhatsApp, and Instagram, and one of the most exciting places where AI 
research is happening today.</p><p>Jerome shares his thoughts on Transformers-based large language models, and why he's excited by the progress but skeptical of the term "AGI". Then, he discusses some of the practical applications of ML at Meta (recommender systems and moderation!) and dives into the story behind Meta's development of PyTorch. Jerome and Lukas also chat about Jerome's time at IBM Watson and in drug discovery.</p><p>Show notes (transcript and links): http://wandb.me/gd-jerome-pesenti</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>0:28 Jerome's thoughts on large language models</p><p>12:53 AI applications and challenges at Meta</p><p>18:41 The story behind developing PyTorch</p><p>26:40 Jerome's experience at IBM Watson</p><p>28:53 Drug discovery, AI, and changing the game</p><p>36:10 The potential of education and AI</p><p>40:10 Meta and AR/VR interfaces</p><p>43:43 Why NVIDIA is such a powerhouse</p><p>47:08 Jerome's advice to people starting their careers</p><p>48:50 Going back to coding, the challenges of scaling</p><p>52:11 Outro</p><p>---</p><p>Connect with Jerome:</p><p>📍 Jerome on Twitter: https://twitter.com/an_open_mind</p><p>📍 Jerome on LinkedIn: https://www.linkedin.com/in/jpesenti/</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Angelica Pan, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></description><content:encoded><![CDATA[<p>Jerome Pesenti is the former VP of AI at Meta, a tech conglomerate that includes Facebook, WhatsApp, and Instagram, and one of the most exciting places where AI research is happening today.</p><p>Jerome shares his thoughts on Transformers-based large language models, and why he's excited by the progress but skeptical of the term "AGI". 
Then, he discusses some of the practical applications of ML at Meta (recommender systems and moderation!) and dives into the story behind Meta's development of PyTorch. Jerome and Lukas also chat about Jerome's time at IBM Watson and in drug discovery.</p><p>Show notes (transcript and links): http://wandb.me/gd-jerome-pesenti</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>0:28 Jerome's thoughts on large language models</p><p>12:53 AI applications and challenges at Meta</p><p>18:41 The story behind developing PyTorch</p><p>26:40 Jerome's experience at IBM Watson</p><p>28:53 Drug discovery, AI, and changing the game</p><p>36:10 The potential of education and AI</p><p>40:10 Meta and AR/VR interfaces</p><p>43:43 Why NVIDIA is such a powerhouse</p><p>47:08 Jerome's advice to people starting their careers</p><p>48:50 Going back to coding, the challenges of scaling</p><p>52:11 Outro</p><p>---</p><p>Connect with Jerome:</p><p>📍 Jerome on Twitter: https://twitter.com/an_open_mind</p><p>📍 Jerome on LinkedIn: https://www.linkedin.com/in/jpesenti/</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Angelica Pan, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">7a6cea90-7eb7-49a0-918a-026521072cfe</guid><itunes:image href="https://artwork.captivate.fm/54ed20ee-2789-4252-86b6-cc6539d17835/rEnruNWwWu1OZXKQHWFUr0CC.jpg"/><pubDate>Thu, 22 Dec 2022 08:15:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e813fe0b-e1c2-4b6a-8612-aee58269218f/GD-Jermone-Pesenti-v4.mp3" length="94783503" type="audio/mpeg"/><itunes:duration>52:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>D. 
Sculley — Technical Debt, Trade-offs, and Kaggle</title><itunes:title>D. Sculley — Technical Debt, Trade-offs, and Kaggle</itunes:title><description><![CDATA[<p>D. Sculley is CEO of Kaggle, the beloved and well-known data science and machine learning community.</p><p>D. discusses his influential 2014 paper "Machine Learning: The High Interest Credit Card of Technical Debt" and what the current challenges of deploying models in the real world are now, in 2022. Then, D. and Lukas chat about why Kaggle is like a rain forest, and about Kaggle's historic, current, and potential future roles in the broader machine learning community.</p><p>Show notes (transcript and links): http://wandb.me/gd-d-sculley</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>1:02 Machine learning and technical debt</p><p>11:18 MLOps, increased stakes, and realistic expectations</p><p>19:12 Evaluating models methodically</p><p>25:32 Kaggle's role in the ML world</p><p>33:34 Kaggle competitions, datasets, and notebooks</p><p>38:49 Why Kaggle is like a rain forest</p><p>44:25 Possible future directions for Kaggle</p><p>46:50 Healthy competitions and self-growth</p><p>48:44 Kaggle's relevance in a compute-heavy future</p><p>53:49 AutoML vs. human judgment</p><p>56:06 After a model goes into production</p><p>1:00:00 Outro</p><p>---</p><p>Connect with D. and Kaggle:</p><p>📍 D. on LinkedIn: https://www.linkedin.com/in/d-sculley-90467310/</p><p>📍 Kaggle on Twitter: https://twitter.com/kaggle</p><p>---</p><p>Links:</p><p>📍 "Machine Learning: The High Interest Credit Card of Technical Debt" (Sculley et al. 
2014): https://research.google/pubs/pub43146/</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Angelica Pan, Anish Shah, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></description><content:encoded><![CDATA[<p>D. Sculley is CEO of Kaggle, the beloved and well-known data science and machine learning community.</p><p>D. discusses his influential 2014 paper "Machine Learning: The High Interest Credit Card of Technical Debt" and what the current challenges of deploying models in the real world are now, in 2022. Then, D. and Lukas chat about why Kaggle is like a rain forest, and about Kaggle's historic, current, and potential future roles in the broader machine learning community.</p><p>Show notes (transcript and links): http://wandb.me/gd-d-sculley</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>1:02 Machine learning and technical debt</p><p>11:18 MLOps, increased stakes, and realistic expectations</p><p>19:12 Evaluating models methodically</p><p>25:32 Kaggle's role in the ML world</p><p>33:34 Kaggle competitions, datasets, and notebooks</p><p>38:49 Why Kaggle is like a rain forest</p><p>44:25 Possible future directions for Kaggle</p><p>46:50 Healthy competitions and self-growth</p><p>48:44 Kaggle's relevance in a compute-heavy future</p><p>53:49 AutoML vs. human judgment</p><p>56:06 After a model goes into production</p><p>1:00:00 Outro</p><p>---</p><p>Connect with D. and Kaggle:</p><p>📍 D. on LinkedIn: https://www.linkedin.com/in/d-sculley-90467310/</p><p>📍 Kaggle on Twitter: https://twitter.com/kaggle</p><p>---</p><p>Links:</p><p>📍 "Machine Learning: The High Interest Credit Card of Technical Debt" (Sculley et al. 
2014): https://research.google/pubs/pub43146/</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Angelica Pan, Anish Shah, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">8bbb0939-ceb6-499d-bb54-d9819e9c7a85</guid><itunes:image href="https://artwork.captivate.fm/c1ac5454-a4cb-452d-ad65-535c83ad4d57/p7YHy_hKgVj9qHiM8GchFsIC.jpg"/><pubDate>Thu, 01 Dec 2022 00:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/70100591-cabe-4c7f-a0fe-cf2fb9565766/GD-D-Sculley-v3.mp3" length="86897472" type="audio/mpeg"/><itunes:duration>01:00:26</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Emad Mostaque — Stable Diffusion, Stability AI, and What’s Next</title><itunes:title>Emad Mostaque — Stable Diffusion, Stability AI, and What’s Next</itunes:title><description><![CDATA[<p>Emad Mostaque is CEO and co-founder of Stability AI, a startup and network of decentralized developer communities building open AI tools. Stability AI is the company behind Stable Diffusion, the well-known, open source, text-to-image generation model.</p><p>Emad shares the story and mission behind Stability AI (unlocking humanity's potential with open AI technology), and explains how Stability's role as a community catalyst and compute provider might evolve as the company grows. 
Then, Emad and Lukas discuss what the future might hold in store: big models vs "optimal" models, better datasets, and more decentralization.</p><p>-</p><p>🎶 Special note: This week’s theme music was composed by Weights &amp; Biases’ own Justin Tenuto with help from Harmonai’s Dance Diffusion.</p><p>-</p><p>Show notes (transcript and links): http://wandb.me/gd-emad-mostaque</p><p>-</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Angelica Pan, Lavanya Shukla, Anish Shah</p><p>-</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></description><content:encoded><![CDATA[<p>Emad Mostaque is CEO and co-founder of Stability AI, a startup and network of decentralized developer communities building open AI tools. 
Stability AI is the company behind Stable Diffusion, the well-known, open source, text-to-image generation model.</p><p>Emad shares the story and mission behind Stability AI (unlocking humanity's potential with open AI technology), and explains how Stability's role as a community catalyst and compute provider might evolve as the company grows. Then, Emad and Lukas discuss what the future might hold in store: big models vs "optimal" models, better datasets, and more decentralization.</p><p>-</p><p>🎶 Special note: This week’s theme music was composed by Weights &amp; Biases’ own Justin Tenuto with help from Harmonai’s Dance Diffusion.</p><p>-</p><p>Show notes (transcript and links): http://wandb.me/gd-emad-mostaque</p><p>-</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Angelica Pan, Lavanya Shukla, Anish Shah</p><p>-</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 
Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">107fd48b-5067-48c4-9131-d0b5803dfc46</guid><itunes:image href="https://artwork.captivate.fm/314e56e4-e473-499c-a582-b6f3e14a6ddc/r7X6yElEeAVmtabbZtuL0OK6.jpg"/><pubDate>Tue, 15 Nov 2022 07:45:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/71cf4e05-dd20-4ca9-be03-bb67a9dbe9bb/GD-Emad-Mostaque-20v3-1.mp3" length="101494080" type="audio/mpeg"/><itunes:duration>01:10:29</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Jehan Wickramasuriya — AI in High-Stress Scenarios</title><itunes:title>Jehan Wickramasuriya — AI in High-Stress Scenarios</itunes:title><description><![CDATA[<p>Jehan Wickramasuriya is the Vice President of AI, Platform &amp; Data Services at Motorola Solutions, a global leader in public safety and enterprise security.</p><p>In this episode, Jehan discusses how Motorola Solutions uses AI to simplify data streams to help maximize human potential in high-stress situations. 
He also shares his thoughts on augmenting synthetic data with real data and the challenges posed in partnering with startups.</p><p>Show notes (transcript and links): <a href="http://wandb.me/gd-jehan-wickramasuriya" rel="noopener noreferrer" target="_blank">http://wandb.me/gd-jehan-wickramasuriya</a></p><p>-</p><p>⏳ Timestamps: </p><p>00:00 Intro</p><p>00:42 How AI fits into the safety/security industry </p><p>09:33 Event matching and object detection</p><p>14:47 Running models on the right hardware</p><p>17:46 Scaling model evaluation</p><p>23:58 Monitoring and evaluation challenges</p><p>26:30 Identifying and sorting issues</p><p>30:27 Bridging vision and language domains</p><p>39:25 Challenges and promises of natural language technology</p><p>41:35 Production environment</p><p>43:15 Using synthetic data</p><p>49:59 Working with startups</p><p>53:55 Multi-task learning, meta-learning, and user experience</p><p>56:44 Optimization and testing across multiple platforms</p><p>59:36 Outro</p><p>-</p><p>Connect with Jehan and Motorola Solutions:</p><p>📍 Jehan on LinkedIn: https://www.linkedin.com/in/jehanw/</p><p>📍 Jehan on Twitter: https://twitter.com/jehan/</p><p>📍 Motorola Solutions on Twitter: https://twitter.com/MotoSolutions/</p><p>📍 Careers at Motorola Solutions: https://www.motorolasolutions.com/en_us/about/careers.html</p><p>-</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Cayla Sharp, Angelica Pan, Lavanya Shukla</p><p><br></p><p>-</p><p><br></p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></description><content:encoded><![CDATA[<p>Jehan Wickramasuriya is the Vice President of AI, Platform &amp; Data Services at Motorola Solutions, a global leader in public safety and enterprise security.</p><p>In this episode, Jehan discusses how Motorola Solutions uses AI to simplify data streams 
to help maximize human potential in high-stress situations. He also shares his thoughts on augmenting synthetic data with real data and the challenges posed in partnering with startups.</p><p>Show notes (transcript and links): <a href="http://wandb.me/gd-jehan-wickramasuriya" rel="noopener noreferrer" target="_blank">http://wandb.me/gd-jehan-wickramasuriya</a></p><p>-</p><p>⏳ Timestamps: </p><p>00:00 Intro</p><p>00:42 How AI fits into the safety/security industry </p><p>09:33 Event matching and object detection</p><p>14:47 Running models on the right hardware</p><p>17:46 Scaling model evaluation</p><p>23:58 Monitoring and evaluation challenges</p><p>26:30 Identifying and sorting issues</p><p>30:27 Bridging vision and language domains</p><p>39:25 Challenges and promises of natural language technology</p><p>41:35 Production environment</p><p>43:15 Using synthetic data</p><p>49:59 Working with startups</p><p>53:55 Multi-task learning, meta-learning, and user experience</p><p>56:44 Optimization and testing across multiple platforms</p><p>59:36 Outro</p><p>-</p><p>Connect with Jehan and Motorola Solutions:</p><p>📍 Jehan on LinkedIn: https://www.linkedin.com/in/jehanw/</p><p>📍 Jehan on Twitter: https://twitter.com/jehan/</p><p>📍 Motorola Solutions on Twitter: https://twitter.com/MotoSolutions/</p><p>📍 Careers at Motorola Solutions: https://www.motorolasolutions.com/en_us/about/careers.html</p><p>-</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Cayla Sharp, Angelica Pan, Lavanya Shukla</p><p><br></p><p>-</p><p><br></p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">c384a9a6-6748-498b-a17b-8f29a4e75e96</guid><itunes:image 
href="https://artwork.captivate.fm/14cc34cc-39bd-41e6-a1db-267a287efc39/GIu9ny7coLPDnyRsaqS7HbY1.jpg"/><pubDate>Thu, 06 Oct 2022 08:15:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/26b58c77-f021-450f-aef5-cb9d863b20a7/GD-Jehan-Wickramasuriya-20v2-20audio.mp3" length="95087712" type="audio/mpeg"/><itunes:duration>01:00:02</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Will Falcon — Making Lightning the Apple of ML</title><itunes:title>Will Falcon — Making Lightning the Apple of ML</itunes:title><description><![CDATA[<p>Will Falcon is the CEO and co-founder of Lightning AI, a platform that enables users to quickly build and publish ML models.</p><p>In this episode, Will explains how Lightning addresses the challenges of a fragmented AI ecosystem and reveals which framework PyTorch Lightning was originally built upon (hint: not PyTorch!). He also shares lessons he took from his experience serving in the military and offers a recommendation to veterans who want to work in tech.</p><p>Show notes (transcript and links): http://wandb.me/gd-will-falcon</p><p><br></p><p>---</p><p><br></p><p>⏳ Timestamps: </p><p>00:00 Intro</p><p>01:00 From SEAL training to FAIR</p><p>04:17 Stress-testing Lightning</p><p>07:55 Choosing PyTorch over TensorFlow and other frameworks</p><p>13:16 Components of the Lightning platform</p><p>17:01 Launching Lightning from Facebook</p><p>19:09 Similarities between leadership and research</p><p>22:08 Lessons from the military</p><p>26:56 Scaling PyTorch Lightning to Lightning AI</p><p>33:21 Hiring the right people</p><p>35:21 The future of Lightning</p><p>39:53 Reducing algorithm complexity in self-supervised learning</p><p>42:19 A fragmented ML landscape</p><p>44:35 Outro</p><p><br></p><p>---</p><p><br></p><p>Connect with Lightning</p><p>📍 Website: https://lightning.ai</p><p>📍 Twitter: https://twitter.com/LightningAI</p><p>📍 LinkedIn: 
https://www.linkedin.com/company/pytorch-lightning/</p><p>📍 Careers: https://boards.greenhouse.io/lightningai</p><p><br></p><p>---</p><p><br></p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Anish Shah, Cayla Sharp, Angelica Pan, Lavanya Shukla</p><p><br></p><p>---</p><p><br></p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></description><content:encoded><![CDATA[<p>Will Falcon is the CEO and co-founder of Lightning AI, a platform that enables users to quickly build and publish ML models.</p><p>In this episode, Will explains how Lightning addresses the challenges of a fragmented AI ecosystem and reveals which framework PyTorch Lightning was originally built upon (hint: not PyTorch!). He also shares lessons he took from his experience serving in the military and offers a recommendation to veterans who want to work in tech.</p><p>Show notes (transcript and links): http://wandb.me/gd-will-falcon</p><p><br></p><p>---</p><p><br></p><p>⏳ Timestamps: </p><p>00:00 Intro</p><p>01:00 From SEAL training to FAIR</p><p>04:17 Stress-testing Lightning</p><p>07:55 Choosing PyTorch over TensorFlow and other frameworks</p><p>13:16 Components of the Lightning platform</p><p>17:01 Launching Lightning from Facebook</p><p>19:09 Similarities between leadership and research</p><p>22:08 Lessons from the military</p><p>26:56 Scaling PyTorch Lightning to Lightning AI</p><p>33:21 Hiring the right people</p><p>35:21 The future of Lightning</p><p>39:53 Reducing algorithm complexity in self-supervised learning</p><p>42:19 A fragmented ML landscape</p><p>44:35 Outro</p><p><br></p><p>---</p><p><br></p><p>Connect with Lightning</p><p>📍 Website: https://lightning.ai</p><p>📍 Twitter: https://twitter.com/LightningAI</p><p>📍 LinkedIn: https://www.linkedin.com/company/pytorch-lightning/</p><p>📍 Careers: 
https://boards.greenhouse.io/lightningai</p><p><br></p><p>---</p><p><br></p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Anish Shah, Cayla Sharp, Angelica Pan, Lavanya Shukla</p><p><br></p><p>---</p><p><br></p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">cdad773f-5601-4850-89e6-7e10055518e3</guid><itunes:image href="https://artwork.captivate.fm/1ed18687-3bd2-43b9-ac73-ab7178f1ddaf/renVrft83xu88g8zlSlwMoCX.jpg"/><pubDate>Thu, 15 Sep 2022 08:30:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/96d4ab37-2ab1-4279-8019-0e1b01b22871/GD-Will-Falcon-20v6.mp3" length="78370698" type="audio/mpeg"/><itunes:duration>45:21</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Aaron Colak — ML and NLP in Experience Management</title><itunes:title>Aaron Colak — ML and NLP in Experience Management</itunes:title><description><![CDATA[<p>Aaron Colak is the Leader of Core Machine Learning at Qualtrics, an experience management company that takes large language models and applies them to real-world, B2B use cases.</p><p>In this episode, Aaron describes mixing classical linguistic analysis with deep learning models and how Qualtrics organized their machine learning organization and models to leverage the best of these techniques.
He also explains how advances in NLP have invited new opportunities in low-resource languages.</p><p>Show notes (transcript and links): http://wandb.me/gd-aaron-colak</p><p>---</p><p>⏳ Timestamps: </p><p>00:00 Intro</p><p>00:57 Evolving from surveys to experience management</p><p>04:56 Detecting sentiment with ML</p><p>10:57 Working with large language models and rule-based systems</p><p>14:50 Zero-shot learning, NLP, and low-resource languages</p><p>20:11 Letting customers control data</p><p>25:13 Deep learning and tabular data</p><p>28:40 Hyperscalers and performance monitoring</p><p>34:54 Combining deep learning with linguistics</p><p>40:03 A sense of accomplishment</p><p>42:52 Causality and observational data in healthcare</p><p>45:09 Challenges of interdisciplinary collaboration</p><p>49:27 Outro</p><p>---</p><p>Connect with Aaron and Qualtrics</p><p>📍 Aaron on LinkedIn: https://www.linkedin.com/in/aaron-r-colak-3522308/</p><p>📍 Qualtrics on Twitter: https://twitter.com/qualtrics/</p><p>📍 Careers at Qualtrics: https://www.qualtrics.com/careers/</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Cayla Sharp, Angelica Pan, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></description><content:encoded><![CDATA[<p>Aaron Colak is the Leader of Core Machine Learning at Qualtrics, an experience management company that takes large language models and applies them to real-world, B2B use cases.</p><p>In this episode, Aaron describes mixing classical linguistic analysis with deep learning models and how Qualtrics organized their machine learning organization and models to leverage the best of these techniques.
He also explains how advances in NLP have invited new opportunities in low-resource languages.</p><p>Show notes (transcript and links): http://wandb.me/gd-aaron-colak</p><p>---</p><p>⏳ Timestamps: </p><p>00:00 Intro</p><p>00:57 Evolving from surveys to experience management</p><p>04:56 Detecting sentiment with ML</p><p>10:57 Working with large language models and rule-based systems</p><p>14:50 Zero-shot learning, NLP, and low-resource languages</p><p>20:11 Letting customers control data</p><p>25:13 Deep learning and tabular data</p><p>28:40 Hyperscalers and performance monitoring</p><p>34:54 Combining deep learning with linguistics</p><p>40:03 A sense of accomplishment</p><p>42:52 Causality and observational data in healthcare</p><p>45:09 Challenges of interdisciplinary collaboration</p><p>49:27 Outro</p><p>---</p><p>Connect with Aaron and Qualtrics</p><p>📍 Aaron on LinkedIn: https://www.linkedin.com/in/aaron-r-colak-3522308/</p><p>📍 Qualtrics on Twitter: https://twitter.com/qualtrics/</p><p>📍 Careers at Qualtrics: https://www.qualtrics.com/careers/</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Cayla Sharp, Angelica Pan, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">0b6e6fcc-d0bd-4920-ab9f-c320b87adb9d</guid><itunes:image href="https://artwork.captivate.fm/6c5b70c5-37ac-434b-a585-f4269f8b1a0a/V-tFwO1BcIgEysF04LknnW-2.jpg"/><pubDate>Fri, 26 Aug 2022 07:40:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/12451010-7037-47a3-9d5b-4f61259c29c5/GD-72-AaronColak-V2.mp3" length="74308497" 
type="audio/mpeg"/><itunes:duration>50:00</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Jordan Fisher — Skipping the Line with Autonomous Checkout</title><itunes:title>Jordan Fisher — Skipping the Line with Autonomous Checkout</itunes:title><description><![CDATA[<p>Jordan Fisher is the CEO and co-founder of Standard AI, an autonomous checkout company that’s pushing the boundaries of computer vision.</p><p>In this episode, Jordan discusses “the Wild West” of the MLOps stack and tells Lukas why Rust beats Python. He also explains why AutoML shouldn't be overlooked and uses a bag of chips to help explain the Manifold Hypothesis.</p><p>Show notes (transcript and links): http://wandb.me/gd-jordan-fisher</p><p>---</p><p>⏳ Timestamps: </p><p>00:00 Intro</p><p>00:40 The origins of Standard AI</p><p>08:30 Getting Standard into stores</p><p>18:00 Supervised learning, the advent of synthetic data, and the manifold hypothesis</p><p>24:23 What's important in an MLOps stack</p><p>27:32 The merits of AutoML</p><p>30:00 Deep learning frameworks</p><p>33:02 Python versus Rust</p><p>39:32 Raw camera data versus video</p><p>42:47 The future of autonomous checkout</p><p>48:02 Sharing the StandardSim data set</p><p>52:30 Picking the right tools</p><p>54:30 Overcoming dynamic data set challenges</p><p>57:35 Outro</p><p>---</p><p>Connect with Jordan and Standard AI</p><p>📍 Jordan on LinkedIn: https://www.linkedin.com/in/jordan-fisher-81145025/</p><p>📍 Standard AI on Twitter: https://twitter.com/StandardAi</p><p>📍 Careers at Standard AI: https://careers.standard.ai/</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Cayla Sharp, Angelica Pan, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: 
http://wandb.me/spotify</p>]]></description><content:encoded><![CDATA[<p>Jordan Fisher is the CEO and co-founder of Standard AI, an autonomous checkout company that’s pushing the boundaries of computer vision.</p><p>In this episode, Jordan discusses “the Wild West” of the MLOps stack and tells Lukas why Rust beats Python. He also explains why AutoML shouldn't be overlooked and uses a bag of chips to help explain the Manifold Hypothesis.</p><p>Show notes (transcript and links): http://wandb.me/gd-jordan-fisher</p><p>---</p><p>⏳ Timestamps: </p><p>00:00 Intro</p><p>00:40 The origins of Standard AI</p><p>08:30 Getting Standard into stores</p><p>18:00 Supervised learning, the advent of synthetic data, and the manifold hypothesis</p><p>24:23 What's important in an MLOps stack</p><p>27:32 The merits of AutoML</p><p>30:00 Deep learning frameworks</p><p>33:02 Python versus Rust</p><p>39:32 Raw camera data versus video</p><p>42:47 The future of autonomous checkout</p><p>48:02 Sharing the StandardSim data set</p><p>52:30 Picking the right tools</p><p>54:30 Overcoming dynamic data set challenges</p><p>57:35 Outro</p><p>---</p><p>Connect with Jordan and Standard AI</p><p>📍 Jordan on LinkedIn: https://www.linkedin.com/in/jordan-fisher-81145025/</p><p>📍 Standard AI on Twitter: https://twitter.com/StandardAi</p><p>📍 Careers at Standard AI: https://careers.standard.ai/</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Riley Fields, Cayla Sharp, Angelica Pan, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">8db81974-1588-4d74-beb3-36f42f9d6cdc</guid><itunes:image href="https://artwork.captivate.fm/3dce7c0d-290b-4e31-9d7c-7e03c9a93172/MvDxvYA-1g10GNjbsyyhuOW9.jpg"/><pubDate>Thu, 04 Aug 
2022 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/77bb061e-6605-4d1f-9d45-49c3573c4538/GD-71-Jordan-20Fisher-V1.mp3" length="83533577" type="audio/mpeg"/><itunes:duration>57:58</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Drago Anguelov — Robustness, Safety, and Scalability at Waymo</title><itunes:title>Drago Anguelov — Robustness, Safety, and Scalability at Waymo</itunes:title><description><![CDATA[<p>Drago Anguelov is a Distinguished Scientist and Head of Research at Waymo, an autonomous driving technology company and subsidiary of Alphabet Inc.</p><p>We begin by discussing Drago's work on the original Inception architecture, winner of the 2014 ImageNet challenge and introduction of the inception module. Then, we explore milestones and current trends in autonomous driving, from Waymo's release of the Open Dataset to the trade-offs between modular and end-to-end systems.</p><p>Drago also shares his thoughts on finding rare examples, and the challenges of creating scalable and robust systems.</p><p>Show notes (transcript and links): http://wandb.me/gd-drago-anguelov</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>0:45 The story behind the Inception architecture</p><p>13:51 Trends and milestones in autonomous vehicles</p><p>23:52 The challenges of scalability and simulation</p><p>30:19 Why LiDAR and mapping are useful</p><p>35:31 Waymo Via and autonomous trucking</p><p>37:31 Robustness and unsupervised domain adaptation</p><p>40:44 Why Waymo released the Waymo Open Dataset</p><p>49:02 The domain gap between simulation and the real world</p><p>56:40 Finding rare examples</p><p>1:04:34 The challenges of production requirements</p><p>1:08:36 Outro</p><p>---</p><p>Connect with Drago &amp; Waymo</p><p>📍 Drago on LinkedIn: https://www.linkedin.com/in/dragomiranguelov/</p><p>📍 Waymo on Twitter: https://twitter.com/waymo/</p><p>📍 Careers at Waymo: 
https://waymo.com/careers/</p><p>---</p><p>Links:</p><p>📍 Inception v1: https://arxiv.org/abs/1409.4842</p><p>📍 "SPG: Unsupervised Domain Adaptation for 3D Object Detection via Semantic Point Generation", Qiangeng Xu et al. (2021), https://arxiv.org/abs/2108.06709</p><p>📍 "GradTail: Learning Long-Tailed Data Using Gradient-based Sample Weighting", Zhao Chen et al. (2022), https://arxiv.org/abs/2201.05938</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Cayla Sharp, Angelica Pan, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></description><content:encoded><![CDATA[<p>Drago Anguelov is a Distinguished Scientist and Head of Research at Waymo, an autonomous driving technology company and subsidiary of Alphabet Inc.</p><p>We begin by discussing Drago's work on the original Inception architecture, winner of the 2014 ImageNet challenge and introduction of the inception module. 
Then, we explore milestones and current trends in autonomous driving, from Waymo's release of the Open Dataset to the trade-offs between modular and end-to-end systems.</p><p>Drago also shares his thoughts on finding rare examples, and the challenges of creating scalable and robust systems.</p><p>Show notes (transcript and links): http://wandb.me/gd-drago-anguelov</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>0:45 The story behind the Inception architecture</p><p>13:51 Trends and milestones in autonomous vehicles</p><p>23:52 The challenges of scalability and simulation</p><p>30:19 Why LiDAR and mapping are useful</p><p>35:31 Waymo Via and autonomous trucking</p><p>37:31 Robustness and unsupervised domain adaptation</p><p>40:44 Why Waymo released the Waymo Open Dataset</p><p>49:02 The domain gap between simulation and the real world</p><p>56:40 Finding rare examples</p><p>1:04:34 The challenges of production requirements</p><p>1:08:36 Outro</p><p>---</p><p>Connect with Drago &amp; Waymo</p><p>📍 Drago on LinkedIn: https://www.linkedin.com/in/dragomiranguelov/</p><p>📍 Waymo on Twitter: https://twitter.com/waymo/</p><p>📍 Careers at Waymo: https://waymo.com/careers/</p><p>---</p><p>Links:</p><p>📍 Inception v1: https://arxiv.org/abs/1409.4842</p><p>📍 "SPG: Unsupervised Domain Adaptation for 3D Object Detection via Semantic Point Generation", Qiangeng Xu et al. (2021), https://arxiv.org/abs/2108.06709</p><p>📍 "GradTail: Learning Long-Tailed Data Using Gradient-based Sample Weighting", Zhao Chen et al. 
(2022), https://arxiv.org/abs/2201.05938</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Cayla Sharp, Angelica Pan, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">33981b5c-6e39-45d6-9382-a3abe913997b</guid><itunes:image href="https://artwork.captivate.fm/a567c4bd-5817-4155-9cf1-9e68578934db/i7FPIsfB9KqpA1UqWKOR4-0x.jpg"/><pubDate>Thu, 14 Jul 2022 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/37566194-1248-400a-a347-9424f1fe4991/GD-70-Dragomir-V1.mp3" length="99452097" type="audio/mpeg"/><itunes:duration>01:09:01</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>James Cham — Investing in the Intersection of Business and Technology</title><itunes:title>James Cham — Investing in the Intersection of Business and Technology</itunes:title><description><![CDATA[<p>James Cham is a co-founder and partner at Bloomberg Beta, an early-stage venture firm that invests in machine learning and the future of work, the intersection between business and technology.</p><p>James explains how his approach to investing in AI has developed over the last decade, which signals of success he looks for in the ever-adapting world of venture startups (tip: look for the "gradient of admiration"), and why it's so important to demystify ML for executives and decision-makers.</p><p>Lukas and James also discuss how new technologies create new business models, and what the ethical considerations of a world where machine learning is accepted to be possibly fallible would be like.</p><p>Show notes (transcript and links): http://wandb.me/gd-james-cham</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 
Intro</p><p>0:46 How investment in AI has changed and developed</p><p>7:08 Creating the first ML landscape infographics</p><p>10:30 The impact of ML on organizations and management</p><p>17:40 Demystifying ML for executives</p><p>21:40 Why signals of successful startups change over time</p><p>27:07 ML and the emergence of new business models</p><p>37:58 New technology vs new consumer goods</p><p>39:50 What James considers when investing</p><p>44:19 Ethical considerations of accepting that ML models are fallible</p><p>50:30 Reflecting on past investment decisions</p><p>52:56 Thoughts on consciousness and Theseus' paradox</p><p>59:08 Why it's important to increase general ML literacy</p><p>1:03:09 Outro</p><p>1:03:30 Bonus: How James' faith informs his thoughts on ML</p><p>---</p><p>Connect with James:</p><p>📍 Twitter: https://twitter.com/jamescham</p><p>📍 Bloomberg Beta: https://github.com/Bloomberg-Beta/Manual</p><p>---</p><p>Links:</p><p>📍 "Street-Level Algorithms: A Theory at the Gaps Between Policy and Decisions" by Ali Alkhatib and Michael Bernstein (2019): https://doi.org/10.1145/3290605.3300760</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Cayla Sharp, Angelica Pan, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></description><content:encoded><![CDATA[<p>James Cham is a co-founder and partner at Bloomberg Beta, an early-stage venture firm that invests in machine learning and the future of work, the intersection between business and technology.</p><p>James explains how his approach to investing in AI has developed over the last decade, which signals of success he looks for in the ever-adapting world of venture startups (tip: look for the "gradient of admiration"), and why it's so important to demystify ML for executives and decision-makers.</p><p>Lukas and James 
also discuss how new technologies create new business models, and what the ethical considerations of a world where machine learning is accepted to be possibly fallible would be like.</p><p>Show notes (transcript and links): http://wandb.me/gd-james-cham</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>0:46 How investment in AI has changed and developed</p><p>7:08 Creating the first ML landscape infographics</p><p>10:30 The impact of ML on organizations and management</p><p>17:40 Demystifying ML for executives</p><p>21:40 Why signals of successful startups change over time</p><p>27:07 ML and the emergence of new business models</p><p>37:58 New technology vs new consumer goods</p><p>39:50 What James considers when investing</p><p>44:19 Ethical considerations of accepting that ML models are fallible</p><p>50:30 Reflecting on past investment decisions</p><p>52:56 Thoughts on consciousness and Theseus' paradox</p><p>59:08 Why it's important to increase general ML literacy</p><p>1:03:09 Outro</p><p>1:03:30 Bonus: How James' faith informs his thoughts on ML</p><p>---</p><p>Connect with James:</p><p>📍 Twitter: https://twitter.com/jamescham</p><p>📍 Bloomberg Beta: https://github.com/Bloomberg-Beta/Manual</p><p>---</p><p>Links:</p><p>📍 "Street-Level Algorithms: A Theory at the Gaps Between Policy and Decisions" by Ali Alkhatib and Michael Bernstein (2019): https://doi.org/10.1145/3290605.3300760</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Cayla Sharp, Angelica Pan, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">2422bdd5-5db5-4eeb-87f9-531d57684a5a</guid><itunes:image 
href="https://artwork.captivate.fm/db3ebddd-48cc-4a89-b7d9-354528879001/g6z74HMeYkbbBAkg0KC6UG1u.jpg"/><pubDate>Thu, 07 Jul 2022 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9730cb6e-673d-4c1a-9c25-544c53dd6967/GD-James-20Cham-V1.mp3" length="95473891" type="audio/mpeg"/><itunes:duration>01:06:11</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Boris Dayma — The Story Behind DALL·E mini, the Viral Phenomenon</title><itunes:title>Boris Dayma — The Story Behind DALL·E mini, the Viral Phenomenon</itunes:title><description><![CDATA[<p class="ql-align-right"><br></p><p>Check out this report by Boris about DALL-E mini:</p><p><a href="https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy" rel="noopener noreferrer" target="_blank">https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy</a></p><p><a href="https://wandb.ai/_scott/wandb_example/reports/Collaboration-in-ML-made-easy-with-W-B-Teams--VmlldzoxMjcwMDU5" rel="noopener noreferrer" target="_blank">https://wandb.ai/_scott/wandb_example/reports/Collaboration-in-ML-made-easy-with-W-B-Teams--VmlldzoxMjcwMDU5</a></p><p>https://twitter.com/weirddalle</p><p>Connect with Boris:</p><p>📍 Twitter: https://twitter.com/borisdayma</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Cayla Sharp, Angelica Pan, Sanyam Bhutani, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></description><content:encoded><![CDATA[<p class="ql-align-right"><br></p><p>Check out this report by Boris about DALL-E mini:</p><p><a href="https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy" 
rel="noopener noreferrer" target="_blank">https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy</a></p><p><a href="https://wandb.ai/_scott/wandb_example/reports/Collaboration-in-ML-made-easy-with-W-B-Teams--VmlldzoxMjcwMDU5" rel="noopener noreferrer" target="_blank">https://wandb.ai/_scott/wandb_example/reports/Collaboration-in-ML-made-easy-with-W-B-Teams--VmlldzoxMjcwMDU5</a></p><p>https://twitter.com/weirddalle</p><p>Connect with Boris:</p><p>📍 Twitter: https://twitter.com/borisdayma</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Cayla Sharp, Angelica Pan, Sanyam Bhutani, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">4e635598-4876-4e0a-875f-47d8578d4bdb</guid><itunes:image href="https://artwork.captivate.fm/d1b33452-73fa-4420-9a74-5db88ae4a329/JxYV_Wo5pQku2at1v_OtYZ0y.png"/><pubDate>Fri, 17 Jun 2022 15:32:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/71e565d6-769d-4148-9322-b1eb63c943cb/GD-Boris-DALL-E-V1.mp3" length="52033906" type="audio/mpeg"/><itunes:duration>35:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Tristan Handy — The Work Behind the Data Work</title><itunes:title>Tristan Handy — The Work Behind the Data Work</itunes:title><description><![CDATA[<p>Tristan Handy is CEO and founder of dbt Labs. 
dbt (data build tool) simplifies the data transformation workflow and helps organizations make better decisions.</p><p>Lukas and Tristan dive into the history of the modern data stack and the subsequent challenges that dbt was created to address; communities of identity and product-led growth; and thoughts on why SQL has survived and thrived for so long. Tristan also shares his hopes for the future of BI tools and the data stack.</p><p>Show notes (transcript and links): http://wandb.me/gd-tristan-handy</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>0:40 How dbt makes data transformation easier</p><p>4:52 dbt and avoiding bad data habits</p><p>14:23 Agreeing on organizational ground truths</p><p>19:04 Staying current while running a company</p><p>22:15 The origin story of dbt</p><p>26:08 Why dbt is conceptually simple but hard to execute </p><p>34:47 The dbt community and the bottom-up mindset</p><p>41:50 The future of data and operations</p><p>47:41 dbt and machine learning</p><p>49:17 Why SQL is so ubiquitous</p><p>55:20 Bridging the gap between the ML and data worlds</p><p>1:00:22 Outro</p><p>---</p><p>Connect with Tristan:</p><p>📍 Twitter: https://twitter.com/jthandy</p><p>📍 The Analytics Engineering Roundup: https://roundup.getdbt.com/</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Cayla Sharp, Angelica Pan, Sanyam Bhutani, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></description><content:encoded><![CDATA[<p>Tristan Handy is CEO and founder of dbt Labs. 
dbt (data build tool) simplifies the data transformation workflow and helps organizations make better decisions.</p><p>Lukas and Tristan dive into the history of the modern data stack and the subsequent challenges that dbt was created to address; communities of identity and product-led growth; and thoughts on why SQL has survived and thrived for so long. Tristan also shares his hopes for the future of BI tools and the data stack.</p><p>Show notes (transcript and links): http://wandb.me/gd-tristan-handy</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>0:40 How dbt makes data transformation easier</p><p>4:52 dbt and avoiding bad data habits</p><p>14:23 Agreeing on organizational ground truths</p><p>19:04 Staying current while running a company</p><p>22:15 The origin story of dbt</p><p>26:08 Why dbt is conceptually simple but hard to execute </p><p>34:47 The dbt community and the bottom-up mindset</p><p>41:50 The future of data and operations</p><p>47:41 dbt and machine learning</p><p>49:17 Why SQL is so ubiquitous</p><p>55:20 Bridging the gap between the ML and data worlds</p><p>1:00:22 Outro</p><p>---</p><p>Connect with Tristan:</p><p>📍 Twitter: https://twitter.com/jthandy</p><p>📍 The Analytics Engineering Roundup: https://roundup.getdbt.com/</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Cayla Sharp, Angelica Pan, Sanyam Bhutani, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">452f24ae-6fe1-4116-83ec-9c9f36ffd0b9</guid><itunes:image href="https://artwork.captivate.fm/83b5e25e-37ba-4de1-939f-efe266214b5d/UQgwAp2qOu65eKrug5m50Q0E.jpg"/><pubDate>Thu, 09 Jun 2022 08:00:00 -0400</pubDate><enclosure 
url="https://podcasts.captivate.fm/media/605ac116-15ea-453f-8a35-2eb69f077699/GD-Tristan-20Handy-1.mp3" length="87660093" type="audio/mpeg"/><itunes:duration>01:00:48</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Johannes Otterbach — Unlocking ML for Traditional Companies</title><itunes:title>Johannes Otterbach — Unlocking ML for Traditional Companies</itunes:title><description><![CDATA[<p>Johannes Otterbach is VP of Machine Learning Research at Merantix Momentum, an ML consulting studio that helps their clients build AI solutions.</p><p>Johannes and Lukas talk about Johannes' background in physics and applications of ML to quantum computing, why Merantix is investing in creating a cloud-agnostic tech stack, and the unique challenges of developing and deploying models for different customers. They also discuss some of Johannes' articles on the impact of NLP models and the future of AI regulations.</p><p>Show notes (transcript and links): http://wandb.me/gd-johannes-otterbach</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>1:04 Quantum computing and ML applications</p><p>9:21 Merantix, Ventures, and ML consulting</p><p>19:09 Building a cloud-agnostic tech stack</p><p>24:40 The open source tooling ecosystem </p><p>30:28 Handing off models to customers</p><p>31:42 The impact of NLP models on the real world</p><p>35:40 Thoughts on AI and regulation</p><p>40:10 Statistical physics and optimization problems</p><p>42:50 The challenges of getting high-quality data</p><p>44:30 Outro</p><p>---</p><p>Connect with Johannes:</p><p>📍 Twitter: https://twitter.com/jsotterbach</p><p>📍 Personal website: http://jotterbach.github.io/</p><p>📍 Careers at Merantix Momentum: https://merantix-momentum.com/about#jobs</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Cayla Sharp, Angelica Pan, Sanyam Bhutani, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple 
Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></description><content:encoded><![CDATA[<p>Johannes Otterbach is VP of Machine Learning Research at Merantix Momentum, an ML consulting studio that helps their clients build AI solutions.</p><p>Johannes and Lukas talk about Johannes' background in physics and applications of ML to quantum computing, why Merantix is investing in creating a cloud-agnostic tech stack, and the unique challenges of developing and deploying models for different customers. They also discuss some of Johannes' articles on the impact of NLP models and the future of AI regulations.</p><p>Show notes (transcript and links): http://wandb.me/gd-johannes-otterbach</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>1:04 Quantum computing and ML applications</p><p>9:21 Merantix, Ventures, and ML consulting</p><p>19:09 Building a cloud-agnostic tech stack</p><p>24:40 The open source tooling ecosystem </p><p>30:28 Handing off models to customers</p><p>31:42 The impact of NLP models on the real world</p><p>35:40 Thoughts on AI and regulation</p><p>40:10 Statistical physics and optimization problems</p><p>42:50 The challenges of getting high-quality data</p><p>44:30 Outro</p><p>---</p><p>Connect with Johannes:</p><p>📍 Twitter: https://twitter.com/jsotterbach</p><p>📍 Personal website: http://jotterbach.github.io/</p><p>📍 Careers at Merantix Momentum: https://merantix-momentum.com/about#jobs</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Cayla Sharp, Angelica Pan, Sanyam Bhutani, Lavanya Shukla</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid 
isPermaLink="false">afe9ceed-cce3-462c-9ecd-eb1956acd21d</guid><itunes:image href="https://artwork.captivate.fm/dcfc5b7c-d2d7-466f-9f39-f0636d364280/yAlxHEtuRPq_RJr6p3f7HYp_.jpg"/><pubDate>Thu, 12 May 2022 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e5aa451c-49fe-4092-8610-c3ed37bee6ce/GD-Johannes-V1.mp3" length="64857425" type="audio/mpeg"/><itunes:duration>44:50</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Mircea Neagovici — Robotic Process Automation (RPA) and ML</title><itunes:title>Mircea Neagovici — Robotic Process Automation (RPA) and ML</itunes:title><description><![CDATA[<p>Mircea Neagovici is VP, AI and Research at UiPath, where his team works on task mining and other ways of combining robotic process automation (RPA) with machine learning for their B2B products.</p><p>Mircea and Lukas talk about the challenges of allowing customers to fine-tune their models, the trade-offs between traditional ML and more complex deep learning models, and how Mircea transitioned from a more traditional software engineering role to running a machine learning organization.</p><p>Show notes (transcript and links): http://wandb.me/gd-mircea-neagovici</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>1:05 Robotic Process Automation (RPA)</p><p>4:20 RPA and machine learning at UiPath</p><p>8:20 Fine-tuning &amp; PyTorch vs TensorFlow</p><p>14:50 Monitoring models in production</p><p>16:33 Task mining</p><p>22:37 Trade-offs in ML models</p><p>29:45 Transitioning from software engineering to ML</p><p>34:02 ML teams vs engineering teams</p><p>40:41 Spending more time on data</p><p>43:55 The organizational machinery behind ML models</p><p>45:57 Outro</p><p>---</p><p>Connect with Mircea:</p><p>📍 LinkedIn: https://www.linkedin.com/in/mirceaneagovici/</p><p>📍 Careers at UiPath: https://www.uipath.com/company/careers</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 
Producers: Cayla Sharp, Angelica Pan, Sanyam Bhutani, Lavanya Shukla</p>]]></description><content:encoded><![CDATA[<p>Mircea Neagovici is VP, AI and Research at UiPath, where his team works on task mining and other ways of combining robotic process automation (RPA) with machine learning for their B2B products.</p><p>Mircea and Lukas talk about the challenges of allowing customers to fine-tune their models, the trade-offs between traditional ML and more complex deep learning models, and how Mircea transitioned from a more traditional software engineering role to running a machine learning organization.</p><p>Show notes (transcript and links): http://wandb.me/gd-mircea-neagovici</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro﻿</p><p>1:05 Robotic Process Automation (RPA)﻿</p><p>4:20 RPA and machine learning at UiPath﻿</p><p>8:20 Fine-tuning &amp; PyTorch vs TensorFlow﻿</p><p>14:50 Monitoring models in production﻿</p><p>16:33 Task mining﻿</p><p>22:37 Trade-offs in ML models﻿</p><p>29:45 Transitioning from software engineering to ML﻿</p><p>34:02 ML teams vs engineering teams﻿</p><p>40:41 Spending more time on data﻿</p><p>43:55 The organizational machinery behind ML models﻿</p><p>45:57 Outro</p><p>---</p><p>Connect with Mircea:</p><p>📍 LinkedIn: https://www.linkedin.com/in/mirceaneagovici/</p><p>📍 Careers at UiPath: https://www.uipath.com/company/careers</p><p>---</p><p>💬 Host: Lukas Biewald</p><p>📹 Producers: Cayla Sharp, Angelica Pan, Sanyam Bhutani, Lavanya Shukla</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">7d4d10a0-c7bd-4062-930e-4fa48848992e</guid><itunes:image href="https://artwork.captivate.fm/01e8772f-1bc2-42f2-afc0-2c90e4a2d16d/_SusOPOi14UhSZF8YSEIvEoO.jpg"/><pubDate>Thu, 21 Apr 2022 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/0e308087-f262-49de-b4d5-b29cc55c9879/GD-Mircea-V3.mp3" length="44736206" 
type="audio/mpeg"/><itunes:duration>46:22</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Jensen Huang — NVIDIA’s CEO on the Next Generation of AI and MLOps</title><itunes:title>Jensen Huang — NVIDIA&apos;s CEO on the Next Generation of AI and MLOps</itunes:title><description><![CDATA[<p>Jensen Huang is founder and CEO of NVIDIA, whose GPUs sit at the heart of the majority of machine learning models today.</p><p>Jensen shares the story behind NVIDIA's expansion from gaming to deep learning acceleration, leadership lessons that he's learned over the last few decades, and why we need a virtual world that obeys the laws of physics (aka the Omniverse) in order to take AI to the next era. Jensen and Lukas also talk about the singularity, the slow-but-steady approach to building a new market, and the importance of MLOps.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-jensen-huang</p><p>---</p><p>⏳ Timestamps:</p><p>0:00 Intro</p><p>0:50 Why NVIDIA moved into the deep learning space</p><p>7:33 Balancing the compute needs of different audiences</p><p>10:40 Quantum computing, Huang's Law, and the singularity</p><p>15:53 Democratizing scientific computing</p><p>20:59 How Jensen stays current with technology trends</p><p>25:10 The global chip shortage</p><p>27:00 Leadership lessons that Jensen has learned</p><p>32:32 Keeping a steady vision for NVIDIA</p><p>35:48 Omniverse and the next era of AI</p><p>42:00 ML topics that Jensen's excited about</p><p>45:05 Why MLOps is vital</p><p>48:38 Outro</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></description><content:encoded><![CDATA[<p>Jensen Huang is founder and CEO of NVIDIA, whose GPUs sit at the heart of the majority of machine learning 
models today.</p><p>Jensen shares the story behind NVIDIA's expansion from gaming to deep learning acceleration, leadership lessons that he's learned over the last few decades, and why we need a virtual world that obeys the laws of physics (aka the Omniverse) in order to take AI to the next era. Jensen and Lukas also talk about the singularity, the slow-but-steady approach to building a new market, and the importance of MLOps.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-jensen-huang</p><p>---</p><p>⏳ Timestamps:</p><p>0:00 Intro</p><p>0:50 Why NVIDIA moved into the deep learning space</p><p>7:33 Balancing the compute needs of different audiences</p><p>10:40 Quantum computing, Huang's Law, and the singularity</p><p>15:53 Democratizing scientific computing</p><p>20:59 How Jensen stays current with technology trends</p><p>25:10 The global chip shortage</p><p>27:00 Leadership lessons that Jensen has learned</p><p>32:32 Keeping a steady vision for NVIDIA</p><p>35:48 Omniverse and the next era of AI</p><p>42:00 ML topics that Jensen's excited about</p><p>45:05 Why MLOps is vital</p><p>48:38 Outro</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">c944bac0-a0b2-4050-bf31-915d962013a1</guid><itunes:image href="https://artwork.captivate.fm/f43233bf-708e-4ea4-b419-f09996ce0475/JnUm4cr58owuD5yoEAllCB9m.jpg"/><pubDate>Thu, 03 Mar 2022 10:30:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/3e4904e5-70c4-4ba3-9bad-6503a346cb33/gd-jensen-v6.mp3" length="47253599" type="audio/mpeg"/><itunes:duration>48:55</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Peter &amp; 
Boris — Fine-tuning OpenAI&apos;s GPT-3</title><itunes:title>Peter &amp; Boris — Fine-tuning OpenAI&apos;s GPT-3</itunes:title><description><![CDATA[<p>Peter Welinder is VP of Product &amp; Partnerships at OpenAI, where he runs product and commercialization efforts for GPT-3, Codex, GitHub Copilot, and more. Boris Dayma is a Machine Learning Engineer at Weights &amp; Biases, and works on integrations and large model training.</p><p>Peter, Boris, and Lukas dive into the world of GPT-3:</p><p>- How people are applying GPT-3 to translation, copywriting, and other commercial tasks</p><p>- The performance benefits of fine-tuning GPT-3</p><p>- Developing an API on top of GPT-3 that works out of the box, but is also flexible and customizable</p><p><br></p><p>They also discuss the new OpenAI and Weights &amp; Biases collaboration, which enables a user to log their GPT-3 fine-tuning projects to W&amp;B with a single line of code.</p><p><br></p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-peter-and-boris</p><p>---</p><p>Connect with Peter &amp; Boris:</p><p>📍 Peter's Twitter: https://twitter.com/npew</p><p>📍 Boris' Twitter: https://twitter.com/borisdayma</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>1:01 Solving real-world problems with GPT-3</p><p>6:57 Applying GPT-3 to translation tasks</p><p>14:58 Copywriting and other commercial GPT-3 applications</p><p>20:22 The OpenAI API and fine-tuning GPT-3</p><p>28:22 Logging GPT-3 fine-tuning projects to W&amp;B</p><p>38:25 Engineering challenges behind OpenAI's API</p><p>43:15 Outro</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></description><content:encoded><![CDATA[<p>Peter Welinder is VP of Product &amp; Partnerships at OpenAI, where he runs product and commercialization efforts for GPT-3, Codex, GitHub Copilot, and more. Boris Dayma is a Machine Learning Engineer at Weights &amp; Biases, and works on integrations and large model training.</p><p>Peter, Boris, and Lukas dive into the world of GPT-3:</p><p>- How people are applying GPT-3 to translation, copywriting, and other commercial tasks</p><p>- The performance benefits of fine-tuning GPT-3</p><p>- Developing an API on top of GPT-3 that works out of the box, but is also flexible and customizable</p><p><br></p><p>They also discuss the new OpenAI and Weights &amp; Biases collaboration, which enables a user to log their GPT-3 fine-tuning projects to W&amp;B with a single line of code.</p><p><br></p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-peter-and-boris</p><p>---</p><p>Connect with Peter &amp; Boris:</p><p>📍 Peter's Twitter: https://twitter.com/npew</p><p>📍 Boris' Twitter: https://twitter.com/borisdayma</p><p>---</p><p>⏳ Timestamps: </p><p>0:00 Intro</p><p>1:01 Solving real-world problems with GPT-3</p><p>6:57 Applying GPT-3 to translation tasks</p><p>14:58 Copywriting and other commercial GPT-3 applications</p><p>20:22 The OpenAI API and fine-tuning GPT-3</p><p>28:22 Logging GPT-3 fine-tuning projects to W&amp;B</p><p>38:25 Engineering challenges behind OpenAI's API</p><p>43:15 Outro</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">e56f1a79-d458-4e8d-a2d4-70b07a3ee719</guid><itunes:image href="https://artwork.captivate.fm/607999d0-8026-4ce9-8561-a8c19a5457cb/BhVttBGcANHoMU92J52xrbBt.jpg"/><pubDate>Thu, 10 Feb 2022 11:20:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/35ebd136-23ea-45b2-b667-7c6004d4d481/gd-peterw-boris-v3.mp3" length="42416212" 
type="audio/mpeg"/><itunes:duration>43:39</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Ion Stoica — Spark, Ray, and Enterprise Open Source</title><itunes:title>Ion Stoica — Spark, Ray, and Enterprise Open Source</itunes:title><description><![CDATA[<p>Ion Stoica is co-creator of the distributed computing frameworks Spark and Ray, and co-founder and Executive Chairman of Databricks and Anyscale. He is also a Professor of computer science at UC Berkeley and Principal Investigator of RISELab, a five-year research lab that develops technology for low-latency, intelligent decisions.</p><p>Ion and Lukas chat about the challenges of making a simple (but good!) distributed framework, the similarities and differences between developing Spark and Ray, and how Spark and Ray led to the formation of Databricks and Anyscale. Ion also reflects on the early startup days, from deciding to commercialize to picking co-founders, and shares advice on building a successful company.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-ion-stoica</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>0:56 Ray, Anyscale, and making a distributed framework</p><p>11:39 How Spark informed the development of Ray</p><p>18:53 The story behind Spark and Databricks</p><p>33:00 Why TensorFlow and PyTorch haven't monetized</p><p>35:35 Picking co-founders and other startup advice</p><p>46:04  The early signs of sky computing</p><p>49:24 Breaking problems down and prioritizing</p><p>53:17 Outro</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></description><content:encoded><![CDATA[<p>Ion Stoica is co-creator of the distributed computing frameworks Spark and Ray, and co-founder and Executive Chairman of Databricks and 
Anyscale. He is also a Professor of computer science at UC Berkeley and Principal Investigator of RISELab, a five-year research lab that develops technology for low-latency, intelligent decisions.</p><p>Ion and Lukas chat about the challenges of making a simple (but good!) distributed framework, the similarities and differences between developing Spark and Ray, and how Spark and Ray led to the formation of Databricks and Anyscale. Ion also reflects on the early startup days, from deciding to commercialize to picking co-founders, and shares advice on building a successful company.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-ion-stoica</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>0:56 Ray, Anyscale, and making a distributed framework</p><p>11:39 How Spark informed the development of Ray</p><p>18:53 The story behind Spark and Databricks</p><p>33:00 Why TensorFlow and PyTorch haven't monetized</p><p>35:35 Picking co-founders and other startup advice</p><p>46:04  The early signs of sky computing</p><p>49:24 Breaking problems down and prioritizing</p><p>53:17 Outro</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">d0f3e12b-6aef-4072-9e58-e1f921d2745a</guid><itunes:image href="https://artwork.captivate.fm/dec3fec9-01a5-4cd0-9ee8-44161f80939a/MzX-4XkgCNTKLNPB6Dwe4Rxk.jpg"/><pubDate>Thu, 20 Jan 2022 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/3072eb79-f1fd-4f1b-9ed9-28a0c94e149c/gd-ion-stoica-v2.mp3" length="52074821" type="audio/mpeg"/><itunes:duration>53:42</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Stephan Fabel — Efficient 
Supercomputing with NVIDIA&apos;s Base Command Platform</title><itunes:title>Stephan Fabel — Efficient Supercomputing with NVIDIA&apos;s Base Command Platform</itunes:title><description><![CDATA[<p>Stephan Fabel is Senior Director of Infrastructure Systems &amp; Software at NVIDIA, where he works on Base Command, a software platform to coordinate access to NVIDIA's DGX SuperPOD infrastructure.</p><p>Lukas and Stephan talk about why having a supercomputer is one thing but using it effectively is another, why a deeper understanding of hardware on the practitioner level is becoming more advantageous, and which areas of the ML tech stack NVIDIA is looking to expand into.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-stephan-fabel</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>1:09 NVIDIA Base Command and DGX SuperPOD</p><p>10:33 The challenges of multi-node processing at scale</p><p>18:35 Why it's hard to use a supercomputer effectively</p><p>25:14 The advantages of de-abstracting hardware</p><p>29:09 Understanding Base Command's product-market fit</p><p>36:59 Data center infrastructure as a value center</p><p>42:13 Base Command's role in tech stacks</p><p>47:16 Why crowdsourcing is underrated</p><p>49:24 The challenges of scaling beyond a POC</p><p>51:39 Outro</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></description><content:encoded><![CDATA[<p>Stephan Fabel is Senior Director of Infrastructure Systems &amp; Software at NVIDIA, where he works on Base Command, a software platform to coordinate access to NVIDIA's DGX SuperPOD infrastructure.</p><p>Lukas and Stephan talk about why having a supercomputer is one thing but using it effectively is another, why a deeper understanding of hardware on the practitioner level is becoming more advantageous, 
and which areas of the ML tech stack NVIDIA is looking to expand into.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-stephan-fabel</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>1:09 NVIDIA Base Command and DGX SuperPOD</p><p>10:33 The challenges of multi-node processing at scale</p><p>18:35 Why it's hard to use a supercomputer effectively</p><p>25:14 The advantages of de-abstracting hardware</p><p>29:09 Understanding Base Command's product-market fit</p><p>36:59 Data center infrastructure as a value center</p><p>42:13 Base Command's role in tech stacks</p><p>47:16 Why crowdsourcing is underrated</p><p>49:24 The challenges of scaling beyond a POC</p><p>51:39 Outro</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">3a506bac-f918-47b1-875b-f6fd34808165</guid><itunes:image href="https://artwork.captivate.fm/4574d4a4-e403-41de-95c7-83b52ce2f09b/bQsAjv4HiQaJxn3FrPfKNC6k.jpg"/><pubDate>Thu, 06 Jan 2022 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/55f01816-f0da-4974-a64d-57b472f0e15b/gd-stephan-fabel-v3.mp3" length="50115512" type="audio/mpeg"/><itunes:duration>52:01</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Chris Padwick — Smart Machines for More Sustainable Farming</title><itunes:title>Chris Padwick — Smart Machines for More Sustainable Farming</itunes:title><description><![CDATA[<p>Chris Padwick is Director of Computer Vision Machine Learning at Blue River Technology, a subsidiary of John Deere. 
Their core product, See &amp; Spray, is a weeding robot that identifies crops and weeds in order to spray only the weeds with herbicide.</p><p>Chris and Lukas dive into the challenges of bringing See &amp; Spray to life, from the hard computer vision problem of classifying weeds from crops, to the engineering feat of building and updating embedded systems that can survive on a farming machine in the field. Chris also explains why user feedback is crucial, and shares some of the surprising product insights he's gained from working with farmers.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-chris-padwick</p><p>---</p><p>Connect with Chris:</p><p>📍 LinkedIn: https://www.linkedin.com/in/chris-padwick-75b5761/</p><p>📍 Blue River on Twitter: https://twitter.com/BlueRiverTech</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>1:09 How does See &amp; Spray reduce herbicide usage?</p><p>9:15 Classifying weeds and crops in real time</p><p>17:45 Insights from deployment and user feedback</p><p>29:08 Why weed and crop classification is surprisingly hard</p><p>37:33 Improving and updating models in the field</p><p>40:55 Blue River's ML stack</p><p>44:55 Autonomous tractors and upcoming directions</p><p>48:05 Why data pipelines are underrated</p><p>52:10 The challenges of scaling software &amp; hardware</p><p>54:44 Outro</p><p>55:55 Bonus: Transporters and the singularity</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></description><content:encoded><![CDATA[<p>Chris Padwick is Director of Computer Vision Machine Learning at Blue River Technology, a subsidiary of John Deere. 
Their core product, See &amp; Spray, is a weeding robot that identifies crops and weeds in order to spray only the weeds with herbicide.</p><p>Chris and Lukas dive into the challenges of bringing See &amp; Spray to life, from the hard computer vision problem of classifying weeds from crops, to the engineering feat of building and updating embedded systems that can survive on a farming machine in the field. Chris also explains why user feedback is crucial, and shares some of the surprising product insights he's gained from working with farmers.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-chris-padwick</p><p>---</p><p>Connect with Chris:</p><p>📍 LinkedIn: https://www.linkedin.com/in/chris-padwick-75b5761/</p><p>📍 Blue River on Twitter: https://twitter.com/BlueRiverTech</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>1:09 How does See &amp; Spray reduce herbicide usage?</p><p>9:15 Classifying weeds and crops in real time</p><p>17:45 Insights from deployment and user feedback</p><p>29:08 Why weed and crop classification is surprisingly hard</p><p>37:33 Improving and updating models in the field</p><p>40:55 Blue River's ML stack</p><p>44:55 Autonomous tractors and upcoming directions</p><p>48:05 Why data pipelines are underrated</p><p>52:10 The challenges of scaling software &amp; hardware</p><p>54:44 Outro</p><p>55:55 Bonus: Transporters and the singularity</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">fb339bcd-7c22-48ed-b372-3bbb8bfe2496</guid><itunes:image href="https://artwork.captivate.fm/25aa3889-8228-4c21-8a62-c644ba62eff1/Ra5yXsMa7gjdV26TgO4U52do.png"/><pubDate>Thu, 23 Dec 2021 08:00:00 -0400</pubDate><enclosure 
url="https://podcasts.captivate.fm/media/55dd9076-d900-48d5-9ec8-295b18d1127b/gd-chris-padwick-v3.mp3" length="58740497" type="audio/mpeg"/><itunes:duration>01:00:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Kathryn Hume — Financial Models, ML, and 17th-Century Philosophy</title><itunes:title>Kathryn Hume — Financial Models, ML, and 17th-Century Philosophy</itunes:title><description><![CDATA[<p>Kathryn Hume is Vice President, Digital Investments Technology at the Royal Bank of Canada (RBC). At the time of recording, she was Interim Head of Borealis AI, RBC's research institute for machine learning.</p><p>Kathryn and Lukas talk about ML applications in finance, from building a personal finance forecasting model to applying reinforcement learning to trade execution, and take a philosophical detour into the 17th century as they speculate on what Newton and Descartes would have thought about machine learning.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-kathryn-hume</p><p>---</p><p>Connect with Kathryn:</p><p>📍 Twitter: https://twitter.com/humekathryn</p><p>📍 Website: https://quamproxime.com/</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>0:54 Building a personal finance forecasting model</p><p>10:54 Applying RL to trade execution</p><p>18:55 Transparent financial models and fairness</p><p>26:20 Semantic parsing and building a text-to-SQL interface</p><p>29:20 From comparative literature and math to product</p><p>37:33 What would Newton and Descartes think about ML?</p><p>44:15 On sentient AI and transporters</p><p>47:33 Why causal inference is under-appreciated</p><p>49:25 The challenges of integrating models into the business</p><p>51:45 Outro</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></description><content:encoded><![CDATA[<p>Kathryn Hume is Vice President, Digital Investments Technology at the Royal Bank of Canada (RBC). At the time of recording, she was Interim Head of Borealis AI, RBC's research institute for machine learning.</p><p>Kathryn and Lukas talk about ML applications in finance, from building a personal finance forecasting model to applying reinforcement learning to trade execution, and take a philosophical detour into the 17th century as they speculate on what Newton and Descartes would have thought about machine learning.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-kathryn-hume</p><p>---</p><p>Connect with Kathryn:</p><p>📍 Twitter: https://twitter.com/humekathryn</p><p>📍 Website: https://quamproxime.com/</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>0:54 Building a personal finance forecasting model</p><p>10:54 Applying RL to trade execution</p><p>18:55 Transparent financial models and fairness</p><p>26:20 Semantic parsing and building a text-to-SQL interface</p><p>29:20 From comparative literature and math to product</p><p>37:33 What would Newton and Descartes think about ML?</p><p>44:15 On sentient AI and transporters</p><p>47:33 Why causal inference is under-appreciated</p><p>49:25 The challenges of integrating models into the business</p><p>51:45 Outro</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">83f30764-6296-42cb-8a0d-75b1da485f83</guid><itunes:image href="https://artwork.captivate.fm/551ed314-508d-4f8e-8a76-6342c9b01d27/sxVfXlo_efKFfmINqRCbLD7_.jpg"/><pubDate>Thu, 16 Dec 2021 08:00:00 -0400</pubDate><enclosure 
url="https://podcasts.captivate.fm/media/cd16f063-d41a-49ff-8cfc-218b29fb6236/gd-kathryn-hume-v4.mp3" length="50258576" type="audio/mpeg"/><itunes:duration>52:08</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Sean &amp; Greg — Biology and ML for Drug Discovery</title><itunes:title>Sean &amp; Greg — Biology and ML for Drug Discovery</itunes:title><description><![CDATA[<p>Sean McClain is the founder and CEO, and Gregory Hannum is the VP of AI Research at Absci, a biotech company that's using deep learning to expedite drug discovery and development.</p><p>Lukas, Sean, and Greg talk about why Absci started investing so heavily in ML research (it all comes back to the data), what it'll take to build the GPT-3 of DNA, and where the future of pharma is headed. Sean and Greg also share some of the challenges of building cross-functional teams and combining two highly specialized fields like biology and ML.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-sean-and-greg</p><p>---</p><p>Connect with Sean and Greg:</p><p>📍 Sean's Twitter: https://twitter.com/seanrmcclain</p><p>📍 Greg's Twitter: https://twitter.com/gregory_hannum</p><p>📍 Absci's Twitter: https://twitter.com/abscibio</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>0:53 How Absci merges biology and AI</p><p>11:24 Why Absci started investing in ML</p><p>19:00 Creating the GPT-3 of DNA</p><p>25:34 Investing in data collection and in ML teams</p><p>33:14 Clinical trials and Absci's revenue structure</p><p>38:17 Combining knowledge from different domains</p><p>45:22 The potential of multitask learning</p><p>50:43 Why biological data is tricky to work with</p><p>55:00 Outro</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: 
http://wandb.me/spotify​</p>]]></description><content:encoded><![CDATA[<p>Sean McClain is the founder and CEO, and Gregory Hannum is the VP of AI Research at Absci, a biotech company that's using deep learning to expedite drug discovery and development.</p><p>Lukas, Sean, and Greg talk about why Absci started investing so heavily in ML research (it all comes back to the data), what it'll take to build the GPT-3 of DNA, and where the future of pharma is headed. Sean and Greg also share some of the challenges of building cross-functional teams and combining two highly specialized fields like biology and ML.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-sean-and-greg</p><p>---</p><p>Connect with Sean and Greg:</p><p>📍 Sean's Twitter: https://twitter.com/seanrmcclain</p><p>📍 Greg's Twitter: https://twitter.com/gregory_hannum</p><p>📍 Absci's Twitter: https://twitter.com/abscibio</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>0:53 How Absci merges biology and AI</p><p>11:24 Why Absci started investing in ML</p><p>19:00 Creating the GPT-3 of DNA</p><p>25:34 Investing in data collection and in ML teams</p><p>33:14 Clinical trials and Absci's revenue structure</p><p>38:17 Combining knowledge from different domains</p><p>45:22 The potential of multitask learning</p><p>50:43 Why biological data is tricky to work with</p><p>55:00 Outro</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">0a4a7603-099c-4deb-8a5e-c12c34b1364c</guid><itunes:image href="https://artwork.captivate.fm/d6a88e9b-ae1e-499b-be8b-e421aa2bef49/6Pao6SrdR99C-RYkoe6TDPj9.jpg"/><pubDate>Thu, 02 Dec 2021 08:00:00 -0400</pubDate><enclosure 
url="https://podcasts.captivate.fm/media/0795db11-7f83-4077-a1a9-2b080c2e70e7/gd-absci-v2.mp3" length="53457701" type="audio/mpeg"/><itunes:duration>55:25</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Chris, Shawn, and Lukas — The Weights &amp; Biases Journey</title><itunes:title>Chris, Shawn, and Lukas — The Weights &amp; Biases Journey</itunes:title><description><![CDATA[<p>You might know him as the host of Gradient Dissent, but Lukas is also the CEO of Weights &amp; Biases, a developer-first ML tools platform!</p><p>In this special episode, the three W&amp;B co-founders — Chris (CVP), Shawn (CTO), and Lukas (CEO) — sit down to tell the company's origin stories, reflect on the highs and lows, and give advice to engineers looking to start their own business.</p><p>Chris reveals the W&amp;B server architecture (tl;dr - React + GraphQL), Shawn shares his favorite product feature (it's a hidden frontend layer), and Lukas explains why it's so important to work with customers that inspire you.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-wandb-cofounders</p><p>---</p><p>Connect with us:</p><p>📍 Chris' Twitter: https://twitter.com/vanpelt</p><p>📍 Shawn's Twitter: https://twitter.com/shawnup</p><p>📍 Lukas' Twitter: https://twitter.com/l2k</p><p>📍 W&amp;B's Twitter: https://twitter.com/weights_biases</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>1:29 The stories behind Weights &amp; Biases</p><p>7:45 The W&amp;B tech stack</p><p>9:28 Looking back at the beginning</p><p>11:42 Hallmark moments</p><p>14:49 Favorite product features</p><p>16:49 Rewriting the W&amp;B backend</p><p>18:21 The importance of customer feedback</p><p>21:18 How Chris and Shawn have changed</p><p>22:35 How the ML space has changed</p><p>28:24 Staying positive when things look bleak</p><p>32:19 Lukas' advice to new entrepreneurs</p><p>35:29 Hopes for the next five 
years</p><p>38:09 Making a paintbot &amp; model understanding</p><p>41:30 Biggest bottlenecks in deployment</p><p>44:08 Outro</p><p>44:38 Bonus: Under- vs overrated technologies</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts​​</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts​</p><p>👉 Spotify: http://wandb.me/spotify​</p>]]></description><content:encoded><![CDATA[<p>You might know him as the host of Gradient Dissent, but Lukas is also the CEO of Weights &amp; Biases, a developer-first ML tools platform!</p><p>In this special episode, the three W&amp;B co-founders — Chris (CVP), Shawn (CTO), and Lukas (CEO) — sit down to tell the company's origin stories, reflect on the highs and lows, and give advice to engineers looking to start their own business.</p><p>Chris reveals the W&amp;B server architecture (tl;dr - React + GraphQL), Shawn shares his favorite product feature (it's a hidden frontend layer), and Lukas explains why it's so important to work with customers that inspire you.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-wandb-cofounders</p><p>---</p><p>Connect with us:</p><p>📍 Chris' Twitter: https://twitter.com/vanpelt</p><p>📍 Shawn's Twitter: https://twitter.com/shawnup</p><p>📍 Lukas' Twitter: https://twitter.com/l2k</p><p>📍 W&amp;B's Twitter: https://twitter.com/weights_biases</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>1:29 The stories behind Weights &amp; Biases</p><p>7:45 The W&amp;B tech stack</p><p>9:28 Looking back at the beginning</p><p>11:42 Hallmark moments</p><p>14:49 Favorite product features</p><p>16:49 Rewriting the W&amp;B backend</p><p>18:21 The importance of customer feedback</p><p>21:18 How Chris and Shawn have changed</p><p>22:35 How the ML space has changed</p><p>28:24 Staying positive when things look bleak</p><p>32:19 Lukas' advice to new entrepreneurs</p><p>35:29 Hopes for the next five years</p><p>38:09 Making 
a paintbot &amp; model understanding</p><p>41:30 Biggest bottlenecks in deployment</p><p>44:08 Outro</p><p>44:38 Bonus: Under- vs overrated technologies</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">dd800553-c505-4cc0-8d4a-c6cd18de9ed6</guid><itunes:image href="https://artwork.captivate.fm/528c4041-08c1-47aa-894f-c48a16318eaf/i6P-CvY8UcaoYUCWtgFiP3c4.jpg"/><pubDate>Fri, 05 Nov 2021 07:45:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c3cba7d5-65d4-4084-8921-fff6e0ff341b/gd-founders-revert-v2.mp3" length="47567266" type="audio/mpeg"/><itunes:duration>49:13</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Pete Warden — Practical Applications of TinyML</title><itunes:title>Pete Warden — Practical Applications of TinyML</itunes:title><description><![CDATA[<p>Pete is the Technical Lead of the TensorFlow Micro team, which works on deep learning for mobile and embedded devices.</p><p>Lukas and Pete talk about hacking a Raspberry Pi to run AlexNet, the power and size constraints of embedded devices, and techniques to reduce model size. 
Pete also explains real-world applications of TensorFlow Lite Micro and shares what it's been like to work on TensorFlow from the beginning.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-pete-warden</p><p>---</p><p>Connect with Pete:</p><p>📍 Twitter: https://twitter.com/petewarden</p><p>📍 Website: https://petewarden.com/</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>1:23 Hacking a Raspberry Pi to run neural nets</p><p>13:50 Model and hardware architectures</p><p>18:56 Training a magic wand</p><p>21:47 Raspberry Pi vs Arduino</p><p>27:51 Reducing model size</p><p>33:29 Training on the edge</p><p>39:47 What it's like to work on TensorFlow</p><p>47:45 Improving datasets and model deployment</p><p>53:05 Outro</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></description><content:encoded><![CDATA[<p>Pete is the Technical Lead of the TensorFlow Micro team, which works on deep learning for mobile and embedded devices.</p><p>Lukas and Pete talk about hacking a Raspberry Pi to run AlexNet, the power and size constraints of embedded devices, and techniques to reduce model size. 
Pete also explains real-world applications of TensorFlow Lite Micro and shares what it's been like to work on TensorFlow from the beginning.</p><p>The complete show notes (transcript and links) can be found here: http://wandb.me/gd-pete-warden</p><p>---</p><p>Connect with Pete:</p><p>📍 Twitter: https://twitter.com/petewarden</p><p>📍 Website: https://petewarden.com/</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>1:23 Hacking a Raspberry Pi to run neural nets</p><p>13:50 Model and hardware architectures</p><p>18:56 Training a magic wand</p><p>21:47 Raspberry Pi vs Arduino</p><p>27:51 Reducing model size</p><p>33:29 Training on the edge</p><p>39:47 What it's like to work on TensorFlow</p><p>47:45 Improving datasets and model deployment</p><p>53:05 Outro</p><p>---</p><p>Subscribe and listen to our podcast today!</p><p>👉 Apple Podcasts: http://wandb.me/apple-podcasts</p><p>👉 Google Podcasts: http://wandb.me/google-podcasts</p><p>👉 Spotify: http://wandb.me/spotify</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">fd469f20-939e-435f-a9b8-b4a577263acc</guid><itunes:image href="https://artwork.captivate.fm/0b622c0f-f508-4531-ac17-57f3af5f86ae/cOZxG83sv1cEqdi6LN6foffD.jpg"/><pubDate>Thu, 21 Oct 2021 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/037cf163-8582-4048-a1c2-e8eac4974f62/gd-pete-warden-v3.mp3" length="51545759" type="audio/mpeg"/><itunes:duration>53:28</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Pieter Abbeel — Robotics, Startups, and Robotics Startups</title><itunes:title>Pieter Abbeel — Robotics, Startups, and Robotics Startups</itunes:title><description><![CDATA[<p>Pieter is the Chief Scientist and Co-founder at Covariant, where his team is building universal AI for robotic manipulation. 
Pieter also hosts The Robot Brains Podcast, in which he explores how far humanity has come in its mission to create conscious computers, mindful machines, and rational robots.</p><p>Lukas and Pieter explore the state of affairs of robotics in 2021, the challenges of achieving consistency and reliability, and what it'll take to make robotics more ubiquitous. Pieter also shares some perspective on entrepreneurship, from how he knew it was time to commercialize Gradescope to what he looks for in co-founders to why he started Covariant.</p><p>Show notes: http://wandb.me/gd-pieter-abbeel</p><p>---</p><p>Connect with Pieter:</p><p>📍 Twitter: https://twitter.com/pabbeel</p><p>📍 Website: https://people.eecs.berkeley.edu/~pabbeel/</p><p>📍 The Robot Brains Podcast: https://www.therobotbrains.ai/</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>1:15 The challenges of robotics</p><p>8:10 Progress in robotics</p><p>13:34 Imitation learning and reinforcement learning</p><p>21:37 Simulated data, real data, and reliability</p><p>27:53 The increasing capabilities of robotics</p><p>36:23 Entrepreneurship and co-founding Gradescope</p><p>44:35 The story behind Covariant</p><p>47:50 Pieter's communication tips</p><p>52:13 What Pieter's currently excited about</p><p>55:08 Focusing on good UI and high reliability</p><p>57:01 Outro</p>]]></description><content:encoded><![CDATA[<p>Pieter is the Chief Scientist and Co-founder at Covariant, where his team is building universal AI for robotic manipulation. Pieter also hosts The Robot Brains Podcast, in which he explores how far humanity has come in its mission to create conscious computers, mindful machines, and rational robots.</p><p>Lukas and Pieter explore the state of affairs of robotics in 2021, the challenges of achieving consistency and reliability, and what it'll take to make robotics more ubiquitous. 
Pieter also shares some perspective on entrepreneurship, from how he knew it was time to commercialize Gradescope to what he looks for in co-founders to why he started Covariant.</p><p>Show notes: http://wandb.me/gd-pieter-abbeel</p><p>---</p><p>Connect with Pieter:</p><p>📍 Twitter: https://twitter.com/pabbeel</p><p>📍 Website: https://people.eecs.berkeley.edu/~pabbeel/</p><p>📍 The Robot Brains Podcast: https://www.therobotbrains.ai/</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>1:15 The challenges of robotics</p><p>8:10 Progress in robotics</p><p>13:34 Imitation learning and reinforcement learning</p><p>21:37 Simulated data, real data, and reliability</p><p>27:53 The increasing capabilities of robotics</p><p>36:23 Entrepreneurship and co-founding Gradescope</p><p>44:35 The story behind Covariant</p><p>47:50 Pieter's communication tips</p><p>52:13 What Pieter's currently excited about</p><p>55:08 Focusing on good UI and high reliability</p><p>57:01 Outro</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">01e5536b-3534-4bc4-af77-66a4fd22030b</guid><itunes:image href="https://artwork.captivate.fm/94d3dcae-440f-4e3f-a05f-26ac553c6862/VmokNuItPoLQMXY13h62FH5k.jpg"/><pubDate>Thu, 07 Oct 2021 08:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/441651d3-e394-48c3-bf57-95dd887590b0/gd-pieter-abbeel-v2.mp3" length="55129399" type="audio/mpeg"/><itunes:duration>57:17</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Chris Albon — ML Models and Infrastructure at Wikimedia</title><itunes:title>Chris Albon — ML Models and Infrastructure at Wikimedia</itunes:title><description><![CDATA[<p>In this episode we're joined by Chris Albon, Director of Machine Learning at the Wikimedia Foundation.</p><p>Lukas and Chris talk about Wikimedia's approach to content moderation, what it's like to work in a place so transparent that 
even internal chats are public, how Wikimedia uses machine learning (spoiler: they use a lot of models to help editors), and why they're switching to Kubeflow and Docker. Chris also shares how his focus on outcomes has shaped his career and his approach to technical interviews.</p><p>Show notes: http://wandb.me/gd-chris-albon</p><p>---</p><p>Connect with Chris:</p><p>- Twitter: https://twitter.com/chrisalbon</p><p>- Website: https://chrisalbon.com/</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>1:08 How Wikimedia approaches moderation</p><p>9:55 Working in the open and embracing humility</p><p>16:08 Going down Wikipedia rabbit holes</p><p>20:03 How Wikimedia uses machine learning</p><p>27:38 Wikimedia's ML infrastructure</p><p>42:56 How Chris got into machine learning</p><p>46:43 Machine Learning Flashcards and technical interviews</p><p>52:10 Low-power models and MLOps</p><p>55:58 Outro</p>]]></description><content:encoded><![CDATA[<p>In this episode we're joined by Chris Albon, Director of Machine Learning at the Wikimedia Foundation.</p><p>Lukas and Chris talk about Wikimedia's approach to content moderation, what it's like to work in a place so transparent that even internal chats are public, how Wikimedia uses machine learning (spoiler: they use a lot of models to help editors), and why they're switching to Kubeflow and Docker. 
Chris also shares how his focus on outcomes has shaped his career and his approach to technical interviews.</p><p>Show notes: http://wandb.me/gd-chris-albon</p><p>---</p><p>Connect with Chris:</p><p>- Twitter: https://twitter.com/chrisalbon</p><p>- Website: https://chrisalbon.com/</p><p>---</p><p>Timestamps: </p><p>0:00 Intro</p><p>1:08 How Wikimedia approaches moderation</p><p>9:55 Working in the open and embracing humility</p><p>16:08 Going down Wikipedia rabbit holes</p><p>20:03 How Wikimedia uses machine learning</p><p>27:38 Wikimedia's ML infrastructure</p><p>42:56 How Chris got into machine learning</p><p>46:43 Machine Learning Flashcards and technical interviews</p><p>52:10 Low-power models and MLOps</p><p>55:58 Outro</p>]]></content:encoded><link><![CDATA[http://wandb.me/gd-chris-albon]]></link><guid isPermaLink="false">25072d56-bada-43a9-a79c-26dfe73a96fa</guid><itunes:image href="https://artwork.captivate.fm/98d49a6f-ab2a-4c88-9d74-c807f178349a/TK5rZMxp7TZ1q81ifr2jPOt4.png"/><pubDate>Thu, 23 Sep 2021 07:55:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/7356cf35-46ee-450f-9004-2ddca241df33/gd-chris-albon-v2-1.mp3" length="54089578" type="audio/mpeg"/><itunes:duration>56:15</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Emily M. Bender — Language Models and Linguistics</title><itunes:title>Emily M. Bender — Language Models and Linguistics</itunes:title><description><![CDATA[<p>In this episode, Emily and Lukas dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and why it's important to name the languages we study.</p><p>Show notes (links to papers and transcript): http://wandb.me/gd-emily-m-bender</p><p>---</p><p>Emily M. Bender is a Professor of Linguistics and Faculty Director of the Master's Program in Computational Linguistics at the University of Washington. 
Her research areas include multilingual grammar engineering, variation (within and across languages), the relationship between linguistics and computational linguistics, and societal issues in NLP.</p><p>---</p><p>Timestamps:</p><p>0:00 Sneak peek, intro</p><p>1:03 Stochastic Parrots</p><p>9:57 The societal impact of big language models</p><p>16:49 How language models can be harmful</p><p>26:00 The important difference between linguistic form and meaning</p><p>34:40 The octopus thought experiment</p><p>42:11 Language acquisition and the future of language models</p><p>49:47 Why benchmarks are limited</p><p>54:38 Ways of complementing benchmarks</p><p>1:01:20 The #BenderRule</p><p>1:03:50 Language diversity and linguistics</p><p>1:12:49 Outro</p>]]></description><content:encoded><![CDATA[<p>In this episode, Emily and Lukas dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and why it's important to name the languages we study.</p><p>Show notes (links to papers and transcript): http://wandb.me/gd-emily-m-bender</p><p>---</p><p>Emily M. Bender is a Professor of Linguistics and Faculty Director of the Master's Program in Computational Linguistics at the University of Washington. 
Her research areas include multilingual grammar engineering, variation (within and across languages), the relationship between linguistics and computational linguistics, and societal issues in NLP.</p><p>---</p><p>Timestamps:</p><p>0:00 Sneak peek, intro</p><p>1:03 Stochastic Parrots</p><p>9:57 The societal impact of big language models</p><p>16:49 How language models can be harmful</p><p>26:00 The important difference between linguistic form and meaning</p><p>34:40 The octopus thought experiment</p><p>42:11 Language acquisition and the future of language models</p><p>49:47 Why benchmarks are limited</p><p>54:38 Ways of complementing benchmarks</p><p>1:01:20 The #BenderRule</p><p>1:03:50 Language diversity and linguistics</p><p>1:12:49 Outro</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">d22b4b2b-003d-4a79-b8fd-2939ae865612</guid><itunes:image href="https://artwork.captivate.fm/4206a185-9722-4916-a7e6-53e944a70107/33Rbt9280PrAx2C6lfb4V4sP.jpg"/><pubDate>Thu, 09 Sep 2021 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/355d4ea7-90e1-40cd-b10d-7a2d073163ab/gd-emily-bender-v5.mp3" length="70743343" type="audio/mpeg"/><itunes:duration>01:12:55</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType></item><item><title>Jeff Hammerbacher — From data science to biomedicine</title><itunes:title>Jeff Hammerbacher — From data science to biomedicine</itunes:title><description><![CDATA[<p>Jeff talks about building Facebook's early data team, founding Cloudera, and transitioning into biomedicine with Hammer Lab and Related Sciences.</p><p>(Read more: http://wandb.me/gd-jeff-hammerbacher)</p><p>---</p><p>Jeff Hammerbacher is a scientist, software developer, entrepreneur, and investor. 
Jeff's current work focuses on drug discovery at Related Sciences, a biotech venture creation firm that he co-founded in 2020.</p><p>Prior to his work at Related Sciences, Jeff was the Principal Investigator of Hammer Lab, a founder and the Chief Scientist of Cloudera, an Entrepreneur-in-Residence at Accel, and the manager of the Data team at Facebook.</p><p>---</p><p>Follow Gradient Dissent on Twitter: https://twitter.com/weights_biases</p><p>---</p><p>0:00 Sneak peek, intro</p><p>1:13 The start of Facebook's data science team</p><p>6:53 Facebook's early tech stack</p><p>14:20 Early growth strategies at Facebook</p><p>17:37 The origin story of Cloudera</p><p>24:51 Cloudera's success, in retrospect</p><p>31:05 Jeff's transition into biomedicine</p><p>38:38 Immune checkpoint blockade in cancer therapy</p><p>48:55 Data and techniques for biomedicine</p><p>53:00 Why Jeff created Related Sciences</p><p>56:32 Outro</p>]]></description><content:encoded><![CDATA[<p>Jeff talks about building Facebook's early data team, founding Cloudera, and transitioning into biomedicine with Hammer Lab and Related Sciences.</p><p>(Read more: http://wandb.me/gd-jeff-hammerbacher)</p><p>---</p><p>Jeff Hammerbacher is a scientist, software developer, entrepreneur, and investor. 
Jeff's current work focuses on drug discovery at Related Sciences, a biotech venture creation firm that he co-founded in 2020.</p><p>Prior to his work at Related Sciences, Jeff was the Principal Investigator of Hammer Lab, a founder and the Chief Scientist of Cloudera, an Entrepreneur-in-Residence at Accel, and the manager of the Data team at Facebook.</p><p>---</p><p>Follow Gradient Dissent on Twitter: https://twitter.com/weights_biases</p><p>---</p><p>0:00 Sneak peek, intro</p><p>1:13 The start of Facebook's data science team</p><p>6:53 Facebook's early tech stack</p><p>14:20 Early growth strategies at Facebook</p><p>17:37 The origin story of Cloudera</p><p>24:51 Cloudera's success, in retrospect</p><p>31:05 Jeff's transition into biomedicine</p><p>38:38 Immune checkpoint blockade in cancer therapy</p><p>48:55 Data and techniques for biomedicine</p><p>53:00 Why Jeff created Related Sciences</p><p>56:32 Outro</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">5b828810-0bb8-4a0f-8c73-952c00d2f310</guid><itunes:image href="https://artwork.captivate.fm/c393b733-c60f-41b0-986d-439ad4fcfb7a/z2hlmu3Nd6cyA1aMRrpuxsfi.jpg"/><pubDate>Thu, 26 Aug 2021 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/007d646a-ea29-435c-a951-0207c8933286/gd-jeff-hammerbacher-v3b.mp3" length="54838553" type="audio/mpeg"/><itunes:duration>56:34</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Jeff talks about building Facebook&apos;s early data team, founding Cloudera, and transitioning into biomedicine with Hammer Lab and Related Sciences.</itunes:summary></item><item><title>Josh Bloom — The Link Between Astronomy and ML</title><itunes:title>Josh Bloom — The Link Between Astronomy and ML</itunes:title><description><![CDATA[<p>Josh explains how astronomy and machine learning have informed each other, their current limitations, and where 
their intersection goes from here. </p><p>(<em>Read more: http://wandb.me/gd-josh-bloom</em>)</p><p>---</p><p>Josh is a Professor of Astronomy and Chair of the Astronomy Department at UC Berkeley. His research interests include the intersection of machine learning and physics, time-domain transient events, artificial intelligence, and optical/infrared instrumentation.</p><p>---</p><p>Follow Gradient Dissent on Twitter: https://twitter.com/weights_biases</p><p>---</p><p>0:00 Intro, sneak peek</p><p>1:15 How astronomy has informed ML</p><p>4:20 The big questions in astronomy today</p><p>10:15 On dark matter and dark energy</p><p>16:37 Finding life on other planets</p><p>19:55 Driving advancements in astronomy</p><p>27:05 Putting telescopes in space</p><p>31:05 Why Josh started using ML in his research</p><p>33:54 Crowdsourcing in astronomy</p><p>36:20 How ML has (and hasn't) informed astronomy</p><p>47:22 The next generation of cross-functional grad students</p><p>50:50 How Josh started coding</p><p>56:11 Incentives and maintaining research codebases</p><p>1:00:01 ML4Science's tech stack</p><p>1:02:11 Uncertainty quantification in a sensor-based world</p><p>1:04:28 Why it's not good to always get an answer</p><p>1:07:47 Outro</p>]]></description><content:encoded><![CDATA[<p>Josh explains how astronomy and machine learning have informed each other, their current limitations, and where their intersection goes from here. </p><p>(<em>Read more: http://wandb.me/gd-josh-bloom</em>)</p><p>---</p><p>Josh is a Professor of Astronomy and Chair of the Astronomy Department at UC Berkeley. 
His research interests include the intersection of machine learning and physics, time-domain transient events, artificial intelligence, and optical/infrared instrumentation.</p><p>---</p><p>Follow Gradient Dissent on Twitter: https://twitter.com/weights_biases</p><p>---</p><p>0:00 Intro, sneak peek</p><p>1:15 How astronomy has informed ML</p><p>4:20 The big questions in astronomy today</p><p>10:15 On dark matter and dark energy</p><p>16:37 Finding life on other planets</p><p>19:55 Driving advancements in astronomy</p><p>27:05 Putting telescopes in space</p><p>31:05 Why Josh started using ML in his research</p><p>33:54 Crowdsourcing in astronomy</p><p>36:20 How ML has (and hasn't) informed astronomy</p><p>47:22 The next generation of cross-functional grad students</p><p>50:50 How Josh started coding</p><p>56:11 Incentives and maintaining research codebases</p><p>1:00:01 ML4Science's tech stack</p><p>1:02:11 Uncertainty quantification in a sensor-based world</p><p>1:04:28 Why it's not good to always get an answer</p><p>1:07:47 Outro</p>]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1109004070</guid><itunes:image href="https://artwork.captivate.fm/0e348306-ebce-44c0-9ff0-bbc2084bac6b/artworks-tq5xhepwsslfpvej-cggghw-t3000x3000.jpg"/><pubDate>Fri, 20 Aug 2021 15:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f4fb8c29-5356-467e-9a15-935ede020e50/1109004070-wandb-gd-josh-bloom.mp3" length="65541014" type="audio/mpeg"/><itunes:duration>01:08:16</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Josh explains how astronomy and machine learning have informed each other, their current limitations, and where their intersection goes from here.</itunes:summary></item><item><title>Xavier Amatriain — Building AI-powered Primary 
Care</itunes:title><description><![CDATA[Xavier shares his experience deploying healthcare models, augmenting primary care with AI, the challenges of "ground truth" in medicine, and robustness in ML.

---

Xavier Amatriain is co-founder and CTO of Curai, an ML-based primary care chat system. Previously, he was VP of Engineering at Quora, and Research/Engineering Director at Netflix, where he started and led the Algorithms team responsible for Netflix's recommendation systems.

---

⏳ Timestamps: 
0:00 Sneak peek, intro
0:49 What is Curai?
5:48 The role of AI within Curai
8:44 Why Curai keeps humans in the loop
15:00 Measuring diagnostic accuracy
18:53 Patient safety 
22:39 Different types of models at Curai
25:42 Using GPT-3 to generate training data
32:13 How Curai monitors and debugs models
35:19 Model explainability
39:27 Robustness in ML 
45:52 Connecting metrics to impact
49:32 Outro

🌟 Show notes:
- http://wandb.me/gd-xavier-amatriain
- Transcription of the episode
- Links to papers, projects, and people

---

Follow us on Twitter! 
📍 https://twitter.com/wandb_gd

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud]]></description><content:encoded><![CDATA[Xavier shares his experience deploying healthcare models, augmenting primary care with AI, the challenges of "ground truth" in medicine, and robustness in ML.

---

Xavier Amatriain is co-founder and CTO of Curai, an ML-based primary care chat system. Previously, he was VP of Engineering at Quora, and Research/Engineering Director at Netflix, where he started and led the Algorithms team responsible for Netflix's recommendation systems.

---

⏳ Timestamps: 
0:00 Sneak peek, intro
0:49 What is Curai?
5:48 The role of AI within Curai
8:44 Why Curai keeps humans in the loop
15:00 Measuring diagnostic accuracy
18:53 Patient safety 
22:39 Different types of models at Curai
25:42 Using GPT-3 to generate training data
32:13 How Curai monitors and debugs models
35:19 Model explainability
39:27 Robustness in ML 
45:52 Connecting metrics to impact
49:32 Outro

🌟 Show notes:
- http://wandb.me/gd-xavier-amatriain
- Transcription of the episode
- Links to papers, projects, and people

---

Follow us on Twitter! 
📍 https://twitter.com/wandb_gd

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1096633399</guid><itunes:image href="https://artwork.captivate.fm/158c4fd3-89d7-4a73-b7ef-ab7488a90197/artworks-rr0lq0ushzrms4yn-0kpp0q-t3000x3000.jpg"/><pubDate>Fri, 30 Jul 2021 15:01:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9427edec-d844-4b2d-b96a-7b7dd73b5de3/1096633399-wandb-gd-xavier-amatriain.mp3" length="48140537" type="audio/mpeg"/><itunes:duration>50:09</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Xavier shares his experience deploying healthcare models, augmenting primary care with AI, the challenges of &quot;ground truth&quot; in medicine, and robustness in ML.

---

Xavier Amatriain is co-founder and CTO of Curai, an ML-based primary care chat system. Previously, he was VP of Engineering at Quora, and Research/Engineering Director at Netflix, where he started and led the Algorithms team responsible for Netflix&apos;s recommendation systems.

---

⏳ Timestamps: 
0:00 Sneak peek, intro
0:49 What is Curai?
5:48 The role of AI within Curai
8:44 Why Curai keeps humans in the loop
15:00 Measuring diagnostic accuracy
18:53 Patient safety 
22:39 Different types of models at Curai
25:42 Using GPT-3 to generate training data
32:13 How Curai monitors and debugs models
35:19 Model explainability
39:27 Robustness in ML 
45:52 Connecting metrics to impact
49:32 Outro

🌟 Show notes:
- http://wandb.me/gd-xavier-amatriain
- Transcription of the episode
- Links to papers, projects, and people

---

Follow us on Twitter! 
📍 https://twitter.com/wandb_gd

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud</itunes:summary></item><item><title>Spence Green — Enterprise-scale Machine Translation</title><itunes:title>Spence Green — Enterprise-scale Machine Translation</itunes:title><description><![CDATA[Spence shares his experience creating a product around human-in-the-loop machine translation, and explains how machine translation has evolved over the years.

---

Spence Green is co-founder and CEO of Lilt, an AI-powered language translation platform. Lilt combines human translators and machine translation in order to produce high-quality translations more efficiently. 

---

🌟 Show notes:
- http://wandb.me/gd-spence-green
- Transcription of the episode
- Links to papers, projects, and people

⏳ Timestamps:
0:00 Sneak peek, intro
0:45 The story behind Lilt
3:08 Statistical MT vs neural MT
6:30 Domain adaptation and personalized models
8:00 The emergence of neural MT and development of Lilt
13:09 What success looks like for Lilt
18:20 Models that self-correct for gender bias
19:39 How Lilt runs its models in production
26:33 How far can MT go?
29:55 Why Lilt cares about human-computer interaction
35:04 Bilingual grammatical error correction
37:18 Human parity in MT
39:41 The unexpected challenges of prototype to production


---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></description><content:encoded><![CDATA[Spence shares his experience creating a product around human-in-the-loop machine translation, and explains how machine translation has evolved over the years.

---

Spence Green is co-founder and CEO of Lilt, an AI-powered language translation platform. Lilt combines human translators and machine translation in order to produce high-quality translations more efficiently. 

---

🌟 Show notes:
- http://wandb.me/gd-spence-green
- Transcription of the episode
- Links to papers, projects, and people

⏳ Timestamps:
0:00 Sneak peek, intro
0:45 The story behind Lilt
3:08 Statistical MT vs neural MT
6:30 Domain adaptation and personalized models
8:00 The emergence of neural MT and development of Lilt
13:09 What success looks like for Lilt
18:20 Models that self-correct for gender bias
19:39 How Lilt runs its models in production
26:33 How far can MT go?
29:55 Why Lilt cares about human-computer interaction
35:04 Bilingual grammatical error correction
37:18 Human parity in MT
39:41 The unexpected challenges of prototype to production


---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1088789779</guid><itunes:image href="https://artwork.captivate.fm/66d9fc94-a94d-4661-9545-35a9f8f83d89/artworks-ku4hbevln0drycac-d0pbyq-t3000x3000.jpg"/><pubDate>Fri, 16 Jul 2021 17:08:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/33faab1c-f862-462d-9ee2-abce4e9a58bc/1088789779-wandb-gd-spence-green.mp3" length="42013256" type="audio/mpeg"/><itunes:duration>43:46</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Spence shares his experience creating a product around human-in-the-loop machine translation, and explains how machine translation has evolved over the years.

---

Spence Green is co-founder and CEO of Lilt, an AI-powered language translation platform. Lilt combines human translators and machine translation in order to produce high-quality translations more efficiently. 

---

🌟 Show notes:
- http://wandb.me/gd-spence-green
- Transcription of the episode
- Links to papers, projects, and people

⏳ Timestamps:
0:00 Sneak peek, intro
0:45 The story behind Lilt
3:08 Statistical MT vs neural MT
6:30 Domain adaptation and personalized models
8:00 The emergence of neural MT and development of Lilt
13:09 What success looks like for Lilt
18:20 Models that self-correct for gender bias
19:39 How Lilt runs its models in production
26:33 How far can MT go?
29:55 Why Lilt cares about human-computer interaction
35:04 Bilingual grammatical error correction
37:18 Human parity in MT
39:41 The unexpected challenges of prototype to production


---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack​​

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected</itunes:summary></item><item><title>Roger &amp; DJ — The Rise of Big Data and CA&apos;s COVID-19 Response</title><itunes:title>Roger &amp; DJ — The Rise of Big Data and CA&apos;s COVID-19 Response</itunes:title><description><![CDATA[Roger and DJ share some of the history behind data science as we know it today, and reflect on their experiences working on California's COVID-19 response.

---

Roger Magoulas is Senior Director of Data Strategy at Astronomer, where he works on data infrastructure, analytics, and community development. Previously, he was VP of Research at O'Reilly and co-chair of O'Reilly's Strata Data and AI Conference.

DJ Patil is a board member and former CTO of Devoted Health, a healthcare company for seniors. He was also Chief Data Scientist under the Obama administration and the Head of Data Science at LinkedIn.

Roger and DJ recently volunteered for the California COVID-19 response, and worked with data to understand case counts, bed capacities and the impact of intervention.

Connect with Roger and DJ:
📍 Roger's Twitter: https://twitter.com/rogerm
📍 DJ's Twitter: https://twitter.com/dpatil

---

🌟 Transcript: http://wandb.me/gd-roger-and-dj 🌟

⏳ Timestamps:
0:00 Sneak peek, intro
1:03 Coining the terms "big data" and "data scientist"
7:12 The rise of data science teams
15:28 Big Data, Hadoop, and Spark
23:10 The importance of using the right tools
29:20 BLUF: Bottom Line Up Front
34:44 California's COVID response
41:21 The human aspects of responding to COVID
48:33 Reflecting on the impact of COVID interventions
57:06 Advice on doing meaningful data science work
1:04:18 Outro

🍀 Links:
1. "MapReduce: Simplified Data Processing on Large Clusters" (Dean and Ghemawat, 2004): https://research.google/pubs/pub62/
2. "Big Data: Technologies and Techniques for Large-Scale Data" (Magoulas and Lorica, 2009): https://academics.uccs.edu/~ooluwada/courses/datamining/ExtraReading/BigData
3. The O'RLY book covers: https://www.businessinsider.com/these-hilarious-memes-perfectly-capture-what-its-like-to-work-in-tech-2016-4
4. "The Premonition" (Lewis, 2021): https://www.npr.org/2021/05/03/991570372/michael-lewis-the-premonition-is-a-sweeping-indictment-of-the-cdc
5. Why California's beaches are glowing with bioluminescence: https://www.youtube.com/watch?v=AVYSr19ReOs
6. Sturgis Motorcycle Rally: https://en.wikipedia.org/wiki/Sturgis_Motorcycle_Rally

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></description><content:encoded><![CDATA[Roger and DJ share some of the history behind data science as we know it today, and reflect on their experiences working on California's COVID-19 response.

---

Roger Magoulas is Senior Director of Data Strategy at Astronomer, where he works on data infrastructure, analytics, and community development. Previously, he was VP of Research at O'Reilly and co-chair of O'Reilly's Strata Data and AI Conference.

DJ Patil is a board member and former CTO of Devoted Health, a healthcare company for seniors. He was also Chief Data Scientist under the Obama administration and the Head of Data Science at LinkedIn.

Roger and DJ recently volunteered for the California COVID-19 response, and worked with data to understand case counts, bed capacities and the impact of intervention.

Connect with Roger and DJ:
📍 Roger's Twitter: https://twitter.com/rogerm
📍 DJ's Twitter: https://twitter.com/dpatil

---

🌟 Transcript: http://wandb.me/gd-roger-and-dj 🌟

⏳ Timestamps:
0:00 Sneak peek, intro
1:03 Coining the terms "big data" and "data scientist"
7:12 The rise of data science teams
15:28 Big Data, Hadoop, and Spark
23:10 The importance of using the right tools
29:20 BLUF: Bottom Line Up Front
34:44 California's COVID response
41:21 The human aspects of responding to COVID
48:33 Reflecting on the impact of COVID interventions
57:06 Advice on doing meaningful data science work
1:04:18 Outro

🍀 Links:
1. "MapReduce: Simplified Data Processing on Large Clusters" (Dean and Ghemawat, 2004): https://research.google/pubs/pub62/
2. "Big Data: Technologies and Techniques for Large-Scale Data" (Magoulas and Lorica, 2009): https://academics.uccs.edu/~ooluwada/courses/datamining/ExtraReading/BigData
3. The O'RLY book covers: https://www.businessinsider.com/these-hilarious-memes-perfectly-capture-what-its-like-to-work-in-tech-2016-4
4. "The Premonition" (Lewis, 2021): https://www.npr.org/2021/05/03/991570372/michael-lewis-the-premonition-is-a-sweeping-indictment-of-the-cdc
5. Why California's beaches are glowing with bioluminescence: https://www.youtube.com/watch?v=AVYSr19ReOs
6. Sturgis Motorcycle Rally: https://en.wikipedia.org/wiki/Sturgis_Motorcycle_Rally

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1068496090</guid><itunes:image href="https://artwork.captivate.fm/76e93fdc-a9f6-434b-88a9-1476aa8a8188/artworks-mrdhsox3yhbqzaqd-5dvyxg-t3000x3000.jpg"/><pubDate>Thu, 08 Jul 2021 15:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/a4600a89-dc8a-4ea6-bad5-1bd6d91065f4/1068496090-wandb-gd-roger-and-dj.mp3" length="62285948" type="audio/mpeg"/><itunes:duration>01:04:53</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Roger and DJ share some of the history behind data science as we know it today, and reflect on their experiences working on California&apos;s COVID-19 response.

---

Roger Magoulas is Senior Director of Data Strategy at Astronomer, where he works on data infrastructure, analytics, and community development. Previously, he was VP of Research at O&apos;Reilly and co-chair of O&apos;Reilly&apos;s Strata Data and AI Conference.

DJ Patil is a board member and former CTO of Devoted Health, a healthcare company for seniors. He was also Chief Data Scientist under the Obama administration and the Head of Data Science at LinkedIn.

Roger and DJ recently volunteered for the California COVID-19 response, and worked with data to understand case counts, bed capacities and the impact of intervention.

Connect with Roger and DJ:
📍 Roger&apos;s Twitter: https://twitter.com/rogerm
📍 DJ&apos;s Twitter: https://twitter.com/dpatil

---

🌟 Transcript: http://wandb.me/gd-roger-and-dj 🌟

⏳ Timestamps:
0:00 Sneak peek, intro
1:03 Coining the terms &quot;big data&quot; and &quot;data scientist&quot;
7:12 The rise of data science teams
15:28 Big Data, Hadoop, and Spark
23:10 The importance of using the right tools
29:20 BLUF: Bottom Line Up Front
34:44 California&apos;s COVID response
41:21 The human aspects of responding to COVID
48:33 Reflecting on the impact of COVID interventions
57:06 Advice on doing meaningful data science work
1:04:18 Outro

🍀 Links:
1. &quot;MapReduce: Simplified Data Processing on Large Clusters&quot; (Dean and Ghemawat, 2004): https://research.google/pubs/pub62/
2. &quot;Big Data: Technologies and Techniques for Large-Scale Data&quot; (Magoulas and Lorica, 2009): https://academics.uccs.edu/~ooluwada/courses/datamining/ExtraReading/BigData
3. The O&apos;RLY book covers: https://www.businessinsider.com/these-hilarious-memes-perfectly-capture-what-its-like-to-work-in-tech-2016-4
4. &quot;The Premonition&quot; (Lewis, 2021): https://www.npr.org/2021/05/03/991570372/michael-lewis-the-premonition-is-a-sweeping-indictment-of-the-cdc
5. Why California&apos;s beaches are glowing with bioluminescence: https://www.youtube.com/watch?v=AVYSr19ReOs
6. Sturgis Motorcycle Rally: https://en.wikipedia.org/wiki/Sturgis_Motorcycle_Rally

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected</itunes:summary></item><item><title>Amelia &amp; Filip — How Pandora Deploys ML Models into Production</title><itunes:title>Amelia &amp; Filip — How Pandora Deploys ML Models into Production</itunes:title><description><![CDATA[Amelia and Filip give insights into the recommender systems powering Pandora, from developing models to balancing effectiveness and efficiency in production.

---

Amelia Nybakke is a Software Engineer at Pandora. Her team is responsible for the production system that serves models to listeners.
Filip Korzeniowski is a Senior Scientist at Pandora working on recommender systems. Before that, he was a PhD student working on deep neural networks for acoustic and language modeling applied to musical audio recordings.

Connect with Amelia and Filip:
📍 Amelia's LinkedIn: https://www.linkedin.com/in/amelia-nybakke-60bba5107/
📍 Filip's LinkedIn: https://www.linkedin.com/in/filip-korzeniowski-28b33815a/

---

⏳ Timestamps:
0:00 Sneak peek, intro
0:42 What type of ML models are at Pandora?
3:39 What makes two songs similar or not similar?
7:33 Improving models and A/B testing
8:52 Chaining, retraining, versioning, and tracking models
13:29 Useful development tools
15:10 Debugging models
18:28 Communicating progress
20:33 Tuning and improving models
23:08 How Pandora puts models into production
29:45 Bias in ML models
36:01 Repetition vs novelty in recommended songs
38:01 The bottlenecks of deployment

🌟 Transcript: http://wandb.me/gd-amelia-and-filip 🌟

Links:
📍 Amelia's "Women's History Month" playlist: https://www.pandora.com/playlist/PL:1407374934299927:100514833

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></description><content:encoded><![CDATA[Amelia and Filip give insights into the recommender systems powering Pandora, from developing models to balancing effectiveness and efficiency in production.

---

Amelia Nybakke is a Software Engineer at Pandora. Her team is responsible for the production system that serves models to listeners.
Filip Korzeniowski is a Senior Scientist at Pandora working on recommender systems. Before that, he was a PhD student working on deep neural networks for acoustic and language modeling applied to musical audio recordings.

Connect with Amelia and Filip:
📍 Amelia's LinkedIn: https://www.linkedin.com/in/amelia-nybakke-60bba5107/
📍 Filip's LinkedIn: https://www.linkedin.com/in/filip-korzeniowski-28b33815a/

---

⏳ Timestamps:
0:00 Sneak peek, intro
0:42 What type of ML models are at Pandora?
3:39 What makes two songs similar or not similar?
7:33 Improving models and A/B testing
8:52 Chaining, retraining, versioning, and tracking models
13:29 Useful development tools
15:10 Debugging models
18:28 Communicating progress
20:33 Tuning and improving models
23:08 How Pandora puts models into production
29:45 Bias in ML models
36:01 Repetition vs novelty in recommended songs
38:01 The bottlenecks of deployment

🌟 Transcript: http://wandb.me/gd-amelia-and-filip 🌟

Links:
📍 Amelia's "Women's History Month" playlist: https://www.pandora.com/playlist/PL:1407374934299927:100514833

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1068496105</guid><itunes:image href="https://artwork.captivate.fm/1c75648b-9043-4cf8-8728-8565c634a2b3/artworks-kprxjrz3g2yamocp-dshi9a-t3000x3000.jpg"/><pubDate>Thu, 01 Jul 2021 15:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9f63f168-96fb-419c-8457-90c791db212b/1068496105-wandb-gd-amelia-and-filip.mp3" length="39187016" type="audio/mpeg"/><itunes:duration>40:49</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Amelia and Filip give insights into the recommender systems powering Pandora, from developing models to balancing effectiveness and efficiency in production.

---

Amelia Nybakke is a Software Engineer at Pandora. Her team is responsible for the production system that serves models to listeners.
Filip Korzeniowski is a Senior Scientist at Pandora working on recommender systems. Before that, he was a PhD student working on deep neural networks for acoustic and language modeling applied to musical audio recordings.

Connect with Amelia and Filip:
📍 Amelia&apos;s LinkedIn: https://www.linkedin.com/in/amelia-nybakke-60bba5107/
📍 Filip&apos;s LinkedIn: https://www.linkedin.com/in/filip-korzeniowski-28b33815a/

---

⏳ Timestamps:
0:00 Sneak peek, intro
0:42 What type of ML models are at Pandora?
3:39 What makes two songs similar or not similar?
7:33 Improving models and A/B testing
8:52 Chaining, retraining, versioning, and tracking models
13:29 Useful development tools
15:10 Debugging models
18:28 Communicating progress
20:33 Tuning and improving models
23:08 How Pandora puts models into production
29:45 Bias in ML models
36:01 Repetition vs novelty in recommended songs
38:01 The bottlenecks of deployment

🌟 Transcript: http://wandb.me/gd-amelia-and-filip 🌟

Links:
📍 Amelia&apos;s &quot;Women&apos;s History Month&quot; playlist: https://www.pandora.com/playlist/PL:1407374934299927:100514833

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected</itunes:summary></item><item><title>Luis Ceze — Accelerating Machine Learning Systems</title><itunes:title>Luis Ceze — Accelerating Machine Learning Systems</itunes:title><description><![CDATA[From Apache TVM to OctoML, Luis gives direct insight into the world of ML hardware optimization, and where systems optimization is heading.

---

Luis Ceze is co-founder and CEO of OctoML, co-author of the Apache TVM Project, and Professor of Computer Science and Engineering at the University of Washington. His research focuses on the intersection of computer architecture, programming languages, machine learning, and molecular biology. 

Connect with Luis:
📍 Twitter: https://twitter.com/luisceze
📍 University of Washington profile: https://homes.cs.washington.edu/~luisceze/

---

⏳ Timestamps:
0:00 Intro and sneak peek
0:59 What is TVM?
8:57 Freedom of choice in software and hardware stacks
15:53 How new libraries can improve system performance
20:10 Trade-offs between efficiency and complexity
24:35 Specialized instructions
26:34 The future of hardware design and research
30:03 Where do architecture and research go from here?
30:56 The environmental impact of efficiency
32:49 Optimizing and trade-offs
37:54 What is OctoML and the Octomizer?
42:31 Automating systems design with and for ML 
44:18 ML and molecular biology
46:09 The challenges of deployment and post-deployment

🌟 Transcript: http://wandb.me/gd-luis-ceze 🌟

Links:
1. OctoML: https://octoml.ai/
2. Apache TVM: https://tvm.apache.org/
3. "Scalable and Intelligent Learning Systems" (Chen, 2019): https://digital.lib.washington.edu/researchworks/handle/1773/44766
4. "Principled Optimization Of Dynamic Neural Networks" (Roesch, 2020): https://digital.lib.washington.edu/researchworks/handle/1773/46765
5. "Cross-Stack Co-Design for Efficient and Adaptable Hardware Acceleration" (Moreau, 2018): https://digital.lib.washington.edu/researchworks/handle/1773/43349
6. "TVM: An Automated End-to-End Optimizing Compiler for Deep Learning" (Chen et al., 2018): https://www.usenix.org/system/files/osdi18-chen.pdf
7. Porcupine is a molecular tagging system introduced in "Rapid and robust assembly and decoding of molecular tags with DNA-based nanopore signatures" (Doroschak et al., 2020): https://www.nature.com/articles/s41467-020-19151-8

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></description><content:encoded><![CDATA[From Apache TVM to OctoML, Luis gives direct insight into the world of ML hardware optimization, and where systems optimization is heading.

---

Luis Ceze is co-founder and CEO of OctoML, co-author of the Apache TVM Project, and Professor of Computer Science and Engineering at the University of Washington. His research focuses on the intersection of computer architecture, programming languages, machine learning, and molecular biology. 

Connect with Luis:
📍 Twitter: https://twitter.com/luisceze
📍 University of Washington profile: https://homes.cs.washington.edu/~luisceze/

---

⏳ Timestamps:
0:00 Intro and sneak peek
0:59 What is TVM?
8:57 Freedom of choice in software and hardware stacks
15:53 How new libraries can improve system performance
20:10 Trade-offs between efficiency and complexity
24:35 Specialized instructions
26:34 The future of hardware design and research
30:03 Where do architecture and research go from here?
30:56 The environmental impact of efficiency
32:49 Optimizing and trade-offs
37:54 What is OctoML and the Octomizer?
42:31 Automating systems design with and for ML 
44:18 ML and molecular biology
46:09 The challenges of deployment and post-deployment

🌟 Transcript: http://wandb.me/gd-luis-ceze 🌟

Links:
1. OctoML: https://octoml.ai/
2. Apache TVM: https://tvm.apache.org/
3. "Scalable and Intelligent Learning Systems" (Chen, 2019): https://digital.lib.washington.edu/researchworks/handle/1773/44766
4. "Principled Optimization Of Dynamic Neural Networks" (Roesch, 2020): https://digital.lib.washington.edu/researchworks/handle/1773/46765
5. "Cross-Stack Co-Design for Efficient and Adaptable Hardware Acceleration" (Moreau, 2018): https://digital.lib.washington.edu/researchworks/handle/1773/43349
6. "TVM: An Automated End-to-End Optimizing Compiler for Deep Learning" (Chen et al., 2018): https://www.usenix.org/system/files/osdi18-chen.pdf
7. Porcupine is a molecular tagging system introduced in "Rapid and robust assembly and decoding of molecular tags with DNA-based nanopore signatures" (Doroschak et al., 2020): https://www.nature.com/articles/s41467-020-19151-8

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1073893573</guid><itunes:image href="https://artwork.captivate.fm/461f4bda-97d8-4f03-b822-26c5377b303f/artworks-jwa2pvkwnsr0k9jr-p8czza-t3000x3000.jpg"/><pubDate>Thu, 24 Jun 2021 15:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/cdc48d06-d47f-4715-9c41-063b32a25e05/1073893573-wandb-gd-luis-ceze.mp3" length="46528887" type="audio/mpeg"/><itunes:duration>48:28</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>From Apache TVM to OctoML, Luis gives direct insight into the world of ML hardware optimization, and where systems optimization is heading.

---

Luis Ceze is co-founder and CEO of OctoML, co-author of the Apache TVM Project, and Professor of Computer Science and Engineering at the University of Washington. His research focuses on the intersection of computer architecture, programming languages, machine learning, and molecular biology. 

Connect with Luis:
📍 Twitter: https://twitter.com/luisceze
📍 University of Washington profile: https://homes.cs.washington.edu/~luisceze/

---

⏳ Timestamps:
0:00 Intro and sneak peek
0:59 What is TVM?
8:57 Freedom of choice in software and hardware stacks
15:53 How new libraries can improve system performance
20:10 Trade-offs between efficiency and complexity
24:35 Specialized instructions
26:34 The future of hardware design and research
30:03 Where do architecture and research go from here?
30:56 The environmental impact of efficiency
32:49 Optimizing and trade-offs
37:54 What is OctoML and the Octomizer?
42:31 Automating systems design with and for ML 
44:18 ML and molecular biology
46:09 The challenges of deployment and post-deployment

🌟 Transcript: http://wandb.me/gd-luis-ceze 🌟

Links:
1. OctoML: https://octoml.ai/
2. Apache TVM: https://tvm.apache.org/
3. &quot;Scalable and Intelligent Learning Systems&quot; (Chen, 2019): https://digital.lib.washington.edu/researchworks/handle/1773/44766
4. &quot;Principled Optimization Of Dynamic Neural Networks&quot; (Roesch, 2020): https://digital.lib.washington.edu/researchworks/handle/1773/46765
5. &quot;Cross-Stack Co-Design for Efficient and Adaptable Hardware Acceleration&quot; (Moreau, 2018): https://digital.lib.washington.edu/researchworks/handle/1773/43349
6. &quot;TVM: An Automated End-to-End Optimizing Compiler for Deep Learning&quot; (Chen et al., 2018): https://www.usenix.org/system/files/osdi18-chen.pdf
7. Porcupine is a molecular tagging system introduced in &quot;Rapid and robust assembly and decoding of molecular tags with DNA-based nanopore signatures&quot; (Doroschak et al., 2020): https://www.nature.com/articles/s41467-020-19151-8

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected</itunes:summary></item><item><title>Matthew Davis — Bringing Genetic Insights to Everyone</title><itunes:title>Matthew Davis — Bringing Genetic Insights to Everyone</itunes:title><description><![CDATA[Matthew explains how combining machine learning and computational biology can provide mainstream medicine with better diagnostics and insights.

---

Matthew Davis is Head of AI at Invitae, the largest and fastest-growing genetic testing company in the world. His research includes bioinformatics, computational biology, NLP, reinforcement learning, and information retrieval. Matthew was previously at IBM Research AI, where he led a research team focused on improving AI systems.

Connect with Matthew:
📍 LinkedIn: https://www.linkedin.com/in/matthew-davis-51233386/
📍 Twitter: https://twitter.com/deadsmiths

---

⏳ Timestamps:
0:00 Sneak peek, intro
1:02 What is Invitae?
2:58 Why genetic testing can help everyone
7:51 How Invitae uses ML techniques
14:02 Modeling molecules and deciding which genes to look at
22:22 NLP applications in bioinformatics
27:10 Team structure at Invitae
36:50 Why reasoning is an underrated topic in ML
40:25 Why having a clear buy-in is important

🌟 Transcript: http://wandb.me/gd-matthew-davis 🌟

Links:
📍 Invitae: https://www.invitae.com/en
📍 Careers at Invitae: https://www.invitae.com/en/careers/

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></description><content:encoded><![CDATA[Matthew explains how combining machine learning and computational biology can provide mainstream medicine with better diagnostics and insights.

---

Matthew Davis is Head of AI at Invitae, the largest and fastest-growing genetic testing company in the world. His research includes bioinformatics, computational biology, NLP, reinforcement learning, and information retrieval. Matthew was previously at IBM Research AI, where he led a research team focused on improving AI systems.

Connect with Matthew:
📍 LinkedIn: https://www.linkedin.com/in/matthew-davis-51233386/
📍 Twitter: https://twitter.com/deadsmiths

---

⏳ Timestamps:
0:00 Sneak peek, intro
1:02 What is Invitae?
2:58 Why genetic testing can help everyone
7:51 How Invitae uses ML techniques
14:02 Modeling molecules and deciding which genes to look at
22:22 NLP applications in bioinformatics
27:10 Team structure at Invitae
36:50 Why reasoning is an underrated topic in ML
40:25 Why having a clear buy-in is important

🌟 Transcript: http://wandb.me/gd-matthew-davis 🌟

Links:
📍 Invitae: https://www.invitae.com/en
📍 Careers at Invitae: https://www.invitae.com/en/careers/

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1066746868</guid><itunes:image href="https://artwork.captivate.fm/852f9fe5-d012-4757-9e04-a1aae0e3d66f/artworks-wmgtdczghwnt0wjb-8jkd2g-t3000x3000.jpg"/><pubDate>Thu, 17 Jun 2021 15:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/ef35bb42-cede-41fc-af9d-da23d1f239f5/1066746868-wandb-gd-matthew-davis.mp3" length="41306487" type="audio/mpeg"/><itunes:duration>43:02</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Matthew explains how combining machine learning and computational biology can provide mainstream medicine with better diagnostics and insights.

---

Matthew Davis is Head of AI at Invitae, the largest and fastest-growing genetic testing company in the world. His research includes bioinformatics, computational biology, NLP, reinforcement learning, and information retrieval. Matthew was previously at IBM Research AI, where he led a research team focused on improving AI systems.

Connect with Matthew:
📍 LinkedIn: https://www.linkedin.com/in/matthew-davis-51233386/
📍 Twitter: https://twitter.com/deadsmiths

---

⏳ Timestamps:
0:00 Sneak peek, intro
1:02 What is Invitae?
2:58 Why genetic testing can help everyone
7:51 How Invitae uses ML techniques
14:02 Modeling molecules and deciding which genes to look at
22:22 NLP applications in bioinformatics
27:10 Team structure at Invitae
36:50 Why reasoning is an underrated topic in ML
40:25 Why having a clear buy-in is important

🌟 Transcript: http://wandb.me/gd-matthew-davis 🌟

Links:
📍 Invitae: https://www.invitae.com/en
📍 Careers at Invitae: https://www.invitae.com/en/careers/

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected</itunes:summary></item><item><title>Clément Delangue — The Power of the Open Source Community</title><itunes:title>Clément Delangue — The Power of the Open Source Community</itunes:title><description><![CDATA[Clem explains the virtuous cycles behind the creation and success of Hugging Face, and shares his thoughts on where NLP is heading.

---

Clément Delangue is co-founder and CEO of Hugging Face, the AI community building the future. Hugging Face started as an open source NLP library and has quickly grown into a commercial product used by over 5,000 companies.

Connect with Clem:
📍 Twitter: https://twitter.com/ClementDelangue
📍 LinkedIn: https://www.linkedin.com/in/clementdelangue/

---

🌟 Transcript: http://wandb.me/gd-clement-delangue 🌟

⏳ Timestamps:
0:00 Sneak peek and intro
0:56 What is Hugging Face?
4:15 The success of Hugging Face Transformers
7:53 Open source and virtuous cycles
10:37 Working with both TensorFlow and PyTorch
13:20 The "Write With Transformer" project
14:36 Transfer learning in NLP
16:43 BERT and DistilBERT
22:33 GPT
26:32 The power of the open source community
29:40 Current applications of NLP
35:15 The Turing Test and conversational AI
41:19 Why speech is an upcoming field within NLP
43:44 The human challenges of machine learning

Links Discussed:
📍 Write With Transformer, Hugging Face Transformer's text generation demo: https://transformer.huggingface.co/
📍 "Attention Is All You Need" (Vaswani et al., 2017): https://arxiv.org/abs/1706.03762
📍 EleutherAI and GPT-Neo: https://github.com/EleutherAI/gpt-neo
📍 Rasa, open source conversational AI: https://rasa.com/
📍 Roblox article on BERT: https://blog.roblox.com/2020/05/scaled-bert-serve-1-billion-daily-requests-cpus/

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></description><content:encoded><![CDATA[Clem explains the virtuous cycles behind the creation and success of Hugging Face, and shares his thoughts on where NLP is heading.

---

Clément Delangue is co-founder and CEO of Hugging Face, the AI community building the future. Hugging Face started as an open source NLP library and has quickly grown into a commercial product used by over 5,000 companies.

Connect with Clem:
📍 Twitter: https://twitter.com/ClementDelangue
📍 LinkedIn: https://www.linkedin.com/in/clementdelangue/

---

🌟 Transcript: http://wandb.me/gd-clement-delangue 🌟

⏳ Timestamps:
0:00 Sneak peek and intro
0:56 What is Hugging Face?
4:15 The success of Hugging Face Transformers
7:53 Open source and virtuous cycles
10:37 Working with both TensorFlow and PyTorch
13:20 The "Write With Transformer" project
14:36 Transfer learning in NLP
16:43 BERT and DistilBERT
22:33 GPT
26:32 The power of the open source community
29:40 Current applications of NLP
35:15 The Turing Test and conversational AI
41:19 Why speech is an upcoming field within NLP
43:44 The human challenges of machine learning

Links Discussed:
📍 Write With Transformer, Hugging Face Transformer's text generation demo: https://transformer.huggingface.co/
📍 "Attention Is All You Need" (Vaswani et al., 2017): https://arxiv.org/abs/1706.03762
📍 EleutherAI and GPT-Neo: https://github.com/EleutherAI/gpt-neo
📍 Rasa, open source conversational AI: https://rasa.com/
📍 Roblox article on BERT: https://blog.roblox.com/2020/05/scaled-bert-serve-1-billion-daily-requests-cpus/

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1056933214</guid><itunes:image href="https://artwork.captivate.fm/c4d30d93-8dfb-4f4d-84da-9fd1bf621af1/artworks-xbrtgobhk89urnbb-wxfmpq-t3000x3000.jpg"/><pubDate>Thu, 10 Jun 2021 15:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/589f7b00-6794-45f6-b809-e967165a1428/1056933214-wandb-gd-clement-delangue.mp3" length="44715362" type="audio/mpeg"/><itunes:duration>46:35</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Clem explains the virtuous cycles behind the creation and success of Hugging Face, and shares his thoughts on where NLP is heading.

---

Clément Delangue is co-founder and CEO of Hugging Face, the AI community building the future. Hugging Face started as an open source NLP library and has quickly grown into a commercial product used by over 5,000 companies.

Connect with Clem:
📍 Twitter: https://twitter.com/ClementDelangue
📍 LinkedIn: https://www.linkedin.com/in/clementdelangue/

---

🌟 Transcript: http://wandb.me/gd-clement-delangue 🌟

⏳ Timestamps:
0:00 Sneak peek and intro
0:56 What is Hugging Face?
4:15 The success of Hugging Face Transformers
7:53 Open source and virtuous cycles
10:37 Working with both TensorFlow and PyTorch
13:20 The &quot;Write With Transformer&quot; project
14:36 Transfer learning in NLP
16:43 BERT and DistilBERT
22:33 GPT
26:32 The power of the open source community
29:40 Current applications of NLP
35:15 The Turing Test and conversational AI
41:19 Why speech is an upcoming field within NLP
43:44 The human challenges of machine learning

Links Discussed:
📍 Write With Transformer, Hugging Face Transformer&apos;s text generation demo: https://transformer.huggingface.co/
📍 &quot;Attention Is All You Need&quot; (Vaswani et al., 2017): https://arxiv.org/abs/1706.03762
📍 EleutherAI and GPT-Neo: https://github.com/EleutherAI/gpt-neo
📍 Rasa, open source conversational AI: https://rasa.com/
📍 Roblox article on BERT: https://blog.roblox.com/2020/05/scaled-bert-serve-1-billion-daily-requests-cpus/

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected</itunes:summary></item><item><title>Wojciech Zaremba — What Could Make AI Conscious?</title><itunes:title>Wojciech Zaremba — What Could Make AI Conscious?</itunes:title><description><![CDATA[Wojciech joins us to talk about the principles behind OpenAI, the Fermi Paradox, and the future stages of development in AGI.

---

Wojciech Zaremba is a co-founder of OpenAI, a research company dedicated to discovering and enacting the path to safe artificial general intelligence. He was also Head of Robotics, where his team developed general-purpose robots through new approaches to transfer learning, and taught robots complex behaviors.

Connect with Wojciech:
Personal website: https://wojzaremba.com/
Twitter: https://twitter.com/woj_zaremba

---

Topics Discussed:
0:00 Sneak peek and intro
1:03 The people and principles behind OpenAI
6:31 The stages of future AI developments
13:42 The Fermi paradox
16:18 What drives Wojciech?
19:17 Thoughts on robotics
24:58 Dota and other projects at OpenAI
33:42 What would make an AI conscious?
41:31 How to succeed in robotics

Transcript:
http://wandb.me/gd-wojciech-zaremba

Links:
Fermi paradox: https://en.wikipedia.org/wiki/Fermi_paradox
OpenAI and Dota: https://openai.com/projects/five/

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></description><content:encoded><![CDATA[Wojciech joins us to talk about the principles behind OpenAI, the Fermi Paradox, and the future stages of development in AGI.

---

Wojciech Zaremba is a co-founder of OpenAI, a research company dedicated to discovering and enacting the path to safe artificial general intelligence. He was also Head of Robotics, where his team developed general-purpose robots through new approaches to transfer learning, and taught robots complex behaviors.

Connect with Wojciech:
Personal website: https://wojzaremba.com/
Twitter: https://twitter.com/woj_zaremba

---

Topics Discussed:
0:00 Sneak peek and intro
1:03 The people and principles behind OpenAI
6:31 The stages of future AI developments
13:42 The Fermi paradox
16:18 What drives Wojciech?
19:17 Thoughts on robotics
24:58 Dota and other projects at OpenAI
33:42 What would make an AI conscious?
41:31 How to succeed in robotics

Transcript:
http://wandb.me/gd-wojciech-zaremba

Links:
Fermi paradox: https://en.wikipedia.org/wiki/Fermi_paradox
OpenAI and Dota: https://openai.com/projects/five/

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1037344267</guid><itunes:image href="https://artwork.captivate.fm/d69ebac7-ac2c-4367-a484-8ffded49ae9f/artworks-uribvrkwlekzngvj-dktk6g-t3000x3000.jpg"/><pubDate>Thu, 03 Jun 2021 15:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/356625d4-9377-4578-944a-cb5e17f17301/1037344267-wandb-gd-wojciech-zaremba.mp3" length="42673631" type="audio/mpeg"/><itunes:duration>44:27</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Wojciech joins us to talk about the principles behind OpenAI, the Fermi Paradox, and the future stages of development in AGI.

---

Wojciech Zaremba is a co-founder of OpenAI, a research company dedicated to discovering and enacting the path to safe artificial general intelligence. He was also Head of Robotics, where his team developed general-purpose robots through new approaches to transfer learning, and taught robots complex behaviors.

Connect with Wojciech:
Personal website: https://wojzaremba.com/
Twitter: https://twitter.com/woj_zaremba

---

Topics Discussed:
0:00 Sneak peek and intro
1:03 The people and principles behind OpenAI
6:31 The stages of future AI developments
13:42 The Fermi paradox
16:18 What drives Wojciech?
19:17 Thoughts on robotics
24:58 Dota and other projects at OpenAI
33:42 What would make an AI conscious?
41:31 How to succeed in robotics

Transcript:
http://wandb.me/gd-wojciech-zaremba

Links:
Fermi paradox: https://en.wikipedia.org/wiki/Fermi_paradox
OpenAI and Dota: https://openai.com/projects/five/

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected</itunes:summary></item><item><title>Phil Brown — How IPUs are Advancing Machine Intelligence</title><itunes:title>Phil Brown — How IPUs are Advancing Machine Intelligence</itunes:title><description><![CDATA[Phil shares some of the approaches, like sparsity and low precision, behind the breakthrough performance of Graphcore's Intelligence Processing Units (IPUs).

---

Phil Brown leads the Applications team at Graphcore, where they're building high-performance machine learning applications for their Intelligence Processing Units (IPUs), new processors specifically designed for AI compute.

Connect with Phil:
LinkedIn: https://www.linkedin.com/in/philipsbrown/
Twitter: https://twitter.com/phil_s_brown

---

0:00 Sneak peek, intro
1:44 From computational chemistry to Graphcore
5:16 The simulations behind weather prediction
10:54 Measuring improvement in weather prediction systems
15:35 How high performance computing and ML have different needs
19:00 The potential of sparse training
31:08 IPUs and computer architecture for machine learning
39:10 On performance improvements
44:43 The impacts of increasing computing capability
50:24 The ML chicken and egg problem
52:00 The challenges of converging at scale and bringing hardware to market

Links Discussed:
Rigging the Lottery: Making All Tickets Winners (Evci et al., 2019): https://arxiv.org/abs/1911.11134
Graphcore MK2 Benchmarks: https://www.graphcore.ai/mk2-benchmarks

Check out the transcription and discover more awesome ML projects: http://wandb.me/gd-phil-brown

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack

Check out our Gallery, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/gallery]]></description><content:encoded><![CDATA[Phil shares some of the approaches, like sparsity and low precision, behind the breakthrough performance of Graphcore's Intelligence Processing Units (IPUs).

---

Phil Brown leads the Applications team at Graphcore, where they're building high-performance machine learning applications for their Intelligence Processing Units (IPUs), new processors specifically designed for AI compute.

Connect with Phil:
LinkedIn: https://www.linkedin.com/in/philipsbrown/
Twitter: https://twitter.com/phil_s_brown

---

0:00 Sneak peek, intro
1:44 From computational chemistry to Graphcore
5:16 The simulations behind weather prediction
10:54 Measuring improvement in weather prediction systems
15:35 How high performance computing and ML have different needs
19:00 The potential of sparse training
31:08 IPUs and computer architecture for machine learning
39:10 On performance improvements
44:43 The impacts of increasing computing capability
50:24 The ML chicken and egg problem
52:00 The challenges of converging at scale and bringing hardware to market

Links Discussed:
Rigging the Lottery: Making All Tickets Winners (Evci et al., 2019): https://arxiv.org/abs/1911.11134
Graphcore MK2 Benchmarks: https://www.graphcore.ai/mk2-benchmarks

Check out the transcription and discover more awesome ML projects: http://wandb.me/gd-phil-brown

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack

Check out our Gallery, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1022858641</guid><itunes:image href="https://artwork.captivate.fm/1a67f199-fbf3-4eca-8626-02dbd0a71081/artworks-awq1oko3rcbu0z1t-ra9hpg-t3000x3000.jpg"/><pubDate>Thu, 27 May 2021 15:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c70e87db-0917-4e43-b75e-d51be76ba1b9/1022858641-wandb-gd-phil-brown.mp3" length="54884727" type="audio/mpeg"/><itunes:duration>57:10</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Phil shares some of the approaches, like sparsity and low precision, behind the breakthrough performance of Graphcore&apos;s Intelligence Processing Units (IPUs).

---

Phil Brown leads the Applications team at Graphcore, where they&apos;re building high-performance machine learning applications for their Intelligence Processing Units (IPUs), new processors specifically designed for AI compute.

Connect with Phil:
LinkedIn: https://www.linkedin.com/in/philipsbrown/
Twitter: https://twitter.com/phil_s_brown

---

0:00 Sneak peek, intro
1:44 From computational chemistry to Graphcore
5:16 The simulations behind weather prediction
10:54 Measuring improvement in weather prediction systems
15:35 How high performance computing and ML have different needs
19:00 The potential of sparse training
31:08 IPUs and computer architecture for machine learning
39:10 On performance improvements
44:43 The impacts of increasing computing capability
50:24 The ML chicken and egg problem
52:00 The challenges of converging at scale and bringing hardware to market

Links Discussed:
Rigging the Lottery: Making All Tickets Winners (Evci et al., 2019): https://arxiv.org/abs/1911.11134
Graphcore MK2 Benchmarks: https://www.graphcore.ai/mk2-benchmarks

Check out the transcription and discover more awesome ML projects: http://wandb.me/gd-phil-brown

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack

Check out our Gallery, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/gallery</itunes:summary></item><item><title>Alyssa Simpson Rochwerger — Responsible ML in the Real World</title><itunes:title>Alyssa Simpson Rochwerger — Responsible ML in the Real World</itunes:title><description><![CDATA[From working on COVID-19 vaccine rollout to writing a book on responsible ML, Alyssa shares her thoughts on meaningful projects and the importance of teamwork.

---

Alyssa Simpson Rochwerger is a Director of Product at Blue Shield of California, pursuing her dream of using technology to improve healthcare. She has over a decade of experience in building technical data-driven products and has held numerous leadership roles for machine learning organizations, including VP of AI and Data at Appen and Director of Product at IBM Watson.

---

Topics Discussed:
0:00 Sneak peek, intro
1:17 Working on COVID-19 vaccine rollout in California
6:50 Real World AI
12:26 Diagnosing bias in models
17:43 Common challenges in ML
21:56 Finding meaningful projects
24:28 ML applications in health insurance
31:21 Longitudinal health records and data cleaning
38:24 Following your interests
40:21 Why teamwork is crucial

Transcript:
http://wandb.me/gd-alyssa-s-rochwerger

Links Discussed:
My Turn: https://myturn.ca.gov/
"Turn the Ship Around!": https://www.penguinrandomhouse.com/books/314163/turn-the-ship-around-by-l-david-marquet/

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></description><content:encoded><![CDATA[From working on COVID-19 vaccine rollout to writing a book on responsible ML, Alyssa shares her thoughts on meaningful projects and the importance of teamwork.

---

Alyssa Simpson Rochwerger is a Director of Product at Blue Shield of California, pursuing her dream of using technology to improve healthcare. She has over a decade of experience in building technical data-driven products and has held numerous leadership roles for machine learning organizations, including VP of AI and Data at Appen and Director of Product at IBM Watson.

---

Topics Discussed:
0:00 Sneak peek, intro
1:17 Working on COVID-19 vaccine rollout in California
6:50 Real World AI
12:26 Diagnosing bias in models
17:43 Common challenges in ML
21:56 Finding meaningful projects
24:28 ML applications in health insurance
31:21 Longitudinal health records and data cleaning
38:24 Following your interests
40:21 Why teamwork is crucial

Transcript:
http://wandb.me/gd-alyssa-s-rochwerger

Links Discussed:
My Turn: https://myturn.ca.gov/
"Turn the Ship Around!": https://www.penguinrandomhouse.com/books/314163/turn-the-ship-around-by-l-david-marquet/

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1042586617</guid><itunes:image href="https://artwork.captivate.fm/2f331ade-6ed6-44f8-97cc-826618172048/artworks-ofssirtza9k2jt2a-x09ttq-t3000x3000.jpg"/><pubDate>Thu, 20 May 2021 15:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/3f6d0253-f9fd-422c-97c3-8c3003318303/1042586617-wandb-gd-alyssa-s-rochwerger.mp3" length="43671300" type="audio/mpeg"/><itunes:duration>45:29</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>From working on COVID-19 vaccine rollout to writing a book on responsible ML, Alyssa shares her thoughts on meaningful projects and the importance of teamwork.

---

Alyssa Simpson Rochwerger is a Director of Product at Blue Shield of California, pursuing her dream of using technology to improve healthcare. She has over a decade of experience in building technical data-driven products and has held numerous leadership roles for machine learning organizations, including VP of AI and Data at Appen and Director of Product at IBM Watson.

---

Topics Discussed:
0:00 Sneak peek, intro
1:17 Working on COVID-19 vaccine rollout in California
6:50 Real World AI
12:26 Diagnosing bias in models
17:43 Common challenges in ML
21:56 Finding meaningful projects
24:28 ML applications in health insurance
31:21 Longitudinal health records and data cleaning
38:24 Following your interests
40:21 Why teamwork is crucial

Transcript:
http://wandb.me/gd-alyssa-s-rochwerger

Links Discussed:
My Turn: https://myturn.ca.gov/
&quot;Turn the Ship Around!&quot;: https://www.penguinrandomhouse.com/books/314163/turn-the-ship-around-by-l-david-marquet/

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected</itunes:summary></item><item><title>Sean Taylor — Business Decision Problems</title><itunes:title>Sean Taylor — Business Decision Problems</itunes:title><description><![CDATA[Sean joins us to chat about ML models and tools at Lyft Rideshare Labs, Python vs R, time series forecasting with Prophet, and election forecasting.

---

Sean Taylor is a Data Scientist at (and former Head of) Lyft Rideshare Labs, and specializes in methods for solving causal inference and business decision problems. Previously, he was a Research Scientist on Facebook's Core Data Science team. His interests include experiments, causal inference, statistics, machine learning, and economics.

Connect with Sean:
Personal website: https://seanjtaylor.com/
Twitter: https://twitter.com/seanjtaylor
LinkedIn: https://www.linkedin.com/in/seanjtaylor/

---

Topics Discussed:
0:00 Sneak peek, intro
0:50 Pricing algorithms at Lyft
07:46 Loss functions and ETAs at Lyft
12:59 Models and tools at Lyft
20:46 Python vs R
25:30 Forecasting time series data with Prophet
33:06 Election forecasting and prediction markets
40:55 Comparing and evaluating models
43:22 Bottlenecks in going from research to production

Transcript:
http://wandb.me/gd-sean-taylor

Links Discussed:
"How Lyft predicts a rider’s destination for better in-app experience": https://eng.lyft.com/how-lyft-predicts-your-destination-with-attention-791146b0a439
Prophet: https://facebook.github.io/prophet/
Andrew Gelman's blog post "Facebook's Prophet uses Stan": https://statmodeling.stat.columbia.edu/2017/03/01/facebooks-prophet-uses-stan/
Twitter thread "Election forecasting using prediction markets": https://twitter.com/seanjtaylor/status/1270899371706466304
"An Updated Dynamic Bayesian Forecasting Model for the 2020 Election": https://hdsr.mitpress.mit.edu/pub/nw1dzd02/release/1

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></description><content:encoded><![CDATA[Sean joins us to chat about ML models and tools at Lyft Rideshare Labs, Python vs R, time series forecasting with Prophet, and election forecasting.

---

Sean Taylor is a Data Scientist at (and former Head of) Lyft Rideshare Labs, and specializes in methods for solving causal inference and business decision problems. Previously, he was a Research Scientist on Facebook's Core Data Science team. His interests include experiments, causal inference, statistics, machine learning, and economics.

Connect with Sean:
Personal website: https://seanjtaylor.com/
Twitter: https://twitter.com/seanjtaylor
LinkedIn: https://www.linkedin.com/in/seanjtaylor/

---

Topics Discussed:
0:00 Sneak peek, intro
0:50 Pricing algorithms at Lyft
07:46 Loss functions and ETAs at Lyft
12:59 Models and tools at Lyft
20:46 Python vs R
25:30 Forecasting time series data with Prophet
33:06 Election forecasting and prediction markets
40:55 Comparing and evaluating models
43:22 Bottlenecks in going from research to production

Transcript:
http://wandb.me/gd-sean-taylor

Links Discussed:
"How Lyft predicts a rider’s destination for better in-app experience": https://eng.lyft.com/how-lyft-predicts-your-destination-with-attention-791146b0a439
Prophet: https://facebook.github.io/prophet/
Andrew Gelman's blog post "Facebook's Prophet uses Stan": https://statmodeling.stat.columbia.edu/2017/03/01/facebooks-prophet-uses-stan/
Twitter thread "Election forecasting using prediction markets": https://twitter.com/seanjtaylor/status/1270899371706466304
"An Updated Dynamic Bayesian Forecasting Model for the 2020 Election": https://hdsr.mitpress.mit.edu/pub/nw1dzd02/release/1

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1041905833</guid><itunes:image href="https://artwork.captivate.fm/b53057b7-2cd0-48d0-bc89-cc0b52ffa4bf/artworks-dw74ggpynjqys9ub-50rccw-t3000x3000.jpg"/><pubDate>Thu, 13 May 2021 15:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/716d606c-3b4d-4846-b940-cd10fb490a8b/1041905833-wandb-gd-sean-taylor.mp3" length="43850186" type="audio/mpeg"/><itunes:duration>45:41</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Sean joins us to chat about ML models and tools at Lyft Rideshare Labs, Python vs R, time series forecasting with Prophet, and election forecasting.

---

Sean Taylor is a Data Scientist at (and former Head of) Lyft Rideshare Labs, and specializes in methods for solving causal inference and business decision problems. Previously, he was a Research Scientist on Facebook&apos;s Core Data Science team. His interests include experiments, causal inference, statistics, machine learning, and economics.

Connect with Sean:
Personal website: https://seanjtaylor.com/
Twitter: https://twitter.com/seanjtaylor
LinkedIn: https://www.linkedin.com/in/seanjtaylor/

---

Topics Discussed:
0:00 Sneak peek, intro
0:50 Pricing algorithms at Lyft
07:46 Loss functions and ETAs at Lyft
12:59 Models and tools at Lyft
20:46 Python vs R
25:30 Forecasting time series data with Prophet
33:06 Election forecasting and prediction markets
40:55 Comparing and evaluating models
43:22 Bottlenecks in going from research to production

Transcript:
http://wandb.me/gd-sean-taylor

Links Discussed:
&quot;How Lyft predicts a rider's destination for better in-app experience&quot;: https://eng.lyft.com/how-lyft-predicts-your-destination-with-attention-791146b0a439
Prophet: https://facebook.github.io/prophet/
Andrew Gelman&apos;s blog post &quot;Facebook&apos;s Prophet uses Stan&quot;: https://statmodeling.stat.columbia.edu/2017/03/01/facebooks-prophet-uses-stan/
Twitter thread &quot;Election forecasting using prediction markets&quot;: https://twitter.com/seanjtaylor/status/1270899371706466304
&quot;An Updated Dynamic Bayesian Forecasting Model for the 2020 Election&quot;: https://hdsr.mitpress.mit.edu/pub/nw1dzd02/release/1

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected</itunes:summary></item><item><title>Polly Fordyce — Microfluidic Platforms and Machine Learning</title><itunes:title>Polly Fordyce — Microfluidic Platforms and Machine Learning</itunes:title><description><![CDATA[Polly explains how microfluidics allow bioengineering researchers to create high throughput data, and shares her experiences with biology and machine learning. 

---

Polly Fordyce is an Assistant Professor of Genetics and Bioengineering and fellow of the ChEM-H Institute at Stanford. She is the Principal Investigator of The Fordyce Lab, which focuses on developing and applying new microfluidic platforms for quantitative, high-throughput biophysics and biochemistry.

Twitter: https://twitter.com/fordycelab
Website: http://www.fordycelab.com/

---

Topics Discussed:
0:00 Sneak peek, intro
2:11 Background on protein sequencing
7:38 How changes to a protein's sequence alter its structure and function
11:07 Microfluidics and machine learning
19:25 Why protein folding is important
25:17 Collaborating with ML practitioners
31:46 Transfer learning and big data sets in biology
38:42 Where Polly hopes bioengineering research will go
42:43 Advice for students

Transcript:
http://wandb.me/gd-polly-fordyce

Links Discussed:
"The Weather Makers": https://en.wikipedia.org/wiki/The_Wea...

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></description><content:encoded><![CDATA[Polly explains how microfluidics allow bioengineering researchers to create high throughput data, and shares her experiences with biology and machine learning. 

---

Polly Fordyce is an Assistant Professor of Genetics and Bioengineering and fellow of the ChEM-H Institute at Stanford. She is the Principal Investigator of The Fordyce Lab, which focuses on developing and applying new microfluidic platforms for quantitative, high-throughput biophysics and biochemistry.

Twitter: https://twitter.com/fordycelab
Website: http://www.fordycelab.com/

---

Topics Discussed:
0:00 Sneak peek, intro
2:11 Background on protein sequencing
7:38 How changes to a protein's sequence alter its structure and function
11:07 Microfluidics and machine learning
19:25 Why protein folding is important
25:17 Collaborating with ML practitioners
31:46 Transfer learning and big data sets in biology
38:42 Where Polly hopes bioengineering research will go
42:43 Advice for students

Transcript:
http://wandb.me/gd-polly-fordyce

Links Discussed:
"The Weather Makers": https://en.wikipedia.org/wiki/The_Wea...

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1022806447</guid><itunes:image href="https://artwork.captivate.fm/3b7638d5-570f-4f8a-863e-1f6826b75a49/artworks-ymnlzk67aiy4hlh4-dgw35a-t3000x3000.jpg"/><pubDate>Thu, 29 Apr 2021 15:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/01ccbb9b-ef8c-4c4d-9626-2ed3157d98f9/1022806447-wandb-gd-polly-fordyce.mp3" length="44075466" type="audio/mpeg"/><itunes:duration>45:55</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Polly explains how microfluidics allow bioengineering researchers to create high throughput data, and shares her experiences with biology and machine learning. 

---

Polly Fordyce is an Assistant Professor of Genetics and Bioengineering and fellow of the ChEM-H Institute at Stanford. She is the Principal Investigator of The Fordyce Lab, which focuses on developing and applying new microfluidic platforms for quantitative, high-throughput biophysics and biochemistry.

Twitter: https://twitter.com/fordycelab
Website: http://www.fordycelab.com/

---

Topics Discussed:
0:00 Sneak peek, intro
2:11 Background on protein sequencing
7:38 How changes to a protein&apos;s sequence alter its structure and function
11:07 Microfluidics and machine learning
19:25 Why protein folding is important
25:17 Collaborating with ML practitioners
31:46 Transfer learning and big data sets in biology
38:42 Where Polly hopes bioengineering research will go
42:43 Advice for students

Transcript:
http://wandb.me/gd-polly-fordyce

Links Discussed:
&quot;The Weather Makers&quot;: https://en.wikipedia.org/wiki/The_Wea...

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected</itunes:summary></item><item><title>Adrien Gaidon — Advancing ML Research in Autonomous Vehicles</title><itunes:title>Adrien Gaidon — Advancing ML Research in Autonomous Vehicles</itunes:title><description><![CDATA[Adrien Gaidon shares his approach to building teams and taking state-of-the-art research from conception to production at Toyota Research Institute.

---

Adrien Gaidon is the Head of Machine Learning Research at the Toyota Research Institute (TRI). His research focuses on scaling up ML for robot autonomy, spanning Scene and Behavior Understanding, Simulation for Deep Learning, 3D Computer Vision, and Self-Supervised Learning. 

Connect with Adrien:
Twitter: https://twitter.com/adnothing
LinkedIn: https://www.linkedin.com/in/adrien-gaidon-63ab2358/
Personal website: https://adriengaidon.com/

---

Topics Discussed:
0:00 Sneak peek, intro
0:48 Guitars and other favorite tools
3:55 Why is PyTorch so popular?
11:40 Autonomous vehicle research in the long term
15:10 Game-changing academic advances
20:53 The challenges of bringing autonomous vehicles to market
26:05 Perception and prediction
35:01 Fleet learning and meta learning
41:20 The human aspects of machine learning
44:25 The scalability bottleneck

Transcript:
http://wandb.me/gd-adrien-gaidon

Links Discussed:
TRI Global Research: https://www.tri.global/research/
todoist: https://todoist.com/
Contrastive Learning of Structured World Models: https://arxiv.org/abs/1911.12247
SimCLR: https://arxiv.org/abs/2002.05709

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></description><content:encoded><![CDATA[Adrien Gaidon shares his approach to building teams and taking state-of-the-art research from conception to production at Toyota Research Institute.

---

Adrien Gaidon is the Head of Machine Learning Research at the Toyota Research Institute (TRI). His research focuses on scaling up ML for robot autonomy, spanning Scene and Behavior Understanding, Simulation for Deep Learning, 3D Computer Vision, and Self-Supervised Learning. 

Connect with Adrien:
Twitter: https://twitter.com/adnothing
LinkedIn: https://www.linkedin.com/in/adrien-gaidon-63ab2358/
Personal website: https://adriengaidon.com/

---

Topics Discussed:
0:00 Sneak peek, intro
0:48 Guitars and other favorite tools
3:55 Why is PyTorch so popular?
11:40 Autonomous vehicle research in the long term
15:10 Game-changing academic advances
20:53 The challenges of bringing autonomous vehicles to market
26:05 Perception and prediction
35:01 Fleet learning and meta learning
41:20 The human aspects of machine learning
44:25 The scalability bottleneck

Transcript:
http://wandb.me/gd-adrien-gaidon

Links Discussed:
TRI Global Research: https://www.tri.global/research/
todoist: https://todoist.com/
Contrastive Learning of Structured World Models: https://arxiv.org/abs/1911.12247
SimCLR: https://arxiv.org/abs/2002.05709

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1033373617</guid><itunes:image href="https://artwork.captivate.fm/c17ff8d1-0c97-4d2b-a68b-d596d04686fd/artworks-yozpbmyjogvurzfr-1ri0hq-t3000x3000.jpg"/><pubDate>Thu, 22 Apr 2021 15:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/62011adc-e2cb-4331-b4a6-16ce1c3053de/1033373617-wandb-adrien-gaidon.mp3" length="46118033" type="audio/mpeg"/><itunes:duration>48:02</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Adrien Gaidon shares his approach to building teams and taking state-of-the-art research from conception to production at Toyota Research Institute.

---

Adrien Gaidon is the Head of Machine Learning Research at the Toyota Research Institute (TRI). His research focuses on scaling up ML for robot autonomy, spanning Scene and Behavior Understanding, Simulation for Deep Learning, 3D Computer Vision, and Self-Supervised Learning. 

Connect with Adrien:
Twitter: https://twitter.com/adnothing
LinkedIn: https://www.linkedin.com/in/adrien-gaidon-63ab2358/
Personal website: https://adriengaidon.com/

---

Topics Discussed:
0:00 Sneak peek, intro
0:48 Guitars and other favorite tools
3:55 Why is PyTorch so popular?
11:40 Autonomous vehicle research in the long term
15:10 Game-changing academic advances
20:53 The challenges of bringing autonomous vehicles to market
26:05 Perception and prediction
35:01 Fleet learning and meta learning
41:20 The human aspects of machine learning
44:25 The scalability bottleneck

Transcript:
http://wandb.me/gd-adrien-gaidon

Links Discussed:
TRI Global Research: https://www.tri.global/research/
todoist: https://todoist.com/
Contrastive Learning of Structured World Models: https://arxiv.org/abs/1911.12247
SimCLR: https://arxiv.org/abs/2002.05709

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected</itunes:summary></item><item><title>Nimrod Shabtay — Deployment and Monitoring at Nanit</title><itunes:title>Nimrod Shabtay — Deployment and Monitoring at Nanit</itunes:title><description><![CDATA[A look at how Nimrod and the team at Nanit are building smart baby monitor systems, from data collection to model deployment and production monitoring.

---

Nimrod Shabtay is a Senior Computer Vision Algorithm Developer at Nanit, a New York-based company that's developing better baby monitoring devices. 

Connect with Nimrod:
LinkedIn: https://www.linkedin.com/in/nimrod-shabtay-76072840/

---

Links Discussed:
Guidelines for building an accurate and robust ML/DL model in production: https://engineering.nanit.com/guideli...
Careers at Nanit: https://www.nanit.com/jobs

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

---

Join our community of ML practitioners where we host AMAs, share interesting projects, and more:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></description><content:encoded><![CDATA[A look at how Nimrod and the team at Nanit are building smart baby monitor systems, from data collection to model deployment and production monitoring.

---

Nimrod Shabtay is a Senior Computer Vision Algorithm Developer at Nanit, a New York-based company that's developing better baby monitoring devices. 

Connect with Nimrod:
LinkedIn: https://www.linkedin.com/in/nimrod-shabtay-76072840/

---

Links Discussed:
Guidelines for building an accurate and robust ML/DL model in production: https://engineering.nanit.com/guideli...
Careers at Nanit: https://www.nanit.com/jobs

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

---

Join our community of ML practitioners where we host AMAs, share interesting projects, and more:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1022853517</guid><itunes:image href="https://artwork.captivate.fm/7e0d5e48-ba6f-4a1e-8e22-8d51726f1cd8/artworks-pcgfsnwxela49yz6-y1wiyq-t3000x3000.jpg"/><pubDate>Thu, 15 Apr 2021 15:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/6f914f28-0334-4bca-8ed8-44949d0a83b2/1022853517-wandb-nimrod-shabtay.mp3" length="32622549" type="audio/mpeg"/><itunes:duration>33:59</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>A look at how Nimrod and the team at Nanit are building smart baby monitor systems, from data collection to model deployment and production monitoring.

---

Nimrod Shabtay is a Senior Computer Vision Algorithm Developer at Nanit, a New York-based company that&apos;s developing better baby monitoring devices. 

Connect with Nimrod:
LinkedIn: https://www.linkedin.com/in/nimrod-shabtay-76072840/

---

Links Discussed:
Guidelines for building an accurate and robust ML/DL model in production: https://engineering.nanit.com/guideli...
Careers at Nanit: https://www.nanit.com/jobs

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

---

Join our community of ML practitioners where we host AMAs, share interesting projects, and more:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery</itunes:summary></item><item><title>Chris Mattmann — ML Applications on Earth, Mars, and Beyond</title><itunes:title>Chris Mattmann — ML Applications on Earth, Mars, and Beyond</itunes:title><description><![CDATA[Chris shares some of the incredible work and innovations behind deep space exploration at NASA JPL and reflects on the past, present, and future of machine learning.

---

Chris Mattmann is the Chief Technology and Innovation Officer at NASA Jet Propulsion Laboratory, where he focuses on organizational innovation through technology. He's worked on space missions such as the Orbiting Carbon Observatory 2 and Soil Moisture Active Passive satellites.
Chris is also a co-creator of Apache Tika, a content detection and analysis framework that was one of the key technologies used to uncover the Panama Papers, and is the author of "Machine Learning with TensorFlow, Second Edition" and "Tika in Action".

Connect with Chris:
Personal website: https://www.mattmann.ai/
Twitter: https://twitter.com/chrismattmann

---

Topics Discussed:
0:00 Sneak peek, intro
0:52 On Perseverance and Ingenuity
8:40 Machine learning applications at NASA JPL
11:51 Innovation in scientific instruments and data formats
18:26 Data processing levels: Level 1 vs Level 2 vs Level 3
22:20 Competitive data processing
27:38 Kerbal Space Program
30:19 The ideas behind "Machine Learning with Tensorflow, Second Edition"
35:37 The future of MLOps and AutoML
38:51 Machine learning at the edge

Transcript:
http://wandb.me/gd-chris-mattmann

Links Discussed:
Perseverance and Ingenuity: https://mars.nasa.gov/mars2020/
Data processing levels at NASA: https://earthdata.nasa.gov/collaborate/open-data-services-and-software/data-information-policy/data-levels
OCO-2: https://www.jpl.nasa.gov/missions/orbiting-carbon-observatory-2-oco-2
"Machine Learning with TensorFlow, Second Edition" (2020): https://www.manning.com/books/machine-learning-with-tensorflow-second-edition
"Tika in Action" (2011): https://www.manning.com/books/tika-in-action

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></description><content:encoded><![CDATA[Chris shares some of the incredible work and innovations behind deep space exploration at NASA JPL and reflects on the past, present, and future of machine learning.

---

Chris Mattmann is the Chief Technology and Innovation Officer at NASA Jet Propulsion Laboratory, where he focuses on organizational innovation through technology. He's worked on space missions such as the Orbiting Carbon Observatory 2 and Soil Moisture Active Passive satellites.
Chris is also a co-creator of Apache Tika, a content detection and analysis framework that was one of the key technologies used to uncover the Panama Papers, and is the author of "Machine Learning with TensorFlow, Second Edition" and "Tika in Action".

Connect with Chris:
Personal website: https://www.mattmann.ai/
Twitter: https://twitter.com/chrismattmann

---

Topics Discussed:
0:00 Sneak peek, intro
0:52 On Perseverance and Ingenuity
8:40 Machine learning applications at NASA JPL
11:51 Innovation in scientific instruments and data formats
18:26 Data processing levels: Level 1 vs Level 2 vs Level 3
22:20 Competitive data processing
27:38 Kerbal Space Program
30:19 The ideas behind "Machine Learning with Tensorflow, Second Edition"
35:37 The future of MLOps and AutoML
38:51 Machine learning at the edge

Transcript:
http://wandb.me/gd-chris-mattmann

Links Discussed:
Perseverance and Ingenuity: https://mars.nasa.gov/mars2020/
Data processing levels at NASA: https://earthdata.nasa.gov/collaborate/open-data-services-and-software/data-information-policy/data-levels
OCO-2: https://www.jpl.nasa.gov/missions/orbiting-carbon-observatory-2-oco-2
"Machine Learning with TensorFlow, Second Edition" (2020): https://www.manning.com/books/machine-learning-with-tensorflow-second-edition
"Tika in Action" (2011): https://www.manning.com/books/tika-in-action

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1022795128</guid><itunes:image href="https://artwork.captivate.fm/b25d835b-5cc8-401b-b110-602637d3cf18/artworks-8mrucnoqvbt1hyly-ngittg-t3000x3000.jpg"/><pubDate>Thu, 08 Apr 2021 17:24:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f0f80626-8a42-4ff8-96d7-1b8a96d0556f/1022795128-wandb-chris-mattman.mp3" length="40353540" type="audio/mpeg"/><itunes:duration>42:02</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Chris shares some of the incredible work and innovations behind deep space exploration at NASA JPL and reflects on the past, present, and future of machine learning.

---

Chris Mattmann is the Chief Technology and Innovation Officer at NASA Jet Propulsion Laboratory, where he focuses on organizational innovation through technology. He&apos;s worked on space missions such as the Orbiting Carbon Observatory 2 and Soil Moisture Active Passive satellites.
Chris is also a co-creator of Apache Tika, a content detection and analysis framework that was one of the key technologies used to uncover the Panama Papers, and is the author of &quot;Machine Learning with TensorFlow, Second Edition&quot; and &quot;Tika in Action&quot;.

Connect with Chris:
Personal website: https://www.mattmann.ai/
Twitter: https://twitter.com/chrismattmann

---

Topics Discussed:
0:00 Sneak peek, intro
0:52 On Perseverance and Ingenuity
8:40 Machine learning applications at NASA JPL
11:51 Innovation in scientific instruments and data formats
18:26 Data processing levels: Level 1 vs Level 2 vs Level 3
22:20 Competitive data processing
27:38 Kerbal Space Program
30:19 The ideas behind &quot;Machine Learning with Tensorflow, Second Edition&quot;
35:37 The future of MLOps and AutoML
38:51 Machine learning at the edge

Transcript:
http://wandb.me/gd-chris-mattmann

Links Discussed:
Perseverance and Ingenuity: https://mars.nasa.gov/mars2020/
Data processing levels at NASA: https://earthdata.nasa.gov/collaborate/open-data-services-and-software/data-information-policy/data-levels
OCO-2: https://www.jpl.nasa.gov/missions/orbiting-carbon-observatory-2-oco-2
&quot;Machine Learning with TensorFlow, Second Edition&quot; (2020): https://www.manning.com/books/machine-learning-with-tensorflow-second-edition
&quot;Tika in Action&quot; (2011): https://www.manning.com/books/tika-in-action

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected</itunes:summary></item><item><title>Vladlen Koltun — The Power of Simulation and Abstraction</title><itunes:title>Vladlen Koltun — The Power of Simulation and Abstraction</itunes:title><description><![CDATA[From legged locomotion to autonomous driving, Vladlen explains how simulation and abstraction help us understand embodied intelligence.

---

Vladlen Koltun is the Chief Scientist for Intelligent Systems at Intel, where he leads an international lab of researchers working in machine learning, robotics, computer vision, computational science, and related areas. 

Connect with Vladlen:
Personal website: http://vladlen.info/
LinkedIn: https://www.linkedin.com/in/vladlenkoltun/

---

0:00 Sneak peek and intro
1:20 "Intelligent Systems" vs "AI"
3:02 Legged locomotion
9:26 The power of simulation
14:32 Privileged learning
18:19 Drone acrobatics
20:19 Using abstraction to transfer simulations to reality
25:35 Sample Factory for reinforcement learning
34:30 What inspired CARLA and what keeps it going
41:43 The challenges of and for robotics

Links Discussed:
Learning quadrupedal locomotion over challenging terrain (Lee et al., 2020): https://robotics.sciencemag.org/content/5/47/eabc5986.abstract
Deep Drone Acrobatics (Kaufmann et al., 2020): https://arxiv.org/abs/2006.05768
Sample Factory: Egocentric 3D Control from Pixels at 100000 FPS with Asynchronous Reinforcement Learning (Petrenko et al., 2020): https://arxiv.org/abs/2006.11751
CARLA: https://carla.org/

---

Check out the transcription and discover more awesome ML projects:
http://wandb.me/vladlen-koltun-podcast

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

---

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></description><content:encoded><![CDATA[From legged locomotion to autonomous driving, Vladlen explains how simulation and abstraction help us understand embodied intelligence.

---

Vladlen Koltun is the Chief Scientist for Intelligent Systems at Intel, where he leads an international lab of researchers working in machine learning, robotics, computer vision, computational science, and related areas. 

Connect with Vladlen:
Personal website: http://vladlen.info/
LinkedIn: https://www.linkedin.com/in/vladlenkoltun/

---

0:00 Sneak peek and intro
1:20 "Intelligent Systems" vs "AI"
3:02 Legged locomotion
9:26 The power of simulation
14:32 Privileged learning
18:19 Drone acrobatics
20:19 Using abstraction to transfer simulations to reality
25:35 Sample Factory for reinforcement learning
34:30 What inspired CARLA and what keeps it going
41:43 The challenges of and for robotics

Links Discussed
Learning quadrupedal locomotion over challenging terrain (Lee et al., 2020): https://robotics.sciencemag.org/content/5/47/eabc5986.abstract
Deep Drone Acrobatics (Kaufmann et al., 2020): https://arxiv.org/abs/2006.05768
Sample Factory: Egocentric 3D Control from Pixels at 100000 FPS with Asynchronous Reinforcement Learning (Petrenko et al., 2020): https://arxiv.org/abs/2006.11751
CARLA: https://carla.org/

---

Check out the transcription and discover more awesome ML projects:
http://wandb.me/vladlen-koltun-podcast

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

---

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1019430883</guid><itunes:image href="https://artwork.captivate.fm/adebfabc-079c-4c9d-bb97-e134c5913557/artworks-s1gxde2skblyxxyw-gvzara-t3000x3000.jpg"/><pubDate>Thu, 01 Apr 2021 17:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/8bafb2b0-0895-4102-b71f-e14efe3f8a2c/1019430883-wandb-the-power-of-simulation-and-abstraction-with-i.mp3" length="47484759" type="audio/mpeg"/><itunes:duration>49:28</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>From legged locomotion to autonomous driving, Vladlen explains how simulation and abstraction help us understand embodied intelligence.

---

Vladlen Koltun is the Chief Scientist for Intelligent Systems at Intel, where he leads an international lab of researchers working in machine learning, robotics, computer vision, computational science, and related areas. 

Connect with Vladlen:
Personal website: http://vladlen.info/
LinkedIn: https://www.linkedin.com/in/vladlenkoltun/

---

0:00 Sneak peek and intro
1:20 &quot;Intelligent Systems&quot; vs &quot;AI&quot;
3:02 Legged locomotion
9:26 The power of simulation
14:32 Privileged learning
18:19 Drone acrobatics
20:19 Using abstraction to transfer simulations to reality
25:35 Sample Factory for reinforcement learning
34:30 What inspired CARLA and what keeps it going
41:43 The challenges of and for robotics

Links Discussed
Learning quadrupedal locomotion over challenging terrain (Lee et al., 2020): https://robotics.sciencemag.org/content/5/47/eabc5986.abstract
Deep Drone Acrobatics (Kaufmann et al., 2020): https://arxiv.org/abs/2006.05768
Sample Factory: Egocentric 3D Control from Pixels at 100000 FPS with Asynchronous Reinforcement Learning (Petrenko et al., 2020): https://arxiv.org/abs/2006.11751
CARLA: https://carla.org/

---

Check out the transcription and discover more awesome ML projects:
http://wandb.me/vladlen-koltun-podcast

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

---

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery</itunes:summary></item><item><title>Dominik Moritz — Building Intuitive Data Visualization Tools</title><itunes:title>Dominik Moritz — Building Intuitive Data Visualization Tools</itunes:title><description><![CDATA[Dominik shares the story and principles behind Vega and Vega-Lite, and explains how visualization and machine learning help each other.
---
Dominik is a co-author of Vega-Lite, a high-level visualization grammar for building interactive plots. He's also a professor at the Human-Computer Interaction Institute at Carnegie Mellon University and an ML researcher at Apple.

Connect with Dominik
Twitter: https://twitter.com/domoritz
GitHub: https://github.com/domoritz
Personal website: https://www.domoritz.de/
---
0:00 Sneak peek, intro
1:15 What is Vega-Lite?
5:39 The grammar of graphics
9:00 Using visualizations creatively
11:36 Vega vs Vega-Lite
16:03 ggplot2 and machine learning
18:39 Voyager and the challenges of scale
24:54 Model explainability and visualizations
31:24 Underrated topics: constraints and visualization theory
34:38 The challenge of metrics in deployment
36:54 In between aggregate statistics and individual examples

Links Discussed
Vega-Lite: https://vega.github.io/vega-lite/
Data analysis and statistics: an expository overview (Tukey and Wilk, 1966): https://dl.acm.org/doi/10.1145/1464291.1464366
Slope chart / slope graph: https://vega.github.io/vega-lite/examples/line_slope.html
Voyager: https://github.com/vega/voyager
Draco: https://github.com/uwdata/draco

Check out the transcription and discover more awesome ML projects:
http://wandb.me/gd-domink-moritz
---
Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

---

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></description><content:encoded><![CDATA[Dominik shares the story and principles behind Vega and Vega-Lite, and explains how visualization and machine learning help each other.
---
Dominik is a co-author of Vega-Lite, a high-level visualization grammar for building interactive plots. He's also a professor at the Human-Computer Interaction Institute at Carnegie Mellon University and an ML researcher at Apple.

Connect with Dominik
Twitter: https://twitter.com/domoritz
GitHub: https://github.com/domoritz
Personal website: https://www.domoritz.de/
---
0:00 Sneak peek, intro
1:15 What is Vega-Lite?
5:39 The grammar of graphics
9:00 Using visualizations creatively
11:36 Vega vs Vega-Lite
16:03 ggplot2 and machine learning
18:39 Voyager and the challenges of scale
24:54 Model explainability and visualizations
31:24 Underrated topics: constraints and visualization theory
34:38 The challenge of metrics in deployment
36:54 In between aggregate statistics and individual examples

Links Discussed
Vega-Lite: https://vega.github.io/vega-lite/
Data analysis and statistics: an expository overview (Tukey and Wilk, 1966): https://dl.acm.org/doi/10.1145/1464291.1464366
Slope chart / slope graph: https://vega.github.io/vega-lite/examples/line_slope.html
Voyager: https://github.com/vega/voyager
Draco: https://github.com/uwdata/draco

Check out the transcription and discover more awesome ML projects:
http://wandb.me/gd-domink-moritz
---
Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

---

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1015617130</guid><itunes:image href="https://artwork.captivate.fm/9f597b53-18e7-4e28-b3ab-ee72fce56a6a/artworks-nykduynnijmmpqwo-eyqmyw-t3000x3000.jpg"/><pubDate>Thu, 25 Mar 2021 17:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/4590c347-b36a-4c47-b125-bd1b49d44db9/1015617130-wandb-dominik-moritz.mp3" length="37499297" type="audio/mpeg"/><itunes:duration>39:04</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Dominik shares the story and principles behind Vega and Vega-Lite, and explains how visualization and machine learning help each other.
---
Dominik is a co-author of Vega-Lite, a high-level visualization grammar for building interactive plots. He&apos;s also a professor at the Human-Computer Interaction Institute at Carnegie Mellon University and an ML researcher at Apple.

Connect with Dominik
Twitter: https://twitter.com/domoritz
GitHub: https://github.com/domoritz
Personal website: https://www.domoritz.de/
---
0:00 Sneak peek, intro
1:15 What is Vega-Lite?
5:39 The grammar of graphics
9:00 Using visualizations creatively
11:36 Vega vs Vega-Lite
16:03 ggplot2 and machine learning
18:39 Voyager and the challenges of scale
24:54 Model explainability and visualizations
31:24 Underrated topics: constraints and visualization theory
34:38 The challenge of metrics in deployment
36:54 In between aggregate statistics and individual examples

Links Discussed
Vega-Lite: https://vega.github.io/vega-lite/
Data analysis and statistics: an expository overview (Tukey and Wilk, 1966): https://dl.acm.org/doi/10.1145/1464291.1464366
Slope chart / slope graph: https://vega.github.io/vega-lite/examples/line_slope.html
Voyager: https://github.com/vega/voyager
Draco: https://github.com/uwdata/draco

Check out the transcription and discover more awesome ML projects:
http://wandb.me/gd-domink-moritz
---
Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

---

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery</itunes:summary></item><item><title>Cade Metz — The Stories Behind the Rise of AI</title><itunes:title>Cade Metz — The Stories Behind the Rise of AI</itunes:title><description><![CDATA[How Cade got access to the stories behind some of the biggest advancements in AI, and the dynamic playing out between leaders at companies like Google, Microsoft, and Facebook.

Cade Metz is a New York Times reporter covering artificial intelligence, driverless cars, robotics, virtual reality, and other emerging areas. Previously, he was a senior staff writer with Wired magazine and the U.S. editor of The Register, one of Britain’s leading science and technology news sites. His first book, "Genius Makers", tells the stories of the pioneers behind AI.

Get the book: http://bit.ly/GeniusMakers
Follow Cade on Twitter: https://twitter.com/CadeMetz/
And on LinkedIn: https://www.linkedin.com/in/cademetz/

Topics discussed:
0:00 sneak peek, intro
3:25 audience and characters
7:18 *spoiler alert* AGI
11:01 book ends, but story goes on
17:31 overinflated claims in AI
23:12 DeepMind, OpenAI, building AGI
29:02 neuroscience and psychology, outsiders
34:35 Early adopters of ML
38:34 WojNet, where is credit due?
42:45 press covering AI
46:38 Aligning technology and need

Read the transcript and discover awesome ML projects:
http://wandb.me/cade-metz

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></description><content:encoded><![CDATA[How Cade got access to the stories behind some of the biggest advancements in AI, and the dynamic playing out between leaders at companies like Google, Microsoft, and Facebook.

Cade Metz is a New York Times reporter covering artificial intelligence, driverless cars, robotics, virtual reality, and other emerging areas. Previously, he was a senior staff writer with Wired magazine and the U.S. editor of The Register, one of Britain’s leading science and technology news sites. His first book, "Genius Makers", tells the stories of the pioneers behind AI.

Get the book: http://bit.ly/GeniusMakers
Follow Cade on Twitter: https://twitter.com/CadeMetz/
And on LinkedIn: https://www.linkedin.com/in/cademetz/

Topics discussed:
0:00 sneak peek, intro
3:25 audience and characters
7:18 *spoiler alert* AGI
11:01 book ends, but story goes on
17:31 overinflated claims in AI
23:12 DeepMind, OpenAI, building AGI
29:02 neuroscience and psychology, outsiders
34:35 Early adopters of ML
38:34 WojNet, where is credit due?
42:45 press covering AI
46:38 Aligning technology and need

Read the transcript and discover awesome ML projects:
http://wandb.me/cade-metz

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1009238959</guid><itunes:image href="https://artwork.captivate.fm/b55532df-9672-4ec5-93cb-7f8747f3c9f7/artworks-jxp0bicer0rcm7if-1iu6nq-t3000x3000.jpg"/><pubDate>Thu, 18 Mar 2021 17:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/824d9f18-e27d-49e2-bbb8-404c853288d6/1009238959-wandb-cade-metz.mp3" length="47182575" type="audio/mpeg"/><itunes:duration>49:09</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>How Cade got access to the stories behind some of the biggest advancements in AI, and the dynamic playing out between leaders at companies like Google, Microsoft, and Facebook.

Cade Metz is a New York Times reporter covering artificial intelligence, driverless cars, robotics, virtual reality, and other emerging areas. Previously, he was a senior staff writer with Wired magazine and the U.S. editor of The Register, one of Britain’s leading science and technology news sites. His first book, &quot;Genius Makers&quot;, tells the stories of the pioneers behind AI.

Get the book: http://bit.ly/GeniusMakers
Follow Cade on Twitter: https://twitter.com/CadeMetz/
And on LinkedIn: https://www.linkedin.com/in/cademetz/

Topics discussed:
0:00 sneak peek, intro
3:25 audience and characters
7:18 *spoiler alert* AGI
11:01 book ends, but story goes on
17:31 overinflated claims in AI
23:12 DeepMind, OpenAI, building AGI
29:02 neuroscience and psychology, outsiders
34:35 Early adopters of ML
38:34 WojNet, where is credit due?
42:45 press covering AI
46:38 Aligning technology and need

Read the transcript and discover awesome ML projects:
http://wandb.me/cade-metz

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery</itunes:summary></item><item><title>Dave Selinger — AI and the Next Generation of Security Systems</title><itunes:title>Dave Selinger — AI and the Next Generation of Security Systems</itunes:title><description><![CDATA[Learn why traditional home security systems tend to fail and how Dave’s love of tinkering and deep learning are helping him and the team at Deep Sentinel avoid those same pitfalls. He also discusses the importance of combatting racial bias by designing race-agnostic systems and what their approach is to solving that problem.

Dave Selinger is the co-founder and CEO of Deep Sentinel, an intelligent crime prediction and prevention system that stops crime before it happens using deep learning vision techniques. Prior to founding Deep Sentinel, Dave co-founded RichRelevance, an AI recommendation company.

https://www.deepsentinel.com/
https://www.meetup.com/East-Bay-Tri-Valley-Machine-Learning-Meetup/
https://twitter.com/daveselinger

Topics covered:
0:00 Sneak peek, smart vs dumb cameras, intro
0:59 What is Deep Sentinel, how does it work?
6:00 Hardware, edge devices
10:40 OpenCV Fork, tinkering
16:18 ML Meetup, Climbing the AI research ladder
20:36 Challenge of Safety critical applications
27:03 New models, re-training, exhibitionists and voyeurs
31:17 How do you prove your cameras are better?
34:24 Angel investing in AI companies
38:00 Social responsibility with data
43:33 Combatting bias with data systems
52:22 Biggest bottlenecks in production

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Read the transcript and discover more awesome machine learning material here: 
http://wandb.me/Dave-selinger-podcast

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></description><content:encoded><![CDATA[Learn why traditional home security systems tend to fail and how Dave’s love of tinkering and deep learning are helping him and the team at Deep Sentinel avoid those same pitfalls. He also discusses the importance of combatting racial bias by designing race-agnostic systems and what their approach is to solving that problem.

Dave Selinger is the co-founder and CEO of Deep Sentinel, an intelligent crime prediction and prevention system that stops crime before it happens using deep learning vision techniques. Prior to founding Deep Sentinel, Dave co-founded RichRelevance, an AI recommendation company.

https://www.deepsentinel.com/
https://www.meetup.com/East-Bay-Tri-Valley-Machine-Learning-Meetup/
https://twitter.com/daveselinger

Topics covered:
0:00 Sneak peek, smart vs dumb cameras, intro
0:59 What is Deep Sentinel, how does it work?
6:00 Hardware, edge devices
10:40 OpenCV Fork, tinkering
16:18 ML Meetup, Climbing the AI research ladder
20:36 Challenge of Safety critical applications
27:03 New models, re-training, exhibitionists and voyeurs
31:17 How do you prove your cameras are better?
34:24 Angel investing in AI companies
38:00 Social responsibility with data
43:33 Combatting bias with data systems
52:22 Biggest bottlenecks in production

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Read the transcript and discover more awesome machine learning material here: 
http://wandb.me/Dave-selinger-podcast

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/1003771381</guid><itunes:image href="https://artwork.captivate.fm/e70bc361-c8f4-4d76-8ecd-de01e2942c90/artworks-ycqaenwr1uc7d7no-zyambg-t3000x3000.jpg"/><pubDate>Thu, 11 Mar 2021 18:43:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/7f91044f-6ac4-40c3-9a5a-9faba496d8f5/1003771381-wandb-dave-selinger.mp3" length="53885804" type="audio/mpeg"/><itunes:duration>56:08</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Learn why traditional home security systems tend to fail and how Dave’s love of tinkering and deep learning are helping him and the team at Deep Sentinel avoid those same pitfalls. He also discusses the importance of combatting racial bias by designing race-agnostic systems and what their approach is to solving that problem.

Dave Selinger is the co-founder and CEO of Deep Sentinel, an intelligent crime prediction and prevention system that stops crime before it happens using deep learning vision techniques. Prior to founding Deep Sentinel, Dave co-founded RichRelevance, an AI recommendation company.

https://www.deepsentinel.com/
https://www.meetup.com/East-Bay-Tri-Valley-Machine-Learning-Meetup/
https://twitter.com/daveselinger

Topics covered:
0:00 Sneak peek, smart vs dumb cameras, intro
0:59 What is Deep Sentinel, how does it work?
6:00 Hardware, edge devices
10:40 OpenCV Fork, tinkering
16:18 ML Meetup, Climbing the AI research ladder
20:36 Challenge of Safety critical applications
27:03 New models, re-training, exhibitionists and voyeurs
31:17 How do you prove your cameras are better?
34:24 Angel investing in AI companies
38:00 Social responsibility with data
43:33 Combatting bias with data systems
52:22 Biggest bottlenecks in production

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Read the transcript and discover more awesome machine learning material here: 
http://wandb.me/Dave-selinger-podcast

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery</itunes:summary></item><item><title>Tim &amp; Heinrich — Democratizing Reinforcement Learning Research</title><itunes:title>Tim &amp; Heinrich — Democratizing Reinforcement Learning Research</itunes:title><description><![CDATA[Since reinforcement learning requires hefty compute resources, it can be tough to keep up without a serious budget of your own. Find out how the team at Facebook AI Research (FAIR) is looking to increase access and level the playing field with the help of NetHack, an archaic rogue-like video game from the late 80s.

Links discussed:
The NetHack Learning Environment: 
https://ai.facebook.com/blog/nethack-learning-environment-to-advance-deep-reinforcement-learning/
Reinforcement learning, intrinsic motivation: 
https://arxiv.org/abs/2002.12292
Knowledge transfer:
https://arxiv.org/abs/1910.08210

Tim Rocktäschel is a Research Scientist at Facebook AI Research (FAIR) London and a Lecturer in the Department of Computer Science at University College London (UCL). At UCL, he is a member of the UCL Centre for Artificial Intelligence and the UCL Natural Language Processing group. Prior to that, he was a Postdoctoral Researcher in the Whiteson Research Lab, a Stipendiary Lecturer in Computer Science at Hertford College, and a Junior Research Fellow in Computer Science at Jesus College, at the University of Oxford.
https://twitter.com/_rockt

Heinrich Kuttler is an AI and machine learning researcher at Facebook AI Research (FAIR) and before that was a research engineer and team lead at DeepMind.
https://twitter.com/HeinrichKuttler
https://www.linkedin.com/in/heinrich-kuttler/

Topics covered:
0:00 a lack of reproducibility in RL
1:05 What is NetHack and how did the idea come to be?
5:46 RL in Go vs NetHack
11:04 performance of vanilla agents, what do you optimize for
18:36 transferring domain knowledge, source diving
22:27 humans vs machines, intrinsic learning
28:19 ICLR paper - exploration and RL strategies
35:48 the future of reinforcement learning
43:18 going from supervised to reinforcement learning
45:07 reproducibility in RL
50:05 most underrated aspect of ML, biggest challenges?

Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></description><content:encoded><![CDATA[Since reinforcement learning requires hefty compute resources, it can be tough to keep up without a serious budget of your own. Find out how the team at Facebook AI Research (FAIR) is looking to increase access and level the playing field with the help of NetHack, an archaic rogue-like video game from the late 80s.

Links discussed:
The NetHack Learning Environment: 
https://ai.facebook.com/blog/nethack-learning-environment-to-advance-deep-reinforcement-learning/
Reinforcement learning, intrinsic motivation: 
https://arxiv.org/abs/2002.12292
Knowledge transfer:
https://arxiv.org/abs/1910.08210

Tim Rocktäschel is a Research Scientist at Facebook AI Research (FAIR) London and a Lecturer in the Department of Computer Science at University College London (UCL). At UCL, he is a member of the UCL Centre for Artificial Intelligence and the UCL Natural Language Processing group. Prior to that, he was a Postdoctoral Researcher in the Whiteson Research Lab, a Stipendiary Lecturer in Computer Science at Hertford College, and a Junior Research Fellow in Computer Science at Jesus College, at the University of Oxford.
https://twitter.com/_rockt

Heinrich Kuttler is an AI and machine learning researcher at Facebook AI Research (FAIR) and before that was a research engineer and team lead at DeepMind.
https://twitter.com/HeinrichKuttler
https://www.linkedin.com/in/heinrich-kuttler/

Topics covered:
0:00 a lack of reproducibility in RL
1:05 What is NetHack and how did the idea come to be?
5:46 RL in Go vs NetHack
11:04 performance of vanilla agents, what do you optimize for
18:36 transferring domain knowledge, source diving
22:27 humans vs machines, intrinsic learning
28:19 ICLR paper - exploration and RL strategies
35:48 the future of reinforcement learning
43:18 going from supervised to reinforcement learning
45:07 reproducibility in RL
50:05 most underrated aspect of ML, biggest challenges?

Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/998090023</guid><itunes:image href="https://artwork.captivate.fm/f38f1503-125a-43a5-a908-9f76bb545022/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Thu, 04 Mar 2021 04:01:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/4ab25b50-63f5-442c-ae83-245837000eb4/998090023-wandb-tim-heinrich.mp3" length="51986598" type="audio/mpeg"/><itunes:duration>54:09</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Since reinforcement learning requires hefty compute resources, it can be tough to keep up without a serious budget of your own. Find out how the team at Facebook AI Research (FAIR) is looking to increase access and level the playing field with the help of NetHack, an archaic rogue-like video game from the late 80s.

Links discussed:
The NetHack Learning Environment: 
https://ai.facebook.com/blog/nethack-learning-environment-to-advance-deep-reinforcement-learning/
Reinforcement learning, intrinsic motivation: 
https://arxiv.org/abs/2002.12292
Knowledge transfer:
https://arxiv.org/abs/1910.08210

Tim Rocktäschel is a Research Scientist at Facebook AI Research (FAIR) London and a Lecturer in the Department of Computer Science at University College London (UCL). At UCL, he is a member of the UCL Centre for Artificial Intelligence and the UCL Natural Language Processing group. Prior to that, he was a Postdoctoral Researcher in the Whiteson Research Lab, a Stipendiary Lecturer in Computer Science at Hertford College, and a Junior Research Fellow in Computer Science at Jesus College, at the University of Oxford.
https://twitter.com/_rockt

Heinrich Kuttler is an AI and machine learning researcher at Facebook AI Research (FAIR); before that, he was a research engineer and team lead at DeepMind.
https://twitter.com/HeinrichKuttler
https://www.linkedin.com/in/heinrich-kuttler/

Topics covered:
0:00 A lack of reproducibility in RL
1:05 What is NetHack and how did the idea come to be?
5:46 RL in Go vs. NetHack
11:04 Performance of vanilla agents: what do you optimize for?
18:36 Transferring domain knowledge, source diving
22:27 Human vs. machine intrinsic learning
28:19 ICLR paper: exploration and RL strategies
35:48 The future of reinforcement learning
43:18 Going from supervised to reinforcement learning
45:07 Reproducibility in RL
50:05 Most underrated aspect of ML, biggest challenges?

Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA&apos;s, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery</itunes:summary></item><item><title>Daphne Koller — Digital Biology and the Next Epoch of Science</title><itunes:title>Daphne Koller — Digital Biology and the Next Epoch of Science</itunes:title><description><![CDATA[From teaching at Stanford to co-founding Coursera, insitro, and Engageli, Daphne Koller reflects on the importance of education, giving back, and cross-functional research.

Daphne Koller is the founder and CEO of insitro, a company using machine learning to rethink drug discovery and development. She is a MacArthur Fellowship recipient, a member of the National Academy of Engineering and the American Academy of Arts and Sciences, and has been a Professor in the Department of Computer Science at Stanford University. In 2012, Daphne co-founded Coursera, one of the world's largest online education platforms. She is also a co-founder of Engageli, a digital platform designed to optimize student success.

https://www.insitro.com/
https://www.insitro.com/jobs
https://www.engageli.com/
https://www.coursera.org/

Follow Daphne on Twitter: https://twitter.com/DaphneKoller
https://www.linkedin.com/in/daphne-koller-4053a820/

Topics covered:
0:00 Giving back and intro
2:10 insitro's mission statement and Eroom's Law
3:21 The drug discovery process and how ML helps
10:05 Protein folding
15:48 From 2004 to now, what's changed?
22:09 On the availability of biology and vision datasets
26:17 Cross-functional collaboration at insitro
28:18 On teaching and founding Coursera
31:56 The origins of Engageli
36:38 Probabilistic graphical models
39:33 Most underrated topic in ML
43:43 Biggest day-to-day challenges

Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></description><content:encoded><![CDATA[From teaching at Stanford to co-founding Coursera, insitro, and Engageli, Daphne Koller reflects on the importance of education, giving back, and cross-functional research.

Daphne Koller is the founder and CEO of insitro, a company using machine learning to rethink drug discovery and development. She is a MacArthur Fellowship recipient, a member of the National Academy of Engineering and the American Academy of Arts and Sciences, and has been a Professor in the Department of Computer Science at Stanford University. In 2012, Daphne co-founded Coursera, one of the world's largest online education platforms. She is also a co-founder of Engageli, a digital platform designed to optimize student success.

https://www.insitro.com/
https://www.insitro.com/jobs
https://www.engageli.com/
https://www.coursera.org/

Follow Daphne on Twitter: https://twitter.com/DaphneKoller
https://www.linkedin.com/in/daphne-koller-4053a820/

Topics covered:
0:00 Giving back and intro
2:10 insitro's mission statement and Eroom's Law
3:21 The drug discovery process and how ML helps
10:05 Protein folding
15:48 From 2004 to now, what's changed?
22:09 On the availability of biology and vision datasets
26:17 Cross-functional collaboration at insitro
28:18 On teaching and founding Coursera
31:56 The origins of Engageli
36:38 Probabilistic graphical models
39:33 Most underrated topic in ML
43:43 Biggest day-to-day challenges

Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/987077155</guid><itunes:image href="https://artwork.captivate.fm/e0c2c587-506d-4480-b2de-727655453829/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Thu, 18 Feb 2021 17:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/abe9d707-d1dc-490d-9c34-f1686307f784/987077155-wandb-daphne-koller.mp3" length="89041117" type="audio/mpeg"/><itunes:duration>46:16</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>From teaching at Stanford to co-founding Coursera, insitro, and Engageli, Daphne Koller reflects on the importance of education, giving back, and cross-functional research.

Daphne Koller is the founder and CEO of insitro, a company using machine learning to rethink drug discovery and development. She is a MacArthur Fellowship recipient, a member of the National Academy of Engineering and the American Academy of Arts and Sciences, and has been a Professor in the Department of Computer Science at Stanford University. In 2012, Daphne co-founded Coursera, one of the world&apos;s largest online education platforms. She is also a co-founder of Engageli, a digital platform designed to optimize student success.

https://www.insitro.com/
https://www.insitro.com/jobs
https://www.engageli.com/
https://www.coursera.org/

Follow Daphne on Twitter: https://twitter.com/DaphneKoller
https://www.linkedin.com/in/daphne-koller-4053a820/

Topics covered:
0:00 Giving back and intro
2:10 insitro&apos;s mission statement and Eroom&apos;s Law
3:21 The drug discovery process and how ML helps
10:05 Protein folding
15:48 From 2004 to now, what&apos;s changed?
22:09 On the availability of biology and vision datasets
26:17 Cross-functional collaboration at insitro
28:18 On teaching and founding Coursera
31:56 The origins of Engageli
36:38 Probabilistic graphical models
39:33 Most underrated topic in ML
43:43 Biggest day-to-day challenges

Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA&apos;s, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery</itunes:summary></item><item><title>Piero Molino — The Secret Behind Building Successful Open Source Projects</title><itunes:title>Piero Molino — The Secret Behind Building Successful Open Source Projects</itunes:title><description><![CDATA[Piero shares the story of how Ludwig was created, as well as the ins and outs of how Ludwig works and the future of machine learning with no code.

Piero is a Staff Research Scientist in the Hazy Research group at Stanford University. He is a former founding member of Uber AI, where he created Ludwig, worked on applied projects (COTA, Graph Learning for Uber Eats, Uber’s Dialogue System), and published research on NLP, Dialogue, Visualization, Graph Learning, Reinforcement Learning, and Computer Vision.

Topics covered:
0:00 Sneak peek and intro
1:24 What is Ludwig, at a high level?
4:42 What is Ludwig doing under the hood?
7:11 No-code machine learning and data types
14:15 How Ludwig started
17:33 Model performance and underlying architecture
21:52 On Python in ML
24:44 Defaults and W&B integration
28:26 Perspective on NLP after 10 years in the field
31:49 Most underrated aspect of ML
33:30 Hardest part of deploying ML models in the real world

Learn more about Ludwig: https://ludwig-ai.github.io/ludwig-docs/
Piero's Twitter: https://twitter.com/w4nderlus7
Follow Piero on LinkedIn: https://www.linkedin.com/in/pieromolino/?locale=en_US

Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></description><content:encoded><![CDATA[Piero shares the story of how Ludwig was created, as well as the ins and outs of how Ludwig works and the future of machine learning with no code.

Piero is a Staff Research Scientist in the Hazy Research group at Stanford University. He is a former founding member of Uber AI, where he created Ludwig, worked on applied projects (COTA, Graph Learning for Uber Eats, Uber’s Dialogue System), and published research on NLP, Dialogue, Visualization, Graph Learning, Reinforcement Learning, and Computer Vision.

Topics covered:
0:00 Sneak peek and intro
1:24 What is Ludwig, at a high level?
4:42 What is Ludwig doing under the hood?
7:11 No-code machine learning and data types
14:15 How Ludwig started
17:33 Model performance and underlying architecture
21:52 On Python in ML
24:44 Defaults and W&B integration
28:26 Perspective on NLP after 10 years in the field
31:49 Most underrated aspect of ML
33:30 Hardest part of deploying ML models in the real world

Learn more about Ludwig: https://ludwig-ai.github.io/ludwig-docs/
Piero's Twitter: https://twitter.com/w4nderlus7
Follow Piero on LinkedIn: https://www.linkedin.com/in/pieromolino/?locale=en_US

Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/982896013</guid><itunes:image href="https://artwork.captivate.fm/6f3afffa-e904-479d-a7f9-ed865d9e3a1e/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Thu, 11 Feb 2021 17:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/3362a0fd-7995-4369-85fc-60ed43f45249/982896013-wandb-piero-molino.mp3" length="70009373" type="audio/mpeg"/><itunes:duration>36:18</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Piero shares the story of how Ludwig was created, as well as the ins and outs of how Ludwig works and the future of machine learning with no code.

Piero is a Staff Research Scientist in the Hazy Research group at Stanford University. He is a former founding member of Uber AI, where he created Ludwig, worked on applied projects (COTA, Graph Learning for Uber Eats, Uber’s Dialogue System), and published research on NLP, Dialogue, Visualization, Graph Learning, Reinforcement Learning, and Computer Vision.

Topics covered:
0:00 Sneak peek and intro
1:24 What is Ludwig, at a high level?
4:42 What is Ludwig doing under the hood?
7:11 No-code machine learning and data types
14:15 How Ludwig started
17:33 Model performance and underlying architecture
21:52 On Python in ML
24:44 Defaults and W&amp;B integration
28:26 Perspective on NLP after 10 years in the field
31:49 Most underrated aspect of ML
33:30 Hardest part of deploying ML models in the real world

Learn more about Ludwig: https://ludwig-ai.github.io/ludwig-docs/
Piero&apos;s Twitter: https://twitter.com/w4nderlus7
Follow Piero on LinkedIn: https://www.linkedin.com/in/pieromolino/?locale=en_US

Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA&apos;s, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery</itunes:summary></item><item><title>Rosanne Liu — Conducting Fundamental ML Research as a Nonprofit</title><itunes:title>Rosanne Liu — Conducting Fundamental ML Research as a Nonprofit</itunes:title><description><![CDATA[How Rosanne is working to democratize AI research and improve diversity and fairness in the field through starting a non-profit after being a founding member of Uber AI Labs, doing lots of amazing research, and publishing papers at top conferences.

Rosanne is a machine learning researcher and co-founder of ML Collective, a nonprofit organization for open collaboration and mentorship. Before that, she was a founding member of Uber AI. She has published research at NeurIPS, ICLR, ICML, Science, and other top venues. While at school, she used neural networks to help discover novel materials and to optimize fuel efficiency in hybrid vehicles.

ML Collective: http://mlcollective.org/
Controlling Text Generation with Plug and Play Language Models: https://eng.uber.com/pplm/
LCA: Loss Change Allocation for Neural Network Training: https://eng.uber.com/research/lca-loss-change-allocation-for-neural-network-training/

Topics covered:
0:00 Sneak peek, intro
1:53 The origin of ML Collective
5:31 Why a non-profit, and who is MLC for?
14:30 LCA: Loss Change Allocation
18:20 Running an org: research vs. admin work
20:10 Advice for people trying to get published
24:15 On reading papers and the Intrinsic Dimension paper
36:25 NeurIPS: open collaboration
40:20 What is your reward function?
44:44 Underrated aspect of ML
47:22 How to get involved with MLC

Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></description><content:encoded><![CDATA[How Rosanne is working to democratize AI research and improve diversity and fairness in the field through starting a non-profit after being a founding member of Uber AI Labs, doing lots of amazing research, and publishing papers at top conferences.

Rosanne is a machine learning researcher and co-founder of ML Collective, a nonprofit organization for open collaboration and mentorship. Before that, she was a founding member of Uber AI. She has published research at NeurIPS, ICLR, ICML, Science, and other top venues. While at school, she used neural networks to help discover novel materials and to optimize fuel efficiency in hybrid vehicles.

ML Collective: http://mlcollective.org/
Controlling Text Generation with Plug and Play Language Models: https://eng.uber.com/pplm/
LCA: Loss Change Allocation for Neural Network Training: https://eng.uber.com/research/lca-loss-change-allocation-for-neural-network-training/

Topics covered:
0:00 Sneak peek, intro
1:53 The origin of ML Collective
5:31 Why a non-profit, and who is MLC for?
14:30 LCA: Loss Change Allocation
18:20 Running an org: research vs. admin work
20:10 Advice for people trying to get published
24:15 On reading papers and the Intrinsic Dimension paper
36:25 NeurIPS: open collaboration
40:20 What is your reward function?
44:44 Underrated aspect of ML
47:22 How to get involved with MLC

Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/978470368</guid><itunes:image href="https://artwork.captivate.fm/3a8a3321-08e7-45a8-bc0e-aa1d77be6ab4/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Fri, 05 Feb 2021 18:44:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b67a86b1-6473-4994-8bbb-c1f856990b28/978470368-wandb-rosanne-liu.mp3" length="118788806" type="audio/mpeg"/><itunes:duration>49:10</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>How Rosanne is working to democratize AI research and improve diversity and fairness in the field through starting a non-profit after being a founding member of Uber AI Labs, doing lots of amazing research, and publishing papers at top conferences.

Rosanne is a machine learning researcher and co-founder of ML Collective, a nonprofit organization for open collaboration and mentorship. Before that, she was a founding member of Uber AI. She has published research at NeurIPS, ICLR, ICML, Science, and other top venues. While at school, she used neural networks to help discover novel materials and to optimize fuel efficiency in hybrid vehicles.

ML Collective: http://mlcollective.org/
Controlling Text Generation with Plug and Play Language Models: https://eng.uber.com/pplm/
LCA: Loss Change Allocation for Neural Network Training: https://eng.uber.com/research/lca-loss-change-allocation-for-neural-network-training/

Topics covered:
0:00 Sneak peek, intro
1:53 The origin of ML Collective
5:31 Why a non-profit, and who is MLC for?
14:30 LCA: Loss Change Allocation
18:20 Running an org: research vs. admin work
20:10 Advice for people trying to get published
24:15 On reading papers and the Intrinsic Dimension paper
36:25 NeurIPS: open collaboration
40:20 What is your reward function?
44:44 Underrated aspect of ML
47:22 How to get involved with MLC

Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA&apos;s, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery</itunes:summary></item><item><title>Sean Gourley — NLP, National Defense, and Establishing Ground Truth</title><itunes:title>Sean Gourley — NLP, National Defense, and Establishing Ground Truth</itunes:title><description><![CDATA[In this episode of Gradient Dissent, Primer CEO Sean Gourley and Lukas Biewald sit down to talk about NLP, working with vast amounts of information, and how crucially it relates to national defense. They also chat about their experience of being second-time founders coming from a data science background and how it affects the way they run their companies. We hope you enjoy this episode!

Sean Gourley is the founder and CEO of Primer, a natural language processing startup in San Francisco. Previously, he was CTO of Quid, an augmented intelligence company he co-founded in 2009, and prior to that he worked on self-repairing nanocircuits at NASA Ames. Sean has a PhD in physics from Oxford, where his research as a Rhodes Scholar focused on graph theory, complex systems, and the mathematical patterns underlying modern war.

Primer: https://primer.ai/
Follow Sean on Twitter: https://twitter.com/sgourley

Topics covered:
0:00 Sneak peek, intro
1:42 Primer's mission and purpose
4:29 The Diamond Age: training machines to observe the world and help us understand it
7:44 A self-writing Wikipedia
9:30 On being a second-time founder
11:26 Being a founder as a data scientist
15:44 Commercializing algorithms
17:54 Is GPT-3 worth the hype? The mind-blowing scale of transformers
23:00 AI safety, military/defense
29:20 Disinformation: does ML play a role?
34:55 Establishing ground truth and informational provenance
39:10 COVID misinformation, masks, division
44:07 Most underrated aspect of ML
45:09 Biggest bottlenecks in ML?

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on these other platforms:
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery]]></description><content:encoded><![CDATA[In this episode of Gradient Dissent, Primer CEO Sean Gourley and Lukas Biewald sit down to talk about NLP, working with vast amounts of information, and how crucially it relates to national defense. They also chat about their experience of being second-time founders coming from a data science background and how it affects the way they run their companies. We hope you enjoy this episode!

Sean Gourley is the founder and CEO of Primer, a natural language processing startup in San Francisco. Previously, he was CTO of Quid, an augmented intelligence company he co-founded in 2009, and prior to that he worked on self-repairing nanocircuits at NASA Ames. Sean has a PhD in physics from Oxford, where his research as a Rhodes Scholar focused on graph theory, complex systems, and the mathematical patterns underlying modern war.

Primer: https://primer.ai/
Follow Sean on Twitter: https://twitter.com/sgourley

Topics covered:
0:00 Sneak peek, intro
1:42 Primer's mission and purpose
4:29 The Diamond Age: training machines to observe the world and help us understand it
7:44 A self-writing Wikipedia
9:30 On being a second-time founder
11:26 Being a founder as a data scientist
15:44 Commercializing algorithms
17:54 Is GPT-3 worth the hype? The mind-blowing scale of transformers
23:00 AI safety, military/defense
29:20 Disinformation: does ML play a role?
34:55 Establishing ground truth and informational provenance
39:10 COVID misinformation, masks, division
44:07 Most underrated aspect of ML
45:09 Biggest bottlenecks in ML?

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on these other platforms:
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/974197570</guid><itunes:image href="https://artwork.captivate.fm/087a928d-aabe-4295-8b7b-c5e4a7c4baaf/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Thu, 28 Jan 2021 18:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9ce97840-a6b1-4ffc-8473-9b366907a3bf/974197570-wandb-sean-gourley.mp3" length="45328508" type="audio/mpeg"/><itunes:duration>47:13</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>In this episode of Gradient Dissent, Primer CEO Sean Gourley and Lukas Biewald sit down to talk about NLP, working with vast amounts of information, and how crucially it relates to national defense. They also chat about their experience of being second-time founders coming from a data science background and how it affects the way they run their companies. We hope you enjoy this episode!

Sean Gourley is the founder and CEO of Primer, a natural language processing startup in San Francisco. Previously, he was CTO of Quid, an augmented intelligence company he co-founded in 2009, and prior to that he worked on self-repairing nanocircuits at NASA Ames. Sean has a PhD in physics from Oxford, where his research as a Rhodes Scholar focused on graph theory, complex systems, and the mathematical patterns underlying modern war.

Primer: https://primer.ai/
Follow Sean on Twitter: https://twitter.com/sgourley

Topics covered:
0:00 Sneak peek, intro
1:42 Primer&apos;s mission and purpose
4:29 The Diamond Age: training machines to observe the world and help us understand it
7:44 A self-writing Wikipedia
9:30 On being a second-time founder
11:26 Being a founder as a data scientist
15:44 Commercializing algorithms
17:54 Is GPT-3 worth the hype? The mind-blowing scale of transformers
23:00 AI safety, military/defense
29:20 Disinformation: does ML play a role?
34:55 Establishing ground truth and informational provenance
39:10 COVID misinformation, masks, division
44:07 Most underrated aspect of ML
45:09 Biggest bottlenecks in ML?

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on these other platforms:
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA&apos;s, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery</itunes:summary></item><item><title>Peter Wang — Anaconda, Python, and Scientific Computing</title><itunes:title>Peter Wang — Anaconda, Python, and Scientific Computing</itunes:title><description><![CDATA[Peter Wang talks about his journey co-founding Anaconda and serving as its CEO, his perspective on the Python programming language, and its use for scientific computing.

Peter Wang has been developing commercial scientific computing and visualization software for over 15 years. He has extensive experience in software design and development across a broad range of areas, including 3D graphics, geophysics, large data simulation and visualization, financial risk modeling, and medical imaging.

Peter’s interests in the fundamentals of vector computing and interactive visualization led him to co-found Anaconda (formerly Continuum Analytics). Peter leads the open source and community innovation group.

As a creator of the PyData community and conferences, he devotes time and energy to growing the Python data science community, and to advocating for and teaching Python at conferences around the world. Peter holds a BA in Physics from Cornell University.

Follow Peter on Twitter: https://twitter.com/pwang
https://www.anaconda.com/
Intake: https://www.anaconda.com/blog/intake-...
https://pydata.org/
Scientific Data Management in the Coming Decade paper: https://arxiv.org/pdf/cs/0502008.pdf

Topics covered:
0:00 (intro) Technology is not value neutral; Don't punt on ethics
1:30 What is Conda?
2:57 Peter's Story and Anaconda's beginning
6:45 Do you ever regret choosing Python?
9:39 On other programming languages
17:13 Scientific Data Management in the Coming Decade
21:48 Who are your customers?
26:24 The ML hierarchy of needs
30:02 The cybernetic era and Conway's Law
34:31 R vs Python
42:19 Most underrated: Ethics - Don't Punt
46:50 biggest bottlenecks: open-source, Python

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on these other platforms:
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery]]></description><content:encoded><![CDATA[Peter Wang talks about his journey co-founding Anaconda and serving as its CEO, his perspective on the Python programming language, and its use for scientific computing.

Peter Wang has been developing commercial scientific computing and visualization software for over 15 years. He has extensive experience in software design and development across a broad range of areas, including 3D graphics, geophysics, large data simulation and visualization, financial risk modeling, and medical imaging.

Peter’s interests in the fundamentals of vector computing and interactive visualization led him to co-found Anaconda (formerly Continuum Analytics). Peter leads the open source and community innovation group.

As a creator of the PyData community and conferences, he devotes time and energy to growing the Python data science community, and to advocating for and teaching Python at conferences around the world. Peter holds a BA in Physics from Cornell University.

Follow Peter on Twitter: https://twitter.com/pwang
https://www.anaconda.com/
Intake: https://www.anaconda.com/blog/intake-...
https://pydata.org/
Scientific Data Management in the Coming Decade paper: https://arxiv.org/pdf/cs/0502008.pdf

Topics covered:
0:00 (intro) Technology is not value neutral; Don't punt on ethics
1:30 What is Conda?
2:57 Peter's Story and Anaconda's beginning
6:45 Do you ever regret choosing Python?
9:39 On other programming languages
17:13 Scientific Data Management in the Coming Decade
21:48 Who are your customers?
26:24 The ML hierarchy of needs
30:02 The cybernetic era and Conway's Law
34:31 R vs Python
42:19 Most underrated: Ethics - Don't Punt
46:50 biggest bottlenecks: open-source, Python

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on these other platforms:
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/970085233</guid><itunes:image href="https://artwork.captivate.fm/38bfa7d5-273a-45cc-b4d4-931b3a5abc91/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Thu, 21 Jan 2021 19:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/df94ba47-cf03-4447-bab4-8ed22340c0ab/970085233-wandb-peter-wang.mp3" length="48176900" type="audio/mpeg"/><itunes:duration>50:11</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Peter Wang talks about his journey co-founding Anaconda and serving as its CEO, his perspective on the Python programming language, and its use for scientific computing.

Peter Wang has been developing commercial scientific computing and visualization software for over 15 years. He has extensive experience in software design and development across a broad range of areas, including 3D graphics, geophysics, large data simulation and visualization, financial risk modeling, and medical imaging.

Peter’s interests in the fundamentals of vector computing and interactive visualization led him to co-found Anaconda (formerly Continuum Analytics). Peter leads the open source and community innovation group.

As a creator of the PyData community and conferences, he devotes time and energy to growing the Python data science community, and to advocating for and teaching Python at conferences around the world. Peter holds a BA in Physics from Cornell University.

Follow Peter on Twitter: https://twitter.com/pwang
https://www.anaconda.com/
Intake: https://www.anaconda.com/blog/intake-...
https://pydata.org/
Scientific Data Management in the Coming Decade paper: https://arxiv.org/pdf/cs/0502008.pdf

Topics covered:
0:00 (intro) Technology is not value neutral; Don&apos;t punt on ethics
1:30 What is Conda?
2:57 Peter&apos;s Story and Anaconda&apos;s beginning
6:45 Do you ever regret choosing Python?
9:39 On other programming languages
17:13 Scientific Data Management in the Coming Decade
21:48 Who are your customers?
26:24 The ML hierarchy of needs
30:02 The cybernetic era and Conway&apos;s Law
34:31 R vs Python
42:19 Most underrated: Ethics - Don&apos;t Punt
46:50 biggest bottlenecks: open-source, Python

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on these other platforms:
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA&apos;s, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery</itunes:summary></item><item><title>Chris Anderson — Robocars, Drones, and WIRED Magazine</title><itunes:title>Chris Anderson — Robocars, Drones, and WIRED Magazine</itunes:title><description><![CDATA[Chris shares his journey from playing in R.E.M. to becoming interested in physics to leading WIRED Magazine for 11 years. His fascination with robots led him to start a drone manufacturing company and to create a community democratizing self-driving cars.

Chris Anderson is the CEO of 3D Robotics, founder of the Linux Foundation Dronecode Project, and founder of the DIY Drones and DIY Robocars communities. From 2001 through 2012 he was the Editor in Chief of WIRED Magazine. He's also the author of the New York Times bestsellers The Long Tail, Free, and Makers: The New Industrial Revolution. In 2007 he was named to the "Time 100" list of the most influential people in the world.

Links discussed in this episode:
DIY Robocars: diyrobocars.com
Getting Started with Robocars: https://diyrobocars.com/2020/10/31/getting-started-with-robocars/
DIY Robotics Meet Up: https://www.meetup.com/DIYRobocars
Other Works
3DRobotics: https://www.3dr.com/
The Long Tail by Chris Anderson: https://www.amazon.com/Long-Tail-Future-Business-Selling/dp/1401309666/ref=sr_1_1?dchild=1&keywords=The+Long+Tail&qid=1610580178&s=books&sr=1-1
Interesting links Chris shared
OpenMV: https://openmv.io/
Intel Tracking Camera: https://www.intelrealsense.com/tracking-camera-t265/
Zumi Self-Driving Car Kit: https://www.robolink.com/zumi/
Possible Minds: Twenty-Five Ways of Looking at AI: https://www.amazon.com/Possible-Minds-Twenty-Five-Ways-Looking/dp/0525557997

Topics discussed:
0:00 sneak peek and intro
1:03 Battle of the REM's
3:35 A brief stint with Physics
5:09 Becoming a journalist and the woes of being a modern physicist
9:25 WIRED in the aughts
12:13 perspectives on "The Long Tail"
20:47 getting into drones
25:08 "Take a smartphone, add wings"
28:07 How did you get to autonomous racing cars?
33:30 COVID and virtual environments
38:40 Chris's hope for Robocars
40:54 Robocar hardware, software, sensors
53:49 path to Singularity/ regulations on drones
58:50 "the golden age of simulation"
1:00:22 biggest challenge in deploying ML models

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on these other platforms:
YouTube: http://wandb.me/youtube
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery]]></description><content:encoded><![CDATA[Chris shares his journey from playing in R.E.M. to becoming interested in physics to leading WIRED Magazine for 11 years. His fascination with robots led him to start a drone manufacturing company and to create a community democratizing self-driving cars.

Chris Anderson is the CEO of 3D Robotics, founder of the Linux Foundation Dronecode Project, and founder of the DIY Drones and DIY Robocars communities. From 2001 through 2012 he was the Editor in Chief of WIRED Magazine. He's also the author of the New York Times bestsellers The Long Tail, Free, and Makers: The New Industrial Revolution. In 2007 he was named to the "Time 100" list of the most influential people in the world.

Links discussed in this episode:
DIY Robocars: diyrobocars.com
Getting Started with Robocars: https://diyrobocars.com/2020/10/31/getting-started-with-robocars/
DIY Robotics Meet Up: https://www.meetup.com/DIYRobocars
Other Works
3DRobotics: https://www.3dr.com/
The Long Tail by Chris Anderson: https://www.amazon.com/Long-Tail-Future-Business-Selling/dp/1401309666/ref=sr_1_1?dchild=1&keywords=The+Long+Tail&qid=1610580178&s=books&sr=1-1
Interesting links Chris shared
OpenMV: https://openmv.io/
Intel Tracking Camera: https://www.intelrealsense.com/tracking-camera-t265/
Zumi Self-Driving Car Kit: https://www.robolink.com/zumi/
Possible Minds: Twenty-Five Ways of Looking at AI: https://www.amazon.com/Possible-Minds-Twenty-Five-Ways-Looking/dp/0525557997

Topics discussed:
0:00 sneak peek and intro
1:03 Battle of the REM's
3:35 A brief stint with Physics
5:09 Becoming a journalist and the woes of being a modern physicist
9:25 WIRED in the aughts
12:13 perspectives on "The Long Tail"
20:47 getting into drones
25:08 "Take a smartphone, add wings"
28:07 How did you get to autonomous racing cars?
33:30 COVID and virtual environments
38:40 Chris's hope for Robocars
40:54 Robocar hardware, software, sensors
53:49 path to Singularity/ regulations on drones
58:50 "the golden age of simulation"
1:00:22 biggest challenge in deploying ML models

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on these other platforms:
YouTube: http://wandb.me/youtube
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/965124268</guid><itunes:image href="https://artwork.captivate.fm/7f57b22d-5129-4bf6-b496-70eb9c118629/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Thu, 14 Jan 2021 18:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/fa236537-7235-494a-bb7d-3b11b35b08ce/965124268-wandb-chris-anderson.mp3" length="60905847" type="audio/mpeg"/><itunes:duration>01:03:27</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Chris shares his journey from playing in R.E.M. to becoming interested in physics to leading WIRED Magazine for 11 years. His fascination with robots led him to start a drone manufacturing company and to create a community democratizing self-driving cars.

Chris Anderson is the CEO of 3D Robotics, founder of the Linux Foundation Dronecode Project, and founder of the DIY Drones and DIY Robocars communities. From 2001 through 2012 he was the Editor in Chief of WIRED Magazine. He&apos;s also the author of the New York Times bestsellers The Long Tail, Free, and Makers: The New Industrial Revolution. In 2007 he was named to the &quot;Time 100&quot; list of the most influential people in the world.

Links discussed in this episode:
DIY Robocars: diyrobocars.com
Getting Started with Robocars: https://diyrobocars.com/2020/10/31/getting-started-with-robocars/
DIY Robotics Meet Up: https://www.meetup.com/DIYRobocars
Other Works
3DRobotics: https://www.3dr.com/
The Long Tail by Chris Anderson: https://www.amazon.com/Long-Tail-Future-Business-Selling/dp/1401309666/ref=sr_1_1?dchild=1&amp;keywords=The+Long+Tail&amp;qid=1610580178&amp;s=books&amp;sr=1-1
Interesting links Chris shared
OpenMV: https://openmv.io/
Intel Tracking Camera: https://www.intelrealsense.com/tracking-camera-t265/
Zumi Self-Driving Car Kit: https://www.robolink.com/zumi/
Possible Minds: Twenty-Five Ways of Looking at AI: https://www.amazon.com/Possible-Minds-Twenty-Five-Ways-Looking/dp/0525557997

Topics discussed:
0:00 sneak peek and intro
1:03 Battle of the REM&apos;s
3:35 A brief stint with Physics
5:09 Becoming a journalist and the woes of being a modern physicist
9:25 WIRED in the aughts
12:13 perspectives on &quot;The Long Tail&quot;
20:47 getting into drones
25:08 &quot;Take a smartphone, add wings&quot;
28:07 How did you get to autonomous racing cars?
33:30 COVID and virtual environments
38:40 Chris&apos;s hope for Robocars
40:54 Robocar hardware, software, sensors
53:49 path to Singularity/ regulations on drones
58:50 &quot;the golden age of simulation&quot;
1:00:22 biggest challenge in deploying ML models

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on these other platforms:
YouTube: http://wandb.me/youtube
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA&apos;s, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery</itunes:summary></item><item><title>Adrien Treuille — Building Blazingly Fast Tools That People Love</title><itunes:title>Adrien Treuille — Building Blazingly Fast Tools That People Love</itunes:title><description><![CDATA[Adrien shares his journey from making games that advance science (Eterna, Foldit) to creating Streamlit, an open-source app framework that lets ML and data practitioners build powerful, interactive apps in a few hours.

Adrien is co-founder and CEO of Streamlit, an open-source app framework that helps create beautiful data apps in hours in pure Python. Dr. Treuille has been a Zoox VP, Google X project lead, and Computer Science faculty at Carnegie Mellon. He has won numerous scientific awards, including the MIT TR35. Adrien has been featured in the documentaries What Will the Future Be Like by PBS/NOVA, and Lo and Behold by Werner Herzog.

https://twitter.com/myelbows
https://www.linkedin.com/in/adrien-treuille-52215718/
https://www.streamlit.io/
https://eternagame.org/
https://fold.it/

Topics covered:
0:00 sneak peek/Streamlit
0:47 intro
1:21 from aspiring guitar player to machine learning
4:16 Foldit - games that train humans
10:08 Eterna - another game and its relation to ML
16:15 Research areas as a professor at Carnegie Mellon
18:07 the origin of Streamlit
23:53 evolution of Streamlit: data science-ing a pivot
30:20 on programming languages
32:20 what’s next for Streamlit
37:34 On meditation and work/life
41:40 Underrated aspect of Machine Learning
43:07 Biggest challenge in deploying ML in the real world

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on YouTube, Apple, Spotify, and Google!
YouTube: http://wandb.me/youtube
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.]]></description><content:encoded><![CDATA[Adrien shares his journey from making games that advance science (Eterna, Foldit) to creating Streamlit, an open-source app framework that lets ML and data practitioners build powerful, interactive apps in a few hours.

Adrien is co-founder and CEO of Streamlit, an open-source app framework that helps create beautiful data apps in hours in pure Python. Dr. Treuille has been a Zoox VP, Google X project lead, and Computer Science faculty at Carnegie Mellon. He has won numerous scientific awards, including the MIT TR35. Adrien has been featured in the documentaries What Will the Future Be Like by PBS/NOVA, and Lo and Behold by Werner Herzog.

https://twitter.com/myelbows
https://www.linkedin.com/in/adrien-treuille-52215718/
https://www.streamlit.io/
https://eternagame.org/
https://fold.it/

Topics covered:
0:00 sneak peek/Streamlit
0:47 intro
1:21 from aspiring guitar player to machine learning
4:16 Foldit - games that train humans
10:08 Eterna - another game and its relation to ML
16:15 Research areas as a professor at Carnegie Mellon
18:07 the origin of Streamlit
23:53 evolution of Streamlit: data science-ing a pivot
30:20 on programming languages
32:20 what’s next for Streamlit
37:34 On meditation and work/life
41:40 Underrated aspect of Machine Learning
43:07 Biggest challenge in deploying ML in the real world

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on YouTube, Apple, Spotify, and Google!
YouTube: http://wandb.me/youtube
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/941141545</guid><itunes:image href="https://artwork.captivate.fm/4dcaa27f-cee4-4f93-b8ff-139f6bb62300/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Fri, 04 Dec 2020 18:30:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/91049d6c-99cf-4011-b71f-031282f79336/941141545-wandb-adrien-treuille.mp3" length="43794180" type="audio/mpeg"/><itunes:duration>45:37</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Adrien shares his journey from making games that advance science (Eterna, Foldit) to creating Streamlit, an open-source app framework that lets ML and data practitioners build powerful, interactive apps in a few hours.

Adrien is co-founder and CEO of Streamlit, an open-source app framework that helps create beautiful data apps in hours in pure Python. Dr. Treuille has been a Zoox VP, Google X project lead, and Computer Science faculty at Carnegie Mellon. He has won numerous scientific awards, including the MIT TR35. Adrien has been featured in the documentaries What Will the Future Be Like by PBS/NOVA, and Lo and Behold by Werner Herzog.

https://twitter.com/myelbows
https://www.linkedin.com/in/adrien-treuille-52215718/
https://www.streamlit.io/
https://eternagame.org/
https://fold.it/

Topics covered:
0:00 sneak peek/Streamlit
0:47 intro
1:21 from aspiring guitar player to machine learning
4:16 Foldit - games that train humans
10:08 Eterna - another game and its relation to ML
16:15 Research areas as a professor at Carnegie Mellon
18:07 the origin of Streamlit
23:53 evolution of Streamlit: data science-ing a pivot
30:20 on programming languages
32:20 what’s next for Streamlit
37:34 On meditation and work/life
41:40 Underrated aspect of Machine Learning
43:07 Biggest challenge in deploying ML in the real world

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on YouTube, Apple, Spotify, and Google!
YouTube: http://wandb.me/youtube
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon

Join our community of ML practitioners where we host AMA&apos;s, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.</itunes:summary></item><item><title>Peter Norvig – Singularity Is in the Eye of the Beholder</title><itunes:title>Peter Norvig – Singularity Is in the Eye of the Beholder</itunes:title><description><![CDATA[We're thrilled to have Peter Norvig join us to talk about the evolution of deep learning, his industry-defining book, his work at Google, and what he thinks the future holds for machine learning research.

Peter Norvig is a Director of Research at Google Inc.; previously he directed Google's core search algorithms group. He is co-author of Artificial Intelligence: A Modern Approach, the leading textbook in the field, and co-teacher of an online Artificial Intelligence class that signed up 160,000 students. Prior to his work at Google, Norvig was NASA's chief computer scientist.

Peter's website:
https://norvig.com/

Topics covered:
0:00 singularity is in the eye of the beholder
0:32 introduction
1:09 project Euler
2:42 advent of code/pytudes
4:55 new sections in the new version of his book
10:32 unreasonable effectiveness of data Paper 15 years later
14:44 what advice would you give to a young researcher?
16:03 computing power in the evolution of deep learning
19:19 what's been surprising in the development of AI?
24:21 from alpha go to human-like intelligence
28:46 What in AI has been surprisingly hard or easy?
32:11 synthetic data and language
35:16 singularity is in the eye of the beholder
38:43 the future of python in ML and why he used it in his book
43:00 underrated topic in ML and bottlenecks in production

Visit our podcasts homepage for transcripts and more episodes!
https://www.wandb.com/podcast

Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: https://tiny.cc/GD_Google

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about these building tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
https://tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
https://bit.ly/wb-slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery]]></description><content:encoded><![CDATA[We're thrilled to have Peter Norvig join us to talk about the evolution of deep learning, his industry-defining book, his work at Google, and what he thinks the future holds for machine learning research.

Peter Norvig is a Director of Research at Google Inc.; previously he directed Google's core search algorithms group. He is co-author of Artificial Intelligence: A Modern Approach, the leading textbook in the field, and co-teacher of an online Artificial Intelligence class that signed up 160,000 students. Prior to his work at Google, Norvig was NASA's chief computer scientist.

Peter's website:
https://norvig.com/

Topics covered:
0:00 singularity is in the eye of the beholder
0:32 introduction
1:09 project Euler
2:42 advent of code/pytudes
4:55 new sections in the new version of his book
10:32 unreasonable effectiveness of data Paper 15 years later
14:44 what advice would you give to a young researcher?
16:03 computing power in the evolution of deep learning
19:19 what's been surprising in the development of AI?
24:21 from alpha go to human-like intelligence
28:46 What in AI has been surprisingly hard or easy?
32:11 synthetic data and language
35:16 singularity is in the eye of the beholder
38:43 the future of python in ML and why he used it in his book
43:00 underrated topic in ML and bottlenecks in production

Visit our podcasts homepage for transcripts and more episodes!
https://www.wandb.com/podcast

Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: https://tiny.cc/GD_Google

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about these building tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
https://tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
https://bit.ly/wb-slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/928363108</guid><itunes:image href="https://artwork.captivate.fm/a3b118d6-3451-4d42-a827-7d20e99953a6/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Fri, 20 Nov 2020 17:55:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/4936c229-4c62-4a73-bb2b-a78b99375709/928363108-wandb-peter-norvig.mp3" length="45292981" type="audio/mpeg"/><itunes:duration>47:11</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>We&apos;re thrilled to have Peter Norvig join us to talk about the evolution of deep learning, his industry-defining book, his work at Google, and what he thinks the future holds for machine learning research.

Peter Norvig is a Director of Research at Google Inc.; previously he directed Google&apos;s core search algorithms group. He is co-author of Artificial Intelligence: A Modern Approach, the leading textbook in the field, and co-teacher of an online Artificial Intelligence class that signed up 160,000 students. Prior to his work at Google, Norvig was NASA&apos;s chief computer scientist.

Peter&apos;s website:
https://norvig.com/

Topics covered:
0:00 singularity is in the eye of the beholder
0:32 introduction
1:09 project Euler
2:42 advent of code/pytudes
4:55 new sections in the new version of his book
10:32 unreasonable effectiveness of data Paper 15 years later
14:44 what advice would you give to a young researcher?
16:03 computing power in the evolution of deep learning
19:19 what&apos;s been surprising in the development of AI?
24:21 from alpha go to human-like intelligence
28:46 What in AI has been surprisingly hard or easy?
32:11 synthetic data and language
35:16 singularity is in the eye of the beholder
38:43 the future of python in ML and why he used it in his book
43:00 underrated topic in ML and bottlenecks in production

Visit our podcasts homepage for transcripts and more episodes!
https://www.wandb.com/podcast

Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: https://tiny.cc/GD_Google

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
https://tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA&apos;s, share interesting projects and meet other people working in Deep Learning:
https://bit.ly/wb-slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery</itunes:summary></item><item><title>Robert Nishihara — The State of Distributed Computing in ML</title><itunes:title>Robert Nishihara — The State of Distributed Computing in ML</itunes:title><description><![CDATA[The story of Ray and what led Robert to go from reinforcement learning researcher to creating open-source tools for machine learning and beyond

Robert is currently working on Ray, a high-performance distributed execution framework for AI applications. He studied mathematics at Harvard. He’s broadly interested in applied math, machine learning, and optimization, and was a member of the Statistical AI Lab, the AMPLab/RISELab, and the Berkeley AI Research Lab at UC Berkeley.

robertnishihara.com
https://anyscale.com/
https://github.com/ray-project/ray
https://twitter.com/robertnishihara
https://www.linkedin.com/in/robert-nishihara-b6465444/

Topics covered:
0:00 sneak peek + intro
1:09 what is Ray?
3:07 Spark and Ray
5:48 reinforcement learning
8:15 non-ML use cases of Ray
10:00 RL in the real world and common uses of Ray
13:49 Python in ML
16:38 from grad school to ML tools company
20:40 pulling product requirements in surprising directions
23:25 how to manage a large open source community
27:05 Ray Tune
29:35 where do you see bottlenecks in production?
31:39 An underrated aspect of Machine Learning

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast 

Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: http://tiny.cc/GD_Google

Subscribe to our YouTube channel for videos of these podcasts and more Machine learning-related videos:
https://www.youtube.com/c/WeightsBiases

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://bit.ly/wb-slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://app.wandb.ai/gallery]]></description><content:encoded><![CDATA[The story of Ray and what led Robert to go from reinforcement learning researcher to creating open-source tools for machine learning and beyond

Robert is currently working on Ray, a high-performance distributed execution framework for AI applications. He studied mathematics at Harvard. He’s broadly interested in applied math, machine learning, and optimization, and was a member of the Statistical AI Lab, the AMPLab/RISELab, and the Berkeley AI Research Lab at UC Berkeley.

robertnishihara.com
https://anyscale.com/
https://github.com/ray-project/ray
https://twitter.com/robertnishihara
https://www.linkedin.com/in/robert-nishihara-b6465444/

Topics covered:
0:00 sneak peek + intro
1:09 what is Ray?
3:07 Spark and Ray
5:48 reinforcement learning
8:15 non-ML use cases of Ray
10:00 RL in the real world and common uses of Ray
13:49 Python in ML
16:38 from grad school to ML tools company
20:40 pulling product requirements in surprising directions
23:25 how to manage a large open source community
27:05 Ray Tune
29:35 where do you see bottlenecks in production?
31:39 An underrated aspect of Machine Learning

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast 

Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: http://tiny.cc/GD_Google

Subscribe to our YouTube channel for videos of these podcasts and more Machine learning-related videos:
https://www.youtube.com/c/WeightsBiases

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://bit.ly/wb-slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://app.wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/923617159</guid><itunes:image href="https://artwork.captivate.fm/cc631b0a-4425-4b2b-ba0e-847e947f9994/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Fri, 13 Nov 2020 18:01:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/49cdf48f-b93a-4022-9c5a-b13aeb81a583/923617159-wandb-robert-nishihara.mp3" length="33894817" type="audio/mpeg"/><itunes:duration>35:18</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>The story of Ray and what led Robert to go from reinforcement learning researcher to creating open-source tools for machine learning and beyond

Robert is currently working on Ray, a high-performance distributed execution framework for AI applications. He studied mathematics at Harvard. He’s broadly interested in applied math, machine learning, and optimization, and was a member of the Statistical AI Lab, the AMPLab/RISELab, and the Berkeley AI Research Lab at UC Berkeley.

robertnishihara.com
https://anyscale.com/
https://github.com/ray-project/ray
https://twitter.com/robertnishihara
https://www.linkedin.com/in/robert-nishihara-b6465444/

Topics covered:
0:00 sneak peek + intro
1:09 what is Ray?
3:07 Spark and Ray
5:48 reinforcement learning
8:15 non-ML use cases of Ray
10:00 RL in the real world and common uses of Ray
13:49 Python in ML
16:38 from grad school to ML tools company
20:40 pulling product requirements in surprising directions
23:25 how to manage a large open source community
27:05 Ray Tune
29:35 where do you see bottlenecks in production?
31:39 An underrated aspect of Machine Learning

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast 

Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: http://tiny.cc/GD_Google

Subscribe to our YouTube channel for videos of these podcasts and more Machine learning-related videos:
https://www.youtube.com/c/WeightsBiases

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA&apos;s, share interesting projects and meet other people working in Deep Learning:
http://bit.ly/wb-slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://app.wandb.ai/gallery</itunes:summary></item><item><title>Ines &amp; Sofie — Building Industrial-Strength NLP Pipelines</title><itunes:title>Ines &amp; Sofie — Building Industrial-Strength NLP Pipelines</itunes:title><description><![CDATA[Sofie and Ines walk us through how the new spaCy library helps build end to end SOTA natural language processing workflows.

Ines Montani is the co-founder of Explosion AI, a digital studio specializing in tools for AI technology. She's a core developer of spaCy, one of the leading open-source libraries for Natural Language Processing in Python, and of Prodigy, a new data annotation tool powered by active learning. Before founding Explosion AI, she was a freelance front-end developer and strategist.
https://twitter.com/_inesmontani

Sofie Van Landeghem is a Natural Language Processing and Machine Learning engineer at Explosion.ai. She is a Software Engineer at heart, with an absurd love for quality assurance and testing, introducing proper levels of abstraction, and ensuring code robustness and modularity.

She has more than 12 years of experience in Natural Language Processing and Machine Learning, including in the pharmaceutical industry and the food industry.
https://twitter.com/oxykodit

https://spacy.io/
https://prodi.gy/
https://thinc.ai/
https://explosion.ai/

Topics covered:
0:00 Sneak peek
0:35 intro
2:29 How spaCy was started
6:11 Business model, open source
9:55 What was spaCy designed to solve?
12:23 advances in NLP and modern practices in industry
17:19 what differentiates spaCy from a more research focused NLP library?
19:28 Multi-lingual/domain specific support
23:52 spaCy V3 configuration
28:16 Thoughts on Python, Cython, and other programming languages for ML
33:45 Making things clear and reproducible
37:30 prodigy and getting good training data
44:09 most underrated aspect of ML
51:00 hardest part of putting models into production

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: bit.ly/2WdrUvI
Spotify: bit.ly/2SqtadF
Google: tiny.cc/GD_Google

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
bit.ly/wb-slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
app.wandb.ai/gallery]]></description><content:encoded><![CDATA[Sofie and Ines walk us through how the new spaCy library helps build end to end SOTA natural language processing workflows.

Ines Montani is the co-founder of Explosion AI, a digital studio specializing in tools for AI technology. She's a core developer of spaCy, one of the leading open-source libraries for Natural Language Processing in Python, and of Prodigy, a new data annotation tool powered by active learning. Before founding Explosion AI, she was a freelance front-end developer and strategist.
https://twitter.com/_inesmontani

Sofie Van Landeghem is a Natural Language Processing and Machine Learning engineer at Explosion.ai. She is a Software Engineer at heart, with an absurd love for quality assurance and testing, introducing proper levels of abstraction, and ensuring code robustness and modularity.

She has more than 12 years of experience in Natural Language Processing and Machine Learning, including in the pharmaceutical industry and the food industry.
https://twitter.com/oxykodit

https://spacy.io/
https://prodi.gy/
https://thinc.ai/
https://explosion.ai/

Topics covered:
0:00 Sneak peek
0:35 intro
2:29 How spaCy was started
6:11 Business model, open source
9:55 What was spaCy designed to solve?
12:23 advances in NLP and modern practices in industry
17:19 what differentiates spaCy from a more research focused NLP library?
19:28 Multi-lingual/domain specific support
23:52 spaCy V3 configuration
28:16 Thoughts on Python, Cython, and other programming languages for ML
33:45 Making things clear and reproducible
37:30 prodigy and getting good training data
44:09 most underrated aspect of ML
51:00 hardest part of putting models into production

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: bit.ly/2WdrUvI
Spotify: bit.ly/2SqtadF
Google: tiny.cc/GD_Google

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
bit.ly/wb-slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
app.wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/919258699</guid><itunes:image href="https://artwork.captivate.fm/367d8269-bbb2-42e6-8903-a2eed9bd264a/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Thu, 29 Oct 2020 17:55:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/fa5a3ed4-3e54-414c-83b6-88bc0199f47d/919258699-wandb-ines-sophie.mp3" length="56322506" type="audio/mpeg"/><itunes:duration>58:40</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Sofie and Ines walk us through how the new spaCy library helps build end to end SOTA natural language processing workflows.

Ines Montani is the co-founder of Explosion AI, a digital studio specializing in tools for AI technology. She&apos;s a core developer of spaCy, one of the leading open-source libraries for Natural Language Processing in Python, and of Prodigy, a new data annotation tool powered by active learning. Before founding Explosion AI, she was a freelance front-end developer and strategist.
https://twitter.com/_inesmontani

Sofie Van Landeghem is a Natural Language Processing and Machine Learning engineer at Explosion.ai. She is a Software Engineer at heart, with an absurd love for quality assurance and testing, introducing proper levels of abstraction, and ensuring code robustness and modularity.

She has more than 12 years of experience in Natural Language Processing and Machine Learning, including in the pharmaceutical industry and the food industry.
https://twitter.com/oxykodit

https://spacy.io/
https://prodi.gy/
https://thinc.ai/
https://explosion.ai/

Topics covered:
0:00 Sneak peek
0:35 intro
2:29 How spaCy was started
6:11 Business model, open source
9:55 What was spaCy designed to solve?
12:23 advances in NLP and modern practices in industry
17:19 what differentiates spaCy from a more research focused NLP library?
19:28 Multi-lingual/domain specific support
23:52 spaCy V3 configuration
28:16 Thoughts on Python, Cython, and other programming languages for ML
33:45 Making things clear and reproducible
37:30 prodigy and getting good training data
44:09 most underrated aspect of ML
51:00 hardest part of putting models into production

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: bit.ly/2WdrUvI
Spotify: bit.ly/2SqtadF
Google: tiny.cc/GD_Google

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA&apos;s, share interesting projects and meet other people working in Deep Learning:
bit.ly/wb-slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
app.wandb.ai/gallery</itunes:summary></item><item><title>Daeil Kim — The Unreasonable Effectiveness of Synthetic Data</title><itunes:title>Daeil Kim — The Unreasonable Effectiveness of Synthetic Data</itunes:title><description><![CDATA[Supercharging computer vision model performance by generating years of training data in minutes.

Daeil Kim is the co-founder and CEO of AI.Reverie (https://aireverie.com/), a startup that specializes in creating high-quality synthetic training data for computer vision algorithms. Before that, he was a senior data scientist at the New York Times, and before that he got his PhD in computer science from Brown University, focusing on machine learning and Bayesian statistics. He talks about tools that will advance machine learning progress, and about synthetic data.

https://twitter.com/daeil

Topics covered:

0:00 Diversifying content
0:23 Intro+bio
1:00 From liberal arts to synthetic data
8:48 What is synthetic data?
11:24 Real world examples of synthetic data
16:16 Understanding performance gains using synthetic data
21:32 The future of Synthetic data and AI.Reverie
23:21 The composition of people at AI.Reverie and ML
28:28 The evolution of ML tools and systems that Daeil uses
33:16 Most underrated aspect of ML and common misconceptions
34:42 Biggest challenge in making synthetic data work in the real world

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on Apple, Spotify, and Google!

Apple Podcasts: bit.ly/2WdrUvI
Spotify: bit.ly/2SqtadF
Google: tiny.cc/GD_Google

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
bit.ly/wb-slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
app.wandb.ai/gallery]]></description><content:encoded><![CDATA[Supercharging computer vision model performance by generating years of training data in minutes.

Daeil Kim is the co-founder and CEO of AI.Reverie (https://aireverie.com/), a startup that specializes in creating high-quality synthetic training data for computer vision algorithms. Before that, he was a senior data scientist at the New York Times, and before that he got his PhD in computer science from Brown University, focusing on machine learning and Bayesian statistics. He talks about tools that will advance machine learning progress, and about synthetic data.

https://twitter.com/daeil

Topics covered:

0:00 Diversifying content
0:23 Intro+bio
1:00 From liberal arts to synthetic data
8:48 What is synthetic data?
11:24 Real world examples of synthetic data
16:16 Understanding performance gains using synthetic data
21:32 The future of Synthetic data and AI.Reverie
23:21 The composition of people at AI.Reverie and ML
28:28 The evolution of ML tools and systems that Daeil uses
33:16 Most underrated aspect of ML and common misconceptions
34:42 Biggest challenge in making synthetic data work in the real world

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on Apple, Spotify, and Google!

Apple Podcasts: bit.ly/2WdrUvI
Spotify: bit.ly/2SqtadF
Google: tiny.cc/GD_Google

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
bit.ly/wb-slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
app.wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/911249224</guid><itunes:image href="https://artwork.captivate.fm/ef36e9c2-3234-41d6-8d81-26ef771a3024/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Thu, 15 Oct 2020 19:39:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/43b02cba-b844-4f44-a996-49b35ccc2869/911249224-wandb-the-unreasonable-effectiveness-of-synthetic-dat.mp3" length="35679920" type="audio/mpeg"/><itunes:duration>37:10</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Supercharging computer vision model performance by generating years of training data in minutes.

Daeil Kim is the co-founder and CEO of AI.Reverie (https://aireverie.com/), a startup that specializes in creating high-quality synthetic training data for computer vision algorithms. Before that, he was a senior data scientist at the New York Times, and before that he got his PhD in computer science from Brown University, focusing on machine learning and Bayesian statistics. He talks about tools that will advance machine learning progress, and about synthetic data.

https://twitter.com/daeil

Topics covered:

0:00 Diversifying content
0:23 Intro+bio
1:00 From liberal arts to synthetic data
8:48 What is synthetic data?
11:24 Real world examples of synthetic data
16:16 Understanding performance gains using synthetic data
21:32 The future of Synthetic data and AI.Reverie
23:21 The composition of people at AI.Reverie and ML
28:28 The evolution of ML tools and systems that Daeil uses
33:16 Most underrated aspect of ML and common misconceptions
34:42 Biggest challenge in making synthetic data work in the real world

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on Apple, Spotify, and Google!

Apple Podcasts: bit.ly/2WdrUvI
Spotify: bit.ly/2SqtadF
Google: tiny.cc/GD_Google

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA&apos;s, share interesting projects and meet other people working in Deep Learning:
bit.ly/wb-slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
app.wandb.ai/gallery</itunes:summary></item><item><title>Joaquin Candela — Definitions of Fairness</title><itunes:title>Joaquin Candela — Definitions of Fairness</itunes:title><description><![CDATA[Joaquin chats about scaling and democratizing AI at Facebook, while understanding fairness and algorithmic bias.

---

Joaquin Quiñonero Candela is Distinguished Tech Lead for Responsible AI at Facebook, where he aims to understand and mitigate the risks and unintended consequences of the widespread use of AI across Facebook. He was previously Director of Society and AI Lab and Director of Engineering for Applied ML. Before joining Facebook, Joaquin taught at the University of Cambridge, and worked at Microsoft Research.

Connect with Joaquin:
Personal website: https://quinonero.net/
Twitter: https://twitter.com/jquinonero
LinkedIn: https://www.linkedin.com/in/joaquin-qui%C3%B1onero-candela-440844/

---

Topics Discussed:
0:00 Intro, sneak peek
0:53 Looking back at building and scaling AI at Facebook
10:31 How do you ship a model every week?
15:36 Getting buy-in to use a system
19:36 More on ML tools
24:01 Responsible AI at Facebook
38:33 How to engage with those affected by ML decisions
41:54 Approaches to fairness
53:10 How to know things are built right
59:34 Diversity, inclusion, and AI
1:14:21 Underrated aspect of AI
1:16:43 Hardest thing when putting models into production

Transcript:
http://wandb.me/gd-joaquin-candela

Links Discussed:  
Race and Gender (2019): https://arxiv.org/pdf/1908.06165.pdf
Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning (2019): https://arxiv.org/abs/1912.10389
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification (2018): http://proceedings.mlr.press/v81/buolamwini18a.html

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts​​
Spotify: http://wandb.me/spotify​
Google Podcasts: http://wandb.me/google-podcasts​​
YouTube: http://wandb.me/youtube​​
Soundcloud: http://wandb.me/soundcloud​

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack​​

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></description><content:encoded><![CDATA[Joaquin chats about scaling and democratizing AI at Facebook, while understanding fairness and algorithmic bias.

---

Joaquin Quiñonero Candela is Distinguished Tech Lead for Responsible AI at Facebook, where he aims to understand and mitigate the risks and unintended consequences of the widespread use of AI across Facebook. He was previously Director of Society and AI Lab and Director of Engineering for Applied ML. Before joining Facebook, Joaquin taught at the University of Cambridge, and worked at Microsoft Research.

Connect with Joaquin:
Personal website: https://quinonero.net/
Twitter: https://twitter.com/jquinonero
LinkedIn: https://www.linkedin.com/in/joaquin-qui%C3%B1onero-candela-440844/

---

Topics Discussed:
0:00 Intro, sneak peek
0:53 Looking back at building and scaling AI at Facebook
10:31 How do you ship a model every week?
15:36 Getting buy-in to use a system
19:36 More on ML tools
24:01 Responsible AI at Facebook
38:33 How to engage with those affected by ML decisions
41:54 Approaches to fairness
53:10 How to know things are built right
59:34 Diversity, inclusion, and AI
1:14:21 Underrated aspect of AI
1:16:43 Hardest thing when putting models into production

Transcript:
http://wandb.me/gd-joaquin-candela

Links Discussed:  
Race and Gender (2019): https://arxiv.org/pdf/1908.06165.pdf
Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning (2019): https://arxiv.org/abs/1912.10389
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification (2018): http://proceedings.mlr.press/v81/buolamwini18a.html

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts​​
Spotify: http://wandb.me/spotify​
Google Podcasts: http://wandb.me/google-podcasts​​
YouTube: http://wandb.me/youtube​​
Soundcloud: http://wandb.me/soundcloud​

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack​​

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/902547697</guid><itunes:image href="https://artwork.captivate.fm/6539bcc1-6835-4c1f-ae6f-f7ecb406fa31/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Thu, 01 Oct 2020 02:27:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f453349c-5b1b-44dd-87bf-9c80c8731ff5/902547697-wandb-joaquin-candela.mp3" length="76112874" type="audio/mpeg"/><itunes:duration>01:19:17</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Joaquin chats about scaling and democratizing AI at Facebook, while understanding fairness and algorithmic bias.

---

Joaquin Quiñonero Candela is Distinguished Tech Lead for Responsible AI at Facebook, where he aims to understand and mitigate the risks and unintended consequences of the widespread use of AI across Facebook. He was previously Director of Society and AI Lab and Director of Engineering for Applied ML. Before joining Facebook, Joaquin taught at the University of Cambridge, and worked at Microsoft Research.

Connect with Joaquin:
Personal website: https://quinonero.net/
Twitter: https://twitter.com/jquinonero
LinkedIn: https://www.linkedin.com/in/joaquin-qui%C3%B1onero-candela-440844/

---

Topics Discussed:
0:00 Intro, sneak peek
0:53 Looking back at building and scaling AI at Facebook
10:31 How do you ship a model every week?
15:36 Getting buy-in to use a system
19:36 More on ML tools
24:01 Responsible AI at Facebook
38:33 How to engage with those affected by ML decisions
41:54 Approaches to fairness
53:10 How to know things are built right
59:34 Diversity, inclusion, and AI
1:14:21 Underrated aspect of AI
1:16:43 Hardest thing when putting models into production

Transcript:
http://wandb.me/gd-joaquin-candela

Links Discussed:  
Race and Gender (2019): https://arxiv.org/pdf/1908.06165.pdf
Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning (2019): https://arxiv.org/abs/1912.10389
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification (2018): http://proceedings.mlr.press/v81/buolamwini18a.html

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts​​
Spotify: http://wandb.me/spotify​
Google Podcasts: http://wandb.me/google-podcasts​​
YouTube: http://wandb.me/youtube​​
Soundcloud: http://wandb.me/soundcloud​

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack​​

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected</itunes:summary></item><item><title>Richard Socher — The Challenges of Making ML Work in the Real World</title><itunes:title>Richard Socher — The Challenges of Making ML Work in the Real World</itunes:title><description><![CDATA[Richard Socher, ex-Chief Scientist at Salesforce, joins us to talk about The AI Economist, NLP protein generation and biggest challenge in making ML work in the real world.

Richard Socher was the Chief Scientist (EVP) at Salesforce, where he led teams working on fundamental research (einstein.ai), applied research, product incubation, CRM search, customer service automation, and a cross-product AI platform for unstructured and structured data. Previously, he was an adjunct professor at Stanford’s computer science department and the founder and CEO/CTO of MetaMind (www.metamind.io), which was acquired by Salesforce in 2016. In 2014, he received his PhD from the CS Department (www.cs.stanford.edu) at Stanford. He likes paramotoring and water adventures, traveling, and photography. More info:

- Forbes article (https://www.forbes.com/sites/gilpress/2017/05/01/emerging-artificial-intelligence-ai-leaders-richard-socher-salesforce/) with more info about Richard's bio.
- CS224n - NLP with Deep Learning (http://cs224n.stanford.edu/), the class Richard used to teach.
- TEDx talk(https://www.youtube.com/watch?v=8cmx7V4oIR8) about where AI is today and where it's going.

Research:

Google Scholar Link(https://scholar.google.com/citations?user=FaOcyfMAAAAJ&hl=en)

The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies
Arxiv link(https://arxiv.org/abs/2004.13332), blog(https://blog.einstein.ai/the-ai-economist/), short video(https://www.youtube.com/watch?v=4iQUcGyQhdA), Q&A(https://salesforce.com/company/news-press/stories/2020/4/salesforce-ai-economist/), Press: VentureBeat(https://venturebeat.com/2020/04/29/salesforces-ai-economist-taps-reinforcement-learning-to-generate-optimal-tax-policies/), TechCrunch(https://techcrunch.com/2020/04/29/salesforce-researchers-are-working-on-an-ai-economist-for-more-equitable-tax-policy/) 

ProGen: Language Modeling for Protein Generation:
bioRxiv link(https://www.biorxiv.org/content/10.1101/2020.03.07.982272v2), blog(https://blog.einstein.ai/progen/)

Dye-sensitized solar cells under ambient light powering machine learning: towards autonomous smart sensors for the internet of things
Issue 11, Chemical Science 2020. Paper link(https://pubs.rsc.org/en/content/articlelanding/2020/sc/c9sc06145b#!divAbstract)

CTRL: A Conditional Transformer Language Model for Controllable Generation:
Arxiv link(https://arxiv.org/abs/1909.05858), code pre-trained and fine-tuning(https://github.com/salesforce/ctrl), blog(https://blog.einstein.ai/introducing-a-conditional-transformer-language-model-for-controllable-generation/) 

Genie: a generator of natural language semantic parsers for virtual assistant commands:
PLDI 2019 pdf link(https://almond-static.stanford.edu/papers/genie-pldi19.pdf), https://almond.stanford.edu

Topics Covered:

0:00 intro
0:42 the AI economist
7:08 the objective function and Gini Coefficient
12:13 on growing up in Eastern Germany and cultural differences
15:02 Language models for protein generation (ProGen)
27:53 CTRL: conditional transformer language model for controllable generation
37:52 Businesses vs Academia
40:00 What ML applications are important to Salesforce
44:57 an underrated aspect of machine learning
48:13 Biggest challenge in making ML work in the real world

Visit our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on Soundcloud, Apple, Spotify, and Google!
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: http://tiny.cc/GD_Google

Weights and Biases makes developer tools for deep learning.

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://tiny.cc/wb-salon

Join our community of ML practitioners:
http://bit.ly/wb-slack

Our gallery features curated machine learning reports by ML researchers.
https://app.wandb.ai/gallery]]></description><content:encoded><![CDATA[Richard Socher, ex-Chief Scientist at Salesforce, joins us to talk about The AI Economist, NLP protein generation and biggest challenge in making ML work in the real world.

Richard Socher was the Chief Scientist (EVP) at Salesforce, where he led teams working on fundamental research (einstein.ai), applied research, product incubation, CRM search, customer service automation, and a cross-product AI platform for unstructured and structured data. Previously, he was an adjunct professor at Stanford’s computer science department and the founder and CEO/CTO of MetaMind (www.metamind.io), which was acquired by Salesforce in 2016. In 2014, he received his PhD from the CS Department (www.cs.stanford.edu) at Stanford. He likes paramotoring and water adventures, traveling, and photography. More info:

- Forbes article (https://www.forbes.com/sites/gilpress/2017/05/01/emerging-artificial-intelligence-ai-leaders-richard-socher-salesforce/) with more info about Richard's bio.
- CS224n - NLP with Deep Learning (http://cs224n.stanford.edu/), the class Richard used to teach.
- TEDx talk(https://www.youtube.com/watch?v=8cmx7V4oIR8) about where AI is today and where it's going.

Research:

Google Scholar Link(https://scholar.google.com/citations?user=FaOcyfMAAAAJ&hl=en)

The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies
Arxiv link(https://arxiv.org/abs/2004.13332), blog(https://blog.einstein.ai/the-ai-economist/), short video(https://www.youtube.com/watch?v=4iQUcGyQhdA), Q&A(https://salesforce.com/company/news-press/stories/2020/4/salesforce-ai-economist/), Press: VentureBeat(https://venturebeat.com/2020/04/29/salesforces-ai-economist-taps-reinforcement-learning-to-generate-optimal-tax-policies/), TechCrunch(https://techcrunch.com/2020/04/29/salesforce-researchers-are-working-on-an-ai-economist-for-more-equitable-tax-policy/) 

ProGen: Language Modeling for Protein Generation:
bioRxiv link(https://www.biorxiv.org/content/10.1101/2020.03.07.982272v2), blog(https://blog.einstein.ai/progen/)

Dye-sensitized solar cells under ambient light powering machine learning: towards autonomous smart sensors for the internet of things
Issue 11, Chemical Science 2020. Paper link(https://pubs.rsc.org/en/content/articlelanding/2020/sc/c9sc06145b#!divAbstract)

CTRL: A Conditional Transformer Language Model for Controllable Generation:
Arxiv link(https://arxiv.org/abs/1909.05858), code pre-trained and fine-tuning(https://github.com/salesforce/ctrl), blog(https://blog.einstein.ai/introducing-a-conditional-transformer-language-model-for-controllable-generation/) 

Genie: a generator of natural language semantic parsers for virtual assistant commands:
PLDI 2019 pdf link(https://almond-static.stanford.edu/papers/genie-pldi19.pdf), https://almond.stanford.edu

Topics Covered:

0:00 intro
0:42 the AI economist
7:08 the objective function and Gini Coefficient
12:13 on growing up in Eastern Germany and cultural differences
15:02 Language models for protein generation (ProGen)
27:53 CTRL: conditional transformer language model for controllable generation
37:52 Businesses vs Academia
40:00 What ML applications are important to Salesforce
44:57 an underrated aspect of machine learning
48:13 Biggest challenge in making ML work in the real world

Visit our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on Soundcloud, Apple, Spotify, and Google!
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: http://tiny.cc/GD_Google

Weights and Biases makes developer tools for deep learning.

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://tiny.cc/wb-salon

Join our community of ML practitioners:
http://bit.ly/wb-slack

Our gallery features curated machine learning reports by ML researchers.
https://app.wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/901333930</guid><itunes:image href="https://artwork.captivate.fm/5138652e-580d-4cef-b67a-888ef1e6db97/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Tue, 29 Sep 2020 02:29:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b4590a76-7d7d-4952-8023-20f208c4fc47/901333930-wandb-richard-socher.mp3" length="48865697" type="audio/mpeg"/><itunes:duration>50:54</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Richard Socher, ex-Chief Scientist at Salesforce, joins us to talk about The AI Economist, NLP protein generation and biggest challenge in making ML work in the real world.

Richard Socher was the Chief Scientist (EVP) at Salesforce, where he led teams working on fundamental research (einstein.ai), applied research, product incubation, CRM search, customer service automation, and a cross-product AI platform for unstructured and structured data. Previously, he was an adjunct professor at Stanford’s computer science department and the founder and CEO/CTO of MetaMind (www.metamind.io), which was acquired by Salesforce in 2016. In 2014, he received his PhD from the CS Department (www.cs.stanford.edu) at Stanford. He likes paramotoring and water adventures, traveling, and photography. More info:

- Forbes article (https://www.forbes.com/sites/gilpress/2017/05/01/emerging-artificial-intelligence-ai-leaders-richard-socher-salesforce/) with more info about Richard&apos;s bio.
- CS224n - NLP with Deep Learning (http://cs224n.stanford.edu/), the class Richard used to teach.
- TEDx talk(https://www.youtube.com/watch?v=8cmx7V4oIR8) about where AI is today and where it&apos;s going.

Research:

Google Scholar Link(https://scholar.google.com/citations?user=FaOcyfMAAAAJ&amp;hl=en)

The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies
Arxiv link(https://arxiv.org/abs/2004.13332), blog(https://blog.einstein.ai/the-ai-economist/), short video(https://www.youtube.com/watch?v=4iQUcGyQhdA), Q&amp;A(https://salesforce.com/company/news-press/stories/2020/4/salesforce-ai-economist/), Press: VentureBeat(https://venturebeat.com/2020/04/29/salesforces-ai-economist-taps-reinforcement-learning-to-generate-optimal-tax-policies/), TechCrunch(https://techcrunch.com/2020/04/29/salesforce-researchers-are-working-on-an-ai-economist-for-more-equitable-tax-policy/) 

ProGen: Language Modeling for Protein Generation:
bioRxiv link(https://www.biorxiv.org/content/10.1101/2020.03.07.982272v2), blog(https://blog.einstein.ai/progen/)

Dye-sensitized solar cells under ambient light powering machine learning: towards autonomous smart sensors for the internet of things
Issue 11, Chemical Science 2020. Paper link(https://pubs.rsc.org/en/content/articlelanding/2020/sc/c9sc06145b#!divAbstract)

CTRL: A Conditional Transformer Language Model for Controllable Generation:
Arxiv link(https://arxiv.org/abs/1909.05858), code pre-trained and fine-tuning(https://github.com/salesforce/ctrl), blog(https://blog.einstein.ai/introducing-a-conditional-transformer-language-model-for-controllable-generation/) 

Genie: a generator of natural language semantic parsers for virtual assistant commands:
PLDI 2019 pdf link(https://almond-static.stanford.edu/papers/genie-pldi19.pdf), https://almond.stanford.edu

Topics Covered:

0:00 intro
0:42 the AI economist
7:08 the objective function and Gini Coefficient
12:13 on growing up in Eastern Germany and cultural differences
15:02 Language models for protein generation (ProGen)
27:53 CTRL: conditional transformer language model for controllable generation
37:52 Businesses vs Academia
40:00 What ML applications are important to Salesforce
44:57 an underrated aspect of machine learning
48:13 Biggest challenge in making ML work in the real world

Visit our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on Soundcloud, Apple, Spotify, and Google!
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: http://tiny.cc/GD_Google

Weights and Biases makes developer tools for deep learning.

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://tiny.cc/wb-salon

Join our community of ML practitioners:
http://bit.ly/wb-slack

Our gallery features curated machine learning reports by ML researchers.
https://app.wandb.ai/gallery</itunes:summary></item><item><title>Zack Chase Lipton — The Medical Machine Learning Landscape</title><itunes:title>Zack Chase Lipton — The Medical Machine Learning Landscape</itunes:title><description><![CDATA[How Zack went from being a musician to professor, how medical applications of Machine Learning are developing, and the challenges of counteracting bias in real world applications.

Zachary Chase Lipton is an assistant professor of Operations Research and Machine Learning at Carnegie Mellon University.

His research spans core machine learning methods and their social impact and addresses diverse application areas, including clinical medicine and natural language processing. Current research focuses include robustness under distribution shift, breast cancer screening, the effective and equitable allocation of organs, and the intersection of causal thinking with messy data.

He is the founder of the Approximately Correct (approximatelycorrect.com) blog and the creator of Dive Into Deep Learning, an interactive open-source book drafted entirely through Jupyter notebooks.

Zack’s blog - http://approximatelycorrect.com/

Detecting and Correcting for Label Shift with Black Box Predictors: https://arxiv.org/pdf/1802.03916.pdf

Algorithmic Fairness from a Non-Ideal Perspective https://www.datascience.columbia.edu/data-good-zachary-lipton-lecture

Jonas Peters' lectures on causality:
https://youtu.be/zvrcyqcN9Wo

0:00 Sneak peek: Is this a problem worth solving?
0:38 Intro
1:23 Zack’s journey from being a musician to a professor at CMU
4:45 Applying machine learning to medical imaging
10:14 Exploring new frontiers: the most impressive deep learning applications for healthcare
12:45 Evaluating the models – Are they ready to be deployed in hospitals for use by doctors?
19:16 Capturing the signals in evolving representations of healthcare data
27:00 How does the data we capture affect the predictions we make
30:40 Distinguishing between associations and correlations in data – Horror vs romance movies
34:20 The positive effects of augmenting datasets with counterfactually flipped data
39:25 Algorithmic fairness in the real world
41:03 What does it mean to say your model isn’t biased?
43:40 Real world implications of decisions to counteract model bias
49:10  The pragmatic approach to counteracting bias in a non-ideal world
51:24 An underrated aspect of machine learning
55:11 Why defining the problem is the biggest challenge for machine learning in the real world

Visit our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on YouTube, Apple, and Spotify!
YouTube: https://www.youtube.com/c/WeightsBiases
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://bit.ly/wandb-forum

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://app.wandb.ai/gallery]]></description><content:encoded><![CDATA[How Zack went from being a musician to professor, how medical applications of Machine Learning are developing, and the challenges of counteracting bias in real world applications.

Zachary Chase Lipton is an assistant professor of Operations Research and Machine Learning at Carnegie Mellon University.

His research spans core machine learning methods and their social impact and addresses diverse application areas, including clinical medicine and natural language processing. Current research focuses include robustness under distribution shift, breast cancer screening, the effective and equitable allocation of organs, and the intersection of causal thinking with messy data.

He is the founder of the Approximately Correct (approximatelycorrect.com) blog and the creator of Dive Into Deep Learning, an interactive open-source book drafted entirely through Jupyter notebooks.

Zack’s blog - http://approximatelycorrect.com/

Detecting and Correcting for Label Shift with Black Box Predictors: https://arxiv.org/pdf/1802.03916.pdf

Algorithmic Fairness from a Non-Ideal Perspective https://www.datascience.columbia.edu/data-good-zachary-lipton-lecture

Jonas Peters' lectures on causality:
https://youtu.be/zvrcyqcN9Wo

0:00 Sneak peek: Is this a problem worth solving?
0:38 Intro
1:23 Zack’s journey from being a musician to a professor at CMU
4:45 Applying machine learning to medical imaging
10:14 Exploring new frontiers: the most impressive deep learning applications for healthcare
12:45 Evaluating the models – Are they ready to be deployed in hospitals for use by doctors?
19:16 Capturing the signals in evolving representations of healthcare data
27:00 How does the data we capture affect the predictions we make
30:40 Distinguishing between associations and correlations in data – Horror vs romance movies
34:20 The positive effects of augmenting datasets with counterfactually flipped data
39:25 Algorithmic fairness in the real world
41:03 What does it mean to say your model isn’t biased?
43:40 Real world implications of decisions to counteract model bias
49:10  The pragmatic approach to counteracting bias in a non-ideal world
51:24 An underrated aspect of machine learning
55:11 Why defining the problem is the biggest challenge for machine learning in the real world

Visit our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on YouTube, Apple, and Spotify!
YouTube: https://www.youtube.com/c/WeightsBiases
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://bit.ly/wandb-forum

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://app.wandb.ai/gallery]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/894585067</guid><itunes:image href="https://artwork.captivate.fm/3b2be9e1-a367-4678-a0eb-c2bfda9fb3d1/artworks-6tvjy49ttdlhxoxy-et3kka-t3000x3000.jpg"/><pubDate>Thu, 17 Sep 2020 03:14:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/dd14f7ec-4657-427a-8b42-b755592ecc6e/894585067-wandb-zack-chase-lipton.mp3" length="57475238" type="audio/mpeg"/><itunes:duration>59:52</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>How Zack went from being a musician to professor, how medical applications of Machine Learning are developing, and the challenges of counteracting bias in real world applications.

Zachary Chase Lipton is an assistant professor of Operations Research and Machine Learning at Carnegie Mellon University.

His research spans core machine learning methods and their social impact and addresses diverse application areas, including clinical medicine and natural language processing. Current research focuses include robustness under distribution shift, breast cancer screening, the effective and equitable allocation of organs, and the intersection of causal thinking with messy data.

He is the founder of the Approximately Correct (approximatelycorrect.com) blog and the creator of Dive Into Deep Learning, an interactive open-source book drafted entirely through Jupyter notebooks.

Zack’s blog - http://approximatelycorrect.com/

Detecting and Correcting for Label Shift with Black Box Predictors: https://arxiv.org/pdf/1802.03916.pdf

Algorithmic Fairness from a Non-Ideal Perspective https://www.datascience.columbia.edu/data-good-zachary-lipton-lecture

Jonas Peters' lectures on causality:
https://youtu.be/zvrcyqcN9Wo

0:00 Sneak peek: Is this a problem worth solving?
0:38 Intro
1:23 Zack’s journey from being a musician to a professor at CMU
4:45 Applying machine learning to medical imaging
10:14 Exploring new frontiers: the most impressive deep learning applications for healthcare
12:45 Evaluating the models – Are they ready to be deployed in hospitals for use by doctors?
19:16 Capturing the signals in evolving representations of healthcare data
27:00 How does the data we capture affect the predictions we make
30:40 Distinguishing between associations and correlations in data – Horror vs romance movies
34:20 The positive effects of augmenting datasets with counterfactually flipped data
39:25 Algorithmic fairness in the real world
41:03 What does it mean to say your model isn’t biased?
43:40 Real world implications of decisions to counteract model bias
49:10  The pragmatic approach to counteracting bias in a non-ideal world
51:24 An underrated aspect of machine learning
55:11 Why defining the problem is the biggest challenge for machine learning in the real world

Visit our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on YouTube, Apple, and Spotify!
YouTube: https://www.youtube.com/c/WeightsBiases
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://tiny.cc/wb-salon

Join our community of ML practitioners where we host AMA&apos;s, share interesting projects and meet other people working in Deep Learning:
http://bit.ly/wandb-forum

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://app.wandb.ai/gallery</itunes:summary></item><item><title>Anthony Goldbloom — How to Win Kaggle Competitions</title><itunes:title>Anthony Goldbloom — How to Win Kaggle Competitions</itunes:title><description><![CDATA[Anthony Goldbloom is the founder and CEO of Kaggle. In 2011 & 2012, Forbes Magazine named Anthony as one of the 30 under 30 in technology. In 2011, Fast Company featured him as one of the innovative thinkers who are changing the future of business.

He and Lukas discuss the strategies that do well in Kaggle competitions versus academia versus production. They also discuss his 2016 TED talk through the lens of 2020, frameworks, and languages.

Topics Discussed:
0:00 Sneak Peek
0:20 Introduction
0:45 Methods used in Kaggle competitions vs mainstream academia
2:30 Feature engineering
3:55 Kaggle Competitions now vs 10 years ago
8:35 Data augmentation strategies
10:06 Overfitting in Kaggle Competitions
12:53 How to not overfit
14:11 Kaggle competitions vs the real world
18:15 Getting into ML through Kaggle
22:03 Other Kaggle products
25:48 Favorite underappreciated kernel or dataset
28:27 Python & R
32:03 Frameworks
35:15 2016 TED talk through the lens of 2020
37:54 Reinforcement Learning
38:43 What’s the topic in ML that people don’t talk about enough?
42:02 Where are the biggest bottlenecks in deploying ML software?

Check out Kaggle: https://www.kaggle.com/
Follow Anthony on Twitter: https://twitter.com/antgoldbloom
Watch his 2016 TED Talk: https://www.ted.com/talks/anthony_goldbloom_the_jobs_we_ll_lose_to_machines_and_the_ones_we_won_t

Visit our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on Soundcloud, Apple, and Spotify!
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!


Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.
* Blog: https://www.wandb.com/articles
* Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
* Join our community of ML practitioners working on interesting problems - https://www.wandb.com/ml-community 


Host: Lukas Biewald - https://twitter.com/l2k

Producer: Lavanya Shukla - https://twitter.com/lavanyaai

Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[Anthony Goldbloom is the founder and CEO of Kaggle. In 2011 & 2012, Forbes Magazine named Anthony as one of the 30 under 30 in technology. In 2011, Fast Company featured him as one of the innovative thinkers who are changing the future of business.

He and Lukas discuss the strategies that do well in Kaggle competitions versus academia versus production. They also revisit his 2016 TED Talk through the lens of 2020, and talk frameworks and languages.

Topics Discussed:
0:00 Sneak Peek
0:20 Introduction
0:45 Methods used in Kaggle competitions vs mainstream academia
2:30 Feature engineering
3:55 Kaggle Competitions now vs 10 years ago
8:35 Data augmentation strategies
10:06 Overfitting in Kaggle Competitions
12:53 How to not overfit
14:11 Kaggle competitions vs the real world
18:15 Getting into ML through Kaggle
22:03 Other Kaggle products
25:48 Favorite underappreciated kernel or dataset
28:27 Python & R
32:03 Frameworks
35:15 2016 TED Talk through the lens of 2020
37:54 Reinforcement Learning
38:43 What’s the topic in ML that people don’t talk about enough?
42:02 Where are the biggest bottlenecks in deploying ML software?

Check out Kaggle: https://www.kaggle.com/
Follow Anthony on Twitter: https://twitter.com/antgoldbloom
Watch his 2016 Ted Talk: https://www.ted.com/talks/anthony_goldbloom_the_jobs_we_ll_lose_to_machines_and_the_ones_we_won_t

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on Soundcloud, Apple, and Spotify!
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!


Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.
* Blog: https://www.wandb.com/articles
* Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
* Join our community of ML practitioners working on interesting problems - https://www.wandb.com/ml-community 


Host: Lukas Biewald - https://twitter.com/l2k

Producer: Lavanya Shukla - https://twitter.com/lavanyaai

Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/890210491</guid><itunes:image href="https://artwork.captivate.fm/c32a40cc-3636-4fdb-ae36-a5b494ae160d/artworks-lo5o0m2hyudrreag-pp9vuq-t3000x3000.jpg"/><pubDate>Wed, 09 Sep 2020 06:29:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/700b4130-ab7e-4796-9e63-38e95d3a4b98/890210491-wandb-anthony-goldbloom.mp3" length="42517314" type="audio/mpeg"/><itunes:duration>44:17</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Anthony Goldbloom is the founder and CEO of Kaggle. In 2011 &amp; 2012, Forbes Magazine named Anthony as one of the 30 under 30 in technology. In 2011, Fast Company featured him as one of the innovative thinkers who are changing the future of business.

He and Lukas discuss the strategies that do well in Kaggle competitions versus academia versus production. They also revisit his 2016 TED Talk through the lens of 2020, and talk frameworks and languages.

Topics Discussed:
0:00 Sneak Peek
0:20 Introduction
0:45 Methods used in Kaggle competitions vs mainstream academia
2:30 Feature engineering
3:55 Kaggle Competitions now vs 10 years ago
8:35 Data augmentation strategies
10:06 Overfitting in Kaggle Competitions
12:53 How to not overfit
14:11 Kaggle competitions vs the real world
18:15 Getting into ML through Kaggle
22:03 Other Kaggle products
25:48 Favorite underappreciated kernel or dataset
28:27 Python &amp; R
32:03 Frameworks
35:15 2016 TED Talk through the lens of 2020
37:54 Reinforcement Learning
38:43 What’s the topic in ML that people don’t talk about enough?
42:02 Where are the biggest bottlenecks in deploying ML software?

Check out Kaggle: https://www.kaggle.com/
Follow Anthony on Twitter: https://twitter.com/antgoldbloom
Watch his 2016 Ted Talk: https://www.ted.com/talks/anthony_goldbloom_the_jobs_we_ll_lose_to_machines_and_the_ones_we_won_t

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

Get our podcast on Soundcloud, Apple, and Spotify!
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!


Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.
* Blog: https://www.wandb.com/articles
* Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery
* Join our community of ML practitioners working on interesting problems - https://www.wandb.com/ml-community 


Host: Lukas Biewald - https://twitter.com/l2k

Producer: Lavanya Shukla - https://twitter.com/lavanyaai

Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Suzana Ilić — Cultivating Machine Learning Communities</title><itunes:title>Suzana Ilić — Cultivating Machine Learning Communities</itunes:title><description><![CDATA[👩‍💻Today our guest is Suzana Ilić!
Suzana is a founder of Machine Learning Tokyo (MLT), a nonprofit organization dedicated to democratizing Machine Learning. MLT is a team of ML engineers and researchers, and a community of more than 3,000 people.

Machine Learning Tokyo: https://mltokyo.ai/
Follow Suzana on Twitter: https://twitter.com/suzatweet

Check out our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[👩‍💻Today our guest is Suzana Ilić!
Suzana is a founder of Machine Learning Tokyo (MLT), a nonprofit organization dedicated to democratizing Machine Learning. MLT is a team of ML engineers and researchers, and a community of more than 3,000 people.

Machine Learning Tokyo: https://mltokyo.ai/
Follow Suzana on Twitter: https://twitter.com/suzatweet

Check out our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/885989293</guid><itunes:image href="https://artwork.captivate.fm/927c87da-2fcc-4c83-8d55-21adf16b181a/artworks-6tvjy49ttdlhxoxy-et3kka-t3000x3000.jpg"/><pubDate>Wed, 02 Sep 2020 00:55:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c3cafaa8-9c8f-4293-a59f-e3f7d27f4a73/885989293-wandb-suzanah-ilic.mp3" length="33540387" type="audio/mpeg"/><itunes:duration>34:56</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>👩‍💻Today our guest is Suzana Ilić!
Suzana is a founder of Machine Learning Tokyo (MLT), a nonprofit organization dedicated to democratizing Machine Learning. MLT is a team of ML engineers and researchers, and a community of more than 3,000 people.

Machine Learning Tokyo: https://mltokyo.ai/
Follow Suzana on Twitter: https://twitter.com/suzatweet

Check out our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Jeremy Howard — The Story of fast.ai and Why Python Is Not the Future of ML</title><itunes:title>Jeremy Howard — The Story of fast.ai and Why Python Is Not the Future of ML</itunes:title><description><![CDATA[Jeremy Howard is a founding researcher at fast.ai, a research institute dedicated to making Deep Learning more accessible. Previously, he was the CEO and Founder at Enlitic, an advanced machine learning company in San Francisco, California. 
 
Howard is a faculty member at Singularity University, where he teaches data science. He is also a Young Global Leader with the World Economic Forum, and spoke at the World Economic Forum Annual Meeting 2014 on "Jobs For The Machines." 
 
Howard advised Khosla Ventures as their Data Strategist, identifying the biggest opportunities for investing in data-driven startups and mentoring their portfolio companies to build data-driven businesses. Howard was the founding CEO of two successful Australian startups, FastMail and Optimal Decisions Group. Before that, he spent eight years in management consulting, at McKinsey & Company and AT Kearney.

TOPICS COVERED:
0:00 Introduction
0:52 Dad things
2:40 The story of Fast.ai
4:57 How the courses have evolved over time
9:24 Jeremy’s top down approach to teaching
13:02 From Fast.ai the course to Fast.ai the library
15:08 Designing V2 of the library from the ground up
21:44 The ingenious type dispatch system that powers Fast.ai
25:52 Were you able to realize the vision behind v2 of the library
28:05 Is it important to you that Fast.ai is used by everyone in the world, beyond the context of learning
29:37 Real world applications of Fast.ai, including animal husbandry
35:08 Staying ahead of the new developments in the field
38:50 A bias towards learning by doing
40:02 What’s next for Fast.ai
40:35 Python is not the future of Machine Learning
43:58 One underrated aspect of machine learning
45:25 Biggest challenge of machine learning in the real world

Follow Jeremy on Twitter:
https://twitter.com/jeremyphoward

Links:
Deep learning R&D & education: http://fast.ai
Software: http://docs.fast.ai
Book: http://up.fm/book
Course: http://course.fast.ai
Papers:
The business impact of deep learning
https://dl.acm.org/doi/10.1145/2487575.2491127
De-identification Methods for Open Health Data

https://www.jmir.org/2012/1/e33/


Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on YouTube, Apple, and Spotify!
YouTube: https://www.youtube.com/c/WeightsBiases
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[Jeremy Howard is a founding researcher at fast.ai, a research institute dedicated to making Deep Learning more accessible. Previously, he was the CEO and Founder at Enlitic, an advanced machine learning company in San Francisco, California. 
 
Howard is a faculty member at Singularity University, where he teaches data science. He is also a Young Global Leader with the World Economic Forum, and spoke at the World Economic Forum Annual Meeting 2014 on "Jobs For The Machines." 
 
Howard advised Khosla Ventures as their Data Strategist, identifying the biggest opportunities for investing in data-driven startups and mentoring their portfolio companies to build data-driven businesses. Howard was the founding CEO of two successful Australian startups, FastMail and Optimal Decisions Group. Before that, he spent eight years in management consulting, at McKinsey & Company and AT Kearney.

TOPICS COVERED:
0:00 Introduction
0:52 Dad things
2:40 The story of Fast.ai
4:57 How the courses have evolved over time
9:24 Jeremy’s top down approach to teaching
13:02 From Fast.ai the course to Fast.ai the library
15:08 Designing V2 of the library from the ground up
21:44 The ingenious type dispatch system that powers Fast.ai
25:52 Were you able to realize the vision behind v2 of the library
28:05 Is it important to you that Fast.ai is used by everyone in the world, beyond the context of learning
29:37 Real world applications of Fast.ai, including animal husbandry
35:08 Staying ahead of the new developments in the field
38:50 A bias towards learning by doing
40:02 What’s next for Fast.ai
40:35 Python is not the future of Machine Learning
43:58 One underrated aspect of machine learning
45:25 Biggest challenge of machine learning in the real world

Follow Jeremy on Twitter:
https://twitter.com/jeremyphoward

Links:
Deep learning R&D & education: http://fast.ai
Software: http://docs.fast.ai
Book: http://up.fm/book
Course: http://course.fast.ai
Papers:
The business impact of deep learning
https://dl.acm.org/doi/10.1145/2487575.2491127
De-identification Methods for Open Health Data

https://www.jmir.org/2012/1/e33/


Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on YouTube, Apple, and Spotify!
YouTube: https://www.youtube.com/c/WeightsBiases
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/881376838</guid><itunes:image href="https://artwork.captivate.fm/5abfaab6-3418-40b0-bdd3-c21be90ef803/artworks-ztr5yhsi5c1uobip-gccmhw-t3000x3000.jpg"/><pubDate>Tue, 25 Aug 2020 01:56:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c4bae829-048d-4b83-ae78-61266e241621/881376838-wandb-gradient-dissent-jeremy-howard.mp3" length="49108949" type="audio/mpeg"/><itunes:duration>51:09</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Jeremy Howard is a founding researcher at fast.ai, a research institute dedicated to making Deep Learning more accessible. Previously, he was the CEO and Founder at Enlitic, an advanced machine learning company in San Francisco, California. 
 
Howard is a faculty member at Singularity University, where he teaches data science. He is also a Young Global Leader with the World Economic Forum, and spoke at the World Economic Forum Annual Meeting 2014 on &quot;Jobs For The Machines.&quot; 
 
Howard advised Khosla Ventures as their Data Strategist, identifying the biggest opportunities for investing in data-driven startups and mentoring their portfolio companies to build data-driven businesses. Howard was the founding CEO of two successful Australian startups, FastMail and Optimal Decisions Group. Before that, he spent eight years in management consulting, at McKinsey &amp; Company and AT Kearney.

TOPICS COVERED:
0:00 Introduction
0:52 Dad things
2:40 The story of Fast.ai
4:57 How the courses have evolved over time
9:24 Jeremy’s top down approach to teaching
13:02 From Fast.ai the course to Fast.ai the library
15:08 Designing V2 of the library from the ground up
21:44 The ingenious type dispatch system that powers Fast.ai
25:52 Were you able to realize the vision behind v2 of the library
28:05 Is it important to you that Fast.ai is used by everyone in the world, beyond the context of learning
29:37 Real world applications of Fast.ai, including animal husbandry
35:08 Staying ahead of the new developments in the field
38:50 A bias towards learning by doing
40:02 What’s next for Fast.ai
40:35 Python is not the future of Machine Learning
43:58 One underrated aspect of machine learning
45:25 Biggest challenge of machine learning in the real world

Follow Jeremy on Twitter:
https://twitter.com/jeremyphoward

Links:
Deep learning R&amp;D &amp; education: http://fast.ai
Software: http://docs.fast.ai
Book: http://up.fm/book
Course: http://course.fast.ai
Papers:
The business impact of deep learning
https://dl.acm.org/doi/10.1145/2487575.2491127
De-identification Methods for Open Health Data

https://www.jmir.org/2012/1/e33/


Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on YouTube, Apple, and Spotify!
YouTube: https://www.youtube.com/c/WeightsBiases
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Anantha Kancherla — Building Level 5 Autonomous Vehicles</title><itunes:title>Anantha Kancherla — Building Level 5 Autonomous Vehicles</itunes:title><description><![CDATA[As Lyft’s VP of Engineering, Software at Level 5, Autonomous Vehicle Program, Anantha Kancherla has a birds-eye view on what it takes to make self-driving cars work in the real world. He previously worked on Windows at Microsoft focusing on DirectX, Graphics and UI; Facebook’s mobile Newsfeed and core mobile experiences; and led the Collaboration efforts at Dropbox involving launching Dropbox Paper as well as improving core collaboration functionality in Dropbox.

He and Lukas dive into the challenges of working on large projects and how to approach breaking down a major project into pieces, tracking progress and addressing bugs.

Check out Lyft’s Self-Driving Website:
https://self-driving.lyft.com/

And this article on building the self-driving team at Lyft:
https://medium.com/lyftlevel5/going-from-zero-to-sixty-building-lyfts-self-driving-software-team-1ac693800588

Follow Lyft Level 5 on Twitter:
https://twitter.com/LyftLevel5

Topics covered:
0:00 Sharp Knives
0:44 Introduction
1:07 Breaking down a big goal
8:15 Breaking down Metrics
10:50 Allocating Resources
12:40 Interventions
13:27 What part still has lots of room for improvement?
14:25 Various ways of deploying models
15:30 Rideshare
15:57 Infrastructure, updates
17:28 Model versioning
19:16 Model improvement goals
22:42 Unit testing
25:12 Interactions of models
26:30 Improvements in data vs models
29:50 Finding the right data
30:38 Deploying models into production
32:17 Feature drift
34:20 When to file bug tickets
37:25 Processes and growth
40:56 Underrated aspect 
42:34 Biggest challenges 

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF]]></description><content:encoded><![CDATA[As Lyft’s VP of Engineering, Software at Level 5, Autonomous Vehicle Program, Anantha Kancherla has a birds-eye view on what it takes to make self-driving cars work in the real world. He previously worked on Windows at Microsoft focusing on DirectX, Graphics and UI; Facebook’s mobile Newsfeed and core mobile experiences; and led the Collaboration efforts at Dropbox involving launching Dropbox Paper as well as improving core collaboration functionality in Dropbox.

He and Lukas dive into the challenges of working on large projects and how to approach breaking down a major project into pieces, tracking progress and addressing bugs.

Check out Lyft’s Self-Driving Website:
https://self-driving.lyft.com/

And this article on building the self-driving team at Lyft:
https://medium.com/lyftlevel5/going-from-zero-to-sixty-building-lyfts-self-driving-software-team-1ac693800588

Follow Lyft Level 5 on Twitter:
https://twitter.com/LyftLevel5

Topics covered:
0:00 Sharp Knives
0:44 Introduction
1:07 Breaking down a big goal
8:15 Breaking down Metrics
10:50 Allocating Resources
12:40 Interventions
13:27 What part still has lots of room for improvement?
14:25 Various ways of deploying models
15:30 Rideshare
15:57 Infrastructure, updates
17:28 Model versioning
19:16 Model improvement goals
22:42 Unit testing
25:12 Interactions of models
26:30 Improvements in data vs models
29:50 Finding the right data
30:38 Deploying models into production
32:17 Feature drift
34:20 When to file bug tickets
37:25 Processes and growth
40:56 Underrated aspect 
42:34 Biggest challenges 

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/873744652</guid><itunes:image href="https://artwork.captivate.fm/d6a2a351-743a-4350-84a6-42d81802d9d6/artworks-8692qwcmga1vwf4w-agzyrg-t3000x3000.jpg"/><pubDate>Tue, 11 Aug 2020 23:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9ab0d61c-b37b-49cf-b5a0-d67007ca57ee/873744652-wandb-anantha-kancherla.mp3" length="42743012" type="audio/mpeg"/><itunes:duration>44:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>As Lyft’s VP of Engineering, Software at Level 5, Autonomous Vehicle Program, Anantha Kancherla has a birds-eye view on what it takes to make self-driving cars work in the real world. He previously worked on Windows at Microsoft focusing on DirectX, Graphics and UI; Facebook’s mobile Newsfeed and core mobile experiences; and led the Collaboration efforts at Dropbox involving launching Dropbox Paper as well as improving core collaboration functionality in Dropbox.

He and Lukas dive into the challenges of working on large projects and how to approach breaking down a major project into pieces, tracking progress and addressing bugs.

Check out Lyft’s Self-Driving Website:
https://self-driving.lyft.com/

And this article on building the self-driving team at Lyft:
https://medium.com/lyftlevel5/going-from-zero-to-sixty-building-lyfts-self-driving-software-team-1ac693800588

Follow Lyft Level 5 on Twitter:
https://twitter.com/LyftLevel5

Topics covered:
0:00 Sharp Knives
0:44 Introduction
1:07 Breaking down a big goal
8:15 Breaking down Metrics
10:50 Allocating Resources
12:40 Interventions
13:27 What part still has lots of room for improvement?
14:25 Various ways of deploying models
15:30 Rideshare
15:57 Infrastructure, updates
17:28 Model versioning
19:16 Model improvement goals
22:42 Unit testing
25:12 Interactions of models
26:30 Improvements in data vs models
29:50 Finding the right data
30:38 Deploying models into production
32:17 Feature drift
34:20 When to file bug tickets
37:25 Processes and growth
40:56 Underrated aspect 
42:34 Biggest challenges 

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF</itunes:summary></item><item><title>Bharath Ramsundar — Deep Learning for Molecules and Medicine Discovery</title><itunes:title>Bharath Ramsundar — Deep Learning for Molecules and Medicine Discovery</itunes:title><description><![CDATA[Bharath created the deepchem.io open-source project to grow the deep drug discovery open source community, co-created the moleculenet.ai benchmark suite to facilitate development of molecular algorithms, and more. Bharath’s graduate education was supported by a Hertz Fellowship, the most selective graduate fellowship in the sciences. Bharath is the lead author of “TensorFlow for Deep Learning: From Linear Regression to Reinforcement Learning”, a developer’s introduction to modern machine learning, with O’Reilly Media. 

Today, Bharath is focused on designing the decentralized protocols that will unlock data and AI to create the next stage of the internet. He received a BA and BS from UC Berkeley in EECS and Mathematics and was valedictorian of his graduating class in mathematics. He did his PhD in computer science at Stanford University where he studied the application of deep-learning to problems in drug-discovery.

Follow Bharath on Twitter and GitHub:
https://twitter.com/rbhar90
rbharath.github.io

Check out some of his projects:
https://deepchem.io/
https://moleculenet.ai/
https://scholar.google.com/citations?user=LOdVDNYAAAAJ&hl=en&oi=ao

Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[Bharath created the deepchem.io open-source project to grow the deep drug discovery open source community, co-created the moleculenet.ai benchmark suite to facilitate development of molecular algorithms, and more. Bharath’s graduate education was supported by a Hertz Fellowship, the most selective graduate fellowship in the sciences. Bharath is the lead author of “TensorFlow for Deep Learning: From Linear Regression to Reinforcement Learning”, a developer’s introduction to modern machine learning, with O’Reilly Media. 

Today, Bharath is focused on designing the decentralized protocols that will unlock data and AI to create the next stage of the internet. He received a BA and BS from UC Berkeley in EECS and Mathematics and was valedictorian of his graduating class in mathematics. He did his PhD in computer science at Stanford University, where he studied the application of deep learning to problems in drug discovery.

Follow Bharath on Twitter and GitHub:
https://twitter.com/rbhar90
rbharath.github.io

Check out some of his projects:
https://deepchem.io/
https://moleculenet.ai/
https://scholar.google.com/citations?user=LOdVDNYAAAAJ&hl=en&oi=ao

Visit our podcast’s homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/870384418</guid><itunes:image href="https://artwork.captivate.fm/61d8c671-ba2e-4936-b355-46123de1b245/artworks-8692qwcmga1vwf4w-agzyrg-t3000x3000.jpg"/><pubDate>Wed, 05 Aug 2020 02:14:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/66fe3dd3-56d5-48d6-9c90-14bfcfb6d068/870384418-wandb-bharath-ramsundar.mp3" length="52970474" type="audio/mpeg"/><itunes:duration>55:11</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Bharath created the deepchem.io open-source project to grow the deep drug discovery open source community, co-created the moleculenet.ai benchmark suite to facilitate development of molecular algorithms, and more. Bharath’s graduate education was supported by a Hertz Fellowship, the most selective graduate fellowship in the sciences. Bharath is the lead author of “TensorFlow for Deep Learning: From Linear Regression to Reinforcement Learning”, a developer’s introduction to modern machine learning, with O’Reilly Media. 

Today, Bharath is focused on designing the decentralized protocols that will unlock data and AI to create the next stage of the internet. He received a BA and BS from UC Berkeley in EECS and Mathematics and was valedictorian of his graduating class in mathematics. He did his PhD in computer science at Stanford University, where he studied the application of deep learning to problems in drug discovery.

Follow Bharath on Twitter and GitHub:
https://twitter.com/rbhar90
rbharath.github.io

Check out some of his projects:
https://deepchem.io/
https://moleculenet.ai/
https://scholar.google.com/citations?user=LOdVDNYAAAAJ&amp;hl=en&amp;oi=ao

Visit our podcast’s homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Chip Huyen — ML Research and Production Pipelines</title><itunes:title>Chip Huyen — ML Research and Production Pipelines</itunes:title><description><![CDATA[Chip Huyen is a writer and computer scientist currently working at a startup that focuses on machine learning production pipelines. Previously, she’s worked at NVIDIA, Netflix, and Primer. She helped launch Coc Coc - Vietnam’s second most popular web browser with 20+ million monthly active users. Before all of that, she was a best selling author and traveled the world.

Chip graduated from Stanford, where she created and taught the course on TensorFlow for Deep Learning Research.

Check out Chip's recent article on ML Tools: https://huyenchip.com/2020/06/22/mlops.html
Follow Chip on Twitter: https://twitter.com/chipro
And on her website: https://huyenchip.com/

Visit our podcast’s homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[Chip Huyen is a writer and computer scientist currently working at a startup that focuses on machine learning production pipelines. Previously, she’s worked at NVIDIA, Netflix, and Primer. She helped launch Coc Coc - Vietnam’s second most popular web browser with 20+ million monthly active users. Before all of that, she was a best selling author and traveled the world.

Chip graduated from Stanford, where she created and taught the course on TensorFlow for Deep Learning Research.

Check out Chip's recent article on ML Tools: https://huyenchip.com/2020/06/22/mlops.html
Follow Chip on Twitter: https://twitter.com/chipro
And on her website: https://huyenchip.com/

Visit our podcast’s homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/816538480</guid><itunes:image href="https://artwork.captivate.fm/fc50359b-5e92-431e-b779-40b90afe5b21/artworks-8692qwcmga1vwf4w-agzyrg-t3000x3000.jpg"/><pubDate>Tue, 28 Jul 2020 22:27:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e2548730-7a72-4a17-a8f2-86f257b4541d/816538480-wandb-chip-huyen.mp3" length="41395512" type="audio/mpeg"/><itunes:duration>43:07</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Chip Huyen is a writer and computer scientist currently working at a startup that focuses on machine learning production pipelines. Previously, she’s worked at NVIDIA, Netflix, and Primer. She helped launch Coc Coc - Vietnam’s second most popular web browser with 20+ million monthly active users. Before all of that, she was a best selling author and traveled the world.

Chip graduated from Stanford, where she created and taught the course on TensorFlow for Deep Learning Research.

Check out Chip&apos;s recent article on ML Tools: https://huyenchip.com/2020/06/22/mlops.html
Follow Chip on Twitter: https://twitter.com/chipro
And on her website: https://huyenchip.com/

Visit our podcast’s homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Peter Skomoroch — Product Management for AI</title><itunes:title>Peter Skomoroch — Product Management for AI</itunes:title><description><![CDATA[👨🏻‍💻Our guest on this episode of Gradient Dissent is Peter Skomoroch!
Peter is the former head of data products at Workday and LinkedIn. Previously, he was the cofounder and CEO of venture-backed deep learning startup SkipFlag, which was acquired by Workday, and a principal data scientist at LinkedIn.

Check out his recent publication: What you need to know about product management for AI
https://www.oreilly.com/radar/what-you-need-to-know-about-product-management-for-ai/

Follow Peter on Twitter:
https://twitter.com/peteskomoroch

And read some of his other work:
Pangloss: Fast Entity Linking in Noisy Text Environments

Large-Scale Hierarchical Topic Models

Visit our podcast’s homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on YouTube, Apple, and Spotify!
YouTube: https://bit.ly/32NzZvI
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[👨🏻‍💻Our guest on this episode of Gradient Dissent is Peter Skomoroch!
Peter is the former head of data products at Workday and LinkedIn. Previously, he was the cofounder and CEO of venture-backed deep learning startup SkipFlag, which was acquired by Workday, and a principal data scientist at LinkedIn.

Check out his recent publication: What you need to know about product management for AI
https://www.oreilly.com/radar/what-you-need-to-know-about-product-management-for-ai/

Follow Peter on Twitter:
https://twitter.com/peteskomoroch

And read some of his other work:
Pangloss: Fast Entity Linking in Noisy Text Environments

Large-Scale Hierarchical Topic Models

Visit our podcast’s homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on YouTube, Apple, and Spotify!
YouTube: https://bit.ly/32NzZvI
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/862090300</guid><itunes:image href="https://artwork.captivate.fm/d286f00c-f1fc-46f8-bd21-e3ee4c86e438/artworks-8692qwcmga1vwf4w-agzyrg-t3000x3000.jpg"/><pubDate>Tue, 21 Jul 2020 19:55:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/66482c42-f876-4107-b681-f95318c659d1/862090300-wandb-peter-skomoroch.mp3" length="83906559" type="audio/mpeg"/><itunes:duration>01:27:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>👨🏻‍💻Our guest on this episode of Gradient Dissent is Peter Skomoroch!
Peter is the former head of data products at Workday and LinkedIn. Previously, he was the cofounder and CEO of venture-backed deep learning startup SkipFlag, which was acquired by Workday, and a principal data scientist at LinkedIn.

Check out his recent publication: What you need to know about product management for AI
https://www.oreilly.com/radar/what-you-need-to-know-about-product-management-for-ai/

Follow Peter on Twitter:
https://twitter.com/peteskomoroch

And read some of his other work:
Pangloss: Fast Entity Linking in Noisy Text Environments

Large-Scale Hierarchical Topic Models

Visit our podcast’s homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on YouTube, Apple, and Spotify!
YouTube: https://bit.ly/32NzZvI
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Josh Tobin — Productionizing ML Models</title><itunes:title>Josh Tobin — Productionizing ML Models</itunes:title><description><![CDATA[Josh Tobin is a researcher working at the intersection of machine learning and robotics. His research focuses on applying deep reinforcement learning, generative models, and synthetic data to problems in robotic perception and control.

Additionally, he co-organizes Full Stack Deep Learning, a training program that teaches engineers how to build production-ready deep learning systems. https://fullstackdeeplearning.com/

Josh did his PhD in Computer Science at UC Berkeley advised by Pieter Abbeel and was a research scientist at OpenAI for 3 years during his PhD.

Finally, Josh created this amazing field guide on troubleshooting deep neural networks:
http://josh-tobin.com/assets/pdf/troubleshooting-deep-neural-networks-01-19.pdf

Follow Josh on Twitter: https://twitter.com/josh_tobin
And on his website: http://josh-tobin.com/


Visit our podcast’s homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on YouTube, Apple, and Spotify!
YouTube: https://www.youtube.com/playlist?list=PLD80i8An1OEEb1jP0sjEyiLG8ULRXFob_
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[Josh Tobin is a researcher working at the intersection of machine learning and robotics. His research focuses on applying deep reinforcement learning, generative models, and synthetic data to problems in robotic perception and control.

Additionally, he co-organizes Full Stack Deep Learning, a training program that teaches engineers how to build production-ready deep learning systems. https://fullstackdeeplearning.com/

Josh did his PhD in Computer Science at UC Berkeley advised by Pieter Abbeel and was a research scientist at OpenAI for 3 years during his PhD.

Finally, Josh created this amazing field guide on troubleshooting deep neural networks:
http://josh-tobin.com/assets/pdf/troubleshooting-deep-neural-networks-01-19.pdf

Follow Josh on Twitter: https://twitter.com/josh_tobin
And on his website: http://josh-tobin.com/


Visit our podcast’s homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on YouTube, Apple, and Spotify!
YouTube: https://www.youtube.com/playlist?list=PLD80i8An1OEEb1jP0sjEyiLG8ULRXFob_
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/816540124</guid><itunes:image href="https://artwork.captivate.fm/23bab974-5d71-482a-b4f3-052b9a2a6131/artworks-8692qwcmga1vwf4w-agzyrg-t3000x3000.jpg"/><pubDate>Tue, 07 Jul 2020 20:22:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/ecb78937-7f4e-4379-b362-6d0303f8e9df/816540124-wandb-gd-josh-tobin.mp3" length="46388870" type="audio/mpeg"/><itunes:duration>48:19</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Josh Tobin is a researcher working at the intersection of machine learning and robotics. His research focuses on applying deep reinforcement learning, generative models, and synthetic data to problems in robotic perception and control.

Additionally, he co-organizes Full Stack Deep Learning, a training program that teaches engineers how to build production-ready deep learning systems. https://fullstackdeeplearning.com/

Josh did his PhD in Computer Science at UC Berkeley advised by Pieter Abbeel and was a research scientist at OpenAI for 3 years during his PhD.

Finally, Josh created this amazing field guide on troubleshooting deep neural networks:
http://josh-tobin.com/assets/pdf/troubleshooting-deep-neural-networks-01-19.pdf

Follow Josh on Twitter: https://twitter.com/josh_tobin
And on his website: http://josh-tobin.com/


Visit our podcast’s homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on YouTube, Apple, and Spotify!
YouTube: https://www.youtube.com/playlist?list=PLD80i8An1OEEb1jP0sjEyiLG8ULRXFob_
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Miles Brundage — Societal Impacts of Artificial Intelligence</title><itunes:title>Miles Brundage — Societal Impacts of Artificial Intelligence</itunes:title><description><![CDATA[Miles Brundage researches the societal impacts of artificial intelligence and how to make sure they go well. In 2018, he joined OpenAI, as a Research Scientist on the Policy team. Previously, he was a Research Fellow at the University of Oxford's Future of Humanity Institute and served as a member of Axon's AI and Policing Technology Ethics Board.

Keep up with Miles on his website: https://www.milesbrundage.com/
and on Twitter: https://twitter.com/miles_brundage

Visit our podcast’s homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[Miles Brundage researches the societal impacts of artificial intelligence and how to make sure they go well. In 2018, he joined OpenAI, as a Research Scientist on the Policy team. Previously, he was a Research Fellow at the University of Oxford's Future of Humanity Institute and served as a member of Axon's AI and Policing Technology Ethics Board.

Keep up with Miles on his website: https://www.milesbrundage.com/
and on Twitter: https://twitter.com/miles_brundage

Visit our podcast’s homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/833653144</guid><itunes:image href="https://artwork.captivate.fm/f0f8619f-5af4-4070-b236-b261f405b843/artworks-8692qwcmga1vwf4w-agzyrg-t3000x3000.jpg"/><pubDate>Tue, 30 Jun 2020 21:40:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/cc1ba5e9-58b3-4f8c-b5ae-26302e25e327/833653144-wandb-miles-brundage.mp3" length="59866982" type="audio/mpeg"/><itunes:duration>01:02:17</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Miles Brundage researches the societal impacts of artificial intelligence and how to make sure they go well. In 2018, he joined OpenAI, as a Research Scientist on the Policy team. Previously, he was a Research Fellow at the University of Oxford&apos;s Future of Humanity Institute and served as a member of Axon&apos;s AI and Policing Technology Ethics Board.

Keep up with Miles on his website: https://www.milesbrundage.com/
and on Twitter: https://twitter.com/miles_brundage

Visit our podcast’s homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Hamel Husain — Building Machine Learning Tools</title><itunes:title>Hamel Husain — Building Machine Learning Tools</itunes:title><description><![CDATA[Hamel Husain is a Staff Machine Learning Engineer at GitHub. He has extensive experience building data analytics and predictive modeling solutions for a wide range of industries, including hospitality, telecom, retail, restaurant, entertainment and finance. He has built large data science teams (50+) from the ground up and has extensive experience building solutions as an individual contributor.

Follow Hamel on Twitter:
https://twitter.com/HamelHusain
And on his website: http://hamel.io/

Learn more about GitHub Actions:
https://github.com/features/actions

and the CodeSearchNet Challenge:
https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/

Visit our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[Hamel Husain is a Staff Machine Learning Engineer at GitHub. He has extensive experience building data analytics and predictive modeling solutions for a wide range of industries, including hospitality, telecom, retail, restaurant, entertainment and finance. He has built large data science teams (50+) from the ground up and has extensive experience building solutions as an individual contributor.

Follow Hamel on Twitter:
https://twitter.com/HamelHusain
And on his website: http://hamel.io/

Learn more about GitHub Actions:
https://github.com/features/actions

and the CodeSearchNet Challenge:
https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/

Visit our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/816499858</guid><itunes:image href="https://artwork.captivate.fm/df88919e-33d1-44d7-b220-f4053bf07ef4/artworks-8692qwcmga1vwf4w-agzyrg-t3000x3000.jpg"/><pubDate>Tue, 23 Jun 2020 21:14:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f88e7512-09ab-4933-9c9b-ae4c821a5b58/816499858-wandb-hamel-husain.mp3" length="34635858" type="audio/mpeg"/><itunes:duration>36:05</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Hamel Husain is a Staff Machine Learning Engineer at GitHub. He has extensive experience building data analytics and predictive modeling solutions for a wide range of industries, including hospitality, telecom, retail, restaurant, entertainment and finance. He has built large data science teams (50+) from the ground up and has extensive experience building solutions as an individual contributor.

Follow Hamel on Twitter:
https://twitter.com/HamelHusain
And on his website: http://hamel.io/

Learn more about GitHub Actions:
https://github.com/features/actions

and the CodeSearchNet Challenge:
https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/

Visit our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Peter Welinder — Deep Reinforcement Learning and Robotics</title><itunes:title>Peter Welinder — Deep Reinforcement Learning and Robotics</itunes:title><description><![CDATA[Peter Welinder is a research scientist and roboticist at OpenAI. Before that, he was an engineer at Dropbox and ran the machine learning team, and before that, he co-founded Anchovi Labs, a startup using computer vision to organize photos, which was acquired by Dropbox in 2012. In this episode of our podcast, Peter shares his experiences and the challenges associated with building a robotic hand that can solve a Rubik’s Cube.

Read some of Peter’s Articles:
https://openai.com/blog/authors/peter/

Follow Peter on Twitter:
https://twitter.com/npew

Check out our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[Peter Welinder is a research scientist and roboticist at OpenAI. Before that, he was an engineer at Dropbox and ran the machine learning team, and before that, he co-founded Anchovi Labs, a startup using computer vision to organize photos, which was acquired by Dropbox in 2012. In this episode of our podcast, Peter shares his experiences and the challenges associated with building a robotic hand that can solve a Rubik’s Cube.

Read some of Peter’s Articles:
https://openai.com/blog/authors/peter/

Follow Peter on Twitter:
https://twitter.com/npew

Check out our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/833587330</guid><itunes:image href="https://artwork.captivate.fm/6c18099a-1fa2-427a-81ed-ec45552e57db/artworks-8692qwcmga1vwf4w-agzyrg-t3000x3000.jpg"/><pubDate>Wed, 17 Jun 2020 17:12:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/7213ee73-ca40-4a5d-8070-16470db021cc/833587330-wandb-gd-peter-welinder.mp3" length="78308412" type="audio/mpeg"/><itunes:duration>54:17</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Peter Welinder is a research scientist and roboticist at OpenAI. Before that, he was an engineer at Dropbox and ran the machine learning team, and before that, he co-founded Anchovi Labs, a startup using computer vision to organize photos, which was acquired by Dropbox in 2012. In this episode of our podcast, Peter shares his experiences and the challenges associated with building a robotic hand that can solve a Rubik&apos;s Cube.

Read some of Peter’s Articles:
https://openai.com/blog/authors/peter/

Follow Peter on Twitter:
https://twitter.com/npew

Check out our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Vicki Boykis — Machine Learning Across Industries</title><itunes:title>Vicki Boykis — Machine Learning Across Industries</itunes:title><description><![CDATA[👩‍💻Today our guest is Vicki Boykis!
Vicki is a senior consultant in machine learning and engineering and works with clients to build holistic data products used for decision-making. She's previously spoken at PyData, taught SQL for GirlDevelopIt, and blogs about data pipelines and the open internet.
Follow her on her website: vickiboykis.com
On Twitter: https://twitter.com/vboykis
and subscribe to her newsletter: vicki.substack.com 

Check out our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[👩‍💻Today our guest is Vicki Boykis!
Vicki is a senior consultant in machine learning and engineering and works with clients to build holistic data products used for decision-making. She's previously spoken at PyData, taught SQL for GirlDevelopIt, and blogs about data pipelines and the open internet.
Follow her on her website: vickiboykis.com
On Twitter: https://twitter.com/vboykis
and subscribe to her newsletter: vicki.substack.com 

Check out our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/810992620</guid><itunes:image href="https://artwork.captivate.fm/13730627-57b3-42eb-b405-8e5ec2237961/artworks-8692qwcmga1vwf4w-agzyrg-t3000x3000.jpg"/><pubDate>Wed, 03 Jun 2020 22:14:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/ca0d00d2-0944-46c8-814e-3b908fdc61a2/810992620-wandb-ml-consulting-with-vicki-boykis.mp3" length="32678555" type="audio/mpeg"/><itunes:duration>34:02</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>👩‍💻Today our guest is Vicki Boykis!
Vicki is a senior consultant in machine learning and engineering and works with clients to build holistic data products used for decision-making. She&apos;s previously spoken at PyData, taught SQL for GirlDevelopIt, and blogs about data pipelines and the open internet.
Follow her on her website: vickiboykis.com
On Twitter: https://twitter.com/vboykis
and subscribe to her newsletter: vicki.substack.com 

Check out our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Angela &amp; Danielle — Designing ML Models for Millions of Consumer Robots</title><itunes:title>Angela &amp; Danielle — Designing ML Models for Millions of Consumer Robots</itunes:title><description><![CDATA[👩‍💻👩‍💻On this episode of Gradient Dissent our guests are Angela Bassa and Danielle Dean!

Angela is an expert in building and leading data teams. An MIT-trained and Edelman-award-winning mathematician, she has over 15 years of experience across industries—spanning finance, life sciences, agriculture, marketing, energy, software, and robotics. Angela heads Data Science and Machine Learning at iRobot, where her teams help bring intelligence to a global fleet of millions of consumer robots. She is also a renowned keynote speaker and author, with credits including the Wall Street Journal and Harvard Business Review.
Follow Angela on Twitter: https://twitter.com/angebassa
And on her website: https://www.angelabassa.com/

Danielle Dean, PhD is the Technical Director of Machine Learning at iRobot where she is helping lead the intelligence revolution for robots. She leads a team that leverages machine learning, reinforcement learning, and software engineering to build algorithms that will result in massive improvements in our robots. Before iRobot, Danielle was a Principal Data Scientist Lead at Microsoft Corp. in AzureCAT Engineering within the Cloud AI Platform division.
Follow Danielle on Twitter: https://twitter.com/danielleodean

Check out our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[👩‍💻👩‍💻On this episode of Gradient Dissent our guests are Angela Bassa and Danielle Dean!

Angela is an expert in building and leading data teams. An MIT-trained and Edelman-award-winning mathematician, she has over 15 years of experience across industries—spanning finance, life sciences, agriculture, marketing, energy, software, and robotics. Angela heads Data Science and Machine Learning at iRobot, where her teams help bring intelligence to a global fleet of millions of consumer robots. She is also a renowned keynote speaker and author, with credits including the Wall Street Journal and Harvard Business Review.
Follow Angela on Twitter: https://twitter.com/angebassa
And on her website: https://www.angelabassa.com/

Danielle Dean, PhD is the Technical Director of Machine Learning at iRobot where she is helping lead the intelligence revolution for robots. She leads a team that leverages machine learning, reinforcement learning, and software engineering to build algorithms that will result in massive improvements in our robots. Before iRobot, Danielle was a Principal Data Scientist Lead at Microsoft Corp. in AzureCAT Engineering within the Cloud AI Platform division.
Follow Danielle on Twitter: https://twitter.com/danielleodean

Check out our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://wandb.ai/site/resources/podcast]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/810995236</guid><itunes:image href="https://artwork.captivate.fm/f562e755-0ff3-4f4d-9447-0db3e61ef85f/artworks-8692qwcmga1vwf4w-agzyrg-t3000x3000.jpg"/><pubDate>Tue, 05 May 2020 22:21:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/e30e91cb-db82-4ecb-9df6-c3d3090994a5/810995236-wandb-angela-danielle.mp3" length="50525830" type="audio/mpeg"/><itunes:duration>52:38</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>👩‍💻👩‍💻On this episode of Gradient Dissent our guests are Angela Bassa and Danielle Dean!

Angela is an expert in building and leading data teams. An MIT-trained and Edelman-award-winning mathematician, she has over 15 years of experience across industries—spanning finance, life sciences, agriculture, marketing, energy, software, and robotics. Angela heads Data Science and Machine Learning at iRobot, where her teams help bring intelligence to a global fleet of millions of consumer robots. She is also a renowned keynote speaker and author, with credits including the Wall Street Journal and Harvard Business Review.
Follow Angela on Twitter: https://twitter.com/angebassa
And on her website: https://www.angelabassa.com/

Danielle Dean, PhD is the Technical Director of Machine Learning at iRobot where she is helping lead the intelligence revolution for robots. She leads a team that leverages machine learning, reinforcement learning, and software engineering to build algorithms that will result in massive improvements in our robots. Before iRobot, Danielle was a Principal Data Scientist Lead at Microsoft Corp. in AzureCAT Engineering within the Cloud AI Platform division.
Follow Danielle on Twitter: https://twitter.com/danielleodean

Check out our podcast homepage for transcripts and more episodes!
www.wandb.com/podcast

🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Jack Clark — Building Trustworthy AI Systems</title><itunes:title>Jack Clark — Building Trustworthy AI Systems</itunes:title><description><![CDATA[Jack Clark is the Strategy and Communications Director at OpenAI and formerly worked as the world’s only neural network reporter at Bloomberg. Lukas and Jack discuss AI policy, ethics, and the responsibilities of AI researchers.
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims by OpenAI: https://arxiv.org/abs/2004.07213
Follow Jack Clark on Twitter: twitter.com/jackclarkSF
Read more posts by Jack on his website: https://jack-clark.net/

Get our podcast on Apple and Spotify!
https://podcasts.apple.com/us/podcast/gradient-dissent-weights-biases/id1504567418
https://open.spotify.com/show/7o9r3fFig3MhTJwehXDbXm

🤖Gradient Dissent by Weights and Biases
Get a behind-the-scenes look at how industry leaders are using machine learning in the real world. While building experiment tracking tools, we’ve had the opportunity to learn about how different teams are building and deploying models.  In this podcast, we share some of the insights and stories we’ve heard along the way. Follow Gradient Dissent for weekly machine learning updates, and be part of the conversation.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[Jack Clark is the Strategy and Communications Director at OpenAI and formerly worked as the world’s only neural network reporter at Bloomberg. Lukas and Jack discuss AI policy, ethics, and the responsibilities of AI researchers.
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims by OpenAI: https://arxiv.org/abs/2004.07213
Follow Jack Clark on Twitter: twitter.com/jackclarkSF
Read more posts by Jack on his website: https://jack-clark.net/

Get our podcast on Apple and Spotify!
https://podcasts.apple.com/us/podcast/gradient-dissent-weights-biases/id1504567418
https://open.spotify.com/show/7o9r3fFig3MhTJwehXDbXm

🤖Gradient Dissent by Weights and Biases
Get a behind-the-scenes look at how industry leaders are using machine learning in the real world. While building experiment tracking tools, we’ve had the opportunity to learn about how different teams are building and deploying models.  In this podcast, we share some of the insights and stories we’ve heard along the way. Follow Gradient Dissent for weekly machine learning updates, and be part of the conversation.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://soundcloud.com/wandb/whats-the-chance-of-an-ai-apocalypse-wjack-clark]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/804309751</guid><itunes:image href="https://artwork.captivate.fm/eae17174-a482-4abd-bd8f-59e759fb9f72/artworks-8692qwcmga1vwf4w-agzyrg-t3000x3000.jpg"/><pubDate>Wed, 22 Apr 2020 04:01:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/c3c3e1ca-4b82-44de-b2db-bfb1f6a2fe35/804309751-wandb-whats-the-chance-of-an-ai-apocalypse-wjack-clark.mp3" length="53696469" type="audio/mpeg"/><itunes:duration>55:56</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>Jack Clark is the Strategy and Communications Director at OpenAI and formerly worked as the world’s only neural network reporter at Bloomberg. Lukas and Jack discuss AI policy, ethics, and the responsibilities of AI researchers.
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims by OpenAI: https://arxiv.org/abs/2004.07213
Follow Jack Clark on Twitter: https://twitter.com/jackclarkSF
Read more posts by Jack on his website: https://jack-clark.net/

Get our podcast on Apple and Spotify!
https://podcasts.apple.com/us/podcast/gradient-dissent-weights-biases/id1504567418
https://open.spotify.com/show/7o9r3fFig3MhTJwehXDbXm

🤖Gradient Dissent by Weights and Biases
Get a behind-the-scenes look at how industry leaders are using machine learning in the real world. While building experiment tracking tools, we’ve had the opportunity to learn how different teams are building and deploying models. In this podcast, we share some of the insights and stories we’ve heard along the way. Follow Gradient Dissent for weekly machine learning updates, and be part of the conversation.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Rachael Tatman — Conversational AI and Linguistics</title><itunes:title>Rachael Tatman — Conversational AI and Linguistics</itunes:title><description><![CDATA[🏅 See how W&B is your secret weapon to make it onto the Kaggle leaderboards - https://www.wandb.com/kaggle

👩‍💻Rachael Tatman is a developer advocate for Rasa, where she helps developers build and deploy conversational AI applications using their open source framework. 🤖💬 She has a PhD in Linguistics from the University of Washington where she researched computational sociolinguistics, or how our social identity affects the way we use language in computational contexts. Previously she was a data scientist at Kaggle where she’s still a Grandmaster.

💻Keep up with Rachael on her website: http://www.rctatman.com/
🐦Follow Rachael on twitter: https://twitter.com/rctatman

Get our podcast on Apple and Spotify!
https://podcasts.apple.com/us/podcast/gradient-dissent-weights-biases/id1504567418
https://open.spotify.com/show/7o9r3fFig3MhTJwehXDbXm

🤖Gradient Dissent by Weights and Biases
We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[🏅 See how W&B is your secret weapon to make it onto the Kaggle leaderboards - https://www.wandb.com/kaggle

👩‍💻Rachael Tatman is a developer advocate for Rasa, where she helps developers build and deploy conversational AI applications using their open source framework. 🤖💬 She has a PhD in Linguistics from the University of Washington where she researched computational sociolinguistics, or how our social identity affects the way we use language in computational contexts. Previously she was a data scientist at Kaggle where she’s still a Grandmaster.

💻Keep up with Rachael on her website: http://www.rctatman.com/
🐦Follow Rachael on twitter: https://twitter.com/rctatman

Get our podcast on Apple and Spotify!
https://podcasts.apple.com/us/podcast/gradient-dissent-weights-biases/id1504567418
https://open.spotify.com/show/7o9r3fFig3MhTJwehXDbXm

🤖Gradient Dissent by Weights and Biases
We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://soundcloud.com/wandb/rachel-tatman]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/792488173</guid><itunes:image href="https://artwork.captivate.fm/fbeef1cd-07a7-4e80-9ba8-9663219e2013/artworks-8692qwcmga1vwf4w-agzyrg-t3000x3000.jpg"/><pubDate>Tue, 07 Apr 2020 00:03:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d17c47d7-aa6a-4076-9946-9044932f9144/792488173-wandb-rachel-tatman.mp3" length="35368959" type="audio/mpeg"/><itunes:duration>36:51</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>🏅 See how W&amp;B is your secret weapon to make it onto the Kaggle leaderboards - https://www.wandb.com/kaggle

👩‍💻Rachael Tatman is a developer advocate for Rasa, where she helps developers build and deploy conversational AI applications using their open source framework. 🤖💬 She has a PhD in Linguistics from the University of Washington where she researched computational sociolinguistics, or how our social identity affects the way we use language in computational contexts. Previously she was a data scientist at Kaggle where she’s still a Grandmaster.

💻Keep up with Rachael on her website: http://www.rctatman.com/
🐦Follow Rachael on twitter: https://twitter.com/rctatman

Get our podcast on Apple and Spotify!
https://podcasts.apple.com/us/podcast/gradient-dissent-weights-biases/id1504567418
https://open.spotify.com/show/7o9r3fFig3MhTJwehXDbXm

🤖Gradient Dissent by Weights and Biases
We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Nicolas Koumchatzky — Machine Learning in Production for Self-Driving Cars</title><itunes:title>Nicolas Koumchatzky — Machine Learning in Production for Self-Driving Cars</itunes:title><description><![CDATA[👨🏻‍💻Nicolas Koumchatzky is the Director of AI infrastructure at NVIDIA, where he's responsible for MagLev, the production-grade machine learning platform by NVIDIA. His team supports diverse ML use cases: autonomous vehicles, medical imaging, super resolution, predictive analytics, cyber security, robotics. He started as a Quant in Paris, then joined Madbits, a startup specialized in using deep learning for content understanding. When Madbits was acquired by Twitter in 2014, he joined as a deep learning expert and led a few projects in Cortex, including a real-time live video classification product for Periscope. In 2016, he focused on building a scalable AI platform for the company. Early 2017, he became the lead for the Cortex team. He joined NVIDIA in 2018.
🐦Follow Nicolas on twitter: https://twitter.com/nkoumchatzky
🛠Maglev: https://blogs.nvidia.com/blog/2018/09/13/how-maglev-speeds-autonomous-vehicles-to-superhuman-levels-of-safety/
✍️Scalable Active Learning for Autonomous Driving: https://medium.com/nvidia-ai/scalable-active-learning-for-autonomous-driving-a-practical-implementation-and-a-b-test-4d315ed04b5f
✍️Active Learning – Finding the right self-driving training data doesn’t have to take a swarm of human labelers: https://blogs.nvidia.com/blog/2020/01/16/what-is-active-learning/

👫Continue the conversation on our slack community - http://bit.ly/wandb-forum

🤖Gradient Dissent by Weights and Biases
We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.
* Visualize your Scikit model performance with W&B - https://app.wandb.ai/lavanyashukla/visualize-sklearn/reports/Visualizing-Sklearn-With-Weights-and-Biases--Vmlldzo0ODIzNg
* Blog: https://www.wandb.com/articles
* Gallery: See what you can create with W&B - https://app.wandb.ai/gallery

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></description><content:encoded><![CDATA[👨🏻‍💻Nicolas Koumchatzky is the Director of AI infrastructure at NVIDIA, where he's responsible for MagLev, the production-grade machine learning platform by NVIDIA. His team supports diverse ML use cases: autonomous vehicles, medical imaging, super resolution, predictive analytics, cyber security, robotics. He started as a Quant in Paris, then joined Madbits, a startup specialized in using deep learning for content understanding. When Madbits was acquired by Twitter in 2014, he joined as a deep learning expert and led a few projects in Cortex, including a real-time live video classification product for Periscope. In 2016, he focused on building a scalable AI platform for the company. Early 2017, he became the lead for the Cortex team. He joined NVIDIA in 2018.
🐦Follow Nicolas on twitter: https://twitter.com/nkoumchatzky
🛠Maglev: https://blogs.nvidia.com/blog/2018/09/13/how-maglev-speeds-autonomous-vehicles-to-superhuman-levels-of-safety/
✍️Scalable Active Learning for Autonomous Driving: https://medium.com/nvidia-ai/scalable-active-learning-for-autonomous-driving-a-practical-implementation-and-a-b-test-4d315ed04b5f
✍️Active Learning – Finding the right self-driving training data doesn’t have to take a swarm of human labelers: https://blogs.nvidia.com/blog/2020/01/16/what-is-active-learning/

👫Continue the conversation on our slack community - http://bit.ly/wandb-forum

🤖Gradient Dissent by Weights and Biases
We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.
* Visualize your Scikit model performance with W&B - https://app.wandb.ai/lavanyashukla/visualize-sklearn/reports/Visualizing-Sklearn-With-Weights-and-Biases--Vmlldzo0ODIzNg
* Blog: https://www.wandb.com/articles
* Gallery: See what you can create with W&B - https://app.wandb.ai/gallery

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/]]></content:encoded><link><![CDATA[https://soundcloud.com/wandb/nicolas-v5]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/779992225</guid><itunes:image href="https://artwork.captivate.fm/6f5f46ec-a3ec-4d89-a314-34e754c644d5/artworks-lbgopbyj3trveyxq-wujzqa-t3000x3000.jpg"/><pubDate>Sat, 21 Mar 2020 00:32:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/3018abaf-3e1b-4d5b-9567-22b628795d96/779992225-wandb-nicolas-v5.mp3" length="107868640" type="audio/mpeg"/><itunes:duration>44:56</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>👨🏻‍💻Nicolas Koumchatzky is the Director of AI infrastructure at NVIDIA, where he&apos;s responsible for MagLev, the production-grade machine learning platform by NVIDIA. His team supports diverse ML use cases: autonomous vehicles, medical imaging, super resolution, predictive analytics, cyber security, robotics. He started as a Quant in Paris, then joined Madbits, a startup specialized in using deep learning for content understanding. When Madbits was acquired by Twitter in 2014, he joined as a deep learning expert and led a few projects in Cortex, including a real-time live video classification product for Periscope. In 2016, he focused on building a scalable AI platform for the company. Early 2017, he became the lead for the Cortex team. He joined NVIDIA in 2018.
🐦Follow Nicolas on twitter: https://twitter.com/nkoumchatzky
🛠Maglev: https://blogs.nvidia.com/blog/2018/09/13/how-maglev-speeds-autonomous-vehicles-to-superhuman-levels-of-safety/
✍️Scalable Active Learning for Autonomous Driving: https://medium.com/nvidia-ai/scalable-active-learning-for-autonomous-driving-a-practical-implementation-and-a-b-test-4d315ed04b5f
✍️Active Learning – Finding the right self-driving training data doesn’t have to take a swarm of human labelers: https://blogs.nvidia.com/blog/2020/01/16/what-is-active-learning/

👫Continue the conversation on our slack community - http://bit.ly/wandb-forum

🤖Gradient Dissent by Weights and Biases
We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.
* Visualize your Scikit model performance with W&amp;B - https://app.wandb.ai/lavanyashukla/visualize-sklearn/reports/Visualizing-Sklearn-With-Weights-and-Biases--Vmlldzo0ODIzNg
* Blog: https://www.wandb.com/articles
* Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery

🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼‍💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/</itunes:summary></item><item><title>Brandon Rohrer — Machine Learning in Production for Robots</title><itunes:title>Brandon Rohrer — Machine Learning in Production for Robots</itunes:title><description><![CDATA[👨🏻‍💻Brandon Rohrer is a Mechanical Engineer turned Data Scientist. He’s currently a Principal Data Scientist at iRobot and has a popular Machine Learning course at e2eML, where he’s made some wildly popular videos on convolutional neural networks and deep learning. His fascination with robots began after watching Luke Skywalker’s prosthetic hand in The Empire Strikes Back. He turned this fascination into a PhD from MIT and subsequently found his way to building some incredible data science products at Facebook, Microsoft, and now iRobot.
✍️Brandon’s brilliant machine learning course: http://e2eml.school/
🐦Follow Brandon on twitter: https://twitter.com/_brohrer_

👫Continue the conversation on our slack community - http://bit.ly/wandb-forum

🤖Gradient Dissent by Weights and Biases - http://wandb.com

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

Today our guest is Brandon Rohrer.
👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

• Visualize your Scikit model performance with W&B - https://app.wandb.ai/lavanyashukla/visualize-sklearn/reports/Visualizing-Sklearn-With-Weights-and-Biases--Vmlldzo0ODIzNg
• Blog: https://www.wandb.com/articles
• Gallery: See what you can create with W&B - https://app.wandb.ai/gallery]]></description><content:encoded><![CDATA[👨🏻‍💻Brandon Rohrer is a Mechanical Engineer turned Data Scientist. He’s currently a Principal Data Scientist at iRobot and has a popular Machine Learning course at e2eML, where he’s made some wildly popular videos on convolutional neural networks and deep learning. His fascination with robots began after watching Luke Skywalker’s prosthetic hand in The Empire Strikes Back. He turned this fascination into a PhD from MIT and subsequently found his way to building some incredible data science products at Facebook, Microsoft, and now iRobot.
✍️Brandon’s brilliant machine learning course: http://e2eml.school/
🐦Follow Brandon on twitter: https://twitter.com/_brohrer_

👫Continue the conversation on our slack community - http://bit.ly/wandb-forum

🤖Gradient Dissent by Weights and Biases - http://wandb.com

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

Today our guest is Brandon Rohrer.
👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

• Visualize your Scikit model performance with W&B - https://app.wandb.ai/lavanyashukla/visualize-sklearn/reports/Visualizing-Sklearn-With-Weights-and-Biases--Vmlldzo0ODIzNg
• Blog: https://www.wandb.com/articles
• Gallery: See what you can create with W&B - https://app.wandb.ai/gallery]]></content:encoded><link><![CDATA[https://soundcloud.com/wandb/gradient-dissent-brandon-rohrer]]></link><guid isPermaLink="false">tag:soundcloud,2010:tracks/773984860</guid><itunes:image href="https://artwork.captivate.fm/107ec77a-a77c-479e-b069-f890fecf903c/artworks-lbgopbyj3trveyxq-wujzqa-t3000x3000.jpg"/><pubDate>Tue, 10 Mar 2020 22:52:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/8149d990-37c5-4df0-aae9-0c0f617a81cd/773984860-wandb-gradient-dissent-brandon-rohrer.mp3" length="82908750" type="audio/mpeg"/><itunes:duration>34:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:summary>👨🏻‍💻Brandon Rohrer is a Mechanical Engineer turned Data Scientist. He’s currently a Principal Data Scientist at iRobot and has a popular Machine Learning course at e2eML, where he’s made some wildly popular videos on convolutional neural networks and deep learning. His fascination with robots began after watching Luke Skywalker’s prosthetic hand in The Empire Strikes Back. He turned this fascination into a PhD from MIT and subsequently found his way to building some incredible data science products at Facebook, Microsoft, and now iRobot.
✍️Brandon’s brilliant machine learning course: http://e2eml.school/
🐦Follow Brandon on twitter: https://twitter.com/_brohrer_

👫Continue the conversation on our slack community - http://bit.ly/wandb-forum

🤖Gradient Dissent by Weights and Biases - http://wandb.com

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it.

Today our guest is Brandon Rohrer.
👩🏼‍🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.

• Visualize your Scikit model performance with W&amp;B - https://app.wandb.ai/lavanyashukla/visualize-sklearn/reports/Visualizing-Sklearn-With-Weights-and-Biases--Vmlldzo0ODIzNg
• Blog: https://www.wandb.com/articles
• Gallery: See what you can create with W&amp;B - https://app.wandb.ai/gallery</itunes:summary></item></channel></rss>