<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="https://feeds.captivate.fm/style.xsl" type="text/xsl"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><atom:link href="https://feeds.captivate.fm/ai-goes-to-college/" rel="self" type="application/rss+xml"/><title><![CDATA[AI Goes to College]]></title><podcast:guid>77df2be6-0b2f-58be-8e49-0421a098a70b</podcast:guid><lastBuildDate>Tue, 31 Mar 2026 08:45:04 +0000</lastBuildDate><generator>Captivate.fm</generator><language><![CDATA[en]]></language><copyright><![CDATA[2024]]></copyright><managingEditor>Craig Van Slyke</managingEditor><itunes:summary><![CDATA[Generative artificial intelligence (GAI) has taken higher education by storm. Higher ed professionals need to find ways to understand and stay up with developments in GAI. AI Goes to College helps higher ed professionals learn about the latest developments in GAI, how these might affect higher ed, and what they can do in response.

Each episode offers insights about how to leverage GAI, and about the promise and perils of recent advances. The hosts, Dr. Craig Van Slyke and Dr. Robert E. Crossler, are experts in the adoption and use of GAI and in understanding its impacts on various fields, including higher ed.]]></itunes:summary><image><url>https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg</url><title>AI Goes to College</title><link><![CDATA[https://sites.libsyn.com/502703]]></link></image><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><itunes:owner><itunes:name>Craig Van Slyke</itunes:name></itunes:owner><itunes:author>Craig Van Slyke</itunes:author><description>Generative artificial intelligence (GAI) has taken higher education by storm. Higher ed professionals need to find ways to understand and stay up with developments in GAI. AI Goes to College helps higher ed professionals learn about the latest developments in GAI, how these might affect higher ed, and what they can do in response.

Each episode offers insights about how to leverage GAI, and about the promise and perils of recent advances. The hosts, Dr. Craig Van Slyke and Dr. Robert E. Crossler, are experts in the adoption and use of GAI and in understanding its impacts on various fields, including higher ed.</description><link>https://sites.libsyn.com/502703</link><atom:link href="https://pubsubhubbub.appspot.com" rel="hub"/><itunes:explicit>false</itunes:explicit><itunes:type>episodic</itunes:type><itunes:category text="Education"></itunes:category><itunes:category text="Technology"></itunes:category><itunes:new-feed-url>https://feeds.captivate.fm/ai-goes-to-college/</itunes:new-feed-url><podcast:locked>no</podcast:locked><podcast:medium>podcast</podcast:medium><item><title>Accessibility Hacks, 81,000 Interviews, and the Choppy Waters of Academic AI</title><itunes:title>Accessibility Hacks, 81,000 Interviews, and the Choppy Waters of Academic AI</itunes:title><description><![CDATA[<h1><span class="ql-size-small">AI Goes to College, Episode 33: Accessibility Hacks, 81,000 Interviews, and the Choppy Waters of Academic AI</span></h1><p>Higher education is drowning in accessibility deadlines, grappling with what 81,000 AI interviews reveal about how people actually use these tools, and watching the academic publishing system creak under new pressures. In this episode, Craig and Rob dig into all three, with practical advice, a few uncomfortable truths, and their usual mix of optimism and healthy skepticism.</p><h2>The Accessibility Crunch Is Here (and AI Can Help)</h2><p>The episode opens with a problem that's top of mind for faculty everywhere: the April 24 federal deadline requiring public-facing digital content to meet WCAG accessibility guidelines. Universities have been scrambling, and many of the contracted tools designed to help have been, as Craig diplomatically puts it, hit and miss.</p><p>Craig shares a concrete example from his own workflow. He took three image-heavy slide decks from his Principles of Information Systems course and handed them to Claude Cowork with a simple instruction: add alt text for all the images. Within about 30 minutes, the job was done. The accuracy? Roughly 75 to 80 percent. A handful of images needed corrections, but instead of writing alt text for 40 or 50 images from scratch, he only had to fix six or eight. Rob tried something similar with Microsoft Copilot on a keynote presentation he gave at the SAIS conference in Asheville; two images, 30 seconds, done.</p><p>Rob makes the important point that accessibility isn't just a PowerPoint problem. It extends to whiteboard files, videos, and essentially everything faculty communicate digitally. The burden is real, and it lands on faculty who are already overwhelmed by the changes AI is bringing to their professional lives. Craig adds a note of personal sensitivity here; his wife has a profound hearing disability, which makes these issues more than abstract compliance for him.</p><p>The larger takeaway? When you hit one of these friction points in your work, try AI. 
It won't always solve the problem, but it often will, and the time savings can be substantial.</p><h2>What 81,000 Interviews Tell Us About How People Actually Use AI</h2><p>Link: <a href="https://www.anthropic.com/features/81k-interviews" rel="noopener noreferrer" target="_blank">https://www.anthropic.com/features/81k-interviews</a></p><p>Craig's article: <a href="https://open.substack.com/pub/aigoestocollege/p/what-81000-people-told-anthropic" rel="noopener noreferrer" target="_blank">https://open.substack.com/pub/aigoestocollege/p/what-81000-people-told-anthropic</a></p><p>The conversation shifts to Anthropic's large-scale qualitative study, where Claude was used to conduct and analyze 81,000 interviews about how people use AI tools. Rob, who has spent considerable time doing qualitative research the traditional way (36 interview transcripts with families, a labor-intensive process), finds the scale almost hard to believe. Craig wrote a separate article about this study for the AI Goes to College newsletter.</p><p>The phrase that catches both hosts' attention is one from the report: "the light and the shade are tangled together." It captures the tension between excitement about AI's possibilities and anxiety about what those possibilities mean for how people work, learn, and think. Craig connects this to a concept from technology studies: this is not technological determinism. The outcomes aren't dictated by the tools themselves. They emerge from the sociotechnical space where human choices and technological capabilities intersect.</p><p>Rob observes that most current AI use cases still amount to doing what we've always done, just faster. The real transformation will come when people start imagining entirely new approaches (he draws an analogy to cloud computing, which started as a backup solution and eventually reshaped how people interact with technology in ways nobody initially anticipated).</p><p>One quote from the Anthropic study lands hard. A freelance software engineer in Pakistan says: "I want to learn skills, but learning deeply is of no use. Ultimately I can just use AI." Craig points out that if a working professional thinks this way, the implications for students who may not yet appreciate the long-term value of deep learning are sobering. Rob agrees but pushes back slightly: people who lean too far into this mindset will eventually hit a wall where they lack the critical thinking skills to know when or why AI has gotten something wrong.</p><p>The hosts converge on what's becoming a running theme for the podcast: higher education's central task is helping students understand the long-term value of cognitive engagement, because without that understanding, the default will always be to let AI handle it.</p><h2>Academics Need to Wake Up: 10 Theses on a Shifting Landscape</h2><p>Link: https://substack.com/home/post/p-189705626</p><p>The second major discussion centers on Alexander Kustoff's Substack article, "Academics Need to Wake Up on AI: 10 Theses for Folks Who Haven't Noticed the Ground Shifting Under Their Feet." Rob sees it as a useful prompt for conversations the research community needs to have. Craig appreciates the ambition but pushes back on some of the claims.</p><p>Take thesis number one: AI can already do social science research better than most professors. Craig's reaction is nuanced. 
The claim is probably technically true if "most" is read literally, since many professors don't publish much (Rob notes the median number of publications for business school professors may be as low as one). But the implication that AI can replace skilled researchers? Not yet. Craig estimates that a knowledgeable researcher can use AI to cut research production time by about three-quarters, but that knowledge is the key ingredient; without research skill, you'll just produce publishable garbage faster.</p><p>Rob raises something interesting: colleagues who are brilliant thinkers but never thrived in research because they didn't enjoy writing may now have a path to contribute. AI could genuinely democratize parts of the research process. Craig extends this point to data analysis; tools like Cowork can run Python and R analyses without expensive specialized software, which matters enormously for under-resourced institutions and researchers in developing countries.</p><p>The conversation turns to the strain AI is putting on the peer review system. More submissions (many of them better written thanks to AI) are flooding journals, but finding reviewers was already difficult. Craig, speaking from his role as a journal editor, argues that well-trained AI could do a better job reviewing than roughly half of current human reviewers. Rob agrees but emphasizes that journal leaders need to come together and define norms for what's acceptable. Right now, the rules are either nonexistent or unrealistically restrictive ("just don't use AI for anything"), which creates the same kind of confusion faculty have imposed on students with inconsistent classroom policies.</p><p>One of the most provocative moments comes when Craig reads a quote from the Kustoff article: "I don't envision a research assistant role in my workflow anymore. What I want from collaborators is original thinking, domain expertise, and intellectual challenge. This is a genuine loss for the traditional apprenticeship model, and I don't have a clean answer for how to replace it." Both hosts take this seriously. Craig argues that senior scholars will need to accept some suboptimal results in the short term to continue mentoring the next generation. Rob suggests the apprenticeship model isn't dying; it's transforming. The mentorship shifts from teaching students how to do tasks to teaching them how to direct AI tools and critically evaluate what those tools produce.</p><p>Craig closes with a characteristically honest observation: senior scholars get stuck in their ways of thinking, and one of the real values of working with early-career doctoral students is the occasional moment when their unformed, messy thinking reveals a perspective that nobody in the room had considered. That's worth protecting.</p><h2>AI-Generated Lesson Plans and the Bloom's Taxonomy Problem</h2><p>Link: <a href="https://citejournal.org/volume-25/issue-3-25/social-studies/civic-education-in-the-age-of-ai-should-we-trust-ai-generated-lesson-plans/" rel="noopener noreferrer" target="_blank">https://citejournal.org/volume-25/issue-3-25/social-studies/civic-education-in-the-age-of-ai-should-we-trust-ai-generated-lesson-plans/</a></p><p>The final segment covers a paper by four researchers from UMass Amherst, "Civic Education in the Age of AI: Should We Trust AI-Generated Lesson Plans?" 
The study found that roughly 90 percent of AI-generated lesson plans hit only the lower levels of Bloom's taxonomy (remembering, understanding) rather than the higher-order thinking skills like analyzing, evaluating, and creating.</p><p>Craig's first reaction was that the prompts used in the study were terrible. But he acknowledges the researchers had a reason: they were mimicking how most teachers would actually prompt. And that's the real finding. The problem isn't that AI can't produce sophisticated lesson plans; the problem is that untrained users produce unsophisticated prompts, and the output reflects the input. Rob agrees and broadens the point: if even a fraction of teachers are prompting this way, that's affecting a lot of students.</p><p>Craig shares a personal anecdote from his one year as a high school teacher. He diligently wrote lesson plans; a veteran teacher (whom he describes as one of the best he'd ever seen) simply copied his plans to satisfy an administrative checkbox. The experienced teacher didn't need detailed plans because she could read the room and adapt in real time. Some lesson planning, Craig suggests, falls...]]></description><content:encoded><![CDATA[<h1><span class="ql-size-small">AI Goes to College, Episode 33: Accessibility Hacks, 81,000 Interviews, and the Choppy Waters of Academic AI</span></h1><p>Higher education is drowning in accessibility deadlines, grappling with what 81,000 AI interviews reveal about how people actually use these tools, and watching the academic publishing system creak under new pressures. In this episode, Craig and Rob dig into all three, with practical advice, a few uncomfortable truths, and their usual mix of optimism and healthy skepticism.</p><h2>The Accessibility Crunch Is Here (and AI Can Help)</h2><p>The episode opens with a problem that's top of mind for faculty everywhere: the April 24 federal deadline requiring public-facing digital content to meet WCAG accessibility guidelines. Universities have been scrambling, and many of the contracted tools designed to help have been, as Craig diplomatically puts it, hit and miss.</p><p>Craig shares a concrete example from his own workflow. He took three image-heavy slide decks from his Principles of Information Systems course and handed them to Claude Cowork with a simple instruction: add alt text for all the images. Within about 30 minutes, the job was done. The accuracy? Roughly 75 to 80 percent. A handful of images needed corrections, but instead of writing alt text for 40 or 50 images from scratch, he only had to fix six or eight. Rob tried something similar with Microsoft Copilot on a keynote presentation he gave at the SAIS conference in Asheville; two images, 30 seconds, done.</p><p>Rob makes the important point that accessibility isn't just a PowerPoint problem. It extends to whiteboard files, videos, and essentially everything faculty communicate digitally. The burden is real, and it lands on faculty who are already overwhelmed by the changes AI is bringing to their professional lives. Craig adds a note of personal sensitivity here; his wife has a profound hearing disability, which makes these issues more than abstract compliance for him.</p><p>The larger takeaway? When you hit one of these friction points in your work, try AI. 
It won't always solve the problem, but it often will, and the time savings can be substantial.</p><h2>What 81,000 Interviews Tell Us About How People Actually Use AI</h2><p>Link: <a href="https://www.anthropic.com/features/81k-interviews" rel="noopener noreferrer" target="_blank">https://www.anthropic.com/features/81k-interviews</a></p><p>Craig's article: <a href="https://open.substack.com/pub/aigoestocollege/p/what-81000-people-told-anthropic" rel="noopener noreferrer" target="_blank">https://open.substack.com/pub/aigoestocollege/p/what-81000-people-told-anthropic</a></p><p>The conversation shifts to Anthropic's large-scale qualitative study, where Claude was used to conduct and analyze 81,000 interviews about how people use AI tools. Rob, who has spent considerable time doing qualitative research the traditional way (36 interview transcripts with families, a labor-intensive process), finds the scale almost hard to believe. Craig wrote a separate article about this study for the AI Goes to College newsletter.</p><p>The phrase that catches both hosts' attention is one from the report: "the light and the shade are tangled together." It captures the tension between excitement about AI's possibilities and anxiety about what those possibilities mean for how people work, learn, and think. Craig connects this to a concept from technology studies: this is not technological determinism. The outcomes aren't dictated by the tools themselves. They emerge from the sociotechnical space where human choices and technological capabilities intersect.</p><p>Rob observes that most current AI use cases still amount to doing what we've always done, just faster. The real transformation will come when people start imagining entirely new approaches (he draws an analogy to cloud computing, which started as a backup solution and eventually reshaped how people interact with technology in ways nobody initially anticipated).</p><p>One quote from the Anthropic study lands hard. A freelance software engineer in Pakistan says: "I want to learn skills, but learning deeply is of no use. Ultimately I can just use AI." Craig points out that if a working professional thinks this way, the implications for students who may not yet appreciate the long-term value of deep learning are sobering. Rob agrees but pushes back slightly: people who lean too far into this mindset will eventually hit a wall where they lack the critical thinking skills to know when or why AI has gotten something wrong.</p><p>The hosts converge on what's becoming a running theme for the podcast: higher education's central task is helping students understand the long-term value of cognitive engagement, because without that understanding, the default will always be to let AI handle it.</p><h2>Academics Need to Wake Up: 10 Theses on a Shifting Landscape</h2><p>Link: https://substack.com/home/post/p-189705626</p><p>The second major discussion centers on Alexander Kustoff's Substack article, "Academics Need to Wake Up on AI: 10 Theses for Folks Who Haven't Noticed the Ground Shifting Under Their Feet." Rob sees it as a useful prompt for conversations the research community needs to have. Craig appreciates the ambition but pushes back on some of the claims.</p><p>Take thesis number one: AI can already do social science research better than most professors. Craig's reaction is nuanced. 
The claim is probably technically true if "most" is read literally, since many professors don't publish much (Rob notes the median number of publications for business school professors may be as low as one). But the implication that AI can replace skilled researchers? Not yet. Craig estimates that a knowledgeable researcher can use AI to cut research production time by about three-quarters, but that knowledge is the key ingredient; without research skill, you'll just produce publishable garbage faster.</p><p>Rob raises something interesting: colleagues who are brilliant thinkers but never thrived in research because they didn't enjoy writing may now have a path to contribute. AI could genuinely democratize parts of the research process. Craig extends this point to data analysis; tools like Cowork can run Python and R analyses without expensive specialized software, which matters enormously for under-resourced institutions and researchers in developing countries.</p><p>The conversation turns to the strain AI is putting on the peer review system. More submissions (many of them better written thanks to AI) are flooding journals, but finding reviewers was already difficult. Craig, speaking from his role as a journal editor, argues that well-trained AI could do a better job reviewing than roughly half of current human reviewers. Rob agrees but emphasizes that journal leaders need to come together and define norms for what's acceptable. Right now, the rules are either nonexistent or unrealistically restrictive ("just don't use AI for anything"), which creates the same kind of confusion faculty have imposed on students with inconsistent classroom policies.</p><p>One of the most provocative moments comes when Craig reads a quote from the Kustoff article: "I don't envision a research assistant role in my workflow anymore. What I want from collaborators is original thinking, domain expertise, and intellectual challenge. This is a genuine loss for the traditional apprenticeship model, and I don't have a clean answer for how to replace it." Both hosts take this seriously. Craig argues that senior scholars will need to accept some suboptimal results in the short term to continue mentoring the next generation. Rob suggests the apprenticeship model isn't dying; it's transforming. The mentorship shifts from teaching students how to do tasks to teaching them how to direct AI tools and critically evaluate what those tools produce.</p><p>Craig closes with a characteristically honest observation: senior scholars get stuck in their ways of thinking, and one of the real values of working with early-career doctoral students is the occasional moment when their unformed, messy thinking reveals a perspective that nobody in the room had considered. That's worth protecting.</p><h2>AI-Generated Lesson Plans and the Bloom's Taxonomy Problem</h2><p>Link: <a href="https://citejournal.org/volume-25/issue-3-25/social-studies/civic-education-in-the-age-of-ai-should-we-trust-ai-generated-lesson-plans/" rel="noopener noreferrer" target="_blank">https://citejournal.org/volume-25/issue-3-25/social-studies/civic-education-in-the-age-of-ai-should-we-trust-ai-generated-lesson-plans/</a></p><p>The final segment covers a paper by four researchers from UMass Amherst, "Civic Education in the Age of AI: Should We Trust AI-Generated Lesson Plans?" 
The study found that roughly 90 percent of AI-generated lesson plans hit only the lower levels of Bloom's taxonomy (remembering, understanding) rather than the higher-order thinking skills like analyzing, evaluating, and creating.</p><p>Craig's first reaction was that the prompts used in the study were terrible. But he acknowledges the researchers had a reason: they were mimicking how most teachers would actually prompt. And that's the real finding. The problem isn't that AI can't produce sophisticated lesson plans; the problem is that untrained users produce unsophisticated prompts, and the output reflects the input. Rob agrees and broadens the point: if even a fraction of teachers are prompting this way, that's affecting a lot of students.</p><p>Craig shares a personal anecdote from his one year as a high school teacher. He diligently wrote lesson plans; a veteran teacher (whom he describes as one of the best he'd ever seen) simply copied his plans to satisfy an administrative checkbox. The experienced teacher didn't need detailed plans because she could read the room and adapt in real time. Some lesson planning, Craig suggests, falls into a compliance category where the quality of the plan matters less than the quality of the teaching.</p><p>But the bigger message is one both hosts keep returning to: we have to teach people how to use these tools well. Craig suspects that even a slightly more complex prompt ("address this level of Bloom's taxonomy and make sure you include demographic diversity") would produce dramatically better lesson plans.</p><p>Rob makes a final observation that resonates beyond lesson planning. People who spend a lot of time thinking about AI (like Rob and Craig) can easily forget that most people don't. Understanding what AI use looks like for someone without deep expertise, and then helping to lift them up, is the real work ahead.</p><p>Craig's response? Maybe the strategy should be seeding the field with AI evangelists, a small number of engaged opinion leaders who help others one conversation at a time, rather than trying to train everyone through top-down institutional programs. That's how innovations actually spread.</p><h2>A Meta-Moment: Who Wrote This, Really?</h2><p>In a brief but revealing aside, Craig mentions that his Substack article about the Anthropic study was entirely generated and posted by an agentic AI workflow using Claude Code and Opus 4.6, built on his custom "write like Craig" skill. He asks Rob to guess the accuracy. Rob says 75 percent. Craig confirms. The question lingers: if AI can write in your voice with 75 percent accuracy and post it autonomously, who's really the author? Craig leaves that for the listener to decide.</p><h2>Key Takeaways</h2><p><strong>AI is a practical solution for the accessibility crunch.</strong> With the April 24 WCAG deadline looming, tools like Claude Cowork and Microsoft Copilot can generate alt text for images at roughly 75 to 80 percent accuracy, dramatically reducing the manual burden on faculty.</p><p><strong>"The light and the shade are tangled together."</strong> Anthropic's 81,000-interview study reinforces that AI's benefits and risks aren't separable. Higher education's job is to help students navigate both, not pretend one side doesn't exist.</p><p><strong>AI adoption follows a predictable pattern.</strong> First we use new technology to do old things faster. The real transformation comes when we start imagining fundamentally new approaches. 
Higher ed is still mostly in phase one.</p><p><strong>The prompt is the bottleneck, not the tool.</strong> AI-generated lesson plans that hit only lower-order Bloom's taxonomy levels aren't evidence that AI can't do better. They're evidence that untrained users produce unsophisticated prompts.</p><p><strong>Academic publishing is under real strain.</strong> More submissions, better surface-level writing, reviewer shortages, and undefined norms for AI use are all converging. Journal leaders need to establish clear, workable standards.</p><p><strong>The apprenticeship model is transforming, not dying.</strong> Mentoring doctoral students shifts from teaching them to do tasks toward teaching them to direct AI tools and critically evaluate the output. Senior scholars need to stay open to messy, unexpected thinking from early-career researchers.</p><p><strong>Seed the field with opinion leaders.</strong> Rather than top-down institutional training programs, Craig argues for cultivating AI evangelists who spread knowledge one conversation at a time; that's how innovations actually diffuse.</p><p><span class="ql-size-large">Links</span></p><p>Anthropic's 81,000 interviews: <a href="https://www.anthropic.com/features/81k-interviews" rel="noopener noreferrer" target="_blank">https://www.anthropic.com/features/81k-interviews</a></p><p>Craig's article: <a href="https://open.substack.com/pub/aigoestocollege/p/what-81000-people-told-anthropic" rel="noopener noreferrer" target="_blank">https://open.substack.com/pub/aigoestocollege/p/what-81000-people-told-anthropic</a></p><p>Academics need to wake up on AI: https://substack.com/home/post/p-189705626</p><p>AI generated lesson plans: <a href="https://citejournal.org/volume-25/issue-3-25/social-studies/civic-education-in-the-age-of-ai-should-we-trust-ai-generated-lesson-plans/" rel="noopener noreferrer" target="_blank">https://citejournal.org/volume-25/issue-3-25/social-studies/civic-education-in-the-age-of-ai-should-we-trust-ai-generated-lesson-plans/</a></p><p>Companies/Products mentioned in this episode:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Claude Cowork</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Microsoft Copilot</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Anthropic</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>University of Central Oklahoma</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>UMass Amherst</li></ol><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">28ff86d2-2501-4d33-9dce-d3d7f060edd0</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Tue, 31 Mar 2026 04:45:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/28ff86d2-2501-4d33-9dce-d3d7f060edd0.mp3" length="45185685" type="audio/mpeg"/><itunes:duration>47:04</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>33</itunes:episode><podcast:episode>33</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/2613bd1f-490b-4c3b-b6e3-3e2267452934/transcript.json" type="application/json"/><podcast:transcript 
url="https://transcripts.captivate.fm/transcript/2613bd1f-490b-4c3b-b6e3-3e2267452934/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/2613bd1f-490b-4c3b-b6e3-3e2267452934/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-8007dc95-0f5e-4c8d-ab7f-e09529e67a4d.json" type="application/json+chapters"/></item><item><title>We&apos;re On Our Own: Academic Integrity through AI Resilience</title><itunes:title>We&apos;re On Our Own: Academic Integrity through AI Resilience</itunes:title><description><![CDATA[<p>Craig and Rob kick off this episode with a deep dive into Claude's Constitution — the 84-page document Anthropic released to explain how Claude is governed. The document lays out a four-part hierarchy of priorities: be broadly safe, be broadly ethical, follow Anthropic's guidelines, and be genuinely helpful — in that order. Craig walks through the key language, and both hosts zero in on the uncomfortable questions it raises. Who gets to define "broadly ethical"? Whose values count? Craig points out that collectivist and individualist cultures would answer those questions very differently, and Rob raises the example of how privacy has historically carried different social weight in China versus the United States.</p><p>They give Anthropic credit for the transparency. Rob notes that he has no idea what governs ChatGPT by comparison, and Craig argues the openness could become a real differentiator for universities evaluating which AI tools to bring in-house. But the Constitution also includes some curious language — the phrase "during the current phase of development" gives Anthropic significant room to evolve these guardrails over time, and a section on emotional support states that Claude should "show that it cares," which both hosts flag as a strikingly anthropomorphic choice of words.</p><p>Craig shares a fun aside: he used Claude Code to build a clone of the classic Colossal Cave Adventure game — reframed around understanding large language models — using just a few sentences as a prompt. The game was up and running in about an hour. That kind of capability would have been unthinkable a couple of years ago, and it underscores why the Constitution's language about the "current phase" matters so much.</p><p>The big takeaway from the Constitution discussion lands hard: higher ed is on its own when it comes to academic integrity. Anthropic — arguably the most transparent of the major AI companies — has no interest in blocking students from misusing its tools. Rob mentions a new product called Einstein that will watch your Canvas videos, write your discussion posts, reply to classmates, and complete your assignments. All you have to do is hand over your login credentials.</p><p>That sets up the episode's second major topic: AI resilience. Rob explains the concept as designing learning outcomes that hold up regardless of what AI can do. If a major portion of a student's grade depends on writing an essay that AI could produce in seconds, that assignment has very little resilience. The shift Rob advocates moves evaluation toward process — asking students for the prompts they used, reflections on how they refined their approach, and demonstrations that they understand what was produced. He shares the example of a colleague whose programming class now requires students to record videos explaining their code rather than just submitting it.</p><p>Craig raises the scaling problem. 
He regularly teaches 90 to 100 undergraduates. Rob suggests that AI itself can help with formative feedback on scaffolding assignments, freeing faculty to focus their grading energy on fewer, higher-stakes assessments. Craig uses an analogy from music: scaffolding assignments are like playing scales — you do them to build toward performance, and they don't need to carry grade weight. Both hosts agree this represents a move away from the grade economy, where students rationally minimize effort because every small assignment is a transaction.</p><p>Craig pushes the conversation further by proposing live client projects — or AI-simulated client projects — as a way to create the messiness and ambiguity that real work demands. Rob's initial reaction is skepticism (live client projects are logistically brutal), but he warms to the idea of using AI to simulate clients with realistic fuzziness and scope creep. The broader point: AI could be the lever higher ed needs to fix problems that have been accumulating for decades.</p><p>The episode wraps with an update on NotebookLM. Craig walks through the recent changes — more user control over reports, slide decks, flashcards, quizzes, and other outputs in the Studio panel. You can now specify the structure and focus of custom reports rather than relying solely on canned formats. Slide decks can be exported (though editing remains clunky since each slide is essentially an image). Craig's recommendation: if you have a Google account and you work with knowledge in any form, you should be using NotebookLM. Rob notes that Microsoft Copilot has added a similar notebook feature worth exploring, and they float the idea of a future head-to-head comparison episode.</p><p>Links referenced in this episode:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://notebooklm" rel="noopener noreferrer" target="_blank">notebooklm</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://anthropic" rel="noopener noreferrer" target="_blank">anthropic</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://claude" rel="noopener noreferrer" target="_blank">claude</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://google" rel="noopener noreferrer" target="_blank">google</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://canvas" rel="noopener noreferrer" target="_blank">canvas</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://einstein" rel="noopener noreferrer" target="_blank">einstein</a></li></ol><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>Craig and Rob kick off this episode with a deep dive into Claude's Constitution — the 84-page document Anthropic released to explain how Claude is governed. The document lays out a four-part hierarchy of priorities: be broadly safe, be broadly ethical, follow Anthropic's guidelines, and be genuinely helpful — in that order. Craig walks through the key language, and both hosts zero in on the uncomfortable questions it raises. Who gets to define "broadly ethical"? Whose values count? 
Craig points out that collectivist and individualist cultures would answer those questions very differently, and Rob raises the example of how privacy has historically carried different social weight in China versus the United States.</p><p>They give Anthropic credit for the transparency. Rob notes that he has no idea what governs ChatGPT by comparison, and Craig argues the openness could become a real differentiator for universities evaluating which AI tools to bring in-house. But the Constitution also includes some curious language — the phrase "during the current phase of development" gives Anthropic significant room to evolve these guardrails over time, and a section on emotional support states that Claude should "show that it cares," which both hosts flag as a strikingly anthropomorphic choice of words.</p><p>Craig shares a fun aside: he used Claude Code to build a clone of the classic Colossal Cave Adventure game — reframed around understanding large language models — using just a few sentences as a prompt. The game was up and running in about an hour. That kind of capability would have been unthinkable a couple of years ago, and it underscores why the Constitution's language about the "current phase" matters so much.</p><p>The big takeaway from the Constitution discussion lands hard: higher ed is on its own when it comes to academic integrity. Anthropic — arguably the most transparent of the major AI companies — has no interest in blocking students from misusing its tools. Rob mentions a new product called Einstein that will watch your Canvas videos, write your discussion posts, reply to classmates, and complete your assignments. All you have to do is hand over your login credentials.</p><p>That sets up the episode's second major topic: AI resilience. Rob explains the concept as designing learning outcomes that hold up regardless of what AI can do. If a major portion of a student's grade depends on writing an essay that AI could produce in seconds, that assignment has very little resilience. The shift Rob advocates moves evaluation toward process — asking students for the prompts they used, reflections on how they refined their approach, and demonstrations that they understand what was produced. He shares the example of a colleague whose programming class now requires students to record videos explaining their code rather than just submitting it.</p><p>Craig raises the scaling problem. He regularly teaches 90 to 100 undergraduates. Rob suggests that AI itself can help with formative feedback on scaffolding assignments, freeing faculty to focus their grading energy on fewer, higher-stakes assessments. Craig uses an analogy from music: scaffolding assignments are like playing scales — you do them to build toward performance, and they don't need to carry grade weight. Both hosts agree this represents a move away from the grade economy, where students rationally minimize effort because every small assignment is a transaction.</p><p>Craig pushes the conversation further by proposing live client projects — or AI-simulated client projects — as a way to create the messiness and ambiguity that real work demands. Rob's initial reaction is skepticism (live client projects are logistically brutal), but he warms to the idea of using AI to simulate clients with realistic fuzziness and scope creep. The broader point: AI could be the lever higher ed needs to fix problems that have been accumulating for decades.</p><p>The episode wraps with an update on NotebookLM. 
Craig walks through the recent changes — more user control over reports, slide decks, flashcards, quizzes, and other outputs in the Studio panel. You can now specify the structure and focus of custom reports rather than relying solely on canned formats. Slide decks can be exported (though editing remains clunky since each slide is essentially an image). Craig's recommendation: if you have a Google account and you work with knowledge in any form, you should be using NotebookLM. Rob notes that Microsoft Copilot has added a similar notebook feature worth exploring, and they float the idea of a future head-to-head comparison episode.</p><p>Links referenced in this episode:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://notebooklm" rel="noopener noreferrer" target="_blank">notebooklm</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://anthropic" rel="noopener noreferrer" target="_blank">anthropic</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://claude" rel="noopener noreferrer" target="_blank">claude</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://google" rel="noopener noreferrer" target="_blank">google</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://canvas" rel="noopener noreferrer" target="_blank">canvas</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://einstein" rel="noopener noreferrer" target="_blank">einstein</a></li></ol><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">b4a2c0e1-cea3-4aa7-8605-fdd6762b889f</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Tue, 03 Mar 2026 04:50:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/b4a2c0e1-cea3-4aa7-8605-fdd6762b889f.mp3" length="46092663" type="audio/mpeg"/><itunes:duration>48:01</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>32</itunes:episode><podcast:episode>32</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/d812bf56-b0a7-4b93-ac3c-10f114191c17/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/d812bf56-b0a7-4b93-ac3c-10f114191c17/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/d812bf56-b0a7-4b93-ac3c-10f114191c17/index.html" type="text/html"/></item><item><title>Students Are Confused About AI and It&apos;s Our Fault (with Dr. Bette Ludwig)</title><itunes:title>Students Are Confused About AI and It&apos;s Our Fault (with Dr. Bette Ludwig)</itunes:title><description><![CDATA[<p>Dr. 
Bette Ludwig spent 20 years in higher ed working directly with students before leaving to build something different — a Substack (AI Can Do That), a consulting practice, and most recently, the Socratic AI Method, an AI literacy program that teaches students how to think critically alongside AI while keeping their own voice intact.</p><p>That last part is the hard part.</p><p>Craig opens with the question that drives the whole episode: Socratic dialogue requires you to already know enough to ask good questions. So what happens when a student doesn’t know enough to push back on what AI is telling them? Bette’s answer is both practical and unsettling — younger students literally don’t know what they don’t know, and that gap is where the real danger lives.</p><p>The conversation moves into dependency territory when Craig shares a moment from his own morning: Claude froze while he was editing a manuscript, and he felt a flash of genuine panic. Two seconds later, he remembered he could just… write. But he names the uncomfortable truth — his students won’t have that fallback. Bette compares it to the panic we feel when the wifi drops, which is both funny and a little alarming when you sit with it.</p><p>From there, the three dig into the policy mess — teachers across the hall from each other running opposite AI rules, students confused about what’s allowed, and educational systems moving at what Bette calls “a glacial pace” while the technology sprints ahead. Craig shares his own college’s approach: you have to have a policy, it has to be clear, but how restrictive or permissive it is remains your call. The non-negotiable? You can’t leave students in the dark.</p><p>The episode’s most surprising thread might be Bette’s observations about how students actually use AI. It’s not just homework. They’re using it for companionship, personal problems, cooking questions, building apps — ways that don’t even register as “AI use” to most faculty. Her closing point lands hard: students have never used technology the way adults assume they should, and they’re going to do the same thing with AI.</p><h1>Key Takeaways</h1><p><br></p><p><strong>1. The Socratic method has an AI prerequisite problem. </strong>You need existing knowledge to know what questions to ask, which means younger students are especially vulnerable to accepting AI output uncritically. Bette and Craig agree that junior/senior year of high school is roughly where the cognitive capacity for meaningful pushback begins.</p><p><strong>2. AI dependency is already happening to experienced users. </strong>Craig describes a two-second panic when Claude froze mid-editorial. He recovered by remembering he could just write the way he always has. His concern: students who grew up with AI won’t have that muscle memory to fall back on.</p><p><strong>3. The “helpful by default” design is a subtle problem. </strong>Craig raises the point that AI systems are programmed to be agreeable, which means they can lock students into a single mode of thinking without anyone noticing. The hallucinations get all the attention, but the quiet steering might be worse.</p><p><strong>4. Policy chaos is the norm, not the exception. </strong>Teachers in the same hallway can have opposite AI rules. Bette recommends clarity above all: whatever your policy is, make it explicit. In K–12, she argues for uniform policies. In higher ed, where faculty governance complicates things, Craig’s approach works — require a policy, let faculty own the specifics.</p><p><strong>5. 
Grace matters more than enforcement right now. </strong>Both Craig and Bette push back on the “AI cop” mentality. Students sometimes cross lines they didn’t know existed, just like past generations plagiarized without understanding citation rules. Teaching moments beat punitive responses, especially when the rules themselves are still being written.</p><p><strong>6. Students use AI in ways faculty don’t expect. </strong>Companionship, personal problems, everyday questions, building apps. Bette’s observation: students are as likely to use AI for roommate conflicts as for essay writing. Faculty who don’t use AI themselves can’t begin to understand these patterns.</p><p><strong>7. Education isn’t moving fast enough. </strong>New York got an AI bachelor’s program launched in fall 2025, which Bette calls “Mach speed for higher ed.” Most institutions are still in the resistance-or-denial phase. The shared worry: AI across the curriculum could become another empty checkbox, like ethics across the curriculum before it.</p><p><span class="ql-size-large">Links</span></p><p>Dr. Ludwig's website: <a href="https://www.betteludwig.com/" rel="noopener noreferrer" target="_blank">https://www.betteludwig.com/</a> </p><p>AI Can Do That Substack: <a href="https://betteconnects.substack.com/" rel="noopener noreferrer" target="_blank">https://betteconnects.substack.com/</a></p><p>AI Goes to College: <a href="https://www.aigoestocollege.com/" rel="noopener noreferrer" target="_blank">https://www.aigoestocollege.com/</a></p><p>Craig's AI Goes to College Substack: <a href="https://aigoestocollege.substack.com/" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/</a></p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>Dr. Bette Ludwig spent 20 years in higher ed working directly with students before leaving to build something different — a Substack (AI Can Do That), a consulting practice, and most recently, the Socratic AI Method, an AI literacy program that teaches students how to think critically alongside AI while keeping their own voice intact.</p><p>That last part is the hard part.</p><p>Craig opens with the question that drives the whole episode: Socratic dialogue requires you to already know enough to ask good questions. So what happens when a student doesn’t know enough to push back on what AI is telling them? Bette’s answer is both practical and unsettling — younger students literally don’t know what they don’t know, and that gap is where the real danger lives.</p><p>The conversation moves into dependency territory when Craig shares a moment from his own morning: Claude froze while he was editing a manuscript, and he felt a flash of genuine panic. Two seconds later, he remembered he could just… write. But he names the uncomfortable truth — his students won’t have that fallback. Bette compares it to the panic we feel when the wifi drops, which is both funny and a little alarming when you sit with it.</p><p>From there, the three dig into the policy mess — teachers across the hall from each other running opposite AI rules, students confused about what’s allowed, and educational systems moving at what Bette calls “a glacial pace” while the technology sprints ahead. Craig shares his own college’s approach: you have to have a policy, it has to be clear, but how restrictive or permissive it is remains your call. The non-negotiable? 
You can’t leave students in the dark.</p><p>The episode’s most surprising thread might be Bette’s observations about how students actually use AI. It’s not just homework. They’re using it for companionship, personal problems, cooking questions, building apps — ways that don’t even register as “AI use” to most faculty. Her closing point lands hard: students have never used technology the way adults assume they should, and they’re going to do the same thing with AI.</p><h1>Key Takeaways</h1><p><br></p><p><strong>1. The Socratic method has an AI prerequisite problem. </strong>You need existing knowledge to know what questions to ask, which means younger students are especially vulnerable to accepting AI output uncritically. Bette and Craig agree that junior/senior year of high school is roughly where the cognitive capacity for meaningful pushback begins.</p><p><strong>2. AI dependency is already happening to experienced users. </strong>Craig describes a two-second panic when Claude froze mid-editorial. He recovered by remembering he could just write the way he always has. His concern: students who grew up with AI won’t have that muscle memory to fall back on.</p><p><strong>3. The “helpful by default” design is a subtle problem. </strong>Craig raises the point that AI systems are programmed to be agreeable, which means they can lock students into a single mode of thinking without anyone noticing. The hallucinations get all the attention, but the quiet steering might be worse.</p><p><strong>4. Policy chaos is the norm, not the exception. </strong>Teachers in the same hallway can have opposite AI rules. Bette recommends clarity above all: whatever your policy is, make it explicit. In K–12, she argues for uniform policies. In higher ed, where faculty governance complicates things, Craig’s approach works — require a policy, let faculty own the specifics.</p><p><strong>5. Grace matters more than enforcement right now. </strong>Both Craig and Bette push back on the “AI cop” mentality. Students sometimes cross lines they didn’t know existed, just like past generations plagiarized without understanding citation rules. Teaching moments beat punitive responses, especially when the rules themselves are still being written.</p><p><strong>6. Students use AI in ways faculty don’t expect. </strong>Companionship, personal problems, everyday questions, building apps. Bette’s observation: students are as likely to use AI for roommate conflicts as for essay writing. Faculty who don’t use AI themselves can’t begin to understand these patterns.</p><p><strong>7. Education isn’t moving fast enough. </strong>New York got an AI bachelor’s program launched in fall 2025, which Bette calls “Mach speed for higher ed.” Most institutions are still in the resistance-or-denial phase. The shared worry: AI across the curriculum could become another empty checkbox, like ethics across the curriculum before it.</p><p><span class="ql-size-large">Links</span></p><p>Dr. 
Ludwig's website: <a href="https://www.betteludwig.com/" rel="noopener noreferrer" target="_blank">https://www.betteludwig.com/</a> </p><p>AI Can Do That Substack: <a href="https://betteconnects.substack.com/" rel="noopener noreferrer" target="_blank">https://betteconnects.substack.com/</a></p><p>AI Goes to College: <a href="https://www.aigoestocollege.com/" rel="noopener noreferrer" target="_blank">https://www.aigoestocollege.com/</a></p><p>Craig's AI Goes to College Substack: <a href="https://aigoestocollege.substack.com/" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/</a></p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">3df960b6-98f7-4bf9-a662-9c0a3f9bce8d</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Mon, 16 Feb 2026 04:30:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/3df960b6-98f7-4bf9-a662-9c0a3f9bce8d.mp3" length="38843958" type="audio/mpeg"/><itunes:duration>40:28</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>31</itunes:episode><podcast:episode>31</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/01d047f3-acbc-4c88-a603-4bf526c46a47/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/01d047f3-acbc-4c88-a603-4bf526c46a47/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/01d047f3-acbc-4c88-a603-4bf526c46a47/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-50ec3b1c-5280-4127-904a-4a8e6ba50088.json" type="application/json+chapters"/></item><item><title>Human-AI Collaboration: Outsourcing vs Offloading and the Rise of Co-Produced Cognition</title><itunes:title>Human-AI Collaboration: Outsourcing vs Offloading and the Rise of Co-Produced Cognition</itunes:title><description><![CDATA[<p><strong>Recording from the Deep Freeze:</strong> Craig broadcasts from snow-covered north Louisiana (running on generator and Starlink!), where AI helped him MacGyver a propane tank solution involving ratchet straps, a plastic bucket, and a shop light. Welcome to the wild world of practical AI applications.</p><h3>Featured Topics</h3><p><strong>Oboe.com: The Future of Self-Directed Learning?</strong></p><p>Craig and Rob explore Oboe (<a href="https://oboe.com/" rel="noopener noreferrer" target="_blank">oboe.com</a>), a free AI-powered platform that creates customized courses on virtually any topic in minutes. Craig demonstrates by building a course on AI agents, and Rob becomes his first student. 
The hosts discuss:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>How the platform auto-generates quizzes with reasonable multiple-choice options and helpful feedback</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The potential to revolutionize textbook accessibility with low-cost or no-cost alternatives</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Using Oboe to supplement existing textbooks (like adding blockchain content to their own textbook)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The limitations: shallow sourcing and need for instructor vetting</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Credit to the <a href="https://every.to/podcast" rel="noopener noreferrer" target="_blank">AI and I podcast</a> from Every.to (makers of Lex.page) for the discovery</li></ol><br/><p><strong>Security First: The Moltbot Warning</strong></p><p>Not all that glitters is AI gold. Rob raises important concerns about new tools like Moltbot that can automate processes but may introduce security vulnerabilities. Key takeaway: Educators must apply the same critical thinking they expect from students when evaluating new AI tools for classroom use.</p><p><strong>Craig's Three-Stage Hierarchy: A Framework for Human-AI Interaction</strong></p><p>The centerpiece discussion introduces Craig's developmental model for understanding how we work with AI:</p><ol><li data-list="ordered"><span class="ql-ui" contenteditable="false"></span><strong>Cognitive Outsourcing</strong> - AI does the task for you (the "easy" but often problematic approach)</li><li data-list="ordered"><span class="ql-ui" contenteditable="false"></span><strong>Cognitive Offloading</strong> - AI handles specific components while you maintain control</li><li data-list="ordered"><span class="ql-ui" contenteditable="false"></span><strong>Co-Produced Cognition</strong> - True collaborative thinking that produces outcomes neither human nor AI could achieve alone</li></ol><br/><p>Craig shares his experience co-writing with Claude, comparing it to the collaborative process of updating their textbook with co-author Franz. The magic: AI enables 24/7 expert-level collaboration that would be impossible with humans alone.</p><p><strong>The Big Idea: This hierarchy should guide our teaching.</strong> Rather than telling students to "think critically" (a vague catchall), educators should actively move students from outsourcing toward co-produced cognition, where AI's power truly unlocks.</p><p><strong>Geeking Out on Affordances</strong></p><p>Craig unpacks how AI is fundamentally "a bundle of affordances" - potential uses that only matter when actualized. Using the metaphor of a rock (hammer? erosion control? weapon? 
stepladder?), he explains:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The same AI tool can be used to cheat on an assignment or to write a meaningless email nobody will read</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>What matters isn't just what AI <em>can</em> do, but which affordances we choose to actualize</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Understanding affordances helps us guide students toward productive uses</li></ol><br/><p>Rob adds that affordances can be actualized poorly (like dropping a rock on your toe), emphasizing the need for purposeful, intentional use.</p><p><strong>The Balanced Path Forward</strong></p><p>The hosts reject both AI extremism and AI evangelism, calling for nuanced, intentional engagement. Whether it's Oboe.com or ChatGPT, tools can be used for good or ill - context and purpose matter.</p><p><strong>The Challenge:</strong> You can't understand AI's affordances without using it. Even if your conclusion is <em>not</em> to use AI in your classroom, that decision should come from informed experimentation, not avoidance.</p><h3>Key Quotes</h3><p><em>"What we need to do as educators is we need to push students from that outsourcing to the offloading to the co-produced cognition. I see that as our main job with generative AI."</em> - Craig</p><p><em>"The whole idea of think critically I think is a catch all phrase that we use very often that's very hard to quantify... I do really like that example of pushing students towards that co-produced cognition."</em> - Rob</p><p><em>"If you don't use them, you're not going to know what they're capable of either harm or benefit. So it's really, I think anybody in higher ed, it's your responsibility to start using these tools."</em> - Craig</p><h3>Episode Resources</h3><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://oboe.com/" rel="noopener noreferrer" target="_blank">Oboe.com</a></strong><a href="https://oboe.com/" rel="noopener noreferrer" target="_blank"> </a>- Free AI course creation platform (for now)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://oboe.com/learn/introduction-to-ai-agents-1fiurdj" rel="noopener noreferrer" target="_blank">AI Agent Oboe.com course</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://every.to/podcast" rel="noopener noreferrer" target="_blank">AI and I Podcast</a></strong> from Every.to</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Watch out for: Moltbot security concerns</li></ol><br/><h3>Bottom Line</h3><p>Don't be blindly pro-AI or anti-AI. Be intentionally informed. Understanding the affordances of AI tools - and helping students actualize them purposefully - may be one of higher education's most important responsibilities in 2025.</p><p><strong>AI Goes to College</strong> is your guide to navigating generative AI in higher education. Hosted by Dr. Craig Van Slyke (Louisiana Tech University) and Dr. 
Rob Crossler (Washington State University).</p><p>Takeaways:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>In the podcast, we discussed the emergence of Oboe.com, an innovative platform that facilitates self-directed learning through AI.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>We emphasized the importance of critically evaluating new AI tools before implementing them in educational settings.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Our conversation highlighted the significance of distinguishing between cognitive outsourcing and cognitive offloading in the context of AI use.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The hosts expressed their belief that AI can democratize learning, but it must be used responsibly and with proper oversight from educators.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>We reflected on the collaborative potential of AI, stressing that true innovation arises from synergistic human-AI interactions.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The episode concluded with a call to action for educators to engage with AI tools to better understand their affordances and implications.</li></ol><br/><p>Companies mentioned in this episode:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Oboe</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Every.to</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Lex.page</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Moltbot</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Claude</li></ol><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p><strong>Recording from the Deep Freeze:</strong> Craig broadcasts from snow-covered north Louisiana (running on generator and Starlink!), where AI helped him MacGyver a propane tank solution involving ratchet straps, a plastic bucket, and a shop light. Welcome to the wild world of practical AI applications.</p><h3>Featured Topics</h3><p><strong>Oboe.com: The Future of Self-Directed Learning?</strong></p><p>Craig and Rob explore Oboe (<a href="https://oboe.com/" rel="noopener noreferrer" target="_blank">oboe.com</a>), a free AI-powered platform that creates customized courses on virtually any topic in minutes. Craig demonstrates by building a course on AI agents, and Rob becomes his first student. 
The hosts discuss:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>How the platform auto-generates quizzes with reasonable multiple-choice options and helpful feedback</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The potential to revolutionize textbook accessibility with low-cost or no-cost alternatives</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Using Oboe to supplement existing textbooks (like adding blockchain content to their own textbook)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The limitations: shallow sourcing and need for instructor vetting</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Credit to the <a href="https://every.to/podcast" rel="noopener noreferrer" target="_blank">AI and I podcast</a> from Every.to (makers of Lex.page) for the discovery</li></ol><br/><p><strong>Security First: The Moltbot Warning</strong></p><p>Not all that glitters is AI gold. Rob raises important concerns about new tools like Moltbot that can automate processes but may introduce security vulnerabilities. Key takeaway: Educators must apply the same critical thinking they expect from students when evaluating new AI tools for classroom use.</p><p><strong>Craig's Three-Stage Hierarchy: A Framework for Human-AI Interaction</strong></p><p>The centerpiece discussion introduces Craig's developmental model for understanding how we work with AI:</p><ol><li data-list="ordered"><span class="ql-ui" contenteditable="false"></span><strong>Cognitive Outsourcing</strong> - AI does the task for you (the "easy" but often problematic approach)</li><li data-list="ordered"><span class="ql-ui" contenteditable="false"></span><strong>Cognitive Offloading</strong> - AI handles specific components while you maintain control</li><li data-list="ordered"><span class="ql-ui" contenteditable="false"></span><strong>Co-Produced Cognition</strong> - True collaborative thinking that produces outcomes neither human nor AI could achieve alone</li></ol><br/><p>Craig shares his experience co-writing with Claude, comparing it to the collaborative process of updating their textbook with co-author Franz. The magic: AI enables 24/7 expert-level collaboration that would be impossible with humans alone.</p><p><strong>The Big Idea: This hierarchy should guide our teaching.</strong> Rather than telling students to "think critically" (a vague catchall), educators should actively move students from outsourcing toward co-produced cognition, where AI's power truly unlocks.</p><p><strong>Geeking Out on Affordances</strong></p><p>Craig unpacks how AI is fundamentally "a bundle of affordances" - potential uses that only matter when actualized. Using the metaphor of a rock (hammer? erosion control? weapon? 
stepladder?), he explains:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The same AI tool can be used to cheat on an assignment or to write a meaningless email nobody will read</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>What matters isn't just what AI <em>can</em> do, but which affordances we choose to actualize</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Understanding affordances helps us guide students toward productive uses</li></ol><br/><p>Rob adds that affordances can be actualized poorly (like dropping a rock on your toe), emphasizing the need for purposeful, intentional use.</p><p><strong>The Balanced Path Forward</strong></p><p>The hosts reject both AI extremism and AI evangelism, calling for nuanced, intentional engagement. Whether it's Oboe.com or ChatGPT, tools can be used for good or ill - context and purpose matter.</p><p><strong>The Challenge:</strong> You can't understand AI's affordances without using it. Even if your conclusion is <em>not</em> to use AI in your classroom, that decision should come from informed experimentation, not avoidance.</p><h3>Key Quotes</h3><p><em>"What we need to do as educators is we need to push students from that outsourcing to the offloading to the co-produced cognition. I see that as our main job with generative AI."</em> - Craig</p><p><em>"The whole idea of think critically I think is a catch all phrase that we use very often that's very hard to quantify... I do really like that example of pushing students towards that co-produced cognition."</em> - Rob</p><p><em>"If you don't use them, you're not going to know what they're capable of either harm or benefit. So it's really, I think anybody in higher ed, it's your responsibility to start using these tools."</em> - Craig</p><h3>Episode Resources</h3><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://oboe.com/" rel="noopener noreferrer" target="_blank">Oboe.com</a></strong><a href="https://oboe.com/" rel="noopener noreferrer" target="_blank"> </a>- Free AI course creation platform (for now)</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><a href="https://oboe.com/learn/introduction-to-ai-agents-1fiurdj" rel="noopener noreferrer" target="_blank">AI Agent Oboe.com course</a></li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span><strong><a href="https://every.to/podcast" rel="noopener noreferrer" target="_blank">AI and I Podcast</a></strong> from Every.to</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Watch out for: Moltbot security concerns</li></ol><br/><h3>Bottom Line</h3><p>Don't be blindly pro-AI or anti-AI. Be intentionally informed. Understanding the affordances of AI tools - and helping students actualize them purposefully - may be one of higher education's most important responsibilities in 2025.</p><p><strong>AI Goes to College</strong> is your guide to navigating generative AI in higher education. Hosted by Dr. Craig Van Slyke (Louisiana Tech University) and Dr. 
Rob Crossler (Washington State University).</p><p>Takeaways:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>In the podcast, we discussed the emergence of Oboe.com, an innovative platform that facilitates self-directed learning through AI.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>We emphasized the importance of critically evaluating new AI tools before implementing them in educational settings.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Our conversation highlighted the significance of distinguishing between cognitive outsourcing and cognitive offloading in the context of AI use.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The hosts expressed their belief that AI can democratize learning, but it must be used responsibly and with proper oversight from educators.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>We reflected on the collaborative potential of AI, stressing that true innovation arises from synergistic human-AI interactions.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The episode concluded with a call to action for educators to engage with AI tools to better understand their affordances and implications.</li></ol><br/><p>Companies mentioned in this episode:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Google</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Oboe</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Every.to</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Lex.page</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Moltbot</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Claude</li></ol><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">2dbc599c-4b53-4eb3-985b-d70050f48ecb</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Mon, 02 Feb 2026 04:10:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/2dbc599c-4b53-4eb3-985b-d70050f48ecb.mp3" length="30020457" type="audio/mpeg"/><itunes:duration>31:16</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>30</itunes:episode><podcast:episode>30</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/0cac72d6-be11-436e-82d7-688e9a0978ba/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/0cac72d6-be11-436e-82d7-688e9a0978ba/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/0cac72d6-be11-436e-82d7-688e9a0978ba/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-32b96b71-210d-44f5-bf9e-d2b474571e82.json" type="application/json+chapters"/></item><item><title>Confronting Higher Ed&apos;s Grade Economy: A Call to Action on AI</title><itunes:title>Confronting Higher Ed&apos;s Grade Economy: A Call to Action on AI</itunes:title><description><![CDATA[<p>Welcome to another episode of AI 
Goes to College, the podcast where Craig and Rob break down what’s really happening with Generative AI in higher education. In this episode, Rob shares a professional update and the hosts dive straight into a candid conversation about the urgent need for action when it comes to embracing and experimenting with AI in the classroom. Forget waiting for the “perfect plan.” Craig and Rob encourage faculty and academic leaders to start doing, iterating, and learning as the technology—and the educational landscape—continues to evolve.</p><p>They tackle the risks and realities educators face, from teaching evaluations to institutional inertia, and explore the challenges of moving beyond a “grade economy” where effort is traded for grades. The conversation gets real about shifting mindsets, focusing on genuine demonstrations of learning, and the importance of collective action in higher ed to adapt to the AI transformation. Plus, get a practical tip to supercharge your workflow: how to use Chrome Split View (and Edge’s version) to work side-by-side with AI tools and documents.</p><p>If you’re looking for honest discussion, actionable advice, and a bit of humor about the trials and opportunities of AI in academia, this episode is for you. Don’t miss out!</p><p>Takeaways:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The concept of the grade economy has led to a transactional view of education, where students equate effort directly with grades, rather than focusing on genuine learning.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>It is essential for educators to embrace iterative approaches in their classrooms, similar to how college football playoffs evolved, rather than waiting for the perfect solution.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The rapid evolution of AI tools necessitates that educators continuously adapt their teaching methods to remain relevant and effective in fostering student learning.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>We must challenge the conventional grading system that incentivizes minimal effort by students, and instead focus on developing intrinsic motivation to learn.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Transparency in teaching strategies and the incorporation of AI should be communicated to students to foster a collaborative learning environment.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Educational institutions must engage in systemic change to address the flaws in the current grading system, moving away from a production line mentality towards genuine assessments of learning.</li></ol><br/>]]></description><content:encoded><![CDATA[<p>Welcome to another episode of AI Goes to College, the podcast where Craig and Rob break down what’s really happening with Generative AI in higher education. In this episode, Rob shares a professional update and the hosts dive straight into a candid conversation about the urgent need for action when it comes to embracing and experimenting with AI in the classroom. 
Forget waiting for the “perfect plan.” Craig and Rob encourage faculty and academic leaders to start doing, iterating, and learning as the technology—and the educational landscape—continues to evolve.</p><p>They tackle the risks and realities educators face, from teaching evaluations to institutional inertia, and explore the challenges of moving beyond a “grade economy” where effort is traded for grades. The conversation gets real about shifting mindsets, focusing on genuine demonstrations of learning, and the importance of collective action in higher ed to adapt to the AI transformation. Plus, get a practical tip to supercharge your workflow: how to use Chrome Split View (and Edge’s version) to work side-by-side with AI tools and documents.</p><p>If you’re looking for honest discussion, actionable advice, and a bit of humor about the trials and opportunities of AI in academia, this episode is for you. Don’t miss out!</p><p>Takeaways:</p><ol><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The concept of the grade economy has led to a transactional view of education, where students equate effort directly with grades, rather than focusing on genuine learning.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>It is essential for educators to embrace iterative approaches in their classrooms, similar to how college football playoffs evolved, rather than waiting for the perfect solution.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>The rapid evolution of AI tools necessitates that educators continuously adapt their teaching methods to remain relevant and effective in fostering student learning.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>We must challenge the conventional grading system that incentivizes minimal effort by students, and instead focus on developing intrinsic motivation to learn.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Transparency in teaching strategies and the incorporation of AI should be communicated to students to foster a collaborative learning environment.</li><li data-list="bullet"><span class="ql-ui" contenteditable="false"></span>Educational institutions must engage in systemic change to address the flaws in the current grading system, moving away from a production line mentality towards genuine assessments of learning.</li></ol><br/>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">4ba7dd0f-60b2-4aa7-90bc-0be7ebbf4c88</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Wed, 14 Jan 2026 04:14:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/4ba7dd0f-60b2-4aa7-90bc-0be7ebbf4c88.mp3" length="36010231" type="audio/mpeg"/><itunes:duration>37:31</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>29</itunes:episode><podcast:episode>29</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/7bf20430-6735-42e5-bdf3-0fd54ad81865/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/7bf20430-6735-42e5-bdf3-0fd54ad81865/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript 
url="https://transcripts.captivate.fm/transcript/7bf20430-6735-42e5-bdf3-0fd54ad81865/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-5397b5c0-8e21-443f-afe4-6e5e538bbf9a.json" type="application/json+chapters"/></item><item><title>Building Resilience in the AI Era: What Faculty Need to Know (Live from ICISER)</title><itunes:title>Building Resilience in the AI Era: What Faculty Need to Know (Live from ICISER)</itunes:title><description><![CDATA[<h2>Episode Description</h2><p>Join Craig and Rob for the very first live stream of AI Goes to College, recorded at the International Conference on Information Systems Education Research Workshop. In this special episode, we explore how generative AI is fundamentally changing knowledge work, starting with our own field of Information Systems as the "canary in the coal mine."</p><p>Craig shares his surprising experience with vibe coding—creating deployable web applications and productivity tools in hours rather than days—and explains why this signals a massive shift coming for all knowledge workers. We also tackle the troubling trend of students using AI to avoid productive learning friction, discuss the dark side of AI monetization and data privacy, and wrestle with difficult questions about AI companionship in an increasingly lonely society.</p><p>This conversation moves beyond the hype to examine both the genuine opportunities and serious concerns that educators and technologists need to grapple with as AI becomes embedded in every aspect of work and learning.</p><h2>Key Topics &amp; Timestamps</h2><ul><li>Welcome and introduction to the live format</li><li>Rob's surprising AI use case: Students creating machine-voiced presentations to avoid public speaking</li><li>Craig introduces vibe coding and creating deployable apps in minutes</li><li> Information Systems as the "canary in the coal mine" for knowledge work disruption</li><li>When vibe coding works (and when it doesn't): Simple vs. enterprise applications</li><li>The 50% principle: "50% is greater than 100%"</li><li>How AI changes systems analysis and prototyping</li><li>The job market reality: Entry-level positions disappearing</li><li>What should we be teaching students now?</li><li>Privacy concerns and institutional AI tools</li><li>The monetization problem: When AI platforms need to make money</li><li>AI companionship and mental health concerns</li><li>Using AI for 24/7 policy questions and course support</li><li>Should we accept AI as a solution to technology-created loneliness?</li></ul><br/><h2>Key Insights</h2><p><strong>The 50% Principle</strong>: Stop trying to get AI to do 100% of a task. 
Instead, focus on tools that save you half the effort—that's where the real value lies.</p><p><strong>Vibe Coding Reality</strong>: It's not for enterprise-scale applications, but it's revolutionary for rapid prototyping and creating simple, personal productivity tools without needing current coding skills.</p><p><strong>Productive Friction</strong>: Students are increasingly using AI to avoid uncomfortable but necessary learning experiences, like public speaking, removing the "friction points" that actually drive growth.</p><p><strong>The IS Canary</strong>: Information Systems professionals are experiencing AI disruption first, but similar transformations are coming for accounting, finance, law, and virtually all knowledge work.</p><p><strong>Privacy Warning</strong>: As AI companies struggle to monetize, expect increased data harvesting and advertising. Consider running local models for sensitive work.</p><h2>Resources Mentioned</h2><ul><li><strong>AI Goes to College website</strong>: aigoestocollege.com</li><li><strong>LM Studio</strong>: Tool for running large language models locally</li><li><strong>Claude Code, Codex, Antigravity</strong>: Professional coding environments mentioned</li><li><strong>Meta's Llama</strong>: Open-source AI model (though future releases uncertain)</li></ul><br/><h2>Credits</h2><p><strong>Hosts</strong>: Craig Van Slyke and Rob Crossler</p><p><strong>Audio</strong>: Hazel Crossler</p><p><strong>Sponsored by</strong>: Association for Information Systems Special Interest Group on Education (SIG ED) <a href="https://ais-siged.org/" rel="noopener noreferrer" target="_blank">https://ais-siged.org/</a> </p><p><strong>Event</strong>: International Conference on Information Systems Education Research Workshop</p><p><strong>Special thanks to</strong>: Conference organizers Tanya McGill and Rosetta Romano</p><p><strong>Companies mentioned in this episode:</strong></p><ul><li> Washington State University </li><li> <a href="https://ais-siged.org/" rel="noopener noreferrer" target="_blank">AIS SIG ED</a> </li><li> OpenAI </li><li> Google </li><li> Claude Code </li><li> Gemini </li><li> Copilot </li><li> Facebook </li><li> <a href="https://www.prospectpressvt.com/textbooks/b%C3%A9langer-information-systems-for-business-an-experiential-approach-5-0" rel="noopener noreferrer" target="_blank">Prospect Press of Vermont</a></li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<h2>Episode Description</h2><p>Join Craig and Rob for the very first live stream of AI Goes to College, recorded at the International Conference on Information Systems Education Research Workshop. In this special episode, we explore how generative AI is fundamentally changing knowledge work, starting with our own field of Information Systems as the "canary in the coal mine."</p><p>Craig shares his surprising experience with vibe coding—creating deployable web applications and productivity tools in hours rather than days—and explains why this signals a massive shift coming for all knowledge workers. 
We also tackle the troubling trend of students using AI to avoid productive learning friction, discuss the dark side of AI monetization and data privacy, and wrestle with difficult questions about AI companionship in an increasingly lonely society.</p><p>This conversation moves beyond the hype to examine both the genuine opportunities and serious concerns that educators and technologists need to grapple with as AI becomes embedded in every aspect of work and learning.</p><h2>Key Topics &amp; Timestamps</h2><ul><li>Welcome and introduction to the live format</li><li>Rob's surprising AI use case: Students creating machine-voiced presentations to avoid public speaking</li><li>Craig introduces vibe coding and creating deployable apps in minutes</li><li> Information Systems as the "canary in the coal mine" for knowledge work disruption</li><li>When vibe coding works (and when it doesn't): Simple vs. enterprise applications</li><li>The 50% principle: "50% is greater than 100%"</li><li>How AI changes systems analysis and prototyping</li><li>The job market reality: Entry-level positions disappearing</li><li>What should we be teaching students now?</li><li>Privacy concerns and institutional AI tools</li><li>The monetization problem: When AI platforms need to make money</li><li>AI companionship and mental health concerns</li><li>Using AI for 24/7 policy questions and course support</li><li>Should we accept AI as a solution to technology-created loneliness?</li></ul><br/><h2>Key Insights</h2><p><strong>The 50% Principle</strong>: Stop trying to get AI to do 100% of a task. Instead, focus on tools that save you half the effort—that's where the real value lies.</p><p><strong>Vibe Coding Reality</strong>: It's not for enterprise-scale applications, but it's revolutionary for rapid prototyping and creating simple, personal productivity tools without needing current coding skills.</p><p><strong>Productive Friction</strong>: Students are increasingly using AI to avoid uncomfortable but necessary learning experiences, like public speaking, removing the "friction points" that actually drive growth.</p><p><strong>The IS Canary</strong>: Information Systems professionals are experiencing AI disruption first, but similar transformations are coming for accounting, finance, law, and virtually all knowledge work.</p><p><strong>Privacy Warning</strong>: As AI companies struggle to monetize, expect increased data harvesting and advertising. 
Consider running local models for sensitive work.</p><h2>Resources Mentioned</h2><ul><li><strong>AI Goes to College website</strong>: aigoestocollege.com</li><li><strong>LM Studio</strong>: Tool for running large language models locally</li><li><strong>Claude Code, Codex, Antigravity</strong>: Professional coding environments mentioned</li><li><strong>Meta's Llama</strong>: Open-source AI model (though future releases uncertain)</li></ul><br/><h2>Credits</h2><p><strong>Hosts</strong>: Craig Van Slyke and Rob Crossler</p><p><strong>Audio</strong>: Hazel Crossler</p><p><strong>Sponsored by</strong>: Association for Information Systems Special Interest Group on Education (SIG ED) <a href="https://ais-siged.org/" rel="noopener noreferrer" target="_blank">https://ais-siged.org/</a> </p><p><strong>Event</strong>: International Conference on Information Systems Education Research Workshop</p><p><strong>Special thanks to</strong>: Conference organizers Tanya McGill and Rosetta Romano</p><p><strong>Companies mentioned in this episode:</strong></p><ul><li> Washington State University </li><li> <a href="https://ais-siged.org/" rel="noopener noreferrer" target="_blank">AIS SIG ED</a> </li><li> OpenAI </li><li> Google </li><li> Claude Code </li><li> Gemini </li><li> Copilot </li><li> Facebook </li><li> <a href="https://www.prospectpressvt.com/textbooks/b%C3%A9langer-information-systems-for-business-an-experiential-approach-5-0" rel="noopener noreferrer" target="_blank">Prospect Press of Vermont</a></li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">80902e89-590f-477c-8b93-343ef142b2f3</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Mon, 22 Dec 2025 04:30:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/80902e89-590f-477c-8b93-343ef142b2f3.mp3" length="44854673" type="audio/mpeg"/><itunes:duration>46:43</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>28</itunes:episode><podcast:episode>28</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/6c173114-d3a1-437d-bc1e-43dfeb94c9a8/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/6c173114-d3a1-437d-bc1e-43dfeb94c9a8/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-3e0dd16f-004c-4b0b-a7cf-80397abb671f.json" type="application/json+chapters"/></item><item><title>AI, Friction, and the Future of Teaching and Learning: Lessons from Gemini 3</title><itunes:title>AI, Friction, and the Future of Teaching and Learning: Lessons from Gemini 3</itunes:title><description><![CDATA[<p>Are you ready to rethink how AI is shaping higher education? Join Craig and Rob in episode 27 of AIGTC as they dive into the recent agentic shift in AI models like Google’s Gemini 3—and what this means for students, faculty, and the future of learning.</p><p>In this thought-provoking conversation, Craig shares his unsettling experience with Gemini 3’s “agentic” behavior, where AI takes the reins with minimal user input—even when that’s not what the user asked for. 
The hosts examine how this frictionless, super-helpful technology might make academic shortcuts easier than ever, removing the crucial learning struggles that foster true understanding. Are we on the verge of a “skill inversion,” where students need expertise just to avoid cheating themselves out of learning?</p><p>But it’s not all doom and gloom: Rob and Craig explore actionable solutions for instructors, focusing on process-oriented teaching, project-based learning, and authentic reflection assignments that resist easy automation. They challenge educators to try just one process-focused change in their next class and offer themselves as resources for peer collaboration and feedback.</p><p>Throughout the episode, you’ll discover:</p><ul><li>Why “agentic” AI could undermine deep learning—and what to do about it</li><li>The hidden dangers of frictionless assignment completion for student growth</li><li>Practical strategies to make student assessment more process-driven and meaningful</li><li>How reflection and professionalism can help students stand out in the AI era</li><li>The importance of radical thinking and institutional adaptation for the future of higher ed</li></ul><br/><p>Plus, stay tuned for details on AIGTC’s first-ever live stream at the International Conference on Information Systems, and learn how you can join the hosts in Nashville or virtually for engaging Q&amp;A and networking. The live stream will be held from 17:00 - 18:00 US Central time (UTC - 6). Link forthcoming.</p><p>If you’re an educator, administrator, or student eager for honest insights and expert advice on navigating AI in academia, this episode is essential listening. Discover how you can control what you can—and embrace the challenge of teaching and learning in an AI-driven world.</p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>Are you ready to rethink how AI is shaping higher education? Join Craig and Rob in episode 27 of AIGTC as they dive into the recent agentic shift in AI models like Google’s Gemini 3—and what this means for students, faculty, and the future of learning.</p><p>In this thought-provoking conversation, Craig shares his unsettling experience with Gemini 3’s “agentic” behavior, where AI takes the reins with minimal user input—even when that’s not what the user asked for. The hosts examine how this frictionless, super-helpful technology might make academic shortcuts easier than ever, removing the crucial learning struggles that foster true understanding. Are we on the verge of a “skill inversion,” where students need expertise just to avoid cheating themselves out of learning?</p><p>But it’s not all doom and gloom: Rob and Craig explore actionable solutions for instructors, focusing on process-oriented teaching, project-based learning, and authentic reflection assignments that resist easy automation. 
They challenge educators to try just one process-focused change in their next class and offer themselves as resources for peer collaboration and feedback.</p><p>Throughout the episode, you’ll discover:</p><ul><li>Why “agentic” AI could undermine deep learning—and what to do about it</li><li>The hidden dangers of frictionless assignment completion for student growth</li><li>Practical strategies to make student assessment more process-driven and meaningful</li><li>How reflection and professionalism can help students stand out in the AI era</li><li>The importance of radical thinking and institutional adaptation for the future of higher ed</li></ul><br/><p>Plus, stay tuned for details on AIGTC’s first-ever live stream at the International Conference on Information Systems, and learn how you can join the hosts in Nashville or virtually for engaging Q&amp;A and networking. The live stream will be held from 17:00 - 18:00 US Central time (UTC - 6). Link forthcoming.</p><p>If you’re an educator, administrator, or student eager for honest insights and expert advice on navigating AI in academia, this episode is essential listening. Discover how you can control what you can—and embrace the challenge of teaching and learning in an AI-driven world.</p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">b35961e2-7560-412b-9bf9-203e6e0d276a</guid><itunes:image href="https://artwork.captivate.fm/cf77f5f1-1cb5-45ff-a0e1-195712b41e10/AIGTC-Ep-27-cover.jpg"/><pubDate>Mon, 24 Nov 2025 04:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/b35961e2-7560-412b-9bf9-203e6e0d276a.mp3" length="41108920" type="audio/mpeg"/><itunes:duration>42:49</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>27</itunes:episode><podcast:episode>27</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/595edb31-d887-40b0-934d-b6239ecc39cb/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/595edb31-d887-40b0-934d-b6239ecc39cb/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/595edb31-d887-40b0-934d-b6239ecc39cb/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-cad0c040-678b-48e7-974b-772da2a3076b.json" type="application/json+chapters"/></item><item><title>Creating the Classroom of Tomorrow: Stephen Fitzpatrick Discusses Generative AI</title><itunes:title>Creating the Classroom of Tomorrow: Stephen Fitzpatrick Discusses Generative AI</itunes:title><description><![CDATA[<p>Are you an educator navigating the new world of generative AI, or a college faculty member wondering how your incoming students are being shaped by technology? 
Join hosts Craig and Rob on this episode of AI Goes to College as they sit down with Stephen Fitzpatrick—a veteran secondary school history teacher, debate coach, and leading voice on AI in education—to explore how artificial intelligence is fundamentally transforming the classroom experience, from high school to higher ed.</p><p>Stephen shares his journey from classroom innovator to Substack thought-leader, detailing his hands-on experimentation with emerging AI tools like ChatGPT, Claude, NotebookLM, and specialized debate platforms. You'll hear candid stories about students' rapid normalization of AI use, from research projects and note-taking to more controversial applications like essay writing—and the ethical dilemmas teachers face in response.</p><p>This episode reveals why banning AI in schools is a losing game, and why the true challenge is fostering critical thinking, curiosity, and responsible use among students. Discover how high school educators are wrestling with the balance between preserving “AI-free” learning spaces and adapting assignments for an AI-empowered world. Stephen provides actionable insights on:</p><ul><li>The rise of AI-powered research and note-taking among high schoolers—what college faculty need to know</li><li>The importance of clear, consistent policies on AI use across classes and institutions</li><li>How educators' comfort level with technology directly impacts their ability to guide students</li><li>Practical solutions for cultivating AI fluency and resilience in both teachers and students</li><li>Why peer-to-peer training and real-world use cases trump generic professional development</li></ul><br/><p>Whether you're a teacher, administrator, or college professor, this conversation will equip you to meet students where they are in the age of AI—challenging old paradigms and preparing them for a future where intelligent technology is their constant companion. Stephen’s nuanced perspective, grounded in frontline experience and continuous experimentation, will inspire you to rethink resistance and embrace adaptation.</p><p>Ready to hear the real story behind “AI Goes to College”? Tune in to learn how you can empower students to use AI as a tool for deeper thinking and lifelong learning—and don’t forget to check out Stephen’s Substack, Teaching in the Age of AI, linked in the show notes for even more practical wisdom.</p><p>Links:</p><p>Teaching in the Age of AI: <a href="https://fitzyhistory.substack.com" rel="noopener noreferrer" target="_blank">https://fitzyhistory.substack.com </a></p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>Are you an educator navigating the new world of generative AI, or a college faculty member wondering how your incoming students are being shaped by technology? Join hosts Craig and Rob on this episode of AI Goes to College as they sit down with Stephen Fitzpatrick—a veteran secondary school history teacher, debate coach, and leading voice on AI in education—to explore how artificial intelligence is fundamentally transforming the classroom experience, from high school to higher ed.</p><p>Stephen shares his journey from classroom innovator to Substack thought-leader, detailing his hands-on experimentation with emerging AI tools like ChatGPT, Claude, NotebookLM, and specialized debate platforms. 
You'll hear candid stories about students' rapid normalization of AI use, from research projects and note-taking to more controversial applications like essay writing—and the ethical dilemmas teachers face in response.</p><p>This episode reveals why banning AI in schools is a losing game, and why the true challenge is fostering critical thinking, curiosity, and responsible use among students. Discover how high school educators are wrestling with the balance between preserving “AI-free” learning spaces and adapting assignments for an AI-empowered world. Stephen provides actionable insights on:</p><ul><li>The rise of AI-powered research and note-taking among high schoolers—what college faculty need to know</li><li>The importance of clear, consistent policies on AI use across classes and institutions</li><li>How educators' comfort level with technology directly impacts their ability to guide students</li><li>Practical solutions for cultivating AI fluency and resilience in both teachers and students</li><li>Why peer-to-peer training and real-world use cases trump generic professional development</li></ul><br/><p>Whether you're a teacher, administrator, or college professor, this conversation will equip you to meet students where they are in the age of AI—challenging old paradigms and preparing them for a future where intelligent technology is their constant companion. Stephen’s nuanced perspective, grounded in frontline experience and continuous experimentation, will inspire you to rethink resistance and embrace adaptation.</p><p>Ready to hear the real story behind “AI Goes to College”? Tune in to learn how you can empower students to use AI as a tool for deeper thinking and lifelong learning—and don’t forget to check out Stephen’s Substack, Teaching in the Age of AI, linked in the show notes for even more practical wisdom.</p><p>Links:</p><p>Teaching in the Age of AI: <a href="https://fitzyhistory.substack.com" rel="noopener noreferrer" target="_blank">https://fitzyhistory.substack.com </a></p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">4fbc8953-0ee9-4386-bc00-03c124d47154</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Tue, 28 Oct 2025 04:27:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/4fbc8953-0ee9-4386-bc00-03c124d47154.mp3" length="47093270" type="audio/mpeg"/><itunes:duration>49:03</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>26</itunes:episode><podcast:episode>26</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/dcb4e90a-02a1-4388-a331-1469a0937bf6/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/dcb4e90a-02a1-4388-a331-1469a0937bf6/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/dcb4e90a-02a1-4388-a331-1469a0937bf6/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-2ebf59fe-cde4-4177-9591-20fcf6f1cf79.json" type="application/json+chapters"/></item><item><title>Context rot, AI over-hype and an intriguing, hilarious video</title><itunes:title>Context rot, AI over-hype and an 
intriguing, hilarious video</itunes:title><description><![CDATA[<p>Welcome to another episode of "AI Goes to College," where hosts Dr. Rob Crossler and Craig dive into the ever-evolving landscape of artificial intelligence in higher education. In this episode, they kick things off with a lighthearted look at the viral, AI-generated parody "Redneck Star Trek," using it as a springboard to discuss the rapidly advancing capabilities of AI video creation and what this means for both educators and students.</p><p>Rob and Craig explore the implications of AI tools making creative content more accessible, shaking up traditional teaching methods, and opening new doors for engagement. They unpack the excitement—and potential pitfalls—around trend topics like vibe coding, the “agentic layer,” governance concerns, and the phenomenon of “context rot” in AI conversations.</p><p>Along the way, the hosts share personal experiences with different AI platforms, discuss challenges in scaling AI within institutional systems, and highlight the need for critical thinking and strong oversight as universities start to embed AI more deeply into daily operations. Whether you’re a faculty member, student, or just AI-curious, this episode offers practical insights, food for thought, and a dose of humor for anyone navigating the intersection of technology and higher ed.</p><p>So grab your coffee (or sweet tea) and join Rob and Craig as they “go where no podcasters have gone before” in the world of collegiate AI!</p><p><strong>Takeaways</strong>:</p><ul><li> The emergence of AI technologies presents unprecedented opportunities for students to create engaging content, thereby transforming traditional classroom dynamics. </li><li> With AI-generated content, educators can offer varied media presentations, catering to diverse learning styles and enhancing student engagement. </li><li> The recent advancements in AI tools have made it feasible for novices to produce high-quality content, which was previously the domain of experts alone. </li><li> Concerns regarding AI-generated outputs necessitate critical evaluation to avoid potential misinformation and ensure educational integrity. 
</li></ul><br/><p><strong>Links</strong>:</p><ul><li>Redneck Star Trek - Beam Me Up, Bubba: <a href="https://youtu.be/1eqYswiW4eo?si=XvwPdWGTbiSvx6FL" rel="noopener noreferrer" target="_blank">https://youtu.be/1eqYswiW4eo?si=XvwPdWGTbiSvx6FL</a></li><li>The vibe coding hangover is upon us (Fast Company): <a href="https://www.fastcompany.com/91398622/the-vibe-coding-hangover-is-upon-us" rel="noopener noreferrer" target="_blank">https://www.fastcompany.com/91398622/the-vibe-coding-hangover-is-upon-us</a></li><li>The Agentic Layer: Why the Middle of the Cake Matters in AI-Driven Delivery: <a href="https://cd.foundation/blog/2025/09/05/agentic-layer-ai/" rel="noopener noreferrer" target="_blank">https://cd.foundation/blog/2025/09/05/agentic-layer-ai/</a></li><li>Context rot - What it is and how to avoid it: <a href="https://aigoestocollege.substack.com/p/context-rot-the-hidden-challenge" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/p/context-rot-the-hidden-challenge</a></li></ul><br/><p><strong>Companies mentioned in this episode:</strong></p><ul><li> Neural Derp </li><li> Grok </li><li> Google Vids </li><li> Wondery </li><li> Claude </li><li> ChatGPT </li><li> Fast Company </li><li> Microsoft Copilot </li><li> Suno AI </li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>Welcome to another episode of "AI Goes to College," where hosts Dr. Rob Crossler and Craig dive into the ever-evolving landscape of artificial intelligence in higher education. In this episode, they kick things off with a lighthearted look at the viral, AI-generated parody "Redneck Star Trek," using it as a springboard to discuss the rapidly advancing capabilities of AI video creation and what this means for both educators and students.</p><p>Rob and Craig explore the implications of AI tools making creative content more accessible, shaking up traditional teaching methods, and opening new doors for engagement. They unpack the excitement—and potential pitfalls—around trend topics like vibe coding, the “agentic layer,” governance concerns, and the phenomenon of “context rot” in AI conversations.</p><p>Along the way, the hosts share personal experiences with different AI platforms, discuss challenges in scaling AI within institutional systems, and highlight the need for critical thinking and strong oversight as universities start to embed AI more deeply into daily operations. Whether you’re a faculty member, student, or just AI-curious, this episode offers practical insights, food for thought, and a dose of humor for anyone navigating the intersection of technology and higher ed.</p><p>So grab your coffee (or sweet tea) and join Rob and Craig as they “go where no podcasters have gone before” in the world of collegiate AI!</p><p><strong>Takeaways</strong>:</p><ul><li> The emergence of AI technologies presents unprecedented opportunities for students to create engaging content, thereby transforming traditional classroom dynamics. </li><li> With AI-generated content, educators can offer varied media presentations, catering to diverse learning styles and enhancing student engagement. </li><li> The recent advancements in AI tools have made it feasible for novices to produce high-quality content, which was previously the domain of experts alone. </li><li> Concerns regarding AI-generated outputs necessitate critical evaluation to avoid potential misinformation and ensure educational integrity. 
</li></ul><br/><p><strong>Links</strong>:</p><ul><li>Redneck Star Trek - Beam Me Up, Bubba: <a href="https://youtu.be/1eqYswiW4eo?si=XvwPdWGTbiSvx6FL" rel="noopener noreferrer" target="_blank">https://youtu.be/1eqYswiW4eo?si=XvwPdWGTbiSvx6FL</a></li><li>The vibe coding hangover is upon us (Fast Company): <a href="https://www.fastcompany.com/91398622/the-vibe-coding-hangover-is-upon-us" rel="noopener noreferrer" target="_blank">https://www.fastcompany.com/91398622/the-vibe-coding-hangover-is-upon-us</a></li><li>The Agentic Layer: Why the Middle of the Cake Matters in AI-Driven Delivery: <a href="https://cd.foundation/blog/2025/09/05/agentic-layer-ai/" rel="noopener noreferrer" target="_blank">https://cd.foundation/blog/2025/09/05/agentic-layer-ai/</a></li><li>Context rot - What it is and how to avoid it: <a href="https://aigoestocollege.substack.com/p/context-rot-the-hidden-challenge" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/p/context-rot-the-hidden-challenge</a></li></ul><br/><p><strong>Companies mentioned in this episode:</strong></p><ul><li> Neural Derp </li><li> Grok </li><li> Google Vids </li><li> Wondery </li><li> Claude </li><li> ChatGPT </li><li> Fast Company </li><li> Microsoft Copilot </li><li> Suno AI </li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">da4969ae-615f-4563-9e58-218dc68b02b0</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Mon, 22 Sep 2025 04:33:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/da4969ae-615f-4563-9e58-218dc68b02b0.mp3" length="19994347" type="audio/mpeg"/><itunes:duration>41:39</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>25</itunes:episode><podcast:episode>25</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/87dd3617-5f7d-43a4-8d29-836b30ccc447/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/87dd3617-5f7d-43a4-8d29-836b30ccc447/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/87dd3617-5f7d-43a4-8d29-836b30ccc447/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-f5ed5520-3b88-4772-a02c-0bd01b7ba47d.json" type="application/json+chapters"/></item><item><title>AI Confessions, Energy Costs and Vibe Coding in Academia</title><itunes:title>AI Confessions, Energy Costs and Vibe Coding in Academia</itunes:title><description><![CDATA[<p><strong>Episode Overview:</strong> In this episode, hosts Craig and Rob discuss the evolving landscape of AI in academia, research ethics, and the surprising environmental impact of AI technologies. They also test-drive AI vibe coding, discuss agentic AI, and share practical advice for instructors, researchers, and students navigating a fast-changing technological world.</p><p>As a bonus, listen to how a border collie would explain epistemic injustice!</p><h3>Key Topics &amp; Takeaways</h3><h4>1. 
<strong>Academic Honesty &amp; AI ("AI Confessions" in Publishing)</strong></h4><ul><li><strong>Honesty is the Best Policy:</strong> When using AI tools like Elicit or Grammarly for research, be transparent in your academic declarations. Share enough detail to feel honest, but don’t stress if you can’t recall every interaction.</li><li><strong>Journals &amp; AI Use:</strong> Journal policies on AI differ dramatically—some even ban AI use altogether. Question whether those venues align with your publishing values.</li><li><strong>Editors &amp; Transparency:</strong> Journals demand transparency from authors, but rarely provide clear guidelines or disclosure on how your AI usage will be handled.</li><li><strong>Takeaway:</strong> Aim for high-level honesty in your disclosures. If in doubt, err on the side of transparency, but don’t feel compelled to provide exhaustive step-by-steps.</li></ul><br/><h4>2. <strong>The Environmental Cost of AI</strong></h4><ul><li><strong>AI &amp; Resource Consumption:</strong> Training large language models consumes massive electricity and water resources. Data centers may bring economic benefits but create significant energy and environmental tensions.</li><li><strong>Transparency Needed:</strong> AI companies and governments should be more open about environmental impacts and strategies for sustainability.</li><li><strong>User Responsibility:</strong> Everyday users contribute to AI’s environmental footprint—using AI efficiently and mindfully is everyone’s responsibility.</li><li><strong>Takeaway:</strong> Educate yourself on the energy/water cost of AI and advocate for sustainable practices in tech.</li></ul><br/><h4>3. <strong>Vibe Coding &amp; AI-Assisted Programming</strong></h4><ul><li><strong>What is Vibe Coding?</strong> It’s prompting AI (like ChatGPT) to write software for you—sometimes even without traditional coding.</li><li><strong>Practical Examples:</strong> From fun tools that explain complex subjects in dog-speak (‘Colliesplain’), to running advanced text analyses (LDA topic modeling) in Python with minimal programming knowledge.</li><li><strong>Limits &amp; Opportunities:</strong> Fully relying on AI for complex projects can be risky if you can’t debug or fully understand the code. However, AI-assisted coding dramatically speeds up the process and opens doors for those who wouldn’t have coded otherwise.</li><li><strong>Takeaway:</strong> AI is a powerful coding assistant, especially for prototyping or smaller tasks, but a foundational understanding of the code and analysis involved remains essential.</li></ul><br/><h4>4. <strong>Agentic AI &amp; Task Automation</strong></h4><ul><li><strong>What’s Agentic AI?</strong> Tools that not only complete tasks but can string together sequences of tasks or collaborate with other agents.</li><li><strong>Real-World Use:</strong> The hosts discuss planning conference materials and lifelong learning using agentic AI, noting it can handle much of the “grunt work” but still requires human direction for nuanced judgement.</li><li><strong>Governance Cautions:</strong> The delegation of decisions to AI agents (especially in areas like applicant screening) can lead to ethical and legal issues if not managed carefully.</li><li><strong>Takeaway:</strong> Embrace AI agents for efficiency, but institute proper oversight and understand the governance and ethical implications.</li></ul><br/><h4>5. 
<strong>Navigating AI Tools/Platforms</strong></h4><ul><li><strong>Emerging Tools:</strong> New features like ChatGPT’s Agent Mode, Study &amp; Learn, and Gemini’s Guided Learning are making AI more accessible and interactive for learners and educators.</li><li><strong>Practical Use:</strong> Satisficing—choosing a tool that works well enough rather than chasing constant upgrades—can save time and reduce frustration.</li><li><strong>Institutional Policies:</strong> Heed privacy regulations (like FERPA in the US) when using AI with student or confidential data. Many universities approve only specific platforms.</li><li><strong>Takeaway:</strong> Test new offerings, pick what works for you, but remain vigilant about data privacy and security.</li></ul><br/><p>For all things AI Goes to College, including the podcast, go to <a href="https://www.aigoestocollege.com/" rel="noopener noreferrer" target="_blank">https://www.aigoestocollege.com/</a>.</p><p>Email Rob - Rob.Crossler@AIGoesToCollege.com</p><p>Email Craig - Craig@AIGoesToCollege.com</p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p><strong>Episode Overview:</strong> In this episode, hosts Craig and Rob discuss the evolving landscape of AI in academia, research ethics, and the surprising environmental impact of AI technologies. They also test-drive AI vibe coding, discuss agentic AI, and share practical advice for instructors, researchers, and students navigating a fast-changing technological world.</p><p>As a bonus, listen to how a border collie would explain epistemic injustice!</p><h3>Key Topics &amp; Takeaways</h3><h4>1. <strong>Academic Honesty &amp; AI ("AI Confessions" in Publishing)</strong></h4><ul><li><strong>Honesty is the Best Policy:</strong> When using AI tools like Elicit or Grammarly for research, be transparent in your academic declarations. Share enough detail to feel honest, but don’t stress if you can’t recall every interaction.</li><li><strong>Journals &amp; AI Use:</strong> Journal policies on AI differ dramatically—some even ban AI use altogether. Question whether those venues align with your publishing values.</li><li><strong>Editors &amp; Transparency:</strong> Journals demand transparency from authors, but rarely provide clear guidelines or disclosure on how your AI usage will be handled.</li><li><strong>Takeaway:</strong> Aim for high-level honesty in your disclosures. If in doubt, err on the side of transparency, but don’t feel compelled to provide exhaustive step-by-steps.</li></ul><br/><h4>2. <strong>The Environmental Cost of AI</strong></h4><ul><li><strong>AI &amp; Resource Consumption:</strong> Training large language models consumes massive electricity and water resources. Data centers may bring economic benefits but create significant energy and environmental tensions.</li><li><strong>Transparency Needed:</strong> AI companies and governments should be more open about environmental impacts and strategies for sustainability.</li><li><strong>User Responsibility:</strong> Everyday users contribute to AI’s environmental footprint—using AI efficiently and mindfully is everyone’s responsibility.</li><li><strong>Takeaway:</strong> Educate yourself on the energy/water cost of AI and advocate for sustainable practices in tech.</li></ul><br/><h4>3. 
<strong>Vibe Coding &amp; AI-Assisted Programming</strong></h4><ul><li><strong>What is Vibe Coding?</strong> It’s prompting AI (like ChatGPT) to write software for you—sometimes even without traditional coding.</li><li><strong>Practical Examples:</strong> From fun tools that explain complex subjects in dog-speak (‘Colliesplain’), to running advanced text analyses (LDA topic modeling) in Python with minimal programming knowledge.</li><li><strong>Limits &amp; Opportunities:</strong> Fully relying on AI for complex projects can be risky if you can’t debug or fully understand the code. However, AI-assisted coding dramatically speeds up the process and opens doors for those who wouldn’t have coded otherwise.</li><li><strong>Takeaway:</strong> AI is a powerful coding assistant, especially for prototyping or smaller tasks, but a foundational understanding of the code and analysis involved remains essential.</li></ul><br/><h4>4. <strong>Agentic AI &amp; Task Automation</strong></h4><ul><li><strong>What’s Agentic AI?</strong> Tools that not only complete tasks but can string together sequences of tasks or collaborate with other agents.</li><li><strong>Real-World Use:</strong> The hosts discuss planning conference materials and lifelong learning using agentic AI, noting it can handle much of the “grunt work” but still requires human direction for nuanced judgement.</li><li><strong>Governance Cautions:</strong> The delegation of decisions to AI agents (especially in areas like applicant screening) can lead to ethical and legal issues if not managed carefully.</li><li><strong>Takeaway:</strong> Embrace AI agents for efficiency, but institute proper oversight and understand the governance and ethical implications.</li></ul><br/><h4>5. <strong>Navigating AI Tools/Platforms</strong></h4><ul><li><strong>Emerging Tools:</strong> New features like ChatGPT’s Agent Mode, Study &amp; Learn, and Gemini’s Guided Learning are making AI more accessible and interactive for learners and educators.</li><li><strong>Practical Use:</strong> Satisficing—choosing a tool that works well enough rather than chasing constant upgrades—can save time and reduce frustration.</li><li><strong>Institutional Policies:</strong> Heed privacy regulations (like FERPA in the US) when using AI with student or confidential data. 
Many universities approve only specific platforms.</li><li><strong>Takeaway:</strong> Test new offerings, pick what works for you, but remain vigilant about data privacy and security.</li></ul><br/><p>For all things AI Goes to College, including the podcast, go to <a href="https://www.aigoestocollege.com/" rel="noopener noreferrer" target="_blank">https://www.aigoestocollege.com/</a>.</p><p>Email Rob - Rob.Crossler@AIGoesToCollege.com</p><p>Email Craig - Craig@AIGoesToCollege.com</p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">ebaaedbe-764b-403f-8a87-be5606412cc7</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Wed, 13 Aug 2025 04:45:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/ebaaedbe-764b-403f-8a87-be5606412cc7.mp3" length="22288943" type="audio/mpeg"/><itunes:duration>46:26</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>24</itunes:episode><podcast:episode>24</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/3d7b2455-b206-4ff1-9259-153f089d0ccb/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/3d7b2455-b206-4ff1-9259-153f089d0ccb/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/3d7b2455-b206-4ff1-9259-153f089d0ccb/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-10c86782-7ef5-448c-b0ae-d4fdc75ed212.json" type="application/json+chapters"/></item><item><title>The Future of Entry-Level Employment in a Post-AI World</title><itunes:title>The Future of Entry-Level Employment in a Post-AI World</itunes:title><description><![CDATA[<p>In this episode, Craig Van Slyke and Robert E. Crossler tackle a growing concern in higher education: how are students really using AI in their learning? Sparked by an article from the Neuron newsletter, they discuss how many students are using AI tools superficially – what they call "brain rot" – instead of engaging deeply with their coursework. The hosts argue that this shallow engagement with AI could seriously impact students' ability to learn and retain information.</p><p>The conversation then shifts to what this means for students entering the workforce. Van Slyke and Crossler worry about a looming skills gap as AI and automation reshape entry-level jobs. They make a compelling case for moving away from traditional teaching methods toward a mastery-based approach that emphasizes deep understanding and practical skills. This shift, they argue, is crucial for keeping college programs relevant and ensuring graduates are ready for an AI-enhanced workplace.</p><p>A key concept they explore is "cognitive debt" – what happens when students rely too heavily on AI without thinking critically about what they're learning. The hosts stress how important it is for students to develop better thinking skills and be able to explain their reasoning when using AI tools. 
Throughout the discussion, Van Slyke and Crossler offer a balanced view of both the challenges and opportunities that AI brings to higher education, emphasizing the need for approaches that encourage critical thinking and adaptability in this rapidly changing landscape.</p><p><strong>Takeaways</strong>:</p><p>Key Actions and Insights</p><ol><li>Faculty Development: Prioritize AI training for educators to better guide student use of these tools</li><li>Student Engagement: Design assignments that encourage meaningful AI interaction rather than superficial use</li><li>Skills Focus: Prepare students for an AI-driven job market by emphasizing critical thinking and practical application</li><li>Assessment Strategy: Shift toward mastery-based learning to promote deeper understanding</li><li>Combat "Cognitive Debt": Require students to explain their reasoning when using AI tools</li></ol><br/><p><strong>Links:</strong></p><ul><li>The Neuron Newsletter:<strong> </strong><a href="https://www.theneurondaily.com" rel="noopener noreferrer" target="_blank">https://www.theneurondaily.com</a></li><li>Neuron article - <a href="https://www.theneuron.ai/explainer-articles/wtf-is-going-on-with-ai-and-education" rel="noopener noreferrer" target="_blank">WTF is going on with AI and education?</a></li><li>MIT Working Paper: <a href="https://arxiv.org/abs/2506.08872" rel="noopener noreferrer" target="_blank">Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for [an] essay writing task</a></li><li>Tina Austin on Cognitive Debt:<strong> </strong><a href="https://nickpotkalitsky.substack.com/p/brain-rot-isnt-real-but-cognitive" rel="noopener noreferrer" target="_blank">Brain Rot Isn’t Real, but Cognitive Offloading Is</a></li><li>Nick Potkalitsky's excellent newsletter, Educating AI: <a href="https://nickpotkalitsky.substack.com/" rel="noopener noreferrer" target="_blank">https://nickpotkalitsky.substack.com/ </a></li><li>Note: In the episode, Craig attributed the article to Nick Potkalitsky. The article appears as a guest post in Nick's newsletter, <em>Educating AI</em>.</li><li>Craig’s article: <a href="https://aigoestocollege.substack.com/p/the-belly-of-the-snake-entry-level" rel="noopener noreferrer" target="_blank">The belly of the snake: Entry-level unemployment and the coming skills gap</a></li></ul><br/><p><strong>AI Goes to College Website: </strong>https://www.aigoestocollege.com/</p><p><strong>Email the Hosts: </strong>Rob: <a href="mailto:rob.crossler@aigoestocollege.com" rel="noopener noreferrer" target="_blank">rob.crossler@aigoestocollege.com</a>, Craig: <a href="mailto:craig@aigoestocollege.com" rel="noopener noreferrer" target="_blank">craig@aigoestocollege.com</a></p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p><p><strong>Craig makes a mistake</strong></p>]]></description><content:encoded><![CDATA[<p>In this episode, Craig Van Slyke and Robert E. Crossler tackle a growing concern in higher education: how are students really using AI in their learning? Sparked by an article from the Neuron newsletter, they discuss how many students are using AI tools superficially – what they call "brain rot" – instead of engaging deeply with their coursework. The hosts argue that this shallow engagement with AI could seriously impact students' ability to learn and retain information.</p><p>The conversation then shifts to what this means for students entering the workforce. 
Van Slyke and Crossler worry about a looming skills gap as AI and automation reshape entry-level jobs. They make a compelling case for moving away from traditional teaching methods toward a mastery-based approach that emphasizes deep understanding and practical skills. This shift, they argue, is crucial for keeping college programs relevant and ensuring graduates are ready for an AI-enhanced workplace.</p><p>A key concept they explore is "cognitive debt" – what happens when students rely too heavily on AI without thinking critically about what they're learning. The hosts stress how important it is for students to develop better thinking skills and be able to explain their reasoning when using AI tools. Throughout the discussion, Van Slyke and Crossler offer a balanced view of both the challenges and opportunities that AI brings to higher education, emphasizing the need for approaches that encourage critical thinking and adaptability in this rapidly changing landscape.</p><p><strong>Takeaways</strong>:</p><p>Key Actions and Insights</p><ol><li>Faculty Development: Prioritize AI training for educators to better guide student use of these tools</li><li>Student Engagement: Design assignments that encourage meaningful AI interaction rather than superficial use</li><li>Skills Focus: Prepare students for an AI-driven job market by emphasizing critical thinking and practical application</li><li>Assessment Strategy: Shift toward mastery-based learning to promote deeper understanding</li><li>Combat "Cognitive Debt": Require students to explain their reasoning when using AI tools</li></ol><br/><p><strong>Links:</strong></p><ul><li>The Neuron Newsletter:<strong> </strong><a href="https://www.theneurondaily.com" rel="noopener noreferrer" target="_blank">https://www.theneurondaily.com</a></li><li>Neuron article - <a href="https://www.theneuron.ai/explainer-articles/wtf-is-going-on-with-ai-and-education" rel="noopener noreferrer" target="_blank">WTF is going on with AI and education?</a></li><li>MIT Working Paper: <a href="https://arxiv.org/abs/2506.08872" rel="noopener noreferrer" target="_blank">Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for [an] essay writing task</a></li><li>Tina Austin on Cognitive Debt:<strong> </strong><a href="https://nickpotkalitsky.substack.com/p/brain-rot-isnt-real-but-cognitive" rel="noopener noreferrer" target="_blank">Brain Rot Isn’t Real, but Cognitive Offloading Is</a></li><li>Nick Potkalitsky's excellent newsletter, Educating AI: <a href="https://nickpotkalitsky.substack.com/" rel="noopener noreferrer" target="_blank">https://nickpotkalitsky.substack.com/ </a></li><li>Note: In the episode, Craig attributed the article to Nick Potkalitsky. 
The article appears as a guest post in Nick's newsletter, <em>Educating AI</em>.</li><li>Craig’s article: <a href="https://aigoestocollege.substack.com/p/the-belly-of-the-snake-entry-level" rel="noopener noreferrer" target="_blank">The belly of the snake: Entry-level unemployment and the coming skills gap</a></li></ul><br/><p><strong>AI Goes to College Website: </strong>https://www.aigoestocollege.com/</p><p><strong>Email the Hosts: </strong>Rob: <a href="mailto:rob.crossler@aigoestocollege.com" rel="noopener noreferrer" target="_blank">rob.crossler@aigoestocollege.com</a>, Craig: <a href="mailto:craig@aigoestocollege.com" rel="noopener noreferrer" target="_blank">craig@aigoestocollege.com</a></p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p><p><strong>Craig makes a mistake</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">ebadd1f5-811f-430f-8276-9698e28a959c</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Tue, 22 Jul 2025 04:41:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/ebadd1f5-811f-430f-8276-9698e28a959c.mp3" length="48473744" type="audio/mpeg"/><itunes:duration>50:30</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>23</itunes:episode><podcast:episode>23</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/493bef57-0626-47b6-9c74-aceafa3371f7/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/493bef57-0626-47b6-9c74-aceafa3371f7/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/493bef57-0626-47b6-9c74-aceafa3371f7/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-afa80b21-e063-4e30-9a76-15b2243e1439.json" type="application/json+chapters"/></item><item><title>AI&apos;s Disruption: What It Means for Knowledge Workers and Higher Ed</title><itunes:title>AI&apos;s Disruption: What It Means for Knowledge Workers and Higher Ed</itunes:title><description><![CDATA[<p>The recent discussion between Craig Van Slyke and Robert E. Crossler centered around the alarming prediction from Anthropic's CEO regarding the potential displacement of up to 50% of entry-level knowledge work positions within the next five years due to advancements in generative AI. This assertion prompts a critical examination of the implications for higher education, particularly concerning the preparedness of graduates entering an increasingly automated workforce. Both hosts express skepticism about the immediacy and extent of such disruptions, emphasizing the necessity for educational institutions to adapt curricula to cultivate higher skill levels among students. They highlight the importance of fostering AI discernment and ethical considerations in the use of AI technologies, advocating for a proactive approach that prepares students for evolving job market demands. 
As the conversation unfolds, they underscore the urgent need for educators to engage in thoughtful dialogue and innovative practices to effectively equip students for the future.</p><p>Takeaways:</p><ul><li> In recent discussions, a warning was issued stating that potentially half of knowledge work jobs may be eliminated due to AI advancements within the next five years, prompting significant concern among educators and industry professionals. </li><li> The conversation emphasized the importance of preparing students for a future job market that increasingly favors higher-level skills, particularly in light of the potential displacement of entry-level positions by generative AI technologies. </li><li> It was noted that while AI may lead to job displacement, it is also anticipated to create new job opportunities, suggesting a complex landscape where education must adapt to these shifting dynamics. </li><li> The hosts discussed the necessity for higher education institutions to begin incorporating AI discernment into their curricula, ensuring that students understand the ethical implications and operational realities of AI usage in the workplace. </li><li> The episode highlighted the unprecedented grassroots adoption of AI technologies, as individual workers leverage AI tools independently, often circumventing organizational policies or restrictions. </li><li> The hosts concluded with a call to action for educators to embrace AI in their teaching, encouraging experimentation and risk-taking as essential components of evolving educational practices. </li></ul><br/><p>Links referenced in this episode:</p><ul><li>Survey: <a href="https://aigoestocollege.com/survey2025" rel="noopener noreferrer" target="_blank">aigoestocollege.com/survey2025</a></li><li>Article on NotebookLM's Mind Map: <a href="https://aigoestocollege.substack.com/p/notebooklms-mind-map-a-hidden-gem" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/p/notebooklms-mind-map-a-hidden-gem</a>  </li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>The recent discussion between Craig Van Slyke and Robert E. Crossler centered around the alarming prediction from Anthropic's CEO regarding the potential displacement of up to 50% of entry-level knowledge work positions within the next five years due to advancements in generative AI. This assertion prompts a critical examination of the implications for higher education, particularly concerning the preparedness of graduates entering an increasingly automated workforce. Both hosts express skepticism about the immediacy and extent of such disruptions, emphasizing the necessity for educational institutions to adapt curricula to cultivate higher skill levels among students. They highlight the importance of fostering AI discernment and ethical considerations in the use of AI technologies, advocating for a proactive approach that prepares students for evolving job market demands. As the conversation unfolds, they underscore the urgent need for educators to engage in thoughtful dialogue and innovative practices to effectively equip students for the future.</p><p>Takeaways:</p><ul><li> In recent discussions, a warning was issued stating that potentially half of knowledge work jobs may be eliminated due to AI advancements within the next five years, prompting significant concern among educators and industry professionals. 
</li><li> The conversation emphasized the importance of preparing students for a future job market that increasingly favors higher-level skills, particularly in light of the potential displacement of entry-level positions by generative AI technologies. </li><li> It was noted that while AI may lead to job displacement, it is also anticipated to create new job opportunities, suggesting a complex landscape where education must adapt to these shifting dynamics. </li><li> The hosts discussed the necessity for higher education institutions to begin incorporating AI discernment into their curricula, ensuring that students understand the ethical implications and operational realities of AI usage in the workplace. </li><li> The episode highlighted the unprecedented grassroots adoption of AI technologies, as individual workers leverage AI tools independently, often circumventing organizational policies or restrictions. </li><li> The hosts concluded with a call to action for educators to embrace AI in their teaching, encouraging experimentation and risk-taking as essential components of evolving educational practices. </li></ul><br/><p>Links referenced in this episode:</p><ul><li>Survey: <a href="https://aigoestocollege.com/survey2025" rel="noopener noreferrer" target="_blank">aigoestocollege.com/survey2025</a></li><li>Article on NotebookLM's Mind Map: <a href="https://aigoestocollege.substack.com/p/notebooklms-mind-map-a-hidden-gem" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/p/notebooklms-mind-map-a-hidden-gem</a>  </li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">e4a8638f-5f99-44e1-99e9-a02cba085bb9</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Wed, 11 Jun 2025 04:30:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/e4a8638f-5f99-44e1-99e9-a02cba085bb9.mp3" length="43098377" type="audio/mpeg"/><itunes:duration>44:54</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>22</itunes:episode><podcast:episode>22</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/35c5610f-8d29-42e3-a8bc-f49d321a1e33/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/35c5610f-8d29-42e3-a8bc-f49d321a1e33/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/35c5610f-8d29-42e3-a8bc-f49d321a1e33/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-6e796345-60bc-4613-b579-d87400bc1c1f.json" type="application/json+chapters"/></item><item><title>The Ethical Use of AI in Academia: A Conversation with Carlos I. Torres</title><itunes:title>The Ethical Use of AI in Academia: A Conversation with Carlos I. Torres</itunes:title><description><![CDATA[<p>Imagine walking into a classroom where AI isn't the elephant in the room - it's a welcomed partner in learning. That's exactly what's happening in Carlos I. Torres's information security classes at Baylor University. Instead of joining the chorus of educators crying "Ban AI!" 
Torres is asking a more intriguing question: What if we taught students to dance with artificial intelligence rather than fight against it?</p><p>In this fascinating discussion, Torres pulls back the curtain on his groundbreaking approach. He's not just teaching information security; he's reimagining how students learn in an AI-powered world. His students don't hide their use of AI - they showcase it, document it, and most importantly, learn to think critically about it.</p><p>But here's what makes this conversation truly compelling: Torres isn't just preparing students for exams; he's equipping them for a future where AI will be as common as smartphones are today. As we explore the ethical tightropes and practical challenges of this approach, one thing becomes crystal clear: the future of education isn't about fighting AI - it's about learning to harness its power while keeping our human wisdom firmly in the driver's seat.</p><p>Takeaways:</p><ul><li> The integration of AI within higher education necessitates a nuanced understanding of its capabilities and limitations. </li><li> Carlos I. Torres emphasizes the importance of guiding students on effective AI usage to enhance their learning experience. </li><li> Engaging students with AI prompts fosters critical thinking and deeper engagement in research assignments. </li><li> The assessment of student work should encompass both the final product and the process of interacting with AI tools. </li><li> Ethical considerations surrounding AI usage in academia are paramount, necessitating discussions around transparency and integrity. </li><li> The future workforce must be equipped with skills to supervise AI agents, ensuring their outputs are trustworthy and effective. </li></ul><br/><p>Companies mentioned in this episode:</p><ul><li> Washington State University </li><li> Baylor University </li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>Imagine walking into a classroom where AI isn't the elephant in the room - it's a welcomed partner in learning. That's exactly what's happening in Carlos I. Torres's information security classes at Baylor University. Instead of joining the chorus of educators crying "Ban AI!" Torres is asking a more intriguing question: What if we taught students to dance with artificial intelligence rather than fight against it?</p><p>In this fascinating discussion, Torres pulls back the curtain on his groundbreaking approach. He's not just teaching information security; he's reimagining how students learn in an AI-powered world. His students don't hide their use of AI - they showcase it, document it, and most importantly, learn to think critically about it.</p><p>But here's what makes this conversation truly compelling: Torres isn't just preparing students for exams; he's equipping them for a future where AI will be as common as smartphones are today. As we explore the ethical tightropes and practical challenges of this approach, one thing becomes crystal clear: the future of education isn't about fighting AI - it's about learning to harness its power while keeping our human wisdom firmly in the driver's seat.</p><p>Takeaways:</p><ul><li> The integration of AI within higher education necessitates a nuanced understanding of its capabilities and limitations. </li><li> Carlos I. Torres emphasizes the importance of guiding students on effective AI usage to enhance their learning experience. 
</li><li> Engaging students with AI prompts fosters critical thinking and deeper engagement in research assignments. </li><li> The assessment of student work should encompass both the final product and the process of interacting with AI tools. </li><li> Ethical considerations surrounding AI usage in academia are paramount, necessitating discussions around transparency and integrity. </li><li> The future workforce must be equipped with skills to supervise AI agents, ensuring their outputs are trustworthy and effective. </li></ul><br/><p>Companies mentioned in this episode:</p><ul><li> Washington State University </li><li> Baylor University </li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">f32da9b9-8c5c-42b1-94ec-a96e32d03e00</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Tue, 13 May 2025 05:00:00 -0400</pubDate><enclosure url="https://episodes.captivate.fm/episode/f32da9b9-8c5c-42b1-94ec-a96e32d03e00.mp3" length="41396897" type="audio/mpeg"/><itunes:duration>43:07</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>21</itunes:episode><podcast:episode>21</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/d82d7d16-27f5-48f4-bc06-544c05430c92/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/d82d7d16-27f5-48f4-bc06-544c05430c92/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/d82d7d16-27f5-48f4-bc06-544c05430c92/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-81531341-6d3a-4849-89d0-ac6460a0ca02.json" type="application/json+chapters"/></item><item><title>Of Syllabi, Spells, and Structured Prompts: AI for Fall Teaching</title><itunes:title>Of Syllabi, Spells, and Structured Prompts: AI for Fall Teaching</itunes:title><description><![CDATA[<p>This podcast episode elucidates the necessity for higher education professionals to cultivate a comprehensive understanding of generative artificial intelligence (AI) and its implications within the academic sphere. We, Craig Van Slyke and Robert E. Crossler, alongside our esteemed co-author France Belanger, delve into practical anecdotes regarding the integration of AI tools, such as ChatGPT, into pedagogical practices. Through illustrative narratives, we highlight both the advantages and limitations of AI, emphasizing the importance of expertise in ensuring accurate and reliable outcomes when employing these technologies. Furthermore, we discuss actionable strategies for faculty members to prepare for the upcoming academic term, advocating for the enhancement of syllabi and the generation of active learning exercises. Ultimately, we reinforce the imperative for educators to embrace AI, not merely as a technological advancement, but as a vital component of modern educational methodologies.</p><p>Takeaways:</p><ul><li> Incorporating generative AI into educational practices necessitates an understanding of its limitations and capabilities. </li><li> Faculty should actively engage with AI tools to enhance their teaching methodologies and improve student learning outcomes. 
</li><li> Effective use of AI can streamline the process of creating educational materials, such as syllabi and assessments, thereby saving time. </li><li> AI's role in generating content must be accompanied by critical evaluation to ensure accuracy and relevance in educational contexts. </li></ul><br/><p>Links referenced in this episode:</p><ul><li><a href="https://aigoestocollege.com" rel="noopener noreferrer" target="_blank">aigoestocollege.com</a></li><li><a href="mailto:craig@aigoestocollege.com" rel="noopener noreferrer" target="_blank">craig@aigoestocollege.com</a></li><li><a href="mailto:rob@aigoestocollege.com" rel="noopener noreferrer" target="_blank">rob@aigoestocollege.com</a></li><li><a href="https://WashingtonStateUniversity.com" rel="noopener noreferrer" target="_blank">WashingtonStateUniversity.com</a></li></ul><br/><p>In an enlightening exploration of generative AI's role in educational environments, the podcast episode scrutinizes the intricate balance between technological assistance and the necessity for human oversight. The discussion is anchored by a personal narrative involving a Dungeons and Dragons gaming session that serves as both a metaphor and a case study for the broader implications of AI in education. As the hosts recount their experiences, they navigate the myriad challenges and advantages that AI presents, particularly in terms of efficiency and creativity. The episode emphasizes the essential role of educators in critically evaluating and refining the outputs generated by AI systems, thus ensuring that the integrity of educational content is preserved. Furthermore, the hosts advocate for a proactive approach to embracing AI technologies, encouraging educators to experiment and adapt rather than remain mired in traditional methodologies. Ultimately, the conversation serves as a clarion call for educational professionals to engage with AI thoughtfully, fostering not only personal growth but also the evolution of pedagogical practices in an era defined by rapid technological advancement.</p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>This podcast episode elucidates the necessity for higher education professionals to cultivate a comprehensive understanding of generative artificial intelligence (AI) and its implications within the academic sphere. We, Craig Van Slyke and Robert E. Crossler, alongside our esteemed co-author France Belanger, delve into practical anecdotes regarding the integration of AI tools, such as ChatGPT, into pedagogical practices. Through illustrative narratives, we highlight both the advantages and limitations of AI, emphasizing the importance of expertise in ensuring accurate and reliable outcomes when employing these technologies. Furthermore, we discuss actionable strategies for faculty members to prepare for the upcoming academic term, advocating for the enhancement of syllabi and the generation of active learning exercises. Ultimately, we reinforce the imperative for educators to embrace AI, not merely as a technological advancement, but as a vital component of modern educational methodologies.</p><p>Takeaways:</p><ul><li> Incorporating generative AI into educational practices necessitates an understanding of its limitations and capabilities. </li><li> Faculty should actively engage with AI tools to enhance their teaching methodologies and improve student learning outcomes. 
</li><li> Effective use of AI can streamline the process of creating educational materials, such as syllabi and assessments, thereby saving time. </li><li> AI's role in generating content must be accompanied by critical evaluation to ensure accuracy and relevance in educational contexts. </li></ul><br/><p>Links referenced in this episode:</p><ul><li><a href="https://aigoestocollege.com" rel="noopener noreferrer" target="_blank">aigoestocollege.com</a></li><li><a href="mailto:craig@aigoestocollege.com" rel="noopener noreferrer" target="_blank">craig@aigoestocollege.com</a></li><li><a href="mailto:rob@aigoestocollege.com" rel="noopener noreferrer" target="_blank">rob@aigoestocollege.com</a></li><li><a href="https://WashingtonStateUniversity.com" rel="noopener noreferrer" target="_blank">WashingtonStateUniversity.com</a></li></ul><br/><p>In an enlightening exploration of generative AI's role in educational environments, the podcast episode scrutinizes the intricate balance between technological assistance and the necessity for human oversight. The discussion is anchored by a personal narrative involving a Dungeons and Dragons gaming session that serves as both a metaphor and a case study for the broader implications of AI in education. As the hosts recount their experiences, they navigate the myriad challenges and advantages that AI presents, particularly in terms of efficiency and creativity. The episode emphasizes the essential role of educators in critically evaluating and refining the outputs generated by AI systems, thus ensuring that the integrity of educational content is preserved. Furthermore, the hosts advocate for a proactive approach to embracing AI technologies, encouraging educators to experiment and adapt rather than remain mired in traditional methodologies. 
Ultimately, the conversation serves as a clarion call for educational professionals to engage with AI thoughtfully, fostering not only personal growth but also the evolution of pedagogical practices in an era defined by rapid technological advancement.</p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">52933784-9ff8-4e8f-8a2f-6da5146c6a76</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Tue, 22 Apr 2025 04:22:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/f25221dd-73bc-4ba8-8270-9ad6b1ca8cb6/AIGTC-Ep-20-Full-Edited.mp3" length="31128896" type="audio/mpeg"/><itunes:duration>32:26</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>20</itunes:episode><podcast:episode>20</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/18068dde-d89c-4939-a62b-74ab73765393/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/18068dde-d89c-4939-a62b-74ab73765393/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/18068dde-d89c-4939-a62b-74ab73765393/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-f25221dd-73bc-4ba8-8270-9ad6b1ca8cb6.json" type="application/json+chapters"/></item><item><title>AI&apos;s Impact on Critical Thinking, the Talent Pipeline, and Academic Research: Implications for Higher Education</title><itunes:title>AI&apos;s Impact on Critical Thinking, the Talent Pipeline, and Academic Research: Implications for Higher Education</itunes:title><description><![CDATA[<p>In a timely discussion, Craig Van Slyke and Robert E. Crossler discuss the latest advancements in generative artificial intelligence, with a particular focus on the unveiling of Claude Sonnet 3.7. This development has prompted a wave of excitement and speculation regarding its implications for the future of programming. The hosts articulate their observations on how this model could revolutionize the way coding is approached, potentially rendering traditional entry-level programming roles obsolete while enhancing the efficiency of seasoned professionals. This raises critical questions about the evolving nature of job markets and the skills required in the face of such technological advancements.</p><p>As the dialogue unfolds, the hosts transition to a discussion on the ethical and educational ramifications of integrating AI into academic environments. They express concerns regarding the diminishing emphasis on critical thinking skills, particularly among students who may rely heavily on AI-generated outputs. Van Slyke and Crossler emphasize the necessity for educators to not only familiarize themselves with these technologies but also to instill a sense of skepticism and analytical rigor in their students. This approach is vital for ensuring that future professionals are equipped to discern and evaluate the information generated by AI, fostering a culture of informed decision-making and innovation. 
Van Slyke and Crossler offer some interesting ways in which AI can be used to help students improve their critical thinking skills.</p><p>The hosts also discuss how new AI tools, such as OpenAI's ChatGPT Deep Research may reshape the way in which academic research is done, for faculty and students. Higher ed professionals may need to rethink the very purpose of learning activities such as research papers.</p><p>The episode concludes with a call to action for higher education institutions, urging them to rethink their pedagogical strategies in light of the rapid proliferation of AI technologies. By fostering a collaborative and adaptive educational environment, educators can empower students to harness the capabilities of generative AI responsibly, thereby paving the way for a future where technology and critical thinking coexist in ways that enhance critical thinking skills.</p><p>Takeaways:</p><ul><li> The recent advancements in generative AI, particularly Claude Sonnet 3.7, have significant implications for coding practices across various disciplines. </li><li> There exists a growing concern amongst educators regarding the potential displacement of entry-level programming jobs due to the capabilities of generative AI technologies. </li><li> It is essential for higher education institutions to adapt their pedagogical approaches to effectively integrate generative AI into the curriculum for enhanced critical thinking. </li><li> Generative AI tools can serve as valuable resources for academic research, but they must be used carefully to avoid over-reliance and ensure the integrity of scholarly work. </li><li> The conversation around generative AI's impact on critical thinking skills reveals a dual potential for either degradation or enhancement based on how these tools are utilized. </li><li> Educators need to cultivate a deeper understanding of generative AI technologies to guide students in their effective and ethical use in academic contexts. </li></ul><br/><p>Companies mentioned in this episode:</p><ul><li> Anthropic </li><li> OpenAI </li><li> Microsoft </li><li> Peapod </li><li> Doordash </li><li> Uber Eats </li><li> Walmart </li><li> Chewy </li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>In a timely discussion, Craig Van Slyke and Robert E. Crossler discuss the latest advancements in generative artificial intelligence, with a particular focus on the unveiling of Claude Sonnet 3.7. This development has prompted a wave of excitement and speculation regarding its implications for the future of programming. The hosts articulate their observations on how this model could revolutionize the way coding is approached, potentially rendering traditional entry-level programming roles obsolete while enhancing the efficiency of seasoned professionals. This raises critical questions about the evolving nature of job markets and the skills required in the face of such technological advancements.</p><p>As the dialogue unfolds, the hosts transition to a discussion on the ethical and educational ramifications of integrating AI into academic environments. They express concerns regarding the diminishing emphasis on critical thinking skills, particularly among students who may rely heavily on AI-generated outputs. 
Van Slyke and Crossler emphasize the necessity for educators to not only familiarize themselves with these technologies but also to instill a sense of skepticism and analytical rigor in their students. This approach is vital for ensuring that future professionals are equipped to discern and evaluate the information generated by AI, fostering a culture of informed decision-making and innovation. Van Slyke and Crossler offer some interesting ways in which AI can be used to help students improve their critical thinking skills.</p><p>The hosts also discuss how new AI tools, such as OpenAI's ChatGPT Deep Research may reshape the way in which academic research is done, for faculty and students. Higher ed professionals may need to rethink the very purpose of learning activities such as research papers.</p><p>The episode concludes with a call to action for higher education institutions, urging them to rethink their pedagogical strategies in light of the rapid proliferation of AI technologies. By fostering a collaborative and adaptive educational environment, educators can empower students to harness the capabilities of generative AI responsibly, thereby paving the way for a future where technology and critical thinking coexist in ways that enhance critical thinking skills.</p><p>Takeaways:</p><ul><li> The recent advancements in generative AI, particularly Claude Sonnet 3.7, have significant implications for coding practices across various disciplines. </li><li> There exists a growing concern amongst educators regarding the potential displacement of entry-level programming jobs due to the capabilities of generative AI technologies. </li><li> It is essential for higher education institutions to adapt their pedagogical approaches to effectively integrate generative AI into the curriculum for enhanced critical thinking. </li><li> Generative AI tools can serve as valuable resources for academic research, but they must be used carefully to avoid over-reliance and ensure the integrity of scholarly work. </li><li> The conversation around generative AI's impact on critical thinking skills reveals a dual potential for either degradation or enhancement based on how these tools are utilized. </li><li> Educators need to cultivate a deeper understanding of generative AI technologies to guide students in their effective and ethical use in academic contexts. 
</li></ul><br/><p>Companies mentioned in this episode:</p><ul><li> Anthropic </li><li> OpenAI </li><li> Microsoft </li><li> Peapod </li><li> Doordash </li><li> Uber Eats </li><li> Walmart </li><li> Chewy </li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">005e7056-28e8-48cf-8399-9da0e0263d31</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Tue, 04 Mar 2025 04:30:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/59464753-d9a1-4563-a184-f13582f6c1f4/AIGTC-Ep-19-Full-Edited.mp3" length="26219928" type="audio/mpeg"/><itunes:duration>36:25</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>19</itunes:episode><podcast:episode>19</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/968c7f40-9c49-433e-b130-08a0fe0957b6/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/968c7f40-9c49-433e-b130-08a0fe0957b6/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/968c7f40-9c49-433e-b130-08a0fe0957b6/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-59464753-d9a1-4563-a184-f13582f6c1f4.json" type="application/json+chapters"/></item><item><title>Writing, AI, and the Transactional Trap: Rethinking Learning in Higher Ed</title><itunes:title>Writing, AI, and the Transactional Trap: Rethinking Learning in Higher Ed</itunes:title><description><![CDATA[<p>In this wide-ranging discussion, Craig Van Slyke and Robert E. Crossler explore recent AI developments and tackle the fundamental challenges facing higher education in an AI-enhanced world. They begin by examining GPT Tasks, highlighting practical applications like automated news summaries and scheduled tasks, while sharing personal experiments that demonstrate the importance of playful exploration with new AI tools.</p><p>The conversation then turns to Gemini's new fact-checking features, with important cautions about source verification and the need to balance convenience with critical evaluation of AI-generated content.</p><p>The hosts have an engaging discussion about the challenge of "transactional education" - where learning has become a points-for-grades exchange - and explore alternative approaches like mastery-based learning and European assessment models. 
They discuss concrete strategies for moving beyond traditional grading schemes, including reducing assignment volume and focusing on process over outcomes.</p><p>The episode concludes with an announcement of an upcoming repository for AI-enhanced teaching activities and a call for educators across disciplines to share their innovative approaches.</p><h2>Key Takeaways:</h2><ul><li>GPT Tasks enables automated, scheduled AI interactions - from news summaries to daily content delivery</li><li>Gemini's new fact-checking feature provides source verification but requires careful evaluation of source credibility</li><li>Deepseek, the new AI model, is worth checking out but be aware of privacy concerns</li><li>The challenge of "transactional education" requires rethinking traditional assessment methods</li><li>Practical alternatives to points-based grading include focusing on mastery and reducing assignment volume</li><li>Faculty across disciplines are invited to contribute to a new repository of AI-enhanced teaching activities</li></ul><br/><p><span class="ql-size-large">Outline</span></p><p><strong>GPT Tasks and Functionalities</strong></p><ul><li>AI agents conducting various tasks.</li><li>Use case: AI travel planner and email agent.</li><li>Development of GPT tasks to run scheduled prompts.</li><li>Example uses:</li><li>Receiving AI and higher ed news updates.</li><li>Daily dad joke feature.</li></ul><br/><p><strong>Exploration of New AI Tools</strong></p><ul><li>Importance of experimenting with AI tools for learning.</li><li>Application play and psychological basis for learning technology.</li><li>Encouragement to try new tools without overcomplicating the process.</li></ul><br/><p><strong>Comparison of Search Tools</strong></p><ul><li>Comparing ChatGPT with Google alerts.</li><li>Tailoring information relevance and accuracy.</li><li>Importance of validating AI-generated information.</li></ul><br/><p><strong>Privacy and Availability of AI Tools</strong></p><ul><li>Availability limited to certain user levels and regions.</li><li>Variability in tool features across different platforms.</li></ul><br/><p><strong>DeepSEEK: A New AI Model</strong></p><ul><li>Introduction and capability of DeepSEEK.</li><li>Cost efficiency and openness as an open-source model.</li><li>Privacy concerns related to data sharing and Chinese government access.</li><li>Impact on NVIDIA stock and benchmarks comparison.</li></ul><br/><p><strong>Open Source and Computational Needs</strong></p><ul><li>Role of open source in future AI model development.</li><li>Computational requirements and challenges with local running versions.</li></ul><br/><p><strong>Privacy and Intellectual Property Concerns</strong></p><ul><li>Distinction between privacy and intellectual property.</li><li>Concerns about research data compliance and institutional rules.</li></ul><br/><p><strong>Writing with AI Tools</strong></p><ul><li>Differentiating writing and editing.</li><li>Using AI to enhance rather than replace human creativity.</li><li>Increase in writing quality in published work.</li></ul><br/><p><strong>Transactional Education Model</strong></p><ul><li>Challenges posed by transactional nature of education.</li><li>Importance of mastery and process focus in learning.</li><li>Discussions on final exams vs. 
continuous assessment.</li></ul><br/><p><strong>Proposed Repository for Active Learning Activities</strong></p><ul><li>Building a shared resource for educators.</li><li>Encouragement for community contribution and collaboration.</li><li>Long-term vision for enhancing educational engagement with AI.</li></ul><br/><p><strong>Conclusion and Call for Interaction</strong></p><ul><li>Inviting listeners to contribute ideas and share practices.</li><li>Encouragement for community-driven improvement in AI-integrated education.</li></ul><br/><h2>Time Stamps:</h2><p>[00:00] Introduction and GPT Tasks discussion</p><p>[15:45] Gemini's new features and source verification</p><p>[25:20] Writing process and AI tools</p><p>[35:10] Transactional education challenges</p><p>[45:00] Announcement of teaching activity repository</p><h2>Links</h2><ul><li>ChatGPT: <a href="https://chat.openai.com/" rel="noopener noreferrer" target="_blank">https://chat.openai.com/</a> </li><li>Gemini: <a href="https://gemini.google.com/" rel="noopener noreferrer" target="_blank">https://gemini.google.com/</a> </li><li>DeepSeek: <a href="https://deepseek.ai/" rel="noopener noreferrer" target="_blank">https://deepseek.ai/</a> </li><li>Anthropic: <a href="https://www.anthropic.com/" rel="noopener noreferrer" target="_blank">https://www.anthropic.com/</a> </li><li>Claude: <a href="https://www.anthropic.com/claude" rel="noopener noreferrer" target="_blank">https://www.anthropic.com/claude</a> </li><li>Llama: <a href="https://ai.facebook.com/blog/large-language-model-llama-meta-ai/" rel="noopener noreferrer" target="_blank">https://ai.facebook.com/blog/large-language-model-llama-meta-ai/</a> (This links to a blog post about Llama, as there isn't a dedicated website for the model itself.) </li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>In this wide-ranging discussion, Craig Van Slyke and Robert E. Crossler explore recent AI developments and tackle the fundamental challenges facing higher education in an AI-enhanced world. They begin by examining GPT Tasks, highlighting practical applications like automated news summaries and scheduled tasks, while sharing personal experiments that demonstrate the importance of playful exploration with new AI tools.</p><p>The conversation then turns to Gemini's new fact-checking features, with important cautions about source verification and the need to balance convenience with critical evaluation of AI-generated content.</p><p>The hosts have an engaging discussion about the challenge of "transactional education" - where learning has become a points-for-grades exchange - and explore alternative approaches like mastery-based learning and European assessment models. 
They discuss concrete strategies for moving beyond traditional grading schemes, including reducing assignment volume and focusing on process over outcomes.</p><p>The episode concludes with an announcement of an upcoming repository for AI-enhanced teaching activities and a call for educators across disciplines to share their innovative approaches.</p><h2>Key Takeaways:</h2><ul><li>GPT Tasks enables automated, scheduled AI interactions - from news summaries to daily content delivery</li><li>Gemini's new fact-checking feature provides source verification but requires careful evaluation of source credibility</li><li>Deepseek, the new AI model, is worth checking out but be aware of privacy concerns</li><li>The challenge of "transactional education" requires rethinking traditional assessment methods</li><li>Practical alternatives to points-based grading include focusing on mastery and reducing assignment volume</li><li>Faculty across disciplines are invited to contribute to a new repository of AI-enhanced teaching activities</li></ul><br/><p><span class="ql-size-large">Outline</span></p><p><strong>GPT Tasks and Functionalities</strong></p><ul><li>AI agents conducting various tasks.</li><li>Use case: AI travel planner and email agent.</li><li>Development of GPT tasks to run scheduled prompts.</li><li>Example uses:</li><li>Receiving AI and higher ed news updates.</li><li>Daily dad joke feature.</li></ul><br/><p><strong>Exploration of New AI Tools</strong></p><ul><li>Importance of experimenting with AI tools for learning.</li><li>Application play and psychological basis for learning technology.</li><li>Encouragement to try new tools without overcomplicating the process.</li></ul><br/><p><strong>Comparison of Search Tools</strong></p><ul><li>Comparing ChatGPT with Google alerts.</li><li>Tailoring information relevance and accuracy.</li><li>Importance of validating AI-generated information.</li></ul><br/><p><strong>Privacy and Availability of AI Tools</strong></p><ul><li>Availability limited to certain user levels and regions.</li><li>Variability in tool features across different platforms.</li></ul><br/><p><strong>DeepSEEK: A New AI Model</strong></p><ul><li>Introduction and capability of DeepSEEK.</li><li>Cost efficiency and openness as an open-source model.</li><li>Privacy concerns related to data sharing and Chinese government access.</li><li>Impact on NVIDIA stock and benchmarks comparison.</li></ul><br/><p><strong>Open Source and Computational Needs</strong></p><ul><li>Role of open source in future AI model development.</li><li>Computational requirements and challenges with local running versions.</li></ul><br/><p><strong>Privacy and Intellectual Property Concerns</strong></p><ul><li>Distinction between privacy and intellectual property.</li><li>Concerns about research data compliance and institutional rules.</li></ul><br/><p><strong>Writing with AI Tools</strong></p><ul><li>Differentiating writing and editing.</li><li>Using AI to enhance rather than replace human creativity.</li><li>Increase in writing quality in published work.</li></ul><br/><p><strong>Transactional Education Model</strong></p><ul><li>Challenges posed by transactional nature of education.</li><li>Importance of mastery and process focus in learning.</li><li>Discussions on final exams vs. 
continuous assessment.</li></ul><br/><p><strong>Proposed Repository for Active Learning Activities</strong></p><ul><li>Building a shared resource for educators.</li><li>Encouragement for community contribution and collaboration.</li><li>Long-term vision for enhancing educational engagement with AI.</li></ul><br/><p><strong>Conclusion and Call for Interaction</strong></p><ul><li>Inviting listeners to contribute ideas and share practices.</li><li>Encouragement for community-driven improvement in AI-integrated education.</li></ul><br/><h2>Time Stamps:</h2><p>[00:00] Introduction and GPT Tasks discussion</p><p>[15:45] Gemini's new features and source verification</p><p>[25:20] Writing process and AI tools</p><p>[35:10] Transactional education challenges</p><p>[45:00] Announcement of teaching activity repository</p><h2>Links</h2><ul><li>ChatGPT: <a href="https://chat.openai.com/" rel="noopener noreferrer" target="_blank">https://chat.openai.com/</a> </li><li>Gemini: <a href="https://gemini.google.com/" rel="noopener noreferrer" target="_blank">https://gemini.google.com/</a> </li><li>DeepSeek: <a href="https://deepseek.ai/" rel="noopener noreferrer" target="_blank">https://deepseek.ai/</a> </li><li>Anthropic: <a href="https://www.anthropic.com/" rel="noopener noreferrer" target="_blank">https://www.anthropic.com/</a> </li><li>Claude: <a href="https://www.anthropic.com/claude" rel="noopener noreferrer" target="_blank">https://www.anthropic.com/claude</a> </li><li>Llama: <a href="https://ai.facebook.com/blog/large-language-model-llama-meta-ai/" rel="noopener noreferrer" target="_blank">https://ai.facebook.com/blog/large-language-model-llama-meta-ai/</a> (This links to a blog post about Llama, as there isn't a dedicated website for the model itself.) </li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">c29e79ed-ffef-44fe-880d-3f17201c7248</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Tue, 04 Feb 2025 05:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/797eea69-1d25-468f-b39c-707cc753fc6e/AIGTC-Ep-18.mp3" length="49614801" type="audio/mpeg"/><itunes:duration>51:41</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>18</itunes:episode><podcast:episode>18</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/97570391-44fd-4e03-902e-61866ea28665/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/97570391-44fd-4e03-902e-61866ea28665/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/97570391-44fd-4e03-902e-61866ea28665/index.html" type="text/html"/></item><item><title>Is AI the Future of Learning or the Death of Education?</title><itunes:title>Is AI the Future of Learning or the Death of Education?</itunes:title><description><![CDATA[<p>AI hallucinations, or confabulations, can actually foster scientific innovation by generating a wealth of ideas, even if many of them are incorrect. Craig Van Slyke and Robert E. 
Crossler explore how AI's ability to rapidly process information allows researchers to brainstorm and ideate more effectively, ultimately leading to significant breakthroughs in various fields. They discuss the need for a shift in how we train scientists, emphasizing critical thinking and the ability to assess AI-generated content. The conversation also touches on the potential risks of AI in education, including the challenge of maintaining student engagement and the fear of students using AI to cheat. As they dive into the latest tools like Google's Gemini and NotebookLM, the hosts highlight the importance of adapting teaching methods to leverage AI's capabilities while ensuring students develop essential skills to thrive in an AI-augmented world.</p><p>The latest podcast episode features an engaging discussion between Craig Van Slyke and Robert E. Crossler about the impact of AI on innovation and education. They dive into the concept of AI hallucinations and confabulations, noting that while these outputs may be inaccurate, they can spark creative thinking and lead to valuable scientific breakthroughs. Crossler emphasizes that trained scientists can sift through these AI-generated ideas, helping to separate the wheat from the chaff. This perspective reframes the way we view AI's role in generating new knowledge and highlights the importance of human expertise in guiding this process. </p><p>As the dialogue progresses, the hosts address the implications of AI on educational practices. They express concern about the reliance on self-directed learning, noting that many students struggle to engage deeply without structured support. Van Slyke and Crossler advocate for a reimagined educational framework that incorporates AI tools, encouraging educators to foster critical thinking and analytical skills. By challenging students to interact with AI outputs actively, such as critiquing AI-generated reports or creating quizzes based on their work, instructors can ensure that learning is meaningful and substantive. </p><p>The episode also explores practical applications of AI tools like Google’s Gemini and NotebookLM for enhancing educational experiences. They discuss how these tools can facilitate research and content creation, making it easier for students to engage with complex topics. However, they also acknowledge the potential for misuse, such as cheating. The hosts argue that by redesigning assignments to focus on critical engagement with AI-generated content, educators can mitigate these risks while enriching the learning process. In summary, the episode provides a thought-provoking examination of how AI can both challenge and enhance the educational landscape, urging educators to adapt their approaches to prepare students for a future where AI is an integral part of knowledge acquisition.</p><p>Takeaways:</p><ul><li> AI hallucinations, referred to as confabulations, can stimulate scientific innovation by generating diverse ideas. </li><li> The rapid consumption of information by AI accelerates connections that human scientists might miss. </li><li> Future scientists must adapt their training to critically assess AI-generated confabulations for practical use. </li><li> Education needs to evolve to help students engage with AI as a tool for learning. </li><li> Using AI tools in the classroom can enhance critical thinking skills and analytical abilities. </li><li> Collaboration among educators is essential to share effective strategies for utilizing AI technologies. 
</li></ul><br/><p>Links</p><p>	1.	New York Times article: <a href="https://www.nytimes.com/2024/12/23/science/ai-hallucinations-science.html" rel="noopener noreferrer" target="_blank">https://www.nytimes.com/2024/12/23/science/ai-hallucinations-science.html</a></p><p>	2.	Poe.com voice generators: <a href="https://aigoestocollege.substack.com/p/an-experiment-with-poecoms-new-speech?r=2eqpnj" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/p/an-experiment-with-poecoms-new-speech?r=2eqpnj</a></p><p>	3.	Gemini Deep Research: <a href="https://aigoestocollege.substack.com/p/gemini-deep-research-a-true-game?r=2eqpnj" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/p/gemini-deep-research-a-true-game?r=2eqpnj</a></p><p>	4.	Notebook LM and audio overviews: <a href="https://open.substack.com/pub/aigoestocollege/p/notebook-lm-joining-the-audio-interview" rel="noopener noreferrer" target="_blank">https://open.substack.com/pub/aigoestocollege/p/notebook-lm-joining-the-audio-interview</a></p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>AI hallucinations, or confabulations, can actually foster scientific innovation by generating a wealth of ideas, even if many of them are incorrect. Craig Van Slyke and Robert E. Crossler explore how AI's ability to rapidly process information allows researchers to brainstorm and ideate more effectively, ultimately leading to significant breakthroughs in various fields. They discuss the need for a shift in how we train scientists, emphasizing critical thinking and the ability to assess AI-generated content. The conversation also touches on the potential risks of AI in education, including the challenge of maintaining student engagement and the fear of students using AI to cheat. As they dive into the latest tools like Google's Gemini and NotebookLM, the hosts highlight the importance of adapting teaching methods to leverage AI's capabilities while ensuring students develop essential skills to thrive in an AI-augmented world.</p><p>The latest podcast episode features an engaging discussion between Craig Van Slyke and Robert E. Crossler about the impact of AI on innovation and education. They dive into the concept of AI hallucinations and confabulations, noting that while these outputs may be inaccurate, they can spark creative thinking and lead to valuable scientific breakthroughs. Crossler emphasizes that trained scientists can sift through these AI-generated ideas, helping to separate the wheat from the chaff. This perspective reframes the way we view AI's role in generating new knowledge and highlights the importance of human expertise in guiding this process. </p><p>As the dialogue progresses, the hosts address the implications of AI on educational practices. They express concern about the reliance on self-directed learning, noting that many students struggle to engage deeply without structured support. Van Slyke and Crossler advocate for a reimagined educational framework that incorporates AI tools, encouraging educators to foster critical thinking and analytical skills. By challenging students to interact with AI outputs actively, such as critiquing AI-generated reports or creating quizzes based on their work, instructors can ensure that learning is meaningful and substantive. </p><p>The episode also explores practical applications of AI tools like Google’s Gemini and NotebookLM for enhancing educational experiences. 
They discuss how these tools can facilitate research and content creation, making it easier for students to engage with complex topics. However, they also acknowledge the potential for misuse, such as cheating. The hosts argue that by redesigning assignments to focus on critical engagement with AI-generated content, educators can mitigate these risks while enriching the learning process. In summary, the episode provides a thought-provoking examination of how AI can both challenge and enhance the educational landscape, urging educators to adapt their approaches to prepare students for a future where AI is an integral part of knowledge acquisition.</p><p>Takeaways:</p><ul><li> AI hallucinations, referred to as confabulations, can stimulate scientific innovation by generating diverse ideas. </li><li> The rapid consumption of information by AI accelerates connections that human scientists might miss. </li><li> Future scientists must adapt their training to critically assess AI-generated confabulations for practical use. </li><li> Education needs to evolve to help students engage with AI as a tool for learning. </li><li> Using AI tools in the classroom can enhance critical thinking skills and analytical abilities. </li><li> Collaboration among educators is essential to share effective strategies for utilizing AI technologies. </li></ul><br/><p>Links</p><p>	1.	New York Times article: <a href="https://www.nytimes.com/2024/12/23/science/ai-hallucinations-science.html" rel="noopener noreferrer" target="_blank">https://www.nytimes.com/2024/12/23/science/ai-hallucinations-science.html</a></p><p>	2.	Poe.com voice generators: <a href="https://aigoestocollege.substack.com/p/an-experiment-with-poecoms-new-speech?r=2eqpnj" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/p/an-experiment-with-poecoms-new-speech?r=2eqpnj</a></p><p>	3.	Gemini Deep Research: <a href="https://aigoestocollege.substack.com/p/gemini-deep-research-a-true-game?r=2eqpnj" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/p/gemini-deep-research-a-true-game?r=2eqpnj</a></p><p>	4.	
Notebook LM and audio overviews: <a href="https://open.substack.com/pub/aigoestocollege/p/notebook-lm-joining-the-audio-interview" rel="noopener noreferrer" target="_blank">https://open.substack.com/pub/aigoestocollege/p/notebook-lm-joining-the-audio-interview</a></p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">49be3b9a-cfa9-4dfb-a0dc-9926543d52f5</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Mon, 06 Jan 2025 05:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b3e4f0e5-d329-419e-82fe-bbb30f57c8a5/AIGTC-Ep-17-FINAL.mp3" length="39750136" type="audio/mpeg"/><itunes:duration>41:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>17</itunes:episode><podcast:episode>17</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/a4c09fe7-1f3f-4a8d-ba75-7ba00940178d/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/a4c09fe7-1f3f-4a8d-ba75-7ba00940178d/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/a4c09fe7-1f3f-4a8d-ba75-7ba00940178d/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-b3e4f0e5-d329-419e-82fe-bbb30f57c8a5.json" type="application/json+chapters"/></item><item><title>Navigating the AI Landscape: Essential Tools for Higher Education Professionals</title><itunes:title>Navigating the AI Landscape: Essential Tools for Higher Education Professionals</itunes:title><description><![CDATA[<p>This episode of AI Goes to College discusses the practical applications of generative AI tools in academic research, focusing on how they can enhance the research process for higher education professionals. Hosts Craig Van Slyke and Robert E. Crossler discuss three key tools: Connected Papers, Research Rabbit, and Scite_, highlighting their functionalities and the importance of transparency in their use. They emphasize the need for human oversight in research, cautioning against over-reliance on AI-generated content, as it may lack the critical thought necessary for rigorous academic work. The conversation also touches on the emerging tool NotebookLM, which allows users to query research articles and create study guides, while raising ethical concerns about data usage and bias in AI outputs. Ultimately, Craig and Rob encourage listeners to explore these tools thoughtfully and integrate them into their research practices while maintaining a critical perspective on the information they generate.</p><p>---</p><p>The integration of generative AI tools into academic research is an evolving topic that Craig and Rob approach with both enthusiasm and caution. Their conversation centers around a recent Brown Bag series at Washington State University, where Rob's doctoral students showcased innovative AI tools designed to assist in academic research. The discussion focuses on three tools in particular: Connected Papers, Research Rabbit, and Scite_. Connected Papers stands out for its transparency, utilizing data from Semantic Scholar to create a visual map of related research, which aids users in finding relevant literature.
This tool allows researchers to gauge the interconnectedness of papers and prioritize their reading based on citation frequency and relevance. </p><p>In contrast, Research Rabbit's lack of clarity regarding its data sources and the meaning of its visual representations raises significant concerns about its reliability. Rob's critical assessment of Research Rabbit serves as a cautionary tale for researchers who might be tempted to rely solely on AI for literature discovery. He argues that while tools like Research Rabbit can provide useful starting points, they often fall short of the rigorous standards required for academic research. The hosts also discuss Scite, which generates literature reviews based on user input. Although Scite can save time for researchers, both Craig and Rob emphasize the necessity of critical engagement with the content, warning against over-reliance on AI-generated summaries that may lack depth and nuance. </p><p>Throughout the episode, the overarching message is clear: while generative AI can enhance research efficiency, it cannot replace the need for critical thinking and human discernment in the research process. Craig and Rob encourage their listeners to embrace these tools as aids rather than crutches, fostering a mindset of skepticism and inquiry. They underscore the importance of maintaining academic integrity in the face of rapidly advancing technology, reminding researchers that their insights and interpretations are invaluable in shaping the future of scholarship. By the end of the episode, listeners are equipped with practical advice on how to navigate the intersection of AI and research, ensuring that they harness the power of these tools responsibly and effectively.</p><p>Takeaways:</p><ul><li> Generative AI tools can help streamline academic research but should not replace critical thinking. </li><li> Connected Papers offers transparency in sourcing research papers, unlike some other tools. </li><li> Students must remain skeptical of AI outputs, ensuring they apply critical thought in research. </li><li> Tools like NotebookLM can assist in summarizing and querying research articles effectively. </li><li> Using AI can eliminate busy work, allowing researchers to focus on adding unique insights. </li><li> Educators need to guide students on how to leverage AI tools responsibly and ethically.
</li></ul><br/><p>Link to Craig's Notebook LM experiment description: </p><ul><li><a href="https://aigoestocollege.substack.com/p/is-notebook-lm-biased" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/p/is-notebook-lm-biased</a></li></ul><br/><p>Links referenced in this episode: </p><ul><li>Notebook LM: <a href="https://notebooklm.google.com" rel="noopener noreferrer" target="_blank">notebooklm.google.com</a></li><li>AI Goes to College: <a href="https://aigoestocollege.com" rel="noopener noreferrer" target="_blank">aigoestocollege.com</a></li><li>Google Learn About: <a href="https://learning.google.com" rel="noopener noreferrer" target="_blank">learning.google.com</a></li><li>Connected Papers: <a href="https://www.connectedpapers.com/" rel="noopener noreferrer" target="_blank">https://www.connectedpapers.com/</a></li><li>Scite_: <a href="https://scite.ai/" rel="noopener noreferrer" target="_blank">https://scite.ai/</a></li><li>Research Rabbit: <a href="https://www.researchrabbit.ai/" rel="noopener noreferrer" target="_blank">https://www.researchrabbit.ai/</a></li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>This episode of AI Goes to College discusses the practical applications of generative AI tools in academic research, focusing on how they can enhance the research process for higher education professionals. Hosts Craig Van Slyke and Robert E. Crossler discuss three key tools: Connected Papers, Research Rabbit, and Scite_, highlighting their functionalities and the importance of transparency in their use. They emphasize the need for human oversight in research, cautioning against over-reliance on AI-generated content, as it may lack the critical thought necessary for rigorous academic work. The conversation also touches on the emerging tool NotebookLM, which allows users to query research articles and create study guides, while raising ethical concerns about data usage and bias in AI outputs. Ultimately, Craig and Rob encourage listeners to explore these tools thoughtfully and integrate them into their research practices while maintaining a critical perspective on the information they generate.</p><p>---</p><p>The integration of generative AI tools into academic research is an evolving topic that Craig and Rob approach with both enthusiasm and caution. Their conversation centers around a recent Brown Bag series at Washington State University, where Rob's doctoral students showcased innovative AI tools designed to assist in academic research. The discussion focuses on three tools in particular: Connected Papers, Research Rabbit, and Scite_. Connected Papers stands out for its transparency, utilizing data from Semantic Scholar to create a visual map of related research, which aids users in finding relevant literature. This tool allows researchers to gauge the interconnectedness of papers and prioritize their reading based on citation frequency and relevance. </p><p>In contrast, Research Rabbit's lack of clarity regarding its data sources and the meaning of its visual representations raises significant concerns about its reliability. Rob's critical assessment of Research Rabbit serves as a cautionary tale for researchers who might be tempted to rely solely on AI for literature discovery. He argues that while tools like Research Rabbit can provide useful starting points, they often fall short of the rigorous standards required for academic research.
The hosts also discuss Scite, which generates literature reviews based on user input. Although Scite can save time for researchers, both Craig and Rob emphasize the necessity of critical engagement with the content, warning against over-reliance on AI-generated summaries that may lack depth and nuance. </p><p>Throughout the episode, the overarching message is clear: while generative AI can enhance research efficiency, it cannot replace the need for critical thinking and human discernment in the research process. Craig and Rob encourage their listeners to embrace these tools as aids rather than crutches, fostering a mindset of skepticism and inquiry. They underscore the importance of maintaining academic integrity in the face of rapidly advancing technology, reminding researchers that their insights and interpretations are invaluable in shaping the future of scholarship. By the end of the episode, listeners are equipped with practical advice on how to navigate the intersection of AI and research, ensuring that they harness the power of these tools responsibly and effectively.</p><p>Takeaways:</p><ul><li> Generative AI tools can help streamline academic research but should not replace critical thinking. </li><li> Connected Papers offers transparency in sourcing research papers, unlike some other tools. </li><li> Students must remain skeptical of AI outputs, ensuring they apply critical thought in research. </li><li> Tools like NotebookLM can assist in summarizing and querying research articles effectively. </li><li> Using AI can eliminate busy work, allowing researchers to focus on adding unique insights. </li><li> Educators need to guide students on how to leverage AI tools responsibly and ethically. </li></ul><br/><p>Link to Craig's Notebook LM experiment description: </p><ul><li><a href="https://aigoestocollege.substack.com/p/is-notebook-lm-biased" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/p/is-notebook-lm-biased</a></li></ul><br/><p>Links referenced in this episode: </p><ul><li>Notebook LM: <a href="https://notebooklm.google.com" rel="noopener noreferrer" target="_blank">notebooklm.google.com</a></li><li>AI Goes to College: <a href="https://aigoestocollege.com" rel="noopener noreferrer" target="_blank">aigoestocollege.com</a></li><li>Google Learn About: <a href="https://learning.google.com" rel="noopener noreferrer" target="_blank">learning.google.com</a></li><li>Connected Papers: <a href="https://www.connectedpapers.com/" rel="noopener noreferrer" target="_blank">https://www.connectedpapers.com/</a></li><li>Scite_: <a href="https://scite.ai/" rel="noopener noreferrer" target="_blank">https://scite.ai/</a></li><li>Research Rabbit: <a href="https://www.researchrabbit.ai/" rel="noopener noreferrer" target="_blank">https://www.researchrabbit.ai/</a></li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">5e4192e4-e8f8-4733-b119-db1fd9fd9d87</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Mon, 02 Dec 2024 04:15:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/fcf26648-f6e7-44d8-9d12-3929defd5509/AIGTC-Ep-16-FINAL.mp3" length="28993189"
type="audio/mpeg"/><itunes:duration>40:16</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>16</itunes:episode><podcast:episode>16</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/5eaf3a9b-b475-4bd2-a522-266ac1b619f3/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/5eaf3a9b-b475-4bd2-a522-266ac1b619f3/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/5eaf3a9b-b475-4bd2-a522-266ac1b619f3/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-fcf26648-f6e7-44d8-9d12-3929defd5509.json" type="application/json+chapters"/></item><item><title>AI detectors, amazing slides with Beautiful AI and Gemini as an AI gateway</title><itunes:title>AI detectors, amazing slides with Beautiful AI and Gemini as an AI gateway</itunes:title><description><![CDATA[<p>Generative AI is reshaping the landscape of higher education, but the introduction of AI detectors has raised significant concerns among educators. Craig Van Slyke and Robert E. Crossler delve into the limitations and biases of these tools, arguing they can unfairly penalize innocent students, particularly non-native English speakers. With evidence from their own experiences, they assert that relying solely on AI detection tools is misguided and encourage educators to focus more on the quality of student work rather than the potential use of generative AI. The conversation also highlights the need for context and understanding in assignment design, suggesting that assignments should be tailored to class discussions to ensure students engage meaningfully with the material. As generative AI tools become increasingly integrated into everyday writing aids like Grammarly, the lines blur between acceptable assistance and academic dishonesty, making it crucial for educators to adapt their approaches to assessment and feedback. </p><p>In addition to discussing the challenges posed by AI detectors, the hosts introduce Beautiful AI, a powerful slide deck creation tool that leverages generative AI to produce visually stunning presentations. Craig shares his experiences with Beautiful AI, noting its ability to generate compelling slides that enhance the quality of presentations without requiring extensive editing. This tool represents a shift in how educators can approach presentations, allowing for a more design-focused experience that can save significant time. The episode encourages educators to explore such tools that can streamline their workflows and improve the quality of their output, ultimately promoting a more effective use of technology in educational settings. The discussion culminates with a call for educators to embrace generative AI not as a threat but as a resource that can enhance learning and teaching practices.</p><p>Takeaways:</p><ul><li> AI detectors are currently unreliable and can unfairly penalize innocent students. It's essential to critically evaluate their results rather than accept them blindly. </li><li> The biases in AI detectors often target non-native English speakers, leading to unfair accusations of cheating. </li><li> Generative AI tools can enhance the quality of writing and presentations, making them more visually appealing and easier to create.
</li><li> Beautiful AI can generate visually stunning slide decks quickly, saving time while maintaining quality. </li><li> Using tools like Gemini can significantly streamline the process of finding accurate information online, offering a more efficient alternative to traditional searches. </li><li> Educators should contextualize assignments to encourage originality and understanding, rather than relying solely on AI detection tools. </li></ul><br/><p>Links referenced in this episode:</p><ul><li><a href="https://gemini.google.com" rel="noopener noreferrer" target="_blank">gemini.google.com</a></li><li><a href="https://beautiful.ai" rel="noopener noreferrer" target="_blank">beautiful.ai</a></li></ul><br/><p><br></p><p>Companies mentioned in this episode:</p><ul><li> Grammarly </li><li> Shutterstock </li><li> Beautiful AI </li><li> Google </li><li> Wright State University </li><li> WSU </li><li> Gemini </li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>Generative AI is reshaping the landscape of higher education, but the introduction of AI detectors has raised significant concerns among educators. Craig Van Slyke and Robert E. Crossler delve into the limitations and biases of these tools, arguing they can unfairly penalize innocent students, particularly non-native English speakers. With evidence from their own experiences, they assert that relying solely on AI detection tools is misguided and encourage educators to focus more on the quality of student work rather than the potential use of generative AI. The conversation also highlights the need for context and understanding in assignment design, suggesting that assignments should be tailored to class discussions to ensure students engage meaningfully with the material. As generative AI tools become increasingly integrated into everyday writing aids like Grammarly, the lines blur between acceptable assistance and academic dishonesty, making it crucial for educators to adapt their approaches to assessment and feedback. </p><p>In addition to discussing the challenges posed by AI detectors, the hosts introduce Beautiful AI, a powerful slide deck creation tool that leverages generative AI to produce visually stunning presentations. Craig shares his experiences with Beautiful AI, noting its ability to generate compelling slides that enhance the quality of presentations without requiring extensive editing. This tool represents a shift in how educators can approach presentations, allowing for a more design-focused experience that can save significant time. The episode encourages educators to explore such tools that can streamline their workflows and improve the quality of their output, ultimately promoting a more effective use of technology in educational settings. The discussion culminates with a call for educators to embrace generative AI not as a threat but as a resource that can enhance learning and teaching practices.</p><p>Takeaways:</p><ul><li> AI detectors are currently unreliable and can unfairly penalize innocent students. It's essential to critically evaluate their results rather than accept them blindly. </li><li> The biases in AI detectors often target non-native English speakers, leading to unfair accusations of cheating. </li><li> Generative AI tools can enhance the quality of writing and presentations, making them more visually appealing and easier to create.
</li><li> Beautiful AI can generate visually stunning slide decks quickly, saving time while maintaining quality. </li><li> Using tools like Gemini can significantly streamline the process of finding accurate information online, offering a more efficient alternative to traditional searches. </li><li> Educators should contextualize assignments to encourage originality and understanding, rather than relying solely on AI detection tools. </li></ul><br/><p>Links referenced in this episode:</p><ul><li><a href="https://gemini.google.com" rel="noopener noreferrer" target="_blank">gemini.google.com</a></li><li><a href="https://beautiful.ai" rel="noopener noreferrer" target="_blank">beautiful.ai</a></li></ul><br/><p><br></p><p>Companies mentioned in this episode:</p><ul><li> Grammarly </li><li> Shutterstock </li><li> Beautiful AI </li><li> Google </li><li> Wright State University </li><li> WSU </li><li> Gemini </li></ul><br/><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">60ea178e-6b57-4ea7-85f0-995b258332c6</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Mon, 18 Nov 2024 04:30:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/bd07d4b7-53f2-4886-9a30-7220fde063c9/AIGTC-Ep-15.mp3" length="27582916" type="audio/mpeg"/><itunes:duration>28:44</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>15</itunes:episode><podcast:episode>15</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/e9540a91-de38-44d7-a654-822159f8c8b4/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/e9540a91-de38-44d7-a654-822159f8c8b4/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/e9540a91-de38-44d7-a654-822159f8c8b4/index.html" type="text/html"/><podcast:chapters url="https://transcripts.captivate.fm/chapter-bd07d4b7-53f2-4886-9a30-7220fde063c9.json" type="application/json+chapters"/></item><item><title>Google NotebookLM and Our AI Toolkits</title><itunes:title>Google NotebookLM and Our AI Toolkits</itunes:title><description><![CDATA[<p>Craig and Rob dig into the innovative features of Google's Notebook LM, a tool that allows users to upload documents and generate responses based on that content. They discuss how this tool has been particularly beneficial in an academic setting, enhancing students' confidence in their understanding of course materials. The conversation also highlights the importance of using generative AI as a supplement to learning rather than a replacement, emphasizing the need for critical engagement with the technology. Additionally, they share their personal AI toolkits, exploring various tools like Copilot, ChatGPT, and Claude, each with unique strengths for different tasks. 
The episode wraps up with a look at specialized tools such as Lex, Consensus, and Perplexity AI, encouraging listeners to experiment with these technologies to improve their efficiency and effectiveness in academic and professional environments.</p><p><strong>Highlights</strong>:</p><ul><li>00:17 - Exploring Google's Notebook LM</li><li>01:25 - Rob's Experience with Notebook LM in Education</li><li>02:05 - The Impact of Notebook LM on Student Learning</li><li>04:00 - Creating Podcasts with Notebook LM</li><li>05:35 - Generative AI and Student Engagement</li><li>11:03 - Personal AI Toolkits: What's in Use?</li><li>11:10 - Comparing Copilot and ChatGPT/Claude</li><li>06:00 - The Unpredictability of AI Responses</li><li>09:35 - Innovative Uses of Generative AI</li><li>26:55 - Specialized AI Tools: Perplexity and Consensus</li><li>37:22 - Conclusion and Encouragement to Explore AI Tools</li></ul><br/><p><strong>Products and websites mentioned</strong></p><p>Google Notebook LM: <a href="https://notebooklm.google.com/" rel="noopener noreferrer" target="_blank">https://notebooklm.google.com/</a></p><p>Perplexity.ai: <a href="https://www.perplexity.ai/" rel="noopener noreferrer" target="_blank">https://www.perplexity.ai/</a></p><p>Consensus.app: <a href="https://consensus.app/search/" rel="noopener noreferrer" target="_blank">https://consensus.app/search/</a></p><p>Lex.page: <a href="https://lex.page/" rel="noopener noreferrer" target="_blank">https://lex.page/</a> </p><p>Craig's AI Goes to College Substack: <a href="https://aigoestocollege.substack.com/" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/</a></p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>Craig and Rob dig into the innovative features of Google's Notebook LM, a tool that allows users to upload documents and generate responses based on that content. They discuss how this tool has been particularly beneficial in an academic setting, enhancing students' confidence in their understanding of course materials. The conversation also highlights the importance of using generative AI as a supplement to learning rather than a replacement, emphasizing the need for critical engagement with the technology. Additionally, they share their personal AI toolkits, exploring various tools like Copilot, ChatGPT, and Claude, each with unique strengths for different tasks. 
The episode wraps up with a look at specialized tools such as Lex, Consensus, and Perplexity AI, encouraging listeners to experiment with these technologies to improve their efficiency and effectiveness in academic and professional environments.</p><p><strong>Highlights</strong>:</p><ul><li>00:17 - Exploring Google's Notebook LM</li><li>01:25 - Rob's Experience with Notebook LM in Education</li><li>02:05 - The Impact of Notebook LM on Student Learning</li><li>04:00 - Creating Podcasts with Notebook LM</li><li>05:35 - Generative AI and Student Engagement</li><li>11:03 - Personal AI Toolkits: What's in Use?</li><li>11:10 - Comparing Copilot and ChatGPT/Claude</li><li>06:00 - The Unpredictability of AI Responses</li><li>09:35 - Innovative Uses of Generative AI</li><li>26:55 - Specialized AI Tools: Perplexity and Consensus</li><li>37:22 - Conclusion and Encouragement to Explore AI Tools</li></ul><br/><p><strong>Products and websites mentioned</strong></p><p>Google Notebook LM: <a href="https://notebooklm.google.com/" rel="noopener noreferrer" target="_blank">https://notebooklm.google.com/</a></p><p>Perplexity.ai: <a href="https://www.perplexity.ai/" rel="noopener noreferrer" target="_blank">https://www.perplexity.ai/</a></p><p>Consensus.app: <a href="https://consensus.app/search/" rel="noopener noreferrer" target="_blank">https://consensus.app/search/</a></p><p>Lex.page: <a href="https://lex.page/" rel="noopener noreferrer" target="_blank">https://lex.page/</a> </p><p>Craig's AI Goes to College Substack: <a href="https://aigoestocollege.substack.com/" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/</a></p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">cdc36a16-5377-4450-baca-2ea5e31628b4</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Tue, 22 Oct 2024 05:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/5f3d87d8-2688-45f3-9ca7-a1c0be7fa5f8/AIGTC-Ep-14.mp3" length="37803690" type="audio/mpeg"/><itunes:duration>39:23</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>14</itunes:episode><podcast:episode>14</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/9f689b56-8435-476c-afd6-85a41c324408/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/9f689b56-8435-476c-afd6-85a41c324408/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/9f689b56-8435-476c-afd6-85a41c324408/index.html" type="text/html"/></item><item><title>Leveraging Copilot and Claude to increase productivity in higher ed</title><itunes:title>Leveraging Copilot and Claude to increase productivity in higher ed</itunes:title><description><![CDATA[<p>This episode of AI Goes to College explores the transformative role of generative AI in higher education, with a particular focus on Microsoft's Copilot and its application in streamlining administrative tasks. Dr. Craig Van Slyke and Dr. Robert E. 
Crossler share their personal experiences, highlighting how AI tools like Copilot can significantly reduce the time spent on routine emails, agenda creation, and recommendation letters. They emphasize the importance of integrating AI tools into one's workflow to enhance productivity and the value of transparency when using AI-generated content. The episode also explores the broader implications of AI adoption in educational institutions, noting the challenges of choosing the right tools while considering privacy and intellectual property concerns. Additionally, the hosts discuss the innovative potential of AI in transforming pedagogical approaches and the importance of students showcasing their AI skills during job interviews to gain a competitive edge.</p><p>In this insightful discussion, Dr. Craig van Slyke and Dr. Robert E. Crossler explored the transformative potential of generative AI in higher education. Drawing from their extensive experience, they examined how Microsoft's Copilot can alleviate the administrative burdens faced by educators. Dr. Crossler shared his firsthand experience with Copilot's ability to draft emails and create meeting agendas, highlighting the significant time savings and productivity gains for academic professionals. This practical use of AI allows educators to redirect their efforts towards more meaningful tasks such as curriculum development and student engagement.</p><p>The hosts also addressed the information overload surrounding AI advancements, advising educators to focus on tools that offer tangible benefits rather than getting caught up in the hype. They discussed the strategic decisions universities face in selecting AI technologies, emphasizing the need for thoughtful integration to maximize educational impact. This conversation underscored the necessity for higher education institutions to remain agile and informed as they navigate the evolving landscape of AI technologies.</p><p>Further, the episode examined AI tools like Claude and Gemini, showcasing their potential to enhance both academic and personal productivity. Claude's artifact feature was highlighted for its ability to organize AI-generated content, providing a structured approach to integrating AI solutions in educational tasks. Meanwhile, Gemini's prowess in tech support and everyday problem-solving was noted as a testament to AI's versatility. The hosts concluded with advice for students entering the job market, encouraging them to leverage their AI skills to gain a competitive edge in their careers.</p><p>Takeaways:</p><ul><li> Generative AI tools can substantially reduce the time spent on routine tasks like email writing. </li><li> Higher education professionals can leverage AI for tasks such as creating meeting agendas and recommendations. </li><li> Using AI requires a shift in how tasks are approached, focusing more on content creation. </li><li> Schools may need to decide which AI tools to support based on their specific needs. </li><li> AI tools like Microsoft Copilot can assist in writing by offering different styles and tones. </li><li> Experimentation with AI in professional settings can lead to significant productivity improvements. </li></ul><br/><p>The AI Goes to College podcast is a companion to the AI Goes to College newsletter (<a href="https://aigoestocollege.substack.com/" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/</a>). 
Both are available at&nbsp;<a href="https://www.aigoestocollege.com/" rel="noopener noreferrer" target="_blank">https://www.aigoestocollege.com/</a>.&nbsp;</p><p>Do you have comments on this episode or topics that you'd like us to cover? Email Craig at&nbsp;craig@AIGoesToCollege.com.&nbsp;&nbsp;You can also leave a comment at&nbsp;<a href="https://www.aigoestocollege.com/" rel="noopener noreferrer" target="_blank">https://www.aigoestocollege.com/</a>.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>This episode of AI Goes to College explores the transformative role of generative AI in higher education, with a particular focus on Microsoft's Copilot and its application in streamlining administrative tasks. Dr. Craig Van Slyke and Dr. Robert E. Crossler share their personal experiences, highlighting how AI tools like Copilot can significantly reduce the time spent on routine emails, agenda creation, and recommendation letters. They emphasize the importance of integrating AI tools into one's workflow to enhance productivity and the value of transparency when using AI-generated content. The episode also explores the broader implications of AI adoption in educational institutions, noting the challenges of choosing the right tools while considering privacy and intellectual property concerns. Additionally, the hosts discuss the innovative potential of AI in transforming pedagogical approaches and the importance of students showcasing their AI skills during job interviews to gain a competitive edge.</p><p>In this insightful discussion, Dr. Craig van Slyke and Dr. Robert E. Crossler explored the transformative potential of generative AI in higher education. Drawing from their extensive experience, they examined how Microsoft's Copilot can alleviate the administrative burdens faced by educators. Dr. Crossler shared his firsthand experience with Copilot's ability to draft emails and create meeting agendas, highlighting the significant time savings and productivity gains for academic professionals. This practical use of AI allows educators to redirect their efforts towards more meaningful tasks such as curriculum development and student engagement.</p><p>The hosts also addressed the information overload surrounding AI advancements, advising educators to focus on tools that offer tangible benefits rather than getting caught up in the hype. They discussed the strategic decisions universities face in selecting AI technologies, emphasizing the need for thoughtful integration to maximize educational impact. This conversation underscored the necessity for higher education institutions to remain agile and informed as they navigate the evolving landscape of AI technologies.</p><p>Further, the episode examined AI tools like Claude and Gemini, showcasing their potential to enhance both academic and personal productivity. Claude's artifact feature was highlighted for its ability to organize AI-generated content, providing a structured approach to integrating AI solutions in educational tasks. Meanwhile, Gemini's prowess in tech support and everyday problem-solving was noted as a testament to AI's versatility. The hosts concluded with advice for students entering the job market, encouraging them to leverage their AI skills to gain a competitive edge in their careers.</p><p>Takeaways:</p><ul><li> Generative AI tools can substantially reduce the time spent on routine tasks like email writing. 
</li><li> Higher education professionals can leverage AI for tasks such as creating meeting agendas and recommendations. </li><li> Using AI requires a shift in how tasks are approached, focusing more on content creation. </li><li> Schools may need to decide which AI tools to support based on their specific needs. </li><li> AI tools like Microsoft Copilot can assist in writing by offering different styles and tones. </li><li> Experimentation with AI in professional settings can lead to significant productivity improvements. </li></ul><br/><p>The AI Goes to College podcast is a companion to the AI Goes to College newsletter (<a href="https://aigoestocollege.substack.com/" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/</a>). Both are available at&nbsp;<a href="https://www.aigoestocollege.com/" rel="noopener noreferrer" target="_blank">https://www.aigoestocollege.com/</a>.&nbsp;</p><p>Do you have comments on this episode or topics that you'd like us to cover? Email Craig at&nbsp;craig@AIGoesToCollege.com.&nbsp;&nbsp;You can also leave a comment at&nbsp;<a href="https://www.aigoestocollege.com/" rel="noopener noreferrer" target="_blank">https://www.aigoestocollege.com/</a>.&nbsp;</p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">4b7272da-f0d4-48d7-ac33-16317c5e480b</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Tue, 01 Oct 2024 04:30:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/2630452f-8deb-4b6a-abd5-3930e6901bf9/AIGTC-Ep-13-Copilot-and-Claude.mp3" length="51546114" type="audio/mpeg"/><itunes:duration>53:38</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>13</itunes:episode><podcast:episode>13</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/09ee1650-d5ae-4eea-8492-4f7fa134e5d5/transcript.json" type="application/json"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/09ee1650-d5ae-4eea-8492-4f7fa134e5d5/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/09ee1650-d5ae-4eea-8492-4f7fa134e5d5/index.html" type="text/html"/></item><item><title>Is ChatGPT Bull ... and How to Improve Communication with AI</title><itunes:title>Is ChatGPT Bull ... and How to Improve Communication with AI</itunes:title><description><![CDATA[<p>Is ChatGPT bull ...? Maybe not. </p><p>In this episode Rob and Craig talk about how generative AI can be used to improve communication, give their opinions of a recent article claiming that ChatGPT is bull$hit, and discuss why you need an AI policy. </p><p><strong>Key Takeaways:</strong></p><ul><li><strong>AI can be used to improve written communication, but not if you just ask AI to crank out the message</strong>. You have to work WITH AI. Rob gives an interesting example of how AI was used to write a difficult message. The key is to co-produce with AI, which results in better outcomes than if either the human or the AI worked alone.</li><li><strong>Is ChatGPT Bull$hit?</strong> A recent article in <em>Ethics and Information Technology</em> claims that ChatGPT (and generative AI more generally) is bull$hit. 
Craig and Rob aren't so sure, although the authors make some reasonable points.</li><li><strong>You need an AI policy, </strong>even if your college doesn't have one yet. Not only does a policy help you manage risk, a clear policy is necessary to help students understand what is, and is not acceptable. Otherwise, students are flying blind.</li></ul><br/><p>Hicks, M.T., Humphries, J. &amp; Slater, J. (2024). ChatGPT is bullshit.&nbsp;<em>Ethics and Information Technology, </em>26(38). <a href="https://doi.org/10.1007/s10676-024-09775-5" rel="noopener noreferrer" target="_blank">https://doi.org/10.1007/s10676-024-09775-5</a> <a href="https://link.springer.com/article/10.1007/s10676-024-09775-5" rel="noopener noreferrer" target="_blank">https://link.springer.com/article/10.1007/s10676-024-09775-5</a></p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>Is ChatGPT bull ...? Maybe not. </p><p>In this episode Rob and Craig talk about how generative AI can be used to improve communication, give their opinions of a recent article claiming that ChatGPT is bull$hit, and discuss why you need an AI policy. </p><p><strong>Key Takeaways:</strong></p><ul><li><strong>AI can be used to improve written communication, but not if you just ask AI to crank out the message</strong>. You have to work WITH AI. Rob gives an interesting example of how AI was used to write a difficult message. The key is to co-produce with AI, which results in better outcomes than if either the human or the AI worked alone.</li><li><strong>Is ChatGPT Bull$hit?</strong> A recent article in <em>Ethics and Information Technology</em> claims that ChatGPT (and generative AI more generally) is bull$hit. Craig and Rob aren't so sure, although the authors make some reasonable points.</li><li><strong>You need an AI policy, </strong>even if your college doesn't have one yet. Not only does a policy help you manage risk, a clear policy is necessary to help students understand what is, and is not acceptable. Otherwise, students are flying blind.</li></ul><br/><p>Hicks, M.T., Humphries, J. &amp; Slater, J. (2024). ChatGPT is bullshit.&nbsp;<em>Ethics and Information Technology, </em>26(38). 
<a href="https://doi.org/10.1007/s10676-024-09775-5" rel="noopener noreferrer" target="_blank">https://doi.org/10.1007/s10676-024-09775-5</a> <a href="https://link.springer.com/article/10.1007/s10676-024-09775-5" rel="noopener noreferrer" target="_blank">https://link.springer.com/article/10.1007/s10676-024-09775-5</a></p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">09967bf4-58bd-4bc1-8ce8-7a5e2c555560</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Mon, 29 Jul 2024 05:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/1c07f459-2b24-41a3-a75b-56c4e78cf52a/AIGTC-Ep-12-16-July-2024-FINAL.mp3" length="32839194" type="audio/mpeg"/><itunes:duration>34:12</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>12</itunes:episode><podcast:episode>12</podcast:episode><podcast:season>1</podcast:season></item><item><title>AI in higher ed: Is it time to rethink grading?</title><itunes:title>AI in higher ed: Is it time to rethink grading?</itunes:title><description><![CDATA[<p>In this episode of AI Goes to College, Craig and Rob dig into the transformative impact of artificial intelligence on higher education. They explore three critical areas where AI is reshaping the academic landscape, offering valuable perspectives for educators, administrators, and students alike.</p><p>The episode kicks off with a thoughtful discussion on helping students embrace a long-term view of learning in an era where AI tools make short-term solutions readily available. Craig and Rob tackle the challenges of detecting AI-assisted cheating and propose innovative approaches to course design and assessment. They emphasize the importance of aligning learning objectives with real-world skills and knowledge retention, rather than focusing solely on grades or easily automated tasks. At the end of it all, they wonder if it's time to rethink grading.</p><p>Next, the hosts examine recent developments in language models, highlighting the remarkable advancements in speed and capabilities available in Anthropic’s new model, Claude 3.5 Sonnet. They introduce listeners to new features like "artifacts" that enhance user experience and discuss the potential impacts on various academic disciplines, particularly in programming education and research methodologies. This segment offers a balanced view of the exciting possibilities and the ethical considerations surrounding these powerful tools.</p><p>The final portion of the episode covers issues related to the complex world of copyright issues related to AI-generated content. Craig and Rob break down the ongoing debate around web scraping practices for AI training data and explore the potential legal and ethical implications for AI users in academic settings. They stress the importance of critical thinking when utilizing AI tools and provide practical advice for educators and students on responsible AI use.</p><p>Throughout the episode, the hosts share personal insights, anecdotes from their teaching experiences, and references to current research and industry developments. 
They maintain a forward-thinking yet grounded approach, acknowledging the uncertainties in this rapidly evolving field while offering actionable strategies for navigating the AI revolution in higher education.</p><p>This episode is essential listening for anyone involved in or interested in the future of education. It equips listeners with the knowledge and perspectives needed to adapt to and thrive in an AI-enhanced academic environment. Craig and Rob's engaging dialogue not only informs but also inspires listeners to actively participate in shaping the future of education in the age of AI.</p><p>Whether you're a seasoned educator, a curious student, or an education technology enthusiast, this episode of AI Goes to College provides valuable insights and sparks important conversations about the intersection of AI and higher education.</p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>In this episode of AI Goes to College, Craig and Rob dig into the transformative impact of artificial intelligence on higher education. They explore three critical areas where AI is reshaping the academic landscape, offering valuable perspectives for educators, administrators, and students alike.</p><p>The episode kicks off with a thoughtful discussion on helping students embrace a long-term view of learning in an era where AI tools make short-term solutions readily available. Craig and Rob tackle the challenges of detecting AI-assisted cheating and propose innovative approaches to course design and assessment. They emphasize the importance of aligning learning objectives with real-world skills and knowledge retention, rather than focusing solely on grades or easily automated tasks. At the end of it all, they wonder if it's time to rethink grading.</p><p>Next, the hosts examine recent developments in language models, highlighting the remarkable advancements in speed and capabilities available in Anthropic’s new model, Claude 3.5 Sonnet. They introduce listeners to new features like "artifacts" that enhance user experience and discuss the potential impacts on various academic disciplines, particularly in programming education and research methodologies. This segment offers a balanced view of the exciting possibilities and the ethical considerations surrounding these powerful tools.</p><p>The final portion of the episode covers issues related to the complex world of copyright issues related to AI-generated content. Craig and Rob break down the ongoing debate around web scraping practices for AI training data and explore the potential legal and ethical implications for AI users in academic settings. They stress the importance of critical thinking when utilizing AI tools and provide practical advice for educators and students on responsible AI use.</p><p>Throughout the episode, the hosts share personal insights, anecdotes from their teaching experiences, and references to current research and industry developments. They maintain a forward-thinking yet grounded approach, acknowledging the uncertainties in this rapidly evolving field while offering actionable strategies for navigating the AI revolution in higher education.</p><p>This episode is essential listening for anyone involved in or interested in the future of education. It equips listeners with the knowledge and perspectives needed to adapt to and thrive in an AI-enhanced academic environment. 
Craig and Rob's engaging dialogue not only informs but also inspires listeners to actively participate in shaping the future of education in the age of AI.</p><p>Whether you're a seasoned educator, a curious student, or an education technology enthusiast, this episode of AI Goes to College provides valuable insights and sparks important conversations about the intersection of AI and higher education.</p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">95c2c399-79e1-4797-baa7-88c2b844a45c</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Mon, 15 Jul 2024 04:30:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/15710999-42af-4e09-b4f9-dab4ecb97794/AIGTC-Ep-11-05-July-2024.mp3" length="43578647" type="audio/mpeg"/><itunes:duration>45:24</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>11</itunes:episode><podcast:episode>11</podcast:episode><podcast:season>1</podcast:season></item><item><title>Encouraging ethical use, AI friction and why you might be the problem</title><itunes:title>Encouraging ethical use, AI friction and why you might be the problem</itunes:title><description><![CDATA[<p>We're in an odd situation with AI. Many ethical students are afraid to use it and unethical students use it ... unethically. Rob and Craig discuss this dilemma and what we can do about it.</p><p>They also cover the concept of AI friction and how Apple's recent moves will address this under appreciated barrier to AI use.</p><p>Other topics include:</p><ul><li>Which AI chatbot is "best" at the moment</li><li>Using AI to supplement you, not replace you</li><li>Why you might be using AI wrong</li><li>Active learning with AI, </li><li>and more!</li></ul><br/><p>---</p><p>The AI Goes to College podcast is a companion to the AI Goes to College newsletter (https://aigoestocollege.substack.com/). Both are available at&nbsp;https://www.aigoestocollege.com/.&nbsp;</p><p>Do you have comments on this episode or topics that you'd like us to cover? Email Craig at&nbsp;craig@AIGoesToCollege.com.&nbsp;&nbsp;You can also leave a comment at&nbsp;https://www.aigoestocollege.com/.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>We're in an odd situation with AI. Many ethical students are afraid to use it and unethical students use it ... unethically. Rob and Craig discuss this dilemma and what we can do about it.</p><p>They also cover the concept of AI friction and how Apple's recent moves will address this under appreciated barrier to AI use.</p><p>Other topics include:</p><ul><li>Which AI chatbot is "best" at the moment</li><li>Using AI to supplement you, not replace you</li><li>Why you might be using AI wrong</li><li>Active learning with AI, </li><li>and more!</li></ul><br/><p>---</p><p>The AI Goes to College podcast is a companion to the AI Goes to College newsletter (https://aigoestocollege.substack.com/). Both are available at&nbsp;https://www.aigoestocollege.com/.&nbsp;</p><p>Do you have comments on this episode or topics that you'd like us to cover? 
Email Craig at&nbsp;craig@AIGoesToCollege.com.&nbsp;&nbsp;You can also leave a comment at&nbsp;https://www.aigoestocollege.com/.&nbsp;</p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">9d66181d-b308-4bb8-8b18-47082563c1e9</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Mon, 01 Jul 2024 05:40:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/644309a1-847c-49e7-987b-b832d8eb1ccd/AIGTC-02-July-2024-release.mp3" length="37662764" type="audio/mpeg"/><itunes:duration>39:10</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>10</itunes:episode><podcast:episode>10</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/4c585847-67e9-4968-8543-d022ab82dcb9/index.html" type="text/html"/></item><item><title>The problem with prompt engineering, GPT-4o, and AI hysteria</title><itunes:title>The problem with prompt engineering, GPT-4o, and AI hysteria</itunes:title><description><![CDATA[<p>In this episode of "AI Goes to College," Rob and Craig discuss </p><ul><li>the implications of OpenAI's GPT-4 Omni (GPT-4o)</li><li>AI fatigue and hysteria, and</li><li>why prompt design is better than prompt engineering.</li></ul><br/><p>Craig and Rob explore the implications of GPT-4 Omni's enhanced capabilities, including faster processing, larger context windows, improved voice capabilities, and an expanded feature set available to all users for free.</p><p>They emphasize the importance of exploring and experimenting with these new technologies, highlighting the transition from prompt engineering to prompt design for a more user-friendly approach. They discuss how prompt design allows for a more iterative and creative process, stressing the need for stakeholders to adapt and incorporate generative AI tools effectively, both in teaching and administrative roles within higher education.</p><p>Through their conversation, Rob and Craig address the hype and hysteria surrounding generative AI, encouraging listeners to approach these tools with curiosity and a willingness to adapt. They advocate for a balanced perspective, acknowledging both the benefits and risks associated with integrating AI technologies in educational settings.</p><p>Rob suggests creating a prompt library to capture successful prompts and outputs, facilitating efficiency and consistency in utilizing generative AI tools for various tasks. They also emphasize the importance of listening to stakeholders and gathering feedback to inform effective implementation strategies.</p><p>Rob and Craig conclude the episode by underscoring the value of continuous exploration, experimentation, and playfulness with new technologies, encouraging listeners to share their experiences and creativity in utilizing generative AI effectively.</p><p>To stay updated on the latest trends in generative AI and its impact on higher education, listeners are invited to subscribe to the "AI Goes to College" newsletter and watch informative videos on the AI Goes TO College YouTube channel. 
The hosts invite feedback and suggestions for future episodes, fostering a dynamic and interactive community interested in leveraging AI technologies for educational innovation.</p><p>Overall, this episode provides valuable insights into navigating the evolving landscape of generative AI in higher education, empowering educators and administrators to adopt a proactive and adaptable approach towards leveraging AI tools for enhanced teaching and administrative practices.</p><p>---</p><p>The AI Goes to College podcast is a companion to the AI Goes to College newsletter (<a href="https://aigoestocollege.substack.com/" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/</a>). Both are available at&nbsp;https://www.aigoestocollege.com/.&nbsp;</p><p>Do you have comments on this episode or topics that you'd like us to cover? Email Craig at&nbsp;craig@AIGoesToCollege.com.&nbsp;&nbsp;You can also leave a comment at&nbsp;<a href="https://www.aigoestocollege.com/" rel="noopener noreferrer" target="_blank">https://www.aigoestocollege.com/</a>.&nbsp;</p>]]></description><content:encoded><![CDATA[<p>In this episode of "AI Goes to College," Rob and Craig discuss </p><ul><li>the implications of OpenAI's GPT-4 Omni (GPT-4o)</li><li>AI fatigue and hysteria, and</li><li>why prompt design is better than prompt engineering.</li></ul><br/><p>Craig and Rob explore the implications of GPT-4 Omni's enhanced capabilities, including faster processing, larger context windows, improved voice capabilities, and an expanded feature set available to all users for free.</p><p>They emphasize the importance of exploring and experimenting with these new technologies, highlighting the transition from prompt engineering to prompt design for a more user-friendly approach. They discuss how prompt design allows for a more iterative and creative process, stressing the need for stakeholders to adapt and incorporate generative AI tools effectively, both in teaching and administrative roles within higher education.</p><p>Through their conversation, Rob and Craig address the hype and hysteria surrounding generative AI, encouraging listeners to approach these tools with curiosity and a willingness to adapt. They advocate for a balanced perspective, acknowledging both the benefits and risks associated with integrating AI technologies in educational settings.</p><p>Rob suggests creating a prompt library to capture successful prompts and outputs, facilitating efficiency and consistency in utilizing generative AI tools for various tasks. They also emphasize the importance of listening to stakeholders and gathering feedback to inform effective implementation strategies.</p><p>Rob and Craig conclude the episode by underscoring the value of continuous exploration, experimentation, and playfulness with new technologies, encouraging listeners to share their experiences and creativity in utilizing generative AI effectively.</p><p>To stay updated on the latest trends in generative AI and its impact on higher education, listeners are invited to subscribe to the "AI Goes to College" newsletter and watch informative videos on the AI Goes TO College YouTube channel. 
The hosts invite feedback and suggestions for future episodes, fostering a dynamic and interactive community interested in leveraging AI technologies for educational innovation.</p><p>Overall, this episode provides valuable insights into navigating the evolving landscape of generative AI in higher education, empowering educators and administrators to adopt a proactive and adaptable approach towards leveraging AI tools for enhanced teaching and administrative practices.</p><p>---</p><p>The AI Goes to College podcast is a companion to the AI Goes to College newsletter (<a href="https://aigoestocollege.substack.com/" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com/</a>). Both are available at&nbsp;https://www.aigoestocollege.com/.&nbsp;</p><p>Do you have comments on this episode or topics that you'd like us to cover? Email Craig at&nbsp;craig@AIGoesToCollege.com.&nbsp;&nbsp;You can also leave a comment at&nbsp;<a href="https://www.aigoestocollege.com/" rel="noopener noreferrer" target="_blank">https://www.aigoestocollege.com/</a>.&nbsp;</p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">65b3d48b-6ce7-4f19-9104-b094aff9686b</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Tue, 28 May 2024 06:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/1d21586a-7660-461d-8951-3558a8add84d/AIGTC-Ep-9-20-May-2024-Paul-edit.mp3" length="23803237" type="audio/mpeg"/><itunes:duration>24:44</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>9</itunes:episode><podcast:episode>9</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/4fdb4bf0-a262-44aa-8131-f9825c180c48/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/4fdb4bf0-a262-44aa-8131-f9825c180c48/index.html" type="text/html"/></item><item><title>The future of generative AI, a great Chrome extension, and using AI to examine an exam</title><itunes:title>The future of generative AI, a great Chrome extension, and using AI to examine an exam</itunes:title><description><![CDATA[<p>In this episode, Craig discusses:</p><ul><li>My vision of the future of generative AI</li><li>Harpa - a great AI Chrome extension</li><li>Using Claude to examine an exam</li><li>Should higher ed fear AI?</li></ul><br/><p>The highlights of this newsletter are available as a podcast, which is also called AI Goes to College. You can subscribe to the newsletter and the podcast at https://www.aigoestocollege.com/. The newsletter is also available on Substack: (https://aigoestocollege.substack.com/).</p>]]></description><content:encoded><![CDATA[<p>In this episode, Craig discusses:</p><ul><li>My vision of the future of generative AI</li><li>Harpa - a great AI Chrome extension</li><li>Using Claude to examine an exam</li><li>Should higher ed fear AI?</li></ul><br/><p>The highlights of this newsletter are available as a podcast, which is also called AI Goes to College. You can subscribe to the newsletter and the podcast at https://www.aigoestocollege.com/. 
The newsletter is also available on Substack: (https://aigoestocollege.substack.com/).</p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">25bf68d4-016c-44bf-b44d-abbead48fb14</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Thu, 02 May 2024 07:30:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/11567f1b-2497-4a0b-8d9e-89ef14ad1d1b/AIGTC-Ep-7-Future-of-generative-AI-EDITED-WIthRolls.mp3" length="22998653" type="audio/mpeg"/><itunes:duration>15:58</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>7</itunes:episode><podcast:episode>7</podcast:episode><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/29cd8bf3-dac5-4fd0-bd1f-d1c4ddb7e887/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/29cd8bf3-dac5-4fd0-bd1f-d1c4ddb7e887/index.html" type="text/html"/></item><item><title>Ethics of Human-AI Co-Production (announcement)</title><itunes:title>Ethics of Human-AI Co-Production (announcement)</itunes:title><description><![CDATA[<p>On Tuesday, April 30 at 5 P.M. Eastern time, I’ll be giving a talk on the ethics of human-AI co-production. This is part of an annual series called the Marbury Ethics Lectures. I’m quite honored to be the speaker; two years ago, the speaker was then Louisiana Governor John Bel Edwards.</p><p>Anyone in the area is welcome to attend in-person, but the event will also be live streamed:</p><p><a href="https://mediasite.latech.edu/Mediasite/Play/8aa374384ff541bc8d76dcf98be7aab91d" rel="noopener noreferrer" target="_blank">https://mediasite.latech.edu/Mediasite/Play/8aa374384ff541bc8d76dcf98be7aab91d</a> </p><p>I’d love it if you could join us!</p><p>--</p><p>The AI Goes to College podcast is a companion to the AI Goes to College newsletter (<a href="https://aigoestocollege.substack.com" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com</a>/). Both are available at&nbsp;<a href="https://www.aigoestocollege.com/" rel="noopener noreferrer" target="_blank">https://www.aigoestocollege.com/</a>.&nbsp;Do you have comments on this episode or topics that you'd like Craig to cover? Email him at&nbsp;craig@AIGoesToCollege.com.&nbsp;&nbsp;You can also leave a comment at&nbsp;<a href="https://www.aigoestocollege.com/" rel="noopener noreferrer" target="_blank">https://www.aigoestocollege.com/</a>.&nbsp;</p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></description><content:encoded><![CDATA[<p>On Tuesday, April 30 at 5 P.M. Eastern time, I’ll be giving a talk on the ethics of human-AI co-production. This is part of an annual series called the Marbury Ethics Lectures. I’m quite honored to be the speaker; two years ago, the speaker was then Louisiana Governor John Bel Edwards. 
</p><p>Anyone in the area is welcome to attend in-person, but the event will also be live streamed:</p><p><a href="https://mediasite.latech.edu/Mediasite/Play/8aa374384ff541bc8d76dcf98be7aab91d" rel="noopener noreferrer" target="_blank">https://mediasite.latech.edu/Mediasite/Play/8aa374384ff541bc8d76dcf98be7aab91d</a> </p><p>I’d love it if you could join us!</p><p>--</p><p>The AI Goes to College podcast is a companion to the AI Goes to College newsletter (<a href="https://aigoestocollege.substack.com" rel="noopener noreferrer" target="_blank">https://aigoestocollege.substack.com</a>/). Both are available at&nbsp;<a href="https://www.aigoestocollege.com/" rel="noopener noreferrer" target="_blank">https://www.aigoestocollege.com/</a>.&nbsp;Do you have comments on this episode or topics that you'd like Craig to cover? Email him at&nbsp;craig@AIGoesToCollege.com.&nbsp;&nbsp;You can also leave a comment at&nbsp;<a href="https://www.aigoestocollege.com/" rel="noopener noreferrer" target="_blank">https://www.aigoestocollege.com/</a>.&nbsp;</p><p>Mentioned in this episode:</p><p><strong>AI Goes to College Newsletter</strong></p>]]></content:encoded><link><![CDATA[https://sites.libsyn.com/502703]]></link><guid isPermaLink="false">28298ba6-9f82-4434-8e77-18d7503b2106</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Mon, 29 Apr 2024 09:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/687c0005-4db6-493f-a283-cfcf89537ae5/AIGTC-Marbury.mp3" length="2266715" type="audio/mpeg"/><itunes:duration>02:22</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><podcast:season>1</podcast:season><podcast:transcript url="https://transcripts.captivate.fm/transcript/5922aef0-0059-4096-a89d-6d8e03d4a876/transcript.srt" type="application/srt" rel="captions"/><podcast:transcript url="https://transcripts.captivate.fm/transcript/5922aef0-0059-4096-a89d-6d8e03d4a876/index.html" type="text/html"/></item><item><title>Does AI hurt critical thinking and new tools, good and bad</title><itunes:title>Does AI hurt critical thinking and new tools, good and bad</itunes:title><description><![CDATA[<p data-pm-slice="1 1 []">In this episode of <em>AI Goes to College</em> Craig dives deep into the world of AI in education, exploring new tools and models that could revolutionize the way we approach learning and teaching. Join Craig as he shares insights from testing various AI models and introduces a groundbreaking tool called The Curricula.</p> <p data-pm-slice="1 1 []">In this episode, Craig talks about:</p> <ul> <li> <p>A terrible new anti-AI detection "tool"</p> </li> <li> <p>Does AI hurt critical thinking and academic performance?</p> </li> <li> <p>How not to talk about AI in education</p> </li> <li> <p>Claude 3 takes the lead</p> </li> <li> <p>Using Google Docs with Gemini</p> </li> <li> <p>Claude 3 Haiku - Best combination of speed and performance?</p> </li> <li> <p>The Curricula - A glimpse of what AI can be</p> </li> </ul><br/> <p>Anti-AI detection tool</p> <p>There's a terrible new tool that supposedly helps students get around AI detection systems (which don't work well, by the way). Faculty, you have nothing to worry about here. The tool is a joke. </p> <p>Does AI hurt critical thinking and academic performance?</p> <p>A recent article seems to provide evidence that AI is harmful to critical thinking and academic performance. 
But, as is often the case, online commenters get it wrong. The paper doesn't show this at all.</p> <p>How not to talk about AI in education</p> <p>An author affiliated with the London School of Economics wrote an interesting article about how NOT to talk about AI in education. Craig comments on what the article got wrong (in his view).</p> <p>Using Google Docs with Gemini</p> <p>There are some interesting integrations between some Google tools, including Docs and Gemini. It works ... OK, but it's a good start.</p> <p>Claude 3 Haiku</p> <p>If you haven't checked Claude 3 Haiku, you should. It may offer the best performance to speed combination in the market. </p> <p>The Curricula</p> <p>The Curricula is an amazing new tool that creates comprehensive learning guides for virtually any topic. Check it out at <a href= "https://www.thecurricula.com/">https://www.thecurricula.com/</a>. </p> <p>Listen to the full episode for the details. </p> <p>To see screenshots and more, check out Issue #6 of the AI Goes to College newsletter at <a href= "https://aigoestocollege.substack.com/">https://aigoestocollege.substack.com/</a></p> <p>--</p> <p>The AI Goes to College podcast is a companion to the AI Goes to College newsletter (<a href= "https://aigoestocollege.substack.com/">https://aigoestocollege.substack.com/</a> ). Both are available at https://www.aigoestocollege.com/. Do you have comments on this episode or topics that you'd like Craig to cover? Email him at craig@AIGoesToCollege.com.  You can also leave a comment at <a href= "https://www.aigoestocollege.com/">https://www.aigoestocollege.com/</a> . </p>]]></description><content:encoded><![CDATA[<p data-pm-slice="1 1 []">In this episode of <em>AI Goes to College</em> Craig dives deep into the world of AI in education, exploring new tools and models that could revolutionize the way we approach learning and teaching. Join Craig as he shares insights from testing various AI models and introduces a groundbreaking tool called The Curricula.</p> <p data-pm-slice="1 1 []">In this episode, Craig talks about:</p> <ul> <li> <p>A terrible new anti-AI detection "tool"</p> </li> <li> <p>Does AI hurt critical thinking and academic performance?</p> </li> <li> <p>How not to talk about AI in education</p> </li> <li> <p>Claude 3 takes the lead</p> </li> <li> <p>Using Google Docs with Gemini</p> </li> <li> <p>Claude 3 Haiku - Best combination of speed and performance?</p> </li> <li> <p>The Curricula - A glimpse of what AI can be</p> </li> </ul><br/> <p>Anti-AI detection tool</p> <p>There's a terrible new tool that supposedly helps students get around AI detection systems (which don't work well, by the way). Faculty, you have nothing to worry about here. The tool is a joke. </p> <p>Does AI hurt critical thinking and academic performance?</p> <p>A recent article seems to provide evidence that AI is harmful to critical thinking and academic performance. But, as is often the case, online commenters get it wrong. The paper doesn't show this at all.</p> <p>How not to talk about AI in education</p> <p>An author affiliated with the London School of Economics wrote an interesting article about how NOT to talk about AI in education. Craig comments on what the article got wrong (in his view).</p> <p>Using Google Docs with Gemini</p> <p>There are some interesting integrations between some Google tools, including Docs and Gemini. It works ... OK, but it's a good start.</p> <p>Claude 3 Haiku</p> <p>If you haven't checked Claude 3 Haiku, you should. 
It may offer the best performance to speed combination in the market. </p> <p>The Curricula</p> <p>The Curricula is an amazing new tool that creates comprehensive learning guides for virtually any topic. Check it out at <a href= "https://www.thecurricula.com/">https://www.thecurricula.com/</a>. </p> <p>Listen to the full episode for the details. </p> <p>To see screenshots and more, check out Issue #6 of the AI Goes to College newsletter at <a href= "https://aigoestocollege.substack.com/">https://aigoestocollege.substack.com/</a></p> <p>--</p> <p>The AI Goes to College podcast is a companion to the AI Goes to College newsletter (<a href= "https://aigoestocollege.substack.com/">https://aigoestocollege.substack.com/</a> ). Both are available at https://www.aigoestocollege.com/. Do you have comments on this episode or topics that you'd like Craig to cover? Email him at craig@AIGoesToCollege.com.  You can also leave a comment at <a href= "https://www.aigoestocollege.com/">https://www.aigoestocollege.com/</a> . </p>]]></content:encoded><link><![CDATA[http://sites.libsyn.com/502703/does-ai-hurt-critical-thinking-and-new-tools-good-and-bad]]></link><guid isPermaLink="false">d77cbad2-7648-43cc-ab14-0413bd40a781</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Wed, 10 Apr 2024 08:12:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/62728c70-a369-4f92-b84f-23735be9683e/aigtc-ep-6-edited-withrolls.mp3" length="70807541" type="audio/mpeg"/><itunes:duration>29:30</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>6</itunes:episode><podcast:episode>6</podcast:episode><podcast:season>1</podcast:season></item><item><title>Why AI needs a human in the loop and a useful slide generator</title><itunes:title>Why AI needs a human in the loop and a useful slide generator</itunes:title><description><![CDATA[<p data-pm-slice="1 1 []">In this week's episode of AI Goes to College, Craig covers a range of topics related to generative AI and its impact on higher education. Here are the key highlights from the episode:</p> <ul> <li>Importance of Human Review: Craig shares a humorous yet enlightening experience with generative AI that emphasizes the crucial role of human review in ensuring the appropriateness and accuracy of AI-generated content.</li> <li>New Features for ChatGPT Teams: The latest developments in ChatGPT Teams, including improved chat sharing, GPT store functionality, and image generation options, offer exciting possibilities for collaborative AI use.</li> <li>Slide Speak: Craig explores Slide Speak, a promising tool for quickly creating slide decks from documents using AI. 
While it's not yet perfect, it shows great potential for streamlining the presentation preparation process.</li> </ul><br/> <p>Now, here are the key takeaways for you:</p> <p>1️⃣ Human Review is Crucial: Always ensure that AI-generated content goes through human review, especially for important and public-facing materials.</p> <p>2️⃣ Collaborative AI: New features in ChatGPT Teams foster better collaboration and creativity in AI-powered conversations and content creation.</p> <p>3️⃣ Streamlining Presentations: Tools like Slide Speak show promise for simplifying and expediting the process of creating slide decks, though they may need some manual adjustments for perfection.</p> <p>Tune in to the full episode for more insights and the latest developments in generative AI! And don't forget to subscribe to the AI Goes to College newsletter for detailed insights and practical tips. Let's keep embracing the future of AI in higher education together!</p> <p>---</p> <p>The AI Goes to College podcast is a companion to the AI Goes to College newsletter (<a href= "https://aigoestocollege.substack.com/">https://aigoestocollege.substack.com/</a>). Both are available at <a href= "https://www.aigoestocollege.com/">https://www.aigoestocollege.com/</a>. Do you have comments on this episode or topics that you'd like Craig to cover? Email him at craig@AIGoesToCollege.com.  You can also leave a comment at https://www.aigoestocollege.com/. </p>]]></description><content:encoded><![CDATA[<p data-pm-slice="1 1 []">In this week's episode of AI Goes to College, Craig covers a range of topics related to generative AI and its impact on higher education. Here are the key highlights from the episode:</p> <ul> <li>Importance of Human Review: Craig shares a humorous yet enlightening experience with generative AI that emphasizes the crucial role of human review in ensuring the appropriateness and accuracy of AI-generated content.</li> <li>New Features for ChatGPT Teams: The latest developments in ChatGPT Teams, including improved chat sharing, GPT store functionality, and image generation options, offer exciting possibilities for collaborative AI use.</li> <li>Slide Speak: Craig explores Slide Speak, a promising tool for quickly creating slide decks from documents using AI. While it's not yet perfect, it shows great potential for streamlining the presentation preparation process.</li> </ul><br/> <p>Now, here are the key takeaways for you:</p> <p>1️⃣ Human Review is Crucial: Always ensure that AI-generated content goes through human review, especially for important and public-facing materials.</p> <p>2️⃣ Collaborative AI: New features in ChatGPT Teams foster better collaboration and creativity in AI-powered conversations and content creation.</p> <p>3️⃣ Streamlining Presentations: Tools like Slide Speak show promise for simplifying and expediting the process of creating slide decks, though they may need some manual adjustments for perfection.</p> <p>Tune in to the full episode for more insights and the latest developments in generative AI! And don't forget to subscribe to the AI Goes to College newsletter for detailed insights and practical tips. Let's keep embracing the future of AI in higher education together!</p> <p>---</p> <p>The AI Goes to College podcast is a companion to the AI Goes to College newsletter (<a href= "https://aigoestocollege.substack.com/">https://aigoestocollege.substack.com/</a>). Both are available at <a href= "https://www.aigoestocollege.com/">https://www.aigoestocollege.com/</a>. 
Do you have comments on this episode or topics that you'd like Craig to cover? Email him at craig@AIGoesToCollege.com.  You can also leave a comment at https://www.aigoestocollege.com/. </p>]]></content:encoded><link><![CDATA[https://aigoestocollege.substack.com/]]></link><guid isPermaLink="false">cf2a4e6a-dbb9-4980-8c95-80fe44f76881</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Fri, 05 Apr 2024 08:26:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/9fe9d3de-9f70-4be5-9440-c803f4296c40/aigtc-ep-5-edited-with-rolls.mp3" length="50057957" type="audio/mpeg"/><itunes:duration>20:51</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>5</itunes:episode><podcast:episode>5</podcast:episode><podcast:season>1</podcast:season></item><item><title>Why AI doesn&apos;t follow length instructions, the best $40 you can spend, and more</title><itunes:title>Why AI doesn&apos;t follow length instructions, the best $40 you can spend, and more</itunes:title><description><![CDATA[<p>This week's episode covers:</p> <ul> <li>Generative AI's paywall problem</li> <li>Anthropic release new Claude models that beat GPT</li> <li>Google has a bad week</li> <li>Why generative AI doesn't follow length instructions (and what you can do about it)</li> <li>The best $40 you can spend on generative AI</li> <li>More Useful Things releases some interesting AI resources</li> <li>Chain of thought versus few-shot prompting</li> </ul><br/> <p>--- AI generated description ---</p> <p data-pm-slice="1 1 []">Welcome to AI Goes to College, where we navigate the ever-changing world of generative AI in higher education. In this thought-provoking episode, I, your host, Dr. Craig Van Slyke, delve into the latest developments in the realm of generative AI, from the paywall problem to Anthropic's groundbreaking Claude models that outperform GPT. This episode sheds light on the ethical considerations and challenges facing academic researchers when working with biased training data and the potential limitations in reflecting findings from behind-the-paywall academic journals.</p> <p>But it's not all about the challenges. I also uncover the exceptional potential of Anthropic's new Claude models and the significance of competition in driving innovation and performance in the AI landscape. You'll be immersed in the intriguing discussion about Google's stumbling block in implementing ethical guardrails for generative AI, a pivotal reminder that human oversight remains crucial in the current stage of AI utilization.</p> <p>And let's not forget about practical tips. I share game-changing insights on prompting generative AI, covering the nuances between few shot and chain of thought prompting, and reveal the best $40 investment for enhancing productivity in your AI endeavors.</p> <p>The conversation doesn't end there. I invite you to explore the transformative applications of generative AI in education through a fascinating interview with an industry expert. 
This episode promises to reshape your perspective on the potential and challenges of generative AI in higher education and leave you equipped with valuable knowledge and practical strategies for navigating this dynamic landscape.</p> <p>Join us as we uncover the profound impact of generative AI on academic research, and gain invaluable insights that will shape your approach to utilizing AI effectively for success in the educational sphere. If you find this episode insightful, don't miss the chance to subscribe to the AI Goes to College newsletter for further invaluable resources and updates. Let's embark on the journey to embracing and leveraging generative AI's potential in higher education.</p>]]></description><content:encoded><![CDATA[<p>This week's episode covers:</p> <ul> <li>Generative AI's paywall problem</li> <li>Anthropic release new Claude models that beat GPT</li> <li>Google has a bad week</li> <li>Why generative AI doesn't follow length instructions (and what you can do about it)</li> <li>The best $40 you can spend on generative AI</li> <li>More Useful Things releases some interesting AI resources</li> <li>Chain of thought versus few-shot prompting</li> </ul><br/> <p>--- AI generated description ---</p> <p data-pm-slice="1 1 []">Welcome to AI Goes to College, where we navigate the ever-changing world of generative AI in higher education. In this thought-provoking episode, I, your host, Dr. Craig Van Slyke, delve into the latest developments in the realm of generative AI, from the paywall problem to Anthropic's groundbreaking Claude models that outperform GPT. This episode sheds light on the ethical considerations and challenges facing academic researchers when working with biased training data and the potential limitations in reflecting findings from behind-the-paywall academic journals.</p> <p>But it's not all about the challenges. I also uncover the exceptional potential of Anthropic's new Claude models and the significance of competition in driving innovation and performance in the AI landscape. You'll be immersed in the intriguing discussion about Google's stumbling block in implementing ethical guardrails for generative AI, a pivotal reminder that human oversight remains crucial in the current stage of AI utilization.</p> <p>And let's not forget about practical tips. I share game-changing insights on prompting generative AI, covering the nuances between few shot and chain of thought prompting, and reveal the best $40 investment for enhancing productivity in your AI endeavors.</p> <p>The conversation doesn't end there. I invite you to explore the transformative applications of generative AI in education through a fascinating interview with an industry expert. This episode promises to reshape your perspective on the potential and challenges of generative AI in higher education and leave you equipped with valuable knowledge and practical strategies for navigating this dynamic landscape.</p> <p>Join us as we uncover the profound impact of generative AI on academic research, and gain invaluable insights that will shape your approach to utilizing AI effectively for success in the educational sphere. If you find this episode insightful, don't miss the chance to subscribe to the AI Goes to College newsletter for further invaluable resources and updates. 
Let's embark on the journey to embracing and leveraging generative AI's potential in higher education.</p>]]></content:encoded><link><![CDATA[http://sites.libsyn.com/502703/why-ai-doesnt-follow-length-instructions-the-best-40-you-can-spend-and-more]]></link><guid isPermaLink="false">c3f58629-c52e-40ca-866f-69c85d24cba8</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Fri, 22 Mar 2024 08:43:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/d39c0e4f-5fb0-4ba1-91a7-217f6b39fb0f/aigtc-ep-4-final.mp3" length="24854823" type="audio/mpeg"/><itunes:duration>25:49</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>4</itunes:episode><podcast:episode>4</podcast:episode><podcast:season>1</podcast:season></item><item><title>Empowering Students and Faculty with Generative AI: An Interview with Dr. Rob Crossler</title><itunes:title>Empowering Students and Faculty with Generative AI: An Interview with Dr. Rob Crossler</itunes:title><description><![CDATA[<p>Generative AI is transforming education, not just for learning, but also for performing administrative tasks. In this special episode of AI Goes to College, Craig and Dr. Rob Crossler of Washington State University talk about how generative AI can help students learn and faculty streamline those pesky administrative tasks that most of us find so irritating.</p> <p>Rob and Craig dig into a wide array of topics, including the early adoption of technology and the risks it brings, the need to experiment and accept occasional failure, and our ethical obligation to help students learn to use generative AI effectively and ethically. We also discuss the AI digital divide and its potential impacts.</p> <p>Here are just a few of the highlights:</p> <ul> <li>Rob shares an example of how generative AI helped with a challenging administrative task.</li> <li>Rob explains how some students avoid using AI due to fears over being accused of cheating.  </li> <li>Rob and Craig discuss the need to encourage experimentation and accept failure.</li> <li>Craig questions whether students understand the boundaries around ethical generative AI use.</li> <li>Rob emphasizes the need to help students gain expertise with generative AI in order to prepare them for the evolving job market.</li> <li>Rob talks about how he uses generative AI to encourage critical thinking among his students.</li> </ul><br/> <p>---</p> <p>The AI Goes to College podcast is a companion to the AI Goes to College newsletter (<a href= "https://aigoestocollege.substack.com/)">https://aigoestocollege.substack.com/)</a>. Both are available at <a href= "https://www.aigoestocollege.com/">https://www.aigoestocollege.com/</a>. </p> <p>Do you have comments on this episode or topics that you'd like Craig to cover? Email him at <a href= "mailto:craig@AIGoesToCollege.com">craig@AIGoesToCollege.com</a>.   You can also leave a comment at <a href= "https://www.aigoestocollege.com/">https://www.aigoestocollege.com/</a>. </p> <p> </p>]]></description><content:encoded><![CDATA[<p>Generative AI is transforming education, not just for learning, but also for performing administrative tasks. In this special episode of AI Goes to College, Craig and Dr. 
Rob Crossler of Washington State University talk about how generative AI can help students learn and faculty streamline those pesky administrative tasks that most of us find so irritating.</p> <p>Rob and Craig dig into a wide array of topics, including the early adoption of technology and the risks it brings, the need to experiment and accept occasional failure, and our ethical obligation to help students learn to use generative AI effectively and ethically. We also discuss the AI digital divide and its potential impacts.</p> <p>Here are just a few of the highlights:</p> <ul> <li>Rob shares an example of how generative AI helped with a challenging administrative task.</li> <li>Rob explains how some students avoid using AI due to fears over being accused of cheating.  </li> <li>Rob and Craig discuss the need to encourage experimentation and accept failure.</li> <li>Craig questions whether students understand the boundaries around ethical generative AI use.</li> <li>Rob emphasizes the need to help students gain expertise with generative AI in order to prepare them for the evolving job market.</li> <li>Rob talks about how he uses generative AI to encourage critical thinking among his students.</li> </ul><br/> <p>---</p> <p>The AI Goes to College podcast is a companion to the AI Goes to College newsletter (<a href= "https://aigoestocollege.substack.com/)">https://aigoestocollege.substack.com/)</a>. Both are available at <a href= "https://www.aigoestocollege.com/">https://www.aigoestocollege.com/</a>. </p> <p>Do you have comments on this episode or topics that you'd like Craig to cover? Email him at <a href= "mailto:craig@AIGoesToCollege.com">craig@AIGoesToCollege.com</a>.   You can also leave a comment at <a href= "https://www.aigoestocollege.com/">https://www.aigoestocollege.com/</a>. </p> <p> </p>]]></content:encoded><link><![CDATA[http://sites.libsyn.com/502703/empowering-students-and-faculty-with-generative-ai-an-interview-with-dr-rob-crossler]]></link><guid isPermaLink="false">a8f997a1-04a1-4654-979a-a15554b71a20</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Mon, 11 Mar 2024 11:43:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/29f93151-3dfb-44a7-b463-1400c2b9e105/aigtc-rob-crossler-final.mp3" length="38467335" type="audio/mpeg"/><itunes:duration>40:00</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><podcast:season>1</podcast:season></item><item><title>Detecting fake answers, Zoom meeting magic, and Gemini is pretty awesome</title><itunes:title>Detecting fake answers, Zoom meeting magic, and Gemini is pretty awesome</itunes:title><description><![CDATA[<p>Welcome to AI Goes to College! In this episode, your host, Dr. Craig Van Slyke, invites you to explore the latest developments in generative AI and uncover practical insights to navigate the changing landscape of higher education.</p> <p> Discover key takeaways from Dr. Van Slyke's firsthand experiences with Google's Gemini and Zoom's AI Companion, as he shares how these innovative tools have enhanced his productivity and efficiency. Gain valuable insights into Google's Gemini, a powerful AI extension with the potential to revolutionize administrative tasks in higher education. 
I'll delve into the finer aspects of Gemini's performance, extensions, and its implications for the academic community.</p> <p> But that's not all—explore the fascinating potential of ChatGPT's new memory management features and get a sneak peek into OpenAI's impressive video generator, SORA. Dr. Van Slyke provides a candid overview of these cutting-edge AI advancements and their implications for educational content creation and engagement.</p> <p> Additionally, you'll receive expert guidance on recognizing AI-generated text, equipping you with the tools to discern authentic student responses from those generated by AI. Uncover valuable tips and strategies to detect and address inappropriate AI use in academic assignments, a crucial aspect in today's educational landscape.</p> <p> Join Dr. Craig Van Slyke in this enlightening episode as he navigates the dynamic intersection of generative AI and higher education, providing invaluable insights and actionable strategies for educators and professionals.</p> <p> Tune in to gain a deeper understanding of the transformative role of generative AI in higher education, and learn how to effectively leverage these innovative tools in your academic pursuits. Embrace the future of AI in education and stay ahead of the curve with AI Goes to College!</p> <p>To subscribe to the AI Goes to College newsletter, go to <a title="AI Goes to College newsletter" href= "https://www.aigoestocollege.com">AIGoesToCollege.com/newsletter</a>. </p> <p> </p> <p>--- Transcript ---</p> <p>Craig [00:00:14]: Welcome to AI Goes to College, the podcast that helps higher education professionals navigate the changes brought on by generative AI. I'm your host, doctor Craig Van Slyke. The podcast is a companion to the AI Goes to College newsletter. You can sign up for the newsletter at ai goes to college.com/ newsletter. This week's episode covers my impressions of Google's Gemini. Here's a spoiler. I really like it. An overview of an awesome zoom feature that a lot of people don't know about.</p> <p>Craig [00:00:47]: A new memory management feature that's coming to chat gpt soon, I hope. OpenAI's scary good video generator, and I'll close with insights on how to recognize AI generated text. Lately, I've found myself using Google's Gemini pretty frequently. I just gave a talk, actually, I'm about to give a talk. By the time you listen to this, I will have given a talk on the perils and promise of generative AI at the University of Louisiana Systems for our future conference. I wanted to include some specific uses of generative AI for the for administrative tasks. I have a lot of use cases for academic tasks, but I wanted something more for the staff side of the house. Gemini was a huge help.</p> <p>Craig [00:01:33]: It helped me brainstorm a lot of useful examples, and then I found one I wanted to dial in on, and it really helped quite a bit with that. I didn't do a side by side comparison, but Gemini's performance felt pretty similar to Chat GPT's. By the way, I use Gemini advanced, which is a subscription, service, and it's kind of Google's equivalent to chat GPT 4. One of the most promising aspects of Gemini is that it has some extensions that may prove really useful in the long run. The extensions will let you do a lot of things, for example, asking questions of Gmail messages and Google Drive documents. There's also a YouTube extension that looks interesting. My initial testing yielded kinda mixed results. 
It did well in one test, but not so well in another.</p> <p>Craig [00:02:22]: I'll try to do a longer little blurb on this later. The Gmail extension did a pretty good job of summarizing recent conversations. So I don't know. I I think it's something to keep an eye on. So currently, there are extensions for a lot of the Google Suite, Gmail, Google Docs, Google Drive, Google Flights, Google Hotels, Google Maps, and YouTube. So this is certainly worth keeping an eye on. And if you weren't aware, Gemini is Google's replacement for Bard. They rolled out some new underlying models and did a rebrand a few weeks ago.</p> <p>Craig [00:02:54]: So, anyway, if you haven't checked it out in a while, I think it's probably worth doing. One of my favorite new ish, AI tools is Zoom's meeting summary. This thing is really awesome. It actually came out as part of their AI companion, which they released last fall, but didn't really get a lot of press that I saw. And the AI companion does a number of things. I may do a longer, deeper dive on that later. But the killer use for it right now for me is to summarize meetings. It is just awesome.</p> <p>Craig [00:03:25]: If you're like me, you might get caught up in the discussions that go on during a meeting, and you forget to take some notes. And you go back a few days later to start working on whatever project you were involved with in that meeting, and you've forgotten some of the details. So this happens to me a lot. I'm kind of sad to admit, but Zoom's AI companion can help with this tremendously. Just click on start summary, and AI companion will do the rest. A little while after the meeting, if you're the host, you'll receive an email with the summary, and you can just send that to the other attendees. The summaries are also available in your Zoom dashboard, and they're easy to edit or share from there. I find the summaries to be surprisingly good.</p> <p>Craig [00:04:11]: In the newsletter, AI Goes to College, I give an example, kind of a redacted example, of a summary of a recent meeting, and it's pretty comprehensive. And I I looked over the summary right after the meeting, not long after the meeting, and it really captured the essence of the meeting. It gives a quick recap, and then it summarizes, kind of broken down by topics, the entire meeting, and it also lays out next steps. Couple of things to keep in mind, though. 1st, only the meeting host can start the summary. So if you're just an attendee, you're not the host, you can't start the summary. You can ask the host to do it, but you can't do it. AI companion is only available to paid users.</p> <p>Craig [00:04:50]: And because AI companion is only available to paid users, the same can be said about the meeting summaries. If you're using Zoom through your school and you're hosting a session and you don't see the summary button, if I were you, I'd contact your IT folks and see if AI companion needs to be enabled. There are a number of other features of AI companion, but I haven't tried them yet. One particularly interesting capability is the ability to ask questions during the meeting. So you go over to the AI companion, ask questions, and the AI companion will answer based on the transcript of what it's done so far or what it's kinda transcribed so far in the meeting. So if you come in late and you wanna catch up, you don't have to say, hey. Can somebody catch me up with what we've done so far? You can just go over to AI Companion, ask the same thing. 
Now I haven't tried this yet, but I'm going to.</p> <p>Craig [00:05:40]: In the meantime, you can check out the features. There's a link in the show notes and in the newsletter. By the way, you can subscribe to the newsletter by going to aigosetocollege.com/newsletter. There were a couple of interesting news items, kind of developments over the last week or so that I wanted to bring to your attention. One that has a lot of potential is ChatGPT's new memory management features. So soon, at least I hope it's soon, ChatGPT will be able to remember things across conversations. You'll be able to ask chat gpt to remember specific things, or it'll learn over time kind of on its own, and its memory will get better over time. Unfortunately, very few users have access to this new feature right now.</p> <p>Craig [00:06:30]: I don't. I'm refreshing my browser frequently, hoping I get it soon. But OpenAI promised to share plans for a full rollout soon. I don't know when that is, but soon. So I'm basing a lot of what I'm gonna say right now on OpenAI's blog post that announced the feature, and I'll link to that in the show notes, or you can check out the newsletter. So, for example, in your conversations, you might have told chat gpt that you like a certain tone, maybe a conversational casual tone, maybe a more formal or academic tone in your emails. In the future, Chat GPT will craft messages based on that tone. Let's say you teach management.</p> <p>Craig [00:07:09]: Once ChatGPT knows that and remembers it, when you brainstorm assignment ideas, it'll give you recommendations for management topics, not, you know, English topics or philosophy topics. If you've explained to chat gpt that you'd like to include something like reflection questions in your assignments, it'll do so automatically in the future. And I'm sure there are gonna be a lot of other really good use cases for this somewhere down the road. There's gonna be a personalization setting that allows you to turn memory on and off. That'll be useful when you're trying to do some task where it's beyond what you normally do, and you don't wanna mess up ChatGPT's memory. Another cool feature is temporary chat. The temporary chat doesn't use memory, and it unfortunately also won't appear in your chat history. I think that might be a little bit of a problem.</p> <p>Craig [00:07:57]: Seems to me that memory is the next logical step from...]]></description><content:encoded><![CDATA[<p>Welcome to AI Goes to College! In this episode, your host, Dr. Craig Van Slyke, invites you to explore the latest developments in generative AI and uncover practical insights to navigate the changing landscape of higher education.</p> <p> Discover key takeaways from Dr. Van Slyke's firsthand experiences with Google's Gemini and Zoom's AI Companion, as he shares how these innovative tools have enhanced his productivity and efficiency. Gain valuable insights into Google's Gemini, a powerful AI extension with the potential to revolutionize administrative tasks in higher education. I'll delve into the finer aspects of Gemini's performance, extensions, and its implications for the academic community.</p> <p> But that's not all—explore the fascinating potential of ChatGPT's new memory management features and get a sneak peek into OpenAI's impressive video generator, SORA. Dr. 
Van Slyke provides a candid overview of these cutting-edge AI advancements and their implications for educational content creation and engagement.</p> <p> Additionally, you'll receive expert guidance on recognizing AI-generated text, equipping you with the tools to discern authentic student responses from those generated by AI. Uncover valuable tips and strategies to detect and address inappropriate AI use in academic assignments, a crucial aspect in today's educational landscape.</p> <p> Join Dr. Craig Van Slyke in this enlightening episode as he navigates the dynamic intersection of generative AI and higher education, providing invaluable insights and actionable strategies for educators and professionals.</p> <p> Tune in to gain a deeper understanding of the transformative role of generative AI in higher education, and learn how to effectively leverage these innovative tools in your academic pursuits. Embrace the future of AI in education and stay ahead of the curve with AI Goes to College!</p> <p>To subscribe to the AI Goes to College newsletter, go to <a title="AI Goes to College newsletter" href= "https://www.aigoestocollege.com">AIGoesToCollege.com/newsletter</a>. </p> <p> </p> <p>--- Transcript ---</p> <p>Craig [00:00:14]: Welcome to AI Goes to College, the podcast that helps higher education professionals navigate the changes brought on by generative AI. I'm your host, doctor Craig Van Slyke. The podcast is a companion to the AI Goes to College newsletter. You can sign up for the newsletter at ai goes to college.com/ newsletter. This week's episode covers my impressions of Google's Gemini. Here's a spoiler. I really like it. An overview of an awesome zoom feature that a lot of people don't know about.</p> <p>Craig [00:00:47]: A new memory management feature that's coming to chat gpt soon, I hope. OpenAI's scary good video generator, and I'll close with insights on how to recognize AI generated text. Lately, I've found myself using Google's Gemini pretty frequently. I just gave a talk, actually, I'm about to give a talk. By the time you listen to this, I will have given a talk on the perils and promise of generative AI at the University of Louisiana Systems for our future conference. I wanted to include some specific uses of generative AI for the for administrative tasks. I have a lot of use cases for academic tasks, but I wanted something more for the staff side of the house. Gemini was a huge help.</p> <p>Craig [00:01:33]: It helped me brainstorm a lot of useful examples, and then I found one I wanted to dial in on, and it really helped quite a bit with that. I didn't do a side by side comparison, but Gemini's performance felt pretty similar to Chat GPT's. By the way, I use Gemini advanced, which is a subscription, service, and it's kind of Google's equivalent to chat GPT 4. One of the most promising aspects of Gemini is that it has some extensions that may prove really useful in the long run. The extensions will let you do a lot of things, for example, asking questions of Gmail messages and Google Drive documents. There's also a YouTube extension that looks interesting. My initial testing yielded kinda mixed results. It did well in one test, but not so well in another.</p> <p>Craig [00:02:22]: I'll try to do a longer little blurb on this later. The Gmail extension did a pretty good job of summarizing recent conversations. So I don't know. I I think it's something to keep an eye on. 
So currently, there are extensions for a lot of the Google Suite, Gmail, Google Docs, Google Drive, Google Flights, Google Hotels, Google Maps, and YouTube. So this is certainly worth keeping an eye on. And if you weren't aware, Gemini is Google's replacement for Bard. They rolled out some new underlying models and did a rebrand a few weeks ago.</p> <p>Craig [00:02:54]: So, anyway, if you haven't checked it out in a while, I think it's probably worth doing. One of my favorite newish AI tools is Zoom's meeting summary. This thing is really awesome. It actually came out as part of their AI companion, which they released last fall, but didn't really get a lot of press that I saw. And the AI companion does a number of things. I may do a longer, deeper dive on that later. But the killer use for it right now for me is to summarize meetings. It is just awesome.</p> <p>Craig [00:03:25]: If you're like me, you might get caught up in the discussions that go on during a meeting, and you forget to take some notes. And you go back a few days later to start working on whatever project you were involved with in that meeting, and you've forgotten some of the details. So this happens to me a lot. I'm kind of sad to admit, but Zoom's AI companion can help with this tremendously. Just click on start summary, and AI companion will do the rest. A little while after the meeting, if you're the host, you'll receive an email with the summary, and you can just send that to the other attendees. The summaries are also available in your Zoom dashboard, and they're easy to edit or share from there. I find the summaries to be surprisingly good.</p> <p>Craig [00:04:11]: In the newsletter, AI Goes to College, I give an example, kind of a redacted example, of a summary of a recent meeting, and it's pretty comprehensive. And I looked over the summary right after the meeting, not long after the meeting, and it really captured the essence of the meeting. It gives a quick recap, and then it summarizes, kind of broken down by topics, the entire meeting, and it also lays out next steps. Couple of things to keep in mind, though. 1st, only the meeting host can start the summary. So if you're just an attendee, you're not the host, you can't start the summary. You can ask the host to do it, but you can't do it. AI companion is only available to paid users.</p> <p>Craig [00:04:50]: And because AI companion is only available to paid users, the same can be said about the meeting summaries. If you're using Zoom through your school and you're hosting a session and you don't see the summary button, if I were you, I'd contact your IT folks and see if AI companion needs to be enabled. There are a number of other features of AI companion, but I haven't tried them yet. One particularly interesting capability is the ability to ask questions during the meeting. So you go over to the AI companion, ask questions, and the AI companion will answer based on the transcript of what it's done so far or what it's kinda transcribed so far in the meeting. So if you come in late and you wanna catch up, you don't have to say, hey. Can somebody catch me up with what we've done so far? You can just go over to AI Companion, ask the same thing. Now I haven't tried this yet, but I'm going to.</p> <p>Craig [00:05:40]: In the meantime, you can check out the features. There's a link in the show notes and in the newsletter. By the way, you can subscribe to the newsletter by going to aigoestocollege.com/newsletter.
There were a couple of interesting news items, kind of developments over the last week or so that I wanted to bring to your attention. One that has a lot of potential is ChatGPT's new memory management features. So soon, at least I hope it's soon, ChatGPT will be able to remember things across conversations. You'll be able to ask chat gpt to remember specific things, or it'll learn over time kind of on its own, and its memory will get better over time. Unfortunately, very few users have access to this new feature right now.</p> <p>Craig [00:06:30]: I don't. I'm refreshing my browser frequently, hoping I get it soon. But OpenAI promised to share plans for a full rollout soon. I don't know when that is, but soon. So I'm basing a lot of what I'm gonna say right now on OpenAI's blog post that announced the feature, and I'll link to that in the show notes, or you can check out the newsletter. So, for example, in your conversations, you might have told chat gpt that you like a certain tone, maybe a conversational casual tone, maybe a more formal or academic tone in your emails. In the future, Chat GPT will craft messages based on that tone. Let's say you teach management.</p> <p>Craig [00:07:09]: Once ChatGPT knows that and remembers it, when you brainstorm assignment ideas, it'll give you recommendations for management topics, not, you know, English topics or philosophy topics. If you've explained to chat gpt that you'd like to include something like reflection questions in your assignments, it'll do so automatically in the future. And I'm sure there are gonna be a lot of other really good use cases for this somewhere down the road. There's gonna be a personalization setting that allows you to turn memory on and off. That'll be useful when you're trying to do some task where it's beyond what you normally do, and you don't wanna mess up ChatGPT's memory. Another cool feature is temporary chat. The temporary chat doesn't use memory, and it unfortunately also won't appear in your chat history. I think that might be a little bit of a problem.</p> <p>Craig [00:07:57]: Seems to me that memory is the next logical step from custom instructions, which pro users are able to use. Custom instructions lets you give ChatGPT persistent instructions that apply to all conversations. So, for example, one of my custom instructions is respond as a very knowledgeable, trusted adviser and assistant. Responses should be fairly detailed. I would like chat GPT to respond as a kind but honest colleague who is not afraid to provide useful critiques. Of course, I do have some privacy concerns about the memory feature. We're gonna need to figure some of those out. You need to be cautious about anything you put into generative AI regardless of this memory feature.</p> <p>Craig [00:08:41]: Your school may have policies that restrict what you can put in. Best bet is just to assume that whatever you put into a generative AI tool is gonna be used for training the models down the road. So I don't know. I wanna keep an eye on that. I think it's gonna be a really interesting feature that could improve performance quite a bit. So I'll give updates. As soon as I get access to the memory feature, I'll give you a full review. The other thing that came out recently, kind of stole some of Gemini's thunder, was OpenAI's Sora.</p> <p>Craig [00:09:10]: That's s o r a. It creates videos based on prompts. And there are a number of tools out there that will produce videos based on prompts, but, you know, they're okay at best.
I've tried a couple of them and have abandoned them pretty quickly. Sora is scary. It is absolutely amazing what that tool will produce given some pretty simple prompts. So Sora creates these realistic and imaginative scenes from text instructions. That's from Sora's website.</p> <p>Craig [00:09:42]: And the prompts can be pretty simple. So here's an example of a prompt that they had for a very professional looking video. Here's the whole prompt. Historical footage of California during the gold rush. And there's a link to this video in the show notes, or you can go to Sora's website, which is just openai.com/sora, s o r a. And you can check out the video there. It's probably not as good as what somebody could do who is really a professional cinematographer, but it's really good. When it's released to the public, I can see it being used for a lot of kind of b roll footage, maybe not the main part of the story.</p> <p>Craig [00:10:22]: A lot of news organizations like to use b roll. B roll is just kind of this generic footage that really isn't part of an interview or the actual story that's being covered. It might be useful for spicing up online learning materials or for creating recruiting videos or some things like that. I don't know. We're gonna have to see. It's not available to the public yet. Although the videos are pretty fantastic, there are some odd things about a few of them. There's a great video of a woman walking down a Tokyo street.</p> <p>Craig [00:10:54]: Every digital person in the video seems to have mostly the same cadence to their walk. It almost looks choreographed. They're not perfectly in sync, but it's close enough to make everything seem a little bit odd and a little bit artificial. And if you look closely enough, you can see little oddities in a lot of the videos, but you kinda have to look for them, at least I did. Right now, Sora videos are limited to 1 minute, but that'll probably change in the future. One of the things I really like about the Sora website is that OpenAI includes some failures as well. There's a video of a guy running the wrong way on a treadmill, and there's also a kind of disturbing video of gray wolf pups that seem to emerge from a single pup. It's a little odd.</p> <p>Craig [00:11:42]: Fascinating, though. So I can see this being used a lot for training videos. I think it could maybe enhance the engagement capabilities of some online learning materials, but I can also see Sora and some similar tools that are likely to emerge as being a time sink. It's intriguing to create cool new images and videos to add to your website, your lectures, and presentations. But I can see myself wasting a lot of time on something that, at the end of the day, may not make much difference. I just did this. I was trying to get DALL-E and some other AI tools to produce an image for the presentation I'm gonna give or just gave, depending upon when you're listening. And, you know, it got to where it was kind of okay.</p> <p>Craig [00:12:30]: But, eventually, I took about 5 or 10 minutes and just went into a drawing tool and drew one that actually was better at making the point I wanted to make. So kind of beware of the rabbit hole of AI. It's real, and you can really waste a lot of time. Although, I do have to say it's kind of fun. And there's nothing wrong with wasting a little bit of time, not really wasting, but learning the capabilities of the tool. Alright. Here is my tip of the week.
If you give any at home assignments in your courses, you've probably received responses that were generated by a generative AI tool.</p> <p>Craig [00:13:07]: And if you haven't, yeah, you probably will soon. It's just the way things are now. AI detectors do not work reliably, although I think they may be getting a little bit better. So what can you do? Well, the best approach, and this is my opinion and that of a lot of other experts, is to modify your assignments to make it harder for students just to cheat with generative AI. I'll write quite a bit about this in an upcoming newsletter edition, and I'll talk about that here on the podcast as well. But in the meantime, getting better about sniffing out generative AI written text is probably a good thing to do. The first thing I suggest is to run your assignments through 1 or 2 generative AI tools. Do this for several assignments, and you'll start to see some common characteristics of generative AI responses.</p> <p>Craig [00:13:57]: Look. Let's face it. A lot of the students who use AI tools inappropriately are lazy, and they'll just copy and paste the answer into Word or into the response section of your learning management system or whatever. They're not gonna work very hard to try to mask the generative AI use. If they were willing to work that hard, maybe they'd use generative AI more appropriately. So if you know a little bit about the tells, the indicators of generative AI text, it can be useful in kind of correcting those students. I really encourage you to go to the newsletter, aigoestocollege.com/newsletter. You can sign up there and subscribe because a lot of what I'm gonna tell you is kind of hard to explain, but it's pretty clear once you see it.</p> <p>Craig [00:14:44]: So go to the newsletter. The first thing is that generative AI has kind of a standard way that it formats longer responses. It goes back to the fact that a lot of these tools use something called markdown language to format the more complex responses. Markdown is a markup language, I know that's kind of confusing, that uses symbols to allow formatted text using a plain text editor rather than a normal word processor. I use markdown to create the newsletter. Because generative AI systems often use markdown, they tend to format text in kind of a limited number of ways. For example, generative AI tools love numbered or bulleted lists with bold faced headings, often with details set off with a colon.</p> <p>Craig [00:15:31]: So it'll be bullet point, bold face, colon. It doesn't always do that, but it's often something like that. Like I said, I put a couple of examples in the newsletter, so you might wanna check that out. So one of the first clues for me is if I see something that's formatted in that way, I start to get really suspicious. Like, it's a reasonable way to format things, and if you use markdown language, it's a pretty good way to format things. But I'm guessing, unless you're maybe in computer science or information systems or something like that, not a lot of your students are using markdown language. So when I see this kind of formatting, I start to get a little bit suspicious. The next tell is the complexity of the answer.</p> <p>Craig [00:16:19]: In my principles of information systems class, the online assignments are really simple. They're just intended to get students to think about the material before the lectures or to reinforce something from a lecture. So I expect 3 or 4 sentence answers.
Maybe longer ones for some of the assignments, but usually they're pretty brief. Well, when I get a really long, detailed response, for example, I've got an assignment where I just say, it's along the lines of, how did you use web 2.0 technologies during COVID for your schoolwork? What technologies did you use? Which worked well and which were challenging? Well, if you put that into ChatGPT, you get this really nice numbered and bulleted list that's very extensive, and it's quite a good answer in a lot of ways. But it's way too long and way too detailed. And so if I saw an answer that was like that, I'm pretty sure, not just pretty sure, I'm sure the student was using generative AI. And if you look at the answer that's in the newsletter, you can see that the answer is very impersonal.</p> <p>Craig [00:17:33]: The true answers say things like my school or I really liked or I hated this tool or that tool, sometimes they'll crack on their teachers a little bit or on their schools. The generative AI response is very cold and very factual. And then generative AI likes to use kind of bigger words. In the answer that I put in the newsletter, it uses socioeconomic, which, yeah, students know that word maybe. But how many of them are gonna use it? Continuity of instruction. I've never had a student say, I'm concerned about continuity of instruction. That kind of language is a pretty huge indicator that somebody's using generative AI. Of course, clever students who are industrious can take what generative AI produces and put it in their own words.</p> <p>Craig...]]></content:encoded><link><![CDATA[http://sites.libsyn.com/502703/detecting-fake-answers-zoom-meeting-magic-and-gemini-is-pretty-awesome]]></link><guid isPermaLink="false">3b688ccc-7a71-42e3-9ed6-ab19d5691405</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Wed, 28 Feb 2024 13:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/b10f0ea4-f495-4172-a7d9-b336f2a22314/aigtc-ep-3-edited.mp3" length="25422202" type="audio/mpeg"/><itunes:duration>23:36</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>3</itunes:episode><podcast:episode>3</podcast:episode><podcast:season>1</podcast:season></item><item><title>Perplexity.ai, a mini-rant, and a successful experiment</title><itunes:title>Perplexity.ai, a mini-rant, and a successful experiment</itunes:title><description><![CDATA[<p> </p> <p>In this episode, Craig has a mini-rant about misleading click-bait headlines, discusses two recent generative AI surveys, gives the rundown on Google's rebrand from Bard to Gemini and Perplexity.ai and shares a modest experiment in redesigning an assignment to prevent generative AI academic dishonesty (which is a fancy way to say cheating).</p> <p>More details are available at <a href= "https://www.aigoestocollege.com/p/newsletter/">https://www.aigoestocollege.com/p/newsletter/</a>, where you can subscribe to the AI Goes to College newsletter.</p> <p>Contact Craig at <a href= "https://www.aigoestocollege.com/">https://www.aigoestocollege.com/</a> or <a href= "mailto:craig@EthicalAIUse.com">craig@EthicalAIUse.com</a></p> <p>--- Transcript ---</p> <p>Craig [00:00:10]: Welcome to episode number 2 of AI Goes to College, the podcast that helps higher ed professionals try to figure out what's going on with generative AI.
I'm your host, doctor Craig Van Slyke. So this week, I give you a mini rant. It's not a full rant, but a mini rant about misleading headlines. I'll talk about Google's release of a new model and its big rebrand from Bard to Gemini. My favorite part is gonna be when I talk about Perplexity.ai, which is generating a lot of interest right now, and I think it's tailor made for higher ed, even though I don't think that they're restricting the audience to higher ed, and some promising results from a little experiment I did in redesigning an assignment. I'm gonna hit the highlights in this episode of the podcast. But if you want the full details, go to aigoestocollege.com and click on the newsletter link and subscribe to my newsletter. A lot more details, screenshots, that sort of thing there.</p> <p>Craig [00:01:09]: So here's my rant. Cengage, and if you're in higher ed, you know who Cengage is, they call themselves course material publishers, just released its 2023 digital learning pulse survey. As far as I can tell, this is the 1st time the survey gathered data about AI. The results are pretty interesting. It says only 23% of faculty at 4 year schools thought that their institutions were prepared for AI related changes, and that number was only 16% for faculty at 2 year schools. 41% of faculty across the 2 different types of institutions thought that generative AI would bring considerable or massive amounts of change to their institutions. What bothers me about this survey is really not the survey itself, but how it's being reported. So the headline of the article from which I kind of learned about this survey read, Survey reveals only 16% of faculty is ready for Gen AI in higher ed, which is not at all what the survey was about.</p> <p>Craig [00:02:22]: The survey, at least the part of it I'm talking about, asked 2 generative AI related questions. Do you think your institution is prepared for AI related changes? And how much will AI tools change your institution over the next 5 years? So first of all, that really isn't specific to generative AI, although I think that's what most people would interpret AI as. The title of the article that led me to the survey said that faculty aren't ready. Well, that's not what the survey asked about. It didn't ask if the faculty were ready, although that would have been a good thing to ask. It asked if they thought their institutions were ready. So I want to caution all of you to do something you already know you should be doing.</p> <p>Craig [00:03:09]: Read these clickbait headlines, and there are a lot of them. Read the articles with a critical eye. If it's something that's important, if it's something that you're going to try to rely on to make any sort of a decision or to form your attitudes, take the time to look at the underlying data. Don't just look at how that particular author is putting the data. Look at the data yourself. All of that being said, I think we're probably not especially well prepared collectively for generative AI, and that's not a big surprise. It's still relatively new, and it's changing very rapidly. So we'll see.</p> <p>Craig [00:03:48]: Speaking of changes, Google Bard is now Google Gemini, and it's not just a rebrand. So Google also, as part of the rebrand, announced that they have some new models. So with Gemini, formerly Bard, which you can find at gemini.google.com. There are 2 versions at the moment, Gemini and Gemini advanced, and this is kind of the same as Chat GPT and Chat GPT Pro.
The nomenclature is a little bit confusing. Gemini is a family of models. Ultra is the big dog high performance model. Pro is kind of the regular model, and Nano is a light version optimized for efficiency, which I think signals that Google is gonna make a push into AI on mobile devices.</p> <p>Craig [00:04:36]: I was pretty confused about the names and what models there were and that sort of thing. So I asked Gemini to explain it to me. The details of that conversation are in the newsletter, which is available at aigoestocollege.com/newsletter. Gemini is kind of like GPT 3.5. It's fine for most things. If Gemini isn't up to the task, try Gemini advanced, which is kind of like GPT 4. So far, I've been pretty happy with my use of Gemini advanced. It did a good job of helping me unravel the various names and models related to Gemini, and I've played with it for some course related tasks, and it's performed pretty well.</p> <p>Craig [00:05:17]: I'm not sure I'd give up ChatGPT Pro for Gemini advanced, but it's a nice option, and I'm playing around with all of them. So there's no big surprise. You know, your experiences may vary, but I would suggest that you try it out for yourself. If you do, I'd love to hear your impressions. You can send those to me at craig. That's craig@ethicalaiuse.com. There was another generative AI education survey that was out in the news recently. A higher ed think tank, and I don't even know what that means, HEPI, released a policy note that included some results from a survey of 1250 undergraduate students in the UK. According to that survey, 53% of students used GAI to help with their studies, but only 5% said that they were likely to use AI to cheat.</p> <p>Craig [00:06:13]: I don't doubt that the statistics accurately reflected the survey responses, but I'm pretty skeptical about both of these numbers. 53% seems pretty high for students that have actually used generative AI to help them with their studies, and 5% seems pretty low for those who said that they might use AI to cheat, but I don't know. I think there are a lot more students that are either using or going to use generative AI kind of at the edges of ethical use. So I still think that the uptake of generative AI among students is lower than some of us might think or some people might think, especially if we consider regular use. So they may have played around with it, but I just don't think many students are using it regularly yet. One of the reasons I like this particular little article, which is linked in the newsletter, is that it did discuss the digital divide problem. The digital divide is real, and it has real consequences for a lot of aspects of society, including higher ed. We need to keep chipping away at the digital divide if we truly want a just society.</p> <p>Craig [00:07:27]: And generative AI is just going to widen the digital divide. More details in the newsletter, which you can access at aigoestocollege.com/newsletter. It feels like it ought to be a drinking game. How many times will they say aigoestocollege.com/newsletter? So let's get to the resource of the week. There's been a lot of online chatter about Perplexity.ai. The gist of all of this talk is that Perplexity is becoming kind of a go to generative AI tool when you wanna uncover sources. There's a lot of hype that says this is going to be the new Google, and I'm not so sure about that, but it is a very useful tool.
Exactly what it is is a little unclear at first.</p> <p>Craig [00:08:16]: 1st, I'm gonna read you verbatim what the about page says. Perplexity was founded on the belief that searching for information should be a straightforward, efficient experience, free from the influence of advertising driven models. We exist because there's a clear demand for a platform that cuts through the noise of information overload, delivering precise, user focused answers in an era where time is a premium. I couldn't argue with any of that, but I don't know what it means. Their overview page is a little bit better, and it talks about some of the use cases for Perplexity.ai, answering questions, exploring topics in-depth, organizing your library and interacting with your data. I can personally attest that Perplexity is pretty good with the first 3, but I haven't really tried it with my own files yet. Here's a problem that we have with search. If you go into Google or some other search engine, you want information, but a search engine doesn't really give you the information you want.</p> <p>Craig [00:09:21]: It gives you a list of websites that may or may not include somewhere the information that you want. Perplexity is much more about giving you the actual information that you're trying to get and telling you where it got that information from. And so that's a fundamental difference, and I think it could ultimately reshape how we search for information on the web, and I think that's a good thing. There are a number of things that set Perplexity apart. It's got a copilot mode that gives you a guided search experience. That can be really helpful. So what it does is it will ask you first, do you wanna focus on particular types of resources. So right now, it's got 6 different categories.</p> <p>Craig [00:10:09]: Well, 5 and then an all. So it's got all where you search across the entire Internet, Academic, and this is a big one, where it searches only in published academic papers. Writing doesn't really search the web. It just helps you. It'll...]]></description><content:encoded><![CDATA[<p> </p> <p>In this episode, Craig has a mini-rant about misleading click-bait headlines, discusses two recent generative AI surveys, gives the rundown on Google's rebrand from Bard to Gemini and Perplexity.ai and shares a modest experiment in redesigning an assignment to prevent generative AI academic dishonesty (which is a fancy way to say cheating).</p> <p>More details are available at <a href= "https://www.aigoestocollege.com/p/newsletter/">https://www.aigoestocollege.com/p/newsletter/</a>, where you can subscribe to the AI Goes to College newsletter.</p> <p>Contact Craig at <a href= "https://www.aigoestocollege.com/">https://www.aigoestocollege.com/</a> or <a href= "mailto:craig@EthicalAIUse.com">craig@EthicalAIUse.com</a></p> <p>--- Transcript ---</p> <p>Craig [00:00:10]: Welcome to episode number 2 of AI Goes to College, the podcast that helps higher ed professionals try to figure out what's going on with generative AI. I'm your host, doctor Craig Van Slyke. So this week, I give you a mini rant. It's not a full rant, but a mini rant about misleading headlines. I'll talk about Google's release of a new model and its big rebrand from Bard to Gemini.
My favorite part is gonna be when I talk about Perplexity.ai, which is generating a lot of interest right now, and I think it's tailor made for higher ed, even though I don't think that they're restricting the audience to higher ed, and some promising results from a little experiment I did in redesigning an assignment. I'm gonna hit the highlights in this episode of the podcast. But if you want the full details, go to aigoestocollege.com and click on the newsletter link and subscribe to my newsletter. A lot more details, screenshots, that sort of thing there.</p> <p>Craig [00:01:09]: So here's my rant. Cengage, and if you're in higher ed, you know who Cengage is, they call themselves course material publishers, just released its 2023 digital learning pulse survey. As far as I can tell, this is the 1st time the survey gathered data about AI. The results are pretty interesting. It says only 23% of faculty at 4 year schools thought that their institutions were prepared for AI related changes, and that number was only 16% for faculty at 2 year schools. 41% of faculty across the 2 different types of institutions thought that generative AI would bring considerable or massive amounts of change to their institutions. What bothers me about this survey is really not the survey itself, but how it's being reported. So the headline of the article from which I kind of learned about this survey read, Survey reveals only 16% of faculty is ready for Gen AI in higher ed, which is not at all what the survey was about.</p> <p>Craig [00:02:22]: The survey, at least the part of it I'm talking about, asked 2 generative AI related questions. Do you think your institution is prepared for AI related changes? And how much will AI tools change your institution over the next 5 years? So first of all, that really isn't specific to generative AI, although I think that's what most people would interpret AI as. The title of the article that led me to the survey said that faculty aren't ready. Well, that's not what the survey asked about. It didn't ask if the faculty were ready, although that would have been a good thing to ask. It asked if they thought their institutions were ready. So I want to caution all of you to do something you already know you should be doing.</p> <p>Craig [00:03:09]: Read these clickbait headlines, and there are a lot of them. Read the articles with a critical eye. If it's something that's important, if it's something that you're going to try to rely on to make any sort of a decision or to form your attitudes, take the time to look at the underlying data. Don't just look at how that particular author is putting the data. Look at the data yourself. All of that being said, I think we're probably not especially well prepared collectively for generative AI, and that's not a big surprise. It's still relatively new, and it's changing very rapidly. So we'll see.</p> <p>Craig [00:03:48]: Speaking of changes, Google Bard is now Google Gemini, and it's not just a rebrand. So Google also, as part of the rebrand, announced that they have some new models. So with Gemini, formerly Bard, which you can find at gemini.google.com. There are 2 versions at the moment, Gemini and Gemini advanced, and this is kind of the same as Chat GPT and Chat GPT Pro. The nomenclature is a little bit confusing. Gemini is a family of models. Ultra is the big dog high performance model.
Pro is kind of the regular model, and Nano is a light version optimized for efficiency, which I think signals that Google is gonna make a push into AI on mobile devices.</p> <p>Craig [00:04:36]: I was pretty confused about the names and what models there were and that sort of thing. So I asked Gemini to explain it to me. The details of that conversation are in the newsletter, which is available at aigoestocollege.com/newsletter. Gemini is kind of like GPT 3.5. It's fine for most things. If Gemini isn't up to the task, try Gemini advanced, which is kind of like GPT 4. So far, I've been pretty happy with my use of Gemini advanced. It did a good job of helping me unravel the various names and models related to Gemini, and I've played with it for some course related tasks, and it's performed pretty well.</p> <p>Craig [00:05:17]: I'm not sure I'd give up ChatGPT Pro for Gemini advanced, but it's a nice option, and I'm playing around with all of them. So there's no big surprise. You know, your experiences may vary, but I would suggest that you try it out for yourself. If you do, I'd love to hear your impressions. You can send those to me at craig. That's craig@ethicalaiuse.com. There was another generative AI education survey that was out in the news recently. A higher ed think tank, and I don't even know what that means, HEPI, released a policy note that included some results from a survey of 1250 undergraduate students in the UK. According to that survey, 53% of students used GAI to help with their studies, but only 5% said that they were likely to use AI to cheat.</p> <p>Craig [00:06:13]: I don't doubt that the statistics accurately reflected the survey responses, but I'm pretty skeptical about both of these numbers. 53% seems pretty high for students that have actually used generative AI to help them with their studies, and 5% seems pretty low for those who said that they might use AI to cheat, but I don't know. I think there are a lot more students that are either using or going to use generative AI kind of at the edges of ethical use. So I still think that the uptake of generative AI among students is lower than some of us might think or some people might think, especially if we consider regular use. So they may have played around with it, but I just don't think many students are using it regularly yet. One of the reasons I like this particular little article, which is linked in the newsletter, is that it did discuss the digital divide problem. The digital divide is real, and it has real consequences for a lot of aspects of society, including higher ed. We need to keep chipping away at the digital divide if we truly want a just society.</p> <p>Craig [00:07:27]: And generative AI is just going to widen the digital divide. More details in the newsletter, which you can access at aigoestocollege.com/newsletter. It feels like it ought to be a drinking game. How many times will they say aigoestocollege.com/newsletter? So let's get to the resource of the week. There's been a lot of online chatter about Perplexity.ai. The gist of all of this talk is that Perplexity is becoming kind of a go to generative AI tool when you wanna uncover sources. There's a lot of hype that says this is going to be the new Google, and I'm not so sure about that, but it is a very useful tool. Exactly what it is is a little unclear at first.</p> <p>Craig [00:08:16]: 1st, I'm gonna read you verbatim what the about page says.
Perplexity was founded on the belief that searching for information should be a straightforward, efficient experience, free from the influence of advertising driven models. We exist because there's a clear demand for a platform that cuts through the noise of information overload, delivering precise, user focused answers in an era where time is a premium. I couldn't argue with any of that, but I don't know what it means. Their overview page is a little bit better, and it talks about some of the use cases for Perplexity.ai, answering questions, exploring topics in-depth, organizing your library and interacting with your data. I can personally attest that Perplexity is pretty good with the first 3, but I haven't really tried it with my own files yet. Here's a problem that we have with search. If you go into Google or some other search engine, you want information, but a search engine doesn't really give you the information you want.</p> <p>Craig [00:09:21]: It gives you a list of websites that may or may not include somewhere the information that you want. Perplexity is much more about giving you the actual information that you're trying to get and telling you where it got that information from. And so that's a fundamental difference, and I think it could ultimately reshape how we search for information on the web, and I think that's a good thing. There are a number of things that set Perplexity apart. It's got a copilot mode that gives you a guided search experience. That can be really helpful. So what it does is it will ask you first, do you wanna focus on particular types of resources. So right now, it's got 6 different categories.</p> <p>Craig [00:10:09]: Well, 5 and then an all. So it's got all where you search across the entire Internet, Academic, and this is a big one, where it searches only in published academic papers. Writing doesn't really search the web. It just helps you. It'll generate text or chat without searching the web. Wolfram Alpha is a computational knowledge engine. It'll search YouTube, and it'll also search Reddit, which I think is pretty interesting. So you can go broad or you can go really narrow.</p> <p>Craig [00:10:41]: Another thing that it does is it will ask clarifying questions when it feels like it's necessary. Feels like. That's weird. When it somehow in its large language model brain thinks that it's necessary. I give an example in the newsletter where I'd say I want to explain generative AI to a nonexpert audience. The audience will be intelligent but won't have any background in AI or computer science. What topics would you cover? And instead of just giving me an answer, now I'm using Perplexity's Copilot here. It says, what is the main purpose of your explanation? Basic understanding, applications, or risks and benefits. And so you can specify basic understanding or applications of generative AI or what are the risks or benefits, or you can choose all of those, or you can provide some other sort of clarifying information.</p> <p>Craig [00:11:35]: That's really useful, so it doesn't take you down as many kind of unproductive paths. Perplexity's Copilot will even kinda show you the steps it took in generating your response, or its response rather. So you can take a look at that in the newsletter. I know I'm saying that a lot, but there's a nice little screenshot that'll give you a better idea of what I'm talking about there. You can also look at the underlying sources that Perplexity used to generate its responses.
So for example, the little explain generative AI prompt that I gave it came up with 24 different sources along with an answer. So I can dig into any one of those 24 sources to see exactly what it was talking about.</p> <p>Craig [00:12:25]: And when Perplexity gives me its answer, it gives footnotes, little numbers that refer back to those sources so you can dig in. You can also do the normal chat thing like asking follow-up questions. So it's really quite good. Here's one of my favorite, favorite, favorite features of Perplexity: it allows you to create what it calls collections. So collections allow you to group together different conversations, which Perplexity calls threads. And so one of my biggest frustrations with ChatGPT, with the interface, is I'll have some conversation with it and then wanna go back to that topic a couple of weeks later, and I can't find that conversation because I've had 200 other conversations in the meantime. A little pro tip, you can search your conversations in the mobile app, or as far as I know, you can't do it on the web interface yet. But you can always go in, search whatever keyword you're gonna search on, find the conversation, and then add to it, and it'll pop up on the top of your list on the website.</p> <p>Craig [00:13:37]: So I know that was kind of a muddled description. But, if it's unclear, just email me, craig@ethicalaiuse.com. So these collections can be really, really useful. I was working on something for my dean recently. I was using Perplexity. I put all of those conversations in a collection. So to me, Perplexity.ai is one of the more interesting tools that I've seen come out recently. If you haven't checked it out, you should.</p> <p>Craig [00:14:07]: They have a free tier that you can play around with. And I'd used it and then almost immediately paid for an annual pro subscription. So, really, I encourage you to check it out. Alright. So here's my little experiment. So I'm teaching principles of information systems this term, and I include some pre class online assignments. These are simple little things that all I'm trying to do is get them to engage with the material a little bit before class. They're easy, and I'm very lenient in the grading.</p> <p>Craig [00:14:41]: Basically, if you put any effort into it at all, you get full credit as long as it's not late. But, unsurprisingly, I noticed that some submissions looked suspiciously like they were generated with generative AI. The 1st time, I let it slide and just commented on it in class. The 2nd time, I required students to resubmit. I just said, I'm giving you a 0 for now. Put this in your own words, and I'll give you credit. I'm teaching the same class in the upcoming spring term. We're on the quarter system, by the way.</p> <p>Craig [00:15:14]: And so I started thinking about how to modify these assignments to keep students from just copying and pasting in a generative AI response. I know this is gonna sound incredibly lazy of me, but I don't wanna be spending my entire quarter dealing with academic honesty reports. I'd rather just prevent the problems in the 1st place, and we're gonna have to do this kind of thing. We're gonna have to rethink how we do evaluations, how we assess learning, how we create our class activities. So I decided, I'm gonna try to modify an upcoming assignment. So this is a little activity I've used for years, and it's very simple.
The original assignment was compare and contrast supply chain management systems and customer relationship management systems. Give 3 ways they're similar and 3 ways they're different.</p> <p>Craig [00:16:05]: Like I said, these are kinda little lame o activities, but but you you can see where I'm going with this. I want students to kind of have to look at what the 2 2 different types of enterprise systems that we're talking about are. You know, they'll start to get some understanding of kind of what they're all about, so I decided to change the assignment. And and I'm gonna give you an overview of what I did, but all the details are in the newsletter. So I basically said, Hey. I'm gonna give you a task, and then I'm gonna give you the answer that was given by generative AI. You're then going to compare that answer to the information in the textbook and briefly describe how the information from generative AI And the textbook are similar and how they're different. And then I go through and say, alright.</p> <p>Craig [00:16:56]: Here's what I put into, generative AI, here's what it spit back. And then the students had to to kind of do a little bit of work. I I was Pretty happy with the result. Some students absolutely missed what I was asking them to do, but that's okay because I'm not sure it was entirely clear. But the ones that got what I wanted them to do did a pretty good job of going back into the textbook and kind of seeing what the textbook said and then seeing what The answer, comparing that to the answer generative AI gave. Some students even went so far as to say, like, on page 247 of the textbook, it said this, And generative AI said that. And so I was pretty happy with the results considering I had about 15 minutes into revising the assignment. So I'm gonna do more of these.</p> <p>Craig [00:17:47]: As I said, I'm teaching the class again in the spring, so I'm gonna spend part of the break Redoing some of my assignments, my online activities to make it to where they have they can't just copy and paste the question into generative AI. And I'll report on those experiments, as I go through them. Alright. That's all I have for you today. I'm out of breath, and you're probably tired of listening. So I will talk to you next time. Thanks for listening to AI Goes to College. If you found this episode useful, you'll love the AI Goes to College newsletter.</p> <p>Craig [00:18:28]: Each edition brings you useful tips, news, and insights that you can use to help you figure out what in the world is going on with generative AI and how it's affecting higher ed. Just go to AI goes to college.com to sign up. I won't try to sell you anything, and I won't spam you or share your information with anybody else. As an incentive for subscribing, I'll send you the getting started with generative AI guide. 
Even if you're an expert with AI, You'll find the guide useful for helping your less knowledgeable colleagues.</p> <p> </p> <p> </p>]]></content:encoded><link><![CDATA[https://www.aigoestocollege.com/]]></link><guid isPermaLink="false">f4f8cb31-231d-4d0f-b987-f309beb321b6</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Thu, 15 Feb 2024 11:00:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/2832d323-edf4-48c7-8bb0-4af14bc4b516/aigtc-ep-2-perplexity.mp3" length="21188067" type="audio/mpeg"/><itunes:duration>19:14</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>2</itunes:episode><podcast:episode>2</podcast:episode><podcast:season>1</podcast:season></item><item><title>Should you trust AI?</title><itunes:title>Should you trust AI?</itunes:title><description><![CDATA[<p data-pm-slice="1 1 []">In the debut episode of AI Goes to College, join host Craig Van Slyke as he delves into the critical question: Should you trust AI? Drawing on his expertise in the field, Craig explores the nuanced answer to this question, shedding light on the capabilities and limitations of generative AI in various contexts. Listeners will gain valuable insights into when it's appropriate to trust AI, and how to navigate the consequences of relying on its output.</p> <p>Additionally, Craig reviews Consensus, a promising AI research app, sharing his firsthand experience and recommendations for its use. The episode also covers recent news items, including Arizona State University's partnership with OpenAI and EdTech firm Anthology's AI policy framework for higher education.</p> <p>To wrap up, Craig shares his top choice for a paid generative AI service, highlighting the unique advantages of Poe and why it stands out amidst other options in the field. He offers practical advice for leveraging generative AI tools and emphasizes the importance of thoroughly understanding their capabilities before integration.</p> <p>Tune in to gain a comprehensive understanding of trusting, utilizing, and verifying generative AI, and discover valuable resources for effectively incorporating AI in the higher education landscape. Embrace the potential of AI as a powerful ally, but with a discerning eye. Don't miss the chance to expand your knowledge and make informed decisions in the ever-evolving world of generative AI.</p>]]></description><content:encoded><![CDATA[<p data-pm-slice="1 1 []">In the debut episode of AI Goes to College, join host Craig Van Slyke as he delves into the critical question: Should you trust AI? Drawing on his expertise in the field, Craig explores the nuanced answer to this question, shedding light on the capabilities and limitations of generative AI in various contexts. Listeners will gain valuable insights into when it's appropriate to trust AI, and how to navigate the consequences of relying on its output.</p> <p>Additionally, Craig reviews Consensus, a promising AI research app, sharing his firsthand experience and recommendations for its use. The episode also covers recent news items, including Arizona State University's partnership with OpenAI and EdTech firm Anthology's AI policy framework for higher education.</p> <p>To wrap up, Craig shares his top choice for a paid generative AI service, highlighting the unique advantages of Poe and why it stands out amidst other options in the field. 
He offers practical advice for leveraging generative AI tools and emphasizes the importance of thoroughly understanding their capabilities before integration.</p> <p>Tune in to gain a comprehensive understanding of trusting, utilizing, and verifying generative AI, and discover valuable resources for effectively incorporating AI in the higher education landscape. Embrace the potential of AI as a powerful ally, but with a discerning eye. Don't miss the chance to expand your knowledge and make informed decisions in the ever-evolving world of generative AI.</p>]]></content:encoded><link><![CDATA[https://www.aigoestocollege.com/]]></link><guid isPermaLink="false">734c5632-ff95-45d8-a7f6-4a57ce14b720</guid><itunes:image href="https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg"/><pubDate>Wed, 07 Feb 2024 12:29:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/15ea35f1-6c43-40e9-9647-f2b5114a675d/12599-40687-rc.mp3" length="17026561" type="audio/mpeg"/><itunes:duration>17:44</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>1</itunes:episode><podcast:episode>1</podcast:episode></item><item><title>AI Goes to College Trailer</title><itunes:title>AI Goes to College Trailer</itunes:title><description><![CDATA[<p data-pm-slice="1 1 []">In this episode, Craig provides an insightful overview of what to anticipate from the AI Goes to College podcast. He talks about his inspiration for launching the podcast and emphasizes how it can help higher education faculty and staff navigate the transformative impact of generative AI. Tune in to gain a clear understanding of the podcast's purpose and how it can support you in staying abreast of developments within the higher education landscape.</p> <p data-pm-slice="1 1 []">Craig also tells you how you can get his new e-book, Getting Started with Generative AI: A Guide for Higher Ed Professionals. (It's free!)</p> <p data-pm-slice="1 1 []">For more information, or to sign up for the AI Goes to College newsletter, go to https://www.aigoestocollege.com/</p>]]></description><content:encoded><![CDATA[<p data-pm-slice="1 1 []">In this episode, Craig provides an insightful overview of what to anticipate from the AI Goes to College podcast. He talks about his inspiration for launching the podcast and emphasizes how it can help higher education faculty and staff navigate the transformative impact of generative AI. Tune in to gain a clear understanding of the podcast's purpose and how it can support you in staying abreast of developments within the higher education landscape.</p> <p data-pm-slice="1 1 []">Craig also tells you how you can get his new e-book, Getting Started with Generative AI: A Guide for Higher Ed Professionals. 
(It's free!)</p> <p data-pm-slice="1 1 []">For more information, or to sign up for the AI Goes to College newsletter, go to https://www.aigoestocollege.com/</p>]]></content:encoded><link><![CDATA[https://www.aigoestocollege.com/]]></link><guid isPermaLink="false">24b40330-a7a0-4670-83f7-df2b105a8dec</guid><itunes:image href="https://artwork.captivate.fm/b27ea3fb-740d-4a36-aa22-00f613f42dce/aigtc-square-cover-from-getcovers-20240121-b0zxchp5zn.jpg"/><pubDate>Mon, 29 Jan 2024 11:08:00 -0400</pubDate><enclosure url="https://podcasts.captivate.fm/media/7f5feee8-f61d-498f-a414-03c59ffa7a02/aigtc-trailer-updated.mp3" length="8179651" type="audio/mpeg"/><itunes:duration>05:42</itunes:duration><itunes:explicit>false</itunes:explicit><itunes:episodeType>trailer</itunes:episodeType></item></channel></rss>