<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="https://feeds.captivate.fm/style.xsl" type="text/xsl"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><atom:link href="https://feeds.captivate.fm/short-and-sweet-ai/" rel="self" type="application/rss+xml"/><title><![CDATA[Short & Sweet AI]]></title><lastBuildDate>Mon, 16 Jan 2023 14:40:21 +0000</lastBuildDate><generator>Captivate.fm</generator><language><![CDATA[en]]></language><copyright><![CDATA[Copyright 2023 Dr. Peper]]></copyright><managingEditor>Dr. Peper</managingEditor><itunes:summary><![CDATA[What is Artificial Intelligence? It's a big part of our daily lives and you want to know. You need to know. But the explanations are so long and boring. Let me give you something short and sweet.

Join me, Dr. Peper, for 5-minute, pleasing, easy-to-understand flash talks about everything artificial intelligence. Short and Sweet AI.]]></itunes:summary><image><url>https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png</url><title>Short &amp; Sweet AI</title><link><![CDATA[https://drpepermd.com]]></link></image><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><itunes:owner><itunes:name>Dr. Peper</itunes:name></itunes:owner><itunes:author>Dr. Peper</itunes:author><description>What is Artificial Intelligence? It&apos;s a big part of our daily lives and you want to know. You need to know. But the explanations are so long and boring. Let me give you something short and sweet.

Join me, Dr. Peper, for 5-minute, pleasing, easy-to-understand flash talks about everything artificial intelligence. Short and Sweet AI.</description><link>https://drpepermd.com</link><atom:link href="https://pubsubhubbub.appspot.com" rel="hub"/><itunes:subtitle><![CDATA[A friendly flash talk about artificial intelligence]]></itunes:subtitle><itunes:explicit>no</itunes:explicit><itunes:type>episodic</itunes:type><itunes:category text="Technology"></itunes:category><itunes:category text="News"><itunes:category text="Tech News"/></itunes:category><itunes:new-feed-url>https://feeds.captivate.fm/short-and-sweet-ai/</itunes:new-feed-url><item><title>Ishiguro’s Klara and the Sun Reveals Three Rights and Two Wrongs About the Future</title><itunes:title>Ishiguro&apos;s Klara and the Sun Reveals Three Rights and Two Wrongs About the Future</itunes:title><description><![CDATA[<p>We all have thoughts of the future. Some of us will only think of it in passing, but others will spend months or even years contemplating the endless possibilities.</p><p>Kazuo Ishiguro’s vision for the future, beautifully presented in his latest book, ‘Klara and the Sun,’ shows an excellent level of thought and research. The British novelist presents an emotionally nuanced concept of what it means to be human or non-human.</p><p>In this episode of Short and Sweet AI, I discuss Ishiguro’s latest book and its depiction of robots and artificial intelligence. I also delve into what immortality could look like for humans – will it be robots in our future or something different?</p><p>In this episode, find out:</p><ul><li>What Ishiguro got right and wrong about the future of robots and AI</li><li>How Ishiguro depicts robots and the future of work</li><li>The debate about immortality – robots vs. 
the cloud</li><li>The ethical considerations of human-like robots</li></ul><br/><p><strong>Important Links &amp; Mentions:</strong></p><ul><li><a href="https://drpepermd.com/podcast-2/ep-neuralink-update/" target="_blank">Neuralink Update</a></li><li><a href="https://www.nobelprize.org/prizes/literature/2017/ishiguro/facts/" target="_blank">The Nobel Prize: Kazuo Ishiguro</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li>The Atlantic: <a href="https://www.theatlantic.com/magazine/archive/2021/04/kazuo-ishiguro-klara-and-the-sun/618083/" target="_blank">The Radiant Inner Life of a Robot</a></li><li>Wired: <a href="https://www.wired.com/story/future-of-work-remembrance-lexi-pandell/" target="_blank">The Future of Work: ‘Remembrance,’ by Lexi Pandell</a></li><li>CNN International: <a href="https://www.youtube.com/watch?v=vybotERG0SU" target="_blank">Kazuo Ishiguro asks what it is to be human</a></li><li>Waterstones: <a href="https://www.youtube.com/watch?v=6GJ7mrqo9nQ" target="_blank">Kazuo Ishiguro on Klara and the Sun</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Hello to you who are curious about AI, I’m Dr. Peper.</p><p>We all have thoughts about the future, some of us in passing and some spend months and years thinking about it. Kazuo Ishiguro’s vision, beautifully presented in his latest book, Klara and the Sun, shows much thought and research. This British novelist presents emotionally nuanced concepts about what it means to be human and not human. I’m not an artificial intelligence expert nor a Nobel Prize-winning author like Ishiguro. But I am someone who’s fascinated by artificial intelligence and wants people to understand what AI means for our future. 
From that perspective, I’ve identified three things Ishiguro got right, and two things I think he got wrong, in his new book Klara and the Sun.&nbsp;</p><p>First, his depiction of Klara, an artificial friend, or robot, meshes with my understanding of what robots will be like in the future. They will have the ability to understand and integrate information and read and understand human emotions. This ability will surpass the ability of the humans around them at times. With exposure to more human situations and more human observations, robots will increase and refine their emotional abilities. They’ll have true feelings, not simulate them.</p><p>The second thing Ishiguro gets right is the future of work. There will be substitutions of humans with machines as machines do more and more of the work. Humans will be displaced and just as in the novel, people will struggle to redefine their role in society and find new meaning.</p><p>And the third thing that Ishiguro accurately writes about is the inequality created by those who choose and can afford to have gene-edited children, described as the lifted kids compared to the non-lifted kids, and those whose parents can’t afford or choose not to have their children’s genes edited before birth. I think this will be a real possibility in the near future. There will also be major inequalities in wealth,...]]></description><content:encoded><![CDATA[<p>We all have thoughts of the future. Some of us will only think of it in passing, but others will spend months or even years contemplating the endless possibilities.</p><p>Kazuo Ishiguro’s vision for the future, beautifully presented in his latest book, ‘Klara and the Sun,’ shows an excellent level of thought and research. The British novelist presents an emotionally nuanced concept of what it means to be human or non-human.</p><p>In this episode of Short and Sweet AI, I discuss Ishiguro’s latest book and its depiction of robots and artificial intelligence. 
I also delve into what immortality could look like for humans – will it be robots in our future or something different?</p><p>In this episode, find out:</p><ul><li>What Ishiguro got right and wrong about the future of robots and AI</li><li>How Ishiguro depicts robots and the future of work</li><li>The debate about immortality – robots vs. the cloud</li><li>The ethical considerations of human-like robots</li></ul><br/><p><strong>Important Links &amp; Mentions:</strong></p><ul><li><a href="https://drpepermd.com/podcast-2/ep-neuralink-update/" target="_blank">Neuralink Update</a></li><li><a href="https://www.nobelprize.org/prizes/literature/2017/ishiguro/facts/" target="_blank">The Nobel Prize: Kazuo Ishiguro</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li>The Atlantic: <a href="https://www.theatlantic.com/magazine/archive/2021/04/kazuo-ishiguro-klara-and-the-sun/618083/" target="_blank">The Radiant Inner Life of a Robot</a></li><li>Wired: <a href="https://www.wired.com/story/future-of-work-remembrance-lexi-pandell/" target="_blank">The Future of Work: ‘Remembrance,’ by Lexi Pandell</a></li><li>CNN International: <a href="https://www.youtube.com/watch?v=vybotERG0SU" target="_blank">Kazuo Ishiguro asks what it is to be human</a></li><li>Waterstones: <a href="https://www.youtube.com/watch?v=6GJ7mrqo9nQ" target="_blank">Kazuo Ishiguro on Klara and the Sun</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Hello to you who are curious about AI, I’m Dr. Peper.</p><p>We all have thoughts about the future, some of us in passing and some spend months and years thinking about it. Kazuo Ishiguro’s vision, beautifully presented in his latest book, Klara and the Sun, shows much thought and research. This British novelist presents emotionally nuanced concepts about what it means to be human and not human. I’m not an artificial intelligence expert nor a Nobel Prize-winning author like Ishiguro. 
But I am someone who’s fascinated by artificial intelligence and wants people to understand what AI means for our future. From that perspective, I’ve identified three things Ishiguro got right, and two things I think he got wrong, in his new book Klara and the Sun.&nbsp;</p><p>First, his depiction of Klara, an artificial friend, or robot, meshes with my understanding of what robots will be like in the future. They will have the ability to understand and integrate information and read and understand human emotions. This ability will surpass the ability of the humans around them at times. With exposure to more human situations and more human observations, robots will increase and refine their emotional abilities. They’ll have true feelings, not simulate them.</p><p>The second thing Ishiguro gets right is the future of work. There will be substitutions of humans with machines as machines do more and more of the work. Humans will be displaced and just as in the novel, people will struggle to redefine their role in society and find new meaning.</p><p>And the third thing that Ishiguro accurately writes about is the inequality created by those who choose and can afford to have gene-edited children, described as the lifted kids compared to the non-lifted kids, and those whose parents can’t afford or choose not to have their children’s genes edited before birth. I think this will be a real possibility in the near future. There will also be major inequalities in wealth, employment, and opportunity as depicted in the novel.</p><p>But one thing that doesn’t make sense is that Klara is able to learn and understand her surroundings so exceedingly well and yet draw a majorly wrong conclusion. In the book, Klara reasons that people, like robots, need the sun to sustain, nourish and heal them after she misinterprets one example. 
In the future, robots will have onboard databases that would have quickly given Klara correct information about how humans, unlike robots, die from illnesses.&nbsp;This part of the plot did not seem to fit.</p><p>But even more frustrating and inaccurate, I think, is how the author depicts immortality in the future. The longevity culture thrives today so we know it will be prominent in the decades ahead.&nbsp;In the novel, Ishiguro has immortality being carried out through robots trained to learn and replicate everything about a person they’re going to replace. Klara has the ability to exactly replicate the way a human speaks, walks, and sees the world. As a robot, she can capture a human’s personality, but Ishiguro makes the point that a robot is still unable to capture “the human heart”.</p><p>Established futurists, even today, have said that in the future we will achieve immortality in a different way. We won’t rely on a robot to use its deep learning algorithms to learn and mimic everything about a person so a human can live on as a robot after death like in this book.</p><p>No, our immortality will come by uploading our consciousness to the cloud. It’s already begun. Brain–computer interfaces exist today as I’ve discussed in my episode on Neuralink. Highly complicated fields such as synthetic biology, microscopic robots, nanomaterials, and quantum computing, along with others, are merging. It’s inevitable that we’ll have the capability to connect our neocortex and the totality of who we are to a dedicated computer. We can thus choose if we want immortality in a synthetic cloud and choose when to be downloaded at some future time.</p><p>As a final point, I think what Ishiguro addresses so eloquently in the novel by what happens to Klara in the end is one of the most important parts of the book. He makes us feel connected to Klara so that her end kindles in us ethical questions. Ethical questions about how to treat, care for and dispose of robots. 
When a robot truly feels, rather than simulates emotion, then doesn’t a robot have rights?</p><p>Thanks for listening. Be curious and if you like this episode, leave a comment, or a thumbs up to let me know you like the content. From Short &amp; Sweet AI, I’m Dr. Peper.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">d327af93-5775-42ee-8396-ba74c35718b8</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 26 Apr 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/d5ff4dd7-97c2-4615-8221-1ae11e80cf95/ep-51-edited.mp3" length="9289288" type="audio/mpeg"/><itunes:duration>06:26</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>51</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>New ‘Liquid’ AI Has Neuroplasticity Like the Human Brain</title><itunes:title>New ‘Liquid’ AI Has Neuroplasticity Like the Human Brain</itunes:title><description><![CDATA[<p>What is Liquid AI, and could it prove more effective than other types of AI?</p><p>New research into neural nets and algorithms has revealed what some call “Liquid AI,” a more fluid and adaptable version of artificial intelligence.</p><p>In my previous episode, I discussed the <a href="https://drpepermd.com/2021/04/12/a-simple-explanation-of-ai/" target="_blank">basics of AI</a> and the limitations that hold it back. 
It looks like Liquid AI could provide the very solutions that the AI community has been searching for.</p><p>In this episode of Short and Sweet AI, I explore the new research behind Liquid AI, how it works, and what it does better than other types of AI.</p><p>In this episode find out:</p><ul><li>The limitations of traditional neural networks in AI</li><li>How researchers created Liquid AI</li><li>How Liquid AI differs from other types</li><li>How Liquid AI solves the limitations of computing power with smaller neural nets</li><li>Why Liquid AI is more transparent and easier to analyze</li></ul><br/><p><strong>Important Links &amp; Mentions</strong></p><ul><li><a href="https://drpepermd.com/2021/04/12/a-simple-explanation-of-ai/" target="_blank">A Simple Explanation of AI</a></li><li><a href="https://drpepermd.com/podcast-2/ep-alphafold-the-protein-folding-problem/" target="_blank">AlphaFold &amp; The Protein Folding Problem</a></li><li><a href="https://drpepermd.com/podcast-2/ep-what-is-dall-e/" target="_blank">What is DALL·E?</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li>SingularityHub: <a href="https://singularityhub.com/2021/01/31/new-liquid-ai-learns-as-it-experiences-the-world-in-real-time/" target="_blank">New ‘Liquid’ AI Learns Continuously from Its Experience of the World</a></li><li>Analytics Insight: <a href="https://www.analyticsinsight.net/why-is-liquid-neural-network-from-mit-a-revolutionary-innovation/" target="_blank">Why is a ‘Liquid’ Neural Network from MIT a Revolutionary Innovation?</a></li><li>TechCrunch: <a href="https://techcrunch.com/2021/01/28/mit-researchers-develop-a-new-liquid-neural-network-thats-better-at-adapting-to-new-info/" target="_blank">MIT researchers develop a new ‘liquid’ neural network that’s better at adapting to new info</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Hello to you who are curious about AI, I’m Dr. Peper. 
Machine learning algorithms are getting an overhaul from a very unlikely source. It’s a fascinating story.</p><p>Neural Nets have Traditional Limitations</p><p>Neural nets are the powerhouse of machine learning. They have the ability to translate whole books within seconds with Google Translate, change written text into images with DALL·E, and discover the 3D structure of a protein in hours with AlphaFold. But researchers have struggled with neural networks because of their <strong>limitations</strong>.</p><p>Neural nets cannot do anything other than what they’re trained for. They’re programmed with parameters <strong>set</strong> to give the most accurate results. But that makes them <strong>brittle</strong>, which means they can break when given new information they weren’t trained on. Today the deep learning neural nets used in autonomous driving have millions of parameters. And the newest neural nets are so complex, with hundreds of layers and billions of parameters, they require very powerful supercomputers to run the algorithms.</p><p>A Neuroplastic Neural Net based on a Nematode</p><p>Now researchers from MIT and Austria’s Institute of Science and Technology have created a new, adaptive neural network they’re describing as “liquid” AI. The algorithm’s based on the nervous system of a simple worm, C. elegans. And elegant it truly is. This worm has only three hundred and two neurons but it’s very responsive with a variety of behaviors. The teams were able to mathematically model the worm’s neurons and build them into a neural network. 
I’ve explained neural networks in my previous episode called A Simple Explanation of...]]></description><content:encoded><![CDATA[<p>What is Liquid AI, and could it prove more effective than other types of AI?</p><p>New research into neural nets and algorithms has revealed what some call “Liquid AI,” a more fluid and adaptable version of artificial intelligence.</p><p>In my previous episode, I discussed the <a href="https://drpepermd.com/2021/04/12/a-simple-explanation-of-ai/" target="_blank">basics of AI</a> and the limitations that hold it back. It looks like Liquid AI could provide the very solutions that the AI community has been searching for.</p><p>In this episode of Short and Sweet AI, I explore the new research behind Liquid AI, how it works, and what it does better than other types of AI.</p><p>In this episode find out:</p><ul><li>The limitations of traditional neural networks in AI</li><li>How researchers created Liquid AI</li><li>How Liquid AI differs from other types</li><li>How Liquid AI solves the limitations of computing power with smaller neural nets</li><li>Why Liquid AI is more transparent and easier to analyze</li></ul><br/><p><strong>Important Links &amp; Mentions</strong></p><ul><li><a href="https://drpepermd.com/2021/04/12/a-simple-explanation-of-ai/" target="_blank">A Simple Explanation of AI</a></li><li><a href="https://drpepermd.com/podcast-2/ep-alphafold-the-protein-folding-problem/" target="_blank">AlphaFold &amp; The Protein Folding Problem</a></li><li><a href="https://drpepermd.com/podcast-2/ep-what-is-dall-e/" target="_blank">What is DALL·E?</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li>SingularityHub: <a href="https://singularityhub.com/2021/01/31/new-liquid-ai-learns-as-it-experiences-the-world-in-real-time/" target="_blank">New ‘Liquid’ AI Learns Continuously from Its Experience of the World</a></li><li>Analytics Insight: <a 
href="https://www.analyticsinsight.net/why-is-liquid-neural-network-from-mit-a-revolutionary-innovation/" target="_blank">Why is a ‘Liquid’ Neural Network from MIT a Revolutionary Innovation?</a></li><li>TechCrunch: <a href="https://techcrunch.com/2021/01/28/mit-researchers-develop-a-new-liquid-neural-network-thats-better-at-adapting-to-new-info/" target="_blank">MIT researchers develop a new ‘liquid’ neural network that’s better at adapting to new info</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Hello to you who are curious about AI, I’m Dr. Peper. Machine learning algorithms are getting an overhaul from a very unlikely source. It’s a fascinating story.</p><p>Neural Nets have Traditional Limitations</p><p>Neural nets are the powerhouse of machine learning. They have the ability to translate whole books within seconds with Google Translate, change written text into images with DALL·E, and discover the 3D structure of a protein in hours with AlphaFold. But researchers have struggled with neural networks because of their <strong>limitations</strong>.</p><p>Neural nets cannot do anything other than what they’re trained for. They’re programmed with parameters <strong>set</strong> to give the most accurate results. But that makes them <strong>brittle</strong>, which means they can break when given new information they weren’t trained on. Today the deep learning neural nets used in autonomous driving have millions of parameters. And the newest neural nets are so complex, with hundreds of layers and billions of parameters, they require very powerful supercomputers to run the algorithms.</p><p>A Neuroplastic Neural Net based on a Nematode</p><p>Now researchers from MIT and Austria’s Institute of Science and Technology have created a new, adaptive neural network they’re describing as “liquid” AI. The algorithm’s based on the nervous system of a simple worm, C. elegans. And elegant it truly is. 
This worm has only three hundred and two neurons but it’s very responsive with a variety of behaviors. The teams were able to mathematically model the worm’s neurons and build them into a neural network. I’ve explained neural networks in my previous episode called A Simple Explanation of AI.</p><p>Computer Software with Neuroplasticity</p><p>The worm-brain algorithm is much simpler than the huge neural nets and yet accomplishes similar tasks. In current AI architecture, the neural net’s parameters are <strong>locked</strong> into the system after training. With liquid AI based on the mathematical models of the worm’s neurons, the parameters are able to <strong>change</strong> with time and with experience. This is a fluid neural net. As it encounters new information, it adapts. It’s an artificial brain created out of computer software but shows a kind of built-in neuroplasticity like a human brain.</p><p>When the algorithm was tested on the task of keeping an autonomous vehicle in its lane, it was just as accurate and efficient as more advanced and complex machine learning algorithms. The worm-brain model also develops new <strong>pathways</strong>. In one example the researchers found the algorithm could <strong>change its underlying mathematical equations</strong> when it encountered new information, such as rain on the autonomous vehicle’s windshield.&nbsp;This “neuroplasticity” means the neural net is less likely to break when it’s given data it hasn’t been trained on.</p><p>Liquid AI Uses Fewer Parameters, Fewer GPUs</p><p>Also, with this new approach, the researchers reduced the neural net’s size. It has only 75,000 trainable parameters instead of the millions or billions of parameters in some machine learning algorithms. This decreases the GPUs or computing power needed to run the algorithm. You can appreciate the excitement this has generated. 
Liquid AI is an adaptable machine learning algorithm that consumes less power, uses a smaller neural net, while being as accurate as the larger machine learning systems.</p><p>New Liquid AI is More Transparent</p><p>But I saved the best for last. For many years AI ethicists and researchers have been deeply troubled by machine learning systems being “black boxes” meaning how they work and arrive at their results is largely impenetrable. No one can determine exactly what’s going on within the neural nets that lead to the successful results. This can be a big problem when unsupervised machine learning models are trained on the unfiltered internet because there’s no way of knowing or controlling what they learn.</p><p>But this AI system was designed differently. It’s a new type of AI <strong>architecture</strong>. This liquid neural net is more open to observation and study. Researchers are able to analyze the neural net’s decision making and diagnose how it arrived at the answers. It’s more transparent. It’s adaptable, efficient, smaller, accurate, and transparent. Liquid AI, now that’s a good thing.</p><p>Thanks for listening. I hope you found this helpful. Be curious and if you like this episode, click the thumbs up button and leave a comment. From Short and Sweet AI, I’m Dr. Peper.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">4dd46d5b-c8a3-4bf1-b1c4-9828f7ea32d7</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 19 Apr 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/022f13fe-87d5-478b-9d43-5fe607ee4dae/ep-50-edited.mp3" length="8743912" type="audio/mpeg"/><itunes:duration>06:04</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>50</itunes:episode><itunes:author>Dr. 
Peper</itunes:author></item><item><title>A Simple Explanation of AI</title><itunes:title>A Simple Explanation of AI</itunes:title><description><![CDATA[<p>What is AI really, and how does it work?</p><p>If you are interested in AI, you’ll undoubtedly know that many of the concepts are a bit overwhelming. There are plenty of terminologies to understand, such as machine learning, deep learning, neural networks, algorithms, and much more.</p><p>With the world of AI continually evolving, it’s good to go over some of the basic concepts to better understand how it’s changing.</p><p>In this episode of Short and Sweet AI, I address some of the questions that I get asked a lot: what is AI? How does AI work? I also delve into some of the limitations of AI and their possible solutions.</p><p>In this episode find out:</p><ul><li>How AI works</li><li>What machine learning and neural networks are</li><li>How deep learning works</li><li>The limitations of AI</li><li>How AI neuroplasticity could solve the limitations of AI</li></ul><br/><p><strong>Important Links &amp; Mentions:</strong></p><ul><li><a href="https://drpepermd.com/podcast-2/ep-alphafold-the-protein-folding-problem/" target="_blank">AlphaFold &amp; The Protein Folding Problem</a></li><li><a href="https://pub.towardsai.net/what-is-machine-learning-ml-b58162f97ec7" target="_blank">What is Machine Learning?</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li>SAS: <a href="https://www.sas.com/en_us/insights/analytics/neural-networks.html" target="_blank">Neural Networks: What they are &amp; why they matter</a></li><li>ExplainThatStuff:<strong> </strong><a href="https://www.explainthatstuff.com/introduction-to-neural-networks.html" target="_blank">Neural networks</a></li><li>Quanta Magazine: <a href="https://www.quantamagazine.org/artificial-neural-nets-finally-yield-clues-to-how-brains-learn-20210218/" target="_blank">Artificial Neural Nets Finally Yield Clues to How Brains 
Learn</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Hello to you who are curious about AI. I’m Dr. Peper.</p><p>If you’re listening to this, you probably think AI’s interesting and important like me. But sometimes I find the concepts are a little overwhelming. I want to go over something I get asked a lot. People ask me, what is AI really, how does it work? Actually, there’re new things going on with how AI works. So, it’s good to go over some of the basic concepts in order to understand the way AI is changing.</p><p>How does AI work?</p><p>Artificial Intelligence happens with computers. They’re programmed using algorithms.&nbsp;Algorithms are step-by-step instructions telling the computer what to do to solve a problem. Just like a recipe has specific steps you follow in sequence, to bake a cake, or cook something. Computer scientists write algorithms using a programming language the computer understands. These computer languages have strange names like Python or C++.</p><p>The computers also perform math calculations or computations to analyze the information and give an answer. This is known as computational analysis. Basically, the programming language and math calculations are computer software. Using this software, the algorithms come up with an answer from data sets fed into the computer.</p><p>Machine Learning is a type of AI</p><p>The major AI being used today is called machine learning. Machine learning is carried out by artificial neural networks, or nets for short. Neural nets underpin the most advanced artificial intelligence being used today. They’re called neural networks because they’re based in part on the way neurons in the brain function. In the brain the neuron receives inputs or information, processes the information, and then gives a result or output.</p><p>Artificial intelligence uses digital models of brain neurons. These are artificial neurons, based on the computer binary code of ones and zeros. 
The digital neurons process information and then pass it along to other higher layers of processing. Higher, meaning the results become more specific, just like in the brain.</p><p>Deep Learning is a type of...]]></description><content:encoded><![CDATA[<p>What is AI really, and how does it work?</p><p>If you are interested in AI, you’ll undoubtedly know that many of the concepts are a bit overwhelming. There are plenty of terminologies to understand, such as machine learning, deep learning, neural networks, algorithms, and much more.</p><p>With the world of AI continually evolving, it’s good to go over some of the basic concepts to better understand how it’s changing.</p><p>In this episode of Short and Sweet AI, I address some of the questions that I get asked a lot: what is AI? How does AI work? I also delve into some of the limitations of AI and their possible solutions.</p><p>In this episode find out:</p><ul><li>How AI works</li><li>What machine learning and neural networks are</li><li>How deep learning works</li><li>The limitations of AI</li><li>How AI neuroplasticity could solve the limitations of AI</li></ul><br/><p><strong>Important Links &amp; Mentions:</strong></p><ul><li><a href="https://drpepermd.com/podcast-2/ep-alphafold-the-protein-folding-problem/" target="_blank">AlphaFold &amp; The Protein Folding Problem</a></li><li><a href="https://pub.towardsai.net/what-is-machine-learning-ml-b58162f97ec7" target="_blank">What is Machine Learning?</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li>SAS: <a href="https://www.sas.com/en_us/insights/analytics/neural-networks.html" target="_blank">Neural Networks: What they are &amp; why they matter</a></li><li>ExplainThatStuff:<strong> </strong><a href="https://www.explainthatstuff.com/introduction-to-neural-networks.html" target="_blank">Neural networks</a></li><li>Quanta Magazine: <a href="https://www.quantamagazine.org/artificial-neural-nets-finally-yield-clues-to-how-brains-learn-20210218/" 
target="_blank">Artificial Neural Nets Finally Yield Clues to How Brains Learn</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Hello to you who are curious about AI. I’m Dr. Peper.</p><p>If you’re listening to this, you probably think AI’s interesting and important like me. But sometimes I find the concepts are a little overwhelming. I want to go over something I get asked a lot. People ask me, what is AI really, how does it work? Actually, there’re new things going on with how AI works. So, it’s good to go over some of the basic concepts in order to understand the way AI is changing.</p><p>How does AI work?</p><p>Artificial Intelligence happens with computers. They’re programmed using algorithms.&nbsp;Algorithms are step-by-step instructions telling the computer what to do to solve a problem. Just like a recipe has specific steps you follow in sequence, to bake a cake, or cook something. Computer scientists write algorithms using a programming language the computer understands. These computer languages have strange names like Python or C++.</p><p>The computers also perform math calculations or computations to analyze the information and give an answer. This is known as computational analysis. Basically, the programming language and math calculations are computer software. Using this software, the algorithms come up with an answer from data sets fed into the computer.</p><p>Machine Learning is a type of AI</p><p>The major AI being used today is called machine learning. Machine learning is carried out by artificial neural networks, or nets for short. Neural nets underpin the most advanced artificial intelligence being used today. They’re called neural networks because they’re based in part on the way neurons in the brain function. In the brain the neuron receives inputs or information, processes the information, and then gives a result or output.</p><p>Artificial intelligence uses digital models of brain neurons. 
These are artificial neurons, based on the computer binary code of ones and zeros. The digital neurons process information and then pass it along to other higher layers of processing. Higher, meaning the results become more specific, just like in the brain.</p><p>Deep Learning is a type of Machine Learning</p><p>Before computers can give us the answers, they have to be trained on large amounts of data. As the computer processes more and more information, it learns from the data. This is called training the machine. Then when you give the computer completely new data, the machine knows what to do with it and can give you a correct answer to your specific question.</p><p>If you have many, many layers of neural networks, each processing and passing the information to another layer, it’s called a <strong>deep neural network</strong>. When machines learn from deep neural networks, it’s called <strong>deep learning</strong>.</p><p>Present day AI has Limitations</p><p>All of the software and computer calculations used in machine learning, especially deep learning, require absurd amounts of data and computer power. The neural nets can be hundreds of layers deep with billions of parameters like AlphaFold or GPT-3 which I talked about in previous episodes. These are gargantuan machine learning algorithms and require very powerful supercomputers to run them. This limits who can use machine learning to only large tech companies and corporations.</p><p>Yet as mighty as these neural nets appear, at a core level they are very narrow. And that’s another limitation. They do exactly the one thing they’re trained to do such as recognize an image, steer a car to the left or right, or translate something from one language to another. When you ask a neural net to do something that deviates from its training, it acts brittle and breaks.</p><p>New AI Neuroplasticity</p><p>But there’s something new in artificial intelligence. 
AI has reached a point where it’s less artificial and more biological. Like the human brain, AI has developed “neuroplasticity.” Now that you understand basic artificial intelligence, next time let’s discuss something called “liquid” AI which is so cool. It solves a lot of these limitations with a type of artificial neuroplasticity.</p><p>I hope you found this helpful. Be curious and if you like this episode, please follow my channel. Or you can leave a comment and click the thumbs up button, which lets me know you like the content. From Short and Sweet AI, I’m Dr. Peper.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">51617247-5fab-408e-8ee7-4b064605c60e</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 12 Apr 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/cbd024dc-43a7-4783-a211-1bd710cc7b46/ep-49-edited.mp3" length="8090242" type="audio/mpeg"/><itunes:duration>05:36</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>49</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>Microscopic Robots Are Real and May Be Flowing Through Your Bloodstream Soon</title><itunes:title>Microscopic Robots Are Real and May Be Flowing Through Your Bloodstream Soon</itunes:title><description><![CDATA[<p>Microscopic robots might sound like the plot of a futuristic novel, but they are very real.</p><p>In fact, nanotechnology has been a point of great interest for scientists for decades. In the past few years, research and experimentation have seen nanotechnology's science develop in new and fascinating ways.</p><p>In this episode of Short and Sweet AI, I delve into the topic of microscopic robots. 
The possibilities and capabilities of nanobots are something to keep a watchful eye on as research into nanotechnology starts to pick up speed.</p><p>In this episode, find out:</p><ul><li>What microscopic robots are</li><li>How new research into nanotechnology has improved nanobot design</li><li>Why nanobots use similar technology to computer chips</li><li>The possibilities of nanobots for healthcare</li><li>How nanotechnology could connect humans to technology and the Cloud</li></ul><br/><p><strong>Important Links &amp; Mentions</strong></p><ul><li><a href="https://www.garyshteyngart.com/books/super-sad-true-love-story/" target="_blank">Super Sad True Love Story by Gary Shteyngart</a></li><li><a href="https://drpepermd.com/episode/13-the-singularity-is-near/" target="_blank">The Singularity is Near</a></li><li><a href="https://www.youtube.com/watch?v=2TjdGuBK9mI" target="_blank">March of the Microscopic Robots</a></li><li><a href="https://www.wired.com/story/future-of-work-remembrance-lexi-pandell/" target="_blank">The Future of Work: ‘Remembrance,’ by Lexi Pandell</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li><strong>&nbsp;</strong>Singularity Hub: <a href="https://singularityhub.com/2020/09/08/an-army-of-microscopic-robots-is-ready-to-patrol-your-body/" target="_blank">An Army of Microscopic Robots Is Ready to Patrol Your Body</a></li><li>Interesting Engineering: <a href="https://interestingengineering.com/nanobots-will-be-flowing-through-your-body-by-2030" target="_blank">Nanobots Will Be Flowing Through Your Body by 2030</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Today I’m talking about microscopic robots.</p><p>In the book Super Sad True Love Story by Gary Shteyngart, set in the future, wealthy people pay for life extension treatments. These are called “dechronification” methods and include infusions of “smart blood” which contain swarms of microscopic robots. 
These tiny robots are about 100 nanometers long and rejuvenate cells and remodel major organs throughout the body via the bloodstream. In this way, the wealthy live for over a century.</p><p>That book was my first introduction to the idea of microscopic robots, also known as nanobots, more than a decade ago. Nanotechnology is more than a subplot in a futuristic novel. It’s an emerging field of designing and building robots which are only nanometers long. A nanometer is 1000 times smaller than a micrometer. Atoms and molecules are measured in nanometers. For example, a red blood cell is about 7000 nanometers while a DNA molecule is two and a half nanometers.</p><p>The father of nanotechnology is considered to be Richard Feynman, who won the Nobel Prize in Physics. He gave a talk in 1959 called “There’s Plenty of Room at the Bottom.” The bottom he’s referring to is size, specifically the size of atoms. He discussed a theoretical process for manipulating atoms and molecules which has become the core field of nanoscience.</p><p>The microscopic robots are about the size of a cell and are based on the same basic technology as computer chips. But creating an exoskeleton for robotic arms and getting these tiny robots to move in a controllable manner has been a big hurdle. Then, in the last few years, Marc Miskin, a professor of electrical and systems engineering, and his colleagues used a fresh, new design concept.</p><p>They paired 50 years of microelectronics and circuit boards to create limbs for the robots and used a power source in the form of tiny solar panels on their backs. By shining lasers on the solar panels, they can control the robot’s...]]></description><content:encoded><![CDATA[<p>Microscopic robots might sound like the plot of a futuristic novel, but they are very real.</p><p>In fact, nanotechnology has been a point of great interest for scientists for decades. 
In the past few years, research and experimentation have seen nanotechnology's science develop in new and fascinating ways.</p><p>In this episode of Short and Sweet AI, I delve into the topic of microscopic robots. The possibilities and capabilities of nanobots are something to keep a watchful eye on as research into nanotechnology starts to pick up speed.</p><p>In this episode, find out:</p><ul><li>What microscopic robots are</li><li>How new research into nanotechnology has improved nanobot design</li><li>Why nanobots use similar technology to computer chips</li><li>The possibilities of nanobots for healthcare</li><li>How nanotechnology could connect humans to technology and the Cloud</li></ul><br/><p><strong>Important Links &amp; Mentions</strong></p><ul><li><a href="https://www.garyshteyngart.com/books/super-sad-true-love-story/" target="_blank">Super Sad True Love Story by Gary Shteyngart</a></li><li><a href="https://drpepermd.com/episode/13-the-singularity-is-near/" target="_blank">The Singularity is Near</a></li><li><a href="https://www.youtube.com/watch?v=2TjdGuBK9mI" target="_blank">March of the Microscopic Robots</a></li><li><a href="https://www.wired.com/story/future-of-work-remembrance-lexi-pandell/" target="_blank">The Future of Work: ‘Remembrance,’ by Lexi Pandell</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li><strong>&nbsp;</strong>Singularity Hub: <a href="https://singularityhub.com/2020/09/08/an-army-of-microscopic-robots-is-ready-to-patrol-your-body/" target="_blank">An Army of Microscopic Robots Is Ready to Patrol Your Body</a></li><li>Interesting Engineering: <a href="https://interestingengineering.com/nanobots-will-be-flowing-through-your-body-by-2030" target="_blank">Nanobots Will Be Flowing Through Your Body by 2030</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Today I’m talking about microscopic robots.</p><p>In the book Super Sad True Love Story by Gary Shteyngart, set in the future, wealthy people pay for life 
extension treatments. These are called “dechronification” methods and include infusions of “smart blood” which contain swarms of microscopic robots. These tiny robots are about 100 nanometers long and rejuvenate cells and remodel major organs throughout the body via the bloodstream. In this way, the wealthy live for over a century.</p><p>That book was my first introduction to the idea of microscopic robots, also known as nanobots, more than a decade ago. Nanotechnology is more than a subplot in a futuristic novel. It’s an emerging field of designing and building robots which are only nanometers long. A nanometer is 1000 times smaller than a micrometer. Atoms and molecules are measured in nanometers. For example, a red blood cell is about 7000 nanometers while a DNA molecule is two and a half nanometers.</p><p>The father of nanotechnology is considered to be Richard Feynman, who won the Nobel Prize in Physics. He gave a talk in 1959 called “There’s Plenty of Room at the Bottom.” The bottom he’s referring to is size, specifically the size of atoms. He discussed a theoretical process for manipulating atoms and molecules which has become the core field of nanoscience.</p><p>The microscopic robots are about the size of a cell and are based on the same basic technology as computer chips. But creating an exoskeleton for robotic arms and getting these tiny robots to move in a controllable manner has been a big hurdle. Then, in the last few years, Marc Miskin, a professor of electrical and systems engineering, and his colleagues used a fresh, new design concept.</p><p>They paired 50 years of microelectronics and circuit boards to create limbs for the robots and used a power source in the form of tiny solar panels on their backs. By shining lasers on the solar panels, they can control the robot’s movements. 
In fact, you can see a battalion of microscopic robots in a coordinated “march” on a video linked in the show notes.</p><p>The genius of Miskin’s work is that the robot’s brain is based on computer chip technology. The same technology has powered our computers and phones for half a century. This means the tiny robots can be integrated with other circuits to respond to more complex commands.</p><p>The nanobot can be equipped with sensors to report on conditions in whatever environment it’s in. These are truly miniaturized machines capable of being injected through a syringe while still maintaining their structure and function. And since they use the same well-understood manufacturing process as computer chips, they are easy and cheap to produce. Millions of tiny robots can be made at the same time. The end result is electronically integrated, mass-manufactured, microscopic robots.</p><p>Like in the novel Super Sad True Love Story, we could have smart blood with nanobots injected into our bloodstream. The nanobots could be used to deliver cancer drugs in humans right where they’re needed and avoid harmful side effects to other tissues. They could be used to reduce plaque which has built up in arteries, or for treating hard-to-reach areas of the human body with microsurgery.</p><p>And by the way, the author of that book, Gary Shteyngart, has credited his ideas to Ray Kurzweil, whom you’ve heard me speak of many times. Kurzweil is convinced that nanotechnology is the way we can someday merge humans and technology. As he explained just several years ago, “These robots will go into the brain and provide virtual and augmented reality from within the nervous system rather than from devices attached to the outside of our bodies. 
The most important application … is that we will connect the top layers of our neocortex to the synthetic cortex in the cloud.”</p><p>Leave a comment and let me know: if you could access all its power and knowledge, would you connect your brain to the cloud? What if it meant you could store your consciousness in the cloud after you die? I’ve come across a short story called Remembrance which chillingly depicts this in the future. These may seem like crazy ideas, but microscopic robots are real and may soon be flowing through our bloodstream.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">38da3c38-8f18-4946-a8da-0bfbd1f8db4c</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 05 Apr 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/0ba6ffe4-06f0-4d7e-9f28-3f5aff0a75af/ep-48-edited.mp3" length="8840950" type="audio/mpeg"/><itunes:duration>06:08</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>48</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>A World Without Work - Daniel Susskind Says It&apos;s a Real Possibility</title><itunes:title>A World Without Work - Daniel Susskind Says It&apos;s a Real Possibility</itunes:title><description><![CDATA[<p>Is a world without work a reality we need to prepare for?&nbsp;</p><p>In my last episode, I discussed whether the fear of machines taking over jobs was truly <a href="https://drpepermd.com/podcast-2/ep-the-future-of-work-misplaced-anxiety/" rel="noopener noreferrer" target="_blank">misplaced anxiety</a>, as experts say. 
Experts believe that there’s no cause for alarm, but not everyone agrees.</p><p>Some believe that a future where human workers become obsolete is a real possibility we need to prepare for.</p><p>In this episode of Short and Sweet AI, I delve into the theory that our future will be a world without work. I discuss Daniel Susskind’s fascinating book, ‘A World Without Work,’ which explores the topic of technological unemployment in great detail.</p><p><strong>In this episode, find out:</strong></p><ul><li>What Daniel Susskind believes about the future of work</li><li>How machines can replicate even cognitive skills</li><li>Theories on how society could adapt to a world without work</li><li>How we could live a meaningful life without work</li></ul><br/><p><strong>Important Links &amp; Mentions</strong></p><ul><li><a href="https://www.danielsusskind.com/a-world-without-work" rel="noopener noreferrer" target="_blank">A World Without Work</a></li><li><a href="https://drpepermd.com/2021/03/22/the-future-of-work-misplaced-anxiety/" rel="noopener noreferrer" target="_blank">The Future of Work: Misplaced Anxiety?</a></li><li><a href="https://drpepermd.com/episode/how-to-train-your-emotion-ai/" rel="noopener noreferrer" target="_blank">How to Train Your Emotion AI</a></li></ul><br/><p><strong>Resources</strong></p><ul><li>Oxford Martin School: <a href="https://www.youtube.com/watch?v=thZzDi5XRVs" rel="noopener noreferrer" target="_blank">"A world without work: technology, automation and how we should respond" with Daniel Susskind</a></li><li>TED: <a href="https://www.youtube.com/watch?v=2j00U6lUC-c" rel="noopener noreferrer" target="_blank">3 myths about the future of work (and why they're not true) | Daniel Susskind</a></li><li>The New York Times:<strong> </strong><a href="https://www.nytimes.com/2020/01/14/books/review/a-world-without-work-daniel-susskind.html" rel="noopener noreferrer" target="_blank">Soon a Robot Will Be Writing This 
Headline</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Hello to you who are curious about AI. I’m Dr. Peper and today I’m talking about a world without work.</p><p>In my last episode, I talked about the future of work. Economists, futurists, and AI thinkers generally agree that technological unemployment is not a real threat. Our anxiety about machines taking our jobs is misplaced. There have been three centuries of technological advances, and each time technology has created more jobs than it destroyed. So, no need for alarm. &nbsp;</p><p>But Daniel Susskind, an Oxford economist and advisor to the British government, thinks this time, with artificial intelligence, the threat really is very real. He wants us to start discussing the future of work because as he sees it, the future of work is <u>A World Without Work</u>, which is the title of his recent book. He explains why what’s been called a slow-motion crisis of losing jobs to machines and automation needs to be discussed now because it really isn’t slow-motion anymore. </p><p>Despite increased productivity and GDP from artificial intelligence, Susskind presents evidence that technological unemployment is coming. As he says, we don’t need to solve the mysteries of how the brain and mind operate to build machines that can outperform human beings. </p><p>Machines have been taking over jobs requiring manual abilities for decades. It’s happening now. Although the American manufacturing economy has grown over the past few decades, it hasn’t created more work. 
Manufacturing produces 70 percent more output than it did in 1986 but requires 30 percent fewer workers to produce it.&nbsp; </p><p>More importantly, machines are...]]></description><content:encoded><![CDATA[<p>Is a world without work a reality we need to prepare for?&nbsp;</p><p>In my last episode, I discussed whether the fear of machines taking over jobs was truly <a href="https://drpepermd.com/podcast-2/ep-the-future-of-work-misplaced-anxiety/" rel="noopener noreferrer" target="_blank">misplaced anxiety</a>, as experts say. Experts believe that there’s no cause for alarm, but not everyone agrees.</p><p>Some believe that a future where human workers become obsolete is a real possibility we need to prepare for.</p><p>In this episode of Short and Sweet AI, I delve into the theory that our future will be a world without work. I discuss Daniel Susskind’s fascinating book, ‘A World Without Work,’ which explores the topic of technological unemployment in great detail.</p><p><strong>In this episode, find out:</strong></p><ul><li>What Daniel Susskind believes about the future of work</li><li>How machines can replicate even cognitive skills</li><li>Theories on how society could adapt to a world without work</li><li>How we could live a meaningful life without work</li></ul><br/><p><strong>Important Links &amp; Mentions</strong></p><ul><li><a href="https://www.danielsusskind.com/a-world-without-work" rel="noopener noreferrer" target="_blank">A World Without Work</a></li><li><a href="https://drpepermd.com/2021/03/22/the-future-of-work-misplaced-anxiety/" rel="noopener noreferrer" target="_blank">The Future of Work: Misplaced Anxiety?</a></li><li><a href="https://drpepermd.com/episode/how-to-train-your-emotion-ai/" rel="noopener noreferrer" target="_blank">How to Train Your Emotion AI</a></li></ul><br/><p><strong>Resources</strong></p><ul><li>Oxford Martin School: <a href="https://www.youtube.com/watch?v=thZzDi5XRVs" rel="noopener noreferrer" target="_blank">"A world without 
work: technology, automation and how we should respond" with Daniel Susskind</a></li><li>TED: <a href="https://www.youtube.com/watch?v=2j00U6lUC-c" rel="noopener noreferrer" target="_blank">3 myths about the future of work (and why they're not true) | Daniel Susskind</a></li><li>The New York Times:<strong> </strong><a href="https://www.nytimes.com/2020/01/14/books/review/a-world-without-work-daniel-susskind.html" rel="noopener noreferrer" target="_blank">Soon a Robot Will Be Writing This Headline</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Hello to you who are curious about AI. I’m Dr. Peper and today I’m talking about a world without work.</p><p>In my last episode, I talked about the future of work. Economists, futurists, and AI thinkers generally agree that technological unemployment is not a real threat. Our anxiety about machines taking our jobs is misplaced. There have been three centuries of technological advances, and each time technology has created more jobs than it destroyed. So, no need for alarm. &nbsp;</p><p>But Daniel Susskind, an Oxford economist and advisor to the British government, thinks this time, with artificial intelligence, the threat really is very real. He wants us to start discussing the future of work because as he sees it, the future of work is <u>A World Without Work</u>, which is the title of his recent book. He explains why what’s been called a slow-motion crisis of losing jobs to machines and automation needs to be discussed now because it really isn’t slow-motion anymore. </p><p>Despite increased productivity and GDP from artificial intelligence, Susskind presents evidence that technological unemployment is coming. As he says, we don’t need to solve the mysteries of how the brain and mind operate to build machines that can outperform human beings. </p><p>Machines have been taking over jobs requiring manual abilities for decades. It’s happening now. 
Although the American manufacturing economy has grown over the past few decades, it hasn’t created more work. Manufacturing produces 70 percent more output than it did in 1986 but requires 30 percent fewer workers to produce it.&nbsp; </p><p>More importantly, machines are increasingly being used in the cognitive skills areas, too. AI deep learning is used to read x-rays, compose music, review legal documents, detect eye diseases, and personalize online learning systems. And in the controversial area of synthetic media, AI systems can generate believable videos of events that never happened. &nbsp;&nbsp;</p><p>Machines also have human skills such as empathy and the ability to determine how someone feels. Algorithms are making headway into effectively and accurately reading human emotion through facial recognition and language. I talked about this in my episode on Affective AI.</p><p>The most significant point Susskind makes, in my opinion, is that we think machines can’t perform some human tasks because they can’t perform them the same way humans do. Many doctors use gut instinct and vast hands-on experience when treating patients. Machines won’t be able to diagnose the same way doctors do. Machines <strong>will</strong> be able to accurately perform the task but in a <strong>different way</strong>. </p><p>So, the three capabilities that humans use to earn a living (manual skills, cognitive skills, and emotional intelligence) are all being replaced by machines. Susskind doesn’t know exactly when this will happen, but he thinks it will be sooner than most people realize, within just decades, and certainly during the 21<sup>st</sup> century, because during the next 80 years, machines will become a trillion times more powerful. </p><p>In a future world without work, Susskind asks how we will earn enough to live on and how we will all find meaning in our lives. 
His assessment is that government, or the big state as he calls it, will be needed to redistribute income and wealth. And even more importantly, governments will need to introduce programs to nudge us into behaviors that will give us fulfillment rather than yielding to Netflix, boredom, and despair. Instead of labor market policies, governments will need to form leisure policies that shape the way people use their spare time, because the future of work will be the future of leisure. </p><p>At different points throughout history, large groups, from the Greeks to the English, have led lives with meaning but without work. For instance, in Victorian England, the upper classes were far from depressed by their idleness. Indeed, they created some of the greatest poetry, literature, and science the world has known. </p><p>According to many AI experts, as well as Susskind, AI advances will continue at an exponential rate. It’s inevitable. Machines will ultimately do most of what humans do. In the science fiction series The Expanse, there won’t be work in the future for the majority of people on Earth. People exist on a type of universal basic income. But some families feel strongly that they want their children to live a life with meaning that comes from work. Their only option is to move to Mars, which exists to defend Earth, and where everyone has a job working for the military. &nbsp;&nbsp;</p><p>I highly recommend reading <u>A World Without Work</u>. Daniel Susskind goes into much more detail with entertaining examples and comprehensive discussions of universal basic income, the age of labor, the limits of education, and much more. </p><p>But what do you think? Could you cultivate a meaningful life without work? </p><p>Thanks for listening. I hope you found this helpful. If you liked this episode, please leave a review and subscribe. From Short and Sweet AI, I’m Dr. 
Peper.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">32168bdd-daec-4a55-927e-fdf8d8dc0e98</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 29 Mar 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/8bf7b85a-1f80-4aa4-be8f-652c784d930f/ep-47-edited.mp3" length="10206916" type="audio/mpeg"/><itunes:duration>07:05</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>47</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>The Future of Work: Misplaced Anxiety?</title><itunes:title>The Future of Work: Misplaced Anxiety?</itunes:title><description><![CDATA[<p>Are you anxious that a machine will one day replace your job? It’s a common enough fear, especially with the rate technology is advancing.</p><p>If you have watched any of my previous episodes, you will know that technology is accelerating exponentially! We have seen the equivalent of 20,000 years of technology in just one century.</p><p>Naturally, people worry about what this means for the future of work. 
Will human workers become obsolete one day?</p><p>In this episode of Short and Sweet AI, I explore “technological unemployment” in more detail and whether it’s something we should be concerned about.</p><p><strong>In this episode find out:</strong></p><ul><li>Why some experts think the anxiety over technological unemployment is misplaced</li><li>Why economists and AI experts are optimistic about AI’s impact on jobs</li><li>How AI could contribute to job creation and loss</li><li>The surprising impact technology has on certain job roles</li></ul><br/><p><strong>Important Links &amp; Mentions:</strong></p><ul><li><a href="https://www.weforum.org/videos/what-will-the-future-of-jobs-be-like" rel="noopener noreferrer" target="_blank">What will the future of jobs be like?</a></li><li><a href="https://www.hbo.com/vice/special-reports/vice-special-report-the-future-of-work" rel="noopener noreferrer" target="_blank">VICE Special Report: The Future of Work</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li>The Takeaway: <a href="https://www.wnycstudios.org/podcasts/takeaway/segments/what-happens-next-future-work" rel="noopener noreferrer" target="_blank">What Happens Next: The Future of Work</a></li><li>Council on Foreign Relations: <a href="https://www.cfr.org/event/discussion-hbo-vice-special-report-future-work" rel="noopener noreferrer" target="_blank">Discussion of HBO VICE Special Report: The Future of Work</a></li><li>Daniel Susskind’s book: <a href="https://www.danielsusskind.com/a-world-without-work" rel="noopener noreferrer" target="_blank">A World Without Work</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Hello to you who are curious about AI. I’m Dr. Peper and today I’m talking about the future of work.</p><p>For centuries there’ve been predictions that machines would put people out of work for good and give rise to technological unemployment. 
If you’ve been listening to my episodes, you know that technology today is accelerating exponentially. We are living at a time when many different types of technology are all merging and accelerating together. This is creating enormous advances which some have said will lead to the equivalent of 20,000 years of technology in this one century. And experts are asking: what does that mean for the future of work?</p><p>Historians, economists, and futurists describe the anxiety about new machines replacing workers as a history of misplaced anxiety. Three hundred years of radical technological change have passed and there is still enough work for people to do. The experts say, yes, technology leads to the loss of jobs, but ultimately more new jobs are created in the process. Automation and the use of machines increase productivity, which leads to the creation of new jobs and increased GDP. </p><p>A well-known example would be the rise in the use of ATM machines in the 1990s, which led to many bank tellers losing their jobs. But at the same time, the ATMs enabled banks to increase their productivity and profits and led to more branches being opened and more bank tellers being hired. The bank tellers now spent their time carrying out more value-added, non-routine tasks.</p><p>In the early industrial revolution, when mechanical looms were introduced, many highly skilled weavers lost their jobs, but even more jobs were created for less-skilled workers who operated the machines.</p><p>People who study economics and AI are optimistic. They think machines can readily perform routine tasks in a job but would struggle with non-routine tasks. Humans will still be needed for...]]></description><content:encoded><![CDATA[<p>Are you anxious that a machine will one day replace your job? It’s a common enough fear, especially with the rate technology is advancing.</p><p>If you have watched any of my previous episodes, you will know that technology is accelerating exponentially! 
We have seen the equivalent of 20,000 years of technology in just one century.</p><p>Naturally, people worry about what this means for the future of work. Will human workers become obsolete one day?</p><p>In this episode of Short and Sweet AI, I explore “technological unemployment” in more detail and whether it’s something we should be concerned about.</p><p><strong>In this episode find out:</strong></p><ul><li>Why some experts think the anxiety over technological unemployment is misplaced</li><li>Why economists and AI experts are optimistic about AI’s impact on jobs</li><li>How AI could contribute to job creation and loss</li><li>The surprising impact technology has on certain job roles</li></ul><br/><p><strong>Important Links &amp; Mentions:</strong></p><ul><li><a href="https://www.weforum.org/videos/what-will-the-future-of-jobs-be-like" rel="noopener noreferrer" target="_blank">What will the future of jobs be like?</a></li><li><a href="https://www.hbo.com/vice/special-reports/vice-special-report-the-future-of-work" rel="noopener noreferrer" target="_blank">VICE Special Report: The Future of Work</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li>The Takeaway: <a href="https://www.wnycstudios.org/podcasts/takeaway/segments/what-happens-next-future-work" rel="noopener noreferrer" target="_blank">What Happens Next: The Future of Work</a></li><li>Council on Foreign Relations: <a href="https://www.cfr.org/event/discussion-hbo-vice-special-report-future-work" rel="noopener noreferrer" target="_blank">Discussion of HBO VICE Special Report: The Future of Work</a></li><li>Daniel Susskind’s book: <a href="https://www.danielsusskind.com/a-world-without-work" rel="noopener noreferrer" target="_blank">A World Without Work</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Hello to you who are curious about AI. I’m Dr. 
Peper and today I’m talking about the future of work.</p><p>For centuries there’ve been predictions that machines would put people out of work for good and give rise to technological unemployment. If you’ve been listening to my episodes, you know that technology today is accelerating exponentially. We are living at a time when many different types of technology are all merging and accelerating together. This is creating enormous advances, which some have said will lead to the equivalent of 20,000 years of technology in this one century. And experts are asking: what does this mean for the future of work?</p><p>Historians, economists, and futurists describe the anxiety about new machines replacing workers as a history of misplaced anxiety. Three hundred years of radical technological change have passed and there is still enough work for people to do. The experts say, yes, technology leads to the loss of jobs, but ultimately more new jobs are created in the process. Automation and the use of machines increase productivity, which leads to the creation of new jobs and increased GDP. </p><p>A well-known example would be the rise of ATMs in the 1990s, which led to many bank tellers losing their jobs. But at the same time, the ATMs enabled banks to increase their productivity and profits and led to more branches being opened and more bank tellers being hired. Bank tellers could now spend their time carrying out more value-added, non-routine tasks.</p><p>In the early industrial revolution, when mechanical looms were introduced, many highly skilled weavers lost their jobs, but even more jobs were created for less-skilled workers who operated the machines.</p><p>People who study economics and AI are optimistic. They think machines can readily perform routine tasks in a job but would struggle with non-routine tasks. Humans will still be needed for their cognitive, creative, and emotional skills that machines don’t have. 
In this way, workers will complement machines and will always be needed.</p><p>The World Economic Forum, headed by Klaus Schwab, author of <u>The Fourth Industrial Revolution</u>, released a recent report on the future of work. It estimated that by 2025, 85 million jobs will be lost to artificial intelligence, but 97 million new jobs will be created. This goes along with the mainstream thinking that technological unemployment is not something to worry about in the foreseeable future. But when you read the report in more detail, some red flags emerge.</p><p>Surveys show 43% of businesses are set to reduce their workforce due to technology, 50% of all employees will need reskilling in the next 5 years, and job creation is slowing while job destruction accelerates.</p><p>Many articles on the World Economic Forum website also dampen the prevailing optimism for the future of work. One example is the profession of psychologists. </p><p>Previous projections assumed the work of a psychologist requires extensive empathic and intuitive skills. Initially, it was thought very unlikely to be replicated by a machine during our lifetime. But experts have found artificial intelligence has become woven into the fabric of our daily lives at an accelerating pace. With the pandemic, the use of meditation and mindfulness apps such as Headspace and Calm has soared, as well as other technology-mediated forms of therapy. The most recent report concluded it’s almost certain the work of psychologists will be replaced in large part by artificial intelligence.</p><p>So what’s going on here? Is anxiety about technological unemployment misplaced, or will machines be able to perform most human tasks, and how soon?</p><p>Well, I’ve uncovered another penetrating viewpoint on the future of work in a book by Daniel Susskind. The book’s been described as “required reading for any presidential candidate.” His premise, captured in his book’s title, is that the future of work is <u>A World Without Work</u>. 
And I had a glimpse of what a world without work looks like in the science fiction series, The Expanse. I’ll be talking about both in my next episode.</p><p>What do you think? Do you feel anxious that your job will be replaced by a machine during your lifetime?</p><p>Please leave me your thoughts in the comments, and a review if you like this episode. From Short and Sweet AI, I’m Dr. Peper.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">ae47551d-2a63-45d7-9123-d086ab466459</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 22 Mar 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/8dbb92e0-0653-4547-94a0-a1f7a4a7d3b8/ep-46-edited.mp3" length="8790622" type="audio/mpeg"/><itunes:duration>06:06</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>46</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>AlphaFold &amp; The Protein Folding Problem</title><itunes:title>AlphaFold &amp; The Protein Folding Problem</itunes:title><description><![CDATA[<p>What is the protein folding problem that has left researchers stuck for nearly 50 years?</p>
<p>Knowing the 3D shape of proteins is so important for our understanding of various diseases and vaccine development. However, these shapes are fantastically complex and difficult to predict. Researchers have spent years trying to determine the 3D structure of proteins.</p>
<p>Thanks to AI systems like AlphaFold, it’s now much easier and faster to predict protein shapes. AlphaFold is currently leading the way in protein folding research and has been described as a “revolution in biology.”</p>
<p>In this episode of Short and Sweet AI, I explore the protein folding problem in more detail and how AlphaFold is accelerating our understanding of protein structures. </p>
<p><strong>In this episode, find out:</strong></p>
<ul>
<li>Why protein folding is so important</li>
<li>Why it’s so difficult to predict protein structures</li>
<li>How Google’s DeepMind created AlphaFold</li>
<li>How successful AlphaFold has been in predicting protein structures</li>
</ul><br/>
<p><strong>Important Links and Mentions:</strong></p>
<ul>
<li><a href="https://www.youtube.com/watch?v=gg7WjuFs8F4" rel="noopener noreferrer" target="_blank">AlphaFold: The making of a scientific breakthrough</a></li>
<li><a href="https://www.youtube.com/watch?v=KpedmJdrTpY" rel="noopener noreferrer" target="_blank">Protein folding explained</a></li>
<li><a href="https://drpepermd.com/episode/walloped-by-alphago/" rel="noopener noreferrer" target="_blank">Walloped by AlphaGo</a></li>
<li><a href="https://drpepermd.com/episode/what-is-alphazero/" rel="noopener noreferrer" target="_blank">What is AlphaZero?</a></li>
<li><a href="https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery" rel="noopener noreferrer" target="_blank">AlphaFold: Using AI for scientific discovery</a></li>
</ul><br/>
<p><strong>Resources:</strong></p>
<ul>
<li>Nature.com - <a href="https://www.nature.com/articles/d41586-020-03348-4" rel="noopener noreferrer" target="_blank">‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures</a></li>
<li>SciTech Daily - <a href="https://scitechdaily.com/major-scientific-advance-deepmind-ai-alphafold-solves-50-year-old-grand-challenge-of-protein-structure-prediction/" rel="noopener noreferrer" target="_blank">Major Scientific Advance: DeepMind AI AlphaFold Solves 50-Year-Old Grand Challenge of Protein Structure Prediction</a></li>
</ul><br/>
<p><strong>Episode Transcript:</strong></p>
<p>Hello to you who are curious about AI. I’m Dr. Peper and today I’m talking about AlphaFold.</p>
<p>One of biology’s most difficult challenges, one that researchers have been stuck on for nearly 50 years, is how to determine a protein’s 3D shape from its amino-acid sequence. It's known as “the protein folding problem.”</p>
<p>When I first came across the subject, I thought, ok, that’s a biology problem and maybe AI will solve it but there’s no big story here. I was wrong.</p>
<p>Some biologists spend months, years, or even decades performing experiments to determine the precise shape of a protein. Sometimes they never succeed. But they persist, because knowing how a protein folds up can accelerate our ability to understand diseases, develop new medicines and vaccines, and crack one of the greatest challenges in biology.</p>
<p>Why is protein folding so important? Protein structures contain as much information as, if not more than, DNA. Their 3D shapes are fantastically complex. Proteins are made up of strings of amino acids, called the building blocks of life. In order to function, the strings twist and fold into precise, delicate shapes that turn or wrap around each other. These strings can even merge into bigger, megaplex structures.</p>
<p>Only then can these proteins function in the way necessary to build and sustain life. A protein’s shape defines what the protein can]]></description><content:encoded><![CDATA[<p>What is the protein folding problem that has left researchers stuck for nearly 50 years?</p>
<p>Knowing the 3D shape of proteins is so important for our understanding of various diseases and vaccine development. However, these shapes are fantastically complex and difficult to predict. Researchers have spent years trying to determine the 3D structure of proteins.</p>
<p>Thanks to AI systems like AlphaFold, it’s now much easier and faster to predict protein shapes. AlphaFold is currently leading the way in protein folding research and has been described as a “revolution in biology.”</p>
<p>In this episode of Short and Sweet AI, I explore the protein folding problem in more detail and how AlphaFold is accelerating our understanding of protein structures. </p>
<p><strong>In this episode, find out:</strong></p>
<ul>
<li>Why protein folding is so important</li>
<li>Why it’s so difficult to predict protein structures</li>
<li>How Google’s DeepMind created AlphaFold</li>
<li>How successful AlphaFold has been in predicting protein structures</li>
</ul><br/>
<p><strong>Important Links and Mentions:</strong></p>
<ul>
<li><a href="https://www.youtube.com/watch?v=gg7WjuFs8F4" rel="noopener noreferrer" target="_blank">AlphaFold: The making of a scientific breakthrough</a></li>
<li><a href="https://www.youtube.com/watch?v=KpedmJdrTpY" rel="noopener noreferrer" target="_blank">Protein folding explained</a></li>
<li><a href="https://drpepermd.com/episode/walloped-by-alphago/" rel="noopener noreferrer" target="_blank">Walloped by AlphaGo</a></li>
<li><a href="https://drpepermd.com/episode/what-is-alphazero/" rel="noopener noreferrer" target="_blank">What is AlphaZero?</a></li>
<li><a href="https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery" rel="noopener noreferrer" target="_blank">AlphaFold: Using AI for scientific discovery</a></li>
</ul><br/>
<p><strong>Resources:</strong></p>
<ul>
<li>Nature.com - <a href="https://www.nature.com/articles/d41586-020-03348-4" rel="noopener noreferrer" target="_blank">‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures</a></li>
<li>SciTech Daily - <a href="https://scitechdaily.com/major-scientific-advance-deepmind-ai-alphafold-solves-50-year-old-grand-challenge-of-protein-structure-prediction/" rel="noopener noreferrer" target="_blank">Major Scientific Advance: DeepMind AI AlphaFold Solves 50-Year-Old Grand Challenge of Protein Structure Prediction</a></li>
</ul><br/>
<p><strong>Episode Transcript:</strong></p>
<p>Hello to you who are curious about AI. I’m Dr. Peper and today I’m talking about AlphaFold.</p>
<p>One of biology’s most difficult challenges, one that researchers have been stuck on for nearly 50 years, is how to determine a protein’s 3D shape from its amino-acid sequence. It's known as “the protein folding problem.”</p>
<p>When I first came across the subject, I thought, ok, that’s a biology problem and maybe AI will solve it but there’s no big story here. I was wrong.</p>
<p>Some biologists spend months, years, or even decades performing experiments to determine the precise shape of a protein. Sometimes they never succeed. But they persist, because knowing how a protein folds up can accelerate our ability to understand diseases, develop new medicines and vaccines, and crack one of the greatest challenges in biology.</p>
<p>Why is protein folding so important? Protein structures contain as much information as, if not more than, DNA. Their 3D shapes are fantastically complex. Proteins are made up of strings of amino acids, called the building blocks of life. In order to function, the strings twist and fold into precise, delicate shapes that turn or wrap around each other. These strings can even merge into bigger, megaplex structures.</p>
<p>Only then can these proteins function in the way necessary to build and sustain life. A protein’s shape defines what the protein can do and what it cannot do.</p>
<p>But there’s an astronomical number of ways a protein can fold into its final 3D structure. It’s called Levinthal’s paradox. Cyrus Levinthal, a molecular biologist, published a paper in 1969 called “How to Fold Graciously.” He found there are so many degrees of freedom in an unfolded chain of amino acids that the molecule has a staggering number of possible configurations.</p>
<p>There are an estimated 200 million known proteins, with 30 million new ones discovered every year. Each one has a unique 3D shape which determines how it works and what it does. Over the last 50 years, biologists have discovered the exact 3D structures of only a tiny fraction of known proteins. </p>
<p>The protein folding problem led to a global competition called CASP, which stands for Critical Assessment of Structure Prediction. Scientists use it to measure and compare their computer-based structure predictions. The competition started in 1994 to improve computational methods for accurately predicting a protein’s 3D shape.</p>
<p>DeepMind, an AI research lab owned by Google, has made headlines for creating the deep learning neural networks AlphaGo and AlphaZero, which beat the world’s leading chess and Go champions. I’ve talked about them in previous episodes. Protein folding has been called the challenge of a lifetime, and the researchers at DeepMind wanted to use AI not only for game playing but to make a real-world impact. So, DeepMind went to work creating AlphaFold, a deep learning computer system, to solve the protein folding problem.</p>
<p>In 2018 AlphaFold entered the CASP competition for the first time. It achieved the highest score for accurately predicting various protein structures, scoring 60 out of a possible 100 points. But the AlphaFold researchers thought they could improve its accuracy and developed the deep learning neural network even further.</p>
<p>In addition to using a data set with 170,000 protein structures, DeepMind supercharged the algorithm. They added data about physics, geometry, and evolutionary history into their training model. The algorithm analyzed any buried relationships or patterns and was able to determine highly accurate structures in a matter of days, even hours. It could predict a protein’s shape down to the width of an atom.</p>
<p>The turning point came in the CASP competition in November 2020. AlphaFold, as well as teams from Microsoft and the Chinese tech company Tencent, competed to predict protein structures considered to be moderately difficult. The best performance of the other teams was 75 points on a 100-point scale. AlphaFold scored 90 out of 100 and performed so unbelievably well it was called a revolution in biology.</p>
<p>One researcher had been looking for the structure of a protein for 10 years, a full decade. AlphaFold’s predictions gave him the protein’s 3D structure in half an hour. You can’t make this stuff up. His exuberance is understandable when he says: “This will change medicine. It will change research. It will change bioengineering. It will change everything.”</p>
<p>Here are a few more comments made by experts that convey why AlphaFold is not just a big story but rivals the discovery of DNA.</p>
<p>One researcher said, “I nearly fell off my chair when I saw these results.” Another proclaimed, “It’s a breakthrough of the first order, certainly one of the most significant results of my lifetime.” Another commented, “…a stunning advance…It’s occurred decades before many people in the field would have predicted.”</p>
<p>And John Moult, a professor who helped to create the CASP competition, describes it as a dream come true. He said “I always hoped I would live to see this day. But it wasn’t always obvious I was going to make it.”</p>
<p>Thanks for listening. I hope you found this helpful. If you like this episode, please leave a review and subscribe, because then you’ll receive my episodes weekly. From Short and Sweet AI, I’m Dr. Peper.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">0aeae5ee-2989-4db9-8010-02ec986e35af</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 15 Mar 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/95dd007d-d637-4dc2-90f7-dc97d69f0331/ep-45-edited.mp3" length="10822840" type="audio/mpeg"/><itunes:duration>07:30</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>45</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>OpenAI: For-Profit for Good?</title><itunes:title>OpenAI: For-Profit for Good?</itunes:title><description><![CDATA[<p>One of the founding principles of OpenAI, the company behind technology such as GPT-3 and DALL•E, is that AI should be available to all, not just the few.</p><p>Co-founded by Elon Musk and five others, OpenAI was partly created to counter the argument that AI could damage society. </p><p>OpenAI was originally founded as a non-profit AI research lab. In just six short years, the company has paved the way for some of the biggest breakthroughs in AI. Recent controversy arose when OpenAI announced that a separate section of its company would become for-profit. </p><p>In this episode of Short and Sweet AI, I discuss OpenAI’s mission to develop human-level AI that benefits all, not just a few. I also discuss the controversy around OpenAI’s decision to become for-profit. 
</p><p>In this episode, find out:</p><ul><li>OpenAI’s mission</li><li>How human-level AI or AGI differs from Narrow AI</li><li>How far we are from using AGI in everyday life</li><li>The recent controversy around OpenAI’s decision to switch to a for-profit model</li></ul><br/><p><strong>Important Links and Mentions:</strong></p><ul><li><a href="https://drpepermd.com/podcast-2/ep-what-is-gpt-3/" rel="noopener noreferrer" target="_blank">What is GPT-3?</a></li><li><a href="https://openai.com/charter/" rel="noopener noreferrer" target="_blank">OpenAI’s mission statement</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li><a href="https://www.youtube.com/watch?v=H15uuDMqDK0" rel="noopener noreferrer" target="_blank">Elon Musk on Artificial Intelligence</a></li><li>Technology Review: <a href="https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/" rel="noopener noreferrer" target="_blank">The messy, secretive reality behind OpenAI’s bid to save the world</a></li><li>Wired: <a href="https://www.wired.com/story/compete-google-openai-seeks-investorsand-profits/" rel="noopener noreferrer" target="_blank">To Compete With Google, OpenAI Seeks Investors---and Profits</a></li><li>Wired: <a href="https://www.wired.com/story/company-wants-billions-make-ai-safe-humanity/" rel="noopener noreferrer" target="_blank">OpenAI Wants to Make Ultrapowerful AI. But Not in a Bad Way</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Hello to you who are curious about AI. I’m Dr. Peper and today I’m talking about a truly innovative company called OpenAI.</p><p>So what do we know about OpenAI, the company unleashing all these mind-blowing AI tools such as GPT-3 and DALL·E?</p><p>OpenAI was founded as a non-profit AI research lab just 6 short years ago by Elon Musk and 5 others who pledged a billion dollars. Musk has openly warned that AI poses the greatest existential threat to humanity. 
He was motivated in part to create OpenAI by concerns that human-level AI could damage society if built or used incorrectly.</p><p>Human-level AI is known as AGI, or Artificial General Intelligence. The AI we have today is called Narrow AI; it’s good at doing one thing. General AI would be great at any task; it’s created to learn how to do anything. Narrow AI is great at doing what it was designed for, while Artificial General Intelligence would be great at learning how to do whatever it needs to do. </p><p>To be a bit more specific, General AI would be able to learn, plan, reason, communicate in natural language, and integrate all of these skills to apply to any task, just as humans do. It would be human-level AI. It’s the holy grail of the leading AI research groups around the world such as Google’s DeepMind or Musk’s OpenAI: to create artificial general intelligence.</p><p>Because AI is accelerating at exponential speed, it’s hard to predict when human-level AI might come within reach. Musk wants computer scientists to build AI in a way that is safe and beneficial to humanity. He acknowledges that in trying to advance friendly AI, we may create the very thing we are concerned about. Yet he thinks the best]]></description><content:encoded><![CDATA[<p>One of the founding principles of OpenAI, the company behind technology such as GPT-3 and DALL•E, is that AI should be available to all, not just the few.</p><p>Co-founded by Elon Musk and five others, OpenAI was partly created to counter the risk that AI could damage society. </p><p>OpenAI was originally founded as a non-profit AI research lab. In just six short years, the company has paved the way for some of the biggest breakthroughs in AI. Recent controversy arose when OpenAI announced that a separate section of its company would become for-profit. </p><p>In this episode of Short and Sweet AI, I discuss OpenAI’s mission to develop human-level AI that benefits all, not just a few. 
I also discuss the controversy around OpenAI’s decision to become for-profit. </p><p>In this episode, find out:</p><ul><li>OpenAI’s mission</li><li>How human-level AI or AGI differs from Narrow AI</li><li>How far we are from using AGI in everyday life</li><li>The recent controversy around OpenAI’s decision to switch to a for-profit model</li></ul><br/><p><strong>Important Links and Mentions:</strong></p><ul><li><a href="https://drpepermd.com/podcast-2/ep-what-is-gpt-3/" rel="noopener noreferrer" target="_blank">What is GPT-3?</a></li><li><a href="https://openai.com/charter/" rel="noopener noreferrer" target="_blank">OpenAI’s mission statement</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li><a href="https://www.youtube.com/watch?v=H15uuDMqDK0" rel="noopener noreferrer" target="_blank">Elon Musk on Artificial Intelligence</a></li><li>Technology Review: <a href="https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/" rel="noopener noreferrer" target="_blank">The messy, secretive reality behind OpenAI’s bid to save the world</a></li><li>Wired: <a href="https://www.wired.com/story/compete-google-openai-seeks-investorsand-profits/" rel="noopener noreferrer" target="_blank">To Compete With Google, OpenAI Seeks Investors---and Profits</a></li><li>Wired: <a href="https://www.wired.com/story/company-wants-billions-make-ai-safe-humanity/" rel="noopener noreferrer" target="_blank">OpenAI Wants to Make Ultrapowerful AI. But Not in a Bad Way</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Hello to you who are curious about AI. I’m Dr. Peper and today I’m talking about a truly innovative company called OpenAI.</p><p>So what do we know about OpenAI, the company unleashing all these mind-blowing AI tools such as GPT-3 and DALL·E?</p><p>OpenAI was founded as a non-profit AI research lab just 6 short years ago by Elon Musk and 5 others who pledged a billion dollars. 
Musk has openly warned that AI poses the greatest existential threat to humanity. He was motivated in part to create OpenAI by concerns that human-level AI could damage society if built or used incorrectly.</p><p>Human-level AI is known as AGI, or Artificial General Intelligence. The AI we have today is called Narrow AI; it’s good at doing one thing. General AI would be great at any task; it’s created to learn how to do anything. Narrow AI is great at doing what it was designed for, while Artificial General Intelligence would be great at learning how to do whatever it needs to do. </p><p>To be a bit more specific, General AI would be able to learn, plan, reason, communicate in natural language, and integrate all of these skills to apply to any task, just as humans do. It would be human-level AI. It’s the holy grail of the leading AI research groups around the world such as Google’s DeepMind or Musk’s OpenAI: to create artificial general intelligence.</p><p>Because AI is accelerating at exponential speed, it’s hard to predict when human-level AI might come within reach. Musk wants computer scientists to build AI in a way that is safe and beneficial to humanity. He acknowledges that in trying to advance friendly AI, we may create the very thing we are concerned about. Yet he thinks the best defense is to empower as many people as possible to have AI. He doesn’t want any one person or a small group of people to have an AI superpower.</p><p>OpenAI has a 400-word mission statement, which prioritizes AI for all over its own self-interest. And it’s an environment where its employees treat AI research not as a job but as an identity. The most succinct summary of its mission has been phrased “… an ideal that we want AGI to go well.” Two specific parts to its mission are to avoid building human-level AI that harms humanity or unduly concentrates power in the hands of a few.</p><p>But there’s a big controversy. 
OpenAI recently reorganized to form a separate section that’s for-profit. It never released the software for GPT-3 as open code for programmers to use and build on. Instead, it licensed GPT-3 exclusively to Microsoft, which had invested a billion dollars in OpenAI. OpenAI realized staying a non-profit was financially untenable. It defends its decision, explaining that it needs billions of dollars to build AGI and fulfill its mission. Personally, I see the necessity for this. I’ve said elsewhere, “If you’re dedicated to your mission, you first have to find consistent funding. We don’t always need more ideas about how to make the world better. We need more ways to consistently fund the ideas we have.” It’s a huge challenge when you realize DeepMind, OpenAI’s main competitor, spent 442 million dollars on research the same year OpenAI spent only 11 million.</p><p>But there’s been an outcry from critics who say switching to a for-profit model is inconsistent with OpenAI’s mission to democratize AI for all. I’d be interested to know what you think about OpenAI’s decision. Do you think its non-profit mission justifies it becoming for-profit? Let me know your thoughts and leave a comment.</p><p>Thanks for listening. I hope you found this helpful. Be curious, and if you like this episode, please leave a review and subscribe because then you’ll receive my episodes weekly. From Short and Sweet AI, I’m Dr. Peper.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">ea3653b3-28a7-46c6-8470-b7979c0ddc09</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. 
Peper]]></dc:creator><pubDate>Mon, 08 Mar 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/6e7fa98d-bd36-4017-b082-4798d2f5b759/ep-44-podcast.mp3" length="8016658" type="audio/mpeg"/><itunes:duration>05:34</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>3</itunes:season><itunes:episode>44</itunes:episode><itunes:summary>OpenAI has been responsible for some of the greatest breakthroughs in AI. This research lab created a language generator (GPT-3), a text-to-image generator (DALL·E), a music generator (MuseNet) and more. Its mission is to make artificial intelligence human-positive and AI research public and free. So why did it license its latest technology to Microsoft?</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>What is DALL·E?</title><itunes:title>What is DALL·E?</itunes:title><description><![CDATA[<p>Is DALL·E the latest breakthrough in artificial intelligence?</p><p>It seems there’s no end to the fascinating innovations coming out in the world of AI. DALL·E, the most recent tool developed by OpenAI, was announced just months after the company unveiled its groundbreaking GPT-3 technology.</p><p>DALL·E is another exciting breakthrough that demonstrates the ability to turn words into images. 
As a natural extension of GPT-3, DALL·E takes pieces of text and generates images rather than words in response.</p><p>In this episode of Short and Sweet AI, I discuss DALL·E in more detail, how it differs from GPT-3, and how it was developed.</p><p>In this episode, find out:</p><ul><li>What DALL·E is</li><li>How DALL·E can generate images from words</li><li>What unintended yet useful behaviors DALL·E can produce</li><li>The human-like creativity of DALL·E.</li></ul><br/><p><strong>Important Links and Mentions:</strong></p><ul><li><a href="https://openai.com/blog/dall-e/" target="_blank">DALL·E: Creating Images from Text</a></li><li><a href="https://www.technologyreview.com/2021/01/05/1015754/avocado-armchair-future-ai-openai-deep-learning-nlp-gpt3-computer-vision-common-sense/" target="_blank">This avocado armchair could be the future of AI</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li>The Next Web: <a href="https://thenextweb.com/neural/2021/01/10/heres-how-openais-magical-dall-e-generates-images-from-text-syndication/" target="_blank">Here’s how OpenAI’s magical DALL-E image generator works</a></li><li>Venture Beat: <a href="https://venturebeat.com/2021/01/05/openai-debuts-dall-e-for-generating-images-from-text/" target="_blank">OpenAI debuts DALL-E for generating images from text</a></li><li>CNBC: <a href="https://www.cnbc.com/2021/01/08/openai-shows-off-dall-e-image-generator-after-gpt-3.html" target="_blank">Why everyone is talking about an image generator released by an Elon Musk-backed A.I. lab</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Hello to you who are curious about AI. I’m Dr. Peper and today&nbsp;I’m talking about DALL·E.</p><p>In a previous episode, I highlighted a new type of AI tool called GPT-3. GPT-3 is a machine learning language model trained on a trillion words that generates poetry, stories, even computer code. Within months of announcing GPT-3, OpenAI released DALL·E. 
DALL·E is not just another breathtaking breakthrough in AI technology. It represents the ability, by a machine, to manipulate visual concepts through language.</p><p>The name DALL·E is a combination of the surrealist artist Salvador Dalí and the animated robot WALL·E. What it does is simple but also revolutionary. It’s a natural extension of GPT-3. The AI system is a 12-billion-parameter version of GPT-3, trained on a dataset of text-image pairs.</p><p>DALL·E takes text prompts and responds not with words but images. If you give the system the text prompt, “an armchair in the shape of an avocado,” it generates an image to match it. It’s a text-to-image technology that’s very powerful. It gives you the ability to create an image of what you want to see with language, because DALL·E isn’t recognizing images; it draws them. And by the way, I would buy one of those avocado chairs if they existed.</p><p>You can visit OpenAI’s website and play with images generated by this astounding technology: a radish in a tutu walking a dog, a robot giraffe, a spaghetti knight. The images are from the real world or are things that don’t exist, like a cube of clouds.</p><p>How Does It Work?</p><p>Text-to-image algorithms aren’t new but have been limited to things such as birds and flowers or other unsophisticated images. DALL·E is significantly different from others that have come before because it uses the GPT-3 neural network to train on text plus images.</p><p>DALL·E uses the language and understanding provided by GPT-3 and its own underlying structure to create an image prompted by a text. Each time it generates a large set...]]></description><content:encoded><![CDATA[<p>Is DALL·E the latest breakthrough in artificial intelligence?</p><p>It seems there’s no end to the fascinating innovations coming out in the world of AI. 
DALL·E, the most recent tool developed by OpenAI, was announced just months after OpenAI unveiled its groundbreaking GPT-3 technology.</p><p>DALL·E is another exciting breakthrough that demonstrates the ability to turn words into images. As a natural extension of GPT-3, DALL·E takes pieces of text and generates images rather than words in response.</p><p>In this episode of Short and Sweet AI, I discuss DALL·E in more detail, how it differs from GPT-3, and how it was developed.</p><p>In this episode, find out:</p><ul><li>What DALL·E is</li><li>How DALL·E can generate images from words</li><li>What unintended yet useful behaviors DALL·E can produce</li><li>The human-like creativity of DALL·E.</li></ul><br/><p><strong>Important Links and Mentions:</strong></p><ul><li><a href="https://openai.com/blog/dall-e/" target="_blank">DALL·E: Creating Images from Text</a></li><li><a href="https://www.technologyreview.com/2021/01/05/1015754/avocado-armchair-future-ai-openai-deep-learning-nlp-gpt3-computer-vision-common-sense/" target="_blank">This avocado armchair could be the future of AI</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li>The Next Web: <a href="https://thenextweb.com/neural/2021/01/10/heres-how-openais-magical-dall-e-generates-images-from-text-syndication/" target="_blank">Here’s how OpenAI’s magical DALL-E image generator works</a></li><li>Venture Beat: <a href="https://venturebeat.com/2021/01/05/openai-debuts-dall-e-for-generating-images-from-text/" target="_blank">OpenAI debuts DALL-E for generating images from text</a></li><li>CNBC: <a href="https://www.cnbc.com/2021/01/08/openai-shows-off-dall-e-image-generator-after-gpt-3.html" target="_blank">Why everyone is talking about an image generator released by an Elon Musk-backed A.I. lab</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Hello to you who are curious about AI. I’m Dr.
Peper and today&nbsp;I’m talking about DALL·E.</p><p>In a previous episode, I highlighted a new type of AI tool called GPT-3. GPT-3 is a machine learning language model trained on a trillion words that generates poetry, stories, even computer code. Within months of announcing GPT-3, OpenAI released DALL·E. DALL·E is not just another breathtaking breakthrough in AI technology. It represents the ability, by a machine, to manipulate visual concepts through language.</p><p>DALL·E’s name is a combination of the surrealist artist Salvador Dalí and the animated robot WALL·E. What it does is simple but also revolutionary. It’s a natural extension of GPT-3: the AI system is a 12-billion-parameter version of GPT-3 trained on a dataset of text–image pairs.</p><p>DALL·E takes text prompts and responds not with words but with images. If you give the system the text prompt “an armchair in the shape of an avocado,” it generates an image to match it. It’s a very powerful text-to-image technology. It gives you the ability to create an image of what you want to see with language, because DALL·E isn’t recognizing images; it draws them. And by the way, I would buy one of those avocado chairs if they existed.</p><p>You can visit OpenAI’s website and play with images generated by this astounding technology: a radish in a tutu walking a dog, a robot giraffe, a spaghetti knight. The images are of things from the real world or things that don’t exist, like a cube of clouds.</p><p>How Does It Work?</p><p>Text-to-image algorithms aren’t new, but until now they have been limited to unsophisticated images such as birds and flowers. DALL·E is significantly different from those that came before because it uses the GPT-3 neural network to train on text plus images.</p><p>DALL·E uses the language understanding provided by GPT-3, plus its own underlying structure, to create an image prompted by text. Each time it generates a large set of images.
Then another machine learning algorithm called CLIP ranks the images and determines which pictures best match the text. As a result, the illustrations are much more coherent and reflect a blend of more complex concepts. This is what makes DALL·E the most realistic text-to-image system ever produced.</p><p>Unintended But Useful Behaviors</p><p>DALL·E also demonstrates another example of “zero-shot visual reasoning.” Zero-shot learning, or ZSL, is the ability of models to perform tasks they weren’t specifically trained to do. These are unintended but useful behaviors.</p><p>In the case of GPT-3, it can write computer code even though it wasn’t trained to do coding. DALL·E “learned” to generate images from captions, and, given the right text prompt, it can transform images into sketches. Another task it wasn’t specifically trained to do was designing custom text on street signs. Essentially, DALL·E can behave as a Photoshop filter.</p><p>It also shows an understanding of visual concepts. It can, in a sense, answer questions visually. When shown a partially completed grid of images and prompted to continue the pattern, DALL·E was able to fill in the grid with matching images without any further instruction.</p><p>Creativity Is a Measure of Intelligence</p><p>Experts agree language grounded in visual understanding, like DALL·E’s, makes AI smarter. This machine learning system can take two unrelated concepts, such as an armchair and an avocado, and put them together in a coherent, new way. This is stunning because the ability to coherently blend concepts and use them in a new way is key to creativity. In essence, the machine stores information about our world to use and generalize in a very human-like way. And in the AI world, creativity is one measure of intelligence. So, is this how machine intelligence becomes human-like intelligence?</p><p>Thanks for listening, I hope you found this helpful.
Be curious and if you like this episode, please leave a review and subscribe because then you’ll receive these episodes weekly. From Short and Sweet AI, I’m Dr. Peper.&nbsp;</p>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">00c1ec8a-8697-4df5-a714-53398282a5ad</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 01 Mar 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/b34d7e7f-9797-411d-ad40-09eb60647cf3/ep-43-edited.mp3" length="8624428" type="audio/mpeg"/><itunes:duration>05:59</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>3</itunes:season><itunes:episode>43</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>What is GPT-3?</title><itunes:title>What is GPT-3?</itunes:title><description><![CDATA[<p>Some have called it the most important and useful advance in AI in years. Others call it crazy accurate AI.</p><p>GPT-3 is a new tool from the AI research lab OpenAI. This tool was designed to generate natural language by analyzing thousands of books, Wikipedia entries, social media posts, blogs, and anything in between on the internet. It’s the largest artificial neural network ever created.</p><p>In this episode of Short and Sweet AI, I talk in more detail about how GPT-3 works and what it’s used for.</p><p><strong>In this episode, find out:</strong></p><ul><li>What GPT-3 is</li><li>How GPT-3 can generate sentences independently</li><li>What supervised vs. 
unsupervised learning is</li><li>How GPT-3 shocked developers by creating computer code</li><li>Where GPT-3 falls short.</li></ul><br/><p><strong>Important Links and Mentions:</strong></p><ul><li><a href="https://www.nytimes.com/2020/11/24/science/artificial-intelligence-ai-gpt3.html" rel="noopener noreferrer" target="_blank">Meet GPT-3. It Has Learned to Code (and Blog and Argue)</a></li><li><a href="https://www.gwern.net/GPT-3#literary-parodies" rel="noopener noreferrer" target="_blank">GPT-3 Creative Fiction</a></li><li><a href="https://www.wired.com/story/ai-text-generator-gpt-3-learning-language-fitfully/" rel="noopener noreferrer" target="_blank">Did a Person Write This Headline, or a Machine?</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li>Disruption Theory - <a href="https://www.youtube.com/watch?v=8V20HkoiNtc" rel="noopener noreferrer" target="_blank">GPT-3 Demo: New AI Algorithm Changes How We Interact with Technology</a></li><li>Forbes - <a href="https://www.forbes.com/sites/bernardmarr/2020/10/05/what-is-gpt-3-and-why-is-it-revolutionizing-artificial-intelligence/?sh=5d11d2fd481a" rel="noopener noreferrer" target="_blank">What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence?</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Today I’m talking about a breathtaking breakthrough in AI which you need to know about.</p><p>Some have called it the most important and useful advance in AI in years.&nbsp;Others call it crazy accurate AI. It’s called GPT-3. GPT-3 stands for Generative Pre-trained Transformer 3, meaning it’s the third version to be released. One developer said, “Playing with GPT-3 feels like seeing the future.”</p><p><strong>Another Mind-Blowing Tool from OpenAI</strong></p><p>GPT-3 is a new AI tool from an artificial intelligence research lab called OpenAI.
This neural network has learned to generate natural language by analyzing thousands of digital books, Wikipedia in its entirety, and a trillion words found on social media, blogs, news articles, anything and everything on the internet. A trillion words. Essentially, it’s the largest artificial neural network ever created. And with language models, size really does matter.</p><p><strong>It’s a Language Predictor</strong></p><p>GPT-3 can answer questions, write essays, summarize long texts, translate languages, take memos, basically, it can create anything that has a language structure. How does it do this? Well it’s a language predictor. If you give it one piece of language, the algorithms are designed to transform and predict what the most useful piece of language should be to follow it.</p><p>Machine learning neural networks study words and their meanings and how they differ depending on other words used in the text. The machine analyzes words to understand language. Then it generates sentences by taking words and sentences apart and rebuilding them itself.</p><p><strong>Supervised vs Unsupervised machine learning</strong></p><p>GPT-3 is a form of machine learning called unsupervised learning. It’s unsupervised because the training data is not labelled as a right or wrong response. It’s free from the limits imposed by using labelled data. This means unsupervised learning can detect all kinds of unknown patterns. The machine works on its own to discover...]]></description><content:encoded><![CDATA[<p>Some have called it the most important and useful advance in AI in years. Others call it crazy accurate AI.</p><p>GPT-3 is a new tool from the AI research lab OpenAI. This tool was designed to generate natural language by analyzing thousands of books, Wikipedia entries, social media posts, blogs, and anything in between on the internet. 
It’s the largest artificial neural network ever created.</p><p>In this episode of Short and Sweet AI, I talk in more detail about how GPT-3 works and what it’s used for.</p><p><strong>In this episode, find out:</strong></p><ul><li>What GPT-3 is</li><li>How GPT-3 can generate sentences independently</li><li>What supervised vs. unsupervised learning is</li><li>How GPT-3 shocked developers by creating computer code</li><li>Where GPT-3 falls short.</li></ul><br/><p><strong>Important Links and Mentions:</strong></p><ul><li><a href="https://www.nytimes.com/2020/11/24/science/artificial-intelligence-ai-gpt3.html" rel="noopener noreferrer" target="_blank">Meet GPT-3. It Has Learned to Code (and Blog and Argue)</a></li><li><a href="https://www.gwern.net/GPT-3#literary-parodies" rel="noopener noreferrer" target="_blank">GPT-3 Creative Fiction</a></li><li><a href="https://www.wired.com/story/ai-text-generator-gpt-3-learning-language-fitfully/" rel="noopener noreferrer" target="_blank">Did a Person Write This Headline, or a Machine?</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li>Disruption Theory - <a href="https://www.youtube.com/watch?v=8V20HkoiNtc" rel="noopener noreferrer" target="_blank">GPT-3 Demo: New AI Algorithm Changes How We Interact with Technology</a></li><li>Forbes - <a href="https://www.forbes.com/sites/bernardmarr/2020/10/05/what-is-gpt-3-and-why-is-it-revolutionizing-artificial-intelligence/?sh=5d11d2fd481a" rel="noopener noreferrer" target="_blank">What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence?</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Today I’m talking about a breathtaking breakthrough in AI which you need to know about.</p><p>Some have called it the most important and useful advance in AI in years.&nbsp;Others call it crazy accurate AI. It’s called GPT-3. GPT-3 stands for Generative Pre-trained Transformer 3, meaning it’s the third version to be released.
One developer said, “Playing with GPT-3 feels like seeing the future”.</p><p><strong>Another Mind-Blowing Tool from OpenAI</strong></p><p>GPT-3 is a new AI tool from an artificial intelligence research lab called OpenAI. This neural network has learned to generate natural language by analyzing thousands of digital books, Wikipedia in its entirety, and a trillion words found on social media, blogs, news articles, anything and everything on the internet. A trillion words. Essentially, it’s the largest artificial neural network ever created. And with language models, size really does matter.</p><p><strong>It’s a Language Predictor</strong></p><p>GPT-3 can answer questions, write essays, summarize long texts, translate languages, take memos, basically, it can create anything that has a language structure. How does it do this? Well it’s a language predictor. If you give it one piece of language, the algorithms are designed to transform and predict what the most useful piece of language should be to follow it.</p><p>Machine learning neural networks study words and their meanings and how they differ depending on other words used in the text. The machine analyzes words to understand language. Then it generates sentences by taking words and sentences apart and rebuilding them itself.</p><p><strong>Supervised vs Unsupervised machine learning</strong></p><p>GPT-3 is a form of machine learning called unsupervised learning. It’s unsupervised because the training data is not labelled as a right or wrong response. It’s free from the limits imposed by using labelled data. This means unsupervised learning can detect all kinds of unknown patterns. The machine works on its own to discover information.</p><p>In supervised machine learning, the machine doesn’t learn on its own. The machine is supervised during its training by using data labelled with the correct answer. This method isn’t flexible. 
It can’t capture more complex relationships or unknown patterns.</p><p>OpenAI first described GPT-3 in a research paper in May 2020. Then it allowed selected people and developers to use it and report online what GPT-3 can do. There’s even an informative article about GPT-3 written entirely by GPT-3.</p><p><strong>Judge for Yourself</strong></p><p>One researcher used GPT-3 to generate a Harry Potter parody in the style of Ernest Hemingway. Take a listen: “It was a cold day on Privet Drive. A child cried. Harry felt nothing. He was dryer than dust. He had been silent too long. He had not felt love. He had scarcely felt hate. Yet the Dementor’s Kiss killed nothing.”</p><p>I think that sounds pretty good!</p><p>And there’s a Twitter feed called gptwisdom that generates quotes using GPT-3. Here are a few examples:</p><p>“Dull as a twice-told tale.” Or: “The point at which a theory ceases to be a theory is called its limit.” Or this thoughtful GPT-3-generated quote: “The truthfulness of your simplicity can only grow, as you improve your character.”</p><p><strong>Things to Know About This Technology</strong></p><p>In essence, GPT-3 is a universal language model. The model learned to identify more than 175 billion different distinguishing features of language. These features are mathematical representations of patterns. The patterns are a map of human language. Using this map, GPT-3 learned to perform all sorts of tasks it was not even built to do.</p><p><strong>Unintended Abilities</strong></p><p>One unexpected ability is that GPT-3 can write computer code. Makes sense, because computer code is a type of language. But this behavior was entirely new. It even surprised the designers of GPT-3. They didn’t build GPT-3 to generate computer code; they trained it to do just one thing: predict the next word in a sequence of words.&nbsp;</p><p>All in all, people discovered it can do many tasks that it wasn’t originally trained to do.
They found it could build an app by giving it a description of what they wanted the app to do. It can generate charts and graphs from plain English. It can identify paintings from written descriptions. It can generate quizzes for practice on any topic and explain the answers in detail.</p><p><strong>The Best But Flawed</strong></p><p>GPT-3’s ability to generate text is the best that has ever been seen in AI. Yet it’s far from flawless. It can spew offensive and biased language and struggles with questions that involve reasoning by analogy. It isn’t guided by any coherent understanding of reality because it doesn’t have an internal model of the world. Sometimes it produces nonsense because it’s essentially word-stringing. Other AI researchers say it’s like a black box and it’s hard to figure out what this thing is doing.&nbsp;&nbsp;</p><p><strong>A Machine Like Us</strong></p><p>And yet, the consensus is GPT-3 is shockingly good. But because it can generate convincing tweets, blog posts and computer code, people think of it as being like them. They are reading humanity into the GPT-3 system and, as such, run the risk of ignoring its limits. Sam Altman, one of the founders of OpenAI which developed GPT-3, has thanked everyone for their compliments. But he urges caution about the hype. He says, “AI is going to change the world but GPT-3 is just a very early glimpse. We still have a lot to figure out.”</p><p>Thanks for listening, I hope you found this helpful. Be curious and if you like this episode, please leave a review and subscribe because then you’ll receive these episodes weekly. From Short and Sweet AI, I’m Dr. Peper.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">ffaf9144-1214-4173-a930-c03fff0eeae1</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. 
Peper]]></dc:creator><pubDate>Mon, 22 Feb 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/8beef168-e965-454b-a299-5e786c34d96b/ep-42-podcast-edit.mp3" length="11229190" type="audio/mpeg"/><itunes:duration>07:47</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>42</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>What is AI Bias?</title><itunes:title>What is AI Bias?</itunes:title><description><![CDATA[<p>The ethics surrounding AI are complicated yet fascinating to discuss. One issue that sits front and center is AI bias, but what is it?&nbsp;</p><p>AI is based on algorithms, fed by data and experiences. The problem is when that data is incorrect, biased or based on stereotypes. Unfortunately, this means that machines, just like humans, are guided by potentially biased information.&nbsp;</p><p>This means that your daily threat from AI is not from the machines themselves, but their <em>bias. 
</em>In this episode of Short and Sweet AI, I talk about this further and discuss a very serious problem: artificial intelligence bias.<em>&nbsp;</em></p><p><strong>In this episode, find out:&nbsp;</strong></p><ul><li>What AI bias is</li><li>The effects of AI bias</li><li>The three different types of bias and how they affect AI</li><li>How AI contributes to selection bias</li></ul><br/><p><strong>Important Links &amp; Mentions:</strong></p><ul><li><a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G" rel="noopener noreferrer" target="_blank">Amazon scraps secret AI recruiting tool that showed bias against women</a></li><li><a href="https://www.washingtonpost.com/technology/2020/12/23/google-timnit-gebru-ai-ethics/" rel="noopener noreferrer" target="_blank">Google Hired Timnit Gebru to be an outspoken critic of unethical AI</a> </li><li><a href="https://www.forbes.com/sites/cognitiveworld/2020/02/07/biased-algorithms/?sh=3d4dc94876fc" rel="noopener noreferrer" target="_blank">Biased Algorithms Learn from Biased Data: 3 Kinds Biases Found In AI Datasets</a></li><li><a href="https://arxiv.org/pdf/2012.02394.pdf" rel="noopener noreferrer" target="_blank">Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li><a href="https://venturebeat.com/2020/12/09/columbia-researchers-find-white-men-are-the-worst-at-reducing-ai-bias/" rel="noopener noreferrer" target="_blank">Venture Beat – Study finds diversity in data science teams is key in reducing algorithmic bias</a></li><li><a href="https://www.nytimes.com/2019/11/11/technology/artificial-intelligence-bias.html" rel="noopener noreferrer" target="_blank">The New York Times - We Teach A.I.
Systems Everything, Including Our Biases</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Today I’m talking about a very serious problem: artificial intelligence bias.</p><p><strong>AI Ethics </strong></p><p>The ethics of AI are complicated. Every time I go to review this area, I’m dazed by all the issues. There are groups in the AI community who wrestle with robot ethics, the threat to human dignity, transparency ethics, self-driving car liability, AI accountability, the ethics of weaponizing AI, machine ethics, and even the existential risk from superintelligence. But of all these hidden terrors, one is front and center. Artificial intelligence bias. What is it?</p><p><strong>Machines Built with Bias</strong></p><p>AI is based on algorithms in the form of computer software. Algorithms power computers to make decisions through something called machine learning. Machine learning algorithms are all around us. They supply the Netflix suggestions we receive, the posts appearing at the top of our social media feeds, they drive the results of our google searches. Algorithms are fed on data. If you want to teach a machine to recognize a cat, you feed the algorithm thousands of cat images until it can recognize a cat better than you can. </p><p>The problem is machine learning algorithms are used to make decisions in our daily lives that can have extreme consequences. A computer program may help police decide where to send resources, or who’s approved for a mortgage, who’s accepted to a university or who gets the job. &nbsp;</p><p>More and more experts in the field are sounding the alarm. Machines, just...]]></description><content:encoded><![CDATA[<p>The ethics surrounding AI are complicated yet fascinating to discuss. One issue that sits front and center is AI bias, but what is it?&nbsp;</p><p>AI is based on algorithms, fed by data and experiences. The problem is when that data is incorrect, biased or based on stereotypes. 
Unfortunately, this means that machines, just like humans, are guided by potentially biased information.&nbsp;</p><p>This means that your daily threat from AI is not from the machines themselves, but from their <em>bias. </em>In this episode of Short and Sweet AI, I talk about this further and discuss a very serious problem: artificial intelligence bias.<em>&nbsp;</em></p><p><strong>In this episode, find out:&nbsp;</strong></p><ul><li>What AI bias is</li><li>The effects of AI bias</li><li>The three different types of bias and how they affect AI</li><li>How AI contributes to selection bias</li></ul><br/><p><strong>Important Links &amp; Mentions:</strong></p><ul><li><a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G" rel="noopener noreferrer" target="_blank">Amazon scraps secret AI recruiting tool that showed bias against women</a></li><li><a href="https://www.washingtonpost.com/technology/2020/12/23/google-timnit-gebru-ai-ethics/" rel="noopener noreferrer" target="_blank">Google Hired Timnit Gebru to be an outspoken critic of unethical AI</a> </li><li><a href="https://www.forbes.com/sites/cognitiveworld/2020/02/07/biased-algorithms/?sh=3d4dc94876fc" rel="noopener noreferrer" target="_blank">Biased Algorithms Learn from Biased Data: 3 Kinds Biases Found In AI Datasets</a></li><li><a href="https://arxiv.org/pdf/2012.02394.pdf" rel="noopener noreferrer" target="_blank">Biased Programmers? Or Biased Data?
A Field Experiment in Operationalizing AI Ethics</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li><a href="https://venturebeat.com/2020/12/09/columbia-researchers-find-white-men-are-the-worst-at-reducing-ai-bias/" rel="noopener noreferrer" target="_blank">Venture Beat – Study finds diversity in data science teams is key in reducing algorithmic bias</a></li><li><a href="https://www.nytimes.com/2019/11/11/technology/artificial-intelligence-bias.html" rel="noopener noreferrer" target="_blank">The New York Times - We Teach A.I. Systems Everything, Including Our Biases</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Today I’m talking about a very serious problem: artificial intelligence bias.</p><p><strong>AI Ethics </strong></p><p>The ethics of AI are complicated. Every time I go to review this area, I’m dazed by all the issues. There are groups in the AI community who wrestle with robot ethics, the threat to human dignity, transparency ethics, self-driving car liability, AI accountability, the ethics of weaponizing AI, machine ethics, and even the existential risk from superintelligence. But of all these hidden terrors, one is front and center. Artificial intelligence bias. What is it?</p><p><strong>Machines Built with Bias</strong></p><p>AI is based on algorithms in the form of computer software. Algorithms power computers to make decisions through something called machine learning. Machine learning algorithms are all around us. They supply the Netflix suggestions we receive, the posts appearing at the top of our social media feeds, they drive the results of our google searches. Algorithms are fed on data. If you want to teach a machine to recognize a cat, you feed the algorithm thousands of cat images until it can recognize a cat better than you can. </p><p>The problem is machine learning algorithms are used to make decisions in our daily lives that can have extreme consequences. 
A computer program may help police decide where to send resources, or who’s approved for a mortgage, who’s accepted to a university, or who gets the job.</p><p>More and more experts in the field are sounding the alarm. Machines, just like humans, are guided by data and experience. If the data or experience is mistaken or based on stereotypes, a biased decision is made, whether it’s a machine or a human.</p><p><strong>Types of AI Bias&nbsp;</strong></p><p>There are three main types of bias in artificial intelligence: interaction bias, latent bias, and selection bias. </p><p><strong>Microsoft’s Failed Chatbot</strong></p><p>Interaction bias arises from the users who are driving the interaction and their biases. A clear example was Microsoft’s Twitter-based chatbot called Tay. Tay was designed to learn from its interactions with users. Unfortunately, the user community on Twitter repeatedly tweeted offensive statements at Tay, and Tay used those statements to train itself. As a result, Tay’s responses became racist and misogynistic, and the bot had to be shut down within 24 hours.</p><p><strong>Amazon’s Recruiting Bias</strong></p><p>Latent bias is when an algorithm incorrectly identifies something based on historical data or because of an existing stereotype. A well-known example of this occurred with Amazon’s recruiting algorithm. The company realized after several years that its program for selecting and hiring software developers favored men. This was because Amazon’s computer systems were trained with a dataset containing resumes mainly from men. </p><p>Because of this, the algorithm penalized resumes that included the word “women’s,” as in “women’s chess champion.” And it downgraded an applicant if they had graduated from an all-women’s college.
Amazon ultimately abandoned the program because, even with editing, they could not make it gender neutral.</p><p><strong>Selection Bias Ignores the Real Population</strong></p><p>In selection bias, a dataset overrepresents one group and underrepresents another. It doesn’t represent the real population. For example, some machine learning datasets come from scraping the internet for information. But major search engines and the data in their systems are developed in the West. As a result, algorithms are more likely to recognize a bride and groom in a Western-style wedding but not in an African wedding. </p><p><strong>Can Big Tech Really Self-Police?</strong></p><p>Researchers are just beginning to understand the effects of bias in machine learning algorithms. And the big tech companies that create these systems have pledged to address the problem. But others question their ability to self-police. Google recently fired a vocal, high-profile expert whom it had hired to focus on ethical AI. She was concerned about problems in the language models they used. This raises the point that ethical AI has to mean something to the most powerful companies in the world for it to mean anything at all.</p><p><strong>The Power of Diversity</strong></p><p>So, what can we do about algorithms that judge us and make decisions about us at every stage of our life, without us ever knowing? Experts say we need to be aware of the problem. We need to ensure the datasets are unbiased. We should develop and use programs that can test algorithms to check for bias. And a recent study emphasized that if the people training the systems come from diverse backgrounds, there is less bias. </p><p>We know data scientists inject their bias into the algorithms they build. Having diversity means the algorithms are built for all types of people.
We’ve come to learn we need AI ethics because as one headline put it, “We Teach AI Systems Everything Including Our Bias.”&nbsp;&nbsp;</p><p>Thanks for listening, I hope you found this helpful. Be curious and if you like this episode, please leave a review and subscribe because then you’ll receive my podcasts weekly. From Short and Sweet AI, I’m Dr. Peper.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">cb637dea-136f-4e90-9dad-128ef49210c8</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 15 Feb 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/49261233-842a-4c56-8e84-874c411db359/ep-41-podcast-edit-v2.mp3" length="6528339" type="audio/mpeg"/><itunes:duration>06:46</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>41</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>AI + Covid-19 Vaccine</title><itunes:title>AI + Covid-19 Vaccine</itunes:title><description><![CDATA[<p>How fast can you develop a vaccine? Never has this challenge been put to the test quite so intensely as in 2020. </p>
<p>In fact, Jason Moore, who heads Bioinformatics at UPenn thinks that if the virus had hit 20 years ago, the world might have been doomed. It’s only thanks to modern technology that we now have a safe vaccine. He said, “I think we have a fighting chance today because of AI and machine learning.”</p>
<p>So, how did AI help to make the Covid-19 vaccine a reality? The short answer is a combination of computational analysis and DeepMind’s AlphaFold system. I talk more about how researchers developed the vaccine so fast in this episode of Short and Sweet AI.</p>
<p>In this episode find out: </p>
<ul>
<li>How AI was used to learn more about Covid-19 through data analysis</li>
<li>How AI helped researchers develop the vaccine so quickly</li>
<li>Where we would be without AI and machine learning </li>
</ul><br/>
<p> </p>
<p><strong>Important Links &amp; Mentions</strong></p>
<ul>
<li><a href="https://drpepermd.com/podcast-2/page/3/" rel="noopener noreferrer" target="_blank">Deep Mind, Gaming, + the Nobel Prize</a> </li>
<li><a href="https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery" rel="noopener noreferrer" target="_blank">AlphaFold: Using AI for Scientific Discovery</a></li>
<li><a href="https://www.youtube.com/watch?v=gg7WjuFs8F4&amp;feature=youtu.be" rel="noopener noreferrer" target="_blank">Alpha Fold: the making of a scientific breakthrough</a></li>
</ul><br/>
<p> </p>
<p><strong>Resources:</strong></p>
<ul>
<li>IEEE Spectrum - <a href="https://spectrum.ieee.org/artificial-intelligence/medical-ai/what-ai-can-and-cant-do-in-the-race-for-a-coronavirus-vaccine" rel="noopener noreferrer" target="_blank">What AI Can–and Can’t–Do in the Race for a Coronavirus Vaccine</a></li>
<li>Wired.com - <a href="https://www.wired.com/story/opinion-ai-can-help-find-scientists-find-a-covid-19-vaccine/" rel="noopener noreferrer" target="_blank">AI Can Help Scientists Find a Covid-19 Vaccine</a></li>
<li>Washington Post - <a href="https://www.washingtonpost.com/health/covid-19-artificial-intelligence/2020/10/30/7486db84-1485-11eb-bc10-40b25382f1be_story.html" rel="noopener noreferrer" target="_blank">Artificial Intelligence and Covid-19: Can the Machines Save Us?</a></li>
</ul><br/>
<p><strong>Episode Transcript:</strong></p>
<p>Friends tease me because I’m so fascinated with artificial intelligence that I will claim AI is the reason we have a safe Covid-19 vaccine so quickly. And they’re right, it is one of the reasons. In fact, Jason Moore, who heads Bioinformatics at UPenn, thinks that if this virus had hit 20 years ago, the world might have been doomed. He said, “I think we have a fighting chance today because of AI and machine learning.” </p>
<p>How did AI help to make the Covid-19 vaccine a reality? The short answer is through computational analysis and AlphaFold. </p>
<p>But first, a little background on vaccines. A vaccine provokes the body into producing defensive white blood cells and antibodies by imitating the infection. In order to imitate an infection, you need to find a target on the virus. Once you find the target you need to understand its 3D shape to make the vaccine against it. But it’s really hard to figure out all the possible shapes before you find the one, unique 3D shape of the target, unless…unless of course you use AI. </p>
<p> </p>
<p>In the case of the Covid-19 vaccine, DeepMind’s neural network AlphaFold saved the day. AlphaFold predicted the 3D shape of the virus spike protein based on its genetic sequence. And it did it fast, as early as March 2020, three months after the pandemic started. Without AI, it would have taken many months to come up with the best possible target protein, and it might have been wrong. But with AI, researchers were able to race ahead to ultimately develop the mRNA vaccine. </p>]]></description><content:encoded><![CDATA[<p>How fast can you develop a vaccine? Never has this challenge been put to the test quite so intensely as in 2020. </p>
<p>In fact, Jason Moore, who heads Bioinformatics at UPenn thinks that if the virus had hit 20 years ago, the world might have been doomed. It’s only thanks to modern technology that we now have a safe vaccine. He said, “I think we have a fighting chance today because of AI and machine learning.”</p>
<p>So, how did AI help to make the Covid-19 vaccine a reality? The short answer is a combination of computational analysis and DeepMind’s AlphaFold system. I talk more about how researchers developed the vaccine so fast in this episode of Short and Sweet AI.</p>
<p>In this episode find out: </p>
<ul>
<li>How AI was used to learn more about Covid-19 through data analysis</li>
<li>How AI helped researchers develop the vaccine so quickly</li>
<li>Where we would be without AI and machine learning </li>
</ul><br/>
<p> </p>
<p><strong>Important Links &amp; Mentions</strong></p>
<ul>
<li><a href="https://drpepermd.com/podcast-2/page/3/" rel="noopener noreferrer" target="_blank">Deep Mind, Gaming, + the Nobel Prize</a> </li>
<li><a href="https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery" rel="noopener noreferrer" target="_blank">AlphaFold: Using AI for Scientific Discovery</a></li>
<li><a href="https://www.youtube.com/watch?v=gg7WjuFs8F4&amp;feature=youtu.be" rel="noopener noreferrer" target="_blank">Alpha Fold: the making of a scientific breakthrough</a></li>
</ul><br/>
<p> </p>
<p><strong>Resources:</strong></p>
<ul>
<li>IEEE Spectrum - <a href="https://spectrum.ieee.org/artificial-intelligence/medical-ai/what-ai-can-and-cant-do-in-the-race-for-a-coronavirus-vaccine" rel="noopener noreferrer" target="_blank">What AI Can–and Can’t–Do in the Race for a Coronavirus Vaccine</a></li>
<li>Wired.com - <a href="https://www.wired.com/story/opinion-ai-can-help-find-scientists-find-a-covid-19-vaccine/" rel="noopener noreferrer" target="_blank">AI Can Help Scientists Find a Covid-19 Vaccine</a></li>
<li>Washington Post - <a href="https://www.washingtonpost.com/health/covid-19-artificial-intelligence/2020/10/30/7486db84-1485-11eb-bc10-40b25382f1be_story.html" rel="noopener noreferrer" target="_blank">Artificial Intelligence and Covid-19: Can the Machines Save Us?</a></li>
</ul><br/>
<p><strong>Episode Transcript:</strong></p>
<p>Friends tease me because I’m so fascinated with artificial intelligence that I will claim AI is the reason we have a safe Covid-19 vaccine so quickly. And they’re right, it is one of the reasons. In fact, Jason Moore, who heads Bioinformatics at UPenn, thinks that if this virus had hit 20 years ago, the world might have been doomed. He said, “I think we have a fighting chance today because of AI and machine learning.” </p>
<p>How did AI help to make the Covid-19 vaccine a reality? The short answer is through computational analysis and AlphaFold. </p>
<p>But first, a little background on vaccines. A vaccine provokes the body into producing defensive white blood cells and antibodies by imitating the infection. In order to imitate an infection, you need to find a target on the virus. Once you find the target you need to understand its 3D shape to make the vaccine against it. But it’s really hard to figure out all the possible shapes before you find the one, unique 3D shape of the target, unless…unless of course you use AI. </p>
<p> </p>
<p>In the case of the Covid-19 vaccine, DeepMind’s neural network AlphaFold saved the day. AlphaFold predicted the 3D shape of the virus spike protein based on its genetic sequence. And it did it fast, as early as March 2020, three months after the pandemic started. Without AI, it would have taken many months to come up with the best possible target protein, and it might have been wrong. But with AI, researchers were able to race ahead to ultimately develop the mRNA vaccine. </p>
<p>It’s common knowledge that it can take years or even decades to develop a vaccine. Before Covid-19, using other approaches, the quickest vaccine ever developed took 4 years. As of September 2020, there were 34 different Covid-19 vaccines being tested in humans. That’s an astonishing number in so short a time. </p>
<p>Neural networks excel at analyzing massive amounts of data to find patterns that humans might not spot. Computers use machine learning to sort and analyze incredible amounts of data to learn and train over time. And that’s been AI’s second big contribution to conquering Covid-19. It’s called computational analysis. It involves using AI to gather insights from huge sources of experimental as well as real-world data on the virus. </p>
<p>At the outset of the pandemic, the Allen Institute for AI started an online repository of research articles about Covid-19. Today it has over 30,000 academic articles. Researchers can use this dataset to train machine learning algorithms so they can better understand the virus. </p>
<p>For example, as early as April 2020, computational scientists harnessed neural networks to sort through medical records by the thousands. The machines were able to confirm that loss of smell and taste is among the earliest symptoms of Covid infection. There had been isolated reports of anosmia, the medical term for loss of smell, but computer data analysis validated the finding. The CDC then added these symptoms to its list of Covid symptoms, which helped identify when a person had the infection. </p>
<p>In another instance, medical charts from 96 hospitals in several different countries were analyzed with machine learning. What emerged was the insight that many Covid patients had off-the-chart blood-clotting readings. This alerted doctors to use blood thinners in patients hospitalized with Covid. </p>
<p>As scientists explain, the human brain quickly becomes overwhelmed by the endless combinations of things, but machines can home in on important findings very quickly and effectively. AI is routinely depicted as evil in fiction, on social media, and by Hollywood, and yet it’s revolutionized how vaccines are created. It’s also become a workhorse of this pandemic as a powerful technology for processing massive amounts of information. Maybe the machines will save us. </p>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">88f446e1-4a64-4433-914a-e9ab7a05367e</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 08 Feb 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/e1c9cc46-8c5c-4081-b80b-6551d3f2e26c/ep-40-podcast.mp3" length="5856475" type="audio/mpeg"/><itunes:duration>06:04</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>40</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>What is the 4th Industrial Revolution?</title><itunes:title>What is the 4th Industrial Revolution?</itunes:title><description><![CDATA[<p>Technology breakthroughs are disrupting every industry at a rapid rate. In fact, advances in technology are massively transforming every industry exponentially faster than ever before in history. </p><p>What do you call exponentially fast disruption and massive transformation in worldwide industries? 
&nbsp;</p><p>It’s called the 4th Industrial Revolution, which I talk about in more detail in this episode of Short and Sweet AI.&nbsp;&nbsp;&nbsp;</p><p><strong>In this episode find out:&nbsp;</strong></p><ul><li>What the 4<sup>th</sup> Industrial Revolution is</li><li>A brief overview of the previous industrial revolutions</li><li>Whether the 4<sup>th</sup> Industrial Revolution should be considered a part of the Third Industrial Revolution</li><li>Pros and cons of the new Industry 4.0</li><li>Why inequality may become the greatest threat of the 4th IR</li></ul><br/><p>&nbsp;</p><p><strong>Important Links &amp; Mentions</strong></p><ul><li><a href="https://drpepermd.com/episode/what-is-edge-ai-or-edge-computing/" rel="noopener noreferrer" target="_blank">What Is Edge AI or Edge Computing?</a></li><li><a href="https://drpepermd.com/episode/5g-fifth-generation-wireless-what-is-it/" rel="noopener noreferrer" target="_blank">5G: Fifth Generation Wireless, What Is It?</a></li><li><a href="https://drpepermd.com/episode/what-is-iot-and-why-does-it-matter/" rel="noopener noreferrer" target="_blank">What is IOT and Why Does it Matter?</a></li><li><a href="https://drpepermd.com/2019/11/15/xr-what-is-extended-reality/" rel="noopener noreferrer" target="_blank">XR: What is Extended Reality?</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li><a href="https://www.cnbc.com/2019/01/16/fourth-industrial-revolution-explained-davos-2019.html" rel="noopener noreferrer" target="_blank">CNBC - Everything you need to know about the Fourth Industrial Revolution</a></li><li><a href="https://www.salesforce.com/blog/what-is-the-fourth-industrial-revolution-4ir/#:~:text=The%20Fourth%20Industrial%20Revolution%20is,quantum%20computing%2C%20and%20other%20technologies" rel="noopener noreferrer" target="_blank">Salesforce - What Is the Fourth Industrial Revolution?</a></li><li><a 
href="https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/" rel="noopener noreferrer" target="_blank">World Economic Forum - The Fourth Industrial Revolution: what it means, how to respond</a></li><li><a href="https://www.youtube.com/watch?v=kpW9JcWxKq0" rel="noopener noreferrer" target="_blank">What is the Fourth Industrial Revolution?</a></li><li><a href="https://www.youtube.com/watch?v=v9rZOa3CUC8" rel="noopener noreferrer" target="_blank">What is the Fourth Industrial Revolution? | CNBC Explains</a></li><li><a href="https://www.amazon.com/Fourth-Industrial-Revolution-Klaus-Schwab/dp/1524758868/ref=sr_1_sc_1?ie=UTF8&amp;qid=1502893274&amp;sr=8-1-spell&amp;keywords=klas+schwab" rel="noopener noreferrer" target="_blank">The Fourth Industrial Revolution by Klaus Schwab</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Welcome to those who are curious about AI. From Short and Sweet AI, I’m Dr. Peper.</p><p>Right here, right now, technology breakthroughs are disrupting every industry and massively transforming every industry, exponentially faster than ever before in history. What do you call exponentially fast disruption and massive transformation in world-wide industries? It’s called the 4<sup>th</sup> industrial revolution.&nbsp;&nbsp;&nbsp;&nbsp;</p><p>The 4<sup>th</sup> industrial revolution is also known as 4 IR or Industry 4.0. But what does it mean? Klaus Schwab, founder of the World Economic Forum, coined the term and wrote a book of the same title. He details how we are now living during a 4<sup>th</sup> industrial revolution characterized by the fusion of AI, robotics, 3D printing, IOT, quantum computing, blockchain, autonomous vehicles, 5G, synthetic biology, virtual reality, and countless other...]]></description><content:encoded><![CDATA[<p>Technology breakthroughs are disrupting every industry at a rapid rate. 
In fact, advances in technology are massively transforming every industry exponentially faster than ever before in history. </p><p>What do you call exponentially fast disruption and massive transformation in worldwide industries? &nbsp;</p><p>It’s called the 4th Industrial Revolution, which I talk about in more detail in this episode of Short and Sweet AI.&nbsp;&nbsp;&nbsp;</p><p><strong>In this episode find out:&nbsp;</strong></p><ul><li>What the 4<sup>th</sup> Industrial Revolution is</li><li>A brief overview of the previous industrial revolutions</li><li>Whether the 4<sup>th</sup> Industrial Revolution should be considered a part of the Third Industrial Revolution</li><li>Pros and cons of the new Industry 4.0</li><li>Why inequality may become the greatest threat of the 4th IR</li></ul><br/><p>&nbsp;</p><p><strong>Important Links &amp; Mentions</strong></p><ul><li><a href="https://drpepermd.com/episode/what-is-edge-ai-or-edge-computing/" rel="noopener noreferrer" target="_blank">What Is Edge AI or Edge Computing?</a></li><li><a href="https://drpepermd.com/episode/5g-fifth-generation-wireless-what-is-it/" rel="noopener noreferrer" target="_blank">5G: Fifth Generation Wireless, What Is It?</a></li><li><a href="https://drpepermd.com/episode/what-is-iot-and-why-does-it-matter/" rel="noopener noreferrer" target="_blank">What is IOT and Why Does it Matter?</a></li><li><a href="https://drpepermd.com/2019/11/15/xr-what-is-extended-reality/" rel="noopener noreferrer" target="_blank">XR: What is Extended Reality?</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li><a href="https://www.cnbc.com/2019/01/16/fourth-industrial-revolution-explained-davos-2019.html" rel="noopener noreferrer" target="_blank">CNBC - Everything you need to know about the Fourth Industrial Revolution</a></li><li><a 
href="https://www.salesforce.com/blog/what-is-the-fourth-industrial-revolution-4ir/#:~:text=The%20Fourth%20Industrial%20Revolution%20is,quantum%20computing%2C%20and%20other%20technologies" rel="noopener noreferrer" target="_blank">Salesforce - What Is the Fourth Industrial Revolution?</a></li><li><a href="https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/" rel="noopener noreferrer" target="_blank">World Economic Forum - The Fourth Industrial Revolution: what it means, how to respond</a></li><li><a href="https://www.youtube.com/watch?v=kpW9JcWxKq0" rel="noopener noreferrer" target="_blank">What is the Fourth Industrial Revolution?</a></li><li><a href="https://www.youtube.com/watch?v=v9rZOa3CUC8" rel="noopener noreferrer" target="_blank">What is the Fourth Industrial Revolution? | CNBC Explains</a></li><li><a href="https://www.amazon.com/Fourth-Industrial-Revolution-Klaus-Schwab/dp/1524758868/ref=sr_1_sc_1?ie=UTF8&amp;qid=1502893274&amp;sr=8-1-spell&amp;keywords=klas+schwab" rel="noopener noreferrer" target="_blank">The Fourth Industrial Revolution by Klaus Schwab</a></li></ul><br/><p><strong>Episode Transcript:</strong></p><p>Welcome to those who are curious about AI. From Short and Sweet AI, I’m Dr. Peper.</p><p>Right here, right now, technology breakthroughs are disrupting every industry and massively transforming every industry, exponentially faster than ever before in history. What do you call exponentially fast disruption and massive transformation in world-wide industries? It’s called the 4<sup>th</sup> industrial revolution.&nbsp;&nbsp;&nbsp;&nbsp;</p><p>The 4<sup>th</sup> industrial revolution is also known as 4 IR or Industry 4.0. But what does it mean? Klaus Schwab, founder of the World Economic Forum, coined the term and wrote a book of the same title. 
He details how we are now living during a 4<sup>th</sup> industrial revolution characterized by the fusion of AI, robotics, 3D printing, IOT, quantum computing, blockchain, autonomous vehicles, 5G, synthetic biology, virtual reality, and countless other technologies. He describes this as a “technological revolution… that is blurring the lines between the physical, digital and biological spheres”.</p><p>Technology merges with humans as our smart watches monitor our heart rate, our temperature, or how much we move. It embeds in our daily lives as facial recognition, voice-activated assistants, or apps on our phones. This isn’t the future, this is happening now. It’s changing how we live and changing who we are.</p><p>The three previous industrial revolutions also had new technology which fundamentally changed society. And yet, they were different. Let’s go back and look.</p><p>The First Industrial Revolution occurred in 1760 with the invention of the steam engine and led to factory manufacturing. Hand-made goods were replaced by mass-produced products. And the agricultural society was replaced by a huge migration to the cities. The Second Industrial Revolution came in the late 1800s with inventions such as the internal combustion engine, the lightbulb, and the telephone, and major infrastructure such as railroads, as well as the steel, oil, and electricity industries. The Third Industrial Revolution began in the 1960s with the invention of the semiconductor, personal computers, and ultimately, the internet.&nbsp;&nbsp;</p><p>Schwab rejects the idea that these present-day developments are part of the Third Industrial Revolution.&nbsp;4IR is evolving at an exponential, not linear, pace, unlike the previous IRs. For example, it took 75 years for 100 million people to have a traditional telephone, but it only took 2 years for 100 million people to sign up for Instagram and less than a month for 100 million people to use Pokémon Go. 
The 4<sup>th</sup> industrial revolution involves many, many different technologies. Those technologies are combining and merging together and can transform entire systems, across companies and industries, and across cultures and countries. &nbsp;&nbsp;&nbsp;</p><p>What are the pros and cons of this new Industry 4.0? Advocates point out the increased productivity from technology and the improved quality of daily life, where we can have almost anything we want on demand. There will be massive new markets created as more people come online, and more entrepreneurship exploding worldwide as barriers to new businesses are lowered.</p><p>But many thoughtful people are concerned about the cybersecurity risks as everything becomes so connected through the IOT. And disruption of core industries has already begun, with Airbnb challenging hotels, Uber and Lyft dissolving the taxi industry, and Amazon threatening any business that sells, well, anything. There are ethical concerns about access to data on individuals or groups being widespread and used for personal gain and manipulation.</p><p>But perhaps the greatest threat of the 4<sup>th</sup> industrial revolution is the specter of massive inequality. Experts fear there will be a divide of high-skill/high-pay workers and low-skill/low-pay workers in a winner-take-all economy, as the middle class dissolves. Even Schwab predicts that inequality will be the greatest concern affecting society in the 4<sup>th</sup> Industrial Revolution.</p><p>Typically, early adopters of new technology gain the greatest financial benefits, allowing them to jump ahead, while the income gap widens. Sounds pretty dire and yet, no one knows. The French philosopher Voltaire said, “Doubt is an uncomfortable condition, but certainty is a ridiculous one.” This revolution is creating change at warp speed. 
And even those with knowledge and preparation may not be able to keep up with the ripple effects from the changes.&nbsp;&nbsp;</p>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">6f198e6a-10b3-484c-b842-7b9043b602ff</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 01 Feb 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/25f0cf1f-5d70-44a8-b600-e80cd70f5312/ep-39-podcast-edit.mp3" length="7155164" type="audio/mpeg"/><itunes:duration>07:25</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2021</itunes:season><itunes:episode>39</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>Personal Data as Private Property</title><itunes:title>Personal Data as Private Property</itunes:title><description><![CDATA[<p>Is it time we regained control of our data and found new and better ways to protect it?</p>
<p>You and I know that the social media platforms and internet sites we visit collect data on us. In many ways, they monetize our data and use it as a product that can be purchased. </p>
<p>In this episode of Short and Sweet AI, I talk about personal data as private property and whether there is a way for us to choose who gets to use our data.</p>
<p><strong>In this episode find out:</strong></p>
<ul>
<li>The true value of data</li>
<li>Whether we should get paid for our data</li>
<li>Who Professor Song is</li>
<li>How Professor Song and her company “Oasis Labs” are working on a system that could potentially help users protect their data and even get paid for it</li>
<li>How you could potentially make your data your private property</li>
<li>Professor Song’s vision for the future and why she believes that we should get revenue by sharing our data</li>
</ul><br/>
<p></p>
<p><strong>Important Links &amp; Mentions</strong></p>
<ul>
<li><a href="https://www.oasislabs.com/" rel="noopener noreferrer" target="_blank">Oasis Labs</a></li>
<li><a href="https://drpepermd.com/podcast-2/page/4/" rel="noopener noreferrer" target="_blank">Are Machine Learning and Deep learning the Same as AI?</a></li>
</ul><br/>
<p></p>
<p><strong>Resources:</strong></p>
<ul>
<li><a href="https://www.wired.com/story/dawn-song-oasis-labs-data-privacy-wired25/" rel="noopener noreferrer" target="_blank">Oasis Labs' Dawn Song on a Safer Way to Protect Your Data</a></li>
<li><a href="https://www.nytimes.com/2019/11/19/technology/artificial-intelligence-dawn-song.html" rel="noopener noreferrer" target="_blank">Building a World Where Data Privacy Exists Online</a></li>
<li><a href="https://www.aitrends.com/data-privacy-and-security/get-paid-for-your-data-reap-the-data-dividend/" rel="noopener noreferrer" target="_blank">Get Paid for Your Data, Reap the Data Dividend</a></li>
<li><a href="https://medium.com/oasislabs/giving-users-control-of-their-genomic-data-e9ae8685d9ca" rel="noopener noreferrer" target="_blank">Giving Users Control of their Genomic Data</a></li>
<li><a href="https://www.wired.com/video/watch/oasis-labs-dawn-song-in-conversation-with-tom-simonite" rel="noopener noreferrer" target="_blank">Oasis Labs' Dawn Song in Conversation with Tom Simonite</a></li>
<li><a href="https://www.youtube.com/watch?v=eMh5YqKopjE" rel="noopener noreferrer" target="_blank">deeplearning.ai's Heroes of Deep Learning: Dawn Song</a></li>
<li><a href="https://www.npr.org/2019/09/18/762046356/u-s-military-researchers-work-to-fix-easily-fooled-ai" rel="noopener noreferrer" target="_blank">Computer Scientists Work To Fix Easily Fooled AI</a></li>
</ul><br/>
<p></p>
<p><strong>Episode Transcript:</strong></p>
<p>From Short and Sweet AI, I’m Dr. Peper, and today I want to talk with you about personal data as private property. </p>
<p>You and I know that social media platforms and internet sites we visit are collecting data on us. We know they’re selling our data to advertisers. I mean, that’s their business model. They provide a platform for us to connect with each other and we give them our personal data as payment. Data is valuable. Data is the new oil. It brings in billions of dollars of income for Google, Facebook, Instagram, Amazon, and countless other companies. When we’re online and we click on a pop-up that says “accept”, we’re essentially giving away our personal information to that company. And do we really have a choice? You either have to accept the terms or you’re not allowed to use that site. </p>
<p>Well, what if we could be paid for our data, what if we could determine who gets data about what sites we visit, what apps we use on our phones, what physical locations we go to, what conversations we have, basically what if we could be paid for all the information companies are gathering on us now on a daily basis.  And what if we had a...]]></description><content:encoded><![CDATA[<p>Is it time we regained control of our data and found new and better ways to protect it?</p>
<p>You and I know that the social media platforms and internet sites we visit collect data on us. In many ways, they monetize our data and use it as a product that can be purchased. </p>
<p>In this episode of Short and Sweet AI, I talk about personal data as private property and whether there is a way for us to choose who gets to use our data.</p>
<p><strong>In this episode find out:</strong></p>
<ul>
<li>The true value of data</li>
<li>Whether we should get paid for our data</li>
<li>Who Professor Song is</li>
<li>How Professor Song and her company “Oasis Labs” are working on a system that could potentially help users protect their data and even get paid for it</li>
<li>How you could potentially make your data your private property</li>
<li>Professor Song’s vision for the future and why she believes that we should get revenue by sharing our data</li>
</ul><br/>
<p></p>
<p><strong>Important Links &amp; Mentions</strong></p>
<ul>
<li><a href="https://www.oasislabs.com/" rel="noopener noreferrer" target="_blank">Oasis Labs</a></li>
<li><a href="https://drpepermd.com/podcast-2/page/4/" rel="noopener noreferrer" target="_blank">Are Machine Learning and Deep learning the Same as AI?</a></li>
</ul><br/>
<p></p>
<p><strong>Resources:</strong></p>
<ul>
<li><a href="https://www.wired.com/story/dawn-song-oasis-labs-data-privacy-wired25/" rel="noopener noreferrer" target="_blank">Oasis Labs' Dawn Song on a Safer Way to Protect Your Data</a></li>
<li><a href="https://www.nytimes.com/2019/11/19/technology/artificial-intelligence-dawn-song.html" rel="noopener noreferrer" target="_blank">Building a World Where Data Privacy Exists Online</a></li>
<li><a href="https://www.aitrends.com/data-privacy-and-security/get-paid-for-your-data-reap-the-data-dividend/" rel="noopener noreferrer" target="_blank">Get Paid for Your Data, Reap the Data Dividend</a></li>
<li><a href="https://medium.com/oasislabs/giving-users-control-of-their-genomic-data-e9ae8685d9ca" rel="noopener noreferrer" target="_blank">Giving Users Control of their Genomic Data</a></li>
<li><a href="https://www.wired.com/video/watch/oasis-labs-dawn-song-in-conversation-with-tom-simonite" rel="noopener noreferrer" target="_blank">Oasis Labs' Dawn Song in Conversation with Tom Simonite</a></li>
<li><a href="https://www.youtube.com/watch?v=eMh5YqKopjE" rel="noopener noreferrer" target="_blank">deeplearning.ai's Heroes of Deep Learning: Dawn Song</a></li>
<li><a href="https://www.npr.org/2019/09/18/762046356/u-s-military-researchers-work-to-fix-easily-fooled-ai" rel="noopener noreferrer" target="_blank">Computer Scientists Work To Fix Easily Fooled AI</a></li>
</ul><br/>
<p></p>
<p><strong>Episode Transcript:</strong></p>
<p>From Short and Sweet AI, I’m Dr. Peper, and today I want to talk with you about personal data as private property. </p>
<p>You and I know that social media platforms and internet sites we visit are collecting data on us. We know they’re selling our data to advertisers. I mean, that’s their business model. They provide a platform for us to connect with each other and we give them our personal data as payment. Data is valuable. Data is the new oil. It brings in billions of dollars of income for Google, Facebook, Instagram, Amazon, and countless other companies. When we’re online and we click on a pop-up that says “accept”, we’re essentially giving away our personal information to that company. And do we really have a choice? You either have to accept the terms or you’re not allowed to use that site. </p>
<p>Well, what if we could be paid for our data? What if we could determine who gets data about what sites we visit, what apps we use on our phones, what physical locations we go to, and what conversations we have? Basically, what if we could be paid for all the information companies are gathering on us daily? And what if we had a system that provides our data only to whom we choose, with strong privacy protection using the security of a blockchain-type technology? Enter Professor Dawn Song and her company Oasis Labs, and we are one step closer to that reality. </p>
<p>Professor Song is considered one of the world’s foremost experts on computer security. She is a MacArthur “genius” grant recipient and a professor at UC Berkeley. Much of her work is in machine learning, which I’ve talked about in a previous podcast, and in adversarial AI. Adversarial AI is the study of how computer systems are hacked into transmitting the wrong information. </p>
<p>While still a graduate student at Berkeley, her research drew attention for showing that machine learning algorithms can infer what someone is typing. She showed hackers could use software to figure out someone’s password from the timing of their keystrokes, picked up by eavesdropping on a network. Professor Song and her students were also the first to demonstrate that computer vision can be fooled. She applied a few benign-looking stickers to a stop sign. As a result, a driverless vehicle identified the sign as a 40-mile-per-hour speed limit sign instead of recognizing it as a stop sign, and continued through an intersection without stopping. </p>
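The stop-sign trick is an instance of what researchers call an adversarial example: a tiny, carefully chosen change to the input flips a model’s answer. Here is a toy sketch of the idea (my own illustration with made-up numbers, not Professor Song’s actual experiment) against a simple linear classifier:

```python
import numpy as np

# A toy linear classifier: score = w . x, positive score means "stop sign"
w = np.array([0.9, -0.4, 0.7])         # the model's learned weights
x = np.array([1.0, 0.5, 1.2])          # a clean input the model gets right
print(np.sign(w @ x))                  # 1.0, classified as "stop sign"

# Fast-gradient-style attack: nudge every feature against the sign of w,
# the direction that lowers the score the fastest
epsilon = 0.8                          # attack strength (large, for a toy)
x_adv = x - epsilon * np.sign(w)       # a slightly perturbed input
print(np.sign(w @ x_adv))              # -1.0, no longer "stop sign"
```

The perturbation looks small feature by feature, yet it is enough to push the score across the decision boundary, which is exactly why a few stickers can defeat a vision system.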
<p>She began by showing that many of these machine learning algorithms have weaknesses, and she became passionate about people having control over their personal data. Her expertise in machine learning, computer security, and blockchain gave birth to Oasis Labs. She describes Oasis as a privacy-first cloud computing platform on blockchain. She is creating technology that empowers users to protect their personal information, to decide who can use it, and to get paid for their data. </p>
<p>Through a program with Stanford Medical School, patients can use the Oasis platform to decide whom to share their medical data with and to get paid when it’s used. They agree to have scans of their retinas and other medical data shared privately through a blockchain-type application on the Oasis platform, and researchers then use this information to train computers to recognize eye diseases. Meanwhile Nebula, a genomics company, is jumping on board and has integrated with Oasis to give users control of their personal genomic data.</p>
<p>Professor Song’s vision for the future is for people to have a revenue stream from their personal information. It may not be a lot on a monthly basis but could contribute to retirement savings, as companies pay for using your data over your lifetime. As she says, “Today, companies are taking users’ data and essentially using it as a product: they monetize it. The world can be very different if this is turned around and users maintain control of the data and get revenue from it.” </p>
<p>This is a really revolutionary idea. Professor Song has created an internet platform which uses blockchain technology to give us the ability to control our data and earn an income from it. Personal data as private property, I think it’s time. </p>
<p>If you like these flash talks, please leave a review and subscribe. From Short and Sweet AI, I’m Dr. Peper</p>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">b0549e32-baac-49b5-867c-9ffe37449958</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 25 Jan 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/d28ed477-e80c-4e33-a155-62a808f0f873/data-privacy-take-i1.mp3" length="5678299" type="audio/mpeg"/><itunes:duration>05:53</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2021</itunes:season><itunes:episode>38</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>Neuralink Update</title><itunes:title>Neuralink Update</itunes:title><description><![CDATA[<p>In this exciting episode of Short and Sweet AI, I talk about the recent update that Elon Musk gave on his company Neuralink – including how and why his team implanted a coin-sized computer chip in a pig’s brain to create a brain-to-machine interface.</p>
<p><strong>In this episode find out:</strong></p>
<ul>
<li>What Neuralink is</li>
<li>How the Neuralink chip device works</li>
<li>How Neuralink works when implanted in a pig’s brain</li>
<li>What the future holds for Neuralink and how it may be able to help cure serious health conditions </li>
</ul><br/>
<p><strong>Important Links &amp; Mentions:</strong></p>
<ul>
<li><a href="https://drpepermd.com/2019/12/20/cyborgs-among-us/" rel="noopener noreferrer" target="_blank">Cyborgs Among Us</a></li>
</ul><br/>
<p><strong>Resources:</strong></p>
<ul>
<li><a href="https://www.wired.com/story/neuralink-is-impressive-tech-wrapped-in-musk-hype/" rel="noopener noreferrer" target="_blank">Neuralink Is Impressive Tech, Wrapped In Musk Hype</a></li>
<li><a href="https://www.cnbc.com/2020/12/05/elon-musks-neuralink-bold-ideas-hurdles.html" rel="noopener noreferrer" target="_blank">Elon Musk’s brain-computer interface company Neuralink has money and buzz, but hurdles too</a></li>
<li><a href="https://www.youtube.com/watch?v=EPUHsnN9R9I" rel="noopener noreferrer" target="_blank">How Neuralink Works</a></li>
<li><a href="https://www.youtube.com/watch?v=vxehbGLoar8" rel="noopener noreferrer" target="_blank">Neuralink Update (2020) - Highlights in 7 minutes</a></li>
</ul><br/>]]></description><content:encoded><![CDATA[<p>In this exciting episode of Short and Sweet AI, I talk about the recent update that Elon Musk gave on his company Neuralink – including how and why his team implanted a coin-sized computer chip in a pig’s brain to create a brain-to-machine interface.</p>
<p><strong>In this episode find out:</strong></p>
<ul>
<li>What Neuralink is</li>
<li>How the Neuralink chip device works</li>
<li>How Neuralink works when implanted in a pig’s brain</li>
<li>What the future holds for Neuralink and how it may be able to help cure serious health conditions </li>
</ul><br/>
<p><strong>Important Links &amp; Mentions:</strong></p>
<ul>
<li><a href="https://drpepermd.com/2019/12/20/cyborgs-among-us/" rel="noopener noreferrer" target="_blank">Cyborgs Among Us</a></li>
</ul><br/>
<p><strong>Resources:</strong></p>
<ul>
<li><a href="https://www.wired.com/story/neuralink-is-impressive-tech-wrapped-in-musk-hype/" rel="noopener noreferrer" target="_blank">Neuralink Is Impressive Tech, Wrapped In Musk Hype</a></li>
<li><a href="https://www.cnbc.com/2020/12/05/elon-musks-neuralink-bold-ideas-hurdles.html" rel="noopener noreferrer" target="_blank">Elon Musk’s brain-computer interface company Neuralink has money and buzz, but hurdles too</a></li>
<li><a href="https://www.youtube.com/watch?v=EPUHsnN9R9I" rel="noopener noreferrer" target="_blank">How Neuralink Works</a></li>
<li><a href="https://www.youtube.com/watch?v=vxehbGLoar8" rel="noopener noreferrer" target="_blank">Neuralink Update (2020) - Highlights in 7 minutes</a></li>
</ul><br/>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">131fb602-3c78-4706-96ec-ba166442a35d</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 18 Jan 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/51a1316f-b704-4943-92db-b90bf889fca6/ep-37-podcast-edit.mp3" length="5312789" type="audio/mpeg"/><itunes:duration>05:30</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2021</itunes:season><itunes:episode>37</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>The Godfather of AI</title><itunes:title>The Godfather of AI</itunes:title><description><![CDATA[<p>What does it take to be the godfather of AI? And, how does someone come to obtain such a legendary title?</p>
<p>In this episode of Short and Sweet AI, I talk about Geoffrey Hinton, a neuroscientist, computer scientist, and the man Google hired to make AI a reality. In many ways, we have Geoffrey Hinton to thank for developing modern AI and deep learning. It is thanks to him that deep learning has become mainstream in the field of artificial intelligence. </p>
<p>So, how did Geoffrey Hinton rise to become the godfather of AI? Listen to this episode to find out! </p>
<p>In this episode find out: </p>
<ul>
<li>How Geoffrey Hinton became the godfather of AI</li>
<li>Why Geoffrey Hinton believes machines need to think the way humans do</li>
<li>Understanding how deep neural networks replicate how the brain processes information</li>
<li>How deep learning became mainstream after 30 years in the wilderness</li>
<li>How deep learning became AI's "lunatic core"</li>
</ul><br/>
<p><strong>Important Links &amp; Mentions</strong></p>
<ul>
<li><a href="https://podcasts.apple.com/us/podcast/short-and-sweet-ai/id1485901155?i=1000455695336" target="_blank" rel="noopener">Are Machine Learning and Deep Learning the same as AI?</a></li>
<li><a href="https://podcasts.apple.com/us/podcast/short-and-sweet-ai/id1485901155?i=1000463901239" target="_blank" rel="noopener">ImageNet</a></li>
</ul><br/>
<p><strong>Resources:</strong></p>
<ul>
<li><a href="https://www.youtube.com/watch?v=zl99IZvW7rE" target="_blank" rel="noopener">Geoffrey Hinton: The Foundations of Deep Learning</a></li>
<li><a href="https://www.youtube.com/watch?v=rkiWi4Pdmzc" target="_blank" rel="noopener">Geoffrey Hinton: “Probably machines will get smarter than people in almost everything”</a></li>
<li><a href="https://www.wired.com/2014/01/geoffrey-hinton-deep-learning/" target="_blank" rel="noopener">Meet the Man Google Hired to Make AI a Reality</a></li>
<li><a href="https://www.nytimes.com/2019/03/27/technology/turing-award-ai.html" target="_blank" rel="noopener">Turing Award Won by 3 Pioneers in Artificial Intelligence</a></li>
</ul><br/>]]></description><content:encoded><![CDATA[<p>What does it take to be the godfather of AI? And, how does someone come to obtain such a legendary title?</p>
<p>In this episode of Short and Sweet AI, I talk about Geoffrey Hinton, a neuroscientist, computer scientist, and the man Google hired to make AI a reality. In many ways, we have Geoffrey Hinton to thank for developing modern AI and deep learning. It is thanks to him that deep learning has become mainstream in the field of artificial intelligence. </p>
<p>So, how did Geoffrey Hinton rise to become the godfather of AI? Listen to this episode to find out! </p>
<p>In this episode find out: </p>
<ul>
<li>How Geoffrey Hinton became the godfather of AI</li>
<li>Why Geoffrey Hinton believes machines need to think the way humans do</li>
<li>Understanding how deep neural networks replicate how the brain processes information</li>
<li>How deep learning became mainstream after 30 years in the wilderness</li>
<li>How deep learning became AI's "lunatic core"</li>
</ul><br/>
<p><strong>Important Links &amp; Mentions</strong></p>
<ul>
<li><a href="https://podcasts.apple.com/us/podcast/short-and-sweet-ai/id1485901155?i=1000455695336" target="_blank" rel="noopener">Are Machine Learning and Deep Learning the same as AI?</a></li>
<li><a href="https://podcasts.apple.com/us/podcast/short-and-sweet-ai/id1485901155?i=1000463901239" target="_blank" rel="noopener">ImageNet</a></li>
</ul><br/>
<p><strong>Resources:</strong></p>
<ul>
<li><a href="https://www.youtube.com/watch?v=zl99IZvW7rE" target="_blank" rel="noopener">Geoffrey Hinton: The Foundations of Deep Learning</a></li>
<li><a href="https://www.youtube.com/watch?v=rkiWi4Pdmzc" target="_blank" rel="noopener">Geoffrey Hinton: “Probably machines will get smarter than people in almost everything”</a></li>
<li><a href="https://www.wired.com/2014/01/geoffrey-hinton-deep-learning/" target="_blank" rel="noopener">Meet the Man Google Hired to Make AI a Reality</a></li>
<li><a href="https://www.nytimes.com/2019/03/27/technology/turing-award-ai.html" target="_blank" rel="noopener">Turing Award Won by 3 Pioneers in Artificial Intelligence</a></li>
</ul><br/>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">bce544bf-4a38-4fa5-aa54-393011d1a526</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 11 Jan 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/0e66af42-aed7-4815-ac40-331e74d46ded/ep-36-podcast-edit.mp3" length="5879502" type="audio/mpeg"/><itunes:duration>06:06</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>3</itunes:season><itunes:episode>36</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>The End of Moore’s Law and Why It’s Important</title><itunes:title>The End of Moore&apos;s Law and Why It&apos;s Important</itunes:title><description><![CDATA[<p>Moore's Law is coming to an end, and many people don't know how to feel about it. In fairness, the end of Moore's Law is not something that crept up on us out of nowhere. Industry experts predicted the termination of Moore's Law years ago. They observed its gradual decline and forecasted a grim future for Moore's Law that has since proved to be an accurate calculation.</p><p>But the question remains… why is Moore's Law ending? And why should you care?</p><p>I'm kicking off the start of the year with a Short and Sweet AI podcast episode that focuses on endings. That is, the end of Moore's Law and why it matters. 
As always, I focus on AI in simple terms so that whether you're new to AI or a seasoned pro, you can follow along fully immersed!</p><p><strong>In this episode find out:</strong></p><ul><li>What Moore’s Law is, who created it, and why it is so important</li><li>How Google's big "OMG" moment led to the end of Moore’s Law</li><li>What a Tensor Processing Unit (TPU) is</li><li>Why TPU is the “Helen of Troy” of AI</li><li>What could replace Moore’s Law in the future</li></ul><br/><p><strong>Important Links &amp; Mentions</strong></p><ul><li><a href="https://www.intel.com/content/www/us/en/silicon-innovations/moores-law-technology.html" target="_blank">Intel</a></li><li><a href="https://podcasts.apple.com/us/podcast/short-and-sweet-ai/id1485901155?i=1000466624000" target="_blank">5G: Fifth Generation Wireless. What is it?</a></li><li><a href="https://podcasts.apple.com/us/podcast/short-and-sweet-ai/id1485901155?i=1000466990763" target="_blank">What is Edge AI or Edge Computing?</a></li><li><a href="https://podcasts.apple.com/us/podcast/short-and-sweet-ai/id1485901155?i=1000472950901" target="_blank">What is Quantum Computing?</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li><a href="https://www.eye-on.ai/podcast-archive" target="_blank">Eye on AI: The Podcast</a></li><li><a href="https://www.wired.co.uk/article/wired-explains-moores-law" target="_blank"><em>What is Moore's Law? WIRED explains the theory that defined the tech industry</em></a></li><li><a href="https://computer.howstuffworks.com/moores-law5.htm" target="_blank"><em>How Moore’s Law Works</em></a></li><li><a href="https://kids.kiddle.co/images/0/00/Transistor_Count_and_Moore%27s_Law_-_2011.svg" target="_blank">Microprocessor Transistor Counts 1971-2011 &amp; Moore's Law</a></li></ul><br/>]]></description><content:encoded><![CDATA[<p>Moore's Law is coming to an end, and many people don't know how to feel about it. 
In fairness, the end of Moore's Law is not something that crept up on us out of nowhere. Industry experts predicted the termination of Moore's Law years ago. They observed its gradual decline and forecasted a grim future for Moore's Law that has since proved to be an accurate calculation.</p><p>But the question remains… why is Moore's Law ending? And why should you care?</p><p>I'm kicking off the start of the year with a Short and Sweet AI podcast episode that focuses on endings. That is, the end of Moore's Law and why it matters. As always, I focus on AI in simple terms so that whether you're new to AI or a seasoned pro, you can follow along fully immersed!</p><p><strong>In this episode find out:</strong></p><ul><li>What Moore’s Law is, who created it, and why it is so important</li><li>How Google's big "OMG" moment led to the end of Moore’s Law</li><li>What a Tensor Processing Unit (TPU) is</li><li>Why TPU is the “Helen of Troy” of AI</li><li>What could replace Moore’s Law in the future</li></ul><br/><p><strong>Important Links &amp; Mentions</strong></p><ul><li><a href="https://www.intel.com/content/www/us/en/silicon-innovations/moores-law-technology.html" target="_blank">Intel</a></li><li><a href="https://podcasts.apple.com/us/podcast/short-and-sweet-ai/id1485901155?i=1000466624000" target="_blank">5G: Fifth Generation Wireless. What is it?</a></li><li><a href="https://podcasts.apple.com/us/podcast/short-and-sweet-ai/id1485901155?i=1000466990763" target="_blank">What is Edge AI or Edge Computing?</a></li><li><a href="https://podcasts.apple.com/us/podcast/short-and-sweet-ai/id1485901155?i=1000472950901" target="_blank">What is Quantum Computing?</a></li></ul><br/><p><strong>Resources:</strong></p><ul><li><a href="https://www.eye-on.ai/podcast-archive" target="_blank">Eye on AI: The Podcast</a></li><li><a href="https://www.wired.co.uk/article/wired-explains-moores-law" target="_blank"><em>What is Moore's Law? 
WIRED explains the theory that defined the tech industry</em></a></li><li><a href="https://computer.howstuffworks.com/moores-law5.htm" target="_blank"><em>How Moore’s Law Works</em></a></li><li><a href="https://kids.kiddle.co/images/0/00/Transistor_Count_and_Moore%27s_Law_-_2011.svg" target="_blank">Microprocessor Transistor Counts 1971-2011 &amp; Moore's Law</a></li></ul><br/>]]></content:encoded><link><![CDATA[https://drpepermd.com]]></link><guid isPermaLink="false">20796838-26bf-48f7-ab5c-6c1d2d4282dc</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Mon, 04 Jan 2021 02:00:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/1bd54f67-5891-48d0-8105-805c1c027c67/ep-35-podcast-edited-v1-including-henry-moore.mp3" length="6833900" type="audio/mpeg"/><itunes:duration>07:06</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2021</itunes:season><itunes:episode>35</itunes:episode><itunes:author>Dr. Peper</itunes:author></item><item><title>What is Quantum Computing? part 2</title><itunes:title>What is Quantum Computing? part 2</itunes:title><description><![CDATA[<p>The world's most powerful supercomputer would take 10,000 years to solve a math problem a quantum computer solved in minutes. Welcome to quantum computing.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/what-is-quantum-computing-part-2/">What is Quantum Computing? part 2</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>The world's most powerful supercomputer would take 10,000 years to solve a math problem a quantum computer solved in minutes. Welcome to quantum computing.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/what-is-quantum-computing-part-2/">What is Quantum Computing? part 2</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/what-is-quantum-computing-part-2/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=4790</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 05 May 2020 12:37:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/6f82a551-cba2-43f1-9d72-9bc42e7d8c52/34-quantum-computing-ii-200504.mp3" length="8176870" type="audio/mpeg"/><itunes:duration>05:41</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>2020</itunes:season><itunes:episode>34</itunes:episode><itunes:summary>From Short and Sweet AI, I’m Dr. Peper and today I’m discussing more about quantum computing.



Regular computers use a binary system of ones and zeros, or bits. Quantum computers use quantum bits, or qubits, which exist in superposition, and that makes them very powerful. Quantum computing is a very different technology from anything we’ve seen because qubits can exist in two states at once. They can be like a coin that is spinning and is both heads and tails at once. To explain how this could be, quantum physics, the science behind quantum computing, has produced theories of parallel universes: in one parallel universe the coin could be heads, and in a separate parallel universe it could be tails. Yeah, this stuff gets pretty crazy, very fast.



In the previous podcast I talked about the super-powerful state of superposition. And I talked about entanglement, where multiple qubits are physically separated but act as if they’re linked and give correlated results. Added to that, this is all taking place in a computer that looks like a fantastic chandelier, made that way in order to create very cold conditions similar to outer space: just above absolute zero.



But are quantum computers a reality? There are many groups all over the world working on this technology: IBM, Google, Intel, the Chinese government, the US government, private start-ups such as Rigetti Computing, and more. All these groups have been working feverishly for the ultimate breakthrough. Then in 2019 Google announced its quantum computer had solved a mathematical problem in 3 minutes 32 seconds. It would have taken the most powerful existing supercomputer more than 10,000 years to solve the problem. That’s the difference in magnitude and power between a regular supercomputer and a quantum computer.



As the scientists explained, the answer to the problem wasn’t important; it really didn’t do anything. But what the Google quantum computer accomplished was the equivalent of the Wright brothers’ first plane flight. It showed that quantum computing was really possible, even though its true potential is years in the future.



What’s holding the technology back? Well, quantum-type problems. Qubits are very sensitive and must be shielded from heat, electrical interference, and other disturbances, and cooled down to just above absolute zero in order to complete their calculations. And you need at least 50 qubits to have a useful quantum computer, but groups of qubits are very fragile and can fall apart, or decohere. This leads to errors in the calculations.



Scientists are confident they will solve these problems in the next decade, and then we will really see what these computers can do. That goes back to how qubits work. They’re very powerful because they can deal with uncertainty, and that’s how the laws of atoms and subatomic particles, called quantum physics, work. In nature, things smaller than the atom are not always on or off. They don’t follow the laws of larger things in nature, such as gravity, relativity, or E equals MC squared. With a regular computer, if you want to solve a maze, it will go down every single path, one after the other, until it finds the right one. A quantum computer works by the laws of subatomic particles and goes down every path at once, because it can operate with uncertainty; it can hold each alternative path as a possibility. 



Technology this powerful can be used to simulate large complicated problems with uncertainty such as forecast financial markets, find better products such as batteries for self-driving cars, new drugs for medications, or even using quantum computing to understand quantum physics. And cryptography will be saved by quantum computing. New quantum encryption uses the uncertainty principle where everything influences th...</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>What is Quantum Computing?</title><itunes:title>What is Quantum Computing?</itunes:title><description><![CDATA[<p>Quantum computing is an extraordinary technology based on quantum physics which uses quantum bits or qubits to solve problems in a magical way. </p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/what-is-quantum-computing/">What is Quantum Computing?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Quantum computing is an extraordinary technology based on quantum physics which uses quantum bits or qubits to solve problems in a magical way. </p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/what-is-quantum-computing/">What is Quantum Computing?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/what-is-quantum-computing/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=4750</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 28 Apr 2020 20:11:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/87beae89-14fe-42b5-b9e9-11137e6ef37c/33-quantum-computing-200428.mp3" length="6376302" type="audio/mpeg"/><itunes:duration>04:26</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>33</itunes:episode><itunes:summary>
From Short and Sweet AI, I’m Dr. Peper and today I’m talking about one of the most challenging ideas I’ve ever discussed, quantum computing.



Quantum computing excites and perplexes me. It has all these strange, science-fiction parts to it, such as superposition, entanglement, parallel universes, yes, I said parallel universes, temperatures as cold as deep space, well, just above absolute zero really, and of course qubits. And quantum computers have been described as looking like steampunk chandeliers.



Quantum Bits = Qubits



Let’s start with qubits. In traditional computers, information is coded as binary units which are either ones or zeros and referred to as bits. They’re like tiny switches that can be either in the off position, represented by a zero, or in the on position, represented by a one. Computers are made up of millions of these bits in some combination of ones and zeros. This binary system is how our phones, apps, websites and the internet work. Quantum computing is completely different. It involves a philosophical leap, really. It involves the idea that a single object can be in two states at the same time, so it can be a one and a zero at the same time, or it can be on and off at the same time. I know, it sounds crazy.



Superposition



Take a coin, for example: if you flip a coin, it can be either heads or tails. But during the flip, the coin is spinning and is in both states at once, heads and tails at the same time. This is called superposition. Quantum computing stores a combination of ones and zeros, both states, on and off, at once, in the form of qubits. Quantum computers are powered by collections of qubits in superposition, and that’s what makes them so powerful.
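For listeners who like to tinker, the spinning-coin picture can be written down in a few lines of NumPy (my own toy illustration, not something from the episode): a qubit that starts out definitely zero is put into an equal superposition by a Hadamard gate, and each measurement outcome then has probability one half.

```python
import numpy as np

# Hadamard gate: turns a definite "0" into an equal superposition
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

ket0 = np.array([1.0, 0.0])   # a qubit known to be "0" (heads, say)
state = H @ ket0              # the spinning coin: both outcomes at once

# Born rule: the probability of each outcome is the amplitude squared
probs = np.abs(state) ** 2
print(probs)                  # [0.5 0.5]
```

The state vector holds both possibilities simultaneously, which is the superposition the coin analogy is gesturing at.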



Entanglements



The other thing qubits do is called entanglement. When two particles are linked together in quantum computing it’s called entanglement, even if they’re physically separate. Normally when you flip a coin, tossing one coin won’t affect the next coin toss. But in quantum computing, two spinning coins can be linked together, and if one comes up heads, the other one will also come up heads. Then if you can string together multiple qubits you can tackle the problems that even our best computers can’t solve. 
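Those linked coins can also be sketched numerically (again a toy illustration of mine, not from the episode): two qubits in a Bell state give 00 or 11 with equal probability, and never 01 or 10, so if one coin comes up heads the other always matches.

```python
import numpy as np

# Two-qubit Bell state: equal amplitudes on outcomes 00 and 11 only
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)

# Outcome probabilities for 00, 01, 10, 11
probs = np.abs(bell) ** 2
print(dict(zip(["00", "01", "10", "11"], probs)))
# 00 and 11 each occur with probability 0.5; 01 and 10 never occur
```

The two qubits have no definite individual values, yet their measurement results are perfectly correlated, which is the entanglement described above.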



But quantum computers are not really just about doing things faster or more efficiently. They can do things we can’t even dream of, things our everyday supercomputers can’t possibly do.



 Light Bulb, Not Candle 



A quantum physicist, Shohini Ghose, says a quantum computer is not just a more powerful supercomputer, just as a light bulb is not a more powerful version of a candle. You cannot build a light bulb by building better and better candles. A light bulb is a different technology, just as quantum computing is a different technology. Having a lot more candles won’t achieve the same effect as a light bulb, because they’re two different technologies. And just as the light bulb transformed society, quantum computers have the potential to impact many, many different aspects of our lives. 



Magic



Quantum computing is so strange, so futuristic, so exuberant, really, I love it. To me it’s what the science fiction guru, Arthur C. Clarke, was thinking about when he said, “any sufficiently advanced technology is indistinguishable from magic.”



There’s so much more to discuss about qubits, quantum computing, and the space race to quantum supremacy in my next episode.



Until then, from Short and Sweet AI, I’m Dr. Peper.</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>A Physician during COVID</title><itunes:title>A Physician during COVID</itunes:title><description><![CDATA[<p>I've interrupted my podcasts to care for patients during the COVID surge in my area. Many death certificates, many flags at half-mast.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/a-physician-during-covid/">A Physician during COVID</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>I've interrupted my podcasts to care for patients during the COVID surge in my area. Many death certificates, many flags at half-mast.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/a-physician-during-covid/">A Physician during COVID</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/a-physician-during-covid/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=4704</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Wed, 22 Apr 2020 14:54:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/f2360a4f-bdd6-4711-b503-732aa366e0f4/32-md-during-covid-200422mp3.mp3" length="7484730" type="audio/mpeg"/><itunes:duration>05:12</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>32</itunes:episode><itunes:summary>
From Short and Sweet AI, I’m Dr. Peper.



I’ve interrupted my podcasts in the last few weeks in order to do what my first passion is: be a physician and care for patients, as we’ve experienced a COVID surge in my area. I’ve had to be available 24/7 to provide care to my patients, discuss things with nursing staff and facility staff, and speak with families about their loved ones.



The families are very worried and scared, not being able to see their mothers or fathers, who are living in these facilities but are on lockdown. Patients are eating their meals in their rooms, not able to come out to participate in activities, in order to protect them and keep them safe from the COVID virus.



It’s been a very humbling and sad few weeks, as many of my patients have died. My team and I at these facilities have worked to make sure that, in these unwanted and really complicated situations, they have the best death possible and are able to pass away in what is essentially their home, taken care of by caregivers who know them, with hospice services available to them.



But despite all these efforts, they do end up dying without their families present. They die separated from their families. They don’t die alone; the caregivers and nursing staff are there, which brings some comfort. And there are many, many people working very bravely at very difficult jobs to ensure the safety of these frail, vulnerable residents. So all my time and attention has been on my patients these past few weeks. But before all this crescendoed, in just a short time, I was working on a podcast about an AI researcher called Geoffrey Hinton, someone in the field of artificial intelligence known as the godfather of AI. And there were themes from what I was learning about him and his life that resonate with what we’re experiencing now.



Themes, mainly, of perseverance, dedication, and believing in what we’re doing. This will become clearer when I’m able to record and release that podcast. But it does help to know that people have always had to deal with difficulties, and that we are defined not by our successes but by how we deal with difficulties and by the fortitude we’re able to find within ourselves when things aren’t going well.



And even more so, I’ve been thinking day after day of a scene in the musical Hamilton set at Valley Forge. There’s a song where Alexander Hamilton grows frustrated trying to help the army, the revolution, and George Washington while receiving no aid from the Continental Congress or the merchants. Washington tries to counsel Hamilton to be calm, but the song ends on a very somber note, one I think is very applicable and that plays over in my head on these days when I’m signing so many death certificates: the lyrics say we’re going to fly a lot of flags half-mast. And that’s what we, in this country, are doing now. It’s a battle.



It’s a fight against an invisible enemy. But from what I’ve seen of the people dedicated to doing what they have been trained to do and what they’ve dedicated their lives to do, I believe we will pull through to the other side of this. And I know we will learn from this and be more vigilant and more ready the next time, so that so many people do not die. From Short and Sweet AI, I’m Dr. Peper, sending you all my best thoughts. Be well and stay safe.</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>How to Train Your Emotion AI</title><itunes:title>How to Train Your Emotion AI</itunes:title><description><![CDATA[<p>AI can help us best if machines can understand our emotions. Emotion metrics are becoming highly accurate but FATE flaws exist. </p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/how-to-train-your-emotion-ai/">How to Train Your Emotion AI</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>AI can help us best if machines can understand our emotions. Emotion metrics are becoming highly accurate but FATE flaws exist. </p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/how-to-train-your-emotion-ai/">How to Train Your Emotion AI</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/how-to-train-your-emotion-ai/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=4422</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 24 Mar 2020 12:17:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/611d76ba-d916-40fd-bffe-dcab44f0f39c/31-how-to-train-emotion-ai-200324.mp3" length="6742434" type="audio/mpeg"/><itunes:duration>04:41</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>31</itunes:episode><itunes:summary>How do you train neural networks to understand and simulate human emotions?



From Short and Sweet AI, I’m Dr. Peper and today I’m discussing how to train your AI.



We use 10,000 possible combinations of muscle movements in the face to create one facial expression. Add to this more than 400 possible voice inflections, along with thousands of hand and body gestures. All these combinations change continuously throughout a human conversation. Our brains process these complex, sometimes intense emotions, subconsciously, in microseconds, over and over again throughout the day.  



Emotion and Datasets



The way AI can help us is to have machines that can effectively communicate with us and understand what we want.  They need to recognize our emotional state, how we’re feeling, through our voice, facial expressions and nonverbal cues.   



To teach computers how to understand emotions, AI researchers use machine learning and neural networks. Machines are very good at analyzing large amounts of data. We’re talking about a dataset of almost 8 million facial expressions. When a machine trains on that many variations, it learns to detect patterns in facial movements, even the nuances between a smirk and a smile. The machines can listen to voice tone and recognize sounds that indicate stress or anger. How do they do this?



Emotion Metrics 



Using computer vision, the algorithms identify key landmarks on the face, such as the tip of the nose, the corners of the mouth, or the corners of the eyebrows. Deep learning algorithms then analyze the pixels of the images to classify the expressions, and combinations of these facial expressions are mapped to emotions. Another program, for analyzing speech, evaluates not what is said but how it is said, calculating changes in tone, loudness, tempo, and voice quality to understand what’s happening and the emotion and gender of the speaker. These are called emotion metrics. And when tested against human emotions, the key emotion metrics have accuracies above 90%.
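As a rough sketch of that landmark-to-emotion idea (this is not Affectiva's or anyone's actual pipeline; real systems use deep networks trained on millions of images, and the landmark features, thresholds, and mappings below are invented purely for illustration):

```python
# Toy emotion-metrics pipeline: a few facial landmark positions in,
# an emotion label out. All rules here are made up for illustration.

def classify_expression(landmarks):
    """Map two hand-picked landmark positions to a facial expression."""
    mouth = landmarks["mouth_corner_y"]   # + = corners raised, - = lowered
    brow = landmarks["brow_inner_y"]      # + = brows raised, - = furrowed
    if mouth > 0.2 and brow >= 0:
        return "smile"
    if mouth > 0.2:
        return "smirk"                    # raised mouth but furrowed brow
    if mouth < -0.2 and brow < 0:
        return "frown"
    return "neutral"

# Second stage: expression combinations mapped to emotions.
EXPRESSION_TO_EMOTION = {
    "smile": "joy",
    "smirk": "contempt",
    "frown": "anger",
    "neutral": "neutral",
}

def emotion_metric(landmarks):
    return EXPRESSION_TO_EMOTION[classify_expression(landmarks)]

print(emotion_metric({"mouth_corner_y": 0.5, "brow_inner_y": 0.1}))  # joy
```

A production system would replace the hand-written rules with a trained classifier, but the two-stage shape, landmarks to expression, expression to emotion, is the part the episode describes.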



Many companies are working on emotion AI. Amazon has a network for speech-based emotion detection. Another company, Affectiva, has a neural network called SoundNet that can classify anger from audio data in 1.2 seconds, regardless of the speaker’s language. That’s as fast as a human can detect anger from a voice. Another company, Cogito, has a system that analyzes the voices of military veterans with PTSD to determine if they need help.



FATE Flaws



But there are worries about this technology. Many people in the field raise concerns that these types of systems have FATE flaws: fairness, accountability, transparency, and ethical flaws. For example, a study of one facial recognition algorithm showed that faces of Black people were rated as angrier than faces of white people, even when the Black faces were smiling.



Lisa Feldman Barrett, a professor of psychology, spent two years along with four other scientists scrutinizing the evidence for the accuracy of emotion AI. They concluded that companies using AI cannot reliably fingerprint emotions through expressions. However, she does think emotions can be measured more accurately in the future, when more sophisticated metrics are available.



As she explained: “it’s intuitive that emotions are very complex. Sometimes people cry in anger, sometimes they shout, some people laugh when angry and sometimes, they just sit silently and plan the demise of their enemy”.



From Short and Sweet AI, I’m Dr. Peper.



As always you can find further reading, videos and podcasts in the show notes.</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>What is Emotion AI?</title><itunes:title>What is Emotion AI?</itunes:title><description><![CDATA[<p>Machines with emotional intelligence, emotion AI, can interact more naturally than ever before by reading emotional cues in voices and facial expressions.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/what-is-emotion-ai/">What is Emotion AI?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Machines with emotional intelligence, emotion AI, can interact more naturally than ever before by reading emotional cues in voices and facial expressions.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/what-is-emotion-ai/">What is Emotion AI?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/what-is-emotion-ai/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=4370</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Wed, 18 Mar 2020 01:56:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/78fd20fd-fff5-4545-a5c7-1e73faad4c33/30-emotion-ai-200317.mp3" length="7419528" type="audio/mpeg"/><itunes:duration>05:09</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>30</itunes:episode><itunes:summary>Humans are incredibly skilled at identifying the emotions in a conversation. We can “hear” a smile. And we correctly identify emotions in a voice even when we don’t speak the language. In fact more than 50 categories exist within the human emotions of surprise, joy, anger, sadness and fear. And each is conveyed through body language, words or tone. When you recognize these signals and respond appropriately, you have high emotional intelligence or high EQ. 



AI has high IQ but low EQ



We know emotional intelligence and social skills correlate with a person’s potential for success in life. Yet we live in a high-IQ world, surrounded by super-advanced technology and AI systems developed to help us, and they have absolutely no EQ, no emotional intelligence. We need to build emotionally intelligent machines that truly understand human needs so we can have successful interactions with them.



Give machines emotions



The idea of making emotionally intelligent AI has been around for a long time. In 1997, an MIT Media Lab professor, Rosalind Picard, published a book about computers and emotions entitled “Affective Computing.” Affect is a psychology term that refers to feeling, emotion, or mood.



Picard is credited with starting the field of computer science known as affective computing, also called emotional artificial intelligence or emotion AI. Her book outlined how to give machines the skills of emotional intelligence so they can be genuinely intelligent and interact with us naturally. She believes computers should have the ability to recognize, understand, and even have and express emotions. And by the way, this sounds very similar to what Ray Kurzweil has predicted in some of his https://www.abundance.video/videos/ray-kurzweil-peter-diamandis (conversations about the future).



The need for emotion datasets



In 2009 Picard and Rana el Kaliouby, a computer scientist from MIT, started an AI company called Affectiva based on emotion recognition technology. Subsequently, the company created a dataset of 7.9 million faces from 87 countries with recorded expressions for just about every human emotion. Above all, Picard and Kaliouby wanted to avoid biases in Affectiva’s algorithms. They therefore used a diversity of faces to pick up the differences in expressions from all ethnic groups, ages, genders and cultural backgrounds. Incidentally, I talked about the bias in large datasets in a previous flash talk on https://drpepermd.com/episode/imagenet/ (ImageNet). 



Today Affectiva’s algorithms can detect human emotion from facial expressions and vocal cues. But even more, Kaliouby wants to train machines to recognize the subtle nuances in human emotions. Humans use a lot of nonverbal cues. Gestures, body language, voice tone all contribute to how emotions are communicated. For that reason researchers plan to develop emotion AI that is multimodal and can detect emotion the way humans do from multiple channels. Ultimately, Kaliouby wants to fuse digital technology with an ability to understand the humans using it.



The application of emotion AI



The power to detect human emotion has implications for every aspect of society. Emotion AI technology can detect mental and physical ailments based on how patients look or sound. In marketing, it determines consumers’ reactions to commercials and TV shows. In the automotive world, emotion AI can identify distractions inside the car that could affect safety, such as arguments or a driver’s lack of focus. Finally, the biggest role so far has been in customer service. Call centers are already using emotion AI to identify the mood of customers on the phone.</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>AI Audiobooks</title><itunes:title>AI Audiobooks</itunes:title><description><![CDATA[<p>AI can generate speech from text to record audiobooks with the exact emotional mix for each word and sentence. Listen for yourself.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/ai-audiobooks/">AI Audiobooks</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>AI can generate speech from text to record audiobooks with the exact emotional mix for each word and sentence. Listen for yourself.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/ai-audiobooks/">AI Audiobooks</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/ai-audiobooks/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=4208</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 10 Mar 2020 12:30:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/99e95e6e-cd8d-4d6c-ab4b-c0cc9cbcd5ff/29-ai-audiobooks-200309.mp3" length="5438402" type="audio/mpeg"/><itunes:duration>03:46</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>29</itunes:episode><itunes:summary>DeepZen has released for purchase the first AI narrated audiobook.



From Short and Sweet AI, I’m Dr. Peper, and today I’m talking about AI audiobooks.



In a previous flash talk, I discussed how we’re entering a https://drpepermd.com/episode/2-voice-first-computing/ (voice first) future. With smart assistants leading the way, we will request and consume information by speaking rather than typing or reading from a screen. We will type less on our laptops and smartphones and communicate more by voice. And as a result, people will consume more audiobooks.



Text to Speech



There are about one million books published each year in the US. Despite this, only 40,000 are recorded, due to the costs: audiobooks are time-consuming to produce and can cost up to $5,000 per book to record. Not surprisingly, then, companies have focused on perfecting AI that changes text into speech through deep-learning-based systems. And there’s a whole history of machine learning breakthroughs over the last few years that has led to progressive improvement in natural language processing algorithms. One of the biggest hurdles has been that AI-generated voices sound flat and without emotion, in an almost comical way. Remember the YouTube https://www.youtube.com/watch?v=PTUY16CkS-k (Ben Bernanke video) of the financial crisis? Well, all that’s changed.



DeepZen



DeepZen, a London-based AI company, has released examples of its latest AI text-to-speech technology, and they sound really good. The DeepZen team trained their algorithms on thousands of hours of narrator speech. As a result, the algorithm produces human-sounding, highly emotive audio recordings from the text of a book. Judge for yourself. Here’s a snippet of the audiobook The Metamorphosis by Franz Kafka, generated by DeepZen’s text-to-speech technology.









Isn&amp;#8217;t that fantastic? This is an audio recording generated by a machine from the text of a book. Because of this AI technology, it&amp;#8217;ll be easy and cost effective to make an audio recording of any book out there. Eventually in all different languages. 



Emotion AI



DeepZen, and other companies like it, are at work on translating human emotion through machine or deep learning for things besides recording audiobooks. This is the field of emotion AI, which allows machines to determine a person’s mood by the sound of their voice, and it will create more human-like interactions between machines and people. We can talk about that in the next Short and Sweet AI. I’m Dr. Peper.



https://literallypublicrelations.wordpress.com/2020/03/02/the-future-of-audiobooks-is-ai/ (https://literallypublicrelations.wordpress.com/2020/03/02/the-future-of-audiobooks-is-ai/)



https://news.developer.nvidia.com/inception-spotlight-deepzen-uses-ai-to-generate-speech-for-audiobooks/ (https://news.developer.nvidia.com/inception-spotlight-deepzen-uses-ai-to-generate-speech-for-audiobooks/)









https://www.wired.com/story/opinion-conversational-ai-can-propel-social-stereotypes/ (https://www.wired.com/story/opinion-conversational-ai-can-propel-social-stereotypes/)



https://www.wired.com/story/google-assistant-can-now-translate-on-your-phone/ (https://www.wired.com/story/google-assistant-can-now-translate-on-your-phone/)</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>AI and Coronavirus</title><itunes:title>AI and Coronavirus</itunes:title><description><![CDATA[<p>Is there an upside to the coronavirus? Nope. But the outbreak did show how AI can be used to predict and accurately track a pandemic. </p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/ai-and-coronavirus/">AI and Coronavirus</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Is there an upside to the coronavirus? Nope. But the outbreak did show how AI can be used to predict and accurately track a pandemic. </p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/ai-and-coronavirus/">AI and Coronavirus</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/ai-and-coronavirus/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=4115</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 03 Mar 2020 13:10:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/c70d41e9-d81b-4329-9a0e-f1bfe904e3ee/28-ai-coronavirus-200302mp3.mp3" length="7339280" type="audio/mpeg"/><itunes:duration>05:06</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>28</itunes:episode><itunes:summary>Is there an upside to the coronavirus? Nope. But the outbreak did show how AI can be used to predict and accurately track a pandemic.



From Short and Sweet AI, I’m Dr. Peper, and today I’m talking about what everyone is talking about: the coronavirus, or COVID-19.



There are so many ways AI impacts the coronavirus outbreak. Chinese drones are disinfecting public areas and tracking people who don’t adhere to quarantine. Robots decontaminate hospital rooms. Self-driving cars in China deliver supplies to medical workers. Facial recognition cameras search for people not wearing their mandated face masks. Infrared temperature scanners detect fevers in large groups of people. Doctors use AI software to find evidence of coronavirus in lung scans from patients who are ill.



Think about what happened with the SARS virus in 2003 and compare that to the coronavirus today. What you realize is over the last two decades AI has really advanced how we respond to pandemics.



Early Prediction with AI



Early detection means earlier disease containment, and AI can do it faster. Right from the beginning, an artificial intelligence company sounded the alarm on the coronavirus. A Canadian company, BlueDot, used AI-powered algorithms to analyze information from many different sources. They were able to identify places where there were outbreaks of disease and forecast how they would spread. The company sent out warnings to its clients to avoid Wuhan on December 31, 2019. The World Health Organization didn’t send out a public warning until 10 days later, on January 9, 2020.



100,000 reports/day



BlueDot uses natural language processing and machine learning to analyze large amounts of data through an automated infectious disease surveillance program. The algorithm sifts through foreign-language news reports, animal and plant disease publications, news releases from government and public health departments, and much, much more. The data is vast: 100,000 new reports in 65 languages a day.



BlueDot’s algorithm doesn’t use social media data because the company says it’s too messy. Finding signs of the virus in a vast soup of rumors, posts about ordinary cold and flu symptoms, and lots of speculation would require training sets that don’t yet exist for the algorithms. But BlueDot does use some unexpected sources, such as global airline ticketing data. Using this information, the BlueDot physicians and programmers correctly predicted in the first few days that the virus would jump from Wuhan to Seoul, Taipei, and then Tokyo.
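To make the report-sifting idea concrete, here is a toy version. This is not BlueDot's actual system; their surveillance uses trained multilingual NLP models, and the keyword list and threshold below are invented purely for illustration:

```python
# Toy outbreak surveillance over text reports. A real system scores
# reports with trained NLP models in dozens of languages; here we just
# count invented outbreak-related keywords and flag reports for a
# human epidemiologist to review.

OUTBREAK_TERMS = {"pneumonia", "outbreak", "quarantine", "unexplained illness"}

def flag_reports(reports, threshold=2):
    """Return reports that mention enough outbreak-related terms to review."""
    flagged = []
    for report in reports:
        text = report.lower()
        score = sum(term in text for term in OUTBREAK_TERMS)
        if score >= threshold:
            flagged.append(report)
    return flagged

reports = [
    "Cluster of unexplained illness and pneumonia reported near seafood market",
    "Heavy metal band Anthrax announces reunion tour",
]
print(flag_reports(reports))   # flags only the first report
```

Note that the output of a system like this is a short list for humans to check, which is exactly the division of labor the next section describes.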



Humans Validate Conclusions



BlueDot highlights how the best use of AI is to augment human understanding. After the data sifting is finished, epidemiologists take over to make sure that, from a scientific standpoint, the conclusions from the data make sense. As BlueDot’s founder Kamran Khan points out: “What we have done is use natural language processing and machine learning to train this engine to recognize whether this is an outbreak of anthrax in Mongolia versus a reunion of the heavy metal band Anthrax.”



But final supervision requires human input to validate the AI’s findings. Information from AI algorithms needs humans to put it in context and take the next step. The field of artificial intelligence needs people who can operate at the intersection of AI and biology. It’s not enough to be an AI engineer. What’s needed is someone who understands biology well enough to apply what the AI comes up with.



Augmented Intelligence



The use of AI in predicting the coronavirus pandemic shows us what many experts in artificial intelligence alr...</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>What is Edge AI or Edge Computing?</title><itunes:title>What is Edge AI or Edge Computing?</itunes:title><description><![CDATA[<p>Data and AI are moving out of the cloud onto the edge of the network. </p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/what-is-edge-ai-or-edge-computing/">What is Edge AI or Edge Computing?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Data and AI are moving out of the cloud onto the edge of the network. </p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/what-is-edge-ai-or-edge-computing/">What is Edge AI or Edge Computing?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/what-is-edge-ai-or-edge-computing/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=4047</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Fri, 28 Feb 2020 18:51:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/4d2248f8-97d3-477e-9847-0730cdb3ec86/27-edge-ai-200228.mp3" length="5894813" type="audio/mpeg"/><itunes:duration>04:05</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>27</itunes:episode><itunes:summary>Data and AI are moving out of the cloud onto the edge of the network.



From Short and Sweet AI, I&amp;#8217;m Dr. Peper. And today I&amp;#8217;m talking about edge AI. 



The Cloud



First there were mainframe computers, which stored lots of data. Then data was stored on hard drives on desktop computers. Then laptop computers revolutionized our experience, and we could access the internet to get unlimited data stored in the cloud. And as you know, the cloud is a term for the massive number of computer servers that collect and store data from the internet. These servers are located in data centers all over the world. Now, with mobile devices, we have access to data anytime, anywhere, via the cloud.



Cloud Problems



But connecting to the cloud comes with problems such as the lag time between the cloud and the smart device, the cost of storing data in the cloud, and more pressing issues of privacy. We realize smart assistants are sending snippets of our audio back to the cloud for training purposes. And networks using the cloud can be hacked and data stolen. 



The Edge



Meanwhile, the https://drpepermd.com/episode/what-is-iot-and-why-does-it-matter/ (IoT) is connecting millions of devices to the internet at very fast speeds via https://drpepermd.com/episode/5g-fifth-generation-wireless-what-is-it/ (5G technology). But for these devices to respond quickly and efficiently, data is now moving out of the cloud and onto the devices, the edge of the network. And the AI is moving there too. If the data can reside on the device and be processed by artificial intelligence there, it doesn’t need to be sent back to the cloud. Edge AI is computing that takes place on the device rather than in the cloud.



No Dad Jokes



As an example, think of a security camera. It collects 24 hours of video data to send to the cloud for processing. This can be quite expensive, and for 23 of those hours nothing has happened. But with edge computing, the smart security camera knows to send to the cloud only the one hour of video where something did happen.
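A minimal sketch of that filtering idea, with the camera's on-device motion detections mocked up as a list of flags (the hour-based scheme below is my illustration, not any vendor's actual firmware):

```python
# The edge-computing win for a security camera: instead of uploading
# all 24 hours of video, the device keeps only the hours where its
# on-board detector saw motion. The motion flags are mocked up here.

def hours_to_upload(motion_by_hour):
    """motion_by_hour: 24 booleans from an on-device motion detector."""
    return [hour for hour, motion in enumerate(motion_by_hour) if motion]

# A day where something happened only during hour 14:
day = [False] * 24
day[14] = True
print(hours_to_upload(day))   # only 1 of the 24 hours leaves the device
```

The decision about what is worth uploading happens on the device itself; the cloud only ever sees the interesting hour, which is the cost and privacy point the episode is making.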



Another everyday example is our coffee makers. With edge AI, these smart devices need to recognize only 200 words to make coffee. A coffee maker doesn’t need a cloud’s worth of data and AI being sent back and forth to recognize the command “brew three cups of coffee.”



As Clive Thompson explained in a https://www.wired.com/story/edge-ai-appliances-privacy-at-home/ (Wired magazine article): “I don’t need light switches to tell Dad jokes or achieve self-awareness. When it comes to gadgets that share my house, I’d prefer they be less smart.”



Putting It Together 



So how will edge computing, 5G, and IoT work together? 5G creates a new layer of edge computing: a network of IoT-connected devices all interacting with each other within milliseconds through the AI and data located on the devices, enabled by 5G’s faster speeds and response times and mmWave technology. IoT, 5G, and edge AI can create private local area networks called “fog,” in contrast to the traditional networks in the cloud. These local networks are more reliable and secure because they process data on the devices. Less data sent to the cloud means less chance of networks getting hacked.



Things are getting edgier because edge AI solves problems of efficiency, cost, and privacy.



Further reading, videos, and podcasts are linked in the show notes. And if you like this episode, reviews are always appreciated.</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>5G: Fifth Generation Wireless. What is it?</title><itunes:title>5G: Fifth Generation Wireless. What is it?</itunes:title><description><![CDATA[<p>What is 5G and how is 5G, IoT, and edge AI going to exponentially change our world?</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/5g-fifth-generation-wireless-what-is-it/">5G: Fifth Generation Wireless. What is it?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>What is 5G and how is 5G, IoT, and edge AI going to exponentially change our world?</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/5g-fifth-generation-wireless-what-is-it/">5G: Fifth Generation Wireless. What is it?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/5g-fifth-generation-wireless-what-is-it/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=3956</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 25 Feb 2020 13:15:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/7ee32091-bd3f-4136-a995-223c3dffccb8/26-5g-200225.mp3" length="6727388" type="audio/mpeg"/><itunes:duration>04:40</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>26</itunes:episode><itunes:summary>A consumer study found 58% of Americans don&amp;#8217;t understand what 5G is. From Short and Sweet AI, I&amp;#8217;m Dr. Peper, with an explanation of 5G.



What Is 5G?



5G stands for fifth-generation cellular wireless. And it’s important to know it’s not just another “G” on top of 4G technology; it won’t work the same as 4G. 2020 is the year super-fast 5G is expected to have a slow rollout. That means the first 5G networks will be faster than 4G, but 5G won’t achieve its fastest speeds until companies complete the infrastructure. To get these benefits, users will have to buy new phones, and wireless providers will need to install new networking equipment.



How Fast Is 5G?



How much faster is 5G? Downloading a typical movie to your phone would take 17 seconds with 5G, compared to six minutes on 4G. 5G wireless will have connection speeds 10 to 20 times faster than the speediest home internet service, and it will be 600 times faster than typical 4G speeds on your phone today. 5G cellular technology uses something called millimeter-wave networks, or mmWave. With mmWave, data can be streamed to phones at extremely fast rates, but only over short distances. So a huge number of access points, or small cell sites, will be needed to transfer the signals instead of a few huge cell towers.
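The quoted figures can be checked with a little arithmetic: download time is just file size divided by link speed. A worked sketch, assuming a roughly 2 GB movie, about 45 Mbps on 4G, and about 1 Gbps on early mmWave 5G (these sizes and speeds are my assumptions chosen to match the episode's figures, not numbers from the episode itself):

```python
# Worked version of the 17-seconds-vs-six-minutes download comparison.
# Assumed: ~2 GB movie, ~45 Mbps typical 4G, ~1 Gbps early mmWave 5G.

def download_seconds(size_gb, speed_mbps):
    bits = size_gb * 8 * 1000          # GB -> megabits (decimal units)
    return bits / speed_mbps

movie_gb = 2.0
print(f"4G: {download_seconds(movie_gb, 45):.0f} s")    # roughly six minutes
print(f"5G: {download_seconds(movie_gb, 1000):.0f} s")  # roughly 16-17 s
```

The ratio between the two times, around 20x, also lines up with the "10 to 20 times faster" claim for early 5G networks.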



But is it only speed that matters?



There’s another speed, called latency, that’s perhaps even more important. The time between asking Siri a question or searching the web and getting a response will be shorter. This is because the lag time, or latency, of 5G is lower, thanks to newer networking technology and more reliable signals.



Autonomous Vehicles



But does all this really make 5G revolutionary and justify the hype? Quite honestly, yes. Because it&amp;#8217;s what this technology can now accomplish that&amp;#8217;s so exciting. And this comes back to IoT, the internet of things (https://drpepermd.com/episode/what-is-iot-and-why-does-it-matter/), which I discussed last time. The shorter latencies of 5G allow things connected to the internet to communicate directly with each other. And mmWave technology allows thousands of things to be directly connected at once.



This technology will truly kickstart fully autonomous vehicles. Remember, autonomous vehicles, which I talked about in a previous episode (https://drpepermd.com/episode/14-self-driving-cars-are-we-there-yet/), have no driver in the car, unlike self-driving vehicles with a steering wheel and a driver supervising. With 5G, cars are synchronized to traffic lights and to each other. Future traffic has been described as an elaborate street-level ballet where cars flow like schools of fish in unison without colliding. There is even a new anticipated infrastructure called CV2X, which stands for &amp;#8220;cellular vehicle to everything&amp;#8221;.



Virtual Reality



Because of 5G, IoT and something called edge computing, virtual reality becomes the new reality. With these 3 technologies, the perfect VR that exists today only in controlled scientific labs can now exist anywhere. These are perfect conditions for creating realistic representations of you with your specific tics and mannerisms. VR environments can be digitized and shared between users miles apart in real time. We will replace FaceTime with HologramTime. As we chat and interact with someone in their virtual world, they see us as holograms right next to them in their real world. We are one step closer to Ready Player One. 



I&amp;#8217;ve talked about IoT and 5G but the trifecta includes another technology called edge AI.</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>What is IoT and Why Does It Matter?</title><itunes:title>What is IoT and Why Does It Matter?</itunes:title><description><![CDATA[<p>The IoT matters because combined with 5G and edge AI, your life is getting so much easier.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/what-is-iot-and-why-does-it-matter/">What is IoT and Why Does It Matter?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>The IoT matters because combined with 5G and edge AI, your life is getting so much easier.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/what-is-iot-and-why-does-it-matter/">What is IoT and Why Does It Matter?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/what-is-iot-and-why-does-it-matter/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=3893</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 18 Feb 2020 17:09:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/4576c96c-7eea-47c1-8220-1bbbba08ca97/25-iot-20218.mp3" length="5912368" type="audio/mpeg"/><itunes:duration>04:06</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>25</itunes:episode><itunes:summary>IoT, 5G and Edge Computing: these 3 different terms are everywhere and it&amp;#8217;s  time to talk about them. Today let&amp;#8217;s discuss the Internet of things or IoT and see why it matters.  



IoT



After all my research, the best definition I&amp;#8217;ve found for the internet of things is by Calum McClelland (https://www.iotforall.com/what-is-iot-simple-explanation/), and, by the way, he has a lot of c&amp;#8217;s and l&amp;#8217;s in his name. Calum&amp;#8217;s definition of the internet of things is pretty simple. He says it means taking all the things in the world and connecting them to the internet. Voilà!



But what does that mean, all the things in the world? Well, let&amp;#8217;s look at by far the most popular IoT device: the smartphone. As Calum explains, with a smartphone you can listen to any song in the world. But not every song in the world is stored on your smartphone. In fact, every song in the world is stored somewhere else. But your phone can ask for that song by being connected to the internet and then stream it to your device. Contrary to popular belief, your phone is not a supercomputer with unlimited storage. But it is connected to supercomputers and vast storage in the cloud.



Another example is your smart home assistant. When you ask it to play the daily news, it doesn&amp;#8217;t have all the news programs stored in the device. But it&amp;#8217;s connected to the cloud, which does have that information. So it&amp;#8217;s a thing connected to the internet, an IoT device.



Things Collect and Send



All the things in the world connected to the internet can be grouped into 3 categories: things that collect and send information, such as wearable health monitors; things that receive information and put it into action, such as your car keys; and things that can do both.



The things that collect and send information do so through sensors embedded in the smart device. Motion sensors tell us how many steps we&amp;#8217;ve taken each day. Listening sensors can tell how many hours we&amp;#8217;ve slept. Lights turn on when sensors detect the motion of someone in the room. But the important thing is this: the devices collecting and sending information via a connection to the internet make our lives easier.



Things Receive and Act



Things that receive and act on information, the second category of IoT, are smart devices such as thermostats, which receive a command from us and then act on it to turn on the heat. Or alarm systems we can tell to unlock the house. Even refrigerators we can ask to show what food we have. Using these internet of things we can tell machines what to do even if we&amp;#8217;re far away.



Helpful IoT Collects, Sends, Receives, Acts



And then things start to get awesome when IoT can do both: collect and send information, and receive and act on information. An example would be a wearable alert system which uses embedded sensors to detect when your body posture changes. The IoT device then sends that information to the cloud, which analyzes your motion and determines you have fallen. The alert system then acts on that information to call 911. Ta-da!



So why is IoT grouped with 5G and edge computing? In my upcoming talks I&amp;#8217;ll discuss how 5G will connect machines in diverse places such as factories, hospitals, schools and cities via IoT. And 5G will allow infrastructure to be retrofitted with artificial intelligence through edge computing as AI moves out of the cloud onto devices. Don&amp;#8217;t worry! As always, I&amp;#8217;ll make the explanations of 5G and edge AI short and sweet.



From Short and Sweet AI, I&amp;#8217;m Dr. Peper.



&lt;a href=&quot;https://www.wired.co.</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>What Are Mentats: Dune&apos;s Alternative to AI</title><itunes:title>What Are Mentats: Dune&apos;s Alternative to AI</itunes:title><description><![CDATA[<p>Do mentats represent the future where 'Dune' intersects with 'The Singularity is Near'?</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/what-are-mentats-dunes-alternative-to-ai/">What Are Mentats: Dune's Alternative to AI</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Do mentats represent the future where 'Dune' intersects with 'The Singularity is Near'?</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/what-are-mentats-dunes-alternative-to-ai/">What Are Mentats: Dune's Alternative to AI</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/what-are-mentats-dunes-alternative-to-ai/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=3703</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Fri, 14 Feb 2020 14:33:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/35d0b2e7-c3fa-4dbb-b51d-13026ba30bab/24-mentats-200212.mp3" length="6861553" type="audio/mpeg"/><itunes:duration>04:46</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>24</itunes:episode><itunes:summary>We call them computers. They call them mentats. One is a machine, the other human. Both have superhuman intelligence.



From Short and Sweet AI, I&amp;#8217;m Dr. Peper and today I&amp;#8217;m talking about mentats. 



Technology but No AI



In the science fiction novel Dune, the use of computers and artificial intelligence has been outlawed. The author, Frank Herbert, was far from mainstream science fiction when he created a future universe conspicuously lacking in artificial intelligence and robots. Don&amp;#8217;t get me wrong, there&amp;#8217;s a lot of technology in the Dune universe: lasguns, atomics, intergalactic navigation powered through prescience created by a substance called spice. But there are no droids, no self-driving transporters, no computer vision, no robot storm troopers.



In the novel&amp;#8217;s back story we learn that in the past men had used thinking machines to enslave humanity. This led to several centuries of war known as the Butlerian Jihad. In the end humans prevailed and defeated the men with the machines. Thinking machines were outlawed, and use of computers became punishable by death. Humans had to enhance their own natural intelligence with rigorous discipline and advanced training. They learned to cultivate superhuman abilities by following a secret training method. They learned to rapidly analyze and process large amounts of data in great detail. Just as an Olympic athlete masters the physical demands of competition, these humans honed and expanded their mental abilities. They became living supercomputers known as Mentats.



Mentat Training



The Mentat training encompassed many elements and different levels of ability. Their skills included logic, inference and extrapolation, insight, future planning, and detailed understanding of events. At the highest level they were skilled in wisdom and diplomacy, negotiated delicate matters, and could judge matters of life and death. They could make decisions the way machine learning does, based on data and probabilities, but they were not able to make intuitive decisions. Indeed, this inability to make decisions in the absence of data made them ineffective as leaders.



Dystopian science fiction stories of machines with superintelligence rising up against the humans who created them have become ever more popular. Is it because we&amp;#8217;re living through a fourth revolution created by rapidly advancing artificial intelligence and we&amp;#8217;re fearful of it? Some think Frank Herbert decided not to have thinking machines or artificial intelligence in Dune for a reason: he wanted to warn against AI and the dangers of a society run by intelligent computers. Is his future, where humans have superhuman intelligence like computers, really possible?



Augmented AI



The answer is, perhaps, if you remember Ray Kurzweil&amp;#8217;s prediction discussed in a previous episode about his book, The Singularity is Near (https://drpepermd.com/episode/13-the-singularity-is-near/). He believes in the future there will be something called brain computer interfaces. Brain computer interfaces, or BCIs, will connect all the information and data from the cloud and download it directly to our brains. There&amp;#8217;s a photo of what it might look like on my Instagram today. Then we will not have to ban artificial intelligence, nor rely on some secret training program to enhance our mental abilities to supercomputer levels as in the novel Dune.



Perhaps it will be a combination of humans augmented by artificial intelligence with brain computer interfaces. In my opinion, it is possible our future will be like Frank Herbert&amp;#8217;s future in Dune, where we will all become Mentats or human supercomputers.</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Legendary Science Fiction Novel Dune Has No AI</title><itunes:title>Legendary Science Fiction Novel Dune Has No AI</itunes:title><description><![CDATA[<p>The legendary science fiction novel Dune has futuristic technology but no artificial intelligence. What happened?</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/legendary-science-fiction-novel-dune-has-no-ai/">Legendary Science Fiction Novel Dune Has No AI</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>The legendary science fiction novel Dune has futuristic technology but no artificial intelligence. What happened?</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/legendary-science-fiction-novel-dune-has-no-ai/">Legendary Science Fiction Novel Dune Has No AI</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/legendary-science-fiction-novel-dune-has-no-ai/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=3696</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 11 Feb 2020 13:01:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/9ddbcef9-dc2b-4493-b57c-524c99a726d9/dune-ai-200211.mp3" length="4630905" type="audio/mpeg"/><itunes:duration>03:13</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>23</itunes:episode><itunes:summary>From Short and Sweet AI, I&amp;#8217;m Dr. Peper and today I&amp;#8217;m discussing artificial intelligence and the fantastic, futuristic novel, Dune.



Intelligent technology, robots, space creatures and other supernatural worlds all live in science fiction. Much of it is dystopian, with thinking, conscious machines rising up against humans and taking control of the human race. But Dune gives us another option.



Dune takes place 20,000 years in the future, on Arrakis, a planet that is entirely desert. And as the political, ecological and religious battles of the great houses of Duke Leto and Baron Harkonnen and the Imperium play out, artificial intelligence is conspicuously missing. What gives? How can the best selling science fiction novel of all time not include AI?



Butlerian Jihad



Within the first few pages of the novel, we learn that in the past men used machines to control and enslave the human race. This led to a great war, the Butlerian Jihad. The war lasted several centuries until men defeated other men and the machines they used. In the book, thinking machines, basically computers and artificial intelligence, became outlawed. Anyone recreating them was sentenced to death. It was a universal commandment: &amp;#8220;Thou shalt not make a machine in the likeness of a human mind.&amp;#8221;



An often overlooked but crucial point is that the machines did not somehow enslave humans by themselves. Rather, it was men controlling the machines who enslaved other men. As the book explains, &amp;#8220;Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.&amp;#8221;



Dune 2020



Frank Herbert, who wrote Dune, seems to be saying that if we let AI do our thinking, we can be controlled by the people who control the AI. As I discussed in a previous episode, we see this happening with the bias (https://drpepermd.com/episode/why-is-bias-in-ai-important/) being programmed into the computer algorithms that are used to make decisions affecting us on a daily basis. Herbert wrote Dune in 1965. This year there is mounting excitement and tremendous interest in the book. More than half a century later, it&amp;#8217;s being made into a movie that has a fervent following. Dune fans and devotees are saying &amp;#8220;Let us please get the Dune movie we all deserve.&amp;#8221; Should that include a wish for thinking machines that can&amp;#8217;t control us?



In my next episode I&amp;#8217;ll discuss Dune&amp;#8217;s alternative to artificial intelligence.



From Short and Sweet AI, I&amp;#8217;m Dr. Peper



https://vocal.media/futurism/how-frank-herbert-s-dune-warned-of-the-rise-of-artificial-intelligence



https://steemit.com/philosophy/@zyx066/dune-and-the-thinking-machines



https://en.wikipedia.org/wiki/Dune_(novel)










https://drpepermd.com/episode/why-is-bias-in-ai-important/ (Why is Bias in AI Important?)</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Why is Bias in AI Important?</title><itunes:title>Why is Bias in AI Important?</itunes:title><description><![CDATA[<p>The less melodramatic but more real threat of AI is not the rise of the machines but the bias researchers inadvertently feed into the algorithms.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/why-is-bias-in-ai-important/">Why is Bias in AI Important?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>The less melodramatic but more real threat of AI is not the rise of the machines but the bias researchers inadvertently feed into the algorithms.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/why-is-bias-in-ai-important/">Why is Bias in AI Important?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/why-is-bias-in-ai-important/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=3590</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 04 Feb 2020 19:29:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/70f8cbc9-2a89-4b10-ae5d-5ac6a2da0339/ai-bias-200204.mp3" length="6303577" type="audio/mpeg"/><itunes:duration>04:22</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>22</itunes:episode><itunes:summary>Why is bias in artificial intelligence so important? Many people don’t realize that the algorithms used in AI today have a great impact on our daily life. These software programs decide whether we&amp;#8217;re invited to a job interview, eligible for a mortgage, or under surveillance by law enforcement. Organizations make these decisions with algorithms trained using datasets. If the datasets only reflect a few groups, such as college-educated people or people from certain socioeconomic backgrounds, then the decisions will be biased.



Bias In = Bias Out



The researchers who developed the datasets did not make the AI systems this way on purpose or out of malice. It was more unintentional and unconscious. The people who create the algorithms have their own experiences and blind spots, and the data reflects this. And you get more bias in the algorithms when the AI team members who program the computers are not a diverse group.



A quick example of bias in artificial intelligence is voice assistants like Alexa that have been trained on huge datasets of recorded speech from white, upper-middle-class Americans. As a result, the technology doesn&amp;#8217;t understand commands from people with different accents and expressions.



ImageNet Roulette



In September 2019 a program called ImageNet Roulette caused a Twitter storm. People uploaded their selfies to the online program, which used ImageNet to create labels. The labels attached to the selfies ranged from benign things like “face” or “a person of no influence” to more troubling labels such as “first offender” and “rape suspect”. The project showed the dangers of feeding flawed data into an AI algorithm.



ImageNet is a 15 million image dataset that unlocked the potential of deep learning, a type of artificial intelligence used for everything from facial recognition to self-driving cars. This massive dataset is routinely used to train deep learning algorithms. But ImageNet Roulette’s creators wanted to crack ImageNet open and show how biased the images are, and how a flawed dataset can lead to many flawed algorithms. As a result, a massive effort was launched to remove the most offensive labels and make the images more diverse.



AI Needs Diversity



Fei Fei Li, the computer vision expert who created ImageNet, has become a champion for making AI less biased and better for humanity. She left Google to lead Stanford’s new Institute for Human-Centered AI. She’s testified at congressional hearings about the need to make changes to ensure there are diverse people engineering AI. And she’s founded AI4All, a summer program for high school girls, to develop more diversity in artificial intelligence.



Ten years after the launch of ImageNet, Li believes AI research needs to include people in neuroscience, psychology, philosophy and other disciplines to create AI with more human sensitivity. As she has said: “There is nothing artificial about AI. It’s inspired by people. It’s created by people and most importantly, it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility.”



As always, links to further reading, videos and podcasts are in the shownotes. 



From Short and Sweet AI, I&amp;#8217;m Dr. Peper



https://www.wired.com/story/ai-biased-how-scientists-trying-fix/



https://www.scmp.com/magazines/post-magazine/long-reads/article/2183463/bias-bias-out-stanford-scientist-out-make-ai-less&lt;/...</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>ImageNet</title><itunes:title>ImageNet</itunes:title><description><![CDATA[<p>ImageNet is a 15 million image dataset created by computer vision rock star Fei Fei Li, who revolutionized artificial intelligence and changed the world.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/imagenet/">ImageNet</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>ImageNet is a 15 million image dataset created by computer vision rock star Fei Fei Li, who revolutionized artificial intelligence and changed the world.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/imagenet/">ImageNet</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/imagenet/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=3447</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 28 Jan 2020 12:38:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/17ab7d8a-b146-45c0-9f16-c815d1f927f1/imagenet-200127.mp3" length="6644632" type="audio/mpeg"/><itunes:duration>04:37</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>21</itunes:episode><itunes:summary>Hi, I’m Dr. Peper and today I’m talking about ImageNet. The story of AI is the story of the pioneers who created it and ImageNet is about a brilliant AI researcher by the name of Fei Fei Li. As always, the links for further reading, videos, and podcasts for this episode are in the show notes.



Fei Fei Li is considered the rock star of computer vision, and articles about her say she started the deep learning revolution and changed everything. As a freshly graduated computer scientist at Princeton, she came up with a revolutionary approach to teaching computers how to recognize images. At the time, scientists were writing computer code, also known as algorithms, to identify cats and then a different algorithm to identify dogs, and so on for each object. She thought this was too narrow. She thought it should be more like how a child learns to recognize images. Children learn to recognize objects by looking at millions of images. Then she had a brilliant idea: it’s not about the algorithms but about the data you give the algorithm. So she began to focus on creating datasets.



Datasets



The idea of creating a database to train computer algorithms to recognize images was considered so ludicrous, laborious and expensive that she couldn’t get funding. In fact an NIH comment rejecting her grant application stated it was shameful Princeton would research the topic. At first she paid undergraduate students $10 an hour to label images but quickly realized that at that slow pace it would take nine years to create the dataset. The project stalled and languished until a chance conversation with someone who suggested she look at Amazon’s Mechanical Turk, a system of workers worldwide being paid very small amounts to do piecemeal work. This was a breakthrough for hiring a cheap, fast labor force to label the images. Even so it took another two and a half years to amass the initial 3.2 million images called ImageNet.



Li and her team then offered their dataset to an image recognition contest. In the competition, AI researchers would use their newly developed algorithms to see how accurately they could identify the images in ImageNet. In the beginning, the best algorithms in the contest could identify the images with only 75% accuracy. Then in 2012 something very big happened. Researchers won the contest with amazing accuracy using a type of deep learning algorithm called a convolutional neural network (https://drpepermd.com/episode/4-are-machine-learning-and-deep-learning-the-same-as-ai/). And each year after that the neural networks improved until the accuracy was 98%. In effect, computers could see better than humans.



Data = Fuel 



The 2012 event triggered a wave of excitement. There was a huge acceleration in using deep learning and convolutional neural networks, which launched a revolution (https://drpepermd.com/episode/1-three-breakthroughs-unleashing-ai/). ImageNet changed the field as people realized the thankless task of making a dataset was at the core of AI research. It wasn’t just about the algorithms or neural networks.



Today ImageNet has 15 million labelled images and large companies such as Google and Facebook have created their own datasets of voice clips, text snippets, even video datasets of people performing tasks. Datasets are the fuel for the different deep learning neural networks which have ushered in new technologies such as advanced smart phone cameras and self-driving cars. 



And it all started with Fei Fei Li and her quest to teach machines to see.



However, as with all technology, there are unforeseen consequences, the unknowable unknowns. And in my next talk we’ll see how ImageNet has become the poster child of what bias in AI l...</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Fake Radiology</title><itunes:title>Fake Radiology</itunes:title><description><![CDATA[<p>Fake radiology is when artificially intelligent malware places fake tumors in patients' CT scans. With a hospital's permission, this threat became real.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/fake-radiology/">Fake Radiology</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Fake radiology is when artificially intelligent malware places fake tumors in patients' CT scans. With a hospital's permission, this threat became real.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/fake-radiology/">Fake Radiology</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/fake-radiology/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=3296</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Fri, 17 Jan 2020 18:02:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/2c3c41e1-40cd-4997-9b04-49eadda360be/20fakeradiolog.mp3" length="5963776" type="audio/mpeg"/><itunes:duration>04:08</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>20</itunes:episode><itunes:summary>Hi, I’m Dr. Peper and today I’m discussing fake radiology.



A person&amp;#8217;s health information is considered so sensitive and private that Congress enacted the Health Insurance Portability and Accountability Act, or HIPAA, to ensure each person&amp;#8217;s medical information is safe. Hospitals are very careful about sharing medical data with outside doctors or other hospitals. But what about the privacy and security of a patient&amp;#8217;s medical data within the hospital system? What if patients&amp;#8217; medical records, tests, even CT scans were vulnerable to manipulation by malicious software viruses in the hospital&amp;#8217;s digital system? A group of researchers from the Cyber Security Research Center in Israel wanted to show how the power of AI deep learning could be used to add or remove medical conditions from CT and MRI scans and cause a patient to falsely believe they have a serious illness.



CT Shows Fake Tumor



They showed how this could be accomplished in a real setting and published a paper about the results. Here’s what happened. After a hospital gave the researchers permission, they remotely inserted a malware virus into the hospital&amp;#8217;s radiology network of CT and MRI scans. Real lung CT scans were altered by the malware to show fake lung tumors in normal scans and to remove real tumors in scans that showed disease. This was serious stuff. As a result, the radiologists reading the scans were tricked into misdiagnosing lung cancer in most of them. Radiologists read the scans as showing cancer 99% of the time when a fake tumor was added to a normal scan. And when a real tumor was removed using the malware, the radiologists said the patient was healthy 94% of the time.



What is even more disturbing is that most hospitals use AI-powered lung-scanning software tools to aid radiologists in detecting tumors and confirming their diagnoses. But in this study, the malware was able to trick the CT scanning software tools into confirming the fake tumors every time.



Hospitals Need Encryption Within



The study sent shock waves through the hospital and medical community as authorities realized they need to encrypt their network system not only from the outside but from within. Hospital officials were quick to note that controls exist to prevent a patient from receiving unwarranted treatment. And there are several steps before a patient goes to surgery or receives radiation or chemotherapy so that a fake result would likely be detected. But there is emotional harm to the patient and the distress of learning they may have cancer even though it&amp;#8217;s subsequently proven to not be true.



Fake Illness + Politics



And in truth, the cybersecurity researchers were thinking of another type of harm when they staged the attack. They wanted to draw attention to the weaknesses in medical imaging networks in order to help avoid an even more ominous scenario, one that could affect our political system and government. They worry that attackers using this malware could target a presidential candidate or other politicians, tricking them into believing they have a serious illness and causing them to withdraw from a race to seek treatment.



I hope this helps you to better understand the real threats of artificial intelligence. 



The specific article and further readings, videos, and other podcasts are linked in the show notes. 



From Short and Sweet AI, I’m Dr. Peper.



https://www.washingtonpost.com/technology/2019/04/03/hospital-viruses-fake-cancerous-nodes-ct-scans-created-by-malware...</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>What is AlphaZero?</title><itunes:title>What is AlphaZero?</itunes:title><description><![CDATA[<p>Alpha Zero is a computer program that trained itself to play chess, Go and Shogi at superhuman levels in 24 hours. </p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/what-is-alphazero/">What is AlphaZero?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Alpha Zero is a computer program that trained itself to play chess, Go and Shogi at superhuman levels in 24 hours. </p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/what-is-alphazero/">What is AlphaZero?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/what-is-alphazero/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=3227</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 14 Jan 2020 15:54:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/84e4fc7d-3347-42d0-b5dd-49a2b49d3ff1/alphazero-converted.mp3" length="3849495" type="audio/mpeg"/><itunes:duration>04:01</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>19</itunes:episode><itunes:summary>In my previous flashes I talked about how DeepMind’s AlphaGo beat the world’s best human Go player by using reinforcement learning and deep learning and giving the computer lots of games to analyze and learn from. But what if the computer system had to learn entirely by itself? What if it were given no human knowledge and had to learn from scratch?
To answer that question, DeepMind experts created AlphaZero, a single system that taught itself how to master the games of chess, shogi (Japanese chess), and Go. AlphaZero was given the rules for each game and then, through random play and with no built-in human knowledge, learned by playing against itself millions of times. Initially its games were weak and erratic, but over time it learned which strategies were successful. Patterns that led to winning a game were used more and more, and patterns that led to losing were used less and less, so that over time the system was more likely to choose advantageous moves.
AlphaZero ultimately defeated AlphaGo, the world’s best Go player, 100 games to 0. Researchers realized that when you put your preferences and predispositions into a computer system, it makes the system weaker. A system that learns by itself is a stronger player. By playing 44 million games against itself, by 2019 AlphaZero had become the best player in the world at Go and shogi. And AlphaZero became the best chess player in the world with astonishing speed. The headlines read “Entire human chess knowledge learned and surpassed by DeepMind’s AlphaZero in 4 hours.” The subhead was that it was essentially managed in little more than the time between breakfast and lunch.
However, the most fascinating part of AlphaZero’s abilities was the style the computer system used to win these games. Being self-taught, AlphaZero didn’t follow the conventional wisdom of the games but developed its own intuition and strategies, completely novel and never seen before. World champion players described its game playing as groundbreaking and highly dynamic. For example, in chess, AlphaZero de-emphasized the importance of each piece’s value, sacrificing highly valued pieces early on for a long-term advantage in the game. In a new book about AlphaZero’s chess games called Game Changer, the authors state, “It’s like discovering the secret notebooks of some great player from the past.”
AlphaZero’s ability to master three different complex games and become world champion in each demonstrates that a self-teaching system can work for any perfect-information game and, more importantly, can discover new knowledge in a range of settings. This brings DeepMind closer to its ultimate mission: to solve intelligence by creating general learning systems, in essence artificial general intelligence, and then use that to solve all the world’s other problems.
A transcript of this and other podflashes, along with additional reading, can be found at my website, drpepermd.com.
From Short and Sweet AI, I’m Dr. Peper.
https://drpepermd.com/wp-content/uploads/2020/01/19-AlphaZero.docx (#19 AlphaZero Download transcript here )
https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go (https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go)
https://www.newyorker.com/science/elements/how-the-artificial-intelligence-program-alphazero-mastered-its-games (https://www.newyorker.com/science/elements/how-the-artificial-intelligence-program-alphazero-mastered-its-games)

https://deepmind.com/blog/article/podcast-episode-2-go-to-zero (https://deepmind.com/blog/article/podcast-episode-2-go-to-zero)</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Walloped by AlphaGo</title><itunes:title>Walloped by AlphaGo</itunes:title><description><![CDATA[<p>In 1997 a chess-playing computer built by IBM called Deep Blue beat the world chess champion Garry Kasparov. You […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/walloped-by-alphago/">Walloped by AlphaGo</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>In 1997 a chess-playing computer built by IBM called Deep Blue beat the world chess champion Garry Kasparov. You […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/walloped-by-alphago/">Walloped by AlphaGo</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/walloped-by-alphago/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=3159</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Fri, 10 Jan 2020 13:14:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/34dc1322-aa26-49eb-9ef7-0fbf843a8054/alphago-converted.mp3" length="5097495" type="audio/mpeg"/><itunes:duration>05:19</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>18</itunes:episode><itunes:summary>In 1997 a chess-playing computer built by IBM called Deep Blue beat the world chess champion Garry Kasparov. You may wonder why AI researchers are so interested in building computer systems to beat humans at games. It’s because games are a way to test a computer’s abilities and drive a new kind of research that could lead to the next big breakthrough in artificial intelligence. Games are a testbed for AI.
That next challenge was Go, an ancient Chinese board game considered one of the most popular games in the world, taught in Chinese schools alongside math. Go is relatively unknown in the Western world, but it’s considered to be perhaps the most complex game ever devised by humans. In chess, a player has about 35 possible moves to choose from in a given turn; in Go, it’s around 200. Chess can be thought of as a metaphor for a battle. Go is more like a geopolitical war, where a move in one corner of the board can ripple everywhere else. The result is that Go players can’t look ahead to the ultimate outcome of each contemplated move, as they can in chess. The top players use intuition and follow a type of aesthetic, which has made it a fascinating game for thousands of years.
Experts at DeepMind got to work creating a computer system known as AlphaGo. David Silver, the lead researcher, began with reinforcement learning algorithms but realized something was missing, and combined reinforcement learning with deep learning, which uses layered representations of knowledge known as neural networks. This combination created AlphaGo’s major breakthrough.
In 2016, in a televised event with 100 million people watching, the world’s best Go player, Lee Sedol from South Korea, played the AlphaGo computer system in five games. Lee Sedol had been Go world champion 18 times, and Demis Hassabis, the co-founder of DeepMind, explained that the match pushed AlphaGo to its limits.
At one moment in the second game, the audience was transfixed and horrified when AlphaGo made a surprising, unexpected move, now legendary and referred to as Move 37. It was a move that went against all conventional wisdom in playing the game. AlphaGo had created a new pattern of play and come up with a long shot, a move that showed an insight beyond what even the best players could see. Go players later said the move showed intuition and was totally original; it was described as a move of beauty.
Lee Sedol rallied, and in game 4 he placed the 78th stone on the board in between two of AlphaGo’s stones. It’s called a wedge move, and it was brilliant: it took AlphaGo by surprise, and everything it had done up to that point was rendered useless. AlphaGo ultimately lost the game. Like a human, the machine had blind spots. That move was dubbed God’s Touch, and although Lee won that game, in the end AlphaGo prevailed, winning 4 games to 1.
For a computer system to beat the world’s best Go player was a revolutionary accomplishment, and it came a decade earlier than expected. The world was stunned. First there was sadness that a computer could beat a Go hero. But then there was another emotion, one of excitement, because human players could now see more possibilities in playing the game. Lee Sedol said playing against AlphaGo brought him renewed joy in playing and improved his skills and abilities in a way that playing against other human players had not. He went on to win over 100 games in a row against human players. In 2017 AlphaGo beat the number one world Go player, Ke Jie from China, and after that DeepMind retired AlphaGo while continuing research in other areas.
But interestingly, AlphaGo’s win against Lee Sedol in 2016 was a turning point in China. The Chinese government experienced a “sputnik moment” which convinced them they needed to prioritize and dramatically increase funding for artificial intelligence. The race between the US and China for AI superiority was on.
From short and sweet AI, I’m Dr. Peper.</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Deepmind, Gaming and the Nobel Prize</title><itunes:title>Deepmind, Gaming and the Nobel Prize</itunes:title><description><![CDATA[<p>DeepMind is the world’s largest and most prestigious company focused on artificial intelligence and really came into the public eye […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/17-deepmind-gaming-and-the-nobel-prize/">Deepmind, Gaming and the Nobel Prize</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>DeepMind is the world’s largest and most prestigious company focused on artificial intelligence and really came into the public eye […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/17-deepmind-gaming-and-the-nobel-prize/">Deepmind, Gaming and the Nobel Prize</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/17-deepmind-gaming-and-the-nobel-prize/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=3094</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 07 Jan 2020 12:58:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/56c6acdd-74c0-41af-8390-b73fc231b467/deepmind-gaming-and-the-nobel-prize-converted.mp3" length="4417431" type="audio/mpeg"/><itunes:duration>04:36</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>17</itunes:episode><itunes:summary>DeepMind is the world’s largest and most prestigious company focused on artificial intelligence and really came into the public eye in 2016 when it beat one of the world’s top players in the game of Go, a Chinese game that is more than 4000 years old. That was a breakthrough in AI and came a decade earlier than many experts had predicted.
DeepMind has been owned by Google since 2014 but was started by Demis Hassabis, whom some have described as the brains behind DeepMind. In 2010 he, along with two friends, Shane Legg and Mustafa Suleyman, cofounded DeepMind in London with the ambition to solve intelligence and then use that to solve everything else.
One thing to know about DeepMind is it uses something called reinforcement learning or RL, which is a type of dynamic programming that trains algorithms by using a system of reward and punishment. A reinforcement learning algorithm, also called an agent, learns by interacting with its environment. RL is considered by some to be the future of machine learning.
DeepMind is focused on finding the holy grail of AI, which is artificial general intelligence, or AGI. Demis Hassabis defines artificial general intelligence as a system capable of solving a whole spectrum of cognitive tasks at least as well as humans can.
So what is Google doing with DeepMind? At Google, DeepMind has continued research into artificial general intelligence while the DeepMind AI has been broadly integrated into Google products and services in areas such as speech recognition, image recognition, fraud detection, identifying spam, handwriting recognition, translation and, of course, local search.
Two notable areas where DeepMind has made an impressive impact are, crazily enough, the medical field and the world of video gaming.
In medicine, DeepMind has applied its abilities to protein folding with an accelerated understanding that has astounded eminent researchers. Protein folding is the process by which chains of protein building blocks fold over each other to form 3D structures. Many diseases such as Alzheimer’s and Parkinson’s, are thought to be caused by proteins misfolding and being able to predict the structure of proteins that cause these diseases could lead to more specific drugs to treat them.
Perhaps the most significant accomplishment to date is that DeepMind has figured out how to beat humans, not only in the landmark win at the game of Go but, more recently in 2019, in a version of capture the flag, where it performed on a level equal to humans. DeepMind also showed it was capable of teaming up with artificial agents (the reinforcement learning algorithms I mentioned before) as well as with human players to defeat its opponents. This is a significant achievement, showing that DeepMind can strategically out-think humans. Others have deep concerns it may represent a first step in the Rise of the Machines.
So if artificial general intelligence is the holy grail, how will we know we’ve achieved it? If you ask Demis Hassabis, he says that big moment will be when an AI system comes up with a completely new scientific discovery of Nobel-prize-winning caliber. Will it be DeepMind accepting the 2045 Nobel prize in Medicine, or maybe Military?
From Short and Sweet AI, I’m Dr. Peper.
https://drpepermd.com/wp-content/uploads/2020/01/17-Deepmind-Gaming-and-the-Nobel-Prize-.docx (#17 Deepmind, Gaming and the Nobel Prize download transcript here)
https://www.techrepublic.com/article/google-deepmind-the-smart-persons-guide/ (https://www.techrepublic.com/article/google-deepmind-the-smart-persons-guide/)
https://www.techworld.com/startups/google-deepmind-what-is-it-how-it-works-should-you-be-scared-3615354/ (https://www.techworld.com/startups/google-deepmind-what-is-it-how-it-works-should-you-be-scared-3615354/)</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>How AI Is Disrupting Medicine</title><itunes:title>How AI Is Disrupting Medicine</itunes:title><description><![CDATA[<p>AI and medicine…where to begin? There is so much going on in how AI is impacting healthcare. It’s a meta […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/16-ai-and-medicine/">How AI Is Disrupting Medicine</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>AI and medicine…where to begin? There is so much going on in how AI is impacting healthcare. It’s a meta […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/16-ai-and-medicine/">How AI Is Disrupting Medicine</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/16-ai-and-medicine/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=3039</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Fri, 03 Jan 2020 13:10:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/4ee0f3ee-c027-4da5-89a5-69b37883947d/medical-ai-converted.mp3" length="4711959" type="audio/mpeg"/><itunes:duration>04:54</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>16</itunes:episode><itunes:summary>AI and medicine…where to begin? There is so much going on in how AI is impacting healthcare. It’s a meta trend that’s been developing over the last 20 years. I’m going to highlight 5 areas where AI is driving big changes.
The first technology is the brain computer interface, or BCI, which you’ve heard me discuss before in my episode on brain hacking. BCIs are direct connections between computers and the human brain, used to restore function for patients who’ve lost the ability to speak or move or interact with their surroundings. This includes people suffering from ALS or strokes, and the 500,000 people who suffer spinal cord injuries every year.
The second disruptive technology is AI radiology tools. Deep learning uses neural networks (remember my discussion of them in my episode on deep learning and machine learning), which work in a way inspired by how the human brain processes information through many layers. Computers using neural networks have already proved their ability to match or exceed the accuracy of human experts when analyzing images. For example, there’s an FDA-approved AI program embedded in a mobile x-ray machine to identify and prioritize collapsed lungs on STAT x-rays. About 60% of all x-rays in a hospital are marked STAT. This AI-enhanced x-ray machine flags images with possible pneumothorax (a collapsed lung) so those x-rays get looked at by a radiologist first, speeding up diagnosis and getting care to the patients who are most ill.
Similar machine learning systems are used for more accurate detection of diseases from all types of imaging studies like CAT scans, MRIs, mammograms and even everyday detection of broken bones on x-rays.
A third impact: AI is being used in some cases to drill down to the pixel level of tissue biopsies seen under the microscope, detecting changes not routinely observed by the doctors reading them. This is called digital pathology, and it’s important because 70% of all decisions in healthcare are based on a pathology result. The more accurate the image, the faster the right diagnosis is made and treatment can begin.
The 4th innovation is harnessing the power of smartphones and their great camera quality. Smartphone photos are analyzed by AI algorithms to diagnose skin cancer and eye diseases. And there are many other phone uses. So far there is a disposable sensor that plugs into a smartphone and can diagnose HIV, a glass ball that attaches to the smartphone camera and turns it into a microscope to detect malaria, and two clinical trials underway. One trial is testing a smartphone app that can diagnose acute respiratory problems in children by analyzing their cough, and the other is investigating the use of Fitbits to collect data to diagnose Parkinson’s disease.
The fifth area, using AI to get ahead of chronic disease, could be where AI makes the biggest impact on the healthcare system. Clinically validated machine learning algorithms are being used to estimate a patient’s risk of congestive heart failure, macular degeneration, aortic aneurysm, and even hospital readmission, based on medical data from their charts. Knowing which patients are at risk can lead to earlier interventions and even changes in their current treatment. In this way, AI-generated clinical decisions drawing on lots of patient data can make doctors more aware of the nuances in a patient’s health and get ahead of any developing medical problems.
These are 5 of the many, many ongoing impacts AI is making in healthcare. As a doctor, I find even just the medical applications of AI overwhelming, and that makes me all the more committed to discussing AI in a way everyone can understand, in a way that’s short and sweet. I’m Dr. Peper.
https://drpepermd.com/wp-content/uploads/2020/01/AI-and-Medicine.docx (#16 AI and Medicine Download Transcript Here)
&lt;a href=&quot;https://healthitanalytics.com/news/top-5-use-cases-for-artificial-intelligence-i...</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>The Turing Test 2029</title><itunes:title>The Turing Test 2029</itunes:title><description><![CDATA[<p>Tomorrow is January 1, 2020 and we do not just start a new year but a new decade. In the […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/15-the-turing-test-2029/">The Turing Test 2029</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Tomorrow is January 1, 2020 and we do not just start a new year but a new decade. In the […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/15-the-turing-test-2029/">The Turing Test 2029</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/15-the-turing-test-2029/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=2957</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 31 Dec 2019 12:58:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/96f06aa4-f71f-4bb4-8a62-83057bd985b6/15-the-turing-test-2029.mp3" length="5988854" type="audio/mpeg"/><itunes:duration>04:09</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>15</itunes:episode><itunes:summary>Tomorrow is January 1, 2020 and we do not just start a new year but a new decade. In the world of artificial intelligence, some believe in this decade we will pass the Turing test. What is the Turing test and why is it important?
The Turing test is a measure of the power of artificial intelligence. When a computer passes the Turing test, it means its responses can no longer be distinguished from a human’s. The test was developed by the pioneering computer scientist Alan Turing in 1950 to determine whether a machine has human-like intelligence. The machine passes the test if an interviewer cannot tell whether a response is coming from the machine or a human. If someone can’t tell the difference, then we consider the AI to have human intelligence. In 1999 the futurist Ray Kurzweil predicted machines will pass the Turing test by 2029. That is this decade.
So how does the Turing test work? In his paper entitled “Computing Machinery and Intelligence,” Turing outlined a method for answering the question “Can machines think?” He proposed a hypothetical game with 3 players. One player is an interviewer separated from the other 2 players, one of which is a computer and the other a human. So you have a human interviewer asking questions of a computer and another human. Through this process the interviewer tries to figure out which is the computer and which is the human. Ultimately, if it can’t be determined which one is the computer, then maybe we’re dealing with a thinking machine that has passed the Turing test.
Is it realistic to anticipate human-level machine intelligence by 2029? AI researchers believe we have the computational power to build Turing’s thinking machine, but a major problem is that computers still struggle with routine small talk and are even much worse than me at telling jokes. Language is widely seen as humankind’s most distinguishing trait. And for a machine to have a conversation with a person takes more than increasing memory and processing power. It requires understanding the meaning of language and all the implications in speech.
Kurzweil, head of natural language at Google, is more concerned that when a machine passes the Turing test, we’ll have to be careful about what kind of feelings that computer has, about its emotions and consciousness, and we’ll have to care about what its thoughts are. He thinks future AI will have emotion, and that will come with passing the Turing test. And although consciousness is a philosophical question, not a scientific one, because you can’t test for it, he believes computers will be conscious and will have all the secondary features we associate with consciousness, such as having an opinion, an ego, and desires. And that raises questions about what it means to be human.
As an aside, in my reading for this flash talk, I came across a comment which raises a subject I want to discuss in 2020, something I call dystopian AI, which encompasses the ethics of artificial intelligence. The comment was: “I’m not scared of a computer passing the Turing test. I’m terrified of one that intentionally fails it.”
https://drpepermd.com/wp-content/uploads/2019/12/15-The-Turing-Test-2029.docx (#15 The Turing Test 2029 Download Transcript Here)
From Short and Sweet AI, I’m Dr. Peper.
https://www.abundance.video/videos/ray-kurzweil-peter-diamandis (https://www.abundance.video/videos/ray-kurzweil-peter-diamandis)
https://www.wired.com/story/ray-kurzweil-on-turing-tests-brain-extenders-and-ai-ethics/ (https://www.wired.com/story/ray-kurzweil-on-turing-tests-brain-extenders-and-ai-ethics/)
https://en.wikipedia.org/wiki/Turing_test (https://en.wikipedia.org/wiki/Turing_test)
https://www.economist.</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Self-Driving Cars: are we there yet?</title><itunes:title>Self-Driving Cars: are we there yet?</itunes:title><description><![CDATA[<p>Where are the promised self-driving cars? Right now they're at low speeds in defined routes.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/14-self-driving-cars-are-we-there-yet/">Self-Driving Cars: are we there yet?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Where are the promised self-driving cars? Right now they're at low speeds in defined routes.</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/14-self-driving-cars-are-we-there-yet/">Self-Driving Cars: are we there yet?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/14-self-driving-cars-are-we-there-yet/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=2936</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Sat, 28 Dec 2019 20:04:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/f7432633-60ed-49ad-a591-062ba1c3a4d0/self-driving-cars-are-we-there-yet-converted.mp3" length="4897790" type="audio/mpeg"/><itunes:duration>05:06</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>14</itunes:episode><itunes:summary>We hear so many different estimates of how long before we see them on the road, but where are the self-driving cars?  I’m using the term self-driving to mean a driverless car capable of navigating, avoiding obstacles, and parking without any human involved. And here it’s important to make the distinction between autonomous cars which have a driver at the wheel so they’re not driverless, and truly self-driving cars that don’t need a human operator or even a steering wheel.
Making a Car Autonomous
A car becomes autonomous by having AI software trained on virtual cars. The virtual car drives billions of miles, with every conceivable obstacle and situation thrown at it to see how it responds, and uses deep learning algorithms to teach itself which actions lead to crashes. This way the car slowly learns how it should drive on real roads. Only through using AI can the car then go out to drive on a real road.
The Safety Challenge
An obvious reason for the ongoing delay of autonomous cars on the road is safety. In 2018 an autonomous car being tested by Uber hit and killed a woman walking a bicycle across the street in Arizona, even though the car had a driver at the wheel. Elsewhere in the US, 3 Tesla drivers have died in crashes when both the drivers and the autopilot failed to detect and react to road hazards. While 80% of the technology exists to put self-driving cars into routine use, the remaining 20% is much more difficult, because the AI software still needs to improve to the point where the cars can reliably anticipate what other drivers, pedestrians, and even cyclists will do, and navigate unexpected situations.
The Standardization Challenge
The second challenge is more regulatory. Standard definitions are needed for what constitutes reasonable actions taken by the car such as how fast to drive or when to change lanes. All autonomous cars have been programmed with algorithms for speed and lane change but these algorithms need to be standardized for the industry so that automakers can program their cars to act only within those bounds. This also gives a legal framework for evaluating blame in accidents based on whether the car’s decision-making system followed the accepted standards.
Governments are moving to create standards and then approve not just autonomous but self-driving cars for use on a national level. There are many concerns about accidents and AI malfunctions. But even more troubling is the concern about malicious AI attacks by hackers, who could, for example, infiltrate the artificial intelligence system of a fleet of self-driving cars and cause them to ignore safety laws. Researchers at a watchdog group called Open AI, whose members include Elon Musk, Max Tegmark, and others concerned about responsible AI, have called for companies to work with each other and with lawmakers to safeguard against potential vulnerabilities to hacking. But will rivals such as Uber, Waymo, and Tesla be willing to share data for the safety of all in such an intensely competitive market?
Autonomous Vehicles in Use Today
Surprisingly, autonomous vehicles are actually in use today. A company called May Mobility operates autonomous, six-passenger golf carts in three cities, driving short, defined routes at 25 mph, and the Brooklyn Navy Yard will have 25 mph driverless shuttles in use this year (fyi, my daughter’s workshop is in the Brooklyn Navy Yard so I’ll have to go check it out). At low speeds on defined routes, autonomous vehicles are safer, so the technology can be used today.
Getting back to the question of when autonomous cars will be on the road: two automakers, Ford and Volkswagen, have teamed up with an AI company and predict they will have ride-sharing services in a few urban areas as early as 2021. Elon Musk, ever the optimist, has said “I’d be shocked if it’s not next year at the latest.”
</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>The Singularity Is Near</title><itunes:title>The Singularity Is Near</itunes:title><description><![CDATA[<p>Gradually, and then suddenly From short and sweet AI, I’m Dr. Peper, and today I’m talking about The Singularity is […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/13-the-singularity-is-near/">The Singularity Is Near</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Gradually, and then suddenly From short and sweet AI, I’m Dr. Peper, and today I’m talking about The Singularity is […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/13-the-singularity-is-near/">The Singularity Is Near</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/13-the-singularity-is-near/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=2875</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 24 Dec 2019 12:57:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/6ad89781-c321-4d0d-b95b-e5d955097006/gradually-then-suddenly-converted.mp3" length="3473175" type="audio/mpeg"/><itunes:duration>03:37</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>13</itunes:episode><itunes:summary>Gradually, and then suddenly
From short and sweet AI, I’m Dr. Peper, and today I’m talking about The Singularity is Near.
In my research for my flash show on Cyborgs, I came across Dion Dalton Bridge’s interesting article in which she referenced a quote from The Sun Also Rises where a character is asked, how did you go bankrupt? And he responds, “Two ways. Gradually, then suddenly.” It’s a highly appropriate description of the current technological tsunami taking place, previously described as The Singularity is Near.
Ray Kurzweil is probably the world’s foremost futurist and has written several books about AI and intelligent machines, but the one that has hit home perhaps the most is The Singularity is Near. He emphasizes that technology is accelerating at an exponential rate, which means that in this century we will not experience 100 years of progress but more like 20,000 years of progress.
He presents the singularity as the moment when human intelligence merges with artificial intelligence and vastly enhances our human capabilities. The word singularity is taken from the mathematical term referring to a value that does not have a finite limitation. So with the Singularity, human intelligence augmented by AI will no longer be limited but can accomplish the infinite. As he says in his book, “The Singularity will represent the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human but transcends our biological roots.”
Needless to say, Ray Kurzweil is a transhumanist, and some consider the book to be the transhumanist manifesto. But what are some of the specifics of what happens?
Well, nanotechnology plays a big role, with robots the size of red blood cells inserted in the body, augmenting or replacing our major organ systems and allowing the complete scanning of the brain to create a hybrid intelligence previously unknown. Ultimately, he predicts, human intelligence will be mainly non-biological, and more of our experiences will take place in virtual reality than in the physical world. This includes having a “back-up” of our consciousness if needed, never getting sick, and, most importantly, never having to die.
His is a radically optimistic and genuinely inspiring vision of the future course of human development, but it raises many concerns over loss of jobs, increasing inequality, and who decides how we use this technology. Yet one thing to remember is that of 147 predictions Kurzweil has made since the 1990s, fully 115 have turned out to be entirely correct and another dozen essentially correct, an 86% accuracy rate.
And BTW, how near is the singularity? In the book, Kurzweil says 2045. 
https://drpepermd.com/wp-content/uploads/2020/01/Gradually-then-suddenly....docx (#13 Gradually, then suddenly. The Singularity Download transcript here..)
https://www.kirkusreviews.com/book-reviews/ray-kurzweil/the-singularity-is-near/ (https://www.kirkusreviews.com/book-reviews/ray-kurzweil/the-singularity-is-near/)
https://www.itweb.co.za/content/dgp45vaG8p5MX9l8 (https://www.itweb.co.za/content/dgp45vaG8p5MX9l8)
https://electronics.howstuffworks.com/gadgets/high-tech-gadgets/technological-singularity.htm (https://electronics.howstuffworks.com/gadgets/high-tech-gadgets/technological-singularity.htm)

</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Cyborgs Among Us</title><itunes:title>Cyborgs Among Us</itunes:title><description><![CDATA[<p>Cyborg is short for cybernetic organism and they're already among us. </p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/12-cyborgs-among-us/">Cyborgs Among Us</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/12-cyborgs-among-us/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=2811</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Fri, 20 Dec 2019 13:07:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/7d43f4d5-d6f7-45b6-b64d-8ee58d335c81/cyborgs-among-us-converted.mp3" length="3262359" type="audio/mpeg"/><itunes:duration>03:24</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>12</itunes:episode><itunes:summary>Recently my interest in cyborgs led me to watch Alita: Battle Angel, a movie that presents the world of 2563 as being full of cyborgs with varying remnants of humanity. It takes place far in the future, but in reality cyborgs are already among us and, as some futurists tell us, we better get used to it.
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/12-cyborgs-among-us/">Cyborgs Among Us</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/12-cyborgs-among-us/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=2811</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Fri, 20 Dec 2019 13:07:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/7d43f4d5-d6f7-45b6-b64d-8ee58d335c81/cyborgs-among-us-converted.mp3" length="3262359" type="audio/mpeg"/><itunes:duration>03:24</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>12</itunes:episode><itunes:summary>Recently my interest in cyborgs lead me to watch Alita: Battle Angel, a movie that presents  the world of 2563 as being full of cyborgs with varying remnants of humanity. It’s takes place far in the future but in reality cyborgs are already among us and as some futurists tell us, we better get used to it.
Cyborg is short for cybernetic organism and is variously defined as a person whose function is aided by a mechanical or electronic device, but a better, much wider definition could be any human relying consistently on some kind of technology.
In 1998 Kevin Warwick implanted a device into his forearm and linked it to a computer to become the world’s first cyborg. Neil Harbisson was born with severe colorblindness, so he had a chip implanted in his brain which connects to an antenna that translates color into sound. The antenna curves up and over the back of his skull to dangle in front of his forehead, making him look like a movie version of a cyborg. Another person had a sensor implanted in her elbow that vibrates whenever an earthquake occurs.
Five paraplegics with implanted spinal electrodes have been able to regain some movement. And a bionic eye system, made up of a camera attached to glasses and connected to a chip in the retina, allows the blind to see and read letters again. There are millions of people who have augmented their bodies with cochlear implants to hear, cardiac pacemakers and implantable defibrillators to prevent sudden death, and contact lenses to improve daily vision.
The ultimate human-machine connection could be something called neural lace, an emerging technology I mentioned in my flash briefing on brain hacking. Neural lace is a lace-like electronic mesh that is injected into the brain to create a digital layer that sits above the brain and connects to the cloud, giving access to all its stored information. And don’t we do this to some degree already, with an external device, through our constant interaction with our smartphones?
Lastly there is a global social and philosophical movement called transhumanism which advocates for the use of technology and science to enhance human intellect and abilities.
The lines between humans and machines are blurring everyday.
Maybe Elon Musk expressed it best when he said, “We’re already a cyborg.”
From short and sweet AI, I’m Dr. Peper.
https://drpepermd.com/wp-content/uploads/2019/12/Cyborgs-Among-Us.docx (#12 Cyborgs Among Us Download transcript here )
https://www.forbes.com/sites/charlestowersclark/2018/10/01/cyborgs-are-here-and-youd-better-get-used-to-it/#582b5586746a (https://www.forbes.com/sites/charlestowersclark/2018/10/01/cyborgs-are-here-and-youd-better-get-used-to-it/#582b5586746a)
https://hackernoon.com/and-then-we-were-cyborgs-d56abc61442d (https://hackernoon.com/and-then-we-were-cyborgs-d56abc61442d)
https://www.theguardian.com/technology/2017/feb/15/elon-musk-cyborgs-robots-artificial-intelligence-is-he-right (https://www.theguardian.com/technology/2017/feb/15/elon-musk-cyborgs-robots-artificial-intelligence-is-he-right)
https://www.pbs.org/video/scitech-now-present-and-future-cyborgs/ (The future of cyborgs and human augmentation | SciTech Now)
https://www.youtube.com/watch?v=LUd4qv2Qr0A (Cyborgs: A Personal Story | Kevin Warwick – YouTube)
</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>AI Superpowers</title><itunes:title>AI Superpowers</itunes:title><description><![CDATA[<p>Who will win the race to control Artificial Intelligence? China or the U.S? I’m Dr. Peper and today I’m discussing […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/11-ai-superpowers/">AI Superpowers</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Who will win the race to control Artificial Intelligence? China or the U.S? I’m Dr. Peper and today I’m discussing […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/11-ai-superpowers/">AI Superpowers</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/11-ai-superpowers/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=2583</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Fri, 13 Dec 2019 12:18:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/6e0a6569-b141-46cb-b744-8066047e5c88/ai-superpowers-converted.mp3" length="2725911" type="audio/mpeg"/><itunes:duration>02:50</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>11</itunes:episode><itunes:summary>Who will win the race to control Artificial Intelligence? China or the U.S?
I’m Dr. Peper and today I’m discussing the book AI Superpowers by Kai-Fu Lee.
In AI Superpowers, Kai-Fu Lee, an accomplished AI scientist, former head of Google China, and now one of China’s top venture capitalists, makes some alarming predictions. In his book he says that within the next decade the U.S., which is ahead in AI with new discoveries, will be surpassed by China due to China’s better implementation. The two nations will then dominate as the AI superpowers, and he describes the other nations being “left to pick up scraps while the AI superpowers will boost productivity at home and harvest profits from around the globe.”
According to Lee, the U.S. is in the age of AI discovery, which requires brilliant scientists and innovative breakthroughs. Next will be the age of AI implementation, which requires vast quantities of data to train the computers and super-fast computing power. Here China has the lead: China has 800 million internet users (three times more than the U.S.). These users generate huge amounts of data, which their government freely collects. And China is investing heavily in quantum computing, which exponentially increases super-fast computing power.
Kai-Fu Lee also describes the additional factors leading to China’s ascendancy in AI as being the Chinese “hypercompetitive” business landscape and state support for developing and subsidizing AI industries.
And there is another comparison Lee makes that I and many other readers have found very disquieting. He describes the major difference between Chinese and Silicon Valley tech culture. His observation is that Silicon Valley is mission-driven: take an original idea, achieve an idealistic goal, and change the world. Chinese businesses are market-driven: make money and become rich.
AI Superpowers is written from the perspective of China rather than Silicon Valley and questions many of our assumptions about the US and AI technology. Sometimes it’s uncomfortable and that’s why you should read it.
https://www.washingtonpost.com/outlook/in-the-race-for-supremacy-in-artificial-intelligence-its-us-innovation-vs-chinese-ambition/2018/11/02/013e0030-b08c-11e8-aed9-001309990777_story.html (https://www.washingtonpost.com/outlook/in-the-race-for-supremacy-in-artificial-intelligence-its-us-innovation-vs-chinese-ambition/2018/11/02/013e0030-b08c-11e8-aed9-001309990777_story.html)
https://www.goodreads.com/book/show/38242135-ai-superpowers (https://www.goodreads.com/book/show/38242135-ai-superpowers)
https://www.nytimes.com/2018/09/22/opinion/sunday/ai-china-united-states.html (https://www.nytimes.com/2018/09/22/opinion/sunday/ai-china-united-states.html)
</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Facial Recognition</title><itunes:title>Facial Recognition</itunes:title><description><![CDATA[<p>Facial Recognition Technology I’ve been hearing more and more about facial recognition technology and here’s what I’ve learned. Facial recognition […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/facial-recognition/">Facial Recognition</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Facial Recognition Technology I’ve been hearing more and more about facial recognition technology and here’s what I’ve learned. Facial recognition […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/facial-recognition/">Facial Recognition</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/facial-recognition/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=2577</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Wed, 11 Dec 2019 20:56:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/b5bcc674-8bf9-4a62-9a4d-874db7c86d1d/a-few-things-about-facial-recognition-converted.mp3" length="2676759" type="audio/mpeg"/><itunes:duration>02:47</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>10</itunes:episode><itunes:summary>https://drpepermd.com/wp-content/uploads/2019/12/Facial-Recognition-Technology.docx (Facial Recognition Technology)
I’ve been hearing more and more about facial recognition technology and here’s what I’ve learned.
Facial recognition is a biometric technology using artificial intelligence to identify people. Biometrics is the measurement and analysis of a person’s unique physical and behavioral characteristics. Using a type of AI called machine learning, computers analyze large data sets in the form of photos and videos looking for patterns and learn to recognize facial features. The computer then compares new images to stored images in its databases to identify a face.
The technology is most accurate at identifying white, male faces since the algorithms have been trained on mainly white male visual data sets and is least accurate at identifying people of color and women.
Pros and cons of the technology center around its use to improve security and safety versus the threat to an individual’s privacy.
Important points and facts to be aware of:
San Francisco became the first U.S. city to ban facial recognition by government agencies.
Microsoft and Google reported they have denied facial recognition services to law enforcement while Amazon has faced pushback from its employees and shareholders for selling the technology.
There are new facial recognition smart glasses, available only to police, which allow users wearing augmented reality glasses to scan faces in a crowd, which are then compared to a million-image database. Positive matches are sent to a display embedded in the lens of the glasses.
And the New York Times reported facial recognition is growing stronger thanks to your face. Using images from social networks, photo websites and cameras placed in public areas, there is a growing database of collected photos in the public domain available for anyone to download and use for training facial recognition software.
On a more hopeful note, a start-up company is working on a tool to let you check whether your image is part of an openly shared database of faces. Somehow I think they may be using AI to develop it.
https://www.nytimes.com/2019/07/13/technology/databases-faces-facial-recognition-technology.html (https://www.nytimes.com/2019/07/13/technology/databases-faces-facial-recognition-technology.html)
https://www.theverge.com/2019/6/10/18659660/facial-recognition-smart-glasses-sunglasses-surveillance-vuzix-nntc-uae (https://www.theverge.com/2019/6/10/18659660/facial-recognition-smart-glasses-sunglasses-surveillance-vuzix-nntc-uae)
https://www.forbes.com/sites/bernardmarr/2019/08/19/facial-recognition-technology-here-are-the-important-pros-and-cons/#1918805214d1 (https://www.forbes.com/sites/bernardmarr/2019/08/19/facial-recognition-technology-here-are-the-important-pros-and-cons/#1918805214d1)
https://www.stopspying.org/our-vision (https://www.stopspying.org/our-vision)
</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Meta Trends</title><itunes:title>Meta Trends</itunes:title><description><![CDATA[<p>Meta Trends The world can be exciting but scary as the way we live changes and has changed in just […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/9-meta-trends/">Meta Trends</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Meta Trends The world can be exciting but scary as the way we live changes and has changed in just […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/9-meta-trends/">Meta Trends</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/9-meta-trends/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=2612</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Fri, 06 Dec 2019 14:11:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/27ca8dd5-3c75-4d53-af1a-a60aff614cb8/meta-trends-converted.mp3" length="2871831" type="audio/mpeg"/><itunes:duration>02:59</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>9</itunes:episode><itunes:summary>https://drpepermd.com/wp-content/uploads/2019/12/Meta-Trends.docx (Meta Trends)
The world can be exciting but scary as the way we live changes, and it has changed in just the last 10 years. Peter Diamandis, co-founder of Singularity University, tells us why it’s changing even faster than before.
One meta trend he sees is increasing abundance. Products and services are cheaper and more available to everyone. The proportion of the world’s population living in extreme poverty is the lowest it’s ever been. And while goods are increasing, the costs of day-to-day things such as food, energy, communication, and transportation are trending down.
Another meta trend is that people everywhere are becoming connected, with 4 billion more people brought online by 2025, all connected at gigabit speeds, all wanting to create, discover, and invent. And with the meta trend of capital abundance, there is more capital to invest in the newly generated ideas. Venture capital funding globally increased to a total of 207 billion dollars in 2018, up from 186 billion dollars the year before.
Increasing human intelligence is another meta trend. Having data on everything will lead to something called just-in-time education, where 5G speed combined with AI and augmented reality will allow you to learn something the moment you need it. Then there’s the idea, crazy but actually predicted as reality, that brain-computer interfaces will allow us to connect our brains to the cloud, where we will have any and all information available, at any time.
A final converging meta trend is increasing human longevity. Even as our lifespan has doubled in the last 100 years, the expectation is to double it again. Using technologies such as CRISPR, stem cell therapy, genomic sequencing, 3D printed organs and AI digitized detection of diseases before they develop, it’s likely in the future people will live to 140.
With the negative news cycle wanting our eyes focused on its advertisers, the watchword is: if it bleeds, it leads. What is less obvious, and what we need to understand from the data, is that the world is quietly getting better.
From Short and Sweet AI, I’m Dr. Peper.
https://singularityhub.com/2019/08/20/these-are-the-meta-trends-shaping-the-future-at-breakneck-speed/ (https://singularityhub.com/2019/08/20/these-are-the-meta-trends-shaping-the-future-at-breakneck-speed/)
https://futurism.com/scientists-genetically-engineer-mice-live-25-percent-longer/ (https://futurism.com/scientists-genetically-engineer-mice-live-25-percent-longer/)
https://futurism.com/new-1-terabit-internet-satellites-will-deliver-high-speed-internet-remote-areas/ (https://futurism.com/new-1-terabit-internet-satellites-will-deliver-high-speed-internet-remote-areas/)
https://www.brookings.edu/blog/future-development/2018/12/13/rethinking-global-poverty-reduction-in-2019/ (https://www.brookings.edu/blog/future-development/2018/12/13/rethinking-global-poverty-reduction-in-2019/)
 </itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Exponential Technology</title><itunes:title>Exponential Technology</itunes:title><description><![CDATA[<p>Exponential Technology At Singularity University’s 2019 Global Summit, Peter Diamandis, one of the co founders, captivated the audience with his […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/8-exponential-technology/">Exponential Technology</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Exponential Technology At Singularity University’s 2019 Global Summit, Peter Diamandis, one of the co founders, captivated the audience with his […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/8-exponential-technology/">Exponential Technology</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/8-exponential-technology/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=2586</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 03 Dec 2019 12:13:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/dec1d582-851e-4568-9021-c2f16fc9e5cc/exponential-technology.mp3" length="4228410" type="audio/mpeg"/><itunes:duration>02:56</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>8</itunes:episode><itunes:summary>https://drpepermd.com/wp-content/uploads/2019/12/Exponential-Technology.docx (Exponential Technology)
At Singularity University’s 2019 Global Summit, Peter Diamandis, one of the co-founders, captivated the audience with his discussion of exponential technologies that have already been set in motion, creating something called meta trends. A meta trend has been described as a trend that runs deeper and powers more specific trends, like a tidal wave that drives waves to the shore.
An exponential technology is one whose power or speed doubles each year and/or whose cost drops by half. Exponential technologies are those that are rapidly accelerating and shaping major industries and all aspects of our lives; that idea is actually the subtitle of Diamandis’ latest book, to be released in January 2020, called The Future is Faster than You Think.
He describes many of the meta trends as unstoppable and even somewhat predictable. There are two things driving these trends to converge here and now. One is quantum computing, which allows all the trends to get stronger because of cheaper, faster computing power, starting with the move to 5G networks. And the second is an acceleration of the acceleration: the rate at which technology is getting faster is itself getting faster. More people are connected than ever before, with access to more and more information, and the technology they’re using is cheaper than ever before.
So what are the meta trends?

* Accelerating technology which is cheaper and available to everyone
* Increasing global abundance
* Everyone, everywhere is connected at gigabit speeds
* Everything, everywhere is connected
* You can know anything, anywhere, anytime
* Autonomous personalized transport which is fast and cheap
* Increasing human intelligence w/ AI support and brain computer interfaces
* Increasing human longevity
* Capital abundance with access to capital everywhere
* Globally abundant, cheap, renewable energy

I’ll discuss some of these details in the next episode but I want to leave you with this: people have no idea how fast the world is changing.
From Short and Sweet AI, I’m Dr. Peper.
https://singularityhub.com/2014/06/10/staggering-promise-of-exponential-technologies-in-a-succinct-5-minute-video/ (https://singularityhub.com/2014/06/10/staggering-promise-of-exponential-technologies-in-a-succinct-5-minute-video/)
https://su.org/blog/exponential-technology-trends-defined-2019/ (https://su.org/blog/exponential-technology-trends-defined-2019/)
https://singularityhub.com/2019/05/06/5g-is-here-what-does-that-mean-for-exponential-tech/ (https://singularityhub.com/2019/05/06/5g-is-here-what-does-that-mean-for-exponential-tech/)
https://xponentialworks.com/what-is-exponential-technology/ (https://xponentialworks.com/what-is-exponential-technology/)
</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Brain Hacking</title><itunes:title>Brain Hacking</itunes:title><description><![CDATA[<p>Elon Musk, best known for Tesla, Space X, and Open AI, also has a company called Neuralink which works on […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/7-brain-hacking/">Brain Hacking</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Elon Musk, best known for Tesla, Space X, and Open AI, also has a company called Neuralink which works on […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/7-brain-hacking/">Brain Hacking</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/7-brain-hacking/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=2466</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 19 Nov 2019 11:12:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/c8cacd55-56fd-471e-b8c8-0192a73d520f/brain-hacking-converted.mp3" length="3094167" type="audio/mpeg"/><itunes:duration>03:13</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>7</itunes:episode><itunes:summary>Elon Musk, best known for Tesla, Space X, and Open AI, also has a company called Neuralink which works on developing brain computer interfaces.
An interface is a connection between a person and a computer like a keypad or touchscreen or when we use a mouse to control a computer screen. It’s been reported that Neuralink is a very secretive company but Elon Musk recently gave a big presentation to show off some of Neuralink’s revolutionary technology.
By way of background there’s already been a person with spinal cord paralysis who received a brain implant and was able to control a computer cursor to play a game using his thoughts. Another patient also with a brain implant was able to move a robotic arm in a limited fashion.
The Neuralink device represents a substantial advance over this older technology because it allows people to communicate more quickly with computers directly from the brain. Multiple flexible threads, each one much thinner than a human hair, are surgically inserted into the brain with very little damage but with the ability to transfer enormous amounts of data. Right now the implant connects to a computer via a USB cable, but a sensor is being developed that would sit behind a person’s ear to transmit information from the brain threads to the computer wirelessly. The idea is for paralyzed patients to use this interface to control phones or computers with their thoughts. Because these threads are so flexible, many more can be inserted in the brain, but their flexibility also makes them more difficult to implant.
So Neuralink’s second big advance was to invent a neurosurgical robot that can insert the threads automatically, up to 6 threads with 192 electrodes per minute, which allows for an ultra-high-bandwidth brain-computer interface. As Musk explained, everything you perceive, feel, hear, and think is impulses from neurons, and these flexible threads can record from and selectively stimulate many, many neurons across diverse brain areas.
In the short term, the goal is to treat serious chronic brain diseases and brain damage caused by strokes and trauma. But Musk has an even greater long-term goal: to have humans merge with artificial intelligence, which he feels is necessary to keep us from becoming irrelevant as AI rapidly advances. His response to why he founded Neuralink is “the existential risk is too high not to.”
https://www.theverge.com/2019/7/16/20697123/elon-musk-neuralink-brain-reading-thread-robot (https://www.theverge.com/2019/7/16/20697123/elon-musk-neuralink-brain-reading-thread-robot)
https://interestingengineering.com/key-takeaways-from-elon-musks-neuralink-presentation-solving-brain-diseases-and-mitigating-ai-threat (https://interestingengineering.com/key-takeaways-from-elon-musks-neuralink-presentation-solving-brain-diseases-and-mitigating-ai-threat)
https://interestingengineering.com/neuralink-how-the-human-brain-will-download-directly-from-a-computer (https://interestingengineering.com/neuralink-how-the-human-brain-will-download-directly-from-a-computer)</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>XR: What is Extended Reality?</title><itunes:title>XR: What is Extended Reality?</itunes:title><description><![CDATA[<p>#6 XR- What is Extended Reality? Have you heard of XR, extended reality? It’s something I came across just recently […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/6-xr-what-is-extended-reality/">XR: What is Extended Reality?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>#6 XR- What is Extended Reality? Have you heard of XR, extended reality? It’s something I came across just recently […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/6-xr-what-is-extended-reality/">XR: What is Extended Reality?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/6-xr-what-is-extended-reality/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=2378</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Fri, 15 Nov 2019 18:04:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/2d834bec-dee2-482d-a181-fe958377c899/understanding-xr-extended-reality-converted.mp3" length="3025431" type="audio/mpeg"/><itunes:duration>03:09</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>6</itunes:episode><itunes:summary>#6 https://drpepermd.com/wp-content/uploads/2019/11/XR-What-is-Extended-Reality-1.docx (XR- What is Extended Reality?)
Have you heard of XR, extended reality? It’s something I came across just recently and want to share with you because it’s expected to be part of everyday life in the next 10 years.
Extended reality is an umbrella term for augmented reality, AR, virtual reality, VR, and mixed reality, MR. These are technologies that merge our physical world with the virtual world, a world simulated by a computer.
Augmented reality augments your physical world by adding a new layer where virtual information and computer-simulated objects are overlaid onto the real world using AR glasses or a smartphone. Pokémon Go is AR, where digital creatures are superimposed onto the real world, as is Snapchat, which uses filters to superimpose a hat or glasses onto your head.
In contrast to augmented reality, in virtual reality you are fully immersed in a computer-simulated environment using a VR headset or head-mounted display. The headset generates realistic sounds and images and engages your senses to create a world you can interact with as if you were at a live concert, scuba diving in the ocean, or walking on the moon.
Mixed reality, or MR, is a combination of virtual and augmented reality. The real and virtual worlds are blended to create an environment that mixes physical and digital elements. Like AR, mixed reality can superimpose digital content onto the real world, and like VR, in mixed reality you can move objects and interact with everything around you.
There are challenges to this technology, such as the vulnerability of having large amounts of very detailed, personal data collected about what you do, what you look at, even your emotions. And there are difficult technical and hardware issues in getting the display, power, motion tracking, and connectivity to deliver a realistic, immersive experience. There’s also a high cost to implementing this technology.
Despite these problems, XR will include any future realities, too. It’s a fundamental shift in the way you’ll carry out your daily life. Even as you attend a morning conference in China, go over business plans in New York, and host clients in Brazil for drinks after work, all from your office in LA, you won’t refer to the technology at all because your experience will be seamless. Someone has called it the end of distance.
From Short and Sweet AI, I’m Dr. Peper.
https://www.forbes.com/sites/bernardmarr/2019/08/12/what-is-extended-reality-technology-a-simple-explanation-for-anyone/#d03b03872498 (https://www.forbes.com/sites/bernardmarr/2019/08/12/what-is-extended-reality-technology-a-simple-explanation-for-anyone/#d03b03872498)
https://www.theverge.com/2019/1/28/18197520/ai-artificial-intelligence-machine-learning-computational-science (https://www.theverge.com/2019/1/28/18197520/ai-artificial-intelligence-machine-learning-computational-science)
https://www.visualcapitalist.com/extended-reality-xr/ (https://www.visualcapitalist.com/extended-reality-xr/)
https://www.youtube.com/watch?v=MHz2Ib0JeJY (https://www.youtube.com/watch?v=MHz2Ib0JeJY)
https://www.youtube.com/watch?v=lbJ-IKPn2l8 (https://www.youtube.com/watch?v=lbJ-IKPn2l8)
 </itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Is Creativity in AI Possible?</title><itunes:title>Is Creativity in AI Possible?</itunes:title><description><![CDATA[<p>Creativity…. in this age of artificial intelligence it is thought to be the thing that distinguishes us from machines. #5 […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/4-creativity-and-ai/">Is Creativity in AI Possible?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Creativity…. in this age of artificial intelligence it is thought to be the thing that distinguishes us from machines. #5 […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/4-creativity-and-ai/">Is Creativity in AI Possible?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/4-creativity-and-ai/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=2263</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Fri, 08 Nov 2019 15:30:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/6f02bee4-3093-44f4-badf-a077c34a74b2/creativity-and-ai-converted.mp3" length="2904087" type="audio/mpeg"/><itunes:duration>03:01</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>5</itunes:episode><itunes:summary>Creativity…. in this age of artificial intelligence it is thought to be the thing that distinguishes us from machines.
#5 https://drpepermd.com/wp-content/uploads/2019/11/Creativity-and-AI.docx (Creativity and AI )
Yes, AI can be used to perform routine tasks faster with fewer mistakes, drive cars, read medical images, and even talk to us, but machines were thought to lack the inspiration and originality that make creativity a uniquely human ability. That has all changed with a type of AI called machine learning, which is forging an exciting new era for the arts.
In 2018 the auction house Christie’s sold the first piece of art created by a computer. The painting, which ultimately sold for $432,500, was created by a group of artists and AI researchers using a computer algorithm called a Generative Adversarial Network, or GAN. They trained the GAN on 15,000 portraits dating from the 14th century to the present day. A GAN works by pitting two algorithms against each other, that’s the adversarial part: one is the generator, which creates content, and the other is the discriminator, which judges the content. The generator makes a new image, and the discriminator tries to spot the difference between a human-made image from the training set and one created by the generator. With the two competing against each other, something new is created, a new sort of art. One signed at the bottom right with a mathematical equation instead of an artist’s signature.
There’s also been an AI-generated movie. The original goal was to see if a computer could win the Sci-Fi-London film festival’s 48-hour film challenge. The film created from the screenplay made it into the top 10. The following year the same director and creative technologist, working with an algorithm that named itself Benjamin, won third place. Last year the bot was placed in charge of all aspects of the moviemaking, from writing the screenplay to selecting the score and stringing sentences together using voice recordings from the actors. The result was a film called Zone Out, which was generally agreed to be a crazy mess. But it demonstrated the potential to generate a complete film using artificial intelligence.
Whether it’s a painting or a movie, AI can generate new art and in the process is redefining what it means to be human.
From Short and Sweet AI, this is Dr. Peper.
https://www.christies.com/features/A-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx (https://www.christies.com/features/A-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx)
http://www.obvious-art.com (www.obvious-art.com)
https://www.thecreativepenn.com/2019/03/11/ai-and-creativity-with-marcus-du-sautoy/ (https://www.thecreativepenn.com/2019/03/11/ai-and-creativity-with-marcus-du-sautoy/)
https://www.youtube.com/watch?reload=9&amp;v=LY7x2Ihqjmc (https://www.youtube.com/watch?reload=9&amp;v=LY7x2Ihqjmc)
https://www.wired.com/story/ai-filmmaker-zone-out/ (https://www.wired.com/story/ai-filmmaker-zone-out/)</itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Are Machine Learning and Deep Learning the same as AI?</title><itunes:title>Are Machine Learning and Deep Learning the same as AI?</itunes:title><description><![CDATA[<p>Algorithms, neural networks, data … it involves machines and it’s deep. #4 Are Machine learning and Deep Learning the same […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/4-are-machine-learning-and-deep-learning-the-same-as-ai/">Are Machine Learning and Deep Learning the same as AI?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Algorithms, neural networks, data … it involves machines and it’s deep. #4 Are Machine learning and Deep Learning the same […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/4-are-machine-learning-and-deep-learning-the-same-as-ai/">Are Machine Learning and Deep Learning the same as AI?</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/4-are-machine-learning-and-deep-learning-the-same-as-ai/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=1796</guid><itunes:image href="https://artwork.captivate.fm/0f70a8fe-87fd-4897-904a-3c6df4f5e40b/8bczxqo3txpitc4acrr8bmih.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Wed, 30 Oct 2019 14:13:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/d6519d40-032b-4aac-a7e9-fe540f767dc8/are-machine-learnining-and-deep-learning-the-same-as-ai-converted.mp3" length="2730903" type="audio/mpeg"/><itunes:duration>02:51</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>4</itunes:episode><itunes:summary>Algorithms, neural networks, data … it involves machines and it’s deep.
https://drpepermd.com/wp-content/uploads/2019/10/Are-Machine-Learning-and-Deep-Learning-the-same-thing-as-AI-1.docx (#4 Are Machine learning and Deep Learning the same as AI? transcript)
Just when I thought I was beginning to understand AI, a friend asked me a question I struggled to answer. He asked me if machine learning and deep learning are the same thing as AI. Here’s my explanation that I couldn’t give to him then but that I can share with you now.
Think of AI, machine learning, and deep learning as circles within each other. AI is the biggest circle and then there’s a little smaller circle that is machine learning and within that is another circle that is deep learning.
So AI uses something called algorithms, which are essentially sets of instructions in computer code that tell a computer what steps to follow to solve a problem or reach a goal. In machine learning, computers take the data you’ve given them, learn from it, and make decisions based on that learning. They do this by using something called an artificial neural network, which is inspired by how the human brain works. The algorithms behave as though they are interconnected brain cells, creating artificial neural networks which process information the way the brain does. We know from cognitive scientists that the brain is arranged in different layers to process different pieces of information. The information comes into the brain, and each level or layer of neurons processes the information, provides insight, and passes it on to the next, more senior level. That’s how the human brain learns, and that mechanism is how these artificial neural networks work. They take the information, learn from it, and pass it on to the next, more senior level, which processes it further and adds to the learning.
So the machines learn from the data you give them, and that’s called machine learning. The artificial neural networks are connected one layer to the next, and when there are many of these layers stacked several layers deep, that is what’s called deep learning. For example, Google uses a 30-layer artificial neural network to power Google Photos, and Facebook has what it calls its DeepFace algorithm, which uses deep artificial neural networks to recognize faces with 97% accuracy.
I love it when people ask me questions I can’t explain, because when I figure it out I can share the explanations with you. From Short and Sweet AI, I’m Dr. Peper.
https://www.forbes.com/sites/bernardmarr/2018/09/24/what-are-artificial-neural-networks-a-simple-explanation-for-absolutely-anyone/#53f6f8c21245 (https://www.forbes.com/sites/bernardmarr/2018/09/24/what-are-artificial-neural-networks-a-simple-explanation-for-absolutely-anyone/#53f6f8c21245)
https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/ (https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/)
https://hackernoon.com/deep-learning-vs-machine-learning-a-simple-explanation-47405b3eef08 (https://hackernoon.com/deep-learning-vs-machine-learning-a-simple-explanation-47405b3eef08)
https://towardsdatascience.com/clearing-the-confusion-ai-vs-machine-learning-vs-deep-learning-differences-fce69b21d5eb (https://towardsdatascience.com/clearing-the-confusion-ai-vs-machine-learning-vs-deep-learning-differences-fce69b21d5eb)
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/the-skinny-on-ai/">The Skinny on AI</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Mainstream AI and all existing AI is called narrow artificial intelligence. Take 3 minutes and get the skinny on it. […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/the-skinny-on-ai/">The Skinny on AI</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/the-skinny-on-ai/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=1666</guid><itunes:image href="https://artwork.captivate.fm/d9c7a10c-be38-46a3-b577-46e2b30054c6/screen-shot-2019-10-16-at-11.png"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Sun, 20 Oct 2019 11:24:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/374000c3-c1eb-4489-a2a1-fce24d090682/the-skinny-on-ai-converted.mp3" length="2936441" type="audio/mpeg"/><itunes:duration>03:04</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>3</itunes:episode><itunes:summary>Mainstream AI and all existing AI is called narrow artificial intelligence. Take 3 minutes and get the skinny on it.
https://drpepermd.com/wp-content/uploads/2019/10/The-Skinny-on-AI.docx (#3 The Skinny on AI transcript)
Artificial intelligence is understood to mean machines that mimic human behavior and intelligence. But actually there are three different types of AI: narrow AI, general AI, and super AI.
What we recognize as AI in the everyday world, such as Google ranking pages, Amazon knowing what we like, and voice assistants such as Alexa, is considered narrow artificial intelligence, or weak AI. It’s considered narrow because its ability to mimic human behavior works within a very limited, narrow context, and it can’t take on tasks beyond its field.
Other examples of narrow AI are Google translation, where a book written in Mandarin Chinese can be translated into English in 30 seconds, and self-driving cars, which are a combination of several different narrow AIs. So you can see narrow intelligence is not low intelligence, and because it can do many jobs faster with fewer mistakes, narrow AI is threatening to displace many human jobs. But even as narrow AI displaces humans from routine jobs, it’s still not considered human-level artificial intelligence.
Narrow artificial intelligence is where we are, general artificial intelligence is where we are going.
General artificial intelligence, or strong AI, refers to machines that can perform any generalized task asked of them, much like a human. This is the sort of AI seen in movies like “Her” and other science fiction films, where machines and operating systems are conscious and driven by emotion and self-awareness.
General AI would be expected to reason, solve problems, understand uncertainty, and integrate prior knowledge into decision-making, all at faster processing speeds than humans, as well as be innovative, imaginative, and creative, and experience consciousness. Most experts believe general AI is possible, but since one of the world’s fastest supercomputers took 40 minutes to simulate a single second of brain neural activity, and since we have 100 billion neurons in the human brain, I wouldn’t hold my breath.
Artificial super intelligence or super AI is when AI surpasses human intelligence in all aspects from creativity, to problem solving, to general wisdom. Super AI is something futurists speculate about and which we will discuss in a future Short and Sweet AI segment. Until next time, I’m Dr. Peper.
https://www.computerworld.com/article/2906336/what-is-artificial-intelligence.html (https://www.computerworld.com/article/2906336/what-is-artificial-intelligence.html)
https://interestingengineering.com/the-three-types-of-artificial-intelligence-understanding-ai (https://interestingengineering.com/the-three-types-of-artificial-intelligence-understanding-ai)
 </itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Voice First Computing</title><itunes:title>Voice First Computing</itunes:title><description><![CDATA[<p>The hottest topic in AI this year is something called voice computing. In fact it’s been described as the latest […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/2-voice-first-computing/">Voice First Computing</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>The hottest topic in AI this year is something called voice computing. In fact it’s been described as the latest […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/2-voice-first-computing/">Voice First Computing</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/2-voice-first-computing/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=1629</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Thu, 10 Oct 2019 14:50:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/6c797f47-27ad-49d9-9b4d-4ffe478c4fc5/voice-computing-converted.mp3" length="2773437" type="audio/mpeg"/><itunes:duration>02:53</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:episode>2</itunes:episode><itunes:summary>The hottest topic in AI this year is something called voice computing. In fact it’s been described as the latest disruption in technology.
https://drpepermd.com/wp-content/uploads/2019/10/Voice-Computing.docx (#2 Voice Computing transcript)
Voice computing at its simplest is when you use your voice to control your devices, like what you’re doing right now getting this flash briefing, rather than touching your keyboard or screen. Today we use tools we hold, touch, or swipe to get information or perform tasks. With voice computing these tools disappear, replaced by voice assistants on smart speakers that respond to voice commands. The entire notion of what a computer is and does changes with voice computing, humanized with names such as Alexa, Siri, Cortana, or Bixby.
Computer speech technology crossed a threshold in the last five years. Human beings can understand and comprehend an average of 95% of what another human being says. Compare that to artificially intelligent computers, which can now understand and comprehend 99% of what a human says to them.
Let’s look at the numbers. There are 2 billion desktop and laptop computers in the world and 5 billion mobile phones. By the end of this year there will be more than 200 million smart speakers worldwide, a number expected to grow to 8 billion by 2023. Beyond that, some 100 billion devices such as home gadgets and appliances are expected to connect to these smart speakers, a market exponentially larger than even mobile.
The days of going to a browser to type in a query or starting at a homepage on a screen are disappearing. Instead we will have voice discovery of information. Content will be filtered through a voice assistant able to understand the semantics and the underlying intent and context of what we want, because it has data on everything we’ve done: the purchases we’ve made, the podcasts and music we’ve listened to, the vacations in our calendar, the email exchanges with a friend. All of this is discoverable by voice assistants and will be brought to bear in understanding the nuances of our requests.
Screens and smart phones won’t be eliminated just as the jet airplane didn’t kill off the car but people will gravitate to the more natural interface of voice. Computers will follow us around rather than us needing to go to them. This is called a voice first future.
From Short and Sweet AI, I’m Dr. Peper.
https://www.theverge.com/2019/5/20/18537019/artificial-intelligence-alexa-siri-cortana-google-voice-computing-james-vlahos-talk-to-me (https://www.theverge.com/2019/5/20/18537019/artificial-intelligence-alexa-siri-cortana-google-voice-computing-james-vlahos-talk-to-me)
James Vlahos: “Talk to Me” | Talks at Google – YouTube
https://techcrunch.com/tag/voice-computing/ (https://techcrunch.com/tag/voice-computing/)
 </itunes:summary><itunes:author>Dr. Peper</itunes:author></item><item><title>Three Breakthroughs Unleashing AI</title><itunes:title>Three Breakthroughs Unleashing AI</itunes:title><description><![CDATA[<p>Present day artificial intelligence started back in the 1950s, but why is it everywhere now? #1 Three Breakthroughs Unleashing AI […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/1-three-breakthroughs-unleashing-ai/">Three Breakthroughs Unleashing AI</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></description><content:encoded><![CDATA[<p>Present day artificial intelligence started back in the 1950s, but why is it everywhere now? #1 Three Breakthroughs Unleashing AI […]</p>
<p>The post <a rel="nofollow" href="https://drpepermd.com/episode/1-three-breakthroughs-unleashing-ai/">Three Breakthroughs Unleashing AI</a> appeared first on <a rel="nofollow" href="https://drpepermd.com">Dr Peper MD</a>.</p>]]></content:encoded><link><![CDATA[https://drpepermd.com/episode/1-three-breakthroughs-unleashing-ai/]]></link><guid isPermaLink="false">https://drpepermd.com/?post_type=episode&amp;p=1021</guid><itunes:image href="https://artwork.captivate.fm/bf88eebe-e86d-4d48-b0b6-6f5e342846bf/podcastimage.jpg"/><dc:creator><![CDATA[Dr. Peper]]></dc:creator><pubDate>Tue, 01 Oct 2019 23:26:00 -0500</pubDate><enclosure url="https://podcasts.captivate.fm/media/cfba27a4-bc5f-4f65-ac7d-d67e78157abf/3-breakthroughs-unleashing-ai-converted.mp3" length="2738328" type="audio/mpeg"/><itunes:duration>02:51</itunes:duration><itunes:explicit>no</itunes:explicit><itunes:episodeType>full</itunes:episodeType><itunes:season>1</itunes:season><itunes:episode>1</itunes:episode><itunes:summary>Present day artificial intelligence started back in the 1950s, but why is it everywhere now?
https://drpepermd.com/wp-content/uploads/2019/09/Three-Breakthroughs-Unleashing-AI-1.docx (#1 Three Breakthroughs Unleashing AI . transcript)
There are three reasons AI is front and center. Number one is the ability of machines to do enormous amounts of computing very cheaply using something called GPUs, referred to as cheap parallel computation. Number two is massive amounts of data, aptly called Big Data. And the third reason is better computer algorithms, known as Deep Learning.
Almost a decade ago a new kind of chip known as a GPU, or graphics processing unit, was developed for the video game world, where millions of pixels had to be recalculated second to second, over and over again. Before long, experts in the field realized GPUs could be used to run neural networks in parallel, making computers far more powerful in their computations. Networks running on GPUs enable companies such as Facebook to identify your friends in photos and Netflix to create reliable recommendations for millions of subscribers.
A second reason for the explosion in AI is the literal avalanche of collected data computers use to learn and teach themselves. A computer needs to see thousands of examples before it can distinguish between, let’s say, a cat and a dog, or play a thousand games of chess before it can play the game at a proficient level. Some 2.5 quintillion bytes of data are produced daily worldwide, and it is astounding to realize that 90% of all the world’s data has been created in just the last 24 months. This has given us the big data computers need to train themselves and push AI forward.
The third reason AI has taken off is something called Deep Learning. Deep Learning evolved from artificial neural networks, collections of artificial neurons modeled on the human brain, except these are software-based calculators that function in a manner similar to human neurons. Artificial neural networks are created when neurons are connected one to another and organized into multiple layers. These deep layers of networks are why it’s called Deep Learning, and it has led computers to do revolutionary things, such as using speech to control devices like Alexa and even analyzing human emotion in customer reviews.
It is this perfect storm of cheap parallel computation, Big Data, and Deep Learning that has created the 60-years-in-the-making overnight success of artificial intelligence. This is Dr. Peper from Short and Sweet AI.
https://www.forbes.com/sites/bernardmarr/2017/04/25/the-complete-beginners-guide-to-artificial-intelligence/#202a441d4a83 (https://www.forbes.com/sites/bernardmarr/2017/04/25/the-complete-beginners-guide-to-artificial-intelligence/#202a441d4a83)
https://www.wired.com/2014/10/future-of-artificial-intelligence/ (https://www.wired.com/2014/10/future-of-artificial-intelligence/)</itunes:summary><itunes:author>Dr. Peper</itunes:author></item></channel></rss>