In a career spanning several decades, artificial intelligence researcher and professor Stuart Russell has contributed extensive knowledge on the subject, including foundational textbooks. He joined us onstage at TC Sessions: Robotics + AI to discuss the threat he perceives from AI, and his book, which proposes a novel solution.
Russell’s thesis, which he develops in “Human Compatible,” is that the field of AI has been built on the false premise that we can successfully define the goals toward which these systems work, and the result is that the more powerful they are, the more harm they are capable of causing. No one really thinks a paperclip-making AI will consume the Earth to maximize production, but a crime-prevention algorithm could very easily take badly constructed data and objectives and turn them into recommendations that cause real harm.
The solution, Russell suggests, is to create systems that aren’t so sure of themselves — essentially, knowing what they don’t or can’t know and looking to humans to find out.
The interview has been lightly edited. My remarks, though largely irrelevant, are retained for context.
TechCrunch: Well, thanks for joining us here today. You’ve written a book. Congratulations on it. You’ve been an AI researcher, author and teacher for a long time, and you’ve seen the field of AI graduate from a niche area that academics were working in to a global priority in private industry. But I was a little surprised by the thesis of your book; do you really think that the current approach to AI is fundamentally mistaken?
Stuart Russell: So let me take you back a bit, to even before I started doing AI. Alan Turing, who, as you all know, is the father of computer science — that’s why we’re here — wrote a very famous paper in 1950 called “Computing Machinery and Intelligence”; that’s where the Turing test comes from. He laid out a lot of the different subfields of AI, and he proposed that we would need to use machine learning to create sufficiently intelligent programs.