In The Moment: Episode 43

Oct 10, 2019 | Listening Guide, Town Crier


Episode Transcript

Please note: This transcript was generated automatically, please excuse typos, errors, or confusing language. If you’d like to join our volunteer transcription team and help us make our transcript more accurate, please email communications@townhallseattle.org.

Jini Palmer: Hello and welcome to Town Hall Seattle’s podcast, In The Moment. I’m your host, Jini Palmer. It’s the second week of October, 2019, and the temperatures have plummeted. The leaves are turning bright oranges and reds, and people have been cozying up to our Otto Bar and the library that’s in the Forum before and after events. So come on down, grab a book, a drink, and bring your friends. In the coming week, we’ve got programs about mushrooms, science, the economy, and music to fill your mind and senses. And next Wednesday, professor and author Melanie Mitchell is coming to our Forum stage to talk about AI. She chats with our Chief Correspondent Steve Scher on this episode of In The Moment to give you a glimpse into a human thinking about AI.

Facial recognition programs are the latest technology to be touted as a breakthrough in the human quest to create artificial intelligence. But it turns out the software doesn’t work like it does on TV, where high-speed computers magically find just the person being sought. Facial recognition software can be fooled by something as common as a blurred image, according to computer scientist Melanie Mitchell. That can make plans to put AI into positions of decision making, from self-driving cars and security to financial decisions, ill-considered: they just aren’t that smart. Mitchell has written a book, Artificial Intelligence: A Guide for Thinking Humans. She says her goal is to help readers get a sense of what the field has accomplished and how much further it needs to go before these machines can actually take part in the conversation about their own intelligence, that is, to be conscious. Mitchell will be coming to our Forum stage at 7:30 PM on October 16th to unravel the promises and pitfalls of artificial intelligence. Our Chief Correspondent Steve Scher spoke with the Portland State University professor of computer science.

***

Steve Scher: Thank you for talking to me. I appreciate it. 

Melanie Mitchell: Oh well thanks for the opportunity. 

SS: I’ll just jump in, because I was wondering, you know, even after reading all this, I still sometimes get a little vague. Is there anything artificially intelligent about this Skype technology we’re using?

MM: Well, I guess it depends on your definition of intelligence. I would say no, but it certainly does some very useful things. It probably does some signal processing of our audio, takes out noise, and all of that stuff. You know, people have different definitions of intelligence.

SS: I know. That’s why I asked. So for your definition, no?

MM: No. Yeah, I would say absolutely not.

SS: Yeah. Imagine a system that did have intelligence that was doing this. What would be more sophisticated about it, if sophisticated is even the right word?

MM: Well, it would be able to join in our conversation, make some comments, tell you that you’re not speaking loudly enough or clearly enough, or maybe, I didn’t see this, but maybe you have something like egg on your tie, and it could tell you that.

SS: I see, that would be intelligent. I mean, Alexa doesn’t do that most of the time. Alexa says, Hmm, I don’t know that one.

MM: Right, exactly. I mean, I don’t think intelligence is a yes or no question. There are certainly degrees of intelligence; it’s a continuum. There are also different dimensions of intelligence. So it’s a little hard to say that something is or isn’t intelligent, especially since we don’t have a good definition, but it certainly isn’t intelligent enough to join in our conversation.

SS: Is that one of the issues with trying to write a book about where we stand with artificial intelligence, that the definition of intelligence is still under examination?

MM: Yeah, that makes it a little more challenging. But it’s not that unusual. In every science, people use terms all the time that don’t have a very rigorous definition. One example: I know that people in genetics are still debating the definition of the word gene. And for a long time in physics, the word force was ill-defined; it was kind of a placeholder for something that we didn’t understand yet. I think of intelligence as being like that: it’s a placeholder until we understand better what we’re talking about. Consciousness is the same. All these words just stand in for things we don’t understand yet.

SS: I always find it fascinating to think that there are still all these black boxes, and yet scientists, what would you say, are they scratching around on the outsides of these black boxes trying to get in? What’s a good metaphor?

MM: Yeah, I think that’s pretty good. Or, if you ever read the book Flatland: it’s something in a two-dimensional world trying to imagine a three-dimensional world. We’re in our two-dimensional world of understanding intelligence, and there’s a third dimension, maybe even a fourth, that we haven’t even conceptualized yet.

SS: I see. We’re working at it though, right? That’s part of what everybody’s trying to do.

MM: Absolutely. Yeah. 

SS: So I want to know how you started thinking about this, and I’m going to take kind of a geeky, roundabout way there. There’s a Star Trek episode where Picard encounters a group of people, the Tamarians, who communicate in metaphor, or story, or allegory; it’s always unclear what actually is going on. But I always thought it was a remarkable way to think about how we communicate, because though they had language, they had words that matched, they had no way of actually communicating because it was all in metaphor. And I noticed that one of the first things you write about, that kind of got you thinking about all this, was this book by George Lakoff and Mark Johnson, who are pretty amazing writers about metaphors. What was it about that idea that began the spark for you in thinking about artificial intelligence?

MM: So in a sense, we all communicate via metaphors, and that is something we’re quite unconscious of; metaphor has really permeated our language. If we say, oh, she gave me a very warm welcome, that’s a metaphor, right? Warmth is a physical temperature, and it’s not literally warm, the temperature didn’t literally go up, but that’s the way we think about most concepts. And that was just a real revelation for me. That’s not something I’d ever thought about before I read that book. But once you see it, you see it everywhere in language, and it really gives an insight into the way we think, the way we conceptualize. So that got me really interested in this whole question of: what are our concepts? How do we use concepts in such a flexible way, where computers are so rigid and literal? They would have a hard time understanding that kind of metaphor. So it seemed like an incredible challenge to be able to get computers to understand things in the same way we do. And a lot of people think that’s impossible. I think it’s possible, but we don’t know how to do it yet. So it’s a really interesting open question.

SS: Just on a side note, or maybe it’s part of it: I teach at the UW now, teaching a communications class, and many of my students are communications majors, which is about language and non-language, nonverbal interactions, all the ways we humans interact. Sometimes half or three quarters of the students are Chinese-speaking students. And I often find myself using a metaphor or an idiom and then looking up and realizing that makes no sense to them, that it has no meaning for them, because their command of English, like my command of Cantonese or Mandarin, doesn’t take us that far. It’s so sophisticated, isn’t it? These concepts, these metaphors that we live by.

MM: Yeah. And you know, each language has its own. It’s interesting, too, if you try to learn a foreign language, to learn what their metaphors are and how they differ. One example I remember from French: in English we say “I had a dream,” right? It’s this notion that you possessed it, or it possessed you. Whereas in French you say “I made a dream.” And I wonder if they really conceptualize it differently, or if they just have a different way of saying it; there’s a lot of controversy on that. So I think this kind of cross-cultural study of metaphors is really interesting.

SS: Have we seen people doing work in AI who are having any breakthroughs by comparing metaphors across different languages?

MM: You know, I haven’t seen that. One of the things I talk about in my book is translation programs like Google Translate and kind of where they stand. It’s a good question whether they could deal with translating metaphors. They use statistics: they learn from lots of paired sentences, where you have a sentence in English and a sentence in French, and from the statistics of associations like that they learn how to translate phrases. They do it well in some cases and very poorly in others. So I guess the answer is no, we haven’t seen a breakthrough in that yet.
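
To make that statistical picture concrete, here is a toy sketch in Python of learning word correspondences purely from co-occurrence counts over paired sentences. It is nothing like Google Translate’s actual system; the tiny corpus, the word-by-word pairing, and the translate_word helper are all invented for illustration.

```python
# Toy word-level "translation" learned only from association statistics.
# An illustration of the statistical idea, not Google Translate's system.
from collections import Counter, defaultdict

# Hypothetical parallel corpus of (English, French) sentence pairs.
corpus = [
    ("i had a dream", "j'ai fait un rêve"),
    ("a cat", "un chat"),
    ("the dream", "le rêve"),
]

cooccur = defaultdict(Counter)
for en, fr in corpus:
    for e in en.split():
        for f in fr.split():
            cooccur[e][f] += 1  # count every English/French word pairing

def translate_word(word):
    """Pick the French word that most often co-occurred with this one."""
    return cooccur[word].most_common(1)[0][0] if word in cooccur else word

# Works here ("a dream" -> "un rêve"), but there is no understanding,
# only counts, which is why such systems fail outside their statistics.
print([translate_word(w) for w in "a dream".split()])
```

Real systems model whole phrases and context with neural networks, but the limitation Mitchell describes is the same: the program has association statistics, not understanding, so a metaphor it hasn’t seen statistically is just another string.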

SS: All right, I want to come back to Google Translate and all those systems, but let me come back to how you got into this. Why was it that this notion of language and metaphor, these ideas of the physical, hot, cold, warmth, and how that translates into language and metaphor, why did that translate for you into wanting to study it through computer technology and computing?

MM: Well, what really got me into the field of AI was Douglas Hofstadter’s book Gödel, Escher, Bach. I don’t know if you’re familiar with that, but it was a book written back in the 70s that really tackled the question of how something like consciousness or understanding could emerge from something like the brain, where you have neurons, which are like little machines. They don’t understand anything individually, but collectively we get the phenomenon of human cognition. How does that happen? I don’t think the book completely answered that question, but it approached it in a really novel and interesting way. When I read it, it just spoke to me as the most interesting question out there. So that’s really what got me into the field of AI, and I even ended up studying with Douglas Hofstadter, who was my PhD advisor.

SS: But why was it through computers? You could have studied linguistics, you could have studied cognition. What was it about unwrapping that question of consciousness through looking at how computers function that intrigued you?

MM: That’s a really good question. I guess my background kind of primed me to become a computer scientist. I focused a lot on math and physics as an undergraduate, did a lot of things with computers, and kind of grew up with computers. So it struck me that computation, as a phenomenon, was broader than just computers: computation was a way to understand how we think. That was my view back then, at least. And so AI was a way not only to make computers smart, but to use computers to understand ourselves.

SS: I see. All right, let me ask one more thing about this. Let me ask you about this experiment as a way to understand how this stuff gets applied. You’re reporting on this experiment, this test, but explain it a little bit. A group of researchers was trying to test how the physical sensation that something is warm relates to how somebody perceives somebody else through these metaphors: somebody warm, somebody cold. Do you remember that experiment that you wrote about?

MM: Yeah, I do.

SS: Can you describe the experiment and what was being studied?

MM: Yeah. So I guess the thing that was being studied was: are these metaphors just things that we say, or do we actually have some sort of literal, physical grounding of these metaphors in our bodies?

So here’s the experiment. You have the subject come into the lab, and while the subject is being checked in, the experimenter says, could you hold this cup for me? It’s framed as being part of the check-in rather than part of the experiment, and the cup is either hot coffee or cold water, let’s say. Then the subject reads about someone and is asked to describe that person’s personality. And it turned out that if the person had been asked to hold a cup of hot coffee, they used metaphors like “she’s very warm,” or “she’s very cold” if they had held a glass of cold water. The actual physical temperature that they had recently experienced influenced the way they communicated metaphorically. So there was evidence that we actually conceptualize these metaphors in terms of physical, bodily states. And this has been replicated a lot of times in different ways.

And so the theory is that all of our abstract concepts really relate back to our bodies and the way our bodies sense temperature, sense space, sense time; that our thinking is fundamentally rooted in our bodies. That was a discussion in the book where I was talking about how humans understand the world, focusing on the question: well, computers don’t have bodies, right? Could they ever understand the world in the same way we do without a body, or does it even matter? Do they need to in order to attain human-level intelligence?

SS: And where are you at now in your thoughts about that? 

MM: I’m kind of coming around to the embodied cognition idea: that we can’t get computers to understand our language or our world without some kind of body that can experience the same things we do. Now, that brings up another question: why would we want them to understand our world in the way we do? Well, I think it’s an open question whether they can actually do the things we want them to do, like drive on their own, or make decisions about people, or war, or any kind of social thing that involves humans. Can we trust them to make those decisions without actually understanding things in the same way we do? That’s a very big question, and I think it’s an open one. A lot of people in AI say we don’t have to mimic human intelligence, that computers don’t have to have the same kind of intelligence we have. But then there’s the question: how can we trust them? So that’s what that whole section was about.

SS: Well, it also raises the question of what their intelligence would be if it wasn’t grounded in human intelligence. Of course, that’s the bugaboo, right? Oh, AI will be the singularity that takes over the whole world. And you touch on that, you write about that. But is there a way to think about an interactive intelligence, you know, the Skype that comments on the egg on my tie, that is outside our way of conceptualizing intelligence?

MM: I don’t know. I think that’s a very good question. We certainly have other kinds of intelligences in the world. We have animals that don’t have exactly the same kind of intelligence we have, and perhaps they think in different kinds of metaphors, depending on how advanced they are, and we have trouble communicating with them. But we want computers that can communicate with us, that can assist us, and that can in some sense share our values. So I’m not sure that we want computers that have a different kind of intelligence.

SS: That’s a wild notion, computers that share our values. Has anybody been working on that aspect of artificial intelligence?

MM: There’s a lot of talk about it, about imbuing computers with morality, the kind of moral value systems that we humans care about, and how one could do that. There are actually several books just coming out on that very topic, and there are even research centers that look at that kind of question. But it’s very hard, because in order to have moral values, you really need to have concepts. You need to understand something about the world. And that’s exactly what computers aren’t able to do yet. So I think it all goes together: intelligence, morality, having values. It’s hard to imagine separating them out.

SS: Well, you know, in the book you talk about the famous ones that Asimov propagated, the fundamental laws of robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given to it by human beings, except where such orders conflict with the first law; and a robot must protect its own existence as long as that protection does not conflict with the first or second law. You talk about how Asimov was a visionary, but you mention that he wrote a story where a robot got stuck because it couldn’t make a decision between those laws. It was caught in a feedback loop, not able to be, I guess, intelligent.

MM: Right. I think his purpose in stating those laws was, somewhat ironically, to show that any fixed law is going to lead to suboptimal behavior; you have to be flexible about it. Computers are too literal, and the robot in those stories was trying to literally apply the laws without any flexibility, and it got stuck in a loop. So that really was the point. And people have seen this throughout the history of AI: it’s impossible to write down rules for behavior, because behavior just doesn’t fit into a set of fixed rules. We can’t engineer knowledge into computers, because knowledge doesn’t fit into a set of fixed logical rules. That’s why AI has had so much trouble over the years, and why people have turned to learning from big data rather than trying to program rules into computers. But each approach has its own limitations.

SS: All right, so let’s look at that. I mean, this book, Artificial Intelligence: A Guide for Thinking Humans, is an assessment of where we are now with AI. So let’s pick a few things. Where are we now? All right, let’s go back to Google Translate. Is Google Translate a sophisticated AI?

MM: I would say it’s a very sophisticated program. It works very well a lot of the time, but it can have pretty spectacular failures, which anyone who’s tried to use it to translate anything beyond relatively simple texts can see. And it’s interesting, because it’s able to do these translations, sometimes very well, without really understanding anything about the text it’s processing. That’s kind of the state of AI in general: we have these systems that can do very well in many domains, except they occasionally fail, and the reason they fail is that they really don’t understand the data they’re processing. They have no context. The context, the models of the world that we humans have, these systems don’t have. Some people like to call it common sense: they’re missing common sense, which really means they don’t have knowledge about the world and the ability to apply that knowledge in new kinds of situations.

SS: Well, who has been successful, even in limited ways, in bringing common sense to AI?

MM: That’s a good question. Really no one.

SS: They’ve tried though, right? There are people who were trying?

MM: This goes back to the beginning of the field itself, trying to give computers common sense. There have been different approaches. One is to program it all in: tell computers every single fact about the world. Well, we can’t do that, because there are just too many facts about the world, including ones we don’t even consciously think of ourselves. So the new approach is to let the systems learn, let them learn from data. But how do they learn? So far this learning approach hasn’t resulted in breakthroughs in common sense. And in fact, there’s a new government program to fund AI researchers, called Foundations of Common Sense. It’s funded by DARPA, the Defense Advanced Research Projects Agency.

The goal of it, it’s kind of a grand challenge, is to create a program with the common sense of an 18-month-old baby, and to have it go through all the developmental stages of an 18-month-old baby. To me it’s a really interesting contrast, because we have these computers that can beat any human at chess or Go, that can drive on their own in many situations, that can translate between languages, that can do all these incredible tasks. But it’s still a grand challenge to build a program that has anything like the knowledge or common sense of an 18-month-old, who really can’t do much at all, certainly nothing like those fancy kinds of intelligence. And yet that’s the horizon everyone’s looking at now and saying, that’s our goal. It comes back to a famous statement by Marvin Minsky, who said that what he learned about AI is that easy things are hard: the things that are easiest for us, the things an 18-month-old baby can do, are in fact the hardest things for AI.

SS: Talk about AIBO. What was the process by which you were able to do an experiment to train one of those little robot dogs to kick a soccer ball? On the outside, all we see is that the little robot dog can walk up to a soccer ball, kick it, and then wait for me to kick it back. What was entailed in that?

MM: So first I’ll say that I didn’t actually train a real AIBO dog; I trained a simulation. And I trained it via a process called reinforcement learning. It kind of mimics early psychology experiments where people trained rats to run mazes and so on, where you give them rewards if they do the right thing. And that’s all you do: you let them take actions, and if they ever do the right thing, you give them a reward. They can learn from those rewards, and they can learn by looking a little bit back in time. So if the robot dog actually manages to kick the ball, you give it a reward. Now, in a computer, of course, it’s not like a real dog that gets pleasure from a dog treat; it’s a simulated reward. But it can then learn that the things it did leading up to kicking the ball were actually good too, even though it didn’t get a reward for those. Reinforcement learning allows the system to figure out, back in time, what it did right to lead up to the part that gave it a reward. This was an extremely simple reinforcement learning system that I used, but a much more complicated version of it was exactly what was used to train machines to play chess and Go better than any human. So it’s a very powerful learning technique.
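
Here is a minimal tabular Q-learning sketch of the kind of reinforcement learning Mitchell describes, on a toy stand-in for the robot-dog task: the agent starts at one end of a line, the ball sits at the other, and only kicking at the ball’s position earns a reward. The environment, actions, and parameters are all invented for illustration; this is not Mitchell’s actual experiment.

```python
# Toy Q-learning: learn to walk to the ball (position 4) and kick it.
# Invented stand-in for the robot-dog task, not Mitchell's experiment.
import random

N_STATES = 5                      # positions 0..4; the ball sits at 4
ACTIONS = ["step", "kick"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step_env(state, action):
    """Return (next_state, reward, done) for the toy world."""
    if action == "kick":
        # Kicking at the ball ends the episode with reward 1;
        # kicking anywhere else does nothing.
        at_ball = state == N_STATES - 1
        return state, (1.0 if at_ball else 0.0), at_ball
    return min(state + 1, N_STATES - 1), 0.0, False

for episode in range(500):
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)                      # explore
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit
        nxt, reward, done = step_env(state, action)
        # Value flows backward in time: the rewarded kick raises the
        # value of the steps that led up to it via the next-state max.
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# Learned policy: "step" toward the ball at positions 0-3, "kick" at 4.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

The key line is the Q update: the reward earned by the final kick gradually propagates backward to the steps that led up to it, which is the “looking back in time” Mitchell describes. Scaled up with deep networks, the same idea underlies the systems that learned to play Go.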

SS: Is it intelligence? 

MM: As I said, I don’t think you can say yes or no. I would say it has some intelligent behavior, but very limited. And one problem is that, unlike humans, if something changes about the way I present the problem, let’s say I give it a much bigger ball, it might have problems adapting its knowledge or its intelligent behavior to this new situation. Or if there’s some obstacle in front of the ball, it wouldn’t know what to do. Similarly, these systems that play chess or Go better than any human are incredibly intelligent at that task, but if you change the task a little bit, like you change the shape of the Go board or something like that, they’re unable to adapt. So I think a big part of intelligence is that ability to adapt your knowledge to new kinds of situations, to changes in your environment. And that’s something that AI really struggles with.

SS: So in your book, you talk about a series of puzzles that a Russian researcher had created, I’m trying to find it, you probably know more than me, where different shapes and sizes are compared. Is that sort of what you’re working on then?

MM: It’s related to what I’m working on, yeah. Those are a set of visual puzzles where you’re asked to see abstract similarities. They’re very difficult, not just for machines, but also for people; some of them are quite difficult. They’re kind of like an IQ test, in a way. But what makes them most difficult for machines is that there are very few examples; you have to abstract from very few examples. And right now, what computers are best at is when they have millions of examples. That’s something quite different from humans: humans don’t need that many examples to learn something or to abstract a concept. So I’m interested in how we can do that kind of abstraction, as people do, with very few examples.
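
As a toy illustration of that kind of abstraction, here is a sketch in the spirit of those puzzles: given just three examples on each side, the program searches for an attribute whose values cleanly separate them. Everything here, the shapes, the attributes, the induce_rule helper, is invented for illustration, and it sidesteps the genuinely hard part, which is perceiving those attributes from raw images in the first place.

```python
# Toy Bongard-style rule induction: from three examples per side,
# find the attribute whose values cleanly separate the two sides.
left = [
    {"shape": "circle", "size": "big"},
    {"shape": "square", "size": "big"},
    {"shape": "triangle", "size": "big"},
]
right = [
    {"shape": "circle", "size": "small"},
    {"shape": "square", "size": "small"},
    {"shape": "triangle", "size": "small"},
]

def induce_rule(left, right):
    """Return an attribute (and its left-side values) that separates the sides."""
    for attr in left[0]:
        left_vals = {item[attr] for item in left}
        right_vals = {item[attr] for item in right}
        if left_vals.isdisjoint(right_vals):  # no value appears on both sides
            return attr, left_vals
    return None

print(induce_rule(left, right))  # ('size', {'big'}): the rule is big vs. small
```

Six examples suffice here only because the representation hands the program its concepts ready-made; the open problem Mitchell describes is doing this kind of few-example abstraction starting from raw perception.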

SS: How do you feel about it? I mean, I know you like doing it and it’s fun. But how do you feel about it in terms of what you’ve been talking about, in terms of the evolution of AI?

MM: I’m excited about it. I think it’s a very hard problem, and I don’t think we have all the ideas yet that are needed to get it to work. But it’s always exciting to work on open problems that don’t have solutions yet. That’s one of the reasons I like AI: all the problems are still open.

SS: I hope you don’t mind, but I love the last paragraph of your book: “The impacts of AI will continue to grow for all of us. I hope that this book has helped you as a thinking human to get a sense of the current state of this burgeoning discipline, including its many unsolved problems, potential risks and benefits of technologies, and the scientific and philosophical questions it raises for understanding our own human intelligence. And if any computers are reading this, tell me what ‘it’ refers to in the previous sentence, and you’re welcome to join in the discussion.” That encapsulates everything you’re talking about. It’s the understanding of self in its relation to the world that we are still waiting to see emerge from the machines we’re building.

MM: That’s exactly right. Yeah.

SS: And I guess we’re still waiting. But in the meantime, we have these dumb machines that are sort of taking over various aspects of our world.

MM: It’s not necessarily all bad. We all rely on them and they can do a lot of useful things, but I guess we should really think hard before we turn over all of our decision making to these machines.

SS: Alright, professor, I appreciate you taking the time to talk to me. 

MM: Thanks. I enjoyed it very much. 

***

JP: Get yourself a signed copy of Melanie’s book, Artificial Intelligence: A Guide for Thinking Humans, and be part of the conversation next Wednesday, October 16th, at Town Hall Seattle. The program starts at 7:30 PM, but you can come early and hang out in our downstairs library or Otto Bar and be part of our Town Hall community.

Thank you for listening to In The Moment. If there’s anyone coming to Town Hall that you’d like to hear from, email us your requests at podcast@townhallseattle.org. Our theme music comes from the Seattle-based band Hibou and Seattle’s own Barsuk Records. If you can’t make it to an event, you can always listen on our Arts & Culture, Civics, and Science series podcasts; just search “Town Hall Seattle” and subscribe. And if you’d like to support Town Hall, consider becoming a member. We have many different tiers of membership, but everything goes toward supporting this civic and community institution, our inclusive programming, and the connections and cultural experiences you’ll have. Till next time, thanks for joining us right here, in the moment.
