Microsoft Research India Podcast
Podcast: Potential and Pitfalls of AI with Dr. Eric Horvitz
Episode 001 | March 06, 2020
Dr. Eric Horvitz is a technical fellow at Microsoft, and is director of Microsoft Research Labs, including research centers in Redmond, Washington; Cambridge, Massachusetts; New York, New York; Montreal, Canada; Cambridge, UK; and Bengaluru, India. He is one of the world’s leaders in AI, and a thought leader in the use of AI in the complexity of the real world.
On this podcast, we talk to Dr. Horvitz about a wide range of topics, including his thought leadership in AI, his study of AI and its influence on society, the potential and pitfalls of AI, and how useful AI can be in a country like India.
Transcript
Eric Horvitz: Humans will always want to make connections with humans. Sociologists, social workers, physicians, teachers: we’re always going to want to make human connections and have human contact.
I think they’ll be amplified in a world of richer automation, so much so that even when machines can generate art and write music, even music with lyrics that might put a tear in someone’s eye if they didn’t know it was a machine, that will lead us to say, “Was that written by a human? I want to hear a song sung by a human who experienced something the way I would experience something, not a machine.” And so I think human touch, human experience and human connection will grow even more important in a world of rising automation, and those kinds of tasks and abilities will be even more compensated than they are today.
(music plays)
Host: Welcome to the Microsoft Research India podcast, where we explore cutting-edge research that’s impacting technology and society. I’m your host, Sridhar Vedantham.
Host: Our guest today is Dr. Eric Horvitz, Technical Fellow and director of the Microsoft Research Labs. It’s tremendously exciting to have him as the first guest on the MSR India podcast because of his stature as a leader in research and his deep understanding of the technical and societal impact of AI.
Among the many honors and recognitions Eric has received over the course of his career are the Feigenbaum Prize and the Allen Newell Prize for contributions to AI, and the CHI Academy honor for his work at the intersection of AI and human-computer interaction. He has been elected fellow of the National Academy of Engineering (NAE), the Association for Computing Machinery (ACM) and the Association for the Advancement of Artificial Intelligence (AAAI), where he also served as president. Eric is also a fellow of the American Association for the Advancement of Science (AAAS), the American Academy of Arts and Sciences, and the American Philosophical Society. He has served on advisory committees for the National Science Foundation, National Institutes of Health, President’s Council of Advisors on Science and Technology, DARPA, and the Allen Institute for AI.
Eric has been deeply involved in studying the influences of AI on people and society, including issues around ethics, law, and safety. He chairs Microsoft’s Aether Committee on AI, Ethics, and Effects in Engineering and Research. He established the One Hundred Year Study on AI at Stanford University and co-founded the Partnership on AI. Eric received his PhD and MD degrees at Stanford University.
On this podcast, we talk to Eric about his journey in Microsoft Research, his own research, the potential and pitfalls he sees in AI, how AI can help in countries like India, and much more.
Host: Eric, welcome to the podcast.
Eric Horvitz: It’s an honor to be here. I just heard I am the first interviewee for this new series.
Host: Yes, you are, and we are really excited about that. I can’t think of anyone better to do the first podcast of the series with! There’s something I’ve been curious about for a long time. Researchers at Microsoft Research come with extremely impressive academic credentials. It’s always intrigued me that you have a medical degree and also a degree in computer science. What was the thinking behind this and how does one complement the other in the work that you do?
Eric Horvitz: One of the deep shared attributes of folks at Microsoft Research, and of so many of our colleagues doing research in computer science, is deep curiosity, and I’ve always been one of these folks who says “why” to everything. I’m sure my parents were frustrated with my sequences of whys, one question leading to another. So I was very curious as an undergraduate. I did deep dives into physics and chemistry, of course the math to support it all, and biology, and by the time I was getting ready to go to grad school I was exploring so many sciences. But the big “why” for me, the one I could not figure out, was the why of human minds, the why of cognition. I just had no intuition as to how the cells, these tangles of cells that we learn about in biology and neuroscience, could have anything to do with my second-to-second experience of being a human being. So I thought, you know what, I have to spend my graduate years diving into the unknowns about this from the scientific side of things. Of course, many people have provided answers over the centuries; some of those answers are the foundations of religious beliefs and religious systems of various kinds.
So I decided to go get an MD-PhD: why not understand humans deeply, human minds as well as the scientific side of nervous systems? But I was still on an arc of learning as I hit grad school at Stanford, and it was great to be at Stanford because the medical school was right next to the computer science department. You could literally walk over, and I found myself sitting in computer science classes, philosophy classes, philosophy-of-mind classes and cognitive psychology classes. That was alongside the usual grad school life of an MD-PhD program, the anatomy classes and being socialized into the medical school class. But I was delighted by the pursuit of, you might call it, the philosophical and computational side of mind, and eventually I made the jump, the leap. I said, “You know what, my pursuit is principles. I think that’s the best hope for building insights about what’s going on.” And I turned those principles toward real-world problems, in particular, since I had a foot in the medical school, asking how we could apply these systems in time-critical settings to help emergency room physicians and trauma surgeons. Time-critical action means computer systems that have to act quickly, but also precisely, when they may not have enough time to think all the way through, and this led me to what I think is an interesting direction, which is models of bounded rationality, which I think describes us all.
Host: Let’s jump into a topic that seems to be on everybody’s mind today – AI. Everyone seems to have a different idea about what AI actually is and what it means to them. I also constantly keep coming across people who use AI and the term ML or machine learning as synonyms. What does AI mean to you and do you think there’s a difference between AI and ML?
Eric Horvitz: The scientists and engineers who first used the phrase “artificial intelligence” did so in a beautiful document, so well written in terms of the questions it asks that it could be a proposal to the National Science Foundation today, and it would seem modern given that so many of the problems have not been solved. But they laid out the vision, including the pillars of artificial intelligence.
There’s this notion of perception: building systems that could recognize or perceive and sense things in the world. This idea of reasoning: using logic or other methods to reason about problems and solve them. Learning: how systems could become better at what they do with experience and with other kinds of sources of information. And a final notion they focused on as being very much in the realm of human intelligence: language, understanding how to manipulate symbols in streams or sequences to express concepts, and the use of language.
So, learning has always been an important part of artificial intelligence; it’s one of several pillars of work. It’s grown in importance of late, so much so that people often write “AI/ML” to refer to machine learning, but it’s one piece, and has always been an important piece, of artificial intelligence.
Host: I think that clarifies the difference between AI and ML. Today, we see AI all around us. What about AI really excites you and what do you think the potential pitfalls of AI could be?
Eric Horvitz: So let me first say that AI is a constellation of technologies. It’s not a single technology. Although these days there’s quite a bit of focus on the ability to learn how to predict or move or solve problems via machine learning, by analyzing large amounts of data that has become available over the last several decades, where it used to be scarce.
I’m most excited about my initial goal of understanding human minds. So, whenever I read a paper on AI, or see a talk, or see a new theorem being proved, my first reaction is: how does it grow my understanding, how does it help to answer the questions that have been long-standing in my mind about the foundations of human cognition? I don’t often say that to anybody, but that’s what I’m thinking.
Secondly, my sense is: what a great endeavor to be pushing your whole life to better understand and comprehend human minds. It’s been a slow slog. However, insights have come with advances and how they relate to those long-standing questions, and along the way, what a fabulous opportunity to apply the latest advances to enhancing the lives of people, to empowering people in new ways, and to creating new kinds of automation that can lead to new kinds of value, new kinds of experiences for people. The whole notion of augmenting human intellect with machines has fascinated me for many decades. So I love the fact that we can now leverage these technologies and apply them, even though we’re still very early on in understanding how these ideas relate to what’s going on in our minds.
Applications include healthcare. There’s so much to do in healthcare in decreasing the cost of medicine while raising the quality of care. There’s this idea of being able to take large amounts of data to build high-quality, high-precision diagnostic systems, systems that can predict outcomes. For example, we recently created a system that can detect when a patient in a hospital is going to crash unexpectedly with organ system failures, and that can be used in ways that alert physicians in advance, so medical teams can be ready and actually save patients’ lives.
There are even applications that we’re now seeing in daily life, like cars that drive themselves. I drive a Tesla and I’ve been enjoying the experience of the semi-automated driving the system can do. Just seeing how far we’ve gotten in a few years with systems that recognize patterns, like the patterns on a road, or that recognize objects in the way for automatic braking. These systems can save thousands of lives. I’m not sure about India, but I know the United States statistics: there are a little more than 40,000 lives lost on the highways in the United States per year. Looking at the traffic outside here in Bangalore, I’m guessing that India is at least up there, with tens of thousands of deaths per year. I believe that AI systems can reduce these numbers of deaths by helping people to drive better, even if it’s just through safety-related features.
Host: The number of fatalities on Indian roads is indeed huge and that’s in fact been one of the motivators for a different research project in the lab on which I hope to do a podcast in the near future.
Eric Horvitz: I know, it’s the HAMS project.
Host: It is the HAMS project, and I’m hoping that we can do a podcast with the researchers on that sometime soon. Now, going back to AI, what do you think we need to look out for or be wary of? People, including industry leaders, seem to land on various points of a very broad spectrum, ranging from “AI is great for humanity” to “AI is going to overpower and subsume the human race at some point.”
Eric Horvitz: So, what’s interesting to me is that over the last three decades we’ve gone from “AI stands for almost implemented”: it doesn’t really work very well, have fun, good luck; to just getting things up and running and being so excited that there’s no concern other than getting the thing out the door and having it, for example, help physicians diagnose patients more accurately; to now, “Wait a minute! We are putting these machines in places that historically have always relied upon human intelligence. As these machines for the first time edge into the realm of human intellect, what are the ethical issues coming to the fore? Are there intrinsic biases in the way data is created or collected, some of which might come from the biases of the society that creates the data? What about the safety issues and the harms that can come from these systems when they make a mistake? When will systems be used in ways that could deny people consequential services like a loan or an education, because of an unfair decision, or a decision that aligns mysteriously or obviously with the way society has worked, amplifying deep biases that have come through our history?”
These are all concerns that many of us are bringing to light, asking for more resources and attention to focus on them, and also trying to cool the jets of some enthusiasts who want to just blast ahead and apply these technologies without thinking deeply about the implications, what I’d call the sometimes rough edges of these technologies. Now, I’m very optimistic that we will find pathways to getting incredible amounts of value out of these systems when they’re properly applied, but we need to watch out for all sorts of possible adverse effects when we take our AI and throw it into the complexity of the open world outside of our clean laboratories.
Host: You’ve teed up my next question perfectly. Is it incumbent upon the large tech companies that are leading the charge on AI to be responsible for what AI is doing, and for the ethics and fairness behind AI that make it equitable for people at large?
Eric Horvitz: It’s a good question. There are different points of view on that question. We’ve heard some company leaders issue policy statements along the lines of, “We will produce technologies and make them available, and it’s the laws of the country that will guide how they’re used or regulate what we do. If there are no laws, there’s no reason why we shouldn’t be selling something, with a focus on profit and our zeal for technology.”
Microsoft’s point of view has been that the technology created by the experts inside its laboratories and by its engineers sometimes gets ahead of where legislation and regulation need to be, and that we therefore bear a responsibility as a company, both in informing regulatory agencies and the public at large about the potential downsides of the technology and its appropriate uses and misuses, and in looking carefully at what we do when we actually ship our products, make a cloud service available, or build something for a customer.
Host: Eric, I know that you personally are deeply involved in thinking through AI and its impact on society, how to make it fair, how to make it transparent and so on. Could you talk a little bit about that, especially in the context of what Microsoft is doing to ensure that AI is actually good for everybody?
Eric Horvitz: You know, this is why it’s such a passion for me. I’ve been extremely interested, starting with the technical issues, which I thought, and still think, are really deep and fascinating: when you build a limited system that is by definition much simpler than the complex universe it’s going to be immersed in, and you take it from the laboratory into the open world. I refer to that as AI in the open world. You learn a lot about the limitations of the AI. You also learn to ask questions and to extend these systems so they’re humble, so they understand their limitations, they understand how accurate they are; you give them a level of self-knowledge. This is a whole area of open-world intelligence that I think really reads upon some of those early questions for me about what human minds are doing, and potentially the minds of other animals, vertebrates.
It started there for me. Back to your question now: we face the same kinds of things when we take an AI technology and put it in the hands of a judge who might make decisions about criminal justice, looking at recommendations based on statistics to help him or her take an action. Now we have to realize we’re building systems that work with people. People want explanations. They don’t want to look at a black box with an indicator on it. They will ask, why is this system telling me this?
So at Microsoft we’ve made significant investments, in our research teams, in our engineering teams and in our policy groups, in thinking through the details of the problems and solutions across a set of areas, and I’ll just list a few right now. Safety and robustness of AI systems. Transparency and intelligibility of these systems: can they explain themselves? Bias and fairness: how can we build systems that are fair along certain dimensions? Engineering best practices: what does it mean for a team working with tools to understand how to build a system and maintain it over time so that it’s trustworthy? Human-AI collaboration: what are the principles by which we can enable people to work in a fluid way with systems that might be trying to augment their intelligence, so that there’s a back and forth and an understanding of when a system is not confident, for example? Even notions about attention and cognition: are these systems being used in ways that might be favorable to advertisers, grabbing your attention and holding it on an application because they’ve mysteriously learned how to do that? Should we have a point of view about that?
So Microsoft Research has stood up teams looking at these questions. We’ve also stood up an ethics advisory board that we call the Aether Committee, to deliberate and provide advice on the hard questions coming up across the spectrum of these issues, and to provide guidance to our senior leadership team at Microsoft on how we do our business.
Host: I know you were a co-founder of the Partnership on AI. Can you talk a little bit about that and what it sought to achieve?
Eric Horvitz: This vision arose literally at conferences, and, in fact, one of the key meetings was at a pub in New York City after a meeting at NYU, where several computer scientists got together, all passionate about seeing things go well for artificial intelligence technologies by investing in understanding and addressing some of these rough edges. We decided we could bring together the large IT companies, Amazon, Apple, Facebook, Google and Microsoft, to think together about what it might mean to build a nonprofit organization that balanced the IT companies with groups in civil society, academic groups and nonprofit AI research, to think through these challenges and come up with best practices in a way that brought the companies together rather than separating them through a competitive spirit. Actually, this organization was created by the force of the friendships of AI scientists, many of whom go back to being in grad school together across many universities, an invisible college of people united in an interest in understanding how to do AI in the open world.
Host: Do you think there is a role for governments to play where policies governing AI are concerned, or do you think it’s best left to technology companies, individual thinkers and leaders to figure out what to do with AI?
Eric Horvitz: Well, AI is evolving quickly, and as with other technologies, governments have a significant role to play in assuring the safety of these technologies, their fairness and their appropriate uses. I see regulatory activity being, of course, largely in the hands of governments, advised by leadership in academia and in industry, and by the public, which has a lot to say about these technologies.
There’s been quite a bit of interest and activity; some of that is part of the enthusiastic energy, you might say, going into thinking through AI right now. Some people say there’s a hype cycle that’s leaking everywhere, into all regimes, including governments, right now. But it’s great to see various agencies writing documents, asking for advice, looking for sets of principles, publishing principles, and engaging multi-stakeholder groups across the world.
Host: There’s been a lot of talk and many conversations about the impact that AI can have on the common man. One of the areas of concern as AI spreads is large-scale loss of jobs. What’s your opinion on how AI is going to impact jobs?
Eric Horvitz: My sense is there’s a lot of uncertainty about this: what kinds of jobs will be created, what kinds of jobs will go away. If you take a segment like driving cars, I was surprised at how large a percentage of the US population makes its living driving trucks. Now, what if the long-haul parts of truck driving, the long highway stretches, go away when they become automated? It’s unclear what the ripple effects of that will be on society and on the economy. It’s interesting; there are various studies underway. I was involved in a National Academies study looking at the potential effects of new kinds of automation coming via computer science and other related technologies, and the result of that analysis was that we’re flying in the dark. We don’t have enough data yet to make these decisions, or to make recommendations, or to have an understanding of how things are going to go. So we see people saying things on all sides right now.
My own sense is that there’ll be some significant influences of AI on our daily lives and how we make our livings. But I’ll say one thing. One of my expectations, and maybe also a hope, is that as we see more automation in the world, and as that shifts the nature of what we do daily and what we’re paid or compensated to do, what we call work, there’ll be certain aspects of human discourse that we will simply learn, for a variety of reasons, that we cannot automate, that we aren’t able to automate, or that we shouldn’t automate. The way I refer to this is that in the midst of the rise of new kinds of automation, some of which reads upon tasks and abilities we would in the past have assumed were the realm of human intellect, we’ll see a concurrent rise of an economy around human caring. If you think about this, humans will always want to make connections with humans. Sociologists, social workers, physicians, teachers: we’re always going to want to make human connections and have human contact.
I think those will be amplified in a world of richer automation, so much so that even when machines can generate art and write music, even music with lyrics that might put a tear in someone’s eye if they didn’t know it was a machine, that will lead us to say, “Was that written by a human? I want to hear a song sung by a human who experienced something the way I would experience something, not a machine.” And so I think human touch, human experience and human connection will grow even more important in a world of rising automation, and those kinds of tasks and abilities will be even more compensated than they are today. So, we’ll see even more jobs in this realm of human caring.
Host: Now, switching gears a bit, you’ve been in Microsoft Research for a long time. How have you seen MSR evolve over time and as a leader of the organization, what’s your vision for MSR over the next few years?
Eric Horvitz: It’s been such an interesting journey. When I came to Microsoft Research it was 1992, and Rick Rashid and Nathan Myhrvold convinced me, along with two colleagues, to stay. We had just come out of Stanford grad school and had ideas about going into academia. We came up to Microsoft to visit, thinking we were just here for a day to check things out. There were maybe seven or eight people in what was then called Microsoft Research, and we said, “Oh, come on, please”; we didn’t really see a big future. But somehow we took a risk, and we loved this mission statement that starts with “Expand the state of the art.” Period.
The second part of the mission statement: “Transfer those technologies as fast as possible into real products and services.” The third part: “Contribute to the vibrancy of this organization.” I remember seeing in my mind, as we committed to doing this and trying it out, a vision of a lever with the fulcrum at the mountaintop on the horizon. And I thought, how can we make this company ours, our platform, to take our ideas, which were then bubbling (we had so many ideas about what we could do with AI from my graduate work), and move the world? That’s always been my sense of what Microsoft Research is about. It’s a place where the top intellectual talent in the world, top scholars, often with entrepreneurial bents, who want to get something done, can make Microsoft their platform for expressing their creativity and having real influence in enhancing the lives of millions of people.
Host: Something I’ve heard for many years at Microsoft Research is that finding the right answer is not the biggest thing; what’s important is to ask the right, tough questions. And also that if you succeed in everything you do, you’re probably not taking enough risks. Does MSR continue to follow these philosophies?
Eric Horvitz: Well, I’d say a few things about that. First of all, why should a large company have an organization like Microsoft Research? It’s unique; we don’t see it even in competitors. Most competitors are taking experts, if they can attract them, and embedding them in product teams. Microsoft has had the foresight, and we’re reaching 30 years now since we kicked off Microsoft Research, to say: if we attract top talent into the company, give these people time, and familiarize them with many of our problems and aspirations, they can not only come up with new ideas and out-of-the-box directions, they can also provide new kinds of leadership to the company as a whole, setting its direction, providing a weathervane, looking out at the late-breaking changes on the frontiers of computer science and other sciences, and helping to shape Microsoft in the world, versus, for example, helping a specific product team do better with the current conception of what a product should be.
Host: Do you see this role of Microsoft Research changing over the next few years?
Eric Horvitz: Microsoft has changed over its history, and one of my interests and reflections, which I shared in an all-hands meeting with MSR India just last night (in fact, I tried out some new ideas coming out of a retreat that the Microsoft Research leadership team had in December, just a few months ago), is how we might continue to think and reflect about being the best we can be, given who we are. I’ve called it polishing the gem: not breaking it, but polishing it, buffing it out, thinking about what we can do with it to make ourselves even more effective in the world.
One trend we’ve seen at Microsoft is that over the years we’ve gone from Microsoft Research as a separate tower of intellectual depth, reaching out into the company in a variety of ways, forming teams, advising, working with outside agencies, with students in the world, with universities, to a larger ecosystem of research at Microsoft, where we have pockets of advanced technology groups around the company doing great work, in some ways doing the kinds of things that Microsoft Research used to do, or was once solely doing at Microsoft.
So we see that upping the game as to what a center of excellence should be doing. I’m just asking the question right now: what are our deep strengths, this notion of deep scholarship, deep ability? How can we best leverage that for the world and for the company, and how can we work with other teams in the larger R&D ecosystem that has come to be at Microsoft?
Host: You’ve been at the India Lab for a couple of days now. How has the trip been and what do you think of the work that the lab in India is doing?
Eric Horvitz: You know, we just hit 15 here. This lab is 15 years old, so it’s just getting out of adolescence; it’s a teenager. It seems like just yesterday that I was sitting with Anandan, the first director of this lab, looking at a one-pager that he had written about standing up a lab in India. I was sitting in Redmond having coffee, and I tell you, that was a fast 15 years. But it’s been great to see what this lab became and what it does. Each of our labs is unique in so many ways, typically based on the culture it’s immersed in.
The India lab is famous for its deep theoretical chops, with fabulous theorists here, the best in the world, and for an interdisciplinary spirit of taking theory and melding it with real-world challenges to create incredible new kinds of services and software. One of the marquee areas of this lab has been this notion of taking a hard look, an insightful gaze, at emerging markets and Indian culture all up, and thinking about how computing, computing platforms and communications can be harnessed in a variety of ways to enhance the lives of people: how can they be better educated, how can we make farms and agriculture more efficient and productive, how can we think about new economic models and new kinds of jobs, how can we leverage new notions of what it means to do freelance or gig work. So the lab has its own feel, its own texture, and when I immerse myself in it for a few days I just love getting familiar with the latest new hires, the new research fellows, the young folks coming out of undergrad who are bright-eyed and inject energy into this place.
So I find Microsoft Research India to have a unique combination of talented researchers and engineers who bring to the table some of the world’s deepest theoretical understandings of hard computer science, including the challenge of understanding the foundations of AI systems. There’s a lot of work going on right now in machine learning, as we discussed earlier, but we don’t have a deep understanding, for example, of how these neural network systems work and why they’re working so well. I just came out of a meeting where folks in this lab have come up with some of the first insights into why some of these procedures are working so well. The ability to understand that, to understand their limitations, which ways to go and how to navigate these problems, is rare, and it takes a deep focus and an ability to understand the complexity arising in these representations and methods.
At the same time, we have the same kind of focus and intensity with a gaze at culture, at emerging markets. There are some grand challenges in understanding the role of technology in society when it comes to a complex civilization, or I should say a set of civilizations, like we see in India today: this mix of futuristic, out-of-the-box advanced technology with rural farms and classical ways of doing things, meshing the old and the new, and so many differences as you move from province to province, state to state. The sociologists and practitioners here who are looking carefully at ethnography, epidemiology and sociology, coupled with computer science, are doing fabulous things at the Microsoft Research India lab. They’re even coming up with new thinking about how we can mesh opportunistic Wi-Fi with sneakers, Sneakernet, people walking around to share large amounts of data. I don’t think that project would have arisen anywhere but at this lab.
Host: Right. So you’ve again teed up my next question perfectly. As you said, India’s a very complex place in terms of societal inequities and wealth inequality.
Eric Horvitz: And technical inequality, it’s amazing how different things are from place to place.
Host: That’s right. So, what do you think India can do to utilize AI better, and do you think India is a place that can generate new, innovative kinds of AI?
Eric Horvitz: Well, absolutely, the latter is going to be true, because some of the best computer science talent in the world is being educated and is working in this country, so of course we will see fabulous things, fabulous innovations, originating in India, both in the universities and in research labs, including Microsoft Research. As to how to harness these technologies, it takes a special skill to look at the currently available capabilities in a constellation of technologies and to think deeply about how to take them into the open world, the real world, the complex, messy world.
It often takes insight, as well as a very caring team of people, to stick with an idea, to try things out, to watch it and nurture it, and to involve multiple stakeholders in watching over time, for example, how a deployment works, gathering data about it and so on. So I think some very promising areas include healthcare. There are some sets of illnesses that are low-hanging fruit for early detection and diagnosis: understanding where we could intervene early, by looking at pre-diabetes states, for example, and guiding patients to care early on so they don’t progress into more serious pathophysiologies; understanding when someone needs to be hospitalized and how long they should be hospitalized. In a resource-limited realm, we have to selectively allocate resources, and doing that more optimally can lead to great effects.
Then there’s this idea of understanding education: how to educate people, how to engage them over time, diagnosing which students might drop out early on and alerting teachers to invest more effort, understanding when students don’t understand something and automatically helping them get through a hard concept. We’re seeing interesting breakthroughs now in tutoring systems that can detect these states. And transportation. I mean, it’s funny, we build systems in the United States, and this is what I was doing, to predict traffic and to route cars ideally. Then we come to India, and we look at the streets here and say, “I don’t think so, we need a different approach,” but that just raises the stakes on how we can apply AI in new ways. So the big pillars are education, healthcare, transportation, and even understanding how to guide resources and allocations in the economy. I think we’ll see big effects from insightful applications in this country.
Host: This has been a very interesting conversation. Before we finish do you want to leave us with some final thoughts?
Eric Horvitz: Maybe I’ll make a call-out to young folks who are thinking about their careers and what they might want to do, and assure them that it’s worth it. It’s worth investing in taking your classes seriously, in asking lots of questions, in having your curiosities addressed by your teachers, your colleagues and your family. There’s so much excitement and fun in doing research and development, in being able to build things and feel them and see how they work in the world, and, maybe most of all, in being able to take ideas into reality in ways where you can see the output of your efforts and ideas really delivering value to people in the world.
Host: That was a great conversation, Eric. Thank you!
Eric Horvitz: Thank you, it’s been fun.