Humans + AI

Michael I. Jordan on a collectivist perspective on AI, humble genius, design for social welfare, and the missing middle kingdom (AC Ep15)

August 20, 2025

“The fact is that its input came from billions of humans… When you’re interacting with an LLM, you are interacting with a collective, not a singular intelligence sitting out there in the universe.”

–Michael I. Jordan

About Michael I. Jordan

Michael I. Jordan is the Pehong Chen Distinguished Professor in Electrical Engineering and Computer Science and in Statistics at the University of California, Berkeley, and holds a chair in Markets and Machine Learning at Inria in Paris. His many awards include the World Laureates Association Prize, the IEEE John von Neumann Medal, and the Allen Newell Award. He has been named in the journal Science as the most influential computer scientist in the world.

Website:

arxiv.org

LinkedIn Profile:

Michael I. Jordan

University Profile:

Michael I. Jordan

What you will learn
  • Redefining the meaning of intelligence

  • The social and cultural roots of human genius

  • Why AI is not true superintelligence

  • Collective genius as the driver of innovation

  • The missing link between economics and AI

  • Decision making under uncertainty and asymmetry

  • Building AI systems for social welfare

Episode Resources

Transcript

Ross Dawson: Michael, it’s wonderful to have you on the show.

Michael I. Jordan: My pleasure to be here.

Ross: Many people seem to be saying that AI is going to beat all human intelligence very soon. And I think you have a different opinion.

Michael: Well, there are a lot of problems with that framing of the technology. First of all, we don’t really understand human intelligence. We think we do because we’re intelligent, but there are depths we haven’t probed, and the field of psychology is just getting going—not to mention neuroscience.

So just saying that something that mimics humans, or took a vast amount of data and brute-force mimicked humans, has human intelligence nailed seems like a kind of leap to me. Moreover, the sequence of logic doesn’t particularly work for me: we figured out human intelligence, now we can put it in silicon and scale it, and therefore we’ll get superintelligence.

Every step there is questionable. The scaling part, I guess, is okay, but we have not figured out human intelligence. Even if we had, it’s not really clear to me that, as a technology, our goal should be to mimic or replace humans. In some jobs, sure, but we should think more about overall social welfare and what’s good for humans. How do we complement humans?

So, no, I don’t think we’ve got human intelligence figured out at all. It’s not that it’s a mystical thing, but we have creativity. We have experience and shared experience, and we plumb the depths of that when we interact and when we create things.

Those machines that are doing brute force gradient descent on large amounts of text and even images or whatever—they’re not getting there. It is brute force. I don’t think sciences have really progressed by just having brute force solutions that no one understands and saying, “That’s it, we’re done.”

So if you want to understand human intelligence, it’s going to be a while.

Ross: There’s a lot to dig into there, but perhaps first: just intelligence. You frame that as, among other things, social and cultural, not just cognitive?

Michael: Absolutely. I don’t think that if you put me on a desert island, I’d do very well. I need to be able to ask people how to do things. And it’s not just a desert island: put me in a foreign country, without the 40 years of education that imbued me with the culture of our civilization, and I wouldn’t do very well either.

Anytime I’m not knowledgeable about something, I can go find it, and I can talk to people. Yes, I can now use technology to find it, but I’m really talking to people through the technology. I don’t think we appreciate how important that cultural background is to our thinking, to our ability to do things, to execute, and then to figure out what we don’t know and what we’re not good at. That’s how we trade with others who are better at it, how we interact, and all that.

That’s a huge part of what it means to be human, and how to be a successful and happy human. This mythological Einstein sitting all by himself in a room, thinking and pondering—I think we’re way too wedded to that. That’s not really how our intelligence is rolled out in the real world.

Generally, we’re very uncertain about things in the real world. Even Einstein was uncertain, had to ask others, learn things, and find a path through the complexity of thought.

Also, I’ve worked on machine learning for many years, and I’m pretty comfortable saying that learning is a thing we can define, or at least start to define: you improve on certain tasks. Intelligence—I’m just much less happy with trying to define it. I think there’s a lot of social intelligence, so I’m using that term loosely. But human, single intelligence—what is that? What does it mean to generalize it?

Talking about thought in the computer is the old dream of AI. I don’t know if we have thought in a computer. Some people sort of say, “Yeah, we have it,” because it’s doing these thinking-like things. But it’s still making all kinds of errors. You can brute force around them for as long as you can and get humans to aid you when you’re making errors.

But at some point you have to say, “Wait a minute, I haven’t really understood thought. I’m not getting it. I’m getting something else. What am I getting? How do I understand that? How does it complement things? How does it work in the real world?”

Then you need to be more of an engineer—try to build it in a way that actually works, that is likely to help out human beings, and think like an engineer and less like a science fiction guru.

Ross: So you’ve used the phrase “human genius” as a sort of benchmark we compare AI against. And the phrase “human collective genius,” I suppose, ties into some of your points here—where that genius, or that ability to do exceptional things, is a collective phenomenon, not an individual one.

Michael: Oh no, without a doubt. I’ve known some very creative people, and every time you talk to them, they make it very clear that the ideas came from the ether—from other people. Often, they just saw the idea develop in their brain, but they don’t know why.

They are very aware of the context that allowed them to see something differently, execute on it, and have the tools to execute. So my favorite humans are smart and humble. Right now in technology, we have a lot of people who are pretty smart but not very humble, and they’re missing something of what I think of as human genius: the ability to be humble, to understand what you don’t know, and to interact with other humans.

Ross: One of the other things you emphasize is how we design these systems. We’ve created some pretty amazing things. But as you suggest, there seems to be this very strange obsession with artificial general intelligence as a focus.

Among all the reasons that’s flawed, one is that it doesn’t imbue social welfare as a fundamental principle that we should be using to design these systems.

Michael: I think you’ve just hit on it. To me, that’s the fundamental flaw with it. I mean, you can say the flaw is that you can’t define it, and so on and so forth. But for me, the flaw is really that it’s an overall system.

In fact, if you think about an LLM, whether it’s smart or not, or intelligent or not, it’s almost beside the point. The fact is that its input came from billions of humans, and those humans did a lot of thinking behind that. They worked out problems, they wrote them down, they created things. Sometimes they agreed, sometimes they disagreed, and the computer takes all that in.

To the extent that there’s signal, and there’s a lot of agreement among lots of humans, it’s able to amplify that and create some abstractions that characterize that. But when you’re interacting with an LLM, you are interacting with essentially all those humans. You’re interacting with a collective. You are not interacting with a singular intelligence sitting out there in the universe.

You’re interacting with all of humanity—or at least a lot of humanity—and all of the background that those people brought to it. So if you’re interacting with a collective, then you have to ask: is there a benefit to the collective, and what’s my contribution? What’s my role in that overall assemblage of information?

The whole goal shouldn’t just be the libertarian goal of the individual being everything. Somehow, the system should work such that there are overall good outcomes for everyone. It’s kind of obvious.

It’s obvious like traffic. All of us want to get as fast as possible from point A to point B. But a designer of a good traffic and highway system does not just think about the individual and how fast the car will go. They think about the overall flow of the system, because that may slow down some people, but it’ll make everybody ideally get there as fast as possible.

It’s a sum over all the travel times of all the people. Let’s call that social welfare. Designing for that usually takes a huge amount of hard work, and then you empirically test it out and work out some theory of it.
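As a rough formalization of that objective (my notation, not something stated in the episode or the paper): if traveler i experiences travel time t_i under a candidate road design pi, the designer cares about the aggregate rather than any single trip.

```latex
% Social welfare of a traffic design \pi: the (negative) sum of everyone's travel time.
% A good design maximizes W(\pi), even if a few individual trips end up slightly slower.
W(\pi) = -\sum_{i=1}^{n} t_i(\pi), \qquad \pi^{*} = \arg\max_{\pi} W(\pi)
```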

And that’s going to be true of just about any domain. Think of the medical domain. It’s really not just the doctor and a patient and focusing on one relationship. It’s the overall system. Does it bring all the tools to the right place, at the right time? Has it tested things out in the right way? Things that have been learned about one group of people or one person—does that transfer easily to other people?

Any really working system of humans at scale needs someone to sit down and think about the overall flow and flux at a social level. And again, this is not at all novel to say. Economists talk about this.

Yes, what economists do is think about the group and then the overall social welfare. How does the outcome lead to allocations that everyone considers favorable and fair? And then people argue about boundary conditions. Should you make sure there’s a floor or a ceiling, or whatever, and so on? Lots of people talk that language.

Computer scientists, for some reason, seem immune to thinking about economic principles, microeconomic principles, and social welfare. It comes as an afterthought. They build their system, they try it out, it doesn’t work, and they say, “Oh, we screwed up social welfare somehow.”

Then you get people criticizing, other people defending. And it’s like—is this the way to develop a technology? Roll it out, let it mess things up, give life to the critics, and then defend yourself. It’s just a mess right now.

Ross: Yeah, well, particularly given the extraordinary power of these tools. So I think the perspective is useful.

Michael: They’re powerful, and there’s absolutely no denying they’re surprisingly good. I call it brute force and all, but I don’t mean to denigrate it. At that scale, it really is better than one would have thought.

But what’s the business model? They’re powerful—for who? Yes, they sort of empower all of us to do certain things. But in the context of an overall economy, are they actually going to be healthy for everybody?

Are they going to make the rich much, much richer, and put that power in the hands of a few? Definitely those issues are what a lot of people talk about and think about. But Silicon Valley, again, seems immune to worrying about it.

They just say, “This brute force thing is a good machine. Obviously there’ll be some problems, but not big ones. We’ll figure them out as we go.”

That just hasn’t happened in other fields of engineering, to the extent it’s happening now. In chemical engineering, electrical engineering—people thought about the overall systems they were building and whether they’d work or not as they were building them.

Here, there are very few thought leaders and a lot of irresponsible people.

Ross: Which takes me to your recent paper—excellent paper—A Collectivist, Economic Perspective on AI. That seems to crystallize a lot of the thinking, a lot of what we’ve been talking about. There’s quite a lot of depth to the paper, and I wonder if you’re able to distill the essence of that in a few words.

Michael: Sure. Thanks for calling out that paper. I hope people will read it. I worried about the title for quite a while. The word “collectivist,” of course, was a little bit of a thumb in the eye.

In the libertarian tradition in Silicon Valley, “collectivist” has been associated historically with socialism, communism, and so on. But really, it’s a technical word that we should own and imbue with our support, with our technology. It is an economics word.

So I made sure the word “economics” is in there, because to me, that is the critical missing ingredient. There has been a lot of talk about networks and data, and then cognition and so on. Rarely do we hear talk about incentives and social welfare.

The paper also aims not to be just negative. There are a lot of people who make these arguments, who are pained in the same way I am about the way technology is being rolled out, but it stays at the level of critique. I want to turn it into an engineering field.

I want to say: look, what you can do with data and data flows at large scale is make even better markets than we ever had before, and different kinds of markets. Markets arose organically thousands of years ago where people would trade. You had to have some information, but there was always some hidden information.

This is what economics calls information asymmetry. There’s also always a social context to the things you’re doing.

One of the examples I give in the paper is about a duck—or I forget what example I use in the paper, but in my talks I use a duck. A duck is trying to figure out where to get food. There are two choices: one side of the lake or the other side. There’s twice as much grain to be found on one side of the lake than on the other.

The duck has been a statistician over the years and has gotten good estimates of those values. So what should it do the next day?

A Bayesian-optimizing duck would go to the side of the lake where there’s twice as much food. Of course, that gets it the most food. But that’s not what actual ducks do, nor what humans do. They do what’s called probability matching: if there’s twice as much food on one side as the other, then I’ll go to that side twice as often as the other.

That’s viewed as a flaw in ducks and in humans. If you’re in a casino and you do that, it’s kind of dumb. But evolutionarily, it makes total sense.

If we’re not just one person but a collective, and all the ducks go to one side, then there’s a resource not being used on the other side. You could say the goal is to build a collectivist system that tells who should go where. But that’s the Soviet Union—that doesn’t work, that’s top-down.

Instead, you ask: are there algorithms that will actually do a better allocation, that aren’t just everybody for themselves? There’s an algorithm called randomized probability matching: with probability two-thirds I go to one side, with probability one-third I go to the other. If everyone runs that algorithm, they don’t have to coordinate at all. They just go. That will lead to the maximum amount of food being eaten overall. That’s called high social welfare.
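Here is a minimal sketch of that comparison (a toy model with made-up numbers of my own, not code from the paper): twice as much grain appears on one side of the lake as the other each day, each duck can only eat so much, and we measure the total food eaten under the greedy rule versus randomized probability matching.

```python
import random

def simulate(policy, n_ducks=30, days=2000, grain=(60.0, 30.0), appetite=3.0, seed=0):
    """Average daily food eaten by the whole flock -- the 'social welfare'.

    grain    -- food appearing each day on side A and side B (A has twice as much)
    appetite -- the most a single duck can eat in one day
    policy   -- function taking an RNG and returning 0 (side A) or 1 (side B)
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(days):
        counts = [0, 0]
        for _ in range(n_ducks):
            counts[policy(rng)] += 1
        # a side yields at most its grain, and at most what the ducks present can eat
        total += sum(min(g, c * appetite) for g, c in zip(grain, counts))
    return total / days

greedy = lambda rng: 0                                    # every duck heads for the richer side
matching = lambda rng: 0 if rng.random() < 2 / 3 else 1   # probability matching: 2/3 vs 1/3

print("greedy   :", simulate(greedy))     # 60 units/day: side B's grain is never touched
print("matching :", simulate(matching))   # ~84 units/day: most of the 90 units get eaten
```

No duck coordinates with any other; each just randomizes in the right proportions, and the flock ends up much closer to the best possible total than the greedy flock does.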

Now you see that the context of the problem I’m trying to solve—the decision-making problem—involves the collective. If I didn’t have the collective in the context, I would do the wrong thing. In the context of the larger collective, evolution worked that out.

But as engineers, we’re trying to build these new systems, and we don’t have time to wait for evolution. We have to build the system such that the collective is taken into account in the design.

I go through examples like that where uncertainty is shaped by the collective, and then the collective helps reduce uncertainty. Because, again, I can ask people when I don’t know things, and LLMs reduce uncertainty. That’s kind of what they’re doing. Part of their collective property is that they help the collective reduce its total uncertainty. So that’s one side of economics: how do you mitigate uncertainty, and how do you think about the social context of your decisions?

And the other, probably even more important, side is incentives and information asymmetry.

If I come into a market, I don’t know a lot of things. Why am I still incentivized to come in, especially if I know there are adversaries in this market? Well, I’m incentivized because I know enough, and I can probe and I can test, and there are mechanisms I can use to still get value.

We’ve learned how to do that, and our systems should embody that way of thinking. So that’s information asymmetry.

So there are two kinds of uncertainty that, as engineering-oriented people, I think we have to be focusing on—and machine learning has been kind of remiss in thinking about them.

One is just statistics and error bars. We see that in our LLMs: there’s very little concern about error bars around answers, about uncertainty. It’s ad hoc. The LLM might say, “Well, I’m not very sure.” Or, actually, it tends to be oversure: “I’m very sure.” Then it changes its mind in the conversation completely.

Humans are much, much better at saying when I’m sure and when I’m not, and that’s kind of statistical uncertainty: I haven’t got enough data, I need more, and as soon as I get more data, my confidence goes up. Most machine learning people are aware of that, but it’s not very actionable in the field: just get more data and the problem will go away. That’s not true in many domains—most domains.
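The textbook version of that statistical side (standard notation, not something from the episode): an error bar around an estimate shrinks only with the square root of the sample size, which is the “more data, more confidence” pattern, and which also hints at why “just get more data” runs out of steam.

```latex
% A 95% confidence interval around an estimated mean: the half-width
% shrinks like 1/\sqrt{n}, so quadrupling the data only halves the error bar.
\hat{\mu} \pm 1.96 \, \frac{\hat{\sigma}}{\sqrt{n}}
```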

But the other kind of uncertainty is information asymmetry. If you and I are interacting in a market setting, you’re trying to get me to do something, there will probably be a payment involved. You’re going to offer me some price for my labor. What price you offer depends on how good I am.

Well, I know that. So I’m going to pretend to be better than I am—or maybe the opposite way, pretend to be less good than I am, so I can loaf on the job and still make as much money.

All of these things I know that you don’t know—you would love to know them, and then you could design an optimal policy, which in this case would be a price. But you don’t know them.

So what are you reduced to doing? You’re reduced to making some modeling assumptions. Or you can do what economists call contract theory: you give me a list of options, and each option has different features associated with it and a price.

If I go to the airline and I want to get on an airplane, there’s not going to be just one price. There’s business class and economy and so on. Everybody gets the same list, but everyone doesn’t make the same choices because they have a different context. The airlines don’t know that context, but the people do.

That’s a different mindset in designing a system: you can’t just dictate everything, you can’t know everything. You have to build in options that are smarter—options that lead to actual good social welfare.
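A minimal sketch of that menu idea (the option names, prices, and traveler types are illustrative numbers of mine, not from the episode): the airline posts one list for everybody, each traveler privately knows how much they value comfort, and self-selection does the work that the airline’s missing information cannot.

```python
# One public menu everyone sees: (option, comfort level, price).
menu = [("economy", 1.0, 200.0), ("premium", 1.5, 350.0), ("business", 3.0, 900.0)]

def choose(value_per_unit_comfort):
    """A traveler's private type: how much they value comfort (hidden from the airline).
    They pick the option maximizing their own surplus = value * comfort - price."""
    return max(menu, key=lambda opt: value_per_unit_comfort * opt[1] - opt[2])

# Identical menu, different private contexts, different choices -- no one reveals their type.
for traveler, value in [("student", 250.0), ("consultant", 330.0), ("executive", 450.0)]:
    name, comfort, price = choose(value)
    print(f"{traveler:10s} -> {name:8s} (surplus {value * comfort - price:.0f})")
```

The airline never observes anyone’s type; it only designs the menu so that each type’s own best choice also leads to a reasonable outcome for the overall system.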

I just don’t think Silicon Valley gets that. I think they think the goal is this superintelligence that somehow knows everything, and we’ll just go to the superintelligence and it’ll tell us the answers.

Just because of information asymmetry, not true. There’ll be lots of lying going on—by the computer, but also by the humans involved in the system. Because lying is not a bad thing. It’s how you interact when there’s uncertainty and information asymmetry.

Ross: One of the things that comes out from what you’re saying is the overlap between decision making—where I’d like to get to in a minute—and that economic structure, which is emergent from decisions.

But just coming back to the paper, you refer to this missing middle kingdom which, crudely, could be described as what’s missing between engineering and the humanities. So how is it that we can fill that? What is that middle kingdom, and how can we fill that so that we do have that bridge between engineering—the main tools we’re creating—and the humanities, in understanding us as a collective group of humans?

Michael: That point in the paper was really somewhat narrowly construed. It was for academics. Anyone who’s been in a university has seen this wave: first it was called data science, or big data, then machine learning, then AI, and so on.

As this wave has hit, there have been initiatives to bring people together on campus. It’s not enough to just have engineers building systems with data. You’ve got to have commentary on that, or critique of that.

There’s a side of campus that loves to comment and critique—and that’s often humanities. Historians will weigh in on previous waves of technology, ethicists will weigh in, sociologists will weigh in.

The language gap is so huge that it just turns into bickering. The computer scientist will say, “Well, our system works. That’s all I care about. You get bits across the stream. I can’t think of anything else.”

The ethicist will say, “We have consequences, and the consequences are this, and blah, blah, blah.” But there are no solutions proposed across that gap. Both are right at some level, but the overall consequence is no progress. There’s no dialog.

I’ve seen many institutes created at many universities—I won’t name them—but it’s basically a computer scientist next to philosophers, and they call it an institute. They talk and “solve” problems. Or you add a few classes in AI and ethics to a computer science curriculum, or a couple of programming classes to a philosophy curriculum.

The naïveté of that is breathtaking.

There are others on campus—and hopefully more of them emerging—that sit more in the middle. Economists, for example, are in the middle. They can talk the technical language, they can think about systems, but they also do it as a social science. Many are behavioral economists, actually studying social systems, so they are really a bridge.

But they’re not the only bridge. Statisticians are also a bridge. They want real data, they want to test things, they want to find causes. Many work with social networks, social systems, and scientific problems.

I could go on. There are large numbers of people in academia, and in the intellectual sphere more generally, who can talk the technical language and the social language. And the social gets into the legal and ethical.

Really, there should be a big collaboration of all these things. If the only “middle” is humanities on one side and engineers on the other, that’s naïve. Unfortunately, that’s what many institutions do. They create institutes where philosophy meets computer science, and think it’s done. Usually it’s physicists creating these things, and it’s just a mess.

Part of the problem is dialog. A journalist will write about some new tech development and explain how exciting and breathtaking it is. Then they bring in an ethicist who says, “Yes, but the consequences will be terrible.”

We’re so awash in this.

Ross: Clearly, you think far more at the systems level than at the granularity of how academic institutions are structured. But I’d like to turn to decision making.

It’s a massive topic. Some of your work has shown that you can fully delegate to algorithmic systems decisions that can be made safely within particular domains.

But what I’m most interested in is complex, uncertain decisions—around strategy, life choices, building systems, better frames.

There are a number of aspects that come together here. You’ve already discussed some of them—uncertainty in decision making, information asymmetry. But if we just think from a humans-plus-AI perspective: we’ve got humans with intelligence, perspective, understanding. We have AI with a great deal of confidence.

How can we best combine humans and AI in complex, uncertain decisions?

Michael: That’s the million-dollar—or billion-dollar—question. That’s what I think we should all be working on. I don’t have the answer to it. I believe we’re being extremely naïve about how we approach it.

You just gave a good problem statement. When faced with grand problems like that, I typically go into a more concrete vertical. I’ll think about transportation or health care, and I’ll try to write down: who are the participants? What are the players? What are the stakes? What are the resources?

Now, what’s different from just a classical economist or operations research person of the past? Well, again, there’s this huge data source flowing around.

It’s not that now everyone knows everything, and it’s not that you should pull it all into a superintelligence that becomes the source of all knowledge. Rather, you should think about that as you’re thinking about how the system is going to work.

Search engines already did this. They made us capable of knowing things more quickly than we otherwise would have. That changed things.

I think what will probably happen in the first wave—beyond just systems design—is almost an anthropology of this. We already see LLMs in all kinds of environments, like companies, being used in certain ways. There’ll be best practices that emerge.

Meta-systems will arise that don’t just give everybody an LLM. They’ll structure interactions in certain ways. That structure will involve meeting certain human needs that are not being met.

I don’t think it’s going to be academics or mathematics dictating or telling us the story. First, there will be lots of use cases. That’s true of other engineering fields I’ve alluded to, like chemical or electrical engineering.

You had a basic phenomenon—electricity could be moved from here to there, motors could be built, basic chemicals created. Then people would try it out, and they would say, oh, that approach didn’t work. And they would reorganize. There had to be auditors, checkers, specialists in aspects of the problem.

There’ll be brokers emerging. In fact, I don’t see many of us necessarily interacting with LLMs very directly. Take the medical domain: instead, there’ll be brokers whose job is to bring together different kinds of expertise. I bring in a problem, they assemble the appropriate expertise in that moment.

They themselves could be computing-oriented, but probably not purely. It’ll be a blend of human and machine. I’m not going to trust just a computing system—I’ll want a human in the loop for various reasons.

So there’ll be a whole network of brokers emerging. Mathematics won’t tell us how to build that, but it will support us in thinking, “Oh, here’s a missing ingredient. We didn’t take into account information asymmetry, or a certain kind of statistical uncertainty, or causal issues.”

Then people using systems will say, “Oh yeah, let’s do that,” and they’ll try things out. That’s how humans make progress: people become aware of what they could do, and aware of what’s missing. Best practices start to emerge.

I think it’ll be pretty far from where we are right now. The search engine–oriented human-LLM interaction, scaled up to superintelligence—that doesn’t feel right. It’ll be much richer.

Ross: So like you, I think of it in terms of some of the interfaces. What are the interfaces, and how do we present AI inputs in terms of, as you mentioned, degrees of certainty and a whole array of other factors—visualizations to provide effective input to humans?

But just to come back to that phrase of the broker—and whether that aligns with what I’m describing here—what specifically is the nature of that broker in being able to bring together the humans and AIs for complementary decision making?

Michael: Yeah. In my paper, I have another set of examples of different kinds of markets. I try to make them very concrete so that people will resonate with them.

One of them is the music market. You have people who make music, and you have people who listen to music. But you also have brands and other entities that use music in various ways as part of their business model.

For example, the National Basketball Association has music behind its clips. What music? Well, you don’t just randomly pick a song. There’s someone who helps pick the song. Sometimes it’s a recommendation system that uses data from the past to pick it. But it’s also a human making judgments.

You connect all this up. Certain listeners like certain kinds of music—that’s a classical recommendation system. Musicians see that, and they make different kinds of music. But now, especially with brands in the mix, they have money, and they’re willing to pay for things.

So now incentives come into play. Am I incentivized to write a certain kind of song because a brand will be interested in me? Maybe I will. And if a brand notices that a certain demographic listens to a certain artist, they may want to pair with that artist.

All of that is not just made up by sitting down and looking at an Excel spreadsheet. It’s a big system. It has past data, it has to be adaptive, and it has to take into account asymmetries—people gaming the system. It’s a very interesting kind of system.

Plus, you’ll analyze the content itself. The music will be analyzed by the computer, helping to make good decisions.

Ross: So currently, AI is an economic facilitator.

Michael: AI is that economic facilitator. It helps create a market and make that market more effective, more efficient, more desirable.

It doesn’t try to just replace the musician with an AI making music. Rather, it thinks about what kind of overall system we’re trying to build here. What do people really want?

Well, people want to make music. And some people really want to listen to music that is obviously made by humans. That difference, that gap, will continue to be there.

Some brands want to ally themselves with actual humans, not robotic entities—not with Elon Musk. The point is to support those kinds of multi-way markets with technology.

You could have talked about that in economics years ago: “I have three kinds of entities, here are my preferences and utilities.” But it wouldn’t have been operable in the real world.

Now, with all the data flowing around, you can have all these connections be viable. You can think about it as a system.

So in some ways, this is not a unique perspective, not all that new. But it really helps. I’m just trying to get people to reorient.

And I keep mentioning Silicon Valley because I can’t believe more of them are not understanding a path that has more of an economic side to it. Instead, they’re just competing on these very, very large-scale things where the business model is unclear. That boggles my mind.

Ross: So to round out, I believe in the potential of humans plus AI. What do you think we should be doing? What are the potentials? What is it right now that can lead us towards humans plus AI as complements—humans first and AI to be able to amplify? 

Michael: I guess I’m more of an optimist. I don’t think humans will tolerate being told what to do by computers, or having the computers take over things that really matter.

They’ll be happier when computers take over things that don’t really matter, or things they don’t want to do. I do think humans will keep in the driver’s seat for quite a while.

I am very concerned, though, about the asymmetry of a few people having not just money, but immense power—and all the data flowing to them. The incentives can get way out of whack, and it would take a long time to undo some of that.

Like with the robber barons 100 years ago—there was some good in it, but then it became bad and had to be unwound. I hope we don’t have to get too much to the unwinding stage, but I think we are headed there.

On the other hand, you do see evidence of entities collecting data and using it in various ways, telling Google, “We will not just give you this data, you have to pay for it.” And Google saying, “Yeah, okay, we’ll pay.”

I do think there are some enlightened people who agree that’s a better model. The words “pay” and “markets”—it’s funny. The engineers and computer scientists I know never use those words. But then the humanities people get outraged when you talk about markets and payments.

That’s not human? Of course it is. It’s deeply human to value things and to make clear what your values are.

So I think there will be some good. Right now we just see kind of a mess. But I think that will change as actual humans start using these systems, really start to care about the outcomes, and payments start being used effectively.

These experiments are being run all around the world. It’s not just one country doing it. I don’t think the idea that China or the US is going to take this technology and dominate the world is right. That’s another dumb way to think.

Rather, these experiments will be done worldwide. Different cultures will come up with different ways of using it. Favorable best practices will emerge. People will say, “Look at how they’re doing it, that’s much better,” and those things will flow.

So overall, I’m more optimistic than not. But it is a very weird time to be living in.

Ross: More or less, the right things will head in the right direction. So—

Michael: I can’t tell you where humans should go. I just know that, for example, when the search engine came in, early in my career, it was great. And I think, for most of us, it just made life better.

That was an example of technologies expanding our knowledge base, and then people did what they did with it. The designers of the search engine kind of knew it would help people find stuff, but they couldn’t anticipate all the ways it got used.

Another part of technology—more like 100 years ago—was music. The fact that you could have recorded music and everyone could listen to it by themselves changed a lot of people’s lives for the better.

I don’t think the people who wrote down Maxwell’s equations—Maxwell himself, writing down the equations of electromagnetism—were necessarily aware that this would be a consequence. But humans got the best out of that in some ways. And then there were side effects.

Same thing here. I think humans will aim to get the best out of this technology. The technology won’t dictate.

Humans are damn smart. This “superintelligence” word really bothers me, especially because I think it diminishes how smart humans really are. We can deal with massive uncertainty. We can deal with social context. Our level of experience and creative depth comes through in our creations in ways these computers don’t.

They’re doing brute force prediction sorts of things. Sure, they can write a book, a screenplay, whatever—but it won’t be that good.

I do think humans will be empowered by the tool and get even more interesting. The computers will try to mimic that, but it’s not going to be a reversal.

Ross: Yeah, absolutely agree. Thank you so much for your time and your insight, and also your very strong and distinctive voice, which I think most people should be listening to.

Michael: I appreciate that. Thank you.

 
