Humans + AI

Matt Beane on the 3 Cs of skill development, AI augmentation design templates, inverted apprenticeships, and AI for skill enhancement (AC Ep12)

July 30, 2025

“The primary source of our reliable ability to produce results under pressure—i.e., skill—is attempting to solve complicated problems with an expert nearby.”

–Matt Beane

About Matt Beane

Matt Beane is an Assistant Professor at the University of California, Santa Barbara, and a Digital Fellow with Stanford's Digital Economy Lab and MIT's Initiative on the Digital Economy. He is a founder of the Internet of Things startup Humatics, and author of the highly influential book The Skill Code: How to Save Human Ability in an Age of Intelligent Machines.

Website:

mattbeane.com

LinkedIn Profile:

Matt Beane

University Profile:

Matt Beane

Book:

The Skill Code

What you will learn
  • Redefining skill development in the age of AI
  • Why training alone doesn’t build true expertise
  • The three Cs of optimal learning: challenge, complexity, connection
  • How AI disrupts traditional apprenticeship models
  • Inverted apprenticeships and bi-directional learning
  • Designing workflows that upskill while delivering results
  • The hidden cost of ignoring junior talent development
Transcript

Ross Dawson: Matt, it is awesome to have you on the show.

Matt Beane: I’m delighted to be here. Really glad that you reached out.

Ross: So you are the author of The Skill Code. This builds on, I think, research for well over a decade. It came out over a year ago, and now this is very much of the moment, as people are saying all over the place that entry-level jobs are disappearing, and we’re talking about inverted pyramids and so on. So, what is The Skill Code?

Matt: Right. The first third of the book is devoted to the working conditions that humans need in order to build skill optimally.

The myth that is supported by billions of dollars of misdirected investment is that skill comes out of training. And we just have a mountain of evidence that that's not so. It can help; it can also hurt. But the primary source of our reliable ability to produce results under pressure—i.e., skill—is attempting to solve complicated problems with an expert nearby.

Basically, we can learn, of course, without these conditions—they're sort of idealized conditions—but with them, it can be great. And the first third of the book is devoted to: what does it take for it to be great?

I got there sort of backwards by studying how people were trying to learn in the midst of trying to deal with new and intelligent technologies at work—and mostly failing. But a few succeeded. And so I just looked at those success cases and saw what they had in common across many industries and so on.

So, I break that out in the beginning of the book into three Cs—thankfully, in English, this broke out that way: Challenge, Complexity, and Connection. And those roughly equate—well, pretty precisely, actually, I should own the value of the book—they equate to four chunks of characteristics of the work that you're embedded in that need to be in place in order for you to learn.

Challenge basically is: are you working close to, but not at, the edge of your capacity?

And complexity is: in addition to focusing on getting good at a thing that you’re trying to improve at, are you also sort of looking left and looking right in your environment to digest the full system you’re embedded in? That’s complexity.

And connection is building warm bonds of trust and respect between human beings. All three of those things—I could go into each—but basically, in concert, in no particular sequence—each workplace, each situation is different—but these are the base ingredients.

I used a DNA metaphor in the book. These are sort of the basic alphabet of what it takes to build skill, and your particular process or approach or situation is going to vary in terms of how those show up.

Ross: So, for getting to solutions or prescriptions, I mean, it’s probably worth laying out the problem.

AI and various technologies are enabling those who are entering the workforce—or entering particular careers—to readily do what they do. And essentially, a lot of the classic apprenticeship-style model has been that you learn by making mistakes and, as you say, alongside the masters.

And if people, if organizations, are saying, “Well, we no longer need so many entry-level people to do the dirty, dull work,” then we don’t have this pathway for people to develop those skills in the way you described.

Matt: Yes, and it’s even worse than that.

So, for those that remain—because, of course, organizations are going to hire some junior people—the problems that I document in my research, starting in 2012… Robotic surgery was one early example, but I’ve since moved on to investment banking and bomb disposal—I mean, very diverse examples.

When you introduce a new form of intelligent automation into the work, the primary way that you extract gains from that is that the expert in the work takes that tool and uses it to solve more of the problem per unit time, independently.

That word independently—I saw in stark relief in the operating room. When I saw traditional surgery—I watched many of these—there’s basically two people, shoulder to shoulder, four hands inside of a body, working together to get a job done. And that’s very intensive for that junior person, the medical resident in that case, and they’re learning a lot.

By contrast, in robotic surgery, there are two control consoles for this one robot that is attached through keyhole incisions into the patient. One person can control that robot and do the entire procedure themselves. And so, it is strictly optional then for that senior surgeon to decide that it’s time to give the controls to the junior person.

And when’s the right time to do that, given that that junior person will be slower and make more mistakes? This is true in law, in online education, in high finance, professional services—you name it. The answer is: never.

It is never a good time. Your CFO will be happy with you for not turning the controls over to the junior practitioner. And you yourself, as an expert, are going to be delighted.

People these days, using LLMs to solve coding problems, report lots more dopamine because they can finally get rid of all this grunt work and get to the interesting bits. And that's marvelous for them. It's marvelous for the organization—even if the ROI is a little uncertain.

But the primary, the net, nasty effect of that is that the novice—the junior person trying to learn—is no longer involved in the action. Because why would you?

And that breaks the primary ladder to skill for that person. And so, that, I think, is happening at great scale across…

Let's put it this way: the evidence I have in hand indicates to me that there will be very rare exceptions to the rule that junior people will be cut out of the action. Even when they're hired and in the organization and are supposed to be involved, they will just be less involved—because they're less necessary to support the work.

So even if you get a job as a junior person, you’re not necessarily guaranteed to be learning a dang thing. It’ll be harder these days by default.

Some interesting exceptions exist—and those are what I focus on in the book. But I've done some arithmetic around this—it's all estimation, of course—and I published a piece in The Wall Street Journal on it about eight months ago.

This is a trillion-dollar problem for the economy, in my view.

Ross: Obviously, this is not destiny. These are challenges which we can understand, acknowledge, and address.

So, let's say—obviously, part of it is the attitudes of the senior people and how they frame things. A lot of it can be organizational structures and how work is allocated. There's a whole array of different things that can be done to, at the very least, mitigate the problem—or, I think, as you lay out in your book, move to an even better state for the ability to learn and grow and develop in conjunction with AI, not just using learning tools.

But why don’t we go straight to Nirvana? Or what an ideal organization might do. What are some of the things they might do to be able to give these pathways where people can contribute and add value immediately, as well as rapidly grow and develop their capabilities?

Matt: Right. So, I'll give you a couple of examples—one of which was in the book, and one of which is new since the book's publication.

So, the one that’s in the book—and that has always occurred, I think, and is more intensely available now and is a real cool and valuable opportunity for organizations—is what I called inverted apprenticeships.

This comes out of a study that I did with a colleague at NYU named Callen Anthony, where we contrasted our surgical and high-finance data. We both have sort of "who said what to whom every five seconds" kind of transcript data on thousands of hours of work in both contexts.

What was very clear, as we looked across our data, is that it’s not common for this to go well—but it can go well—for senior people to learn about new tech from junior people.

The “ha ha” example at a cocktail party is the CEO learning about TikTok from their executive assistant. But in the real world, senior software developers are definitely learning about how to use AI to amplify their productivity from junior people.

Organizations now are talking out of both sides of their mouth. On the one hand, you have people saying, "Well, we're only going to hire senior people." At the same time: "You have to be AI-native as a junior person. That's what we're looking for, and that's a prized skill."

Whether they know that that’s what they’re after or not, what they’re setting up when those people arrive is this relationship where the junior person hangs out and works with—and gets to teach, so to speak, or show by example—the senior person how to use AI.

The senior person, sort of as the price of entry for that working relationship, gives that junior person more access to their work and complex problem solving.

The paper itself is worth reading. The section in the book is worth reading because there are lots of ways to do this that are quite exploitative with respect to that junior person—sort of, they have to pay double. But there are ways of doing it where it's a win-win for both people.

That mode of simultaneous bi-directional learning is going to be really important if you want to adapt as an organization, just on a hyper-local level. So, that’s example one.

The other example—for the last four months now, I've been running a new study with five doctoral students here at the University of California, Santa Barbara. It's an interview-based study of the use of generative AI in software development across over 80 organizations.

One of the things that has emerged as a working pattern there, that I think is really intriguing and potentially a great example to think with—a sort of design template for how to set work up in a way that seizes the gains while also involving junior people and building your bench strength—is that:

In some cases, anyway, senior software engineers, rather than writing code, will get, say, four to five junior engineers together and give them all impossible tasks—hugely complicated work with very limited time.

They will all try their… and by the way, obviously, the only way you could attempt this is to use AI—just cheat as aggressively as possible—and then submit your code. You’re talking three weeks of work in two hours, or eight hours, or something like that.

Under that kind of pressure, junior people's neuroplasticity and willingness to throw themselves into the breach are the hugest asset.

Everyone involved knows that what they submit may work, and it will be terrible. But it will be terrible in subtle ways.

Then that senior person spends some time with each of those junior people to do a code review or some pair programming, to say, “Right, here are the three or four areas. I’m not going to tell you what the problems are—where there’s problems—go have a go at figuring out what they are and fixing them.”

Or maybe: “I’ll just tell you what they are, and do you see why those are problems?”

Basically, we’re just focusing on the parts of what you built that are problematic—that you might not quite get yet. But 80% of what you built is fit for duty, and I got it 90% faster than I would have otherwise.

That senior person then is sort of a filter feeder. They process and review code a lot more than they actually write it these days.

But the unit total factor productivity for that group is an order of magnitude higher than it used to be. So, that’s become the sort of template—or the sort of fractal example—that I think…

Treating this hallucination and inconsistency and output problem as a feature, not a bug, and designing your organization to take advantage of that—I could easily see that kind of example scaling into professional services, into law, into medicine.

I mean, where failure in process is acceptable—it’s the output that needs to be high quality—it just seems like savvy organizations are going to be making design choices like that left and right.

Ross: That’s fantastic. So, where did that come from? Is that something which you created and then shared with these organizations? Or did you see this in the wild?

Matt: This is from this interview study. We have a globally representative sample of firms, and all we’re doing is asking them, “What are you doing with Gen AI in software development?” And then they talk for an hour, basically. We have a bunch of specific questions.

So no, we’re not priming anything, we’re not suggesting anything, we’re not sharing information in between them. And this is showing up independently across a number of organizations.

So anyway, there are lots of other cool things popping up. But these organizations aren't in touch with one another—they don't seem to be; they aren't saying that they got this trick off of Reddit or from some influencer on Twitter. The fact that some subset of them have invented it locally, independently, is a pretty strong indicator that it's at least representative of a new potential direction.

Ross: So, this is work yet to be published?

Matt: Correct.

Ross: When? When will it be out?

Matt: That doesn’t operate on AI time. That’s on academic time. If we get enough findings together that I believe will meet the high A+ academic journal standard that I’m used to—which is not obvious, but I think we have a good shot—we’ll submit it for publication sometime in the fall.

Then it’ll probably be two years before the findings come out. You can post a working paper right away, and so as soon as we can do that, we will.

Ross: Awesome. Yeah, because this is the Humans Plus AI podcast. And really, the core of what I think about is humans plus AI workflow.

What is the flow of work between humans and AI? What are their respective roles? How does it move? What are the parallel and linear structures, and so on?

And what you’ve described is a wonderful, pretty clear humans plus AI workflow which is replicable. It can work in different contexts, as you say, across different domains. And these archetypes—if we can uncover these archetypes at work—then that is extraordinary value.

Matt: I think so, yeah. And what’s important is that, I think for them to be valid, they have to show up independently in very different contexts.

Then you’ve got your hands on—potentially, anyway—something that is suited to the new environment. There are many, many cases in which these best practices get trotted out, and they’ve been started by one organization and then shared across.

You can see a clear lineage, and then you have real questions about what, in academic speak, is endogeneity. In other words, it might be that this new best practice is not actually useful. It’s just that people are persuasive about it, and it travels fast because people are desperate for solutions.

So, we have to be very careful about grabbing best practices and labeling them as such.

Ross: You mentioned investment banking as a domain you’ve been exploring. And I think—I look a lot around professional services—and I think professional services are not just your classic accounting and law, and so on.

I mean, arguably, many industries are professional services—healthcare is professional services. And if you look inside a consumer products company, they are professionals. There are a lot of archetypes of structures there.

So I'm very interested in what you have seen work in that context—what has been effective in developing the capabilities of junior staff.

Matt: Right, yeah. And I have less data there, but I’m always on the hunt for patterns in work that—when you look at them—you think, “I would need some evidence to conclude that that is not valuable or showing up somewhere else.”

In other words, it seems quite portable and generalizable. It’s not bound to the content of the work or some legal barriers or structures around the occupation or profession.

There are some places where that really is true. But as long as it seems like you could do the same thing in any knowledge work profession, then I agree with you. I think those are really important tactics.

And I don't think anybody really has this—aside from what I offer in the book, which was my best offering then, and I still feel very good about now: for whatever the new or imagined workflow is, I offer a ten-point checklist for each of those three Cs in each of their chapters.

It’s about how you would know—very specifically and measurably—whether work was skill-enhancing or skill-degrading the more you did it over time.

Anyone, anywhere, I think, can take a look at any new way of doing the work that involves AI and interrogate it from that lens. So, in addition to a productivity lens—which is obviously critical—you can also say, “Is this likely to enhance skill development or not, if we do it this new way?”

And you can. It takes work, but I think that’s quite necessary.

Ross: So, looking at your three elements of challenge, complexity, and connection—AI used well could assist on a number of those.

Perhaps for me, most obviously in connection, where we have a lot of great studies in collaborative intelligence, where AI is playing a role in being able to nudge interactions that support collective intelligence. Again, we could have AI involved in interactions and able to say, “Well, here’s an opportunity to connect in a particular way to a particular person in a particular context,” for example.

Or it could be able to say, “You’re working on this particular challenge. Let’s give some context to this,” and so on. So either hypothetically or in practice—where are ways you’ve seen AI being able to amplify the challenge, complexity, or connection of skill development?

Matt: I have a Substack called Wild World of Work. It's at wildworldofwork.org, and one of the first posts I wrote there—it's over a year ago now—is called Don't Let AI Dumb You Down.

In that piece, I talk about how default use of GenAI—of ChatGPT—is, just as with all these other forms of intelligent automation I've studied, likely to deprive you of skill over time.

I’ll just start with connection. One of the reasons for that is that you don’t leave your screen. You get your answer, and it might even be good, and you might even learn some new information—so it’s not just passive, like “do my homework for me” kind of interaction.

But what you won’t notice is missing—and definitely is—is another human being. And ChatGPT is currently not configured—it’s not post-trained, technically—to do anything about that, to attend to that, or to have your welfare with respect to your skills in its consideration set at all.

You can make it do that, though. This is the amazing thing. Even what I suggested in that article back then is still true today. You can go into the custom settings for ChatGPT—and all these models have this now—and you can tell it how to interact with you, basically.

What I have in my custom settings in ChatGPT are specific sets of instructions around: basically, annoy me to some degree so that I need to do things for myself. Keep me challenged. Expose me to complexity—other things going on related to this work—and, as you just said, push me towards other human beings and building bonds of trust and respect with them.

Because otherwise, I’ll just rely on you. And that is what ChatGPT does to me every single time now.

Do I heed its advice all the time? No, of course not. But I have definitely learned a lot of things and met new people that I wouldn’t have if I hadn’t done that. It’s certainly not perfect. And it’s gotten better, but still.
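To make this concrete, here is a minimal sketch of what such skill-preserving instructions might look like when supplied as a system prompt through the OpenAI Python SDK. The wording is illustrative only (these are not Matt's actual settings), and the model name is a placeholder.

```python
# A minimal sketch of skill-preserving custom instructions, supplied as a
# system prompt via the OpenAI Python SDK. The instruction wording is
# illustrative, not Matt Beane's actual settings; the model name is a
# placeholder.
from openai import OpenAI

THREE_CS_INSTRUCTIONS = """
Before giving me a complete answer:
1. Challenge: withhold one step of the solution and ask me to attempt it
   myself before you reveal the rest, so I stay near the edge of my capacity.
2. Complexity: briefly note how this task connects to the wider system or
   domain it sits in, not just the immediate question.
3. Connection: where relevant, suggest a kind of person or community I could
   discuss this with, rather than acting as my only source.
"""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(question: str) -> str:
    """Send a question with the three-Cs system prompt attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": THREE_CS_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Write a SQL query that joins orders to customers."))
```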

And by the way, it should not be incumbent on the user, in my opinion, to go fix these things for themselves. That’s like asking cigarette smokers to install their own filters or something. You could, in principle, do that, but…

I think—put it this way, positively—there’s a huge market opportunity for these model providers. For any one of them to hold up their hand and say, “We have configured our system such that just by using it, you’re going to have more skills at the end of next week than this week. And you can have your results too.”

None of them have done that. Isn’t that interesting?

I’m trying to embarrass them into doing it, basically, because I think people have a strong and growing intuition that they’re trading something away in exchange for just getting their answer from this magical tech.

A few people aren’t. A few people are both getting their answer and pushing themselves farther than they ever could have before. That’s magical territory, and we need to understand it.

Anyway, I think once the word gets out that this trade-off is going on, then people are gonna start to insist. And I hope we can get some model company to lead in that regard.

Ross: Fantastic. In your book, you refer to centaurs—essentially bringing humans and AI together. Obviously, people use different terminology around that.

But where do you see the potential now for these human-AI integrations?

Matt: Yep. I have not yet seen this implemented, but the idea I’m just about to describe could have been implemented a year ago—very clearly. Technically, it was possible then; it’s even better now.

Let's just say I'm a worker at Procter & Gamble, and I work in the marketing function. My agent could be eating all of my emails, all my calendar appointments, and all the documents I produce. It could be looking at my projects and looking for opportunities that might offer a useful sort of on-ramp for a certain skill area that I'm interested in.

That agent could then also be conferring with other agents of project managers throughout the corporation to see if there’s a good match.

We’ve seen this “chain of thought” in models before. Just imagine two models meshing their chains of thought. Lots of back-and-forth, like:

“Hey, Matt Beane’s looking to develop this kind of skill, and it looks like you’ve got a project over there.”

This agent over there is more plugged into that context. They spend some time—they can do this at the speed of light—but there’s a quick burning of tokens to assess the utility of that match from my point of view and from that project’s point of view.

Then you get much finer-grained, higher-quality matches of resources, human resources—to projects. The project wins. I win because I get a skill development opportunity. And those agents do most of the legwork to make that match.
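As a rough illustration of this matchmaking pattern, here is a toy sketch in Python. The profile fields, project names, and the overlap-based scoring rule are all invented stand-ins; in a real system, LLM-backed agents conferring in natural language over much richer context would replace the simple set arithmetic.

```python
# A toy sketch of agent-to-agent talent matching. Profile fields, project
# names, and the overlap-based scoring are invented stand-ins; real agents
# would be LLM-backed and confer in natural language over richer context.
from dataclasses import dataclass

@dataclass
class WorkerAgent:
    name: str
    current_skills: set[str]
    growth_goals: set[str]  # skills the worker wants to build

@dataclass
class ProjectAgent:
    project: str
    required_skills: set[str]
    stretch_skills: set[str]  # skills a person could develop on this project

def confer(worker: WorkerAgent, proj: ProjectAgent) -> float:
    """Score a match from both points of view, 0.0 to 1.0."""
    # The project's view: can the worker contribute now?
    contribute = len(worker.current_skills & proj.required_skills) / max(
        len(proj.required_skills), 1)
    # The worker's view: does the project offer a skill-development on-ramp?
    grow = len(worker.growth_goals & proj.stretch_skills) / max(
        len(worker.growth_goals), 1)
    # Weight both sides so the match serves results and development together.
    return 0.5 * contribute + 0.5 * grow

matt = WorkerAgent("Matt", {"python", "sql"}, {"agent orchestration"})
projects = [
    ProjectAgent("marketing-dashboard", {"sql", "viz"}, {"viz"}),
    ProjectAgent("agent-pilot", {"python"}, {"agent orchestration"}),
]
best = max(projects, key=lambda p: confer(matt, p))
print(f"Best match for {matt.name}: {best.project}")  # -> agent-pilot
```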

You could likewise imagine that with a performance review. So if you’re my manager and I’m your employee, our agents are conferring regularly, constantly about my work, your availability, and so on.

Your agent might pop back to you and say, “Hey, I’ve been talking to Matt’s agent, and it looks like now’s a pretty good time for you two to have a quick performance-oriented conversation about his project—because he’s done really well on these three things and is struggling on these and could use your guidance.”

Then we get these regular—but AI-driven and scheduled—performance review conversations. Both those agents could help us prep for those conversations.

“Here’s a suggested conversation for you two.”

When it comes time for performance reviews—the formal one—we’ve already had a bunch of those. But they aren’t just some arbitrary every-two-week check-in kind of thing. Each is driven by a real, actual, evident challenge or opportunity or strength in my work.
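A similarly hand-wavy sketch of the review-nudge idea follows. The signal format and the trigger rule are invented for illustration, standing in for agents that would actually read email, calendars, and project documents.

```python
# A toy sketch of the review-nudge idea. The signal format and the trigger
# rule are invented; in practice both agents would be LLM-backed and would
# read email, calendar, and project documents.
from dataclasses import dataclass

@dataclass
class WorkSignal:
    task: str
    outcome: str  # "strong" or "struggling"

def propose_conversation(signals: list[WorkSignal]) -> str | None:
    """Suggest a check-in only when the evidence justifies one."""
    strong = [s.task for s in signals if s.outcome == "strong"]
    struggling = [s.task for s in signals if s.outcome == "struggling"]
    # No arbitrary every-two-weeks cadence: nudge only on real evidence.
    if len(strong) >= 2 and struggling:
        return (f"Suggested conversation: recognize {', '.join(strong)}; "
                f"offer guidance on {', '.join(struggling)}.")
    return None

signals = [
    WorkSignal("campaign analysis", "strong"),
    WorkSignal("stakeholder deck", "strong"),
    WorkSignal("budget forecast", "struggling"),
]
agenda = propose_conversation(signals)
if agenda:
    print(agenda)  # the manager's agent surfaces this to the manager
```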

So anyway, I think those are just two kind of hand-wavy examples that I think are implementable now.

Increasingly autonomous AI systems that can call tools, have access to memory, and confer with one another can solve this sort of talent mobility problem within firms—making matches so that I build my skill and we get results and performance optimization.

Any firm would be… I mean, that's almost low-hanging fruit. For somebody who has no technical expertise, it's simple to set up—you can just build an internal GPT that does those things. There's a little bit more required, but anyway…

There is a universe of new modes of organizing that assume agents will be doing most of the talking, and just set humans up for success whenever possible.

You can always turn it away. It’s like getting a potential match on a dating app. You’d be like, “No, not that one.”

But at least—no human could ever manage an organization that well and make matches at that frequency and level of fidelity.

Ross: Yeah, this goes very much to what I’ve long described as the fluid organization, where people get connected to where they can best apply their capabilities—and also to learn—completely fluidly.

Not depending on where their particular job description lies, but simply where their talents and their talent development can be best applied across the organization.

There have been, for quite some time, talent platforms within organizations for connecting people with opportunities or work, and so on. But obviously, AI-enabled—and particularly with a talent development focus—provides far more opportunity.

Matt: I’ve been trying to track this pretty closely because I have a startup now focused on this joint optimization of work performance measurement with human capability development.

The previous wave of firms—B2B SaaS firms—that are trying to solve this talent mobility problem have really been focused on extracting skills from workers’ data and collecting those as a bag of nouns, and trying to match that bag of nouns against a potential opportunity.

And those nouns are just not sufficiently rich to capture what it is that those people are capable of—or not.

But I think a much richer sort of dialogue-based, dynamic, up-to-date, in-the-moment interaction between two informed agents…

You’re informed about the opportunity on the project. You have all the project docs spun up into you—I mean you as an agent.

And then another agent—that is mine—advocates for me on my behalf and has a giant RAG-based system (or whatever is the state of the art) that knows all about me: my preferences, what motivates me, my background, my capabilities under pressure, my career aspirations—all the rest.

Then they could spend a 100-turn conversation assessing fit in a few seconds. And that will be radically better than, “Does this noun match that noun?”

Ross: Yeah, a lot of potential.

So, to round out—for organizational leaders, whether they be the board, the C-suite, HR, L&D, or organizational development—what are the prescriptions you give? What advice would you give on how to evolve into an organization where you can have a talent pipeline and maximize the learning that is going to be relevant for today?

Matt: I mention this in the book—lean on the vendors of these AI systems and demand that they give you a product that will enhance the skills of its users while generating results.

There are plenty of design decisions you could make about how to build the organization. We’ve talked about some of them. I think those are important. They’re necessary.

You can hire for AI-native talent. You can set up inverted apprenticeships. But if the rootstock—the new tool that everyone is supposed to use to optimize whatever they're trying to optimize—is infected with a virus, and the virus is that it will drive experts and novices apart in search of results, almost unwittingly…

Very few will even notice this, or if they do notice it, they’re just not incented to care.

There’s really—I mean, L&D is maybe the only function in the organization that is explicitly put together to know about and deal with this problem—but it’s now a compliance function. The training that L&D offers is just kind of a box-checking activity too often.

So you can’t count on yourself and your own organization and your own chutzpah—and pulling yourself up, or asking your employees to pull themselves up by their bootstraps—as a primary means of ensuring that you grow your talent bench while improving results from AI.

I think companies—and executives in particular—are in a very powerful position right now to choose between model vendors.

Give them two extra weeks to come back with something in their proposal that gives you reasonable assurance that just by using their product—versus their competitors'—your employees will build more skill and end up with better career outcomes, while still getting productivity gains.

"How can we use this tech and build employee skill at the same time?"—that is the powerful question.

So it’s not… I think these vendors need to start to feel some heat. And if you’re a manager, you should be thinking:

“Fine, I’m getting some uncertain and notional—or nominal—productivity gain out of these new tools now just by buying them, and I don’t want to get left behind.”

So not buying is probably not an option.

But anyway, know also that if you just turn it on and hand out licenses, you will de-skill your workforce faster than you expect, and you will be knee-capping your organization for, say, three years from now or five years from now. And you will lose to your competitors.

I guarantee it.

Well, no—guarantee with a big asterisk. There will be many cases in which having fewer junior employees is the right thing to do. There will be many cases in which you don’t really care about de-skilling relative to the gains that you could get productivity-wise. I’m not naive about any of that.

But if you have areas in your organization where you have highly paid talent that is very mobile and wants to learn and grow, they will figure out which organizations are giving them work that will drive their skill curve upward—and they will vote with their feet.

And then you will stop getting high-quality talent.

That is one problem area I would get ready for.

And the other is: get ready to offer remedial training for those people who should know how to do their jobs—but in fact, have not been upskilling because they’ve been using AI too much. And you’ll be bearing that cost as well.

Organizations that invest now to address this problem will not bear those costs. They might come slower out of the gate right now—or maybe they won't. Maybe they'll jump ahead faster.

So I think intervening with the model provider is one unexpected and easy place to go—because they won’t see it coming. They will be surprised.

And a smart business development person—who wants their commission—will go back to their organization at OpenAI or Anthropic or Google and say, "Hey, what can we do?"

And I’m hearing this from lots of people. So I’m not naive to think that just me saying this to you on this podcast is going to have that effect.

I think really what’s starting to happen is that professionals—especially software professionals, right now—are starting to notice this effect without Matt Beane being in the picture at all.

There are articles out there now by software developers—"The Death of the Junior Developer" is one. It's a great one.

They’re all getting concerned on their own.

So I hope that the pressure just gets turned up, and that one of these companies comes out with something that will make a difference.

Ross: Fantastic. Thanks so much for your time, Matt.

Matt: Pleasure.

Ross: Wonderful work. Very, very much on point for these days. Extraordinarily relevant. And I very much look forward to seeing what you continue to uncover and share and publish.

Matt: Perfect. Thank you. Like I said, I really appreciated the invite and happy to talk.
