Humans + AI
Lisa Carlin on AI in strategy execution, participative strategy, cultural intelligence, and AI’s impact on consulting (AC Ep27)
“You’re using AI to generate solutions for ideation. Once you’ve got the ideas, you can do an initial cull with AI, or you can do it via humans.”
–Lisa Carlin
About Lisa Carlin
Lisa Carlin is Founder of strategy execution group The Turbochargers and Founding Director of FutureBuilders Group, where she has delivered over 50 major strategy projects. She was previously a consultant at major firms including McKinsey & Co, and has been Chair of a number of organizations.
What you will learn
- How AI is transforming strategy development and execution, leading to faster and more creative outcomes
- Practical methods for integrating AI into workshop processes, ideation, and customer feedback analysis
- Balancing human judgment with AI input to ensure effective decision-making in strategic planning
- Techniques for using AI in diagnosing and working within an organization’s culture for successful transformation
- Ways AI is boosting consultant and client productivity, reducing operational time, and increasing self-sufficiency
- Real-world examples of AI-driven analytics, including clustering survey data and generating management insights
- The outlook on the future of consulting, including why AI may reduce the number of consultants required
- Tactical uses of AI for ideation, communication effectiveness, and predicting customer engagement metrics
Ross Dawson: Lisa, it is wonderful to have you on the show.
Lisa: Thanks, Ross. I love chatting with you.
Ross Dawson: So you’ve been spending a lot of time over many, many years in strategy and strategy execution. I’d love to start off by hearing how you are applying AI in the strategy process.
Lisa: Well, it’s made things so much easier and so much faster—it’s saving huge amounts of time. And I feel like my work has gotten more creative. Let me give you some examples of how that plays out. One example is working with an ed tech early-stage business, a small business, that wanted to build AI-native products for customer education. I can actually mention the name of the company because the CEO posted after we worked together and is building in public: it’s HowToo, an Australian ed tech firm that’s funded mainly out of the US, but also locally in Australia.
They’ve been providing education products for ages and are moving towards customer education embedded into technology products. We went through an iterative process of workshops, starting with some of the board members and some of the senior folks in a small group with an ideation session, and then iterating through to everybody in the business. Normally, that process would work where we would do some research with the customers first, then bring that research in, do some analysis, and then put it into the context for the workshop, work through what that means, come up with some ideas in the workshop, take it to the second workshop, and there you go.
What we’re now able to do is iterate with AI. So we’ve got the notes from the meetings captured with AI—this is from the customer meetings. Then we’re able to pull out the pain points of customers in a really deep way, using AI to iterate through and synthesize the client feedback, and then also apply human insight into that, coming up with a really clear list of pain points. Then we ask AI to be virtual customers, and they can add to that process, so you get a very rich set of pain points.
As we go through the process of product strategy and implementation, we’re able to use AI at every step of the process. For example, when we look at decision criteria for prioritizing, we can go to AI and say, “These are some of the things we’re considering. What else have we left out?” As we iterate with people in workshops and then with AI, we just get a much richer solution in the process.
In fact, we came out with some really amazing insights about how you provide customers with learning about how to use these products to onboard them quickly, how you provide them with personalized contextual information so they can learn and get value from the product much faster. It’s led to a number of significant deals that HowToo has negotiated as a result of that work.
Ross Dawson: So is this prompting directly with LLMs?
Lisa: Yeah, it is. My favorite one is actually ChatGPT, which—you know, you’re probably waiting for some surprise, some unique and interesting or weird or specific product. I do use specific products for certain use cases, but for general logic, I’ve found that ChatGPT Pro is actually the best that I’ve come across, and certainly better than some of the enterprise solutions that I’m seeing people use.
They feel protected and they’re happy to have a safe, private, directly hosted solution, but the logic in some of those models is not as good.
Ross Dawson: So that’s the ChatGPT Pro, the top level, which not that many people have access to. I guess one of the big questions here is this balance between humans and AI. Most people have a human process where there’s a lot of value in bringing in the AI, and then we’re also getting all of these software products, which are saying “McKinsey in a box,” and they sort of say, “Just give us everything, and we’ll give you the final solution,” and it comes out as AI and there’s not a human involved. How do you tread that balance between where you bring in the human insight and where the AI complements it?
Lisa: Yeah, that’s a good question. I think the key thing is that people need to feel like they are in control of the process. I’m a huge advocate for open strategy, for example—open strategy processes that are highly participative—and CEOs, in particular, get worried that they’re going to lose control of the process. So it’s always important to remember that strategy is not democratic. Ultimately, the CEO has to make a captain’s call on things, and they need to feel like they’re in control of the process.
The key thing is that you use AI at particular points of the process, and then you’ve got humans in the loop at other, specific decision-making points. You’re using AI to generate solutions for ideation. Once you’ve got the ideas, you can do an initial cull with AI, or you can do it via humans, but it’s the humans who are setting the parameters and making the decisions about which parameters to use, ultimately.
I’ll give you another example with a multinational that I’ve been working with. They’re actually pretty far down the track on implementation of AI itself, and they’re doing a lot of transformation work around agents and around making their services— they provide high-end knowledge services B2B. They’re quite far advanced in terms of developing AI and thinking about what the technology architecture needs to look like with people. The difficulty that these organizations are facing is that there are a number of moving parts. Many organizations haven’t even finished the integration of different technology platforms. There’s still a hangover from the pandemic, from different types of competitive and business models that they’re implementing. So there’s all that legacy change underway.
Plus, now you’ve got the impetus to use AI, and I’m seeing an increasing number of stakeholder complexities, because everybody has their own legacy projects, plus now we’ve got new projects coming in with AI, new strategic imperatives.
In this particular organization—very sophisticated, very capable people—the challenge is, how do you sequence all of these things that you’ve got on your plate, and also get agreement and alignment with the stakeholders around these different priorities? We went through a workshop process where we defined the decision process itself, and I used AI to give me some examples of what the answer could look like before we went into the workshop. As a facilitator, that’s very powerful, because I’ve got some solutions in my back pocket that if the team gets stuck, I can whip them out and say, “Well, actually, I’ve been thinking about this. I’ve prompted AI around this. What do you think?” It just helps that conversation go forward faster in the room. But people are still very much in control of what the process and the plan need to look like.
Ross Dawson: That’s great. What you’ve been describing in both these examples is what I call framing, where the human always does the frame: this is the context, these are the objectives, this is the situation, these are the parameters—and everything needs to happen within that frame. Part of it is choosing the right points within it. I think that’s a great example you just gave, where you are getting them to do the work, but then, when they get stuck, you can pull something out and say, “Well, here’s something to consider.” You don’t give them the solution first—it may not be the right solution anyway—but once they’ve done their own thinking, they can consider these new ideas very well.
And then it’s always this thing of, if you’ve got these very extended processes, how do you accelerate the timeframe? I think what you’re describing is something where you judiciously use that sort of pre-work, which has been assisted by AI, and that can definitely accelerate a group human process.
Lisa: You do such an amazing job always, Ross, at pulling out the themes. I guess that’s what being a futurist is all about—the themes of what I’m saying. I could spend a day just responding to so many of the things you’ve just said there. But absolutely, the framing and the context need to be human. In fact, I see a lot of the upside of AI, a lot of the benefits that people get, are from appropriate context and going broad enough to give the context to the AI, particularly in agents where the AI needs to be autonomous. There’s such a huge benefit in being able to do repeatable work by agents, where they have access to the same context that you’ve created, and then they can update that context when they learn. That’s very powerful.
I’ve done a list—I’ve got about 39 points on the list so far—of different tasks that AI can help with along the strategy process. My focus is mainly on implementation, but of course, I get involved in the strategy by default, either because there isn’t quite one yet, or it’s too broad and needs to be taken down a level of detail before we can implement it, or because there are some holes in it. From my background at McKinsey, I can look at strategies and see where some of the issues are straight away, so I sometimes get involved a bit earlier in the strategy process. But AI is incredibly useful at reviewing information, finding the flaws or the problems, and homing in on those problems. That’s one of the big use cases.
The other big one that I haven’t spoken about, that I just want to mention as well, is the analytics. I have these conversations online where people respond to some of the things I’m saying about the future of the work that I do, which is management consulting, and they ask how much of it can be done by AI. I’m saving anywhere between— not so long ago, I was saying half a day a week, then I was saying a day a week, now, like last week, I saved two days in the week because of this big use case. This is analytics: AI taking a simple spreadsheet of survey results and sorting it into clusters, being able to understand and calculate what’s happening in those clusters, compare them.
I have a transformation success score that I measure—I’ve done a whole lot of research publicly online, where people submit their perspectives—and then I can compare different groups: what do change managers say, what do change leaders say, what do project professionals say, what do strategists say about transformation work or strategy execution work? I use those terms fairly interchangeably, although there are some nuances. When I did this last year, I got incorrect answers from ChatGPT; this year it got them right. So there’s been a huge improvement in the model. It saved all that time. Not only did it do the heavy lifting on the analytics, it did the insights, it drew the graphs, and it gave me a report, all produced beautifully together.
Sure, I had to iterate it a bit, and now I’ve got the final AI version. I will take that and redraft sections of it so that it’s got my voice and some of the nuances that AI hasn’t picked up, but it’s pretty good. It’s 80/20—it’s done 80% of the work for me. That’s why last week I saved almost a day and a half of my time on this report.
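As a rough illustration of the kind of analytics Lisa describes—she simply hands the spreadsheet to ChatGPT rather than writing code—here is a minimal Python sketch of clustering free-text survey responses and comparing the clusters across respondent groups. The file name, column names, and cluster count are hypothetical, not from her actual survey.

```python
# Minimal sketch (not Lisa's actual workflow) of clustering survey responses.
# Assumes a hypothetical CSV with columns "role" and "comment".
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

df = pd.read_csv("survey_results.csv")  # hypothetical file name
texts = df["comment"].fillna("").tolist()

# Represent each free-text response as a TF-IDF vector.
vectors = TfidfVectorizer(stop_words="english", max_features=2000).fit_transform(texts)

# Group responses into a handful of clusters (k chosen arbitrarily here).
kmeans = KMeans(n_clusters=4, random_state=0, n_init=10)
df["cluster"] = kmeans.fit_predict(vectors)

# Compare groups: how each respondent role is spread across clusters,
# plus a few sample comments per cluster to label the themes by hand.
print(pd.crosstab(df["role"], df["cluster"], normalize="index").round(2))
for c, group in df.groupby("cluster"):
    print(f"\nCluster {c} ({len(group)} responses):")
    print(group["comment"].head(3).to_string(index=False))
```

The insight and reporting steps she mentions still sit on top of this kind of grouping, which is why she emphasizes redrafting the output in her own voice.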
Ross Dawson: One of the critical points being, of course, that you do check, and you do make sure that you bring in your insights on top of the AI, rather than presuming that it’s done it correctly. Let’s go on to just another domain in which you are using AI. All organizations need to be changing, and they need to be changing pretty fast these days. It’s this transformation of organizations, which includes, of course, culture. AI is not human, despite it giving the appearance of being that at some times. How can AI be used to support or augment or be part of the role in cultural transformation?
Lisa: My work in culture is twofold. One is implementing changes to the culture, and this can take a year or more if it’s a big organization or a strong culture. Cultures that are strong have lower variation around the mean statistically—in other words, they are more consistent internally—whereas some cultures are quite weak, which is where it feels wishy-washy and some parts are different from other parts. If the culture is strong, it takes at least a year—these are long-tailed, long-term change projects.
Something else that I always say to clients: you’ve got to work within the culture to change the culture. This you can do very quickly, and people don’t always think about this when they think about executing strategy. They always think about the long term, changing the culture, but to change, you’ve got to be on the inside. You’ve got to be accepted by the culture, or else as a CEO or executive, or even a staff member, you’re pushed out of the organization faster than you can do anything.
That’s just to frame our conversation, which is really important, because I think people miss that first piece, and this is what I teach people in our community. I’ve got the Turbochargers Hub, which is a community of professionals working in change, transformation, strategy, and execution—a blend of disciplines: strategy, project management, and change management together. I teach people how to identify what the culture is in the organization, and then they can work effectively in that organization to either change the culture or to implement whatever kinds of improvements—sales improvements, AI itself, whatever they’re trying to do.
That first part is diagnosing what kind of culture you have, and AI is really good at taking data and analyzing it. If you had a conversation with— I’ll give you one really easy example that people can do, and this is what I talk to folks in the community about. I give them, in a workshop, some culture types, which you can get out of ChatGPT. They say, “What kind of organization are you in?” Let’s say it’s a global, multinational organization, structured around geography—different countries. Let’s say it’s a product-based organization, it’s got eight product divisions, and I have a suspicion that this organization is quite innovative, but I don’t know—how would I describe the culture? What are the different options?
You’d get quite a nice list from ChatGPT, for example, of what those cultural types are as a starting point. Then you could have a conversation with people in a room and start saying, “Well, what is the culture here?” Then you choose just one or two words, three at the most, that describe the culture. Then you can ask ChatGPT for some ideas of how to work within that culture: what are some examples of effective behaviors, what are some things to avoid, what are some of the obstacles that might come up?
For example, in a culture that’s a tech firm, highly innovative, and global, one of the things you might get is silo effects, where different divisions are off doing their own thing, and that could be a risk and an issue. Another might be, “We’re highly innovative, so we respond to customer requests, and we’re a little bit chaotic, constantly adding new priorities because we’re trying so hard to meet customer demands and invent new things for them.” By having this conversation with ChatGPT, you can get some really good ideas about what the issues are and the tangible things you can do about them.
The other big thing is to have conversations with people. I find, after talking to about eight people in a business, I’ve usually got a really clear idea of what the culture is. The stronger the culture and the more consistent it is, the faster you will get those themes. You may not learn much new after talking to five or six people, but certainly after about eight—and no more than twelve—one-on-one conversations, you’ll have a very good idea and be able to diagnose where the culture is, and that’s the starting point.
In the good old days, we used to do surveys that could cost clients up to a million dollars or more, and they were paper-based surveys that people used to use to define the culture, because they gave you a nice point to be able to measure before and after. Now it’s so much easier. There are so many ways you can do that.
Ross Dawson: Using AI for the description, then diagnosis, and then potential intervention phases.
Lisa: Exactly.
Ross Dawson: This goes a little bit to something which you’ve very publicly said, where you believe there’ll only be 20% as many consultants as there are today. I’m not sure if I agree, but I’d love to hear the case of why you think that’s true.
Lisa: Yeah. So, look, the people that disagree with me say, “Consulting is so bespoke and it’s all about judgment and human relationships, so how can you say that we’ll only need one in five consultants?” The thing is that there are so many parts of the consulting work that can now be done by AI, and I can see really clearly how— it was saving me half a day, a day a week, now it’s saving me two days a week— the whole leverage model of consulting businesses, or even independents like myself, is collapsing. I’m able to work so much faster and do the work of more people.
I can see it in every respect of my work, both in terms of the work that I’m doing myself—already now, I’m saving one day, some weeks two days of work a week. That just frees up huge amounts of capacity to do more work. We’ve got AI helping so many parts of this. Even the freelancers that I used to get in to do work for me— that’s been reduced by about 75%. I’m just doing more and more myself. Instead of needing the leverage of a team around me, I’ve got the leverage of AI.
I’ve got clients who are using AI more and more, so they are increasingly sophisticated and able to do things that previously would have been much harder to do. Even analytical work that used to be done by external people can now be done in-house. Clients are becoming more self-sufficient. Consultants are becoming more self-sufficient. Consultants are able to do much more work. I see the trajectory improving— it’s almost vertical sometimes in terms of exponential improvements in the capability of AI.
All I can look at is the trajectory of where I am, where I’ve been. I’ve been doing this for my whole career. I started with Accenture doing systems development, McKinsey doing strategy development, worked for a boutique culture change organization, and then 25 years on my own doing implementation in the trenches. I can see how all the different parts of the management consulting process—from preparing for a meeting with a client, giving the proposal, winning the proposal, setting up the system to do the work, planning the work, doing the analytical design and analytical stages, through to delivery of the end work product—so much of that process is being automated, or can be automated or assisted through AI. That’s where I think we will head. The 20% is that human judgment, and I can talk more about that if you want. I realize I’m giving you a long answer because I feel so passionate about this.
Ross Dawson: It’s a big topic, so it’s fair enough to lay out your case. In a very compact response, I guess the one thing which I think is really critical is that clients are vastly enabled, and I think that’s the really big one. Now clients have a choice—they can go out and get a consultant, which is probably not very cheap, or they can use AI. Hopefully, as many clients have been developing their capabilities in many domains, they’re not just asking AI, they’re using it well. I think that’s the big one.
But the biggest counterpoint, I guess, is whether the amount of consulting or the amount of professional services or value in the future is anything like it is at present. Yes, the current amount of external advice can be done with far fewer people, but it’s one of those things—Jevons paradox—the more you have, the more demand there is. If you can have higher quality advice on more domains, better delivered, and the consulting firms are able to apply that, I still think that the demand for consulting is going to perhaps be five times what it is, or maybe four times. Perhaps each consultant can amplify themselves five times as much. So I think there will be more demand for this AI-amplified advice, far more than today.
Lisa: Yeah, interesting. I spoke at an amazing evening at New South Wales Parliament House last week, and one of the speakers was talking about Jevons paradox, and I got very excited when he started talking about it because I hadn’t come across it before. This was Dr. Teodor Mitew at the University of Sydney. I thought, “Oh, maybe there’s a path there that I haven’t thought about,” because it’s not in my interests—I love consulting work and helping organizations, I don’t want to see the whole management consulting industry decimated and down to 20% of its size. So I thought, “That’s fantastic, maybe there’s something I haven’t envisaged about demand for consulting services here,” because if we just increase the pie so much and it’s much cheaper, clients will get more, organizations can outsource more work, and we can do so much more in the time that we’ve got.
But the problem is that a lot of the demand we’re seeing now is actually temporary, and I think it’s masking a long-term structural decline. As you’ve said, more work is being done internally by clients, and consultants are doing the work much faster. I don’t think the demand for consulting is infinitely elastic and that there’s this unlimited client appetite that’s going to appear. So I actually don’t think that the Jevons paradox applies here.
The caveat around all of this that you and I and others listening have is this whole concept of superintelligence—ASI—and that we’re potentially going to get to this point where machines are much more clever than we are, and they can see things that we can’t see. There may be something that I can’t see right now that’s going to create that additional demand. I’m an optimistic person, and I think that humans are very creative and have amazing ingenuity, and we don’t know. We just don’t know. We can’t answer that question.
But for the current amount of client appetite, if that stays more or less where it is now—and I mean stripping out that artificial piece around current demand, around the whole AI boom—then do you agree, Ross, that it’s possible to foresee that one consultant might completely replace five in terms of the value that AI can add?
Ross Dawson: I don’t think so. I think it depends on the consultant and the domain and how they’ve been working in the past. But so much of what the consultant does is this external reference point. It’s the emotional engagement—”I’ve got somebody else who’s given me something.” That external reference point is a critical part of that value. But we’ll see how that plays out. I’m sure both you and I and a lot of others will be watching the trajectory.
To round out, you’ve had a wonderful article in the Australian Financial Review about some of the ways you use AI specifically. Perhaps you’d like to share just a couple of things you think listeners of the podcast would find value in using in their own work?
Lisa: Sure, happy to. One of the things they quoted me on in the Australian Financial Review is that it’s like having a team with four extra members. I guess I’ve covered some of those things, but ideation is a really critical component. Instead of bringing together a group of five for ideation—bringing four other people into a physical or virtual room to ideate, which would be a good number—I can get an excellent result by setting up personas for AI, or even just asking AI bluntly for some ideas and then iterating with it in a conversation.
Gemini actually has a particularly nice feature where you can have a conversation with it and just use it as a conversation function, talking to and fro, and that just mimics the natural conversation you would have in a team. That works really well for ideation, and I particularly like that.
Ross Dawson: Fabulous.
Lisa: Do you want any more, or is that enough?
Ross Dawson: Yeah, just one more.
Lisa: One more—look, AI is really good at predictions. Communication is everything when you are running a transformation project. It’s all about getting the right communication between all the different layers of staff to build momentum and enthusiasm and get people on board. To do that, you’ve got to cut through a lot of noise and reach the people you need to. AI is able to predict open rates on emails by subject heading, and I use that in my Turbocharge Weekly.
That’s my newsletter, which is all about fast-tracking strategy with AI and cultural intelligence. I use AI to work out, “What’s the best subject line to use that will get the highest open rate?” I used to do A/B testing, which is what most marketers do—trying week after week, comparing the results: is A or B the better subject line, which one got the higher open rate, so go with that one. Instead of doing that week after week, I now ask AI, “When CEOs are reading my email, what topics are going to give me the highest open rate?” and AI has been correct every time. So I don’t do any A/B testing anymore. There you go—predictions.
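Lisa describes simply asking ChatGPT this question; for readers who want to try the same idea programmatically, here is a minimal sketch using the OpenAI Python SDK. The model name, audience description, and candidate subject lines are placeholders rather than her actual setup, and the prediction is only as reliable as the model’s judgment about the audience.

```python
# Minimal sketch (assumptions: OPENAI_API_KEY is set; model name and candidate
# subject lines are placeholders, not Lisa's actual newsletter content).
from openai import OpenAI

client = OpenAI()

candidates = [
    "Three ways AI is reshaping strategy execution",
    "Why your transformation is slower than it should be",
    "The one question CEOs are asking about AI this quarter",
]

prompt = (
    "My newsletter audience is CEOs and executives interested in strategy "
    "execution and AI. Rank these email subject lines from highest to lowest "
    "likely open rate, and briefly explain your reasoning:\n- "
    + "\n- ".join(candidates)
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

In practice you could still spot-check the model’s ranking against real open rates occasionally, as a lighter-weight version of the A/B testing she describes retiring.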
Ross Dawson: That’s very, very useful. So where can people go to find out more about your work, Lisa?
Lisa: theturbochargers.com—everything about my work is on my website, and thank you for asking, Ross. Also LinkedIn—people will find me on LinkedIn.
Ross Dawson: Fantastic. Thanks for all of your wonderful work and sharing your insights.
Lisa: Great to chat.