Humans + AI

Nicole Radziwill on organizational consciousness, reimagining work, reducing collaboration barriers, and GenAI for teams (AC Ep26)

December 09, 2025

“Let’s get ourselves around the generative AI campfire. Let’s sit ourselves in a conference room or a Zoom meeting, and let’s engage with that generative AI together, so that we learn about each other’s inputs and so that we generate one solution together.”

–Nicole Radziwill

About Nicole Radziwill

Nicole Radziwill is Co-Founder and Chief Technology and AI Officer at Team-X AI, which uses AI to help team members work more effectively with each other and with AI. She is also a fractional CTO/CDO/CAIO and holds a PhD in Technology Management. Nicole is a frequent keynote speaker and the author of four books, most recently “Data, Strategy, Culture & Power”.

Website:

team-x.ai

qualityandinnovation.com

LinkedIn Profile:

Nicole Radziwill

X Profile:

Nicole Radziwill

 

What you will learn
  • How the concept of ‘Humans Plus AI’ has evolved from niche technical augmentation to tools that enable collective sense making
  • Why the generative AI layer represents a significant shift in how teams can share mental models and improve collaboration
  • The importance of building AI into organizational processes from the ground up, rather than retrofitting it onto existing workflows
  • Methods for reimagining business processes by questioning foundational ‘whys’ and envisioning new approaches with AI
  • The distinction between individual productivity gains from AI and the deeper organizational impact of collaborative, team-level AI adoption
  • How cognitive diversity and hidden team tensions affect collaboration, and how AI can diagnose and help address these barriers
  • The role of AI-driven and human facilitation in fostering psychological safety, trust, and high performance within teams
  • Why shifting from individual to collective use of generative AI tools is key to building resilient, future-ready organizations
Episode Resources

Transcript

Ross Dawson: Nicole, it is fantastic to have you on the show.

Nicole Radziwill: Hello Ross, nice to meet you. Looking forward to chatting.

Ross Dawson: Indeed, so we were just having a very interesting conversation and said, we’ve got to turn this on so everyone can hear the wonderful things you’re saying. This is Humans Plus AI. So what does Humans Plus AI mean to you? What does that evoke?

Nicole Radziwill: The first time that I did AI for work was in 1997, and back then, it was hard—nobody really knew much about it. You had to be deep in the engineering to even want to try, because you had to write a lot of code to make it happen. So the concept of humans plus AI really didn’t go beyond, “Hey, there’s this great tool, this great capability, where I can do something to augment my own intelligence that I couldn’t do before,” right?

What we were doing back then was, I was working at one of the National Labs up here in the US, and we were building a new observing network for water vapor. One of the scientists discovered that when you have a GPS receiver and GPS satellites, as the signal travels between the satellite and the receiver, the signal would be delayed. You could calculate, to very fine precision, exactly how long it would take that signal to go up and come back. Some very bright scientist realized that the signal delay was something you could capture—it was junk data, but it was directly related to water vapor.

So what we were doing was building an observing system, building a network to basically take all this junk data from GPS satellites and say, “Let’s turn this into something useful for weather forecasting,” and in particular, for things like hurricane forecasting, which was really cool, because that’s what I went to school for. Originally, back in the 90s, I went to school to become a meteorologist.
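The relationship Nicole describes can be sketched numerically: in GPS meteorology, the “wet” part of the zenith signal delay converts almost linearly to precipitable water vapor. The sketch below uses the commonly cited Bevis-style refractivity constants and a rough mid-latitude mean-temperature default; it is an illustration of the idea, not the lab’s actual system.

```python
# Convert GPS zenith wet delay (the "junk" signal delay) into
# precipitable water vapor. Constant values are illustrative.

RHO_W = 1000.0    # density of liquid water, kg/m^3
R_V = 461.5       # specific gas constant of water vapor, J/(kg*K)
K2_PRIME = 0.221  # refractivity constant k2', K/Pa
K3 = 3739.0       # refractivity constant k3, K^2/Pa

def pwv_from_zwd(zwd_m, tm_kelvin=260.0):
    """Zenith wet delay in meters -> precipitable water vapor in mm.

    tm_kelvin is the water-vapor-weighted mean atmospheric temperature;
    260 K is a rough mid-latitude default.
    """
    # Dimensionless conversion factor, typically around 0.15
    pi_factor = 1e6 / (RHO_W * R_V * (K3 / tm_kelvin + K2_PRIME))
    return pi_factor * zwd_m * 1000.0  # meters of delay -> mm of water

print(pwv_from_zwd(0.20))  # a 20 cm wet delay maps to roughly 30 mm of PWV
```

The point of the sketch is that the conversion is essentially a multiplication: once you can measure the delay precisely, the “junk” becomes a usable water vapor observation.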

Ross Dawson: My brother studied meteorology at university.

Nicole Radziwill: Oh, that’s cool, yeah. It’s very, very cool people—you get science and math nerds who have to like computing because there’s no other way to do your job. That was a really cool experience. But, like I said, back then, AI was a way for us to get things done that we couldn’t get done any other way. It wasn’t really something that we thought about as using to relate differently to other people.

It wasn’t something that naturally lent itself to, “How can I use this tool to get to know you better, so that we can do better work together?” One of the reasons I’m so excited about the democratization of, particularly, the generative AI tools—which to me is just like a conversational layer on top of anything you want to put under it—the fact that that exists means that we now have the opportunity to think about, how are we going to use these technologies to get to know each other’s work better?

That whole concept of sense making, of taking what’s in my head and what’s in your head, what I’m working on, what you’re working on, and for us to actually create a common space where we can get amazing things done together. Humans plus AI, to me, is the fact that we now have tools that can help us make that happen, and we never did before, even though the tech was under the surface.

So I’m really excited about the prospect of using these new tools and technologies to access the older tools and technologies, to bring us all together around capabilities that can help us get things done faster, get things done better, and understand each other in our work to an extent that we haven’t done before.

Ross Dawson: That’s fantastic, and that’s really aligned in a lot of ways with my work. My most recent book was “Thriving on Overload,” which is about the idea of infinite information, finite cognition, and ultimately, sense making. So, the process of sense making from all that information to a mental model. We have our implicit mental models of how it is we behave, and one of the most powerful things is being able to make our own implicit mental models explicit, partly in order to be able to share them with other people.

Currently, in the human-AI teams literature, shared mental models are a really fundamental piece, and so now we’ve got AI which can assist us in getting to shared mental models.

Nicole Radziwill: Well, I mean, think about it—when you think about teams that you’ve worked in over the past however many years or decades, one of the things that you’ve got to do, that whole initial part of onboarding and learning about your company, learning about the work processes, that entire fuzzy front end, is to help you engage with the sense making of the organization, to figure out, “What is this thing I’ve just stepped into, and how am I supposed to contribute to it?”

We’ve always allocated a really healthy or a really substantive chunk of time up front for people to come in and make that happen. I’m really enticed by, what are the different ways that we’re going to— for lack of a better word—mind meld, right? The organization has its consciousness, and you have your consciousness, and you want to bring your consciousness into the organization so that you can help it achieve greater things. But what’s that process going to look like? What’s the step one of how you achieve that shared consciousness with your organization?

To me, this is a whole generation of tools and techniques and ways of relating to each other that we haven’t uncovered yet. That, to me, is super exciting, and I’m really happy that this is one of the things that I think about when I’m not thinking about anything else, because there’s going to be a lot of stuff going on.

Ross Dawson: All right. Well, let me throw your question back. So what is the first step? How do we get going on that journey to melding our consciousness in groups and peoples and organizations?

Nicole Radziwill: Totally, totally. One of the people that I learned a lot from since the very beginning of my career is Tom Redman. You know Tom Redman online, the data guru—he’s been writing the best data and architecture and data engineering books, and ultimately, data science books, in my opinion, since the beginning of time, which to me is like 1994.

He just posted another article this week, and one of the main messages was, in our organizations, we have to build AI in, not bolt it on. As I was reading, I thought, “Well, yeah, of course,” but when you sit back and think about it, what does that actually mean? If I go to, for example, a group—maybe it’s an HR team that works with company culture—and I say to them, “You’ve got to build AI in. You can’t bolt it on,” what they’re going to do is look back at me and say, “Yeah, that’s totally what we need to do,” and then they’re going to be completely confused and not know what to do next.

The reason I know that’s the case is because that’s one of the teams I’ve been working with the last couple of weeks, and we had this conversation. So together, one of the things I think we can do is make that whole concept of reimagining our work more tangible. The way I think we can do that is by consciously, in our teams, taking a step back and saying, rather than looking at what we do and the step one, step two, step three of our business processes, let’s take a step back and say, “Why are we actually doing this?”

Are there groups of related processes, and the reason we do these things every day is because of some reason—can we articulate that reason? Do we believe in that reason? Is that something we still want to do? I think we’ve got to encourage our teams and the teams we work with to take that deep step back and go to the source of why we’re doing what we’re doing, and then start there.

Make no assumptions about why we have to do what we’re doing. Make no assumptions about the extent to which we have to keep doing what we’re doing. Just go back to the ultimate goal and say, with no limitations, “How might I do that now, if I didn’t have the corporate politics, if I didn’t have these old, archaic, crusty systems that I had to fight with, what would I do?” Because we’re now in a position where the technical debt of scrapping some of those and starting some things new from scratch maybe is not quite as oppressive as it might have been in the past.

So that’s what I think the first step would be—go back to the why. Why are we doing these business processes? It’s great food for thought.

Ross Dawson: Yeah, well, I am a big proponent of redesigning work in organizations. So basically, all right, call whatever you’ve got in the past—now it’s humans plus AI. You have wonderful humans, you’ve got wonderful AI, how do you reconfigure them? Obviously, there are many pathways—most of them, unfortunately, will be de facto incremental, as in saying, “Well, this is what we’ve got and how do we move forward?” But you have to start with that vision of where it is you are going.

To your point, saying, “Well, why? What is it you’re trying to achieve?” That’s when you can start to envisage that future state and the pathway from here to there. But we’re still only getting hints and glimpses of what these many, many different architectures of humans plus AI organizations can be.

Nicole Radziwill: Totally great. Have you seen any examples recently that really stand out in your mind of organizations that are doing it really well?

Ross Dawson: What I’ve been looking at—so it’s on my agenda to try to find some more—but what I have been looking at is professional service firms that have re-architected, some of them from scratch. So we have Case Team and Super Good, sort of relatively small organizations. Then there’s—forgotten his name—but it’s a new one founded by the former managing partners of EY and PwC in the UK, which is basically from—and I haven’t seen inside it, but I got an inkling that they’re having a decent approach.

But these are relatively fresh, and so it’s harder to see the examples of ones which have shifted from older workflows to new ones. Though, I mean, again, there’s not a lot of transparency. But the best—the sense, as it were, of the best of the top professional firms, or the best if you find the right pockets in the largest ones—

Nicole Radziwill: I totally resonate with what you say about professional services. Those are the organizations that are picking it up more quickly, because they have to. I mean, who’s going to engage a professional services firm that says, “Oh yeah, we haven’t started working with the AI tools yet, we’re just doing it the old way”? No one is going to pick you up, because usually, what do you engage professional services firms for? It’s because they have skills that you don’t have, or because they have the time and the freedom or flexibility to go figure out those new things. You want their learning, you want to bring that into your organization.

So, yeah, that’s a really good thing that you picked up on there, because I’ve seen the same thing.

Ross Dawson: Well, I guess everything is—there’s a lot of rhetoric, as in they’re trying to sell AI services, and they say, “Yeah, well, look, we’re really good at it. Look at all these wonderful things,” and that may or may not reflect the reality. But again, I think the point of saying, look at the best of EY, look at the best of McKinsey, look at the best of Bain—Bain is actually doing some interesting stuff. But unfortunately, there’s not enough visibility, other than the PR talk, to really know how this is architected.

Nicole Radziwill: And you know, also, the other thing that I think about is, when you have a great idea and you’re bringing it into your organization, it doesn’t matter how extensively you’ve researched it, how many prototypes you’ve built—let’s say you have the most amazing idea to revamp the productivity of your organization right now—what’s stopping you is not the sanctity of your idea. It’s overcoming the brain barrier between you and other people.

How many times have you gone into an organization with a really great idea for improvement, but it just takes a long time to talk to people about it, to maybe educate them about the background or why you thought this was a good idea? Maybe you have to convince them that your new idea actually is something that would work in their pre-existing environment that they’re super comfortable with. The challenge is not the depth of the solution—it’s our ability to get into each other’s heads and agree upon a course of action and then do it.

That human part has always been the most difficult, but it’s been easy to think, “Oh no, it’s the technology part, because it takes longer.” The thing that I’m really intrigued by right now is that, since the time to develop technology is shrinking smaller and smaller, it’s going to force us to solve some of the human issues that are really holding us back. And I think that’s pretty exciting.

Ross Dawson: So you are a co-founder of Team-X AI, which I’ve got to say looks like a very interesting organization. Perhaps before talking about what it does, I’d like to ask, what’s the premise, what is the idea that you are putting into practice in the company?

Nicole Radziwill: Cool, cool. So my goal has always been—I mean, the first team that I managed, like I said, was back in the late 90s—my goal has always been to help people work better together and with the new emerging technologies. The nature of the emerging technology is going to change over time; it doesn’t matter what it is right now. It’s to help people work better together with each other and with AI, particularly generative AI tools.

The thing that’s holding back organizational performance, at least from the teams that I’ve seen implement this, is that people have tended to adopt AI tools for personal productivity improvements. Everybody’s got access to the licenses, and they go in, they try and figure out, “How can I speed up this part of my process? How can I reduce human error here? How can I come into work in the morning and have my day be better than it would be without these tools?” So it’s been very individually focused.

But even a year, year and a half ago, some of my collaborators and I were noticing that the organizations that were really on the leading edge had taken a slightly different starting point. Instead—well, I don’t say instead of, it is in addition to—in addition to using the AI tools for personal productivity, they also said, “Let’s see how we can use these collaboratively. Let’s see how we can study our processes that are cross-cutting, processes that bring us all together in pursuit of results. Let’s study those. Let’s get ourselves around the generative AI campfire.

Let’s sit ourselves in a conference room or a Zoom meeting, and let’s engage with that generative AI together, so that we learn about each other’s inputs and so that we generate one solution together.” Those are the organizations that were really getting the biggest results. And surprisingly, now, a year plus later, that’s still the chasm that organizations have to cross. Think about the people that you’ve worked with—lots of people are saying, “We know how to prompt now, we feel comfortable prompting. When are we going to start seeing the results?” So it’s the transition from individual improvements to improvements at that team level, that are really working at the process level, that’s what’s going to cause people to surge forward.

That’s why we decided to start with that premise: figure out how to help teams work with the people they have to work with, identify the barriers to collaboration between those people, and make collaboration with AI at the team level more streamlined and easier for the team to pick up. We wanted to crack that code, and so that’s what we did.

So the Team-X stuff is an algorithm that actually looks at the space between people to help bust up those barriers to collaboration between the humans, so that the humans can collaborate better together with AI.

Ross Dawson: It definitely sounds cool. I want to dig in there. So is it essentially a facilitator, in the sense of being able to understand the humans involved and what they’re trying to achieve, in order to ensure that you have a collective intelligence emerging from that team? And if so, how specifically does it do that?

Nicole Radziwill: Yeah, okay, so for about 10 years, we were studying cognitively diverse teams. One of the problems we were trying to solve was, how do you get groups of people who are completely different from one another—who may be over-indexed in things like anxiety or depression or sensory-seeking or sensory-avoiding characteristics—when you get a group of extremely cognitively diverse people together, how do you help them be the most productive, the fastest? That was the premise 10 years ago. Actually, it’s even more than 10 years ago—if it’s 2025, 13 years ago.

By studying how to engage with those teams, how to be part of one of those teams, how do you do the forming, storming, and norming to get to performing? That was really the question to answer. Over the course of those years, by working through a lot of really unexpected situations, we started to see patterns—not within individual people, but what happened when you got different people together.

Here’s an example of this: when you get people together, the number one most common unspoken norm, hidden tension that we see emerging in groups is where you have people whose preference for receiving information is in writing—if you’re going to tell me something that I need to know, I prefer that you give that to me in writing so that I have reference, I can see it, I can review it, I can keep it and refer to it later. But guess what? The most likely possibility is that my preference to give information to you is talking.

So think about the conflict that’s set up—if I expect everyone to give me information in writing so that I can be most productive, but I expect that I can speak it to you, there’s an imbalance there, because someone is not going to be getting what they need in order to be able to understand that information best.
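A toy way to make that imbalance concrete: check every ordered pair on a team and flag the ones where the giver’s preferred channel doesn’t match the receiver’s. The names and preference values below are invented for illustration; this is not Team-X’s actual model.

```python
# Flag give/receive channel mismatches between team members.
# All names and preferences here are hypothetical.
team = {
    "Ana":   {"gives": "talking", "receives": "writing"},
    "Ben":   {"gives": "writing", "receives": "writing"},
    "Chloe": {"gives": "talking", "receives": "talking"},
}

def channel_mismatches(team):
    """Return (giver, receiver) pairs whose channels don't line up."""
    return [
        (giver, receiver)
        for giver in team
        for receiver in team
        if giver != receiver
        and team[giver]["gives"] != team[receiver]["receives"]
    ]

for giver, receiver in channel_mismatches(team):
    print(f"{giver} gives by {team[giver]['gives']}, but "
          f"{receiver} prefers to receive in {team[receiver]['receives']}")
```

Even in this three-person toy, most ordered pairs mismatch, which is exactly the kind of hidden, structural tension the conversation is pointing at.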

Just looking at little conflicts like that—these are aspects of work styles, work habits, anything that is part of your style that contributes to how you get results—can get into conflict with other people if your baseline assumptions are different. Here’s another great example.

Ross Dawson: I can see how—what I think you’re describing is saying, okay, you’re picking up some patterns of team dysfunctions, as it were, and I can see how generative AI could do that. It’s a little harder to see how you get the analysis that would enable machine learning algorithms to identify those patterns.

Nicole Radziwill: Yeah, it’s vintage AI underneath the surface, so the conversational aspect comes later. That’s a really interesting thing to bring up, too—you know that you can’t solve all problems with generative AI, right? Some parts of your problem are best solved deterministically, some parts are best solved statistically, and some parts are best solved using Gen AI completely stochastically, where the window for the types of responses is larger, and that’s fine.

One of the things we had to do was be very cognizant about where we put the machine learning models, what they were producing, and then how we used those to help people engage with their teams so that they could reduce those barriers to collaboration. What we built is a mix of vintage AI—mostly unsupervised PCA and other clustering algorithms. From those, we figured out, here are the patterns that we see a lot, and then from those, we applied the generative AI to help get them to build the narratives that the teams can use to understand what they mean.
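As a rough illustration of that “vintage AI” layer (and not the actual Team-X implementation), here is a minimal PCA-plus-clustering pass over hypothetical work-style survey scores: reduce the responses to a couple of components, then cluster members so that groups with opposing unspoken norms stand out.

```python
import numpy as np

def pca_reduce(X, n_components=2):
    """Project rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data yields the principal directions in vt
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T

def kmeans(X, k=2, iters=50):
    """Naive k-means: the first k rows seed the centers (assumes each
    cluster stays non-empty, which holds for this toy data)."""
    centers = X[:k].copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Hypothetical survey scores (0-1): preference for written input,
# preference for spoken output, tolerance for interruptions.
team = np.array([
    [0.9, 0.8, 0.2],
    [0.8, 0.9, 0.3],
    [0.2, 0.1, 0.9],
    [0.3, 0.2, 0.8],
])
labels = kmeans(pca_reduce(team), k=2)
print(labels)  # members 0-1 and 2-3 land in opposing clusters
```

The clusters themselves say nothing prescriptive; as described next, a generative layer (and a human facilitator) turns patterns like these into narratives a team can act on.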

Ross Dawson: So crudely, it’s diagnosis, and then solution—

Nicole Radziwill: Diagnosis, solution, and then human facilitation. So, yeah. Basically, when a team comes in and says, “We want to do Team-X,” we crunch a lot of data, use our models to figure out what are those hidden tensions, what are those unspoken norms, and what are the options available to reduce barriers to collaboration for you. But then we work with the team for them to come up with, “What does that mean for us? How can we create the environment for each other so that we can move beyond our natural challenges, so that we can use generative AI more effectively together?”

Ross Dawson: So there’s a human facilitator that—

Nicole Radziwill: Yes, there’s algorithms, plus a human facilitator, plus ongoing support.

Ross Dawson: So describe that then in terms of saying, all right, you have the analysis which feeds into the diagnosis, the patterns, which feeds into the way in which you’re working with the team. So could you then frame this as a humans plus AI facilitation, as in both the human facilitation—

Nicole Radziwill: Yep, exactly. We collect data, run the algorithms, facilitate a session to get understanding, and then we—

Ross Dawson: So how does the human facilitator work with AI in order to be an effective facilitator of better outcomes?

Nicole Radziwill: Oh, I mean, mainly it’s just learning how to interpret the output and then learning how to guide the team towards the answer that’s right for them. What the algorithms do is they get you in the neighborhood, but the algorithms aren’t going to know exactly what are the challenges you’re dealing with right now. It’s through those immediate challenges that any group is having at the moment that you can really highlight and say, what are the actions that we need to take?

So we get to both of those points, and then we facilitate to bring the results from the algorithm together with what’s meaningful and important to the team right now, so that they can solve a pressing issue for them that they might not have solved any other way.

Ross Dawson: So in that case, the human facilitator has input from the AI to guide their facilitation, because there is, as you know, a body of interesting work around using AI for behavioral nudges in teams.

Nicole Radziwill: Oh, yeah, yeah, yeah. Didn’t that start with Laszlo Bock, the Google guy? He had some great work back then. He started a company, and then he sold the company, but the work that they were doing even back then—we relied upon that heavily as we were building on some of our ideas.

Ross Dawson: Yeah, well, Anita Williams Woolley at Carnegie Mellon is doing quite a bit in that space at the moment, and there’s also work at Australia’s CSIRO and a number of others.

Nicole Radziwill: Oh, yeah, yeah.

Ross Dawson: So, tell me, what’s the experience, then, of taking this into organizations? What is the response? Do people feel that they are—yeah, I mean, obviously having a human facilitator is vastly helpful—what’s the response?

Nicole Radziwill: The managers and the leaders feel like, finally, they have someone who they can talk to, who can help them get answers about how to engage with their team in ways that they haven’t gotten answers before. That’s pretty cool. I like the feeling of helping people who otherwise might have just felt like they have to deal with these people situations and the technology situations on their own.

That’s great. We have people say things like, “It’s like personalized medicine for the teams.” The other comment that I thought was really cool is that the person said, “I’ve done a lot of assessments, and the assessments are all at the individual level. This is the only one that helps me figure out what I should do when I have to manage all of these people and somehow get them to work together to get this thing done right now. I don’t have a choice to move people in or out. I have to deal with the positives and the negatives here.

How can I relate to the members of my team as humans and get them what they need so that they can be more productive together?” I like how it’s helping shift the perspective. When I was first leading teams back in the 90s and early 2000s, I really thought it was my job to create an environment where the people are going to be able to work together harmoniously, where you’ll feel satisfied, where you’ll feel engaged, where you’ll feel invigorated. It was crushing to realize, no matter how well I set that up, someone was always going to think it was absolutely terrible, it wasn’t meeting their needs.

So I probably spent 20 years being crushed about, “Why can’t I set up the perfect team?” But then I realized part of creating a perfect team is acknowledging its imperfection and doing it out loud so that people don’t have expectations that are too high of each other. I mean, everyone comes to work for different reasons, right? I always went to work wanting to get self-actualization—how can I better achieve my purpose through this job—and not everybody feels that way.

So instead of me making a value judgment, saying, “That darn person, they’re just not taking their job seriously,” it helps to be able to have an algorithm say, “You should talk about what professionalism and engagement means. You should talk about the extent to which your soul is engaged in your work, and whether that’s a good thing here or not,” because none of those other methods bring stuff up like that—it’s just a little too touchy. So we’re not afraid to bring it up and see what happens.

Ross Dawson: So I understand some of the underlying data is self-reported style or engagement style and issues, but does it also include things like meeting conversations or online interactions?

Nicole Radziwill: No, not at all. In fact, that was one of the things that was most important to me. I don’t like surveillance. I don’t think surveillance is the right thing to do. I would not want to be a part of building any product that did that. Fortunately, one of the things we concluded was the person that you bring to work is largely constructed by your past experiences—last year, the year before, 20 years ago—the experiences that influence how you engage with your team. It’s much more long-term, and not just, “Are there great policies for time off now?”

So that really helps the data collection, because all we need to do is get a sense for—to sample your work habits and your styles over time, and then we can compare people to each other on the basis of that. There tends to be less conflict when you work with people who have similar unspoken habits and patterns as you do. Where the conflict arises is if somebody is behaving way differently, and then people put meaning on it where maybe there isn’t the meaning that they had for that action or that reaction.

Ross Dawson: So from here, what excites you about humans plus AI, or humans plus AI and teams, your work, or where do you see the frontiers we need to be pushing?

Nicole Radziwill: Yeah, okay, so I think I was mentioning to you at the very beginning, but I’ll bring it back up. One of the concepts that’s germane to what we’ve been doing is psychological safety, right? We all know that when you’re engaged in a team that has psychological safety, it’s easier to get results, people are more satisfied, and performance in general goes up.

But it turns out, when you look at all of the studies, going all the way back to Edmondson’s studies and before, the one factor that’s been—I won’t say left out, but kind of not acknowledged as much—is that it takes a long time for psychological safety to build. You need those relationships, you need the constant reiteration of scenarios, of experiences with each other that encourage you to trust each other.

What we know from practice is the vibe of a team can shift from moment to moment. It takes psychological safety a long time to form. It can be fragile—a new person coming into a team or a person leaving can completely shift the vibe. When trust is broken, the cost to the psychological safety of the team can be extreme. It’s slow to form, and it’s fragile, and can leave quickly.

So when I think about that concept, it reminds me that trust in an organization is constructed. You need a lot of experiences with each other for that to build up. This goes back to one of the things I was mentioning earlier about individual use of Gen AI versus collective use of Gen AI. I think just shifting our perception of what we should be doing from those individual productivity improvements to, “How can we use Gen AI to learn together, to reduce friction, to do that sense making, and to manage our cognitive load?”—I think that is how we construct trust actively.

That’s how we get over the challenge of it taking a long time to build psychological safety, and it being fragile. We just get in the habit of using those generative AI tools collectively as teams to get us literally on the same page. I honestly think that’s the solution that we’re all going to start marching towards over these next couple of years.

Ross Dawson: Yeah, I’m 100% with you. I mean, that’s what I’m focusing on at the moment as well.

Nicole Radziwill: Encourage people to do it, Ross. You’ve got to encourage people to do it, because it’s so easy to get some of those individual improvements and then just stop, or to say, “We know how to prompt and we’re just not getting the ROI we thought we would.” It’s going to be up to people like you to get the message out in the world that there is another level. There’s another place you can go, and it can really unlock some fantastic productivity, excellence, improvements—not just productivity, but true excellence.

Ross Dawson: Yeah, which goes back to what we’re saying about, essentially, the organizations of the future.

Nicole Radziwill: Yeah, I want to live in one of those organizations of the future. I think I felt it long ago, and it’s just been so disappointing that we haven’t gotten there yet. But people are going to be people. We’re always going to have our social dynamics, our power dynamics, but I really think that collective use of the new generation of AI tools is going to help us get somewhere that maybe we didn’t imagine getting to before.

Ross Dawson: So where can people find out more about your work and your company?

Nicole Radziwill: The best place to find me is on LinkedIn, because I’m one of the only Nicole Radziwills on LinkedIn. So I invite new connections, and always like to get into conversations with people. The other place is through our company’s webpage—it’s team-x.ai, and you can get in touch with me either one of those places. But usually, LinkedIn is where I post what I’m thinking or articles or books that I am writing, and I’ve got two books coming up this upcoming year, so I’ll be posting those there too.

Ross Dawson: Fantastic. Thank you so much for your time and your insights and your work Nicole.

Nicole Radziwill: Thank you, Ross. It’s been delightful to chat with you.
