Humans + AI

Iskander Smit on human-AI-things relationships, designing for interruptions and intentions, and streams of consciousness in AI (AC Ep18)
“I really believe that we need to design friction into the system, not what is usually the goal in digital spaces, where you try to remove all the friction.”
–Iskander Smit
About Iskander Smit

Iskander Smit is founder and chair of the Cities of Things Foundation, a research program that originated at Delft University of Technology. He works as an independent researcher and creative strategist at the intersection of design, technology, and society, focusing on the evolving relationship between humans and AI in physical environments.
What you will learn
- How human, AI, and ‘things’ relationships are evolving beyond digital tools into physical environments
- The concept of collaborative intelligence—how human and AI co-performance shapes creativity and productivity
- Ways AI can mirror human thinking, deepen reflection, and reveal cognitive biases when used intentionally
- Designing AI interfaces for meaningful interaction, including the value of friction, interruption, and transparency
- How the role of designers is shifting from crafting static products to directing co-creative, adaptive systems with AI
- Why deliberately designing for thoughtful, exploratory, and emancipatory conversations with AI matters
- Challenges and insights from experimenting with AI in team settings and educational contexts
- The importance of treating AI as a collaborator or team member rather than simply as a tool
- How thoughtful human-AI relationships can unlock greater collective intelligence and transform work in sectors like health and education
Ross Dawson: Iskander, it’s fantastic to have you on the show.
Iskander Smit: Yeah, thanks for inviting me. Really excited to talk about this topic, of course.
Ross: One of the things is you very much focus on collaborative intelligence, and I think that happens in conversation. So hopefully we can have a good conversation.
Iskander: Yeah, me too.
Ross: One of the starting points is you talk about human, AI, and things—relationships. So tell me about the human, the AI, and the things. What are the relationships?
Iskander: Yeah, it really originated from the research program I started back in 2017 at the University in Delft. It was called Cities of Things—how we are going to live together with intelligent, autonomous things.
We were thinking about what will happen, what the consequences are, if we live together with more autonomous things. That was before we had these generic LLMs and the developments happening now. But even then, we were already curious: how are we going to have a kind of co-performance with things?
That’s why I added the “things” relation—because I really see now, of course, there’s a lot of use of AI in the digital space and in digital life. But it also starts to pop up in the physical space. So authentic AI for the physical space, I think, is a very interesting domain to look into. What will happen when we live within AI, when we are immersed in AI?
That’s why I really look not so much at the specific function of the AI or the tool, but more at what kind of relationship we are building with these machines or things—or whatever we want to call them.
Ross: Yeah. I’d like to dig into the relationships in the sense of the extended mind idea. Part of it is things we use, which enable us to do more. We’ve long had relationships with things.
As those things become more autonomous, that changes. And the relationship with AI, which is far more human-like by design, also changes. So what are the types of relationships? When it’s not just humans and AI but also the things, what is the nature of these?
Iskander: Yes, a good question. What type of relationships do we have? I’m really thinking about what the interaction is we have with things, and how we can define which are best suited for AI, which for humans, and how we relate to that.
How do we perform together in a certain way? It’s an interesting question. Some people think that AI is just at an early stage of becoming human-like. But I think we have evolved for such a long time that AI is a different type of breed, maybe.
So, what types of relations can we have here? There is, of course, a lot—especially when we had these conversational devices starting to pop up in our relationships.
Ross: So one of the strongest relationships, I suppose, is collaboration. And that’s the idea of collaborative intelligence, where we have collective human intelligence between humans, which we’ve had since we first gathered around fires.
And now, of course, as you say, this intelligence is different but hopefully complementary to us. And so there’s a whole set of relationships with a set of humans, a set of AI. And so intelligence, I think you’re suggesting, emerges from that collaboration.
Iskander: Definitely, yes. That’s an interesting point indeed, because also when you use it yourself, or even the current iteration of it, there’s this reflection that you have, or the interaction that you have with the current tools already.
It’s also how I use them myself, mainly for writing now. In the weekly column I write, I always try to put my first stream of consciousness into the AI and see how it responds. It’s not so much that it makes something for me; it’s really reflecting on myself.
So it’s an interesting one—how it’s mirroring my own thinking, and how it can deepen that. So it’s in-depth collaboration, more like a co…
Ross: So have you designed the tools to be digital twins, to mirror yourself, or to be a complement to you? Or, if so, how have you done that?
Iskander: Not mirroring, but more like a co-author or intern. Different levels. I think it’s a way to make it more accessible.
I’d say, well, I just have some support, based on what I see. How can I put it in a little bit more structure and use these capabilities of the AI tools for that? But also, if the right ones are used, they could give more real reflections—whether it’s a good stream of thoughts, or introducing new things.
That would be the ideal case, of course. I think you can really open a path that you didn’t see yet, or challenge your own biases. I think that’s the real value of a good human-AI team: you can correct each other.
Ross: So, how can you best get it to open up new pathways for you, or to uncover or reflect on your biases? Specifically, how do you use it to do that?
Iskander: Well, it’s just pointing it in certain directions, asking certain questions. You put some sources into it and see if it finds similar things.
And it’s always an interesting question—if it’s really doing new things and coming up with new stuff, or if it’s more like taking what you’re already thinking about yourself and just structuring it more. That’s still the phase we’re in now, I guess.
Ross: So that’s really about the intent. You’re interfacing with an LLM. So this is one relationship at this point: we’re talking about a human—you in this case—with an LLM.
And so you’re saying it’s around the intent, that you’re always looking for it to open up new pathways for you, or to compensate for your biases, and so on. So it’s really the way you guide your conversations to get the value. Is that right?
Iskander: True. Yeah, I think that’s true.
And of course, I’ve been thinking about research on what I call predictive relations: what will happen when this AI becomes more intelligent, or when we have more information from similar situations? It’s not really predicting, but more like having a sort of knowledge beforehand.
How will it change your relation to that one thing you’re using? The mental model can change because it can add some extra information.
So if you ask what type of relations we have—this is what I’ve now described as the positive version. You use it, and you reflect on it. But it could also produce something like a chilling effect, where you adapt to it because you expect it to start behaving in a certain way.

Maybe that behavior isn’t actually happening—but you are adapting. That’s the other side of the coin.
Ross: That opens up multiple frames here. One phrase that you used in your writing hypothesizes that humans may not be, as you describe it, at the top of the cognitive hierarchy.
I mean, I guess one of the points I always make is that cognition, or intelligence, is not one-dimensional. There are some dimensions where AI is far more intelligent than humans, and others where humans are far superior.
I still don’t necessarily see that every single dimension of human intelligence will be transcended. But just looking at that point, saying, all right, well, let’s say AI has better and better cognition, better and better intelligence. What does that then do to the human-AI relationship in collaborative intelligence?
Iskander: That’s an interesting question. Of course, is cognitive knowledge the same as intelligence?
I think what you are also saying is that it’s not a kind of general “on top of the cognitive hierarchy,” but maybe more on specific topics. You can use it almost more as a tool to find out more things.
You cannot read everything, you cannot do everything. But you can make more sense. I think humans still have more intelligent capability to synthesize and make sense of stuff, to come up with new ideas. Even if some of these tools can help you with that, or be creative in a certain way, it’s still related to what you feed them.
I don’t know if this was an answer to your question, by the way.
Ross: Yes. I mean, I understand you are essentially from a design background, and so I guess there are a number of questions here. One is, of course, from the very outset, when we started to have AI, my thinking was around interfaces.
What’s the interface between them? How does the human get what is useful from the machine? How do you get those feedback loops?
But there’s another layer, where design itself—the nature of what design is—almost starts to change. So I’d really love to hear your thoughts, first of all, on the human-AI interface. And of course, many people are working on this and trying to get better. But just this framing of that, and then more broadly, thinking about how design itself is changing.
Iskander: Yeah, we see, of course, that design itself is a very important aspect of how we use AI, and how it becomes usable and accepted.
The chat interface became such a dominant thing. But the way that you combine it—I used to really look at more conversational parts, but also at how to design for notifications, or more like interruptions or intentions.
Having this conversation part, I think, is really important. And what will happen when it’s more hidden? How can we prevent living in a space where we don’t have any idea, where we live in a kind of world of black-box conversations?
So I really believe we need to design friction into the system. Usually in digital spaces, the aim is to remove all the friction. But I think it’s really important for us to understand what’s happening in that system.
Ross: You mentioned interruptions before—that was interesting. So, for example, interruptions as one of the interaction devices. What is the role of interruptions, and how does that work?
Iskander: Well, interruptions in the sense that you have this… I’m thinking how I can frame that, but yes—thinking of interruptions.

Take this background noise right now as an interruption, by the way. I don’t know if it’s filtered out, but for me, it’s an interruption.
So yes, being interrupted can be something that makes you stop and think, or that brings in something at a certain moment. Designing something where you really think about journeys—not as a fixed thing, but more like a narrative that adapts to what happens, including interruptions.
Ross: Pulling up to bigger frames—macro frames or micro frames, changing cognitive frames—rather than just being in a little linear flow.
Iskander: Yeah, something like that. And I really think we have a different type of experience as humans. We have this immediacy concept, and we already consume things differently. We are changing how we consume media, how knowledge is formed, and how it is acknowledged.
How can we design deliberately more thoughtful conversations sometimes? That’s not always the main goal of design, because, as I said, it’s a kind of friction that you bring. But it can make interactions so much more valuable and much deeper in that sense.
Ross: So you’re designing, I suppose, a more exploratory or emancipatory architecture, rather than just a single artifact.
Iskander: Yeah, and that makes it really interesting. How far is the vibe coding kind of thing also going—will it become more like a part of everything that we make? Will we increasingly be making personal services and devices? Is that what we will grow into, or is that just for a couple of people who really like to do that?
That’s of course not clear. But I can imagine that this type of interaction in the beginning—what you need, what you want, how you want to use something—becomes a little bit more common as a way of doing things.
Ross: So, just speaking about design—I can’t remember the exact words from one of your newsletters—it was something to the effect of: AI is changing design. Not just the process, not just how we use AI in design, but at a more fundamental level. So, what is the future of design? Let’s put the question that way.
Iskander: Yeah, it’s a hard question, of course. What is the future of design? One of the thoughts I know is that, of course, we have different tooling now. That’s the short answer.
But also, the way we collaborate with these tools allows us to have a different way of working in the design process. What we design is often not new—it’s more like a remix of something that’s already there.
We are also using intelligence almost as an informant, an informing layer of things that we can use. How will that play out? Maybe things start to design themselves, not entirely from the start, but gradually. We really become more like collaborative partners in thinking, and perhaps more like the creative director or creative manager of the things.
That’s already happening in digital design. You see that a lot of roles are shifting—from doing the detailed design to giving more direction about what you want to achieve. The real detailing of artifacts or assets is not so important anymore, or is more delegated to the tools.
Of course, we are still at the beginning. But things are moving rapidly in that direction, and you can see how it influences the labor force of design.
Ross: Design becomes a co-creative process. Rather than design being static or created by the designer, it is co-created by the user together with the system, and it is constantly evolving.
So we have co-design, co-created, perpetually evolving design, as opposed to something static and imposed.
Iskander: Well, definitely. And it was always—it’s not that new in that sense. A long time ago, when I was fully in the design of digital products, we were thinking about personalized and adaptive websites, and how they could model themselves.
But it was always kind of scripted. And now we have, of course, a much more open canvas that can be filled in. That’s really interesting.
We are still in a phase where we’re looking for how to approach it. You see that some agencies are experimenting with synthetic personas, or trying to test things only with AI role-playing as the user. I think that’s not really the right way to go, because you’re still making things for humans—or maybe for humans and AI. Maybe we should find a way to test these combinations. That’s an interesting one.
What you’re saying is right: it remains a process. You’re designing much more of the rules or the forms, or whatever we call that shape of a product or thing that changes over time with use.
Ross: Yeah. So, you’ve got an excellent newsletter—thank you for creating and sharing that.
I wanted to ask about how you use tools. We talked about that a little bit, but I’d like to pull that into the context of your newsletter, because you get exposed to interesting things, you reflect on them, and then crystallize some writing.
You also find a bunch of very interesting links, which you have some structure around. So you clearly have a fairly divergent thinking process generally, but you’re able to converge into this interesting newsletter.
That would be a great point to see—how do you use the tools throughout the week?
Iskander: Yeah, that’s also nice, because it has changed over the last one or two years. I’ve been writing this newsletter for a longer pre-LLM time already—collecting interesting news, RSS feeds, and all that kind of stuff. That’s the traditional way of filtering out what I think is interesting. That was purely a human thing.
Nowadays, every week I find something that sparks my interest, something that makes a difference.
Ross: And then you’ve got a good RSS feed, and you scan that?
Iskander: Yeah. It could be anything. It could also be a video or a TikTok—whatever sparks something. There’s always some trick or idea that gives an extra impact or a different lens.
Like this week, I combined a couple of things. The Mars intelligence—someone even wrote something about it on a molecular level—and I connected that to immersive AI. But also, one of my favorite writers had a post about rethinking errors and the meaning of AI.
So I try to combine these kinds of things. First, I combine them in my head. Then I start brainstorming with myself, speaking in and out, creating a kind of stream of consciousness. Sometimes I do this a couple of times.
Then I start using the LLMs. I use this tool called Lex, which has different models and is specifically made for writing. I put in this stream of consciousness and ask it to structure the text without changing the content, just giving it a little structure.
Sometimes I have more conversations with the AI. Sometimes it’s fine as it is. It depends. The rest of it is still quite human—my personal touch, my personal thinking and reflections.
So I really use the AI mainly for the writing piece of the little column.
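As a concrete illustration of the workflow described here (putting a raw stream of consciousness into an LLM and asking it to add structure without changing the content), here is a minimal sketch. It is a generic stand-in, not Lex’s actual interface, since Lex is a web app; it uses the OpenAI Python client, and the model name, prompt wording, and file name are illustrative assumptions only.

```python
# Minimal sketch of the "structure, don't rewrite" step described above.
# Generic stand-in for a writing tool like Lex; uses the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

INSTRUCTION = (
    "Below is a raw stream-of-consciousness draft for a weekly column. "
    "Reorganize it into clear paragraphs with a logical flow, but do not "
    "add, remove, or rephrase ideas: keep the author's wording and content "
    "intact, and only restructure the text."
)

def structure_draft(raw_draft: str, model: str = "gpt-4o") -> str:
    """Return the draft with structure imposed but content left unchanged."""
    response = client.chat.completions.create(
        model=model,  # illustrative model name
        messages=[
            {"role": "system", "content": INSTRUCTION},
            {"role": "user", "content": raw_draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical input file holding the spoken or typed first draft.
    with open("stream_of_consciousness.txt") as f:
        print(structure_draft(f.read()))
```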
Ross: So what about others? Obviously you have lots of interesting human collaborators. Are there any structures—even just human-to-human—or other ways you’re finding useful or emerging, for how we can have multi-person or multi-entity conversations that surface interesting things?
Iskander: Yeah, still quite traditional that I always get something out of conversations with people. But I’m also an organizer of events. I have my own conference that a couple of people organize every year, and that triggers thinking about themes of the year and having some in-between events. That’s a way to get people together and get new insights. That helps a lot.
So that’s still something very important. And I’m also now working with another agency to think about what we should create. Can we create a kind of masterclass about team AI and team human-AI, and what will it be?
While speaking about that, you come to more insights about what it’s really about. As I said in the beginning, it’s much more about thinking about the relations between these two. What are you designing? How are you shaping these teams based on the relations you want to build between AI or a couple of AI workers?
And without trying to replace a whole team with AI—which some people try to do—I think that’s not really the best way.
So yes, finding ways to talk to people is still a very important part. Listening to podcasts, hearing people talk—you need to have this to tap into that. I think that’s the main point.
Ross: Yeah, I think still conversations. I think human conversations are the best source of ideas and pleasure and just about everything good. I don’t think we’re going to end conversation soon.
But what is interesting, as you say, is that one of the things in some collective intelligence work has been AI used for behavioral nudges. In a group, these nudges facilitate the collective intelligence of the group.
More generally, this comes back to the relationship piece you’ve been laying out. A moment ago, you said it’s around knowing what you want from it, and shaping what those relationships need to be. And we’re obviously right in the middle of working all this out at the moment.
Iskander: Well, yeah, it’s really depending. We are looking at certain cases, like an English social workers team that needs information about people they are helping—youth health or something else.
You can imagine that there are already some expert systems in that team, traditional knowledge systems that may become more like AI. How will that become more part of that relation?
If you can say, well, it’s not only a tool that you use, but you can also use it as a reflection on your own thinking. There’s already some research that shows the combination—especially in health contexts, with doctors—is valuable. You don’t want to have one AI doctor. The combination is important.
I can remember research showing the trick was that it’s very important the doctor is open to new insights. You also have to be critical of new insights.
Maybe it’s interesting—an example here in the Netherlands. Two weeks ago, a professor at another university ran an experiment with master’s graduation students, where all the coaching was done by AI. It was also part of an assignment, or research, about that.
They really tried it out. Of course, he had to find all these committees to approve it, but he didn’t really do anything except reflect at the end. I thought, well, okay, nice experiment—and of course, grabbing a lot of attention.
He was already concluding that in a functional way—finding new information or deepening knowledge for the student—AI was okay. But it was not really a critical reflection on what was done. And it was also not really teaching the students about academic thinking and research. So there were definitely things lacking there.
Ross: Yeah. Back to the design of the relationships. It was an experiment, but you can see you can learn from what sorts of relationships work and don’t work. You could also learn in other contexts.
Iskander: You could also think it would be very interesting—or maybe more interesting and with better outcomes—if you had the AI in the same role, but every week or two weeks the student and the professor discussed together: what is the AI now advising you, and how can we use that?

Then you’re creating a team, rather than just having the AI as a tool. That’s, I think, a better step in between; then he’d really be trying out a kind of human-AI team.
Ross: Great. So where can people go to find out more about your work—like your newsletter, or anything else?
Iskander: Yeah, well, the newsletter, of course, which you mentioned. You can find it at my name, iskandersmit.nl. That’s the direct link to it.
Or you can just go to Cities of Things—that’s my other lens, my research—citiesofthings.org will get you there. That’s where I share all my research. Those are my main things, I think.
Ross: All right. Well, thanks very much for your time and your insights.
Iskander: Yeah, okay, super. Good luck and good night.