Humans + AI

Suranga Nanayakkara on augmenting humans, contextual nudging, cognitive flow, and intention implementation (AC Ep16)

August 26, 2025

“There’s a significant opportunity for us to redesign the technology rather than redesign people.”

–Suranga Nanayakkara

About Suranga Nanayakkara

Suranga Nanayakkara is founder of the Augmented Human Lab and Associate Professor of Computing at the National University of Singapore, and Honorary Professor at the Auckland Bioengineering Institute (ABI) at the University of Auckland. He is founder of a number of startups and social enterprises, including Spark Academy. His awards include MIT Technology Review Innovator Under 35 (Asia Pacific) and Outstanding Young Persons of Sri Lanka.

Website:

ahlab.org

intimidated.info

LinkedIn Profile:

Suranga Nanayakkara

University Profile:

Suranga Nanayakkara

What you will learn
  • Redefining human-computer interaction through augmentation

  • Creating seamless assistive tech for the blind and beyond

  • Using physiological sensors to detect cognitive load

  • Adaptive learning tools that adjust to flow states

  • The concept of an AI-powered inner voice for better choices

  • Wearable fact-checkers to combat misinformation

  • Co-designing technologies with autistic and deaf communities

Episode Resources

Transcript

Ross Dawson: Suranga, it’s wonderful to have you on the show.

Suranga Nanayakkara: Thanks, Ross, for inviting me.

Ross: So you run the Augmented Human Lab. I’d love to hear more about what augmented human means to you, and what you are doing in the lab.

Suranga: Right. I started the lab back in 2011, and part of the reasoning is personal. My take on augmentation is really that everyone needs assistance. All of us are disabled, one way or another.

It may be a permanent disability. It may be that you’re in a country where you don’t speak the language and don’t understand the culture. For me, when I first moved to Singapore, I didn’t speak English. I was very naive about computers, to the point that I remember very vividly that back in the day, Yahoo Messenger had this notification sound of knocking, and I misinterpreted it as somebody knocking on my door.

That was very, very intimidating. I felt I wasn’t good enough, and that could have been career-defining. With that experience, as I got better with the technology, when I wanted to set up my lab, I wanted to think about how we redefine these human-computer interfaces so that they provide assistance, because everyone needs help.

And instead of just thinking of assistive tech, how do we think of augmenting our abilities, depending on your context and your situation? I started the lab focused on augmented senses. We were focusing on sensory augmentation, but a couple of years later, with the lab growing, we adopted a broader definition of augmenting humans, and that’s when the name became Augmented Human Lab.

Ross: Fantastic. There are so many domains and so many projects of yours which are very interesting and exciting, and I’d love to go through some of those in turn. One is around assisting blind people. I’d love to hear more about what that is and how it works.

Suranga: Right. The inspiration for that project came when I was a postdoc at the MIT Media Lab, and there was a blind student who took the same assistive tech class as me. The way he accessed his lecture notes was to browse to a particular app on his mobile phone, open the app, and take a picture, and the app read out the notes for him.

For him, this was perfect, but for me, observing his interactions, it didn’t make sense. Why would he have to go through so many steps before he could access information? And that sparked a thought: what if we take the camera out and put it in a way that it’s always accessible and needs minimum effort?

I started with the camera on the finger. It was a smart ring: you just point and ask questions. That was a golf ball-sized, bulky interface, just to show the concept. As we iterated, it became a wearable headphone which has a camera, a speaker, and a microphone. The camera sees what’s in front of you, the speaker can speak back to you, and the microphone listens to you.

With that, you can enable very seamless interaction for a blind person. Now you can just hold the notes in front of you and ask, please read this for me. Or you might be in front of a toilet and want to know which one is female and which one is male. You can point and ask that question.

So essentially, this device, which we now call AiSee, is a way of providing this very seamless, effortless interaction for blind people to access visual information. And now we realize it’s not just for blind people. I actually used it myself.

Recently I went to Japan, and I don’t read any Japanese, and pretty much everything is in Japanese. I went to a pharmacy wanting to buy medicine for a headache, and AiSee was there to help me. I could just pull out a package and ask, AiSee, hey, help me translate this, what is in this box? And it translates for me.

So the use cases, as I said, although it started with a blind person, cut across various abilities. And again, it is supporting people to achieve things that are otherwise hard to achieve.

Ross: Fantastic. So just hopping to another of the many projects and pieces of research you’ve done, this one is around AI-augmented reasoning. This is something which can assist anybody, and you particularly focus on this area of flow.

We understand flow from the original work of Csikszentmihalyi and so on, how to get into this flow state. I understand that you have sensors that can understand when people are in flow states, to be able to help them in their reasoning as appropriate.

Suranga: Right. So this is very early stage. We just started this a few months ago. The idea is we have been working with some of the physiological sensors — the skin conductance, heart rate variability — and we understand that based on this, you can infer the cognitive state.

For example, when you are at a high cognitive load or a low cognitive load, these physiological sensors show certain patterns, and it’s a nice, non-invasive way of getting a sense of your cognitive load.

As the flow theory says, this is about making the task challenging enough — not too challenging or too easy. We can measure the load based on these non-invasive signals, at least get an estimate, so that you can adjust the difficulty level of the task.

That’s one of the very early stage projects where we want to have these adaptive interfaces, so the user doesn’t drop the task because it’s too difficult or because it’s too easy. You can adjust the task difficulty based on the perceived cognitive load.
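To make the idea concrete, here is a minimal sketch of load-adaptive difficulty, assuming hypothetical sensor features and placeholder normalization constants. It is an illustration of the principle Suranga describes, not the lab’s actual pipeline:

```python
import numpy as np

def estimate_cognitive_load(scl_samples, rr_intervals_ms):
    """Very rough load estimate from skin conductance level (SCL, in
    microsiemens) and heart-rate variability (RMSSD over RR intervals).
    Higher SCL and lower HRV are commonly associated with higher load."""
    scl_mean = np.mean(scl_samples)
    rmssd = np.sqrt(np.mean(np.diff(rr_intervals_ms) ** 2))
    # Normalization constants below are hypothetical placeholders.
    scl_score = np.clip((scl_mean - 2.0) / 10.0, 0.0, 1.0)
    hrv_score = np.clip(1.0 - rmssd / 100.0, 0.0, 1.0)
    return 0.5 * scl_score + 0.5 * hrv_score

def adjust_difficulty(level, load, low=0.3, high=0.7):
    """Keep the task in the flow channel: raise difficulty when the
    user is under-loaded, lower it when over-loaded."""
    if load < low:
        return level + 1          # too easy: raise the challenge
    if load > high:
        return max(1, level - 1)  # overloaded: back off
    return level

load = estimate_cognitive_load([4.1, 4.3, 4.2], [810, 795, 820, 805])
print(adjust_difficulty(level=3, load=load))
```

The thresholds defining the flow channel (0.3 and 0.7 here) are arbitrary; a real system would calibrate them per user.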

Ross: So interesting. Where do you think the next steps are? What is the potential from being able to sense the degree of cognitive load, or your frame of mind, so that you can interact differently?

Suranga: One of the things I’m really excited about is lifelong learning, continuous learning. Because of the emergence of technology, there’s a lot of emphasis on needing to upskill and reskill.

I’m also overseeing some of our university’s adult learning courses. If you think of adults who are trying to upskill themselves, the way to teach and provide materials is very different from teaching, say, regular undergraduate classes.

For those, there is a possibility of providing certain learning materials when the adult learner is ready to learn. They’re busy with lots of other responsibilities — work, families, and all these things. So if we can have a way of providing these learning opportunities based on when they are ready to learn, it may be partly based on cognitive state, partly based on their schedules.

I think one way to use this information is to decide when to initiate learning and how to increase or decrease the difficulty of the learning material as you go. If you can detect the cognitive load and then maintain the flow, that’s an area with huge potential.

Ross: Yeah, absolutely. One of the projects was called Prospero, which is, I think, along the lines you’re discussing. It’s a tool to help memorize useful material, but it understands your cognitive context as to when and how to feed you things for learning.

Suranga: Right. We started this specifically for older adults, and the idea was to help train their prospective memory. One of the techniques that has been reported as effective in the literature is called intention implementation.

So basically, if I want to remember that when I meet Ross, I need to give you something, you mentally visualize that as an if-then statement. First, we tried: can we digitize that, without a human, through a mobile app? I provide what I would like to do, it breaks it down into an if-then statement, and it gets me to visualize that. That was the first part.
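As a rough illustration of that first part, the digitized if-then representation might look something like this. This is a hypothetical sketch; the class and method names are invented, not Prospero’s actual code:

```python
from dataclasses import dataclass

@dataclass
class Intention:
    cue: str      # the "if" part: the situation that should trigger recall
    action: str   # the "then" part: what to do when the cue occurs

    def visualization_prompt(self) -> str:
        # The sentence the app asks the user to mentally visualize.
        return f"If {self.cue}, then I will {self.action}."

intent = Intention(cue="I meet Ross", action="give him the document")
print(intent.visualization_prompt())
# -> "If I meet Ross, then I will give him the document."
```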

We saw that digitization does retain the effectiveness. Then the next question was: is there a better time to initiate this training? That’s where we brought in the cognitive load estimation. Instead of a time-based or user pre-assigned schedule for training, we compared that against our technique, which is based on cognitive load.

We found that when you provide this nudge to start training when the user has less load, they are more likely to notice it and more likely to actually start the training.

I think this principle probably goes beyond just training memory. It could be used as a strategy for getting attention to any notification. Rather than notifying randomly, you can notify when you think the person is more likely to attend to that notification.

Ross: Yeah, I think that’s part of it. If you have a learning tool, you want to use it at the right times. There’s partly a bit of self-guidance, as in saying, well, this is a good time for me to study or not. But I think it’s wonderful if the tools start to recognize when is a good time for you to be learning, or say, hey, now’s the time when this is a good task to do.

If we can proactively understand cognitive state or cognitive load and then guide what the appropriate activities are, resting might be the best thing to do. Or in another state, something provided with a more entertaining frame. Or sometimes it may say, okay, well, this is more complex, and this is the right time to serve it to you.

So it’s very deeply context-aware, as I think all of your work is.

Suranga: Yeah, exactly. And that’s a key word. I think cognitive load alone may not cut it. For example, I may be at a low cognitive load, but contextual information, like time, might matter. It’s the middle of the night, so there’s no point nudging me. Or my schedule might indicate I’m at a party.

So we need to take this contextual information — time, the location, what’s in my schedule — plus your body context through these physiological sensors, so that we can try and make the best decision to support the user.
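A minimal sketch of such a decision rule, combining body context with situational context. Every rule and threshold here is an illustrative assumption, not the lab’s actual policy:

```python
from datetime import datetime

def ok_to_nudge(load: float, now: datetime, calendar_busy: bool,
                load_threshold: float = 0.3) -> bool:
    """Combine estimated cognitive load (body context) with time and
    schedule (situational context) before deciding to nudge."""
    if not 8 <= now.hour < 22:    # middle of the night: never nudge
        return False
    if calendar_busy:             # schedule says the user is occupied
        return False
    return load < load_threshold  # only nudge when load is low

print(ok_to_nudge(load=0.2, now=datetime(2025, 8, 26, 14, 0),
                  calendar_busy=False))  # True
```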

Ross: Which goes to another of your many wonderful projects, around an AI inner voice for contextual nudging. I believe very much in this idea of behavioral nudges, and in AI being able to understand when and how to give the best nudges for behavioral change. Could you tell us more about this AI inner voice?

Suranga: Right. This is actually a joint project between my former advisor, Pranav Mistry from the Media Lab, and my lab. The students explored this idea where you have your better self.

You promise yourself that you’re going to eat healthy, and then you have that perfect self. With context-aware wearables, let’s say, for example, I’m now seeing a chocolate and I’m very tempted to take it. The wearable might see there are some apples on the side. Then your better version, in your own voice, says, “Hey, that apple looks fresh. Why don’t you try that?”

Or say, for example, I’m facing an interview and I’m searching for words, and my better self, who wanted to be confident, might whisper to me, “Hey, you can do this,” and even suggest a couple of words for me to fill in the gaps.

So that’s the concept we published last year at one of the main Human-Computer Interaction conferences, showing that this inner voice, your own voice clone, has a lot of opportunities to nudge you, making you more likely to change your behavior.

Ross: That’s an absolutely fabulous idea. So is this just a concept of this voice, or is this being implemented?

Suranga: In the research paper, we showed this proof of concept — making better choices of what you eat, being able to face an interview more confidently. We showed a couple of proof-of-concept cases where this was actually implemented as a working prototype.

Ross: Another thing which is very relevant today is a wearable fact-checker. Because facts are sometimes not facts, wherever we go. So it’s good to have a wearable fact-checker. How does this function?

Suranga: As you rightly said, this is very emergent and, again, a very early stage project. But the idea is: how do we allow users to be more aware of the presence of potential misinformation?

The way we have implemented our initial prototype is that it listens to the conversation, and firstly it tries to differentiate what’s just an opinion versus what’s a fact-checkable statement. If it’s the latter, it then looks for factual consistency, looking for agreement among multiple sources from a knowledge-base search.

If there is a potential that a statement is factually wrong, it nudges the user through a vibration on their smartwatch at that point. The user can then tap it and see why it is nudging, and what the contradiction might be.
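In outline, that pipeline could be sketched like this. This is a hypothetical reconstruction with stub logic; the real prototype would use trained classifiers and retrieval, and all function names here are invented:

```python
def is_checkable(statement: str) -> bool:
    """Separate opinions from fact-checkable claims. A real system
    would use a trained classifier; this marker check is a stub."""
    markers = ("i think", "i feel", "in my opinion")
    return not statement.strip().lower().startswith(markers)

def consistency_score(statement: str, sources: list[str]) -> float:
    """Fraction of retrieved sources that agree with the statement;
    a stand-in for a proper entailment/agreement model."""
    agree = sum(statement.lower() in s.lower() for s in sources)
    return agree / len(sources) if sources else 0.0

def maybe_nudge(statement: str, sources: list[str],
                vibrate, threshold: float = 0.5) -> None:
    """Fire a haptic nudge when a checkable claim disagrees with sources."""
    if is_checkable(statement) and consistency_score(statement, sources) < threshold:
        vibrate()  # e.g. a smartwatch vibration at that moment

maybe_nudge("the moon is made of cheese",
            sources=["The Moon is rock.", "The lunar surface is basalt."],
            vibrate=lambda: print("bzzt: possible misinformation"))
```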

So as we speak, we are running a study to figure out how people respond when they watch videos. Some videos look very real; some are not actually deepfakes, they are real, but especially in some of the political speeches, lots of statements are factually incorrect. We nudge the users, and we want to see what that nudging leads to.

Do users stop the video, go and search for themselves, and make informed decisions? Or do they just continue to watch it because they believe in that particular person so much? Or do they take the nudging as completely true — because AI can make mistakes — and mark all those statements where they felt a nudge as incorrect?

So we are trying to look at how actual users behave when there is a system that gives you a vibration nudge when it thinks there is potential misinformation. We will see the results very soon, and hopefully we will publish them as a research paper.

Ross: Very interesting indeed. So more generally, you started off by saying that everyone needs assistance at times, and some of these tools are also for situations such as autism or dyslexia. There are obviously any number of ways in which we can assist in those areas. So what do you think are the most promising directions for technology to support people? Let’s start with autism.

Suranga: So I think the key thing, even before the technology, is what we realized about co-design. In one of the projects we did with kids with autism, we actually worked with the therapists and the school teachers for about a year to come up with what might be effective.

Rather than doing a technology push, we wanted to co-design, so that we are not building things for the sake of building, but because there’s real value. One specific example: we built these interactive tiles. They can be on the floor. Smaller versions can be on the wall, and they light up. They sense the user’s touch. They can also make sound.

It’s a simple technology, but the use cases came, again, out of this year-long co-design process. The teachers said, we want this to have specific interactions to support the kids’ social skills, their physical skills, and their cognitive skills.

So for example, the teachers can put these tiles and make them light up in a certain order. The kids have to follow the same order — that’s training their memory. The same tiles can be spread across the room, and then they light up, and the kids have to run and tap them before the light goes off — that’s getting them to engage physically.

These tiles can also be distributed among a set of kids, and each tile becomes a music instrument, and then they can jam together. That’s getting them to enhance their social interaction.
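As a flavor of the kind of interaction logic involved, here is a toy sketch of the memory game on simulated tiles. The tile API here is invented for illustration; the real tiles sense touch and drive lights and sound in hardware:

```python
import random
import time

class Tile:
    """Simulated tile; real tiles would drive LEDs and read touch sensors."""
    def __init__(self, tile_id):
        self.tile_id = tile_id

    def light_on(self):
        print(f"tile {self.tile_id}: light on")

    def light_off(self):
        print(f"tile {self.tile_id}: light off")

def memory_round(tiles, taps, length=3):
    """Light tiles in a random order, then check whether the child's
    taps (passed in here for the simulation) reproduce that order."""
    sequence = random.sample(tiles, k=length)
    for tile in sequence:
        tile.light_on()
        time.sleep(0.2)  # real tiles would pause long enough to watch
        tile.light_off()
    return [t.tile_id for t in sequence] == taps

tiles = [Tile(i) for i in range(6)]
print(memory_round(tiles, taps=[0, 1, 2]))  # True only if the random order was 0, 1, 2
```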

Yeah, I think the main lesson I learned is that there’s huge potential in technology, but it’s equally important to work with the stakeholders, so that we know the best way to utilize it and the end solution is going to be effective and used in a real context.

Ross: Yeah, which I think goes to this point of feedback loops in building these systems, where part of it is, as you say, the co-design. You’re not just giving something to somebody and saying, hey, use it, but helping them to design it and create it. But also the way in which things are used, or the outcomes they have, start to flow back into the design. And I imagine that there’s various ways AI can be very useful in being able to bring that feedback to refine or improve the product or interaction.

Suranga: Yep, that’s very true. And the other beautiful thing with this co-design process is that sometimes you discover things as you go. You don’t go in with a preset list of things that you just want to convince the other stakeholders of. True co-design is discovering things as you develop.

I remember my PhD project, which was about providing a musical experience to deaf kids by converting music into vibration so that you can feel it. Initially, we thought about the sensitivity range of vibration sensation: hearing spans 20 to about 20,000 hertz, whereas vibration is much lower; it cuts off around 1,000 hertz.

So initially we thought, why don’t we compress all the audio into the haptic range and then provide that through the vibration feedback mechanism? But it didn’t work. Some of the deaf kids and the therapists we worked with were like, no, when you compress awkwardly, these kids can also feel that awkwardness. Some of them said this is not even music.

Accidentally, one of the kids tried our system bypassing that whole compression, just playing the music as normal and letting their body pick up different vibration frequencies. The legs and back are good at picking up low frequencies; the fingertips are good at picking up high frequencies. That completely changed the design.

So instead of doing our own filtering, we let the body become the filter and just convert the music without preprocessing through this chair structure. And that was super useful.
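To picture the difference between the two approaches, here is a hypothetical reconstruction of the abandoned frequency-compression step. The numbers and method are illustrative, not the actual PhD implementation:

```python
import numpy as np
from scipy.signal import resample

rate = 44_100
t = np.linspace(0, 1.0, rate, endpoint=False)
audio = np.sin(2 * np.pi * 5_000 * t)  # a 5 kHz tone, well above the haptic range

# Abandoned approach, roughly: stretch the signal so that, played back at
# the original rate, every frequency is divided by `factor`, squeezing the
# audible spectrum under the ~1 kHz vibrotactile cutoff. The side effect is
# that the music also becomes `factor` times slower and loses its character,
# which is the kind of awkwardness the deaf listeners rejected.
factor = 20                                    # hypothetical compression ratio
haptic = resample(audio, len(audio) * factor)  # the 5 kHz tone becomes ~250 Hz

# Approach that worked: no preprocessing at all. Drive the chair's actuators
# with the raw `audio` and let the body itself act as the frequency filter.
```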

Why that’s impactful is that for about 15 years now, these school kids have been using this on a daily basis, feeling music and developing their own preferences for different music genres. For me, that was a moment of discovery. Rather than forcing what you thought and trying to convince others, you kind of discover as you go.

Ross: Absolutely, that’s a great example. So I’d like to come back to the beginning, where you said you were confused by Yahoo Messenger and intimidated by technology. I think that’s a universal experience. Almost everybody comes across technology and thinks, this is hard, this is difficult, it’s confusing. But you obviously went past that, to using technology as an enabler and understanding its capabilities.

So what is it that enabled you, what brought you from being confused by technology to now being able to use it to help so many people?

Suranga: I think a bit of it was the thought process. Initially, as I said, I was very concerned that I wasn’t good enough for engineering. But when I really thought about that specific example, what a sensible person does when they hear a knocking sound is check the door, right? Nobody would expect you to check what’s on the screen.

So it convinced me that what I did, although it was a mistake, was the sensible thing to do. And it also established a deep belief that technology can be redesigned. I don’t need to change myself to learn it; there should be a way to redesign technology so that we don’t have to change our natural behavior so much.

One particular example I worked on immediately after my graduation was moving digital media across devices. In our culture, we have this colored powder: you take it from a container and put it somewhere else. That’s copy-paste. And we enabled a technique where you can just touch a phone number on a web page and drop it onto another device, and it copies.

Of course, the digital transfer happens through the cloud, but the interaction is super simple. And with those examples, my belief became stronger and stronger that there’s a significant opportunity for us to redesign the technology rather than redesign people.

Ross: No, totally. 100% right. So I gotta say, there are so many times when I’m using a technology, I think, am I stupid? No, the technology is badly designed.

Yeah, it’s still amazing — it’s 2025, and we still have so much bad design. If it’s not easy to use, if it’s not intuitive, if we can’t work it out for ourselves, if it’s confusing — that’s bad design. It’s not a stupid person.

So where do you see the potential? What’s next? You’re obviously doing so many exciting things at the moment. What’s on the horizon for Augmented Human Lab?

Suranga: I think there’s a lot of momentum in the ecosystem. If you think about it, AI is here to stay. Every morning when you wake up, there’s a new model being released and a new paper being published. There’s momentum there.

I think it’s a matter of time before robotics catches up. Also, some of these wearable devices have become commodities at the consumer level, so there are very easy ways of building things that are super seamless to wear.

With all these things, I think there’s a significant opportunity for us to create these augmentations that help us make better decisions, help us learn things, and basically help us become better versions of ourselves. And we shouldn’t even need to be so dependent on them. They could be designed in a way that helps us acquire certain skills, and then they can drop off.

So they should be more like crutches than permanent augmentation. That’s why I believe so much in this non-invasive augmentation, where I need to get a particular skill, and just like a rocket engine, it might push me to a certain level, and then it can drop off.

With this emergence of AI, robotics, and some of the wearables, we are excited to design this next layer of human-computer interfaces.

Ross: That’s fantastic. So where can people go to find out more about your work?

Suranga: They can check out our work at our website, www.ahlab.org — and that has all the stuff that we have been doing.

Ross: Fantastic. Thank you so much for your time and your insights and your wonderful work.

Suranga: Thanks, Ross.
