The Communication Solution


Mastering Empathy: Navigating the Future of Motivational Interviewing in the Age of AI

October 24, 2023

Welcome to today’s episode of The Communication Solution podcast with Casey Jackson, John Gilbert and Danielle Cantin. We love talking about Motivational Interviewing, and about improving outcomes for individuals, organizations, and the communities that they serve.
In this thought-provoking podcast episode, we delve into the intersection of Motivational Interviewing (MI) and Artificial Intelligence (AI). We explore the potential implications of AI on the practice of MI, touching on topics such as emotional intelligence, genuineness, and the profound connection that occurs in human interactions.


In this episode, we discuss:
  • Exploring the profound impact of artificial intelligence on motivational interviewing.
  • Reflecting on the potential of AI to emulate cognitive empathy, but the challenge of replicating emotional empathy.
  • The significance of genuineness and its complexity in the realm of AI and human interaction.
  • Considering the depth of connection and meaning that arises from genuine, soul-to-soul human interactions.
  • Delving into the concept of spiritual intelligence and its potential role in the future of human interactions.
  • Reflecting on the ever-evolving landscape of AI and the need to approach it with compassion as a moral compass.
  • Embracing the role of humans as the masters of AI, recognizing the importance of individual biases and experiences.
  • Discussing the potential for AI as a tool and the responsibility of humans to navigate it with wisdom and empathy.
  • Playfully envisioning a podcast episode on Spiritual Intelligence (SI) as a continuation of this thought-provoking discussion.

You don’t want to miss this one! Make sure to rate us or share this podcast. It would mean so much to us!


Thank you for listening to the Communication Solution Podcast with Casey Jackson and John Gilbert. As always, this podcast is all about you. If you have questions, thoughts, topic suggestions, or ideas, please send them our way at casey@ifioc.com. For more resources, feel free to check out ifioc.com.




Want a transcript? See below!

Hello, and welcome to the Communication Solution podcast with Casey Jackson and John Gilbert. I'm your host, Danielle Cantin. We love to talk about communication. We love to talk about solutions, and we love to talk about providing measurable results for individuals, organizations, and the communities they serve.


Welcome to the Communication Solution that will change your world. Hi everyone, this is Danielle Cantin. I'm facilitating the Communication Solution podcast with Casey Jackson and John Gilbert, the Motivational Interviewing, oh gosh, I want to say nerds, experts, like the whole world of MI. We are diving into part two of a pretty cool conversation we had earlier about artificial intelligence.


I think we started with the implications it might have on Motivational Interviewing, but true to form, you two took it mind-blowingly huge and wide, with so many great concepts to dial in on, from ethical to practical considerations around AI. So let's do part two, dig in, and see if we can kind of reel it in for our viewers and listeners.


Well, you know, it just struck me; it's kind of my own internal joke now: how much do I have artificial intelligence, based on my own bias about my own knowledge base? So. Yeah. Yeah, that's good. That's a good meta one. I like it.


My intelligence is relatively artificial. You know, I think when we went so wide, it is hard for my brain to bring it into the here and now and go back to that. My brain always tries to go to the N of 1 once I go macro. But what I do look at is this: even if we're overrun with AI, as long as there's human experience, I really do believe that if that's where the pendulum swings, there will be a pendulum that swings back to, I want to see a chef cook the meal at my table.


Even if it tastes better when a robot does it precisely, I think there'll always be a niche, and there's always going to be, you know, the potential for a pendulum swing back to, I just want to talk to a real person about real issues and see their perspective on it. With Motivational Interviewing, when I look through the MI lens, as we talk about it, as I teach it, I think it's because I have such a fascination with equipoise.


And it usually clicks when people start to understand it: that it does not mean we're robots. And I've been saying that for several decades now, that just because we're in equipoise and trying to keep our bias out of the equation, it doesn't mean we're robots. I'm still very actively engaged in conversation, operating from a place of deep compassion and accurate empathy and, you know, trying to draw out someone else's thoughts and feelings.


And I think, looking at some of those qualities, like even evocation: you do want to find a bead, you know, get a bead on someone else's thought process. And I do think that our little supercomputers in our skulls have the capacity to do that, to kind of catch the bead and be able to run with it.


Maybe AI will catch up with that, but I think it's going to be a tough one for AI to catch up with. So when John starts to talk about a topic or an issue, or gets into some emotional, you know, quagmire, my capacity to get linked into his mindset, as well as his words or voice intonation, there is something there that is going to be hard-pressed for, you know, a computer program, or even AI that teaches itself, to find that bead of how do I draw out and elicit and evoke John's thoughts and ideas in a way that I think is going to shape it towards a better outcome


or deeper into the despair. There's a mastery of that. And I'm sure that there can be an algorithm that's developed, but this is where I'm finding: where's the safe place? You know, where will MI exist in an AI world? There is something, I think, about our ability to process data so quickly in human interaction, in a human way. I think when I see the people that are masters in Motivational Interviewing, that's what I'm always inspired by.


And I think when I watch other people listen to, like, conversations or videos that I've done using Motivational Interviewing, it's like, how does your brain think that fast? How does it look so effortless? There are some things that AI can replicate with that, but I think there are aspects of it, like this mastery of it, that are going to take a while to get there.


Even when I see AI images, you know, or humans that are being generated by AI, our brains can kind of process that it's not quite right. You know, especially if I get up close to it and see it in HD, it's like, wait a second, that's not really a human face. There's just something; I can just tell the sheen is not right.


There’s just something and it’ll keep improving. But those are the things I think of with, from an MI perspective. As well, too, that there’s just, there’s, there’s nuances and depth and dimensionality that a, I probably can master, but I don’t think I’m gonna be able to master that when I look into John’s eyes, when I’m having a conversation, that’s at my base and see that something in his brain just got tripped differently because of what came out of my mouth.


And how do I pursue that, or reimagine how I'm going to communicate, that little blip that I saw spark in his brain when I opened my mouth? That point, Casey, to me relates to a word that gets used a lot, just like the word empathy gets used a lot in a lot of ways, but there are actual definitions and components to it.


What I’m about to say also has a lot of depth and components to it, but it’s called EI, emotional intelligence. And how much is there even a capability of AI to have EI is its own interesting question. But EI is a big part of what I’m picking up, at least that you’re,  expressing with eyes, with a sense with, I’m going to add that wasn’t explicitly mentioned, but context of what has happened, what, what we’re in right now, what I know about you going on in your life.


Thank you. That there’s a multitude of things with emotional intelligence, of my own self regulation of stuff, your stuff. With a computer that can paraphrase, I’m wondering, for example, it’s depiction of accurate empathy and depth of that, or if that even matters. I’m wondering if… They can see the eye shift this way, or and I say, they, I wonder if the statistical algorithm that is AI can pick up on why I love you because we’re anthropomorphizing something that’s a set of algorithms.


So, yes, I was talking about this in, I think, part 1. So yes, I appreciate that you appreciate it, but it's like, I don't want to perpetuate that we're anthropomorphizing this thing, this entity. But anyway, that's to say, is there the ability to do that? Well, first of all, we have a lot of human stuff around that that we could work on.


We can go to Mars and do all this other stuff, but there's also a lot on this planet to take care of. Well, we can go to metaphorical Mars with AI while we continue to really, probably, need some help with our own emotional intelligence. So how much do we go these routes? How much does it help us learn emotional intelligence?


Maybe it does for someone on the spectrum, and it provides feedback; I don't know. But I feel like what you're talking about, Casey, is this emotional intelligence part. And I feel that, especially when you take into account what you said in the effective psychotherapy podcast we did, and the components there, when you take that, what I'll call MI (even though it's not just MI), and you put it with EI, understanding what you just said, and then AI, you could have an extremely interesting, complex, helpful way of being and treating someone, or doing that. Is it going to pick up all the things? I don't know. I think maybe eventually, but I would say, and I also agree with you, Casey, there's always going to be some degree of wanting to connect.


With someone, not some thing. I haven't seen that movie, Her; it's its own area, philosophical, deep, whatever, I've just heard some things about it. But I could see the desperation leading us there if we do not cultivate these tenets in the world more. Which is not to say it's a cult or some belief system per se, but that it's cultivating these ways of seeing and treating other people as a foundation


to AI, so that AI serves that rather than does the paperclip thing, or whatever it is, right? All these things in AI where now it's going to destroy every human because it's more efficient to make paperclips out of humans, right? I'm just hoping that MI can be a guide, along with the emotional intelligence


That’s that’s baked into it as well as people’s emotional intelligence. I think that those two things, the evidence is showing that if we have those with AI, it could be great, but I also think there’s a fear there that maybe then it supersedes,  everyone’s helping profession or. Leads to destruction with that.


So I just thought I would add that to the thing I was picking up from you; I think emotional intelligence is a key there. I love that. And the other thing that I see, too, when we're talking about that, in the podcast about Miller and Moyers's book on ethical, or effective, excuse me, psychotherapists, you know, their clinical skills that improve outcomes.


You don’t want to look through those part of A. I requires it. Someone to put those algorithms in there and create the algorithms and when you look at take four of those top concepts just to get humans to come up with a definition that are consistent around that and a concept of what that is, and then to program the zeros and ones.


In a way that makes sense, it’s going to probably take some time. And I think this is that wherever the bridge is going to be to emotional intelligence, you know, bridging that into artificial emotional intelligence is probably that bridge is probably quite a ways out there. You know, again, always may or may not.


And I don’t want to underestimate the power of how quick technology advances, but that’s going to be complex. I’ll tell you what struck me, John, when you’re talking, what I was thinking about was. Why I started developing curriculum around motivation, being in trauma informed is somebody had kept repeating it to me, Casey,  Susan Dreyfus had multiple, multiple times secretary of DSHS and said, Casey, there’s, there’s a, there, there, you need to start to explore this.


She said it so many times, but it never really clicked until I was talking with a client who'd experienced significant trauma. And I remember there's a reflection that I landed, a very complex reflection, a deep, emotive, empathetic reflection, and I saw something in their eyes, something in their brain.


I could actually see a shift in the communication; like, I saw something click. I don't know what it's going to take to program a robot to be able to do that, and then to connect the dots to, I think Motivational Interviewing could work in trauma-informed care this way, because when I'm using language, I can see how it's impacting the brain in the moment.


And when we look at things like sustain talk and change talk, which AI could maybe do a better job of, the timing of when to do that is so human-based. Because mathematically, there may be enough energy to go for change talk, but you could go to change talk and somebody gets annoyed and pissed off, saying, that's not what I meant.


Don’t aren’t you listening to me? I’m angry and upset. I don’t want to move forward right now. Like, you know, the level of programming it’s going to take a I to be able to differentiate that when I’m looking at Danielle going, I think she wants to move forward. But I think if I try to push it right now, it’s going to tick her off.


So yes, I could go for change talk, but because of her upbringing and her cultural background and gender and all these other things that I can see, if I try to go there too soon, even though change talk could exist in this one second right here, I don't think that's the smartest decision based on everything I know about Danielle. Does that make sense?


So I think trying to program that is going to be much more complex. But could it actually reflect change talk? Absolutely. Can it reflect sustain talk with high accurate empathy? Absolutely. But can it pick up those human nuances? I think that's going to be a ways out there. This is where I think I get excited about what we teach and what we believe.


And some of the things that Miller’s and Moyers are writing about is it is about that human being. Not about the technique. It’s it’s do you embody genuineness? Do you believe in acceptance? Are you focused in conversation? That is significantly more complex. And I think for the human brain to be able to digest and synthesize that and then communicate person to person.


That’s like I said before. It’s why people are just like, Oh, when I want, how did Dr Moyers do that in that conversation? How did Casey do that? How did John do that? How did you know? When we watch these videos, it’s like, how does their brain think that quickly? How do they adjust so quickly? If most people like are wowed by that, it’s going to take a while for a programmer to be able to replicate that.


And so I think that’s where I always think of these deeper things of, you know, are you going to be able to train a robot to do double sided reflections? Absolutely. They’re getting, they can do that. Can you get them to express empathy and do a positive affirmation? Absolutely. But to do with a level of genuineness and acceptance,  In a way that draws out and evokes a deeper level from an individual that I think is going to be quite a ways down the road.


I am going to, for the sake of this right now, also highlight, for those of you that do not know Casey and/or have not heard him in his practice: he seems, according to me, with just one tool alone, having now, I believe, officially over 4,500 MITI codings, to have an exceptionally high degree of empathy capability within him. I don't know how fair that is to other humans, but I think it's worth aspiring to. It is the most important component when we look back at the effective psychotherapy tenets in that podcast, and that's critical.


We know that’s critical according to research, according to the human feeling, the anecdote, the beauty, the whatever that is. The emotional intelligence, that’s a part of emotional intelligence is empathy. So we know that’s a huge piece if, if, if AI were to come into MI, that’s not just repeating back.


That’s not just paraphrasing, but you said a mode as deep beyond what said. So that’s a big piece that might be a limiter. Where I could see the potential is this term and, and I know something that, that, you know, can be important for us, like you’re putting out Danielle of defining certain things, but it can be helpful for us to think about what could be a foundation that AI sets for people.


And I feel like, I don’t know how this would come together, but as you were talking, Casey, it spurred in me like, well, equipoise, this equal position, this non judgment place that in the part one was what spurred this whole conversation by Danielle, that we want to feel safe. That’s a part of, you could call it Maslow’s hierarchy, all sorts of things of values and needs.


We want to feel safe and comfortable and all that stuff, right? We have these values, we have these, whatever you want to say. And I'm going to a place of, well, maybe an AI could do acceptance pretty dang well, and probably more consistently than humans on average. I don't know.


That’s a hypothesis. I need to be tested. That’s my advice. Positive regard. I don’t know because sometimes people’s eyes, you know, those people that like, they like, see you in that way or have done the work or whatever you want to call that. That’s just like, wow, I feel in your presence, right? Like, so could you get that maybe with a really good AI model that has those eyes.


Or maybe an AI model that does affirmation. But I feel like, I believe, we still don't know what we don't know, and there are so many blind spots going into this of what it means to be genuine. What does a feeling of positive regard mean to someone else in a different country, in a different situation than what we're talking about right now?


And how there is the WEIRD (Western, educated, industrialized) issue: the types of research that have been done are with particular populations. So I feel there's a larger conversation to be had there. And in the meantime, could there be ways of having something more accessible, so you don't need to get on a wait list, something that uses tenets of it to try to be helpful in some way, or a suicide hotline, or, I don't know.


Is that too dangerous? I don't know. But I feel, to your earlier point, Casey, in talking about ethics and access: I think we should do our best with what we know around ethics and access, and then do our best with what we know to use human brains for human brains. And I believe what you're pointing to is that maybe there's, at least in the short term, a limit to complex empathy for AI, but maybe there's some stuff that it could do that could help us in some sort of way.


And I don’t know exactly how those two worlds come together, but some people have a tough time having acceptance. I know I do at times. And sometimes it’s hard to have positive regard for someone that has different beliefs than you, you know, so it’s like, can we use that to help us help people? I just don’t know how that would work, but I feel there’s a there there as well.


You know, one of the things that Dr. Miller was differentiating when he was talking about empathy, and it links into what you were talking about earlier, John, is he was differentiating between cognitive empathy and emotional empathy. And so I think AI is probably going to master cognitive empathy very quickly, and probably already has.


I think the emotional empathy is what you're talking about, because of Miller; the word he even used, which I love, is, you know, he said there are empaths, and he even referenced Star Trek, you know, like true empaths that can't help but feel what other people are feeling. That's emotional empathy.


And, you know, I think of that as sensitives or empaths, and with AI, I just don't know; my brain is too limited to think that it could actually replicate that level of emotional empathy. And I think the other thing, just with recent conversations I've had, when I've talked about the one woman I was talking to who was actively suicidal, you know, last week,


and honestly, a personal conversation that Danielle and I had recently: there is something, you know, again, I'm not talking religion, but there's something on a spiritual level, that when somebody feels seen and heard by another human being, there is a neurochemical, energetic thing that happens when you feel heard and seen and understood.


And I, it’s going to, for me, it’s going to be hard pressed to see. Artificial intelligence replicate that because there is something there is a there’s a depth of meaning there. There’s a depth of humanity. There’s a depth of soul or spiritual connection that I do think AI is just never going to be able to catch by definition.


It could not capture that. And I think I could be a naive in that perspective. But as of right now, I’m believing that that it’s just it’s too complex to reduce that into an algorithm.  Because there is something about a soul connection, you know, it generations and cultures have believed in soulmates, you know, it’s just, there’s just so much there that I think the reductionist version, that’s going to be really complex.


So when I think of that level of empathy, and what you said that I love was genuineness, this part of, you know, those top eight that Miller and Moyers are talking about, that's going to be hard. Genuineness, by definition, is more of a human trait. You know, can a robot emulate or replicate genuineness?


Absolutely, but I think that, almost by definition, is not genuineness. So I think that's the part where I start to find that solid ground: there's always going to be somebody that wants to talk to someone who is deeply empathetic and is present and knows them, and they know their history, someone who looks at that person and looks through that person to the soul of who they are.


I just think AI is going to have a hard time getting there, and it's going to be a longer process. And I think that's why I'm so excited about the evolution of the research: it's not about Motivational Interviewing, it's about these characteristics or these traits or these mindsets and skill sets that we can evolve individually that improve outcomes for other human beings. So in some ways I don't get as intimidated by the thought of it. I like the thought that there's AI that can help somebody that's suicidal; instead of waiting weeks to get to see someone, if something breathes a little bit more air into their lungs and provides their brain a different way to think about things, that's awesome.


But. There’s going to be always a need for that human to human soul to soul connection that even when we learn to program, you know, artificial emotional intelligence, you know, we, I don’t know if there’s a concept yet about spiritual intelligence, but if, if we went from. IQ to EQ, you know, maybe, you know, as we evolve as human beings, we’re going to desk you about spiritual, you know, quotient.


So, you know, that I’m more excited about that construct than AI in some ways is, and we find a way to get to spiritual intelligence. And what does that create in the world? Well, you and Bill Miller, both, he has the book,  quantum change that he says is from, at least I can. Report from his mouth. I don’t, I can’t find the quote where he said it, but he told me he said to me that he thinks that’s his most important work.


He’s,  ever produced more than, am I, which is quite a statement,  I must say. And what I want to, as we’re coming towards the end of this discussion, I want to just highlight. How little we still know. And so what does it mean right now? It means right now we’re kind of assessing AI related to MI. And I was kind of asking, well, how do we harness AI and MI?


But before, you know, getting ahead of ourselves: what do we even know about what it means to be genuine? What do we even know about who's creating these algorithms? Is it a bunch of… relatively above-middle-class, on average, white males that aren't taking these things into account? Like, there's still so much going on here, and our imaginations run an incredible amount, almost to infinity.


And yet there’s still such a finite amount, and we haven’t even talked about how then there’s different algorithms that teach itself. And then that get human feedback or that don’t get human feedback along the way. So that could be. And in two complex learning, then there’s also quantum computing at like subzero temperatures.


That’s like crazy, crazy in the future. That’s literally a thing. Now it’s like, there’s still so much going on. It’s hard to even fathom what could be possible, but there does seem to be something worth. Training for in our world worth putting out into the world of what am I is standing for, which is these tenants of compassion, these tenants of treating other people and helping, you know, uplift them, support them, help them with their behaviors and values.


There seems to be a there there that's worth supporting, you could say fighting for, but worth being in love for, that I would like to believe AI can hopefully help and not suffocate. And that's what I'm seeing as worthwhile, of where to go from here: be aware of the stuff, and start trying to help it help us help people, using compassion as maybe a moral compass.


And hopefully we do not lead into the Matrix and all the other stuff with it. You know, and I think that's where, it's such a nice thought in my brain to think: nothing in the AI world is going to call me up to come over and have a cup of coffee and sit in front of the fireplace with a snuggly blanket on my lap and listen to who I am and help me work through some of my struggles.


 That’s always going to be a place for, you know, the underlying tenets of motivational learning in terms of helping people get clear about their values, how to navigate our ambivalence in a really effective way, and to feel like there’s somebody there that cares and understands that’s not trying to make you be or do something that is not how you choose to define yourself like that, there’s going to be a place for that in this world always.


And I think that’s where I just, you know, I just kind of settled into that thought after being overwhelmed with the, the takeover of the machines into, you know, what those machines are not going to be able to, you know, brew a, you know, a cup of hot chocolate or a nice cup of tea and sit down with me and, and,  you know, put their hand on my leg and just look in my eyes and just say, no, you’re going to be okay.


Life is good. You know, you're a good human being and you've got lots of great things ahead of you. It's going to be a while till they replicate that experience. I love that you just brought that full circle to who I've experienced you as, Casey: you paint a picture when you're training, when you're talking, when you're sharing. You paint such a vivid picture that says so much more than can even be explained.


And so to leave this two-part podcast on that note, with that vision, is so lovely. I can't thank you enough, also with the humility of saying, and I don't know, and what else, what is there, what can we learn from it? I appreciate you both so much, to go on a journey that just reminds me that, you know, we are the ones in charge of the mastery. We are the ones; you know, having gone to engineering school way back, I was the one doing the zeros and ones.


 And I’m a person with biases with, you know, so it’s just like tying it all in together to realize it is a tool and how can we be the best master of that tool? Absolutely. And,  and then I have to chuckle. You guys just brought us from M. I. to E. I. to A. I. To now, Casey, we’re doing a podcast on SI, spiritual intelligence.


Love it. Love it. Love it. Thanks so much, guys. What a great program. Thank you, listeners, for being here. We can't wait to hear from you and what your thoughts are on all of these topics. See you next time. Thank you for listening to the Communication Solution podcast with Casey Jackson and John Gilbert.


As always, this podcast is about empowering you on your journey to change the world. So if you have questions, suggestions, or ideas, send them our way at casey@ifioc.com. That's casey@ifioc.com. For more information or to schedule a training, visit ifioc.com. Until our next Communication Solution podcast, keep changing the world.