The Tech Humanist Show



How Tech and Social Media Impact Our Mental Health

April 28, 2022

On this week’s episode, we’re talking about how technology and social media impact our mental health, and how they have contributed to a mental health crisis that some have called “the next global pandemic.” From the algorithms that decide what we see to the marketing tricks designed to keep us constantly engaged, we explore how our assumptions about work have created a feedback loop that keeps us feeling worse about ourselves for longer.



But never fear! At The Tech Humanist Show, we’re all about finding solutions and staying optimistic, and I spoke with some of the brightest minds working on these problems.



Guests this week include Kaitlin Ugolik Phillips, John C. Havens, Rahaf Harfoush, Emma Bedor Hiland, and David Ryan Polgar.





The Tech Humanist Show is a multi-media-format program exploring how data and technology shape the human experience. Hosted by Kate O’Neill.



To watch full interviews with past and future guests, or for updates on what Kate O’Neill is doing next, subscribe to the “The Tech Humanist Show hosted by Kate O’Neill” channel on YouTube.



Full Transcript:



Kate: Hello humans! Today we look at a global crisis that’s affecting us all on a near-daily basis… No, not that one. I’m talking about the other crisis—the one getting a lot less media attention: the Global Mental Health Crisis. In December, Gallup published an article with the headline, “The Next Global Pandemic: Mental Health.” A cursory Google search of the words “mental health crisis” pulls up dozens of articles published just within the past few days and weeks. Children and teenagers are being hospitalized for mental health crises at higher rates than ever. And as with most topics, there is a tech angle: we’ll explore the role technology is playing in creating this crisis, and what we might be able to do about it.



Let’s start with social media. For a lot of us, social media is a place where we keep up with our friends and family, get our news, and keep people updated on what we’re doing with our lives. Some of us have even curated feeds specifically around positivity and encouragement, to help combat what we already know are the negative effects of spending too long on social media. There’s a downside to this, though, which I discussed with Kaitlin Ugolik Phillips, the author of The Future of Feeling: Building Empathy in a Tech-Obsessed World.



Kaitlin: I wrote about this a little bit in an article about mental health culture on places like Instagram and Pinterest, where you have these pretty images with nice sayings, and sort of the commodification of things like anxiety and depression. It’s cool to be not okay, but then you’re comparing your ‘not-okay’-ness to other people’s.



Kate: We’ve even managed to turn ‘being not okay’ into a competition, which means we’re taking our attempts to be healthy and poisoning them with feelings of inferiority and unworthiness, turning our solution back into the problem it was trying to solve. Another issue on social media is our tendency to engage in conversations–or perhaps ‘arguments’ is a better word–with strangers, exchanges that linger with us, sometimes for a full day or days at a time. Kaitlin explains one way she was able to deal with those situations.



Kaitlin: Being more in touch with what our boundaries actually are and what we’re comfortable and capable of talking about and how… I think that’s a good place to start for empathy for others. A lot of times, when I’ve found myself in these kinds of quagmire conversations (which I don’t do so much anymore but definitely have in the past), I realized that I was anxious about something, or I was being triggered by what this person was saying. That’s about me. I mean, that’s a pretty common thing in psychology and just in general—when someone is trolling you or being a bully, it’s usually about them. If we get better at empathizing with ourselves, or just setting better boundaries, we’re going to wade into these situations less. I mean, that’s a big ask. For Millennials, Gen Z, Gen X, and anyone trying to survive right now on the Internet.



Kate: But social media doesn’t make it easy. And the COVID pandemic only exacerbated the issues already prevalent within the platforms. Part of the problem is that social media wasn’t designed to make us happy; it was designed to make money. John C. Havens, the Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, elaborates on this idea.



John: Oftentimes, the value is framed in exponential growth, right? Not just profit. Exponential growth is an ideology: it’s not just about getting some profit or speed, it’s about growth for growth’s sake. But when you maximize any one thing, other things by definition take less of a focus. And especially with humans, that can be things like mental health. This is not bad or evil, but it is a decision. And in this case it’s a key performance indicator decision: the priority is to get something to market. Versus, how can we get something to market focused on well-being? How can we make innovation about mental health?



Kate: The upside is that our time indoors led some people to more quickly realize the issues with technology and its effects on us. Early in the pandemic, I spoke with Rahaf Harfoush — a Strategist, Digital Anthropologist, and Best-Selling Author who focuses on the intersections between emerging technology, innovation, and digital culture — about what she learned about our relationship to technology during that time.



Rahaf: For me, I think it just amplified a lot of the issues with the way we were using tech before. I noticed in my social networks and friend groups, people were home more, so what could we do but turn to our online lives, to this never-ending content and distraction and connection? In the first couple weeks, everyone was about the Zoom everything, and then there was Zoom burnout… For me, there are a couple of big issues at play. The first is that we have more bandwidth because we’re at home, so we’re consuming more information. A lot of these platforms leverage this addictive constant-refresh, breaking-news cycle, and with something as complex and nuanced as COVID, a lot of us were glued to our screens refreshing, refreshing, refreshing… That was not the best thing I could have done for my mental well-being or anxiety. At one point I was like, “I need to step away!” because I was just addicted to the news instead of increasing my knowledge. The other thing is that, for many people, the forced pause made us realize that we use productivity as a coping mechanism. What does it mean that we have more time? A lot of people started trying to make their personal time as productive as their professional time—pushing themselves to pick up 10 new hobbies and learn 10 new languages and take 10 new classes! One or two of those things is great, but I really saw people loading up. That was a good indication to me of our lack of comfort with not doing anything. I noticed I was guilting myself for not writing and not learning, and then I was like, you know what? We’re undergoing this immensely traumatic, super-stressful thing… it’s okay to not do anything. That’s fine.



Kate: If you’re anything like me, that’s a lot easier said than done. Even if you’ve mostly resumed your life as normal, you’re probably still in the habit of working all day, and then filling your free time with more work, hobbies, or time on social media. I asked Rahaf what someone trapped in this cycle could do about it.



Rahaf: Your brain needs at least a week just to unwind from the stress of work. If you’re constantly on planes and in deliverables and client stuff… you’re never going to take the time to imagine new opportunities for yourself. The trick is we have to balance periods of actually producing the thing with periods of intangible creativity. A lot of the thinking you can’t see—and in our culture, we don’t like things that we can’t see. But how many of us have gone for a walk and got that idea, or were daydreaming and got that idea? So creatives, we need that downtime. And by the way, downtime isn’t taking a coffee break and being on social media. Downtime is really downtime. Daydreaming, just letting your brain go. Which is why we need a different framework, because for a writer or strategist, like you, you spend so much time thinking about things… but to think about things, you need the time to think about them!



Kate: Most of us don’t have the luxury of shutting off our Internet usage entirely. If you’re someone, like most of us, who needs technology to get by, how do we find that balance? And why is it so difficult?



Rahaf: I think it’s because we’ve shamed ourselves into thinking that if we’re not doing stuff, it’s a waste. And that’s the problem: intentional recovery, prioritizing and choosing rest, is really hard for us, because we constantly hear these stories of CEOs and celebrities, and Elon Musk sleeping on the floor of his factory, and Tim Cook waking up at 4:30 in the morning, and we think, I can’t take a nap, I can’t watch a movie, I can’t go for a walk, because then I’m not really committed to being successful! And that’s the most toxic belief system we’ve incorporated into our society today, especially for creatives. The breakthrough that I had was that it’s not actually about systems or organizations, it’s about us as people. We are our hardest taskmasters; we will push ourselves to the limit, even when other people tell us to take a break. If we’re gonna move to a more humane productivity mindset, we have to have some uncomfortable conversations about the role of work in our lives, the link between our identity and our jobs and our self-worth, our need for validation through social media and professional recognition, our egos… all of these things battle it out, which is why I can’t just come on here and be like, “Okay guys, take a break here, do this…” We’re not going to do it! We really have to ask, ‘Growing up, what did your parents teach you about work ethic? How is that related to how you see yourself? Who are the people that you admire?’ And then there are statements you can test yourself against, like “If you work hard, anything is possible!” With all of these, you can start probing your relationship with work, and you start to see that we have built a psychological relationship with work where we feel like if we don’t work hard enough, we’re not deserving. And not only do we have to work hard, we have to suffer! We have to pull all-nighters! Think of the words we use, ‘hustle’ and ‘grind’… these horrible verbs! The reason that’s important to dig into is that our views about work become assumptions that we don’t question. We don’t ever stop and say, ‘Does this belief actually allow me to produce my best possible work, or is it just pushing me to a point where I’m exhausted and burnt out?’ The second thing is, a lot of the stories we’ve been told about success aren’t true. As a super-quick example: if there’s an equation for success, most people think it’s “hard work = success.” But in reality, while hard work is important, it’s not the only variable. Where you’re born, your luck, your gender, your race… all of these things are variables that add into the equation. So what I don’t like about “hard work = success” is that the flip side of it tells people, “If you’re not successful, it’s because you aren’t working hard enough.” And part of the awakening is understanding that there are other factors at play here, and we’re all working pretty hard! We don’t need more things telling us that we’re not enough and we’re not worthy.



Rahaf: When I had my own burnout, I knew better but didn’t do better. That was really frustrating to me. It’s like, I have the knowledge, why could I not put the knowledge into practice? And then I realized, all these belief systems and stories are embedded in every Instagram meme, and every algorithm that asks you to refresh every 10 seconds, and every notification that interrupts your time, and the design of these tools to socially shame people for not responding fast enough. With WhatsApp, for example, there’s the blue checkmark that lets you know if someone has seen your message. What is that if not social pressure to respond? We’ve also shaped technology to amplify the social norm that being ‘left on read’ is a breach of etiquette.



Kate: We, as a culture, believe things about success that aren’t true. Then we program those beliefs into our technology, and that technology ramps up the speed at which we’re exposed to those flawed ideas. It creates a downward spiral in which the user — or, the person using these platforms — believes these untruths more deeply, broadening the disconnect between our ideal selves and reality. And yet, despite these outside forces at play, there is an urge to place responsibility on the user, to say that each of us is solely responsible for our own mental health. Emma Bedor Hiland — the author of Therapy Tech: The Digital Transformation of Mental Healthcare — calls this “responsibilization.”



Emma: I draw from the work of Michel Foucault, who writes about neoliberalism too. The way I use it in the book is to say that when we talk about neoliberalism, there is an emphasis on taking responsibility for yourself, for anything that could presumably be in your control. And in this day and age, we’re seeing mental health, one’s own mental health, being framed as something we can take responsibility for. So in tandem with this rollback of what would ideally be large-scale support mechanisms, local mental health facilities to help people in need, we’re seeing an increasing emphasis on ideas like ‘use the technology that you can get for free or low cost to help yourselves.’ But at the same time, those technologies are built around an imagined user who doesn’t reflect the people in this country we know need these interventions most badly.



Kate: Thankfully, we live in a world where, once a problem has been identified, some enterprising people set out to design a potential solution. Some of those solutions have been built into our technology, like ‘screen time’ tracking, designed to make us think twice about whether we should spend more time on our phones, and Netflix’s “Are you still watching?” feature, which adds a little friction to the process of consuming content.
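
To make that “friction” idea concrete, here is a minimal sketch of how an autoplay prompt in the spirit of Netflix’s “Are you still watching?” could work. The class name, thresholds, and method names are illustrative assumptions, not any platform’s actual implementation; the point is simply that a small interruption is reintroduced once the viewer has been passive for a while.

```python
import time


class AutoplaySession:
    """Tracks consecutive auto-played items and decides when to add friction."""

    def __init__(self, max_unattended_items=3, max_unattended_seconds=90 * 60):
        self.max_unattended_items = max_unattended_items
        self.max_unattended_seconds = max_unattended_seconds
        self.items_since_interaction = 0
        self.last_interaction = time.time()

    def record_user_interaction(self):
        # Any deliberate action (pause, browse, confirm) resets the counters.
        self.items_since_interaction = 0
        self.last_interaction = time.time()

    def on_item_finished(self):
        self.items_since_interaction += 1

    def should_prompt_before_next_item(self):
        # Interrupt only when the viewer has been passive for a while.
        idle_seconds = time.time() - self.last_interaction
        return (self.items_since_interaction >= self.max_unattended_items
                or idle_seconds >= self.max_unattended_seconds)


# Hypothetical usage: autoplay five episodes in a row.
session = AutoplaySession()
for episode in range(1, 6):
    if session.should_prompt_before_next_item():
        print(f"Before episode {episode}: Are you still watching?")
        session.record_user_interaction()  # viewer confirms, counters reset
    session.on_item_finished()
```

The design choice worth noticing is that the default behavior stays frictionless; the prompt only appears after sustained passivity, which is what makes it a nudge rather than a barrier.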



When it comes to mental health specifically, there is a growing telemental healthcare industry, including online services such as BetterHelp, Cerebral, and Calmerry. These, however, may not be the solutions we want them to be.



Emma: A lot of my research, it’s so interesting looking back at it now. My interviews with people who provide telemental health were conducted prior to the pandemic. It was really challenging at that time to find people who were advocates and supporters of screen-based mental health services; they told me that their peers sort of derided them for it, because of this assumption that when care is screen-based, it is diluted in fundamental ways that impact the therapeutic experience. Which is understandable, because communication is not just about words or tone or what we can see on a screen, there’s so much more to it. When interactions are confined to a screen, you do lose communicative information.

One of the things I’ve grappled with is that I don’t want it to seem like I don’t think telemental health is an important asset. One of my critiques is that a lot of the time in our discussions, we assume people have access to the requisite technologies and the infrastructure that makes telemental healthcare possible in the first place: smart devices, even just smartphones, if not a laptop or home computer, as well as reliable access to an internet connection, in a place where they could interface with a mental healthcare provider. A lot of the discourse doesn’t think about those people whatsoever, who, due to the digital divide or technology gap, couldn’t interface with a healthcare provider even using technology.

Some of my other concerns are related to the ways our increased emphasis on screen-based care is actually transforming the people who provide that care, like psychiatrists and psychologists, into members of the digital gig economy, who have to divide up their time in increasingly burdensome ways and whose employment tends to be increasingly tenuous. Relatedly, I am also worried about platforms. People are becoming more familiar with the idea that these places exist that they can go to on their laptops or wherever, assuming they have that technology, and be connected to service providers. But as we’ve seen with Crisis Text Line, there are a lot of reasons to be concerned about platforms that become hubs for collecting, aggregating, and potentially sharing user data.

So while I think telemental healthcare services are important, I’d like to see resources dedicated not just to technologically facilitated care, but to using that care to direct people to in-person care as well. During the COVID pandemic, we saw so many people offering services that were solely screen-based, and for good reason. A lot of clinics that provided healthcare for people without insurance, or people living in poverty, relied upon in-person services, and people haven’t been able to get those services because the clinics shuttered during the pandemic. So I worry about the people we don’t talk about as much as I worry about the negative consequences and effects of mental healthcare’s technologization.



Kate: So while some people’s access to mental healthcare has increased with technology, many of the people who need it most have even less access to help. On top of that, the business model of these platforms means that healthcare professionals have to work harder for longer in order to make a living. And as a means of sustaining themselves, the companies sometimes turn to sharing user data, which is a major concern for myriad reasons, one of which is the use of that data to build predictive algorithms for mental health. Emma elaborates on this next.



Emma: People have been trying this for a number of years: aggregating people’s public social media posts and trying to build predictive algorithms to diagnose them with things like ADHD, depression, anxiety… I’m still unsure how I feel about predictive algorithms that try, in any way, to predict when people are likely to harm themselves or others, simply because of how easy it is to use that type of software for things like predictive policing. I write in the book as well that people want to harness internet data and what people do on social media to try to stop violent behavior before it starts, so it’s very much a slippery slope. That’s why I find data sharing in the realm of mental health so difficult to critique: of course I want to help people, but I’m also concerned about privacy.
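
To see why such systems are so easy to build, and therefore so easy to repurpose, here is a deliberately naive sketch of the kind of pipeline Emma describes: public posts go in, a predicted label comes out. The toy dataset, the labels, and the scikit-learn model are illustrative assumptions, not code from any real product or from Emma’s book.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: a handful of posts with hypothetical labels.
posts = [
    "can't sleep again, everything feels pointless",
    "had a great run this morning, feeling good",
    "so anxious about tomorrow I can't focus",
    "dinner with friends tonight, can't wait",
]
labels = ["at_risk", "not_at_risk", "at_risk", "not_at_risk"]

# Bag-of-words features plus a linear classifier: the basic shape that
# many social-media prediction systems share, just at a far larger scale.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The same scoring call works on any scraped text. Nothing in the code
# limits it to consenting users or clinical settings, which is exactly
# the repurposing risk (e.g., predictive policing) raised above.
print(model.predict(["why do I even bother anymore"]))
```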



Kate: For those asking, “But what about the free services, like Crisis Text Line or The Trevor Project?”, Emma has thoughts on those, too.



Emma: Crisis Text Line, when it came to fruition in 2013, said, “we can meet people where they are by allowing them to communicate via text when they’re experiencing crises”… I think that’s a really laudable thing, and people thought it was an intervention that could save lives; based on research from external and internal researchers, we know that is the case. But for people who might not be aware: Crisis Text Line doesn’t put people in contact with professional mental healthcare workers. Instead, it’s often people with no background or training in mental healthcare services who go through the organization’s training and serve as volunteers to help people in dire moments of need and crisis. In Therapy Tech I describe how I perceive that as a form of exploitative labor, because although in the past there were conversations about whether to provide financial compensation for volunteers, the organization ultimately decided that, by emphasizing the altruistic benefits of volunteering, that sort of payment wasn’t necessary. I compare that to Facebook’s problematic compensation of its content moderators, and the fact that those moderators filed a lawsuit against Facebook. Although the settlement hasn’t been disclosed, at least there’s some acknowledgement that they experienced harm as a result of their work, even though it wasn’t volunteering. So I do take some issue with Crisis Text Line, and, in relation to neoliberalism and responsibilization, I feel that CTL is not the ultimate solution to the mental healthcare crisis in this country or internationally (and CTL has created international partners and affiliates).

I underwent training for a separate entity called Seven Cups of Tea, which is both a smartphone app and an internet-accessible platform. Seven Cups of Tea’s training, compared to what I know CTL volunteers have to go through, is incredibly short, and I would characterize it as unhelpful and inadequate. For me it took 10 minutes, and I can’t imagine it would take anyone more than a half hour. The types of things I learned were how to reflect user statements back to them, and how to listen empathetically but not provide any advice or tell them what to do, because you never know who’s on the other end! When I conducted the research, I started to volunteer on the platform. A lot of the messages I got were not from people experiencing mental distress, necessarily, but from people who just wanted to chat or to abuse the platform. But even though I only had a few experiences with people who I felt were genuinely experiencing mental distress, I still found those experiences really difficult. That could just be because of who I am as a person, but one of the things I’ve come to believe is that my volunteering was part of a larger-scale initiative of Seven Cups of Tea to differentiate between users who would pay for services (after I suggested them, because I perceived those users to be experiencing mental distress) and users whose needs could be fulfilled by just being mean to me, or by having their emotions reflected back to them through superficial messaging. I very rarely felt that I was able to help people in need, and therefore I felt worse about myself for not being able to help, as though it were somehow my fault, which relates to this idea of individual responsibilization. Me, with no real training, or maybe slightly more than some other volunteers, feeling like I couldn’t help them. As though I’m supposed to be able to help them. I worry about the fatalistic, determinist types of rhetoric that make it seem like technology is the only way to intervene, because I truly believe that technology has a role to play, but it is not the only way.



Kate: Technology isn’t going anywhere anytime soon. So if the products and services we’ve built to help us aren’t quite as amazing as they purport to be, is there a role for tech interventions in mental health scenarios? Emma explains one possible use case.



Emma: I think technology can help in cases where there are immediate dangers. Like if you see someone upload a status or content that signals imminent intent to self-harm or harm another person, I think there is a warrant for intervention in that case. But we also know there are problems associated with the fact that those cries for help (or whatever you want to call them) are technologically mediated and happen on platforms, because everything that happens via a technology generates data, and then we have no control, depending on the platform being used, over what happens with that data. So I’d like to see platforms that are made for mental health purposes or interventions be held accountable by being closed circuits. They need to pledge not to engage in data sharing, not to engage in monetization of user data even if it’s not for profit, and to have very clear terms of service that make it evident, and easily comprehensible to the average person who doesn’t want to read 50 pages before agreeing, that they won’t share data or information.



Kate: Now, I do like to close my show with optimism. So first, let’s go to Rahaf once again with one potential solution to the current tech issues plaguing our minds.



Rahaf: To me, one of the most important things we need to tackle—and I don’t know why we can’t just do this immediately—is the capacity, on any platform we use, to turn off the algorithm. Having an algorithm choose what we see is one of the biggest threats, because think about all the information you consume in a day, and think about how much of it was selected for you by an algorithm. We need the ability to step outside the power this little piece of code has, to go out and select our own information, or to hold companies accountable for producing information that is much more balanced.
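
What would that look like in practice? Here is a minimal sketch, assuming a hypothetical feed where the same posts can be shown either ranked by the platform’s engagement prediction or in plain reverse-chronological order. The field names and the scoring are illustrative, not any real platform’s ranking code.

```python
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    timestamp: float             # seconds since epoch
    predicted_engagement: float  # platform's opaque score, hidden from the user


def build_feed(posts, algorithmic):
    if algorithmic:
        # Platform-chosen ordering: optimize for predicted engagement.
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
    # User-chosen ordering: newest first, no scoring involved.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)


posts = [
    Post("a_friend", timestamp=1_700_000_000, predicted_engagement=0.2),
    Post("outrage_bait", timestamp=1_699_990_000, predicted_engagement=0.9),
]
print([p.author for p in build_feed(posts, algorithmic=True)])   # engagement first
print([p.author for p in build_feed(posts, algorithmic=False)])  # recency first
```

The point of the sketch is that “turning off the algorithm” doesn’t require removing ranking altogether; it just means the user, not the platform, chooses the ordering.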



Kate: And that sounds like a great solution. But how do we do that? We don’t control our technology; the parent companies do. It’s easy to feel hopeless… unless you’re my friend David Ryan Polgar, a tech ethicist and founder of All Tech Is Human, who’s here to remind us that we aren’t bystanders in this. I asked him what the most important question is that we should be asking ourselves at this moment, and he had this to say.



David: What do we want from our technology? This is not happening to us; this is us. We are part of the process. We are not just magically watching something take place, and I think we oftentimes forget that. The best and brightest of our generation should not be focused on getting us to click on an ad; they should be focused on improving humanity. We have major societal issues, but we also have the talent and expertise to solve some of these problems. And another area that I think should be focused on a little more: we are really missing out on human touch. Frankly, I feel impacted by it. We need to hug each other. We need to shake hands as Americans. I know some people would disagree with that, but we need warmth. We need the presence of somebody. If there was a way that, when we ended this conversation, we had some type of haptic feedback, where you could, like, pat me on the shoulder or something like that… Everybody right now is an avatar. So I need to have something to say, “Kate! You and I are friends, we know each other! So I want a greater connection with you than with any other video that I could watch online. You are more important than that other video.” But right now it’s still very two-dimensional, and I’m not feeling anything from you. And I think there’s going to have to be a lot more focus on how I can feel this conversation a little more. Because, I mean, listen, people are sick and tired right now: ‘not another Zoom call!’ But if there was some kind of feeling behind it, then you could say, “I feel nourished!” whereas now, you can sometimes feel exhausted. We’re not trying to replace humanity. No matter where you stand on an issue, at the end of the day, we’re actually pretty basic. We want more friends, we want more love… there are actual base emotions, and I think COVID has really set that in motion, to say, hey, we can disagree on a lot in life, but what we’re trying to do is get more value. Be happier as humans, and be more fulfilled. Be more educated and stimulated. And technology has a major role in that, and now it’s about asking how it can be more focused on that, rather than on something that is more extractive in nature.



Kate: Whether we like it or not, the Internet and digital technology play a major role in our collective mental health, and most of the controls are outside of our hands. That can feel heavy, or make you want to throw in the towel. Those feelings are valid, but they aren’t the end of the story. I asked David for something actionable, and this is what he had to say.



David: Get more involved in the process. Part of the problem is we don’t feel like we can, but we’re going to have to demand that we are. I think, frankly, some of this is going to come down to political involvement, to say, ‘We want these conversations to be happening. We don’t want something adopted and deployed before we’ve had a chance to ask what we actually desire.’ The biggest part is that everyone needs to add their voice, because these are political issues, and right now people think, ‘Well, I’m not a techie!’ Guess what? If you’re carrying around a smartphone…



Kate: All the more reason we need you!



David: Right! We need everybody. Technology is much larger than that. Technology is society. These are actually social issues, and I think once we start applying that, then we start saying, ‘Yeah, I can get involved.’ And one of the things we need to do as a society is get plugged in and be part of the process.



Kate: There are a lot of factors that contribute to our overall sense of happiness as humans. And although it may sound like a cliché, some of those factors are the technologies we use to make our lives easier and the algorithms that govern the apps we thought we were using to stay connected. But that doesn’t mean things are hopeless. If we keep talking about what matters to us, and make an effort to bring back meaningful human interaction, we can influence the people building our technology so that it works for our mental health, instead of against it.