The Post-Human Movement Away from Human Care
Some words against de-humanizing mental health care.
Intro
We’ve been overexposed to “AI” hype. I don’t need to say the names of the large language models. They’re the ones selling hollow sculptures made from scrapings off the internet floor. Behavior science knows how to influence behavior. The anti-human movement, fronted as efficiency and accessibility of care, will be difficult to slow down.
I’m not talking about digital helpers that make known coping strategies accessible. I’m talking about bioinformatics, smartphone surveillance and internet scrapers that collect data without informed consent, and then “improve” themselves on the stolen data and more surveillance. Big-tech behavior science and corporate legal teams know where to put the secret sauce for the movement. It’s in the last place any real human would look: Terms of Service.
Decide for yourself whether lines are being crossed. I’ve put up a couple of Twitter threads: one relating to crisis lines and so-called sentiment detection, and one on the lack of consent for research about suicide, including language modeling, using massive health data stores.
Vice Article
Freelance journalist Emma Pirnay recently sent me a couple of questions as prompts for her April 27 article in Vice about large language models being used as therapy substitutes. My answers follow.
Also see Emma Pirnay’s Warmline documentary project linked at the end.
Full Comment for Vice Article
I’m a former volunteer for Crisis Text Line who was terminated after seeking reforms from within the organization, as covered in January 2022 by Alexandra Levine for POLITICO.
I’ve continued my advocacy, which connects to mental health, data ethics, ethics of consent, mental health research, algorithmic harms, bioinformatics and large language models. Everything is connected, and I try to keep up, but it’s all too much for any one person. That’s one of the reasons journalism in this field is so important.
Though I’m coming from crisis line experience, this all relates to powerful trends across mental health right now: forces pushing algorithms and large language model technologies into use as tools for therapy and throughout mental health care.
There are experts who can address the technical, privacy, ethical, even mathematical problems, certainly deception, bias and inequity. Paternalism? Absolutely, and it’s as obvious as the national 988 Lifeline’s policy of sending intervention teams, including police, without consent, whenever they decide the situation warrants it.
You don’t have to look far to see corporations already positioning themselves for 988 funding or other ways to monetize these new technologies: claims about voice analysis for so-called sentiment detection, for example, and algorithms that supposedly read emotion and suggest responses from text. It scares me how many resources and how much questionable research are feeding this frenzy right now. I’m scared for the people who will not receive the care they need because attention and resources are being diverted away.
On the crisis line, what I experienced was the beauty and power of the human touch. Even over text, the intimacy of that connection is real. The heart pounds, sorrow, fear, all the emotions shared, the tears flow. Sometimes at the beginning of a conversation a person will say “Is this a bot? Can you prove you’re real?” There is extreme sensitivity to deception, which is only natural when you’ve experienced disturbing situations.
“Is this a bot? Can you prove you’re real?”
For me, the answer is simple: never use these large language models to simulate human connection. Never in a mental health or therapeutic context. Never. I’m very worried about the false sense of urgency throughout our health care systems, very large systems rushing to capitalize on telehealth and data harvested from smartphones.
They’re doing this in the name of access for people society has pushed to the margins, but look at where the money is going to flow.
A therapy session online is access. An “I hear you” coming from an oversized parrot of a language model is obscene. The answer, I believe, is to bring the margins into the center of the page. Fund and support human care. The loss of trust and connection that these technologies will bring will never be recovered in dollar cost savings.
Acknowledgements
Thank you to Lauren Goodlad with Critical AI for the connection.
Thank you to dear friend Hilary Freed for permission to use all photos credited to her on this website. The English garden photo featured here, and other photos, are available for purchase.
Warmline Documentary Project
Emma Pirnay is collecting testimonies from crisis line safety advocates and psychiatric survivors in this oral history project about crisis lines and surveillance.