De-Identified You
Advisory – The crisis line subject matter is difficult and may be triggering. The poem references human anguish, thoughts of death, and self-harm. The poem is not about a specific person.
Some words against de-humanizing mental health care.
Intro
We've been over-exposed to "AI" hype. I don't need to name the large language models. They're the ones selling hollow sculptures made from scrapings off the internet floor. Behavior science knows how to influence behavior. The anti-human movement, fronted as efficiency and accessibility of care, will be difficult to slow down.
I’m not talking about digital helpers that make known coping strategies accessible. I’m talking about bioinformatics, smartphone surveillance and internet scrapers that collect data without informed consent, and then “improve” themselves on the stolen data and more surveillance. Big-tech behavior science and corporate legal teams know where to put the secret sauce for the movement. It’s in the last place any real human would look: Terms of Service.
Decide for yourself whether lines are being crossed. I've put up a couple of twitter threads: one relating to crisis lines and so-called sentiment detection, and one on the lack of consent for research about suicide, including language modeling, using massive health data stores.
Vice Article
Freelance journalist Emma Pirnay recently sent me a couple of questions as prompts for her April 27 article in Vice about large language models being used as therapy substitutes. My answers follow.
Also see Emma Pirnay’s Warmline documentary project linked at the end.
Full Comment for Vice Article
I’m a former volunteer for Crisis Text Line who was terminated after seeking reforms from within the organization, as covered in January 2022 by Alexandra Levine for POLITICO.
I've continued my advocacy, which connects to mental health, data ethics, ethics of consent, mental health research, algorithmic harms, bioinformatics and large language models. Everything is connected, and I try to keep up, but it's all too much for any one person. That's one of the reasons journalism in this field is so important.
Though I'm coming from crisis line experience, this all relates to trends across mental health right now. Powerful forces are pushing algorithms and large language model technologies into use as tools for therapy and throughout mental health care.
There are experts who can address the technical, privacy, ethical, even mathematical problems, and certainly deception, bias and inequity. Paternalism? Absolutely, and it's as obvious as the national 988 Lifeline's policy of sending intervention teams, including police, without consent, when they decide the situation warrants it.
You don’t have to look far to see corporations already positioning themselves for 988 funding or other ways to monetize these new technologies. For example, claims about voice analysis for so-called sentiment detection, and algorithms supposedly reading emotion and suggesting responses from text. It scares me how many resources and how much questionable research is feeding this frenzy right now. I’m scared for the people who will not receive the care they need because attention and resources are being diverted away.
On the crisis line, what I experienced was the beauty and power of the human touch. Even over text, the intimacy of that connection is real. The heart pounds, sorrow, fear, all the emotions shared, the tears flow. Sometimes at the beginning of a conversation a person will say “Is this a bot? Can you prove you’re real?” There is extreme sensitivity to deception, which is only natural when you’ve experienced disturbing situations.
“Is this a bot? Can you prove you’re real?”
For me, the answer is simple: never use these large language models to simulate human connection. Never in a mental health or therapeutic context. Never. I'm very worried about the false sense of urgency throughout the health care systems, which are very large systems, rushing to capitalize on telehealth and data harvested from smartphones.
They’re doing this in the name of access for people that society has pushed to the margins, but look at where the money is going to flow.
A therapy session online is access. An “I hear you” coming from an oversized parrot of a language model is obscene. The answer, I believe, is to bring the margins into the center of the page. Fund and support human care. The loss of trust and loss of connection that these technologies will bring, will never be recovered in dollar cost savings.
Acknowledgements
Thank you to Lauren Goodlad with Critical AI for the connection.
Thank you to dear friend Hilary Freed for permission to use all photos credited to her on this website. The English garden photo featured here, and other photos are available for purchase.
Warmline Documentary Project
Emma Pirnay is collecting testimonies from crisis line safety advocates and psychiatric survivors in this oral history project about crisis lines and surveillance.
A call for researchers and study authors to honor meaningful consent and privacy for persons experiencing crisis and, openly, to help make things right.
Crisis Text Line has deceived us. They’ve deceived us about consent, and about storing and using conversations. To restore trust, the deception needs to be acknowledged and ended.
No more research on crisis conversations without consent. No more algorithms in the conversations, triaging the queue or otherwise. No more storing conversations forever.
Why I Feel This
In my brief experience as a volunteer from 2020 to 2021, I had the opportunity to talk with about 200 persons, including a very sweet child experiencing some of her most difficult moments. They taught me more than I can express. Above all, I was gifted with a feeling of deep respect. Difficult lives are being lived and they deserve care and support. Support with no strings attached.
And what was it, in the crisis moment, that gave support? Something simple. The protected space for sharing in safety. A space to be together for a brief time. Listening, showing understanding, meeting a person where they are, allowing for emotion to flow without judgment, trusting them to know themselves best of all.
No doubt far fewer people would reach a crisis point if our society provided for basic human needs. But when the breaking point arrives, it’s the human touch that reaches towards connection, and a calming movement away from severe emotional distress.
The Deception about Consent
Crisis Text Line has represented all along that they have the utmost respect for persons using the service, and they understand the sensitivity of the conversations. They view information and data as a way to help communities respond to crisis trends in real time. They view research as a way to learn from the unique opportunity that their service provides, to help the mental health community better care for those who struggle. In 2016-2017 they undertook a peer-reviewed study, overseen by a Data Ethics Committee, that scrutinized the issues of consent, privacy, data security, and ethical use specifically for research. The refrain of “data for good” was and is foundational to how Crisis Text Line describes its mission and purpose.
But if we peer down deeper beneath this smooth surface, the deceptions begin to appear, like dark forms of rocks and snags.
The study just mentioned was titled “Protecting User Privacy and Rights in Academic Data-Sharing Partnerships: Principles from a Pilot Program at Crisis Text Line”. It was published in 2019 by the Journal of Medical Internet Research (JMIR).
There were irregularities. Seven members of the study's Data Ethics Committee were authors. Two more authors, Dr. Anthony Pisani and Dr. Madelyn Gould, had affiliations as advisory board members to Crisis Text Line, undeclared within the paper. Three more authors were on staff with Crisis Text Line, one of whom was Crisis Text Line's Chief Data Scientist Bob Filbin (who also chaired the study's Data Ethics Committee). The others were Dr. Shairi Turner and Nitya Kanuri. This makes 12 of 13 authors affiliated with Crisis Text Line. Yet only two of those affiliations were declared in the paper: Crisis Text Line staff members Bob Filbin and Dr. Shairi Turner.
So, when the paper characterizes consent for research as having been granted by virtue of an “easy-to-understand” Terms of Service agreement, it gives pause about the study’s objectivity.
The 2019 JMIR paper, on consent for research:
“CTL provides texters with a link to an easy-to-understand Terms of Service [b], including a disclosure of potential future data use, before every crisis conversation.
[b] Terms of service: ‘We have created a formal process for sharing information about conversations with researchers at universities and other institutions. We typically share data with trusted researchers when it will result in insights that create a better experience for our texters. We follow a set of best practices for data sharing based on the University of Michigan’s Inter-University Consortium of Social and Political Research, one of the largest open data projects in the U.S., which includes stringent ethical, legal, and security checks. For more details, see our policies for open data collaborations’.”
To say that the Terms of Service was the means of consent to research, granted by persons using the service, is implausible to say the least. The authors and Crisis Text Line knew the context was the crisis moment. They knew the demographic was young. The "easy-to-understand" language quoted by the study was located 80% of the way through a 3,700-word agreement. Even if easy-to-understand, it was not easy-to-find.
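For anyone who wants to verify that kind of claim against an archived agreement, here is a minimal sketch in Python of how to measure where a phrase sits within a document, as a fraction of its words. The placeholder text and variable names are mine, not anyone's actual tooling.

import re

def position_of_phrase(full_text: str, phrase: str) -> float:
    """Return the phrase's position as a fraction of the document's words."""
    idx = full_text.find(phrase)
    if idx == -1:
        raise ValueError("phrase not found in document")
    words_before = len(re.findall(r"\S+", full_text[:idx]))
    return words_before / len(re.findall(r"\S+", full_text))

# In practice, terms_text would be the full archived Terms of Service and
# quoted would be the consent sentence cited by the 2019 JMIR paper.
terms_text = "..."  # paste the archived agreement here
quoted = "We have created a formal process for sharing information"
# print(f"{position_of_phrase(terms_text, quoted):.0%} of the way through")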
I compared versions of the Terms of Service and Privacy Policy (Terms) from that time and found the quoted research consent language was first added to the Terms as of February 16, 2016. See Disclosure to Third Parties – Researchers. But the language quoted in the paper dropped the opening phrase "As of Feb 16, 2016,": the paper quotes the December 9, 2016 version of the Terms, which had removed the time qualifier. Consent remains a problem regardless, but this wordsmithing change was a deception incorporated into the study by direct quote.
More recently, on January 28, 2022, POLITICO broke the story about for-profit Loris.ai, Inc., the company formed by Crisis Text Line in 2017 to create and market customer service software.
On January 29, 2022, Crisis Text Line posted a twitter thread in reply to the POLITICO story (without naming POLITICO). In the thread, Crisis Text Line cited the 2019 JMIR paper, saying
“Leading academics published research in the Journal of Medical Internet Research after spending 18 months evaluating our data sharing practices. They found that our ‘practices yielded key principles and protocols for ethical data sharing.’”
No mention was made that their own Chief Data Scientist and co-founder Bob Filbin was one of these "leading academics", and no mention that 12 of 13 authors had affiliations with Crisis Text Line. The twitter thread begins
“A recent story cherry-picked and omitted information about our data privacy policies. We want to clarify and fill in facts that were missing so people understand why ethical data privacy is foundational to our work.”
Here Crisis Text Line projects its own cherry-picking and omissions onto journalist Alexandra Levine and POLITICO. Crisis Text Line’s response to heated criticism of their commercial data use was to cite a study that was specific to research use only. Another deception.
At the time of the study (2016-2017) Crisis Text Line’s website was clear that commercial use of data was prohibited and not planned, ever. On October 3, 2017 Crisis Text Line removed the commercial use prohibition from its website. They did this one month after the research ethics study funding cycle ended (August 30, 2017, click on year 2015 (bottom) grant in list), and one month before Loris.ai, Inc. was incorporated (November 16, 2017, enter entity name Loris.ai, Inc.). See Timeline.
There’s a sense of well-orchestrated planning to these facts. To the outside observer, it raises questions. To heal from past deceptions, it’s important we be given full and truthful answers from persons with first-hand knowledge. As a step in that direction, on January 31, 2022 Dr. danah boyd contributed a lengthy personal explanation of her involvement. The reader is invited to read the entire blog post for context, but I feel there is no context that changes the meaning of her own words about consent that follow.
In Her Own Words
Emphasis in bold, below, has been added.
…“ToS [Terms of Service] is not consent”…
“… I know that no one in crisis reads lawyer-speak to learn this [code word to request deletion of data], …”
“…Given the trade-offs, I made a judgment call early on that not only was using texter data to strengthen training of counselors without their explicit consent ethical, but that to not do this would be unethical…”
“… This [using algorithms to triage the queue] means using people’s data without their direct consent, to leverage one person’s data to help another…”
“…This then led to the more complicated issue of whether or not to allow external researchers to study our data with an eye towards scholarship…I also knew that people who came to us in crisis were not consenting to be studied. Yes, there was a terms of service that could contractually permit such use, but I knew darn straight that no one would read it, and advised everyone involved to proceed as such…”
“…I elected to be the board member overseeing the research efforts…”
“…Instead, we gave the founder of Crisis Text Line permission to start a new organization, with us retaining a significant share in the company; we also retained the right to a board seat. This new entity was structured as a for-profit company designed to provide a service to businesses, leveraging what we had learned helping people. This company is called Loris.ai…”
“…Instead, we opted to create a data-sharing agreement that paralleled the agreement we had created with researchers: controlled access to scrubbed data solely to build models for training that would improve mental health more broadly…”
Though she wrote in her personal capacity, she was quoting herself saying and doing these things while in her Board of Directors role, a fiduciary position. Crisis Text Line retweeted her words, indicating they are fully aware. It’s very hard to make sense of it.
In sum, Crisis Text Line's own board member overseeing the research efforts admits they don't have consent and that key personnel knew as much, well before the paper was submitted for publication to JMIR on July 13, 2018. It's also clear that planning for commercial use was underway while the study was being conducted. Yet the authors, 12 of 13 being affiliated or on staff with Crisis Text Line, submitted and pursued its publication anyway.
I’m interested in talking with data ethics advisory board members. My very early inquiry suggests the Data Ethics Committee was not involved in meaningful ways, in particular when it came to Loris.ai, Inc. This is consistent with reporting by Alexandra Levine for Forbes, that members of the Data Ethics Committee had not been convened in years. In my review of the Crisis Text Line website history, a data ethics advisory committee has been advertised on their website continuously since 2015. In the Forbes article, Dr. Megan Ranney was quoted as saying she had not been made aware of the Loris.ai, Inc. data sharing arrangement. While Dr. Ranney was clearly opposed to the Loris.ai, Inc. commercial use, she was a co-author of the 2019 JMIR paper, announcing its publication on twitter here, while serving on the study’s Data Ethics Committee.
Some questions:
What does it mean when there’s an intentional, knowing misrepresentation of the truth for purposes of taking possession and use of data from persons in crisis, by fiduciaries in a non-profit corporation?
Who did Dr. danah boyd tell about the lack of meaningful consent for research?
What was shared and what was withheld from Data Ethics Committee members?
What was shared and what was withheld from other study authors?
Do the authors stand by the paper’s assertion that Terms of Service provided “easy-to-understand” consent?
If Dr. danah boyd felt there was no meaningful consent for research, how could there have been consent for commercial use?
Bob Filbin, now with Meta, is a key person; next come Data Ethics Committee members and all authors. These are individuals with first-hand knowledge, and we could benefit from hearing their perspectives.
The deception about consent continues to spread. Several of the research papers listed by Crisis Text Line use conversation transcript data and point to the 2019 JMIR paper for justification that consent has been obtained. This spreads an illusion about consent in ways that make it difficult to remove. According to JMIR's metrics, the 2019 paper has been viewed 5,300 times as of May 2021. The 2019 paper was selected "as a best paper of 2019" for the 2020 International Medical Informatics Association (IMIA) Yearbook, Special Section on Ethics in Health Informatics.
The names and reputations of the authors provide a legitimizing endorsement of consent by Terms of Service. Their help is needed to restore meaning to the word consent in the crisis context.
The first paper was published May 26, 2022 and cites the 2019 paper:
“…Following carefully developed procedures to ensure privacy, appropriate data use, and other protections for texters (Pisani et al., 2019), CTL provided de-identified CC reports, texter surveys, and metadata…gathered from conversations initiated by texters between October 12, 2017, and October 11, 2018. CTL anonymized the data by removing personally identifiable information using natural language processing (NLP).”
The second paper was published May 22, 2022 and doesn't cite the 2019 JMIR paper directly.
Both papers appear to invoke the form of exemption used by federal agencies (federal exemption) where consent is not required for secondary research on data when the “…identity of the human subjects cannot readily be ascertained…” The federal exemption does not apply to Crisis Text Line, but Institutional Review Boards may grant exemption from consent along similar lines.
“The study’s protocol involving secondary analysis of de-identified data without access to links was considered to meet Federal and University criteria for exemption by the University of Rochester’s Institutional Review Board (IRB) and not to meet the definition of Human Subjects research requiring review by the New York State Psychiatric Institute/Columbia University Department of Psychiatry’s IRB.”
Crisis Text Line, and some researchers, are having it both ways with the millions of crisis conversations in their data store. They claim to have consent to do research, and they claim that consent is not needed to do the research.
Crisis Text Line, Inc. has taken some positive steps and it couldn’t have been easy. I personally appreciate those steps, and the effort. My thoughts on the substance:
Trust Issues Lingering from 2020
I wonder: are the gestures designed to be just enough for the spotlight to move on? For example, Crisis Text Line posted its letter to the community after firing Nancy Lublin in June 2020, allowed time to go by, and then took that very important blog post down.
Loris.ai, Inc.
Cutting the tie and deleting the data is an empty gesture. Loris.ai finished beta and released a product last year. Crisis Text Line, Inc. and Loris.ai, Inc. already took the desperate conversations from about a million people, and the expertise from over 20,000 trained volunteers, without the informed consent of any of them. All that sorrow and hope fed the algorithms until they were satisfied. Loris.ai can harvest its own data now, if it so chooses, by contracting with its clients for customer conversations and associated data. See also the FCC twitter post.
Updated Terms of Service
Moving key items towards the top is helpful, but it doesn't create consent. It's a testable hypothesis whether a person using the service viewed the Terms of Service page, and how long they stayed on the page (a sketch of such a study follows the ideas below). Do that study for a year and report the findings. Or save the trouble, because we all know the result. It's good to know other means to obtain consent will be tested. Some ideas that occur to me:
Poor alternative – place check boxes in a form, based on the Terms of Service, with all permissions easy to read and select. This confirms consent, but it remains inappropriate to ask for much at entry.
Better – ask little at entry. Store data for six months, then delete. Continue reporting trends using data about the data, such as from volunteer surveys. Ask for any items of informed consent in an optional exit survey. If that's not ideal for researchers who want representative data, then don't do that study on that data.
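Returning to the page-view hypothesis: here is a minimal sketch, in Python, of how such a study could be run, assuming a hypothetical web access log with session_id, path and timestamp columns. Every name here is a placeholder for illustration; I have no knowledge of Crisis Text Line's actual web systems.

import csv
from collections import defaultdict
from datetime import datetime

def summarize_terms_views(log_path: str, terms_path: str = "/terms"):
    """Fraction of sessions that opened the Terms page, and median dwell time."""
    sessions = defaultdict(list)  # session_id -> [(timestamp, path), ...]
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])
            sessions[row["session_id"]].append((ts, row["path"]))

    viewed, dwell_seconds = 0, []
    for hits in sessions.values():
        hits.sort()
        for i, (ts, path) in enumerate(hits):
            if path == terms_path:
                viewed += 1
                # Approximate dwell time by the gap to the session's next request.
                if i + 1 < len(hits):
                    dwell_seconds.append((hits[i + 1][0] - ts).total_seconds())
                break

    fraction = viewed / len(sessions) if sessions else 0.0
    median = sorted(dwell_seconds)[len(dwell_seconds) // 2] if dwell_seconds else None
    return fraction, median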
Consent / Research
Quoting from Crisis Text Line board member Dr. danah boyd’s January 31, 2022 personal blog post, made in her personal capacity only:
“Yes, there was a terms of service that could contractually permit such use, but I knew darn straight that no one would read it, and advised everyone involved to proceed as such.”
and:
“… (A ToS is not consent.) …”
I assume that “ToS” refers to Terms of Service Agreement.
After being awarded nearly one million dollars in funding from the Robert Wood Johnson Foundation, Crisis Text Line conducted an 18-month study during 2016-2017 of ethics, privacy and security issues for research on the data. In 2019, the study was documented in a paper titled Protecting User Privacy and Rights in Academic Data-Sharing Partnerships: Principles from a Pilot Program at Crisis Text Line, published in the Journal of Medical Internet Research (2019 JMIR Paper). In a November 24, 2021 letter to JMIR, I raised ethical concerns about consent and conflict of interest. The quotes from Dr. danah boyd above confirm to me that Crisis Text Line, Inc. acted in bad faith with respect to the 2019 JMIR Paper. I believe the following actions are appropriate:
Transparency
Some quick ideas: Make more board of directors and advisory board meetings public. [2022/2/5. Ironically, Crisis Text Line removed the individual names of advisory board members after this blog posted. The link has now been updated to a recent, prior archived page.] Share all advisory board meeting minutes and final recommendations to the board of directors publicly. Share publicly all algorithms used on the data. Disclose all shareholders in Loris.ai, Inc. who are or have been affiliated with Crisis Text Line, Inc., along with the time period those shares were held. Change the donation policy so that no anonymous donations greater than $10,000 will be accepted. Make public, with appropriate redaction, all algorithm and data audit reports on ethics, bias, privacy and security.
I feel that we don’t hear enough from individual board members and advisory board members. With encouragement for increased transparency, these individual experts could feel empowered to share their views, even if contrary to final board action.
Alternatives to AI
Start an international school teaching the techniques of active listening, validation of feelings and non-judgment along with the arc of the conversation. Teach the emotional landscape and human condition from the growing knowledge base, with attention to the voices of those who have lived through, and are living through the most difficult ways of existing. Use human teachers to teach other humans. Adapt the curriculum to any human endeavor such as crisis response, schools, corporate board rooms, customer service, law enforcement, social work, health care, data science, agriculture, civil service. Don’t stop until all humans have been trained.
There are individual volunteers who have helped thousands. Find teachers among them. (Am I wrong to guess that existing algorithms have already found “effective” volunteers and learned to add weight to their words and phrases in machine learning? Whether that speculation is true or not, experienced volunteers can always be asked directly for insights.)
The circumstances here are different from Reddit's, where Jack Hanlon could share in a podcast interview that "Google, Facebook, Microsoft, and Open AI all used Reddit's dataset to train their premier text (inaudible) models." (Within excerpt from 12:22 – 13:40) [2022/2/5. Jack Hanlon was listed as a member of the Crisis Text Line Data, Ethics, and Research Advisory Board. His affiliation was given as Global Head of Data at Reddit, Inc.]
Recognize that trained volunteers who are happy – because they are freed from ethical concerns about the data they are creating – deliver the most cost-effective, on-mission value to the crisis service. Elevate them. Recruit more of them to reduce waits at peak times. It was a brilliant idea to cultivate so many willing volunteers within Ireland, whose daytime reaches into nighttime in the USA. Cheers to Ireland!
The crisis response community already knows what works; for example, a quick history is highlighted in this 1-minute youtube video. The ethics here are simple: algorithms aren't needed to extend the warmth of care or to guide people towards a calmer emotional state. Trust the techniques already in use. Try using an entry survey for triage to expedite individuals when wait times grow. Use known statistical risk factors. Trust people.
(Aside: Crisis Text Line's reported 89% success rate for predicting high and medium risk levels from within the queue with its triage algorithm seems impressive. Because the model was fine-tuned to detect the difference between high and medium risk, I would like to see the high-risk success rate reported as a separate number. The phrase "I want to die" can mean many different things. It may be worth considering that, as an alternative, self-reporting of urgency might be reasonably accurate and respectful, and could eliminate the intrusion of algorithms.)
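To illustrate the reporting point with entirely made-up numbers (this is not Crisis Text Line data), here is a small Python sketch showing how a pooled success rate can mask a weaker high-risk number:

# Synthetic labels for illustration only: 10 high-risk and 90 medium-risk cases.
from sklearn.metrics import classification_report

y_true = ["high"] * 10 + ["medium"] * 90
y_pred = ["high"] * 6 + ["medium"] * 4 + ["medium"] * 86 + ["high"] * 4

# Pooled: 92 of 100 predictions are correct (92%), yet recall on the
# high-risk class alone is only 6/10 = 60%. Per-class reporting shows this.
print(classification_report(y_true, y_pred, digits=2))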
Organization Behavior
I remain concerned about hurtful and inappropriate behavior by the organization. I learned about the power and control wheel from my training and experience as a volunteer. Then I felt it used against me. I was accused of having a reckless disregard for the truth. I was cautioned about the potential unintended consequences to persons in crisis, to staff, to donor income.
On August 23, 2021 I received this email from Dr. danah boyd from a personal email account. She was replying to the open letter transmitting my opinion paper. I was upset by her reply as fully explained in this response email. Even with my email request for a boundary with the organization, Mr. Rodriguez told POLITICO reporter Alexandra Levine (now with Forbes) that I was terminated for a Code of Conduct violation. I now give Crisis Text Line my permission to share publicly all internal communications about me, from me, to me. If I’m wrong, I’ll try my best to own it.
This type of behavior needs to stop. It’s mind-bending to be on the receiving end of it, from a crisis response organization. I tell the story not for me but to give testimony that it happened. I want to hold up and acknowledge the pain that others have experienced in the past and prevent any future manipulative, deceptive behavior. Please treat all volunteers according to the wonderful training they are provided. Then show respect for all those using the service, and the volunteers who help them, by affording all parties the right to consent.
I do appreciate these statements from Dr. danah boyd’s personal blog post:
“We strived to govern ethically, but that doesn’t mean others would see our decisions as such. We also make decisions that do not pan out as expected, requiring us to own our mistakes even as we change course.”
We’re all in this together after all. I wish no harm. I believe the organization has more unrealized potential than it realizes. Thanks to everyone who has been sending me information, messages, encouragement. I’m tired. I need to put this in the community’s good care for a time while I take a break. I remain hopeful.
My work on these issues is dedicated especially to those 200 or so beautiful people that I had the privilege to hold space with, during their crisis moments. Crisis Text Line made this possible, and I thank the organization with no reservations at all. You gave me the chance to tell a very young person, “I’m proud of you”, and that child said “Thank you, no one ever tells me that”.
Since 2013, about one million people have texted with trained volunteers at Crisis Text Line seeking help during desperate moments. The well-known service is reached by texting the word “HOME” to 741741. What is not well known is that de-identified transcripts of the crisis conversations are retained by Crisis Text Line indefinitely.
For those who don't want their conversations stored and used, there are ways to request deletion of the data. This information is currently very difficult to find, though it is important for persons using the service, and for those who refer people there.
The following information applies for persons located within the USA.
During a Conversation
To request deletion during a conversation, text the word DELETE LOOFAH. Because LOOFAH is a difficult word to remember, you can also ask the volunteer: volunteers are trained to provide you with the word DELETE LOOFAH if you ask. If you are using the Spanish language option for the service, the same word DELETE LOOFAH is used.
Any Time After a Conversation
Even if you texted the service years ago, you may request deletion of the conversation transcript and your personal data. Here’s how:
Send an email request to legal@crisistextline.org. Example message:
My name is [Your Name].
I used [my phone / Facebook Messenger / Whatsapp / webchat] from [phone number / account name] to send text messages. Please search all methods, accounts and numbers listed.
I request that all my personal information and all text/transcripts be permanently deleted. Please confirm back to me after this request has been fulfilled.
Crisis Text Line will want to confirm your identity when considering your request. Keep in mind you are making a request only. Crisis Text Line will decide whether to honor the request. Whatever is decided, you deserve an explanation, so feel free to ask. If you are using the Spanish language option for the service, the same contact email is used.
Reforms Needed
Persons experiencing crisis moments are in no position to give consent to anything complicated. Crisis Text Line assumes, and writes in their Terms, that if you use the service you agree they can keep and use the transcript of your conversation. They promise to remove personal identifying information. Even so, many people may not be comfortable having details of situations from their personal life retained and used by Crisis Text Line and other third parties. Crisis Text Line allows use of the conversations for research purposes, for evaluating its own services, and also for commercial purposes by license agreement with a for-profit spinoff company that it created. [Update: On January 31, 2022 Crisis Text Line, Inc. announced concessions about the for-profit spinoff company.]
Here are some reforms that I believe are needed; see the related post, Consent in Crisis, linked below.
(Aside: During my volunteer training at Crisis Text Line, when I first saw the word LOOFAH, I assumed it was an acronym for a federal privacy regulation. It actually refers to a natural plant, and a cleaning implement made from the plant. As used by Crisis Text Line it is a word for "scrubbing" or removing data.)
Texting Through Third Parties (Facebook, WhatsApp)
Use of a third party app when sending text messages to Crisis Text Line creates another platform for storage of data. According to Crisis Text Line, you would need to contact these other parties separately to request removal of the conversation data held by them.
A Question
A question arises: how is it possible to delete text/transcript data associated with an individual after the data is de-identified? There are different ways to de-identify data, so I won't presume to know the answer. If I'm able to find the answer I will provide an update here. If personally identifying information is retained for 7 years, then as of this writing (January 2022) the retention clock reaches back to January 2015. De-identified transcripts would still be retained indefinitely.
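One design that would make deletion possible after de-identification is a separate, guarded link table: transcripts are stored under a pseudonymous ID, and the mapping from phone number to that ID is kept apart from the transcripts. Here is a minimal Python sketch of that idea; I emphasize this is an assumption for illustration, not Crisis Text Line's actual architecture.

import hmac, hashlib

SECRET_KEY = b"rotate-and-guard-this-key"  # placeholder

def pseudonym(phone_number: str) -> str:
    """Stable pseudonymous ID derived from the phone number."""
    return hmac.new(SECRET_KEY, phone_number.encode(), hashlib.sha256).hexdigest()

transcripts = {}  # pseudonym -> list of scrubbed transcripts
link_table = {}   # phone number -> pseudonym; stored separately, access-controlled

def store(phone_number: str, scrubbed_text: str) -> None:
    pid = pseudonym(phone_number)
    link_table[phone_number] = pid
    transcripts.setdefault(pid, []).append(scrubbed_text)

def delete_all(phone_number: str) -> bool:
    """Honor a deletion request: remove the transcripts and the link itself."""
    pid = link_table.pop(phone_number, None)
    if pid is None:
        return False  # no link retained; data can no longer be tied to the person
    transcripts.pop(pid, None)
    return True

Under a design like this, data with a retained link table is pseudonymized rather than anonymous: deletion requests can be honored, but only because re-identification is still technically possible. If the link is destroyed, the transcripts truly cannot be tied back to an individual, and a deletion request cannot be honored either.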
There's a complication. The Crisis Text Line Terms of Service & Privacy Policies have changed over the years. In the 2013 Terms (Privacy Policy), no specific time was given for how long data would be retained, either for personally identifying or de-identified data (see the "Retention of Information" section at the link). The 2016 Terms contained the first mention I could find of specific retention times: personal information would be retained for 7 years then scrubbed, with de-identified data to be retained indefinitely. Whether the organization's actual retention practices track the Terms in effect over time, I don't know.
What I do know, and what is presented here, are the instructions for requesting deletion as of January 2022.
Related Post: Consent in Crisis
For Reference: Terms of Service Quoted Language
The information presented above is paraphrased from Crisis Text Line's Terms of Service and website, based on the quoted language below from the current Terms of Service (dated January 13, 2022). As time goes by, you may wish to visit the Terms of Service for any updated requirements.
ACCESS/CHANGES TO PERSONALLY IDENTIFIABLE INFORMATION
If you are located within the U.S., you may request access to or changes to Personally Identifiable Information of yours which we have stored by emailing legal@crisistextline.org. You will be asked to verify your identity in order for us to grant you access to or process changes to your Personally Identifiable Information.
If you are located within the U.S., You may request that we delete your Personally Identifiable Information, such as your full name, physical address, zip code, phone number, and texts/message transcripts by messaging the word DELETE LOOFAH to us through the modality (text, Facebook Messenger, WhatsApp, or webchat) you used to reach out to us, after which you will be prompted to confirm your request.
We will make reasonable efforts to process requests promptly, but may decline to process requests that are unreasonably repetitive, require unreasonable technical effort, jeopardize the privacy of you or others, are otherwise impractical, or which would conflict with a law enforcement matter. Additionally, we will decline to process requests if we believe preservation of your Personally Identifiable Information is necessary: (A) to comply with the law or in response to a subpoena, court order, government request, or other legal obligations; (B) to protect the interests, rights, safety, or property of CTL, its employees, agents, or volunteers; (C) to enforce our Terms; (D) in connection with a sale, merger, or change of control of CTL or its affiliates, or (E) to address fraud, security, technical issues, or to operate the Services or their infrastructure systems properly.
References from the Internet Archive:
2013 Privacy Policy:
https://web.archive.org/web/20130816134904/http://www.crisistextline.org/privacy-hotline/
2015 Terms:
https://web.archive.org/web/20150206031020/crisistextline.org/privacy/
2016 Terms:
https://web.archive.org/web/20180331084851/https://www.crisistextline.org/privacy
In early 2020 I was brand new to volunteering at a crisis helpline. Soon after beginning and still with much to learn, I found myself advocating for reform. Knowing the support of another person can make all the difference in difficult moments, I want to talk openly about my concerns and help make things better.
It begins at the doorway to a crisis service, when a person experiencing crisis has decided to reach out. Before entering, they will be asked to give consent to the organization.
I had some questions about this. Is it consent when consent is required to receive service? Is it consent when the person is overwhelmed, anxious, panicked, hopeless, in danger, or twelve years of age? What are the words in the organization’s Terms of Service & Privacy Policy?
Is this a mere formality, or is this a moment of great import for the whole community?
Consent, Respect and Understanding
My volunteer service was with Crisis Text Line. They provided training in a wonderfully supportive environment, and I served from April 2020 to August 2021. As I gained experience in conversations on shift, and off-shift as I did homework to complement my training, I heard something. I heard voices longing for respect, acknowledgment, and understanding, not only during moments of crisis, but in daily living with challenges of health and well-being. The message was so loud and clear I couldn’t miss it or ignore it. At the same time, I was finding a lack of respect in the system, when it came to consent.
Welcome! (Sign Here)
Crisis service organizations place a wide welcome mat in front of the door to care, but entry is conditional. I use Crisis Text Line as an example because of first-hand experience, but it’s not an isolated issue. At Crisis Text Line, when I asked questions about consent, I experienced resistance. My questions had become very specific. They were about storage and use of “anonymized” conversations as data; not only for study and research but also for monetized commercial use by a for-profit spinoff company. [1, 2 (p.14), 3] Crisis Text Line defended its practices for consent and data ethics, but when I persisted and asked for open discussion I was warned, then terminated.
If you compare Terms of Service agreements across well-known crisis response and service organizations, you will find a range: from deliberate, full protections for the person using the service, to the opposite, where nearly all rights to information and data are reserved to the organization. Realistically, these broad takings are happening without the person's knowledge. It's also important to understand that individual organizations tend to resist change and are unlikely to reform on their own. [4]
Leadership within the Community
Organizations and institutions within the health care community must lead when it comes to consent. Consent must be understood as a gesture of respect at both personal and institutional levels.
Authentic, informed consent is free from any form of manipulation, deception, or concealment. Consent is the willing, fully informed, clear-headed gift of permission for a known and specific thing. The giver of consent is in control and can change their mind.
When consent is discussed in the context of abuse and sexual violence, the full meaning of the word must be protected. This full meaning should be guarded for consistency within all systems of health care.
It’s not easy to reach out for help. People know and feel when they are being patronized and their concerns dismissed. When care systems lose trust, it’s a difficult injury to heal. Honoring full and authentic informed consent is an investment in trust.
When raising questions we may be told that Terms of Service are legal requirements that can’t be changed. Valid legal protections are not the issue. Remember that legal staff are advisors to organizations. Look to the board of directors or other deciding body when seeking accountability.
The entire community is hurt when individual organizations betray trust. Just as the community tends to support and pull together in common directions, so can the community seek accountability from all its members.
Individual Responsibility
My message to health professionals, those in academia, data scientists and researchers, besides “thank you”, is this: Please take individual responsibility to learn whether data is ethically sourced in terms of consent. The burden for establishing respectful consent practices must not be placed on persons who are being served within the health and wellness space.
Reform
From respect, reform can come naturally. Here are two things I believe the community can do.
(1) Create a model Terms of Service and Privacy Agreement for non-profit crisis service providers.
(2) Create a scoring system for existing Terms of Service and Privacy Agreements. Example categories, with a small sketch following the list:
Clarity. Are key elements of agreement accessible and understandable to a person experiencing crisis? Is too much being asked?
Automatic Opt-out. No long-term data collection or processing is the default. Informed consent can be requested within an exit survey, but only with great care. Do not ask too much.
Transparency. Apply this lens to the entire organization. Are board meetings public?
Data identification. What information does the organization collect?
Data retention. Is data destroyed within a short time, or retained forever?
Data use. Does the service share data with third parties? Does it monetize data? What limits are placed on research? Are there affiliated for-profit interests?
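To make the scoring idea concrete, here is a small Python sketch of what a rubric could look like in code. The category names mirror the list above; the 0-5 scale and the example scores are my own placeholder assumptions, not a finished rubric.

from dataclasses import dataclass

@dataclass
class TermsScore:
    clarity: int              # 0-5: key elements readable in a crisis moment?
    opt_out_default: int      # 0-5: no long-term collection unless consented?
    transparency: int         # 0-5: public board meetings, audits, algorithms?
    data_identification: int  # 0-5: is what's collected clearly disclosed?
    data_retention: int       # 0-5: short retention, or kept forever?
    data_use: int             # 0-5: third parties, monetization, research limits?

    def total(self) -> int:
        return (self.clarity + self.opt_out_default + self.transparency
                + self.data_identification + self.data_retention + self.data_use)

# Example: one hypothetical organization's Terms, as scored by reviewers.
example = TermsScore(clarity=2, opt_out_default=0, transparency=1,
                     data_identification=3, data_retention=1, data_use=1)
print(f"{example.total()} / 30")

Scores from persons with lived experience could be averaged with scores from data ethics reviewers, and published alongside each organization's Terms.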
To bring these ideas to life, I think it’s important to begin with persons having lived experience, then involve the whole community. I believe these reforms will naturally build more respect into systems of care, where it is desperately needed.
AUTHOR
Tim Reierson (he/him) is just another human on the planet, still learning. In his professional life he is a civil engineer. He advocates for data ethics and consent in the health space.
REFERENCES
1. Crisis Text Line. (2018, March 12). What is Loris.ai? [Blog post]. Retrieved December 8, 2021 from the November 23, 2021 page capture: https://web.archive.org/web/20211123054630/https://www.crisistextline.org/blog/2018/03/12/what-is-loris-ai/
2. Friedman LLP. (2018, September 6). Crisis Text Line, Inc. and Subsidiary Consolidated Financial Statements Year Ended December 31, 2017 and Independent Auditors’ Report [PDF file, page 14]. CTL. Retrieved December 8, 2021 from https://www.crisistextline.org//srv/htdocs/wp-content/uploads/2020/03/2017financialstatement-1.pdf [Note: Previous link has changed, updated link retrieved January 14, 2022 from https://www.crisistextline.org/wp-content/uploads/2020/03/2017financialstatement-1.pdf]
3. Loris. (n.d.). Retrieved December 8, 2021 from the December 15, 2018 page capture: https://web.archive.org/web/20181215035011/https://www.loris.ai/
4. Bella, D. (1987). Organizations and Systematic Distortion of Information. Journal of Professional Issues in Engineering, 113(4), 360-370. [Available for purchase from https://ascelibrary.org/doi/abs/10.1061/%28ASCE%291052-3928%281987%29113%3A4%28360%29]