Deception Has No Place Here

A call for researchers and study authors to honor meaningful consent and privacy for persons experiencing crisis and, openly, to help make things right.

Crisis Text Line has deceived us. They’ve deceived us about consent, and about storing and using conversations. To restore trust, the deception needs to be acknowledged and ended.

No more research on crisis conversations without consent. No more algorithms in the conversations, triaging the queue or otherwise. No more storing conversations forever.

Why I Feel This

In my brief experience as a volunteer from 2020 to 2021, I had the opportunity to talk with about 200 persons, including at least one very sweet child experiencing some of her most difficult moments. They taught me more than I can express. Above all, I was gifted with a feeling of deep respect. Difficult lives are being lived, and they deserve care and support. Support with no strings attached.

And what was it, in the crisis moment, that gave support? Something simple. The protected space for sharing in safety. A space to be together for a brief time. Listening, showing understanding, meeting a person where they are, allowing for emotion to flow without judgment, trusting them to know themselves best of all.

No doubt far fewer people would reach a crisis point if our society provided for basic human needs. But when the breaking point arrives, it’s the human touch that reaches towards connection, and a calming movement away from severe emotional distress.

The Deception about Consent

Crisis Text Line has represented all along that they have the utmost respect for persons using the service, and they understand the sensitivity of the conversations. They view information and data as a way to help communities respond to crisis trends in real time. They view research as a way to learn from the unique opportunity that their service provides, to help the mental health community better care for those who struggle. In 2016-2017 they undertook a peer-reviewed study, overseen by a Data Ethics Committee, that scrutinized the issues of consent, privacy, data security, and ethical use specifically for research. The refrain of “data for good” was and is foundational to how Crisis Text Line describes its mission and purpose.

But if we peer deeper beneath this smooth surface, the deceptions begin to appear, like the dark forms of rocks and snags.

The study just mentioned was titled “Protecting User Privacy and Rights in Academic Data-Sharing Partnerships: Principles from a Pilot Program at Crisis Text Line”. It was published in 2019 by the Journal of Medical Internet Research (JMIR).

There were irregularities. Seven members of the study's Data Ethics Committee were authors. Two more authors, Dr. Anthony Pisani and Dr. Madelyn Gould, had affiliations as advisory board members to Crisis Text Line, undeclared within the paper. Three more authors were on staff with Crisis Text Line, one of whom was Crisis Text Line's Chief Data Scientist Bob Filbin (who also chaired the study's Data Ethics Committee). The others were Dr. Shairi Turner and Nitya Kanuri. This makes 12 of 13 authors affiliated with Crisis Text Line. Yet only two of those affiliations were declared in the paper: Crisis Text Line staff members Bob Filbin and Dr. Shairi Turner.

So, when the paper characterizes consent for research as having been granted by virtue of an “easy-to-understand” Terms of Service agreement, it gives pause about the study’s objectivity.

The 2019 JMIR paper, on consent for research:

“CTL provides texters with a link to an easy-to-understand Terms of Service [b], including a disclosure of potential future data use, before every crisis conversation.

[b] Terms of service: ‘We have created a formal process for sharing information about conversations with researchers at universities and other institutions. We typically share data with trusted researchers when it will result in insights that create a better experience for our texters. We follow a set of best practices for data sharing based on the University of Michigan’s Inter-University Consortium of Social and Political Research, one of the largest open data projects in the U.S., which includes stringent ethical, legal, and security checks. For more details, see our policies for open data collaborations’.”

To say that the Terms of Service was the means by which persons using the service granted consent to research is implausible, to say the least. The authors and Crisis Text Line knew the context was the crisis moment. They knew the demographic was young. The "easy-to-understand" language quoted by the study was located 80% of the way through a 3,700-word agreement. Even if easy to understand, it was not easy to find.

I compared the Terms of Service and Privacy Policy (Terms) in effect at that time and found that the quoted research consent language was first added to the Terms as of February 16, 2016. See Disclosure to Third Parties – Researchers. But the language quoted in the paper removed the phrase "As of Feb 16, 2016," from the beginning. The paper quotes from the December 9, 2016 version of the Terms, which had removed the time qualifier. Consent remains a problem regardless, but this wordsmithing change was a deception incorporated by direct quote into the study.

More recently, on January 28, 2022, POLITICO broke the story about for-profit Loris.ai, Inc., the company formed by Crisis Text Line in 2017 to create and market customer service software.

On January 29, 2022, Crisis Text Line posted a Twitter thread in reply to the POLITICO story (without naming POLITICO). In the thread, Crisis Text Line cited the 2019 JMIR paper, saying

“Leading academics published research in the Journal of Medical Internet Research after spending 18 months evaluating our data sharing practices. They found that our ‘practices yielded key principles and protocols for ethical data sharing.’”

No mention was made that their own Chief Data Scientist and co-founder Bob Filbin was one of these "leading academics", and no mention that 12 of 13 authors had affiliations to Crisis Text Line. The Twitter thread begins

“A recent story cherry-picked and omitted information about our data privacy policies. We want to clarify and fill in facts that were missing so people understand why ethical data privacy is foundational to our work.”

Here Crisis Text Line projects its own cherry-picking and omissions onto journalist Alexandra Levine and POLITICO. Crisis Text Line’s response to heated criticism of their commercial data use was to cite a study that was specific to research use only. Another deception.

At the time of the study (2016-2017), Crisis Text Line's website was clear that commercial use of data was prohibited and not planned, ever. On October 3, 2017, Crisis Text Line removed the commercial use prohibition from its website. They did this one month after the research ethics study funding cycle ended (August 30, 2017), and one month before Loris.ai, Inc. was incorporated (November 16, 2017). See Timeline.

There's a sense of well-orchestrated planning to these facts. To the outside observer, it raises questions. To heal from past deceptions, it's important that we be given full and truthful answers from persons with first-hand knowledge. As a step in that direction, on January 31, 2022, Dr. danah boyd contributed a lengthy personal explanation of her involvement. The reader is invited to read the entire blog post for context, but I feel there is no context that changes the meaning of her own words about consent, which follow.

In Her Own Words

Emphasis in bold, below, has been added.

…“ToS [Terms of Service] is not consent”…

“… I know that no one in crisis reads lawyer-speak to learn this [code word to request deletion of data], …”

“…Given the trade-offs, I made a judgment call early on that not only was using texter data to strengthen training of counselors without their explicit consent ethical, but that to not do this would be unethical…”

“… This [using algorithms to triage the queue] means using people’s data without their direct consent, to leverage one person’s data to help another…”

“…This then led to the more complicated issue of whether or not to allow external researchers to study our data with an eye towards scholarship…I also knew that people who came to us in crisis were not consenting to be studied. Yes, there was a terms of service that could contractually permit such use, but I knew darn straight that no one would read it, and advised everyone involved to proceed as such…”

“…I elected to be the board member overseeing the research efforts…”

“…Instead, we gave the founder of Crisis Text Line permission to start a new organization, with us retaining a significant share in the company; we also retained the right to a board seat. This new entity was structured as a for-profit company designed to provide a service to businesses, leveraging what we had learned helping people. This company is called Loris.ai…”

“…Instead, we opted to create a data-sharing agreement that paralleled the agreement we had created with researchers: controlled access to scrubbed data solely to build models for training that would improve mental health more broadly…”

Though she wrote in her personal capacity, she was quoting herself saying and doing these things while in her Board of Directors role, a fiduciary position. Crisis Text Line retweeted her words, indicating they are fully aware.  It’s very hard to make sense of it.

In sum, Crisis Text Line's own board member overseeing the research efforts admits they don't have consent and that key personnel knew as much, well before the paper was submitted for publication to JMIR on July 13, 2018. It's also clear that planning for commercial use was underway while the study was being conducted. Yet the authors, 12 of 13 being affiliated or on staff with Crisis Text Line, submitted and pursued its publication anyway.

I'm interested in talking with data ethics advisory board members. My very early inquiry suggests the Data Ethics Committee was not involved in meaningful ways, in particular when it came to Loris.ai, Inc. This is consistent with reporting by Alexandra Levine for Forbes that members of the Data Ethics Committee had not been convened in years. In my review of the Crisis Text Line website history, a data ethics advisory committee has been advertised on their website continuously since 2015. In the Forbes article, Dr. Megan Ranney was quoted as saying she had not been made aware of the Loris.ai, Inc. data sharing arrangement. While Dr. Ranney was clearly opposed to the Loris.ai, Inc. commercial use, she was a co-author of the 2019 JMIR paper, announcing its publication on Twitter here, while serving on the study's Data Ethics Committee.

Some questions:

What does it mean when fiduciaries of a non-profit corporation intentionally and knowingly misrepresent the truth in order to take possession of, and use, data from persons in crisis?

Who did Dr. danah boyd tell about the lack of meaningful consent for research?

What was shared and what was withheld from Data Ethics Committee members?

What was shared and what was withheld from other study authors?

Do the authors stand by the paper’s assertion that Terms of Service provided “easy-to-understand” consent?

If Dr. danah boyd felt there was no meaningful consent for research, how could there have been consent for commercial use?

Bob Filbin, now with Meta, is a key person. Next come Data Ethics Committee members and all of the authors. These are individuals with first-hand knowledge, and we could benefit from hearing their perspectives.

The deception about consent continues to spread. Several of the research papers listed by Crisis Text Line use conversation transcript data and point to the 2019 JMIR paper as justification that consent has been obtained. This spreads an illusion about consent in ways that make it difficult to remove. According to JMIR's metrics, the 2019 paper had been viewed 5,300 times as of May 2021. It was selected "as a best paper of 2019" for the 2020 International Medical Informatics Association (IMIA) Yearbook, Special Section on Ethics in Health Informatics.

The names and reputations of the authors provide a legitimizing endorsement of consent by Terms of Service.  Their help is needed to restore meaning to the word consent in the crisis context.

Research Consent Postscript
 
Two more research papers were published very recently (May 2022), evaluating the perceived effectiveness of Crisis Text Line in advance of the USA federal 988 crisis line rollout. They are companion papers with the same eight authors, plus one additional author on the first paper. The authors include Dr. Anthony Pisani and Dr. Madelyn Gould, both highly respected and long-time advisory board members to Crisis Text Line. Both were among the 13 authors of the 2019 JMIR paper.
 

The first paper was published May 26, 2022, and cites the 2019 paper:

“…Following carefully developed procedures to ensure privacy, appropriate data use, and other protections for texters (Pisani et al., 2019), CTL provided de-identified CC reports, texter surveys, and metadata…gathered from conversations initiated by texters between October 12, 2017, and October 11, 2018. CTL anonymized the data by removing personally identifiable information using natural language processing (NLP).”

The second paper was published May 22, 2022, and doesn't cite the 2019 JMIR paper directly.

Both papers appear to invoke the form of exemption used by federal agencies (the federal exemption), under which consent is not required for secondary research on data when the "…identity of the human subjects cannot readily be ascertained…" The federal exemption does not apply to Crisis Text Line, but Institutional Review Boards may grant exemption from consent along similar lines.

“The study’s protocol involving secondary analysis of de-identified data without access to links was considered to meet Federal and University criteria for exemption by the University of Rochester’s Institutional Review Board (IRB) and not to meet the definition of Human Subjects research requiring review by the New York State Psychiatric Institute/Columbia University Department of Psychiatry’s IRB.”

Crisis Text Line, and some researchers, are having it both ways with the millions of crisis conversations in their data store. They claim to have consent to do research, and they claim that consent is not needed to do the research.

[I feel that the federal regulation exempting research from consent, when data is anonymized, is not appropriate to invoke for crisis conversations. Also, when federal regulations don’t apply but Institutional Review Boards waive consent, I feel that is inappropriate in the crisis context.  I’ll be writing separately about that.]
 
It’s easy to understand why Crisis Text Line has connections with Big Data. But these other supporters, from academia in particular, present a more complex, nuanced set of questions about deception, consent for research, and Crisis Text Line. Understanding and unraveling that story is important work that remains to be done.
 
The Deception about Conversation Privacy and Confidentiality
 
Persons using the service expect privacy. People who used the service, and volunteers, have commented on social media, expressing feelings of betrayal and violation. Trust in our systems of care, such as they are, has been damaged. People wonder: Where can I go? Where should I refer others?
 
While it’s true that some might support data storage and use for research, there are intractable problems. How is it possible to obtain informed consent for research that hasn’t yet been defined? How is research possible on a limited sample size (zero conversations so far with informed consent), and would informed consent infuse the data with bias? There’s a way to resolve these questions: Respect the conversations by staying out of them, except for quality control by humans.
 
The “Anonymized” Data Deception
 
The crisis moment is specifically unique to one. The crisis conversation, unique to two. It seems that every imaginable human suffering, in every possible combination, has happened, and is likely to be happening to someone right now. If viewed as one, mass anonymized devastating condition, we see hopelessness. If viewed one person at a time, we are invited to engage in one person’s unique story, worth listening to. Then, it becomes possible to offer a warm blanket of comfort and care in the crisis moment. One, and then another, like this, can do a whole world of good.
 
It’s easy to fall into a trap of lumping people together within categories: self-harm, anxiety, depression, persistent thoughts of ending one’s life. Presuming too much from a label is a form of erasure, a form of anonymizing the individual. In crisis conversations, there are no generic circumstances, no generic conversations. It’s all unique detail, personal to each person, and to the volunteers who answer.
 
This viewpoint stands on its own. The very serious concerns about whether it's even possible to anonymize conversations remain. Thank goodness for those who advocate there.
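Those concerns can be illustrated with a minimal, hypothetical sketch. The function and patterns below are my own simplified illustration, not Crisis Text Line's actual NLP pipeline: even when obvious direct identifiers (names after titles, phone numbers) are scrubbed, the contextual detail that makes a story unique to one person survives in the transcript.

```python
import re

def scrub_pii(text: str) -> str:
    """Hypothetical, simplified PII scrubber: removes phone numbers
    and title-prefixed names. Stands in for a real NLP pipeline."""
    # Replace US-style phone numbers.
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)
    # Replace names following common titles (Mr., Mrs., Ms., Dr.).
    text = re.sub(r"\b(Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+", "[NAME]", text)
    return text

message = ("I'm the only wheelchair user at Lincoln High, "
           "and my coach Mr. Smith knows. Call me at 555-123-4567.")
scrubbed = scrub_pii(message)

# Direct identifiers are gone...
assert "[PHONE]" in scrubbed and "[NAME]" in scrubbed
# ...but quasi-identifiers that could re-identify a person remain.
assert "only wheelchair user" in scrubbed
assert "Lincoln High" in scrubbed
```

The point of the sketch: "anonymized" is doing a lot of work in these claims, because the combination of school, role, and circumstance can still single out one individual even after every name and number is stripped.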
 
Here, I’m talking about the intimacy, the emotional vulnerability, the surrender of self that is required to open a possibility. A possibility for one human to help another. When someone answers that call, and offers to help hold that fragile space, then there are two, and that makes all the difference. There’s no room for anything else.
 
No More Deception
 
This means:
 
1. No research on the massive accumulation of crisis conversation transcripts. Without consent, this is a deception, a theft.
 
2. No algorithms in the data, "learning" from it, or triaging the queue. Let people answer for themselves; use a simple, properly designed entry survey.
 
3. No more perpetual storage of crisis conversation transcripts. As if people’s lives and crises are being kept frozen in a cryogenic data lab for a future language processing breakthrough. No. Delete all conversations after six months, to allow reasonable time for quality control.
 
Please, no more deception.
 
 
 
 
Disclosures, More Information
a. Shortly after I was terminated as a volunteer by Crisis Text Line on August 21, 2021, I received this email from Dr. danah boyd, and responded with this email.
b. For the curious, here’s the opinion paper I shared, resulting in my termination.
c. In November 2021, I wrote a letter of concern to the Journal of Medical Internet Research. It has been shared with the paper's authors, and the concerns are under review at the time of this writing. I've been generously invited by JMIR to write a commentary on the 2019 paper, which I plan to submit by mid-August 2022.
 
August 20, 2022 update:  The article has been updated to reflect that an additional author was on Crisis Text Line staff.  Nitya Kanuri was the Open Data Collaborations Manager for Crisis Text Line. This affiliation was not declared in the 2019 JMIR paper.
 
August 31, 2022 update:  To clarify that Crisis Text Line, Inc., as a private non-profit corporation, is not bound by terms of the federal exemption for consent.