A call for researchers and study authors to honor meaningful consent and privacy for persons experiencing crisis, and to openly help make things right.
Crisis Text Line has deceived us. They’ve deceived us about consent, and about storing and using conversations. To restore trust, the deception needs to be acknowledged and ended.
No more research on crisis conversations without consent. No more algorithms in the conversations, triaging the queue or otherwise. No more storing conversations forever.
Why I Feel This
In my brief experience as a volunteer from 2020 to 2021, I had the opportunity to talk with about 200 persons, at least one a very sweet child, experiencing some of her most difficult moments. They taught me more than I can express. Above all, I was gifted with a feeling of deep respect. Difficult lives are being lived, and they deserve care and support. Support with no strings attached.
And what was it, in the crisis moment, that gave support? Something simple. The protected space for sharing in safety. A space to be together for a brief time. Listening, showing understanding, meeting a person where they are, allowing for emotion to flow without judgment, trusting them to know themselves best of all.
No doubt far fewer people would reach a crisis point if our society provided for basic human needs. But when the breaking point arrives, it’s the human touch that reaches toward connection and offers a calming movement away from severe emotional distress.
The Deception about Consent
Crisis Text Line has represented all along that they have the utmost respect for persons using the service, and they understand the sensitivity of the conversations. They view information and data as a way to help communities respond to crisis trends in real time. They view research as a way to learn from the unique opportunity that their service provides, to help the mental health community better care for those who struggle. In 2016-2017 they undertook a peer-reviewed study, overseen by a Data Ethics Committee, that scrutinized the issues of consent, privacy, data security, and ethical use specifically for research. The refrain of “data for good” was and is foundational to how Crisis Text Line describes its mission and purpose.
But if we peer down deeper beneath this smooth surface, the deceptions begin to appear, like dark forms of rocks and snags.
The study just mentioned was titled “Protecting User Privacy and Rights in Academic Data-Sharing Partnerships: Principles from a Pilot Program at Crisis Text Line”. It was published in 2019 by the Journal of Medical Internet Research (JMIR).
There were irregularities. Seven members of the study’s Data Ethics Committee were authors. Two more authors, Dr. Anthony Pisani and Dr. Madelyn Gould, had affiliations as advisory board members to Crisis Text Line, undeclared within the paper. Three more authors were on staff with Crisis Text Line, one of whom was Crisis Text Line’s Chief Data Scientist Bob Filbin (who also chaired the study’s Data Ethics Committee). The others were Dr. Shairi Turner and Nitya Kanuri. This makes 12 of 13 authors affiliated with Crisis Text Line. Yet only two of those affiliations were declared in the paper: those of Crisis Text Line staff members Bob Filbin and Dr. Shairi Turner.
So, when the paper characterizes consent for research as having been granted by virtue of an “easy-to-understand” Terms of Service agreement, it raises doubts about the study’s objectivity.
The 2019 JMIR paper, on consent for research:
“CTL provides texters with a link to an easy-to-understand Terms of Service [b], including a disclosure of potential future data use, before every crisis conversation.
[b] Terms of service: ‘We have created a formal process for sharing information about conversations with researchers at universities and other institutions. We typically share data with trusted researchers when it will result in insights that create a better experience for our texters. We follow a set of best practices for data sharing based on the University of Michigan’s Inter-University Consortium of Social and Political Research, one of the largest open data projects in the U.S., which includes stringent ethical, legal, and security checks. For more details, see our policies for open data collaborations’.”
To say that the Terms of Service was the means by which persons using the service granted consent to research is implausible, to say the least. The authors and Crisis Text Line knew the context was the crisis moment. They knew the demographic was young. The “easy-to-understand” language quoted by the study was located 80% of the way through a 3,700-word agreement. Even if easy-to-understand, it was not easy-to-find.
More recently, on January 28, 2022, POLITICO broke the story about for-profit Loris.ai, Inc., the company formed by Crisis Text Line in 2017 to create and market customer service software.
On January 29, 2022, Crisis Text Line posted a Twitter thread in reply to the POLITICO story (without naming POLITICO). In the thread, Crisis Text Line cited the 2019 JMIR paper, saying:
“Leading academics published research in the Journal of Medical Internet Research after spending 18 months evaluating our data sharing practices. They found that our ‘practices yielded key principles and protocols for ethical data sharing.’”
No mention was made that their own Chief Data Scientist and co-founder Bob Filbin was one of these “leading academics”, and no mention that twelve of the 13 authors had affiliations to Crisis Text Line. The Twitter thread begins:
“A recent story cherry-picked and omitted information about our data privacy policies. We want to clarify and fill in facts that were missing so people understand why ethical data privacy is foundational to our work.”
Here Crisis Text Line projects its own cherry-picking and omissions onto journalist Alexandra Levine and POLITICO. Crisis Text Line’s response to heated criticism of their commercial data use was to cite a study that was specific to research use only. Another deception.
At the time of the study (2016-2017) Crisis Text Line’s website was clear that commercial use of data was prohibited and not planned, ever. On October 3, 2017 Crisis Text Line removed the commercial use prohibition from its website. They did this one month after the research ethics study funding cycle ended (August 30, 2017, click on year 2015 (bottom) grant in list), and one month before Loris.ai, Inc. was incorporated (November 16, 2017, enter entity name Loris.ai, Inc.). See Timeline.
There’s a sense of well-orchestrated planning to these facts. To the outside observer, it raises questions. To heal from past deceptions, it’s important that we be given full and truthful answers from persons with first-hand knowledge. As a step in that direction, on January 31, 2022, Dr. danah boyd contributed a lengthy personal explanation of her involvement. The reader is invited to read the entire blog post for context, but I feel there is no context that changes the meaning of her own words about consent, which follow.
In Her Own Words
Emphasis in bold, below, has been added.
…“ToS [Terms of Service] is not consent”…
“… I know that no one in crisis reads lawyer-speak to learn this [code word to request deletion of data], …”
“…Given the trade-offs, I made a judgment call early on that not only was using texter data to strengthen training of counselors without their explicit consent ethical, but that to not do this would be unethical…”
“… This [using algorithms to triage the queue] means using people’s data without their direct consent, to leverage one person’s data to help another…”
“…This then led to the more complicated issue of whether or not to allow external researchers to study our data with an eye towards scholarship…I also knew that people who came to us in crisis were not consenting to be studied. Yes, there was a terms of service that could contractually permit such use, but I knew darn straight that no one would read it, and advised everyone involved to proceed as such…”
“…I elected to be the board member overseeing the research efforts…”
“…Instead, we gave the founder of Crisis Text Line permission to start a new organization, with us retaining a significant share in the company; we also retained the right to a board seat. This new entity was structured as a for-profit company designed to provide a service to businesses, leveraging what we had learned helping people. This company is called Loris.ai…”
“…Instead, we opted to create a data-sharing agreement that paralleled the agreement we had created with researchers: controlled access to scrubbed data solely to build models for training that would improve mental health more broadly…”
Though she wrote in her personal capacity, she was quoting herself saying and doing these things while in her Board of Directors role, a fiduciary position. Crisis Text Line retweeted her words, indicating they are fully aware. It’s very hard to make sense of it.
In sum, Crisis Text Line’s own board member overseeing the research efforts admits they did not have consent, and that key personnel knew as much, well before the paper was submitted for publication to JMIR on July 13, 2018. It’s also clear that planning for commercial use was underway while the study was being conducted. Yet the authors, twelve of 13 being affiliated or on staff with Crisis Text Line, submitted and pursued its publication anyway.
I’m interested in talking with data ethics advisory board members. My very early inquiry suggests the Data Ethics Committee was not involved in meaningful ways, particularly when it came to Loris.ai, Inc. This is consistent with reporting by Alexandra Levine for Forbes that members of the Data Ethics Committee had not been convened in years. In my review of the Crisis Text Line website history, a data ethics advisory committee has been advertised on their website continuously since 2015. In the Forbes article, Dr. Megan Ranney was quoted as saying she had not been made aware of the Loris.ai, Inc. data-sharing arrangement. While Dr. Ranney was clearly opposed to the Loris.ai, Inc. commercial use, she was a co-author of the 2019 JMIR paper, announcing its publication on Twitter here, while serving on the study’s Data Ethics Committee.
What does it mean when there’s an intentional, knowing misrepresentation of the truth for purposes of taking possession and use of data from persons in crisis, by fiduciaries in a non-profit corporation?
Who did Dr. danah boyd tell about the lack of meaningful consent for research?
What was shared and what was withheld from Data Ethics Committee members?
What was shared and what was withheld from other study authors?
Do the authors stand by the paper’s assertion that Terms of Service provided “easy-to-understand” consent?
If Dr. danah boyd felt there was no meaningful consent for research, how could there have been consent for commercial use?
Bob Filbin, now with Meta, is a key person. Next come the Data Ethics Committee members and all of the authors. These are individuals with first-hand knowledge, and we could benefit from hearing their perspectives.
The deception about consent continues to spread. Several of the research papers listed by Crisis Text Line use conversation transcript data and point to the 2019 JMIR paper as justification that consent was obtained. This spreads an illusion about consent in ways that make it difficult to dispel. According to JMIR’s metrics, the 2019 paper had been viewed 5,300 times as of May 2021. The 2019 paper was selected “as a best paper of 2019” for the 2020 International Medical Informatics Association (IMIA) Yearbook, Special Section on Ethics in Health Informatics.
The names and reputations of the authors provide a legitimizing endorsement of consent by Terms of Service. Their help is needed to restore meaning to the word consent in the crisis context.
The first of these papers was published May 26, 2022 and cites the 2019 paper:
“…Following carefully developed procedures to ensure privacy, appropriate data use, and other protections for texters (Pisani et al., 2019), CTL provided de-identified CC reports, texter surveys, and metadata…gathered from conversations initiated by texters between October 12, 2017, and October 11, 2018. CTL anonymized the data by removing personally identifiable information using natural language processing (NLP).”
The second paper was published May 22, 2022 and does not cite the 2019 JMIR paper directly.
Both papers appear to invoke the form of exemption used by federal agencies (federal exemption) where consent is not required for secondary research on data when the “…identity of the human subjects cannot readily be ascertained…” The federal exemption does not apply to Crisis Text Line, but Institutional Review Boards may grant exemption from consent along similar lines.
“The study’s protocol involving secondary analysis of de-identified data without access to links was considered to meet Federal and University criteria for exemption by the University of Rochester’s Institutional Review Board (IRB) and not to meet the definition of Human Subjects research requiring review by the New York State Psychiatric Institute/Columbia University Department of Psychiatry’s IRB.”
Crisis Text Line, and some researchers, are having it both ways with the millions of crisis conversations in their data store. They claim to have consent to do research, and they claim that consent is not needed to do the research.