Crisis Text Line, Inc. has taken some positive steps, and those steps couldn’t have been easy. I personally appreciate them, and the effort behind them. My thoughts on the substance:
Trust Issues Lingering from 2020
I wonder whether the gestures are designed to be just enough for the spotlight to move on. For example, when Crisis Text Line posted its letter to the community after firing Nancy Lublin in June 2020, it let time pass and then took that very important blog post down.
Cutting the tie and deleting the data is an empty gesture. Loris.ai finished its beta and released its product last year. Crisis Text Line, Inc. and Loris.ai, Inc. already took the desperate conversations of about a million people, and the expertise of over 20,000 trained volunteers, without the informed consent of any of them. All that sorrow and hope fed the algorithms until they were satisfied. Loris.ai can harvest its own data now, if it so chooses, by contracting with its clients for customer conversations and associated data. And then there is the FCC Twitter post.
Updated Terms of Service
Moving key items towards the top is helpful, but doesn’t create consent. It’s a testable hypothesis whether a person using the service viewed the Terms of Service page, and how long they stayed on the page. Do that study for a year and report the findings. Or save the trouble because we all know the result. It’s good to know other means to obtain consent will be tested. Some ideas that occur to me:
- Poor alternative: place check boxes in a form, based on the Terms of Service, with all permissions easy to read and select. This confirms consent, but it remains inappropriate to ask for much at entry.
- Better: ask little at entry. Store data for six months, then delete it. Continue reporting trends using data about the data, such as from volunteer surveys. Ask for any items of informed consent in an optional exit survey. If that is not ideal for researchers who want representative data, then don’t do that study on that data.
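The hypothesis about the Terms of Service page is cheap to test with ordinary web analytics. A minimal sketch, using entirely hypothetical page-view logs (the field names, paths, and numbers are illustrative, not drawn from Crisis Text Line’s systems):

```python
from statistics import median

# Hypothetical session logs; each record is (session_id, page_path, seconds_on_page).
# These values are invented for illustration only.
logs = [
    ("s1", "/", 40), ("s1", "/terms", 3),
    ("s2", "/", 25),
    ("s3", "/", 30), ("s3", "/terms", 90),
    ("s4", "/", 12),
]

sessions = {sid for sid, _, _ in logs}

# Total seconds each session spent on the Terms of Service page.
tos_dwell = {}
for sid, page, secs in logs:
    if page == "/terms":
        tos_dwell[sid] = tos_dwell.get(sid, 0) + secs

viewed_fraction = len(tos_dwell) / len(sessions)
median_dwell = median(tos_dwell.values()) if tos_dwell else 0

print(f"{viewed_fraction:.0%} of sessions opened the ToS page")
print(f"median time on the ToS page: {median_dwell} seconds")
```

Running a measurement like this for a year, then publishing the fraction of texters who opened the page and how long they stayed, would settle the question either way.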
Consent / Research
Quoting from Crisis Text Line board member Dr. danah boyd’s January 31, 2022 personal blog post, made in her personal capacity only:
“Yes, there was a terms of service that could contractually permit such use, but I knew darn straight that no one would read it, and advised everyone involved to proceed as such.”
“… (A ToS is not consent.) …”
I assume that “ToS” refers to the Terms of Service agreement.
After being awarded nearly one million dollars in funding from the Robert Wood Johnson Foundation, Crisis Text Line conducted an 18-month study, during 2016–2017, of ethics, privacy, and security issues for research on the data. In 2019, the study was documented in a paper titled Protecting User Privacy and Rights in Academic Data-sharing Partnerships: Principles from a Pilot Program at Crisis Text Line, published in the Journal of Medical Internet Research (the 2019 JMIR Paper). In a November 24, 2021 letter to JMIR, I raised ethical concerns about consent and conflict of interest. The quotes from Dr. danah boyd above confirm to me that Crisis Text Line, Inc. acted in bad faith with respect to the 2019 JMIR Paper. I believe the following actions are appropriate:
- Retract the 2019 JMIR Paper, which asserts ethical consent based on agreement to the Terms of Service.
- Make a full report of accountability, for all parties to the 2019 JMIR Paper.
- Retract, or formally annotate with ethical concerns, all published research conducted on the conversation transcripts to date. Place a hold on any ongoing research.
- Admit to the lack of consent for the historic dataset.
- Repay the Robert Wood Johnson Foundation its study funding of $962,080 USD out of future profits from Loris.ai, Inc.
Some quick ideas:
- Make more board of directors and advisory board meetings public. [2022/2/5. Ironically, Crisis Text Line removed the individual names of advisory board members after this blog posted. The link has now been updated to a recent, prior archived page.]
- Share all advisory board meeting minutes and final recommendations to the board of directors publicly.
- Share all algorithms used on the data publicly.
- Disclose all shareholders in Loris.ai, Inc. who are or have been affiliated with Crisis Text Line, Inc., along with the time period those shares were held.
- Change the donation policy so that no anonymous donations greater than $10,000 will be accepted.
- Make public, with appropriate redaction, all algorithm and data audit reports on ethics, bias, privacy, and security.
I feel that we don’t hear enough from individual board members and advisory board members. With encouragement for increased transparency, these individual experts could feel empowered to share their views, even if contrary to final board action.
Alternatives to AI
Start an international school teaching the techniques of active listening, validation of feelings, and non-judgment, along with the arc of the conversation. Teach the emotional landscape and human condition from the growing knowledge base, with attention to the voices of those who have lived through, and are living through, the most difficult ways of existing. Use human teachers to teach other humans. Adapt the curriculum to any human endeavor: crisis response, schools, corporate boardrooms, customer service, law enforcement, social work, health care, data science, agriculture, civil service. Don’t stop until all humans have been trained.
There are individual volunteers who have helped thousands. Find teachers among them. (Am I wrong to guess that existing algorithms have already found “effective” volunteers and learned to add weight to their words and phrases in machine learning? Whether that speculation is true or not, experienced volunteers can always be asked directly for insights.)
The circumstances here are different from those at Reddit, where Jack Hanlon can share in a podcast interview that “Google, Facebook, Microsoft, and Open AI all used Reddit’s dataset to train their premier text (inaudible) models.” (Within the excerpt from 12:22–13:40.) [2022/2/5. Jack Hanlon was listed as a member of the Crisis Text Line Data, Ethics, and Research Advisory Board. His affiliation was given as Global Head of Data at Reddit, Inc.]
Recognize that trained volunteers who are happy, because they are freed from ethical concerns about the data they are creating, are the most cost-effective and on-mission asset the crisis service has. Elevate them. Recruit more of them to reduce waits at peak times. It was a brilliant idea to cultivate so many willing volunteers within Ireland, reaching from their day into the night within the USA. Cheers to Ireland!
The crisis response community already knows what works; for example, a quick history is highlighted in this one-minute YouTube video. The ethics here are simple: algorithms aren’t needed to extend the warmth of care or to guide people towards a calmer emotional state. Trust the techniques already in use. Try using an entry survey for triage to expedite individuals when wait times grow. Use known statistical risk factors. Trust people.
(Aside: Crisis Text Line’s reported 89% success rate for predicting high and medium risk levels from within the queue with its triage algorithm seems impressive. Because the model was fine-tuned to detect the difference between high and medium risk, I would like to see the high-risk success rate reported as a separate number. The phrase “I want to die” can mean many different things. It may be worth considering that, as an alternative, self-reporting of urgency might be reasonably accurate and respectful, and could eliminate the intrusion of algorithms.)
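To make the reporting point concrete, here is a minimal sketch, with entirely invented counts, of why a single blended success figure can hide the number that matters most. Nothing below reflects Crisis Text Line’s actual data or model:

```python
# Illustrative confusion counts only; these are NOT Crisis Text Line's numbers.
# Rows are the true risk level; columns are the predicted risk level.
confusion = {
    "high":   {"high": 40, "medium": 15, "low": 5},
    "medium": {"high": 10, "medium": 170, "low": 20},
    "low":    {"high": 2,  "medium": 8,   "low": 230},
}

def recall(level):
    """Fraction of truly <level>-risk cases the model identified as <level>."""
    row = confusion[level]
    return row[level] / sum(row.values())

# A blended "high and medium" success rate averages these together, so a
# strong medium-risk recall can mask a much weaker high-risk recall.
for level in ("high", "medium", "low"):
    print(f"{level:>6} recall: {recall(level):.2f}")
```

With these made-up counts, medium-risk recall is 0.85 while high-risk recall is only about 0.67; a combined figure would sit between them and never reveal the gap. That is the case for publishing the high-risk rate as its own number.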
I remain concerned about hurtful and inappropriate behavior by the organization. I learned about the power and control wheel from my training and experience as a volunteer. Then I felt it used against me. I was accused of having a reckless disregard for the truth. I was cautioned about the potential unintended consequences to persons in crisis, to staff, to donor income.
On August 23, 2021 I received this email from Dr. danah boyd from a personal email account. She was replying to the open letter transmitting my opinion paper. I was upset by her reply as fully explained in this response email. Even with my email request for a boundary with the organization, Mr. Rodriguez told POLITICO reporter Alexandra Levine (now with Forbes) that I was terminated for a Code of Conduct violation. I now give Crisis Text Line my permission to share publicly all internal communications about me, from me, to me. If I’m wrong, I’ll try my best to own it.
This type of behavior needs to stop. It’s mind-bending to be on the receiving end of it, from a crisis response organization. I tell the story not for me but to give testimony that it happened. I want to hold up and acknowledge the pain that others have experienced in the past and prevent any future manipulative, deceptive behavior. Please treat all volunteers according to the wonderful training they are provided. Then show respect for all those using the service, and the volunteers who help them, by affording all parties the right to consent.
I do appreciate these statements from Dr. danah boyd’s personal blog post:
“We strived to govern ethically, but that doesn’t mean others would see our decisions as such. We also make decisions that do not pan out as expected, requiring us to own our mistakes even as we change course.”
We’re all in this together after all. I wish no harm. I believe the organization has more unrealized potential than it realizes. Thanks to everyone who has been sending me information, messages, encouragement. I’m tired. I need to put this in the community’s good care for a time while I take a break. I remain hopeful.
My work on these issues is dedicated especially to those 200 or so beautiful people that I had the privilege to hold space with, during their crisis moments. Crisis Text Line made this possible, and I thank the organization with no reservations at all. You gave me the chance to tell a very young person, “I’m proud of you,” and that child said, “Thank you, no one ever tells me that.”