Crisis Text Line, a global not-for-profit organization that provides a free, 24/7, confidential crisis-intervention service via SMS, has allegedly sliced, repackaged, and shared that conversation data to create and market customer service software.

The non-profit collects data from its online text conversations and uses big data and artificial intelligence (AI) to help people cope with traumas such as self-harm, emotional abuse, and thoughts of suicide, according to a report by Politico, a political journalism company based in the United States with international editions.

According to the report, Crisis Text Line has allegedly shared data with its spin-off Loris.AI, a customer service software platform that uses machine learning to help companies hold more empathetic customer conversations. Client companies can install Loris.AI as an app within existing customer service platforms such as Zendesk and Salesforce.

Loris.AI is headquartered in New York, U.S. and also has an office in Tel Aviv, Israel. 

The report further said that Loris.AI has pledged to share some of its revenue with Crisis Text Line. Moreover, Crisis Text Line holds an ownership stake in Loris.AI, and the two entities shared the same CEO for at least a year and a half. The two describe their relationship as a model for how a commercial enterprise can support a charitable mission.

Crisis Text Line, however, says that any data it shares with Loris.AI is wholly “anonymized,” stripped of any details that could be used to identify the people who contacted the helpline in distress. Both entities say their goal is to improve the world — in Loris’ case, by making “customer support more human, empathetic, and scalable.”
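Crisis Text Line has not published the details of its anonymization pipeline, but the kind of scrubbing it describes — removing details that could identify an individual — typically starts with replacing obvious identifiers in the text. The sketch below is a hypothetical, minimal illustration (the function name and placeholder tokens are our own), not the organization's actual method; real de-identification of crisis conversations would require far more than pattern matching, since free text can identify people in subtle, indirect ways.

```python
import re

def scrub_pii(text: str) -> str:
    """Illustrative PII scrubber: masks email addresses and
    North-American-style phone numbers with placeholder tokens.
    This is a sketch, not a complete de-identification method."""
    # Mask email addresses first, so their digits are untouched below.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Mask phone numbers like 555-123-4567 or (555) 123-4567.
    text = re.sub(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}", "[PHONE]", text)
    return text

print(scrub_pii("Call me at 555-123-4567 or write jane.doe@example.com"))
```

Even with every direct identifier masked, researchers have repeatedly shown that "anonymized" conversational data can sometimes be re-identified from context, which is part of why the data-sharing drew criticism.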

Notably, Crisis Text Line has received financial backing from some of the biggest names in tech and venture capital, including Reid Hoffman, Melinda Gates, The Ballmer Group, and Omidyar Network.

Crisis Text Line's services are available 24 hours a day, every day, throughout the United States, Canada, the UK, and Ireland.

Further, in a statement on its website, Crisis Text Line said: "During these past days, we have listened closely to our community’s concerns...We hear you. Crisis Text Line has had an open and public relationship with Loris AI. We understand that you don’t want Crisis Text Line to share any data with Loris, even though the data is handled securely, anonymized and scrubbed of personally identifiable information.” The statement added that Loris will delete any data it has received from Crisis Text Line.

This incident also raises questions about the state of 'AI ethics' in business today. Ideally, organizations would operate under ethical AI standards: a system of moral principles and techniques intended to inform the development and responsible use of artificial intelligence technology.

