Released at the end of October, the Samaritans ‘suicide watch’ app was suspended after serious concerns were raised. Lucy Maguire looks into whether Radar was well meant, or simply naive.
After just a week, the suicide prevention charity Samaritans have dropped their new app Radar. The app was intended to notify Twitter users when people they were following could be in distress, using an algorithm that searched tweets for certain key phrases, such as “depressed” and “need someone to talk to”. Users would then receive a private alert via email, with guidance on how to reach out to the presumably troubled Tweeter. However, as of 7 November the app has been suspended, due to what the charity referred to as “serious concerns raised by some people with mental health difficulties”.
Clearly the issue of how best to support those suffering from mental health problems is a delicate one, and all those involved deserve to be treated with respect. Samaritans have been offering their services to those in any sort of emotional distress for over sixty years, and have successfully helped many people with their confidential, judgement-free approach. With this service mostly provided by telephone, it seems natural for the charity to have attempted an expansion onto social media platforms. Research has, after all, identified the potential of social media output to serve as an early warning of suicidal behaviour. With Twitter a popular, and mostly public, social media site, all signs pointed to success. However, especially coming from such a reputable organisation, the app’s actual design is remarkably naïve.
The biggest surprise, and perhaps the biggest mistake when it comes to Radar, is its emphasis on analysis and action. The charity promise on their website that they “won’t make decisions for you”, but their app seems to do the opposite.
Aside from possible issues of privacy in the analysis of a personal tweet, calculating an individual’s emotional state by algorithm is undoubtedly a diagnostic exercise. This would be true of even the most sophisticated analysis, but the idea that it can be achieved by spotting ‘key words’ in updates of 140 characters or fewer is not only implausible, but also reductive to those who really are suffering from complex feelings of emotional distress.
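To see how crude this kind of detection is, consider a toy sketch of key-phrase matching. This is not Samaritans’ actual code, and the phrase list is an assumption based on the examples reported at the time; it simply illustrates how easily substring matching misfires in both directions.

```python
# A toy sketch of key-phrase matching, NOT Samaritans' actual algorithm.
# KEY_PHRASES is an illustrative assumption based on reported examples.
KEY_PHRASES = ["depressed", "need someone to talk to"]

def flags_distress(tweet: str) -> bool:
    """Return True if any key phrase appears anywhere in the tweet text."""
    text = tweet.lower()
    return any(phrase in text for phrase in KEY_PHRASES)

# Genuine distress is caught...
print(flags_distress("I feel so depressed today"))               # True
# ...but so are negations and throwaway jokes:
print(flags_distress("I'm honestly not depressed at all"))       # True
# ...while real distress phrased differently slips straight through:
print(flags_distress("can't see the point of anything any more"))  # False
```

Any approach of this shape trades heavily on the exact wording people use, which is precisely what emotional distress tends not to standardise.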
Asking users then to ‘reach out’ with words of support on the basis of this assumption also seems heavy-handed, and uncharacteristic of a charity that advertises its services as free from judgement or pressure. Without taking into account your relationship to the person you follow, the app could inadvertently inspire feelings of guilt, or encourage inappropriate action. Most worryingly, Radar supplied this information assuming your intentions were good – an alert that tells you when someone you follow may be at their most vulnerable has the potential for massively harmful misuse, as a petition for the app’s suspension made clear.
Viewed with a little empathy and reflection, Radar reads as a well-intentioned misstep by a charity that won’t make the same mistake twice. However, with the same simplistic logic it employs turned back on itself, the app reads as offensively reductive and potentially damaging, and all from an organisation that should have known better.
What do you think about Radar? Was it well meaning or completely misjudged? Let us know in the comments or via email at firstname.lastname@example.org. For everything else games and tech, check us out on Facebook and Twitter.