Facebook has announced new “proactive detection” software that will scan the feeds of its nearly three billion users for patterns and language associated with suicidal thoughts. The new artificial intelligence technology will send mental health resources to users whose content shows signs of self-harm, enabling at-risk users to reach first responders before any kind of tragedy occurs. The technology aims to shrink human response time by, in effect, attempting to predict the behavior before it happens.
Data Mining for Good
This marks a bold leap forward, but it’s not the social media giant’s first foray into potentially testy territory. Facebook previously tested an early version of its AI detection software by implementing it into its Messenger feature in a more peer-to-peer manner. Now the site is attempting a deeper analysis of myriad content across its global reach, which may prove difficult as it expands internationally. The feature is already facing regional limitations; the EU has strict privacy laws, with the General Data Protection Regulation preventing the mining of sensitive user information for corporate gain (however well-meaning the intention).
Facebook says it will use the software to prioritize what it qualifies as “risky or urgent” user behavior in order to allow site moderators to more quickly identify the posts and address them. The AI is designed to make inferences from word patterns and imagery that have been manually reported, and to scan for comments that use specific turns of phrase, such as “are you ok?” and “do you need help?”, implying that the technology still depends on the user’s inner network to alert the system.
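The comment-scanning signal described above can be sketched as a simple pattern match. This is a toy illustration only; the phrase list and function below are hypothetical stand-ins, not Facebook’s actual (and unpublished) classifier:

```python
import re

# Hypothetical concerned-friend phrases of the kind the article
# describes; Facebook's real signal set is not public.
CONCERN_PHRASES = [
    r"\bare you ok\b",
    r"\bdo you need help\b",
]

def flag_post(post_text: str, comments: list[str]) -> bool:
    """Flag a post as potentially urgent when friends' comments
    contain concerned phrases -- a toy stand-in for the
    comment-pattern signal the article describes."""
    for comment in comments:
        lowered = comment.lower()
        if any(re.search(p, lowered) for p in CONCERN_PHRASES):
            return True
    return False

# A flagged post would then be escalated to human moderators.
print(flag_post("feeling really low tonight",
                ["Are you OK? Message me anytime."]))  # True
print(flag_post("great day at the beach",
                ["nice pic!"]))  # False
```

Even this crude version shows why the system still leans on a user’s network: with no concerned comments, nothing trips the filter.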
The software will instantly bring up resources in the user’s local language, as well as localized first responder contact information. The company’s internal infrastructure may in fact change to better serve the new feature, dedicating more moderators specifically to suicide prevention and training them to handle such cases. The site has partnered with programs like Save.org, Forefront, and the National Suicide Prevention Lifeline (which has, in the past, collaborated with local media and broadcast organizations for broader reach).
Still, if this has you screaming Minority Report, you’re not alone. Social media’s gradual burrowing into the sediment of our everyday lives has users reckoning with a new kind of reality, questioning the impact of services they increasingly can’t stop themselves from using. Developing intuitive technology that alerts the powers that be about sensitive content has some scratching their heads. What’s to stop the tech giant from using the information to deduce petty crimes committed by its users, or silence political dissent? Certainly a notable concern if the Zuckerberg 2020 mumbles turn out to be true.
Regardless of the oft-hyperbolic concerns regarding social media’s near-dystopian influence on privacy, Facebook’s announcement comes at a time when the company finds itself in hot water over issues of mental health. Suicide rates for teens rose significantly between 2010 and 2015 after nearly two decades of decline. And while much of the reasoning remains inconclusive, many point to social media as a notable variable over the course of those five years.
According to Common Sense Media, teens average about 6.5 hours of screen time a day. Cyberbullying is on the rise, with 43% of children reporting that they have been bullied online. Even if Facebook is overstepping its boundaries to a degree, there is no denying that it and sites like it play a large part in the social lives and mental health of young people. The suicide prevention AI might reek of an Orwellian nightmare on paper, but in the cloud it might be the only thing standing between the youth and their worst impulses, proving that sometimes it takes a monster to defeat one.