AI-Based Suicide Prevention
With increasing evidence that social media can influence suicide-related behavior, Facebook is stepping up its game by using artificial intelligence to save lives. An algorithm that finds disturbing and worrisome posts can shave critical seconds off the time it takes to contact help, and the social media giant is becoming proactive in its detection.
The fusion of technology and social interaction is still relatively new: Facebook was founded in 2004 but didn’t really catch fire until around 2010. The popularity of social media sites has grown rapidly, and with it came a whole new type of bullying: cyber-harassment.
People seem bolder when they aren’t face-to-face with their victims and can instead hide behind the online personas of their digital devices. As of 2015, the suicide rate among teenage girls ages 15 to 19 had hit a 40-year high, according to the Centers for Disease Control and Prevention.
Between 2007 and 2015, the rates doubled among girls and rose by more than 30 percent among teen boys.
And just this past week, researchers in the U.K. published similar findings in a study on self-harm, which showed a dramatic increase among adolescent girls: self-harm rose 68 percent in girls ages 13 to 16 from 2011 to 2014, and girls were far more likely to report self-harm than boys (37.4 per 10,000 girls vs. 12.3 per 10,000 boys).
See the original post: “Is Social Media Contributing to Rising Teen Suicide Rate?”
Not only has cyberbullying contributed to rising suicide rates, but some desperate individuals have used sites like Facebook to live-stream their attempts to kill themselves. With 2 billion users, it is virtually impossible to monitor all streaming video, but this new method of prevention is a step in the right direction.
In 2016, Facebook rolled out tools to help prevent suicides by allowing other users to flag a message that could raise concern about suicide or self-harm. The recent update using AI can reportedly find these same posts more quickly and cut down the time it takes to provide resources.
Facebook will also have more humans looking at posts flagged by its algorithms. According to Engadget, the new AI tool has already “pinged over 100 first responders about potentially fatal posts, in addition to those that were reported by someone’s friends and family.”
Comments such as “Can I help?” and “Are you okay?” are an obvious indication that someone is in trouble, and these are now flagged immediately by the new system. Earlier today, CEO Mark Zuckerberg announced the rollout in a post on his Facebook page.
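To get a feel for the kind of signal involved, here is a deliberately naive sketch of flagging based on concerned comments. Everything in it is an assumption for illustration: the phrase list, the threshold, and the function name are made up, and Facebook's production system is a machine-learned classifier, not a keyword match.

```python
# Toy illustration only: flag a post for human review when friends' comments
# contain phrases that typically signal concern. The phrase list and the
# threshold below are assumptions, not Facebook's actual criteria.
CONCERN_PHRASES = ["can i help", "are you okay", "are you ok", "please talk to me"]

def flag_for_review(comments, threshold=1):
    """Return True if at least `threshold` comments contain a concern phrase."""
    hits = sum(
        any(phrase in comment.lower() for phrase in CONCERN_PHRASES)
        for comment in comments
    )
    return hits >= threshold

print(flag_for_review(["Are you okay?", "Nice photo!"]))  # True
print(flag_for_review(["Great game last night"]))         # False
```

A real classifier would weigh many more signals (the post's own wording, time of day, reply patterns) rather than a fixed phrase list, which is precisely why pairing the algorithm with human reviewers, as the article describes, matters.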
If you read the comments following the above post, there is a discussion about using artificial intelligence not only to prevent suicide but also to flag conversations about potential terrorist attacks. However, some people feel that this is rather invasive of our privacy and question whether it is necessary.
The new tool is being rolled out worldwide as of today, though it will not be used in E.U. countries, where privacy laws are stricter.
With all of the concern that AI could eventually cause the end of mankind, it is encouraging to see it being used in a positive way. These tools represent a big step forward in getting help to individuals who are in a dark place, and they may even prevent someone from hurting others.