How Facebook Tries to Prevent Suicide

Healthcare professionals and government officials have been confronting suicide for years by implementing various self-harm and suicide prevention programs. Now, social-media companies like Facebook are taking part in the fight against suicide—by using artificial intelligence (AI).

Companies that run social-media platforms are increasingly concerned about signs of self-harm among their users. For example, this year Facebook invested in a form of AI that scans the text of posts for signals of emotional distress and potential self-harm. The software also looks for comments from friends such as “Are you OK?” and “Can I help?”, which can indicate that someone is having suicidal thoughts.
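To picture what “reading the text in posts” might look like, here is a toy sketch of this kind of screening. It is purely illustrative: the phrase lists, function name, and threshold below are all invented for this example, and Facebook’s real system relies on machine-learned models trained on much richer signals, not a fixed keyword list.

```python
# Toy illustration only: a simple keyword screen, NOT Facebook's
# proprietary model, which is machine-learned from much richer signals.
# All phrase lists, names, and thresholds here are invented.

DISTRESS_PHRASES = ["no reason to live", "i want to disappear", "can't go on"]
CONCERNED_REPLIES = ["are you ok", "can i help"]

def flag_for_review(post_text: str, comments: list[str]) -> bool:
    """Return True if a post should be routed to a human reviewer."""
    text = post_text.lower()
    post_hit = any(phrase in text for phrase in DISTRESS_PHRASES)
    # Worried replies from friends act as a second, independent signal.
    reply_hits = sum(
        any(cue in c.lower() for cue in CONCERNED_REPLIES) for c in comments
    )
    return post_hit or reply_hits >= 2

post = "Lately I feel like there's no reason to live."
comments = ["Are you OK?", "Can I help? Please message me."]
print(flag_for_review(post, comments))  # True -> route to trained reviewers
```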

When the software finds posts like these, it sends them to a team of trained reviewers who can work with first responders if necessary, or else contact users via Facebook Messenger with links to verified crisis-support resources.
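The review step itself is done by people, not software, but the decision structure described above can be pictured with a small sketch. Everything below (the severity levels, function name, and action strings) is assumed for illustration; in reality trained human reviewers make this judgment.

```python
# Illustrative sketch of the triage described above. Names are invented,
# and in practice trained human reviewers make this call, not code.

from enum import Enum, auto

class Severity(Enum):
    IMMINENT = auto()    # reviewer judges the user to be in immediate danger
    CONCERNING = auto()  # distress signals, but no immediate danger

def route(severity: Severity) -> str:
    """Map a reviewer's judgment to the outreach described in the post."""
    if severity is Severity.IMMINENT:
        return "coordinate with first responders"
    return "send crisis-support links via Facebook Messenger"

print(route(Severity.CONCERNING))  # -> send crisis-support links ...
```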

Facebook reported that in the month before Thanksgiving 2017, the trained team worked with first responders on more than 100 possible crisis situations that the AI software identified. (Users can still report posts that suggest someone is in crisis, and those reports go through the same human review as the posts the AI flags.)

Using computer science to identify people at risk of harming themselves is a novel approach to mental health care. The hope is that this online strategy may reach people who might be afraid to seek treatment “in real life.”

Technology can be a double-edged sword when it comes to our wellbeing. It’s important every so often to take a step back and ask: does our use of social media make us feel better? When we log onto apps, they can help us connect with loved ones for support, but they can also become an outlet for negative interactions and thoughts.

Challenge yourself to use social media to support yourself and others! Keep your wellness in mind when using Instagram, Snapchat, and Facebook.

Read more about Facebook’s new artificial intelligence project here.

What do you think about Facebook using AI to monitor your risk of self-harm? Do you think this will make for a safer social-media environment in the future? We would love to hear from you … please share your thoughts below!

Moderator

Hi! The moderator is a research team member with a background in behavioral health. We're here to help answer your questions and stimulate some great conversation! We don't provide therapy and are not available 24/7, so if you are in crisis, please go to our crisis page: https://sova.pitt.edu/i-need-help-now We look forward to talking to you!
