Facebook, Google, Yahoo, and Instagram are using AI to tackle online abuse and self-harm.

Mark Zuckerberg’s 5,500-word manifesto for Facebook’s future had a heavy focus on using artificial intelligence to automate the social network’s functions.

Now, just weeks after the essay by the company founder was posted, Facebook has started to use its AI to identify people who may be considering suicide.

By looking at users’ posts and comments, the network’s AI will flag potential issues to its human review team. The team will then suggest ways the user in question can seek help.

In particular, the AI uses pattern-recognition algorithms that can identify text suggesting a person is struggling. These algorithms were trained on posts previously identified as containing the language of people at risk. The trial of the system is starting in the US and is part of a wider update to Facebook’s self-harm tools. If successful, the system is likely to be adopted in other countries.
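
Facebook has not published details of its model, but the approach it describes, pattern recognition trained on posts previously labelled as at-risk, resembles a standard text classifier. Below is a minimal, purely illustrative sketch in Python using scikit-learn; the training posts, labels and threshold are made up for illustration and are not Facebook's.

```python
# Illustrative sketch only: a text classifier trained on posts labelled
# as at-risk (1) or not (0), using TF-IDF features and logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled training posts.
train_posts = [
    "I can't see a way out of this anymore",
    "nobody would notice if I was gone",
    "had a great time at the beach today",
    "anyone want to grab lunch tomorrow?",
]
train_labels = [1, 1, 0, 0]

# Convert text to TF-IDF vectors, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_posts, train_labels)

# Score a new post; anything above a chosen threshold would be queued
# for human review rather than acted on automatically.
new_post = ["I just feel like giving up"]
risk_score = model.predict_proba(new_post)[0][1]
if risk_score > 0.5:
    print(f"Flag for human review (score={risk_score:.2f})")
```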

“This artificial intelligence approach will make the option to report a post about ‘suicide or self injury’ more prominent for potentially concerning posts,” Facebook said in a statement. The company added it has been closely working with suicide prevention experts to create the system and calls it a “streamlined reporting process”.

In February 2016, Zuckerberg’s hugely successful platform introduced its first suicide prevention tools in the UK. The tools allow users to notify Facebook if a friend appears to be in serious distress. Those highlighted as being at risk are then provided with information about where to seek help and how to deal with suicidal thoughts or feelings.

As well as testing the AI-led method, Facebook is also adding suicide prevention tools to its Live streaming video platform. Those watching live videos can report behaviour to Facebook, and a person sharing a video will see support resources on their screen. It comes in response to individuals killing themselves while others watch.

Facebook has also added the ability for those at risk to contact the US Crisis Text Line, the National Eating Disorder Association or the National Suicide Prevention Lifeline and talk to their staff in real time using Facebook Messenger.

Elsewhere, the company has recently automated its Safety Check tool, which lets individuals check in to show they are safe after a natural disaster or terrorist attack; AI scans posts to detect when an incident may have occurred.


But Facebook is not the only firm using AI to try to solve online issues such as abuse and self-harm. At the end of February, Google opened its troll-hunting AI system to anyone. Developed by Google Jigsaw, the tool, called Perspective, analyses online comments and scores how likely they are to be perceived as toxic.

The system can, for example, assign a toxicity score to the phrase “nasty woman” depending on how it is used. Anyone can help to test the system on Perspective’s website. Yahoo has similarly developed an anti-abuse AI.
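
Perspective is also accessible to developers through a public API. Below is a rough sketch of how a toxicity query might look in Python; the API key is a placeholder, and the request and response fields follow Perspective’s publicly documented v1alpha1 endpoint.

```python
# Sketch of querying Google's Perspective API for a toxicity score.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; obtained from the Perspective project
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "nasty woman"},
    "requestedAttributes": {"TOXICITY": {}},
}

response = requests.post(URL, json=payload)
response.raise_for_status()

# The summary score is between 0 and 1: higher means the model thinks
# the comment is more likely to be perceived as toxic.
score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity score: {score:.2f}")
```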

Elsewhere, Instagram, which is owned by Facebook, recently introduced tools to offer support to those with mental health issues. In October 2016, the firm belatedly launched an anonymous reporting tool that lets users flag posts of concern.

The tool redirects those using banned hashtags – such as #thinspo – to Instagram’s support system. If someone anonymously reports a post to Instagram, a message will appear to the person who posted it. It says: “Someone saw one of your posts and thinks you might be going through a difficult time. If you need support, we’d like to help.”
