Facebook tests AI to tackle offensive Facebook Live videos
Facebook has taken a small but significant step towards tackling its persistent fake news problem and the moderation of offensive content by testing artificial intelligence that automatically flags offensive Facebook Live videos.
Joaquin Candela, Facebook’s Director of Applied Learning, told Reuters that the algorithm currently undergoing testing detects nudity, violence and other unsuitable content that does not adhere to Facebook’s policies.
Candela also admitted that the algorithm would not only have to avoid mistakes that could bring Facebook under scrutiny and criticism, but also be fast enough to recognize and flag offensive Live videos quickly, so that they can be reviewed and removed if necessary.
The company has struggled to contain fake news running amok on its platform ever since it laid off its team of human editors over allegations of bias and switched to artificial intelligence to curate Trending Topics. Underperforming would be an understatement for the Facebook AI, which has stumbled ever since it was deployed, promoting several fake news articles to the top of Trending Topics.
Facebook had also tried to fix the persistent problem by barring fake news websites from its advertising network, but that did not help much.
Facebook has stumbled in its moderation duties as well. The social media giant came under fire for censoring one of the most iconic photographs of the Vietnam War, citing nudity, an episode that once again called into question Facebook’s decision to trust artificial intelligence with a job that often requires weighing the context of content before censoring it.
The Pulitzer Prize-winning photograph, popularly known as “Napalm Girl”, which Facebook blocked, can be viewed here.
The recently concluded United States presidential election saw a surge of fake news circulating on the platform, including a story claiming the Pope had endorsed the Republican candidate and now President-elect, Donald Trump, which some believe influenced voters and tilted the election in Trump’s favour.
Facebook does not appear to be abandoning its plans to rely on artificial intelligence, despite its repeated glitches: promoting fake news, censoring legitimate content, and struggling to contain hate mongering and bullying.
In the past, Facebook relied entirely on its users to report offensive content, which was then reviewed and taken down, an approach the company seems to have deprioritized recently. Users can still report offensive content, but what counts as offensive is highly subjective, varying with each individual’s perceptions.
Facebook has spent the past few months trying to address the issue of fake news on its platform, while continually reiterating that it is a tech company and does not decide what posts its users see.
Facebook should be very careful before putting the AI to use, as another mistake could bring a wave of negative publicity and criticism to its doorstep.