Dubai - Artificial Intelligence Journalism
Can artificial intelligence technologies reliably detect fake content? And what risks does relying on them carry?
As the coronavirus pandemic swept the world, social media giants like Facebook, Google and Twitter did what many other companies did: they turned to algorithms to monitor and control fake content, according to Politico.
When the social media giants announced the changes, they acknowledged that the algorithms might struggle to distinguish legitimate from illegitimate content. And indeed, the effects were almost immediate.
How do artificial intelligence technologies detect fake content?
Figures for 2020
Facebook and Google roughly doubled the amount of potentially harmful material they removed in the second quarter of 2020 compared with the three months through March, according to the companies’ most recent transparency reports. Twitter has yet to provide figures for 2020.
While far more content was flagged and removed for allegedly breaking the companies’ rules on what could be posted online, in some areas dangerous and possibly illegal material was more likely to slip past the machines.
In other high-profile areas, like child exploitation and self-harm, the number of removals fell by at least 40 percent in the second quarter of 2020 because of a lack of humans to make the tough calls about what broke the platforms’ rules, according to Facebook’s transparency report.
Social media content moderators review thousands of explicit posts each day and are given little mental-health support to cope with the graphic imagery they have to police. Their decisions are then fed into the companies’ machine-learning tools, which require large datasets of removal decisions to learn from, according to Tarleton Gillespie, who works at Microsoft Research, an independent research unit of the tech company.
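The pipeline Gillespie describes — human removal decisions becoming training data for an automated classifier — can be illustrated with a minimal sketch. The example below is a toy naive Bayes text classifier written for this article, not any platform's actual system; the sample posts and labels are invented for illustration.

```python
from collections import Counter, defaultdict
import math

def train(examples):
    """Learn word statistics from (post_text, label) pairs,
    standing in for a log of human moderator decisions."""
    word_counts = defaultdict(Counter)  # label -> word frequencies
    label_counts = Counter()            # label -> number of examples
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Score each label with naive Bayes (Laplace smoothing)
    and return the most likely one for a new post."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)  # prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical moderator decisions used as training data.
decisions = [
    ("miracle cure guaranteed buy now", "remove"),
    ("cure guaranteed miracle pills", "remove"),
    ("local weather report sunny", "keep"),
    ("weather forecast rain tomorrow", "keep"),
]
wc, lc = train(decisions)
print(classify("miracle pills guaranteed", wc, lc))  # -> remove
```

Real moderation systems use far larger models and datasets, but the dependency is the same: without a steady stream of human judgments, the machine has nothing to learn from — which is why the staffing gaps described above degraded automated enforcement as well.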