Facebook includes Instagram in its transparency report for the first time
But information on fake accounts and hate speech is missing
Facebook is including Instagram in its transparency report for the first time, releasing data on how the company moderates content related to child exploitation, self-harm, terrorist propaganda, and drug and firearm sales.
Notably absent from the report is information on how fake accounts, hate speech, and violent content are regulated on the photo-sharing app.
The information is part of Facebook’s quarterly “community standards” transparency report, which tracks the company’s ongoing efforts to moderate content on the platform.
The last report, released in May, showed a sharp increase in the number of abusive accounts on Facebook and a decline in the amount of violent content the company detected and removed.
The addition of Instagram to the report is critical as misinformation about the 2020 election continues to spread on social media.
Last month, the left-leaning human rights group Avaaz reported that stories containing misinformation were viewed roughly 158.9 million times on the social network, continuing to spread even after they were proven to be false.
The finding is worrisome given that the company has invested heavily in AI systems to detect misleading information since the 2016 election.
It also brings into sharper focus the specific danger that misinformation on Instagram presents. In 2016, Russian trolls posted more than 3,000 ads across both Facebook and Instagram.
As memes and photos become an even larger component of election interference efforts, Instagram has become a prime target ahead of the 2020 election. The platform also has fewer resources to fight misinformation than Facebook, making the ongoing spread of fake news far more likely.