Mark Zuckerberg, CEO of Facebook, has come under scrutiny for Facebook’s lack of monitoring of false content.

Social Media Claims Increased Monitoring of Election Content

With the 2020 presidential election just months away, attention has turned to the role of social media in spreading misinformation and propaganda. How far should Facebook, YouTube, and Twitter go to halt false information?

Facebook has given political campaigns approval to hire social media “influencers” to promote their candidates, a practice known as “branded content.” It is an ad that doesn’t look exactly like an ad. Michael Bloomberg’s campaign had Instagram influencers post endorsements. (Instagram is owned by Facebook.) Before Bloomberg’s influencer posts, Facebook had no rule regulating branded content. And with Bloomberg spending $7 million a day on his campaign, the amount of branded content he could buy is staggering.

These branded content posts won’t be found in Facebook’s listing of political ads, which allows journalists and investigative watchdog groups to view political campaign advertising. As a result, branded content may be posted without being monitored.

The more lenient the rules for posting political content, the more open social media is to propaganda from foreign governments and other sources. In 2016, Facebook was infiltrated by Russian accounts sowing discord among users in an attempt to sway the presidential election. Facebook’s response was to require campaigns to provide a US mailing address and disclose how much they spent on each ad.

YouTube announced a ban on content that has been manipulated in “a way that misleads users…and may pose a risk of egregious harm.” However, it will still allow video clips taken out of context. It is also banning false information about the voting process, such as giving viewers an incorrect voting date. In addition, it is banning videos that claim a candidate is ineligible for office based on false information about his or her citizenship, so-called “birther” content. YouTube states that its Intelligence Desk, formed in 2018, pinpoints inappropriate and harmful content and works with Google’s Threat Analysis Group. (YouTube is owned by Google.)

Read more: https://www.forbes.com/sites/stephaniesarkis/2020/02/20/social-media-claims-increased-monitoring-of-election-content/#6022566b710e