
Facebook to suspend all political ads after the election

It is safe to say that this is an election season unlike any other here in the United States, with tensions running exceptionally high. Facebook, which reaches the overwhelming majority of the US population through its family of apps, has introduced a new slew of measures to mitigate the damage that misinformation on its platforms can trigger. Some of these steps are sound ideas, but sadly, two of its recent moves once again amount to shutting the stable door after the horse has bolted.




In a corporate blog post yesterday, Facebook laid out what its Election Day operations on both Facebook and Instagram will look like. To prevent any candidate from claiming victory before a race is actually called, the company has vowed for months that it will run real-time fact-checking on and after November 3, and the post illustrated what that process will look like.

In that post, Facebook also said that while advertisements are "an important way to express voice," it aims to impose a temporary moratorium on "all social issue, electoral, or political ads in the US" after the polls close on November 3, in order to "reduce opportunities for confusion or abuse." That would place Facebook, at least for the moment, in line with Twitter's position on political ads.


Too late?


However, confusion and abuse are already rampant. For the past year, Facebook has taken a notoriously hands-off approach to fact-checking election ads. Occasionally, typically after media pressure, the site has intervened when such ads ran afoul of one of its other policies. For example, Facebook pulled a line of Trump campaign ads in June for using Nazi imagery, and it pulled another batch of Trump campaign advertisements in September for targeting refugees. Other ads, however, have been left alone, including deceptively manipulated photos and videos of Democratic presidential nominee Joe Biden.

Facebook also now bans content calling for "militarized" poll-watching and voter intimidation, as the Trump campaign ramps up rhetoric calling for supporters "to go into the polls and watch very carefully." However, this policy only extends to new content published going forward, not to content that has already gone viral.

Such content, sadly, has already generated millions of views. In September, Donald Trump Jr. posted a Facebook video calling for an "Army for Trump" to guard against the Democratic Party's alleged widespread election-fraud efforts. (There is no evidence of any such effort; while cases of election fraud have been reported in recent years, it is extremely rare.)

"We need every able-bodied man and woman to join Army for Trump's election security operation," Trump Jr. says in the video, encouraging viewers to "enlist now." Although that video very clearly breaks the new rules, however, it's not going anywhere.

"When we change our policies, we generally do not apply them retroactively," Facebook executive Monika Bickert said. "Under the new policy, if that video were to be posted again, we would indeed be removing it."

Unfortunately, fears about a surge in violence leading up to the election are not unfounded. Just this morning, for instance, the FBI announced that it had uncovered a plot by five Michigan men and one Delaware man to kidnap Gov. Gretchen Whitmer of Michigan and "overthrow" the state government, a plot that was partly coordinated on Facebook and other social media platforms.


Coordinated response


It seems obvious that the threat of coordinated misinformation cannot be adequately handled by social media sites working alone. Facebook today said as much, proposing a series of principles for new rules or legislation that would apply both to it and to other social media platforms.

"If malicious actors coordinate off our platforms, we may not identify that collaboration," Facebook head of security policy Nathaniel Gleicher wrote. "If they run campaigns that rely on independent websites and target journalists and traditional media, we and other technology companies will be limited to taking action on the components of such campaigns we see on our platforms. We know malicious actors are in fact doing both of these things... There is a clear need for a strong collective response that imposes a higher cost on people behind influence operations in addition to making these deceptive campaigns less effective."

For instance, Facebook today said it removed 200 fake accounts linked to a marketing company working on behalf of two conservative political action groups in the United States, Turning Point USA and Inclusive Conservation Group. The marketing firm is now banned from Facebook, but much of its employees' coordination apparently took place using resources other than Facebook, and it could still be active on other sites through its networks of fake accounts and misinformation.

"Regulations can be powerful tools in our collective response to these deceptive campaigns" Facebook said, proposing seven main principles. Transparency and cooperation are foremost among them: businesses should agree to exchange threat signals across "platforms, civil society , and government, while preserving the privacy of innocent users who might be caught up in these campaigns," suggested Facebook.

But Facebook still wants help from the law in the form of real penalties for these kinds of influence operations. The company is asking regulators to "impose economic, diplomatic and/or criminal penalties" on the promoters of these disinformation campaigns.


