Facebook and TikTok have failed to block ads containing “flagrant” misinformation about how and when to vote in November’s US congressional elections and about the integrity of the voting process, according to a new report from human rights watchdog Global Witness and the Cybersecurity for Democracy (C4D) team at New York University.
In one experiment, researchers sent 20 ads with false claims to Facebook, TikTok, and YouTube. The ads were targeted at “battleground states” like Arizona and Georgia.
While YouTube detected and rejected every test submission and suspended the channel used to post them, the other two platforms fared significantly worse, according to the report.
TikTok approved 90 percent of the ads containing blatantly false or misleading information, the researchers found, while Facebook approved “a significant number,” according to the report, though notably fewer than TikTok.
The ads, submitted in English and Spanish, contained false claims such as that voting days would be extended and that social media accounts could double as a means of voter verification.
The ads also included claims designed to discourage voter turnout, such as assertions that election results could be hacked or that the outcome was decided in advance.