15/10/2020
Nineteen days before the presidential election in the United States, the New York Post published "an email that reveals that Hunter Biden introduced a Ukrainian businessman to his father when he was vice president." Facebook has decided to limit the distribution of that article, and Twitter is blocking users from sharing it altogether.
Here we explain in detail what the tabloid reported and why the two networks have doubts about it, but this is also a good moment to explain why at Maldita.es we believe platforms must be far more transparent in decisions like this, why they should involve independent fact-checkers, and how they can do better.
A few hours after the New York Post published the article, a Facebook spokesperson explained that it was "eligible to be analyzed by the fact-checkers" of its third-party verification program (of which Maldita has been part since 2019). However, it was the spokesperson's next sentence that drew the most attention: "In the meantime, we are reducing its distribution on our platform."
https://twitter.com/andymstone/status/1316395902479872000
At Maldita.es we were surprised that Facebook limited the distribution of content that had not yet been verified, and that it did so without the participation of external fact-checkers. We asked ourselves who made that decision, how, and based on what criteria. It did not take us long to see that the director of the International Fact-Checking Network (IFCN), the international alliance to which we belong along with more than 70 fact-checkers from around the world, was asking himself the same questions. Many IFCN members also participate in Facebook's third-party verification program.
https://twitter.com/baybarsorsek/status/1316426609247821824?s=28
For how long will Facebook reduce the distribution of unverified content without fact-checkers weighing in? What happens if a fact-checker later determines that the content is true? These are some of the questions raised by Baybars Örsek, the director of the IFCN, and we share them.
We have spoken to Facebook, and they explained that the measure is part of their effort to "protect the elections": in some countries, including the US, the company reserves the ability to "temporarily reduce the distribution of content" when there are signs it could be false. They say this buys time for an independent fact-checker to work on the content, which is sometimes a lengthy process.
The explanation is reasonable, but for us the key is how that decision is made. Who decides to "temporarily reduce distribution"? Based on what criteria? Are those criteria clear, public, and transparent? Can content be penalized before an independent verification process has even begun? Is it a unilateral decision by the platform, or does anyone else have a say? How long can the penalty last if no fact-checker steps in to corroborate or refute the content?
Twitter, for its part, has decided to outright prevent its users from sharing the article. Twitter does not have a third-party verification program and does all of its content moderation itself, without the intervention of independent fact-checkers.
In this case, users who try to share the New York Post article about Joe Biden's son receive the message: "We cannot complete your request because Twitter or one of our business partners has identified this link as potentially harmful." The company has explained its reasons through a corporate account.
Twitter points out that its terms of service prohibit "distributing content obtained without authorization" and that it does not want to "incentivize hacking by allowing Twitter to be used to distribute material that has possibly been obtained illegally." In the case of the article about Biden's son, the New York Post itself explains that the email in question was recovered from a laptop that someone dropped off for repair and never picked up. The shop owner handed it over to the FBI, but not before making a copy of the hard drive and sharing it with someone close to Rudy Giuliani, an advisor to President Trump, who then passed it to the newspaper.
Twitter gives this explanation, but users do not see it at the moment they are prevented from sharing the article. Moreover, it is a decision made unilaterally by the company, without any kind of external verification. The company itself has acknowledged that "we have work to do to provide clarity when we apply our rules in this way" and that "we should add clarity and context when we prevent tweeting or direct messaging links that violate our rules." Twitter founder and CEO Jack Dorsey has said the company explained its decision poorly and that blocking content without giving "context as to why" is "unacceptable."
At Maldita.es we believe the problem runs deeper. Our mission is to fight disinformation, but decisions about what can and cannot be shared affect freedom of expression, and they must be made according to rules that are absolutely clear and transparent to the entire community.
For example, when we rate content for Facebook, it is not deleted; a notice with our rating is attached to it, along with a link to our debunk. In addition, whoever is rated can appeal, or can correct the content and have the rating changed. No platform should allow even the appearance that it censors content on a whim, judging each piece of content on the spur of the moment.
This is true at any time, but even more so during an election period. Social networks have become a public square in which citizens share their opinions and debate as a society, and that means the rules governing life online must be analogous to the rules offline. We need uniform rules and transparent decisions, not opaque processes: decisions that can be explained and defended publicly, and that allow any citizen to retrace the steps by which something was judged true or false. And for that, we believe independent fact-checkers should always be involved.
At Maldita.es we take our role in determining what is disinformation and what is not very seriously, and we can do that job because we have a very clear methodology that we apply in every case. The decisions Facebook and Twitter have made here show that the platforms still have part of that path to travel, although at least these two show a willingness to do something, unlike others such as YouTube, where misinformation in Spanish, for example about COVID-19, runs rampant without any measures being taken.