Public Policy

The position of the Maldita.es Foundation on the government’s proposals to “defend digital rights on social media”

No concrete legal text has yet been presented, making it difficult to analyze the scope and potential effectiveness of some of these reforms if they are approved. A ban on minors’ access to social media must not mean the end of online anonymity or require platforms to collect more sensitive data from minors.

February 6, 2026

The government has announced a package of measures “to defend digital rights on social media,” including a ban on access for minors under 16; penalties for algorithmic manipulation and the amplification of illegal content; criminal liability for executives of digital platforms; and “zero tolerance” for sexualized content involving minors. At the Maldita.es Foundation, we have spent many years investigating the role that major platforms play in disinformation and their responsibilities, and we therefore want to share our view.

First of all, based on what we know today, it is difficult to form a definitive opinion. Several of the measures require legislative reform, and the government has not yet presented a legal text, which is essential for understanding the scope and effectiveness of these reforms, should they ultimately become law after parliamentary debate. Even so, we believe we can contribute to this public discussion based on our experience and the best available evidence. It is also necessary to clarify which types of platforms would count as “social media” and therefore be affected: only those that European legislation classifies as “very large platforms” because of their number of users, or also, for example, forums such as Reddit or Discord and messaging applications.

A ban on social media access for minors under 16: THE DEVIL IS IN THE DETAILS

This debate is not unique to Spain. A similar ban has been in force in Australia for two months and is under consideration in France and the United Kingdom. In all these countries, supporters of the measure rely on similar arguments, pointing to various studies documenting problems of addiction and negative consequences for learning or mental health. They also note that the law already prohibits minors from accessing other products considered harmful, such as alcohol or tobacco, or from entering certain establishments such as betting shops. There is an underlying discussion about whether social media are truly comparable to those products in terms of the harm they cause. Beyond that, however, there is a debate that at Maldita.es we consider fundamental: how such a ban would be implemented in practice. The Prime Minister has specifically stated that the intention is not to rely on “tick boxes, but real barriers that work.” And that may pose a problem.

A measure intended to protect minors cannot in practice become the end of online anonymity or a state control mechanism to determine who does or does not have a social media profile. Nor can platforms alone be responsible for enforcement if this requires them to collect and process sensitive data from all users, especially minors. At the same time, other age-estimation technologies, such as facial recognition, are proving ineffective in Australia, where minors themselves report that the actual impact of the measure has been very limited. It must also be considered that banning access to certain platforms may push minors toward more dangerous and less regulated online spaces, such as Roblox.

In any case, since the details of the legislative change—intended to be incorporated into the Draft Law on the Protection of Minors in Digital Environments—are not yet known, we remain open to reconsidering some of our concerns if those problems are addressed. What will not change is our position regarding other actions public authorities can take to protect minors in the digital environment.

Minors and all Spaniards need far more education to face online threats, to develop critical thinking, and to strengthen media literacy. Schools and families are appropriate spaces for these conversations, and public authorities can support them, starting by officially including these topics as cross-cutting competencies in the educational curriculum. Maldita.es has proposed this to different governments at all levels.

Criminal liability for executives and for algorithmic manipulation and amplification of illegal content: ECONOMIC LIABILITY IS MORE EFFECTIVE IN ACHIEVING CHANGE

The largest digital platforms are failing to comply with their legal obligations to remove unlawful content, as we have documented repeatedly at Maldita.es. This creates a sense of impunity that undermines public trust, as digital spaces sometimes appear to operate outside the rules imposed by democratic states. However, it is not clear that criminal prosecution is the most effective way to achieve the ultimate goal: ensuring that these platforms make the necessary changes to address the problem.

In our experience, increasing the economic and financial liability of these platforms has been more effective. Elon Musk is unlikely to come to Spain to testify as a defendant, but X has a thriving business in Spain that can be subject to fines. At Maldita.es, we believe that economic consequences are what can truly drive change. Although European legislation already provides for very substantial fines for similar violations, Spain can also make progress in this direction.

The government’s proposal refers to several issues, so it is important to clarify some concepts. Disinformation must not be confused with illegal content: much illegal content is not disinformation, and most disinformation is not illegal. In the case of illegal content—particularly content that is clearly illegal (child sexual abuse material, threats, obvious scams)—platforms are obliged to remove it or assume legal responsibility for the harm caused. In the case of harmful but legal content, which is often the case with disinformation, platforms can be required to take measures.

However, at Maldita.es we have long argued that deletion practices by platforms such as YouTube or TikTok are not effective against disinformation; on the contrary, they create greater distrust, leave users less protected, and give platforms an excuse not to do what is truly effective: provide more information, add context, and explain to users why the content they are seeing is problematic.

European Union legislation already sets out how to report illegal content to a platform and how platforms must act to block it. The same law also imposes obligations on the largest platforms to take measures to tackle disinformation, which do not necessarily involve removal. What the government appears to propose (again, pending the actual legal text) is to move forward with criminal liability for platform executives “particularly when they fail to comply with an order to remove unlawful content,” but it also mentions “toxic content.”

First, we assume that such an “order to remove” would refer to a judicial order; anything else would be inconceivable. It is important to clarify that, legally, a platform can already be held liable for failing to act on content that a user reported as illegal if a judge later determines that it was clearly unlawful. But ultimately, it must always be a judge who has the final say.

In any case, regarding the criminal liability of executives, we must return to the question of usefulness. If the final text refers to the criminal liability of global platform leaders, it is unlikely those executives would end up standing trial in Spain. If instead it concerns facilitating criminal liability for the legal representatives of these platforms in Spain, it does not seem that the threat of criminal punishment for those individuals would significantly alter corporate behavior.

As for the government’s announced actions to “penalize algorithmic manipulation and the amplification of illegal content,” it is essential to know the details of the text before forming an opinion. As noted, European legislation already addresses the responsibility of large platforms in managing these risks, with the European Commission holding exclusive supervisory competence. However, algorithmic amplification is a key issue whose effects we have studied in depth at Maldita.es.

We are very interested in understanding how a legal reform would establish such responsibility and how manipulation would be proven, particularly when platforms do everything possible to avoid legally binding audits of their algorithms. If, instead, the focus is on users who manipulate systems to amplify illegal content, it will be interesting to see what penalties are contemplated—although, again, we believe financial liability would be a more appropriate instrument.

“Zero tolerance for offenses related to sexualized content involving minors”: ADAPTING EXISTING LAWS TO TECHNOLOGICAL CHANGE

Some of our recent investigations make it clear that major platforms such as TikTok have a problem with sexualized content involving minors that they cannot, or will not, effectively address. In this area, it may be a good idea to clarify the legal definition of the crime of possession and distribution of child pornography in order to adapt it to new formats, such as sexualized content generated with artificial intelligence systems. On this matter, the government has announced its intention to work with other public institutions, particularly the Public Prosecutor’s Office, to enforce existing laws. We believe it is especially important that the government also push for cooperation at the European level and ensure that these risks receive priority attention from the teams responsible for enforcing the EU Digital Services Act within the European Commission.

RECOMMENDATIONS FROM THE MALDITA.ES FOUNDATION TO LAWMAKERS

  1. The declared aims of the reform are positive, addressing “the protection of minors, mental health, and the spread of disinformation.” But illegal content is only part of the problem, and criminal liability is only part of the solution.
  2. Legal concepts must be clearly defined regarding what is to be pursued and how those rules would interact with the existing EU regulatory framework governing large platforms.
  3. The first responsibility of public authorities in addressing disinformation is not to produce disinformation themselves; the second is to strengthen citizens’ capacity to confront it through education and by supporting a high-quality media ecosystem; only then comes regulation, although regulation can play a significant role in addressing the responsibilities of large platforms.
  4. Any reform must be guided by the protection of the rights to freedom of expression and to receive truthful information, as set out in Article 20 of the Spanish Constitution. It must not only defend these rights, but also clearly explain to citizens why actions against disinformation and in favor of greater accountability for major platforms are beneficial to the enjoyment of those rights, and seek the broadest possible social consensus to implement them.