By: FCP - info (@) algorithm-law.org, CC BY-SA 4.0, 09/25/24 (ver 1.0). Algorithm Law Proposal (Spanish version: "Propuesta de Ley de Algoritmo")
The following Algorithm Law proposal was made anonymously and privately. It aims to empower users to control the algorithms that influence their digital experience, mitigating adverse effects such as addiction, anxiety, and polarization. Complementing the GDPR and the Cookie Law, this regulation emphasizes transparency, informed consent, and the right to disable algorithms, contributing to a more ethical and healthier digital environment.
Purpose: To ensure transparency, control, and accountability in the use of algorithms in interactive products, offering users clear information about their scope, purpose, and potential adverse effects, while giving them the option to disable them, especially those that may generate polarization, addiction, or negative impacts on mental health.
Scope: This law will be complementary to the General Data Protection Regulation (GDPR) and the Cookie Law, and will apply to all companies offering interactive products at both national and international levels.
Information Obligation: Companies using algorithms must provide users with clear, accessible, and understandable information on:
Option to Disable Algorithms: Users must have the option to disable any non-essential algorithm for the basic functioning of the product, including but not limited to:
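To make the opt-out requirement concrete, the sketch below shows one way a platform might gate non-essential algorithms behind a per-user preference. The algorithm names, class, and function names are illustrative assumptions, not part of the proposal.

```python
from dataclasses import dataclass, field

# Illustrative examples of non-essential algorithms a user may disable;
# the proposal itself does not enumerate them.
NON_ESSENTIAL = {"personalized_feed", "autoplay", "infinite_scroll", "targeted_ads"}

@dataclass
class UserPreferences:
    """Tracks which non-essential algorithms the user has disabled."""
    disabled: set = field(default_factory=set)

    def disable(self, algorithm: str) -> None:
        if algorithm not in NON_ESSENTIAL:
            raise ValueError(f"{algorithm!r} is not a recognized non-essential algorithm")
        self.disabled.add(algorithm)

    def is_enabled(self, algorithm: str) -> bool:
        # Anything not explicitly disabled runs; essential algorithms
        # are never in NON_ESSENTIAL, so they cannot be switched off.
        return algorithm not in self.disabled

def select_feed(prefs: UserPreferences, chronological: list, personalized: list) -> list:
    """Fall back to a plain chronological feed when personalization is disabled."""
    if prefs.is_enabled("personalized_feed"):
        return personalized
    return chronological
```

The key design point, consistent with the proposal, is that the product must remain functional after opt-out: disabling a non-essential algorithm swaps in a neutral default rather than degrading the basic service.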
Prior Warning: Before a user begins interacting with a product that uses algorithms, they must be presented with a clear warning about the potential adverse effects of using such algorithms, including but not limited to:
Supervision: A national or international regulatory body (depending on the jurisdiction) will be responsible for monitoring companies' compliance with this law and for periodically auditing the algorithms used on platforms to verify their transparency and compliance with data protection regulations.
Legal Responsibility: Platforms that fail to comply with the obligation to inform or provide the option to disable algorithms will be subject to penalties. Additionally, if algorithms cause demonstrable harm to users' mental health (such as anxiety or depression), companies will be held legally responsible.
Protection for Minors: Additional protections will be established for minors, strictly prohibiting the use of algorithms that may emotionally or psychologically manipulate this vulnerable group.
Platforms must provide adapted versions for minors that are free of manipulative algorithms, or request explicit parental consent for algorithm activation on children's accounts.
Prohibition of Targeted Advertising: Targeted advertising aimed at minors using algorithms based on behavioral data collection will not be allowed.
Complementarity: This law will apply in a complementary manner to the regulations established in the Cookie Law and the GDPR. Platforms must comply with transparency requirements in the use of cookies, while algorithms will be subject to the principles of transparency and informed consent provided for in this law. Any data collection by algorithms must comply with the GDPR provisions regarding personal data protection and explicit consent.
Right to Rectification and Data Deletion: Users will have the right to access, modify, and delete data collected through algorithms, in line with what is stipulated by the GDPR.
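As a minimal sketch of how the access, rectification, and deletion rights above might surface in a platform's data layer, the class below exposes one operation per right. All names are hypothetical; real GDPR compliance also involves backups, logs, and third-party processors, which this sketch ignores.

```python
class AlgorithmDataStore:
    """Illustrative store for data collected about users by algorithms,
    with one method per GDPR-aligned right named in the proposal."""

    def __init__(self) -> None:
        # Maps user_id -> {data_key: value}; in-memory for the sketch.
        self._data: dict = {}

    def record(self, user_id: str, key: str, value) -> None:
        """Collection: an algorithm stores a data point about a user."""
        self._data.setdefault(user_id, {})[key] = value

    def access(self, user_id: str) -> dict:
        """Right of access: return a copy of everything held on the user."""
        return dict(self._data.get(user_id, {}))

    def rectify(self, user_id: str, key: str, value) -> None:
        """Right to rectification: the user corrects a stored value."""
        self._data.setdefault(user_id, {})[key] = value

    def erase(self, user_id: str) -> None:
        """Right to erasure: delete all algorithm-collected data on the user."""
        self._data.pop(user_id, None)
```

Returning a copy from `access` (rather than the internal dict) keeps the user-facing view read-only, so modification happens only through the explicit `rectify` and `erase` paths.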
Progressive Fines: Companies that do not comply with the law will be subject to fines that can reach a significant percentage of their global revenue, following a model similar to the GDPR.
Penalties for Psychological Harm: If it is proven that an algorithm has caused psychological harm (such as an increase in anxiety or depression), the responsible company must bear the costs of treatment and compensate the affected user.