Algorithm Law Proposal

By: FCP - info (@) algorithm-law.org, CC BY-SA 4.0, 09/25/24 (ver 1.0). Spanish-language version: Propuesta de Ley de Algoritmo.

Description:

The following Algorithm Law proposal was drafted anonymously and in a private capacity. It aims to empower users to take control of the algorithms that shape their digital experience, mitigating adverse effects such as addiction, anxiety, or polarization. Complementary to the GDPR and the Cookie Law, this regulation emphasizes transparency, informed consent, and the right to disable algorithms, contributing to a more ethical and healthier digital environment.

Objective:

To ensure transparency, control, and accountability in the use of algorithms in interactive products by offering users clear information about their scope, purpose, and potential adverse effects, and by giving them the option to disable them, especially those that may generate polarization, addiction, or negative impacts on mental health.

Article 1: Definition and Scope

This law will be complementary to the General Data Protection Regulation (GDPR) and the Cookie Law, and will apply to all companies offering interactive products, at both the national and international level.

Article 2: Algorithmic Transparency

Information Obligation: Companies using algorithms must provide users with clear, accessible, and understandable information on the following (see the illustrative, non-normative sketch at the end of this Article):

  1. Number of algorithms used on their platform or product.
  2. Purpose of each algorithm: Detailing whether it is for content personalization, targeted advertising, recommendations, security, etc.
  3. Scope: Information on the extent to which the algorithm influences the user experience (for example, in the ranking of news, ad selection, contact suggestions).
  4. Potential adverse effects: Detailing if algorithms may contribute to issues such as addiction, anxiety, depression, polarization, or behavioral manipulation.
  5. Information Updates: Users must be promptly notified of any change to an algorithm that alters its purpose, operation, or impact.
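
Purely by way of illustration, the information required by this Article could also be published in machine-readable form alongside the human-readable notice. The sketch below is a hypothetical TypeScript format, not part of the legal text; every type and field name is an assumption.

  // Hypothetical machine-readable disclosure for a single algorithm,
  // covering the items listed in Article 2. All names are illustrative.
  interface AlgorithmDisclosure {
    id: string;                        // stable identifier for the algorithm
    purpose: "personalization" | "advertising" | "recommendation" | "security" | "other";
    scope: string;                     // where it influences the user experience
    potentialAdverseEffects: string[]; // e.g. ["addiction", "polarization"]
    lastUpdated: string;               // ISO 8601 date of the last notified change (item 5)
  }

  // Example entry as it might appear on a platform's transparency page.
  const feedRanking: AlgorithmDisclosure = {
    id: "feed-ranking-v3",
    purpose: "recommendation",
    scope: "Orders posts in the main feed and selects contact suggestions",
    potentialAdverseEffects: ["addiction", "polarization"],
    lastUpdated: "2024-09-25",
  };

Publishing one such record per algorithm would also cover item 1 implicitly, since the total number of algorithms can be obtained by counting the records.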

Article 3: Right to Disable Algorithms

Option to Disable Algorithms: Users must have the option to disable any algorithm that is not essential to the basic functioning of the product, including but not limited to the following (see the illustrative, non-normative sketch at the end of this Article):

  1. Content recommendation algorithms that may produce ideological or political polarization.
  2. Algorithms designed to encourage addictive, continued use of the platform.
  3. Algorithms related to targeted advertising or excessive personalization based on detailed user profiles.

Prohibition on Restriction: Under no circumstances may platforms block access to the product or limit its basic use because a user has deactivated algorithms. Disabling algorithms must not affect the essential functionality of the service.
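
As a sketch only, assuming a hypothetical per-user settings object and feed function that this proposal does not define, the fragment below illustrates how non-essential algorithms could be individually switchable while basic functionality remains available:

  // Hypothetical per-user algorithm settings. Only non-essential algorithms
  // appear here; essential functionality never depends on these flags.
  interface AlgorithmSettings {
    contentRecommendations: boolean;  // corresponds to item 1 above
    engagementOptimization: boolean;  // corresponds to item 2 above
    targetedAdvertising: boolean;     // corresponds to item 3 above
  }

  // Disabling everything must leave basic use of the service intact.
  const optedOut: AlgorithmSettings = {
    contentRecommendations: false,
    engagementOptimization: false,
    targetedAdvertising: false,
  };

  // A platform might fall back to neutral behaviour when a flag is off,
  // e.g. a purely chronological feed instead of a ranked one.
  function orderFeed(posts: { id: string; postedAt: number }[], s: AlgorithmSettings) {
    return s.contentRecommendations
      ? rankByRelevance(posts)                              // hypothetical ranking step
      : [...posts].sort((a, b) => b.postedAt - a.postedAt); // chronological fallback
  }

  // Placeholder for the platform's own ranking logic (assumption).
  function rankByRelevance<T>(posts: T[]): T[] {
    return posts;
  }

The chronological fallback is only one possible neutral behaviour; the obligation is simply that basic use of the service does not depend on the disabled algorithms.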

Article 4: Warnings and Informed Consent

Prior Warning: Before a user begins interacting with a product that uses algorithms, they must be presented with a clear warning about the potential adverse effects of such algorithms, including but not limited to addiction, anxiety, depression, ideological or political polarization, and behavioral manipulation.
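
A minimal, hypothetical sketch of how the prior warning and the informed consent named in this Article's title could be recorded follows; the types and fields are assumptions, not requirements of the text:

  // Hypothetical record of the warning shown before first interaction.
  interface ConsentRecord {
    userId: string;
    warnedEffects: string[];   // adverse effects presented, per Article 2, item 4
    acceptedAt: string | null; // ISO timestamp, or null if the user declined
  }

  // Non-essential algorithms would only run once an explicit,
  // recorded acceptance exists.
  function algorithmsAllowed(c: ConsentRecord): boolean {
    return c.acceptedAt !== null;
  }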

Article 5: Supervision and Accountability

Supervision: A national or international regulatory body (depending on jurisdiction) will be responsible for:

  1. Monitoring companies' compliance with this law.
  2. Periodically auditing the algorithms used on platforms to verify their transparency and their compliance with data protection regulations.

Legal Responsibility: Platforms that fail to comply with the obligation to inform or provide the option to disable algorithms will be subject to penalties. Additionally, if algorithms cause demonstrable harm to users' mental health (such as anxiety or depression), companies will be held legally responsible.

Article 6: Special Protection for Minors

Protection for Minors: Additional protections will be established for minors, strictly prohibiting the use of algorithms that may emotionally or psychologically manipulate this vulnerable group.

Platforms must provide versions adapted for minors that are free of manipulative algorithms, or request explicit parental consent before activating algorithms on children's accounts.

Prohibited Targeted Advertising: Targeted advertising to minors using algorithms based on behavioral data collection will not be allowed.
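
As one possible, non-normative enforcement sketch, a platform could gate behavioural advertising and non-essential personalization on age and recorded parental consent; the threshold of 18 is assumed here for illustration only, since the age of majority varies by jurisdiction:

  // Hypothetical account model for enforcing Article 6.
  interface Account {
    age: number;                          // 18 assumed as the age of majority (illustration)
    parentalConsentForAlgorithms: boolean;
  }

  // Behavioural targeted advertising is never allowed for minors.
  function targetedAdsAllowed(a: Account): boolean {
    return a.age >= 18;
  }

  // Other non-essential algorithms require explicit parental consent on minors' accounts.
  function personalizationAllowed(a: Account): boolean {
    return a.age >= 18 || a.parentalConsentForAlgorithms;
  }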

Article 7: Relation to the Cookie Law and GDPR

Complementarity: This law will apply in a complementary manner to the regulations established in the Cookie Law and the GDPR. Platforms must comply with transparency requirements in the use of cookies, while algorithms will be subject to the principles of transparency and informed consent provided for in this law. Any data collection by algorithms must comply with the GDPR provisions regarding personal data protection and explicit consent.

Right to Rectification and Data Deletion: Users will have the right to access, modify, and delete data collected through algorithms, in line with what is stipulated by the GDPR.
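
These rights mirror the GDPR's access, rectification, and erasure rights. The fragment below is a minimal sketch, assuming hypothetical request types and a placeholder handler, of how a platform might expose them for data collected through algorithms:

  // Hypothetical request covering access, rectification, and erasure
  // of data collected through algorithms.
  type DataRightsRequest =
    | { kind: "access"; userId: string }
    | { kind: "rectify"; userId: string; field: string; newValue: string }
    | { kind: "erase"; userId: string };

  // Illustrative dispatcher; the log calls stand in for real storage operations.
  async function handleDataRights(req: DataRightsRequest): Promise<void> {
    switch (req.kind) {
      case "access":
        console.log(`Export algorithm-derived data for ${req.userId}`);
        break;
      case "rectify":
        console.log(`Update ${req.field} for ${req.userId}`);
        break;
      case "erase":
        console.log(`Delete algorithm-derived data for ${req.userId}`);
        break;
    }
  }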

Article 8: Penalties and Fines

Progressive Fines: Companies that do not comply with the law will be subject to fines that can reach a significant percentage of their global revenue, following a model similar to the GDPR.
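
For reference, the GDPR model cited here caps administrative fines at EUR 20 million or 4% of total worldwide annual turnover, whichever is higher (GDPR Art. 83(5)). A trivial sketch of that ceiling follows; the figures are the GDPR's own values, not numbers fixed by this proposal:

  // Fine ceiling under the GDPR model referenced above (GDPR Art. 83(5)):
  // the higher of EUR 20 million or 4% of worldwide annual turnover.
  // This proposal does not itself fix these figures.
  function maxFineEUR(worldwideAnnualTurnoverEUR: number): number {
    return Math.max(20_000_000, 0.04 * worldwideAnnualTurnoverEUR);
  }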

Penalties for Psychological Harm: If it is proven that an algorithm has caused psychological harm (such as an increase in anxiety or depression), the responsible company must bear the costs of treatment and compensate the affected user.




CC BY-SA 4.0