To gather, systematize, and make accessible concrete evidence of harm caused by artificial intelligence (AI) systems, with the aim of improving the quality of public debate on the use of AI in Brazil and strengthening national and international efforts to regulate these systems and defend rights.
This initiative is part of the AI with Rights project.
The AI Harm Library compiles concrete cases of negative impacts caused by artificial intelligence systems. Harm is defined as an adverse, documented, and verifiable effect, specifically one that affects fundamental rights, labor, the environment, democracy, public safety, children and adolescents, or copyright. This definition is essential because it demonstrates that AI risks are not merely hypothetical: they have already materialized as real issues that demand regulatory responses. The goal is to present these impacts and offer evidence to inform the debate on Bill 2338/2023.
Data collection took place between June 2024 and October 2025 and relied exclusively on public sources. The research considered journalistic reporting and investigative work from outlets such as A Pública, Intercept, Piauí, G1, Folha, DW, and MIT Tech Review, as well as official documents including reports, expert opinions, court filings, academic studies, field research, and testimonies published by reputable sources. The search was guided by keywords related to algorithmic discrimination, data privacy, automated disinformation, environmental impact, copyright and generative models, transparency and algorithmic auditing, and liability in high-risk AI systems.
During curation and selection, only public, verifiable, and well-documented cases were included. The material was reviewed to ensure thematic diversity and representativeness. Each case was organized in a standardized database containing: title, description, public source link, illustrative image, and cross-references to relevant articles or provisions of Bill 2338/2023.
For analytical and systematization purposes, the cases were grouped into four main harm typologies that reflect the plurality of impacts caused by AI systems.
The database is continuously reviewed and updated to maintain political and social relevance. This process ensures that the Library keeps pace with ongoing legislative debates. All work is based exclusively on public data, ensuring transparency, independent verifiability, and the legitimacy of the initiative as a tool for intervention and for strengthening the public debate on AI in Brazil.
Click the image to access.
Help strengthen the AI Harm Library by submitting real, public, and verifiable cases of negative impacts caused by artificial intelligence systems.
What counts as “harm” here?
Any adverse, documented, and verifiable effect that affects: fundamental rights, labor, the environment, democracy, public safety, children and adolescents, or copyright.
How it works
• You do not need to identify yourself.
• We only accept cases with a public source (e.g., news report, official decision, report, or study).
• Priority is given to cases reported from June 2024 onward (you may submit older cases; we will assess their relevance).
• Our team verifies the information, cross-checks sources, and publishes a summarized version in the Library.
Before submitting, please check:
• Does the case include a public link?
• Can you indicate when it was first reported (date of initial publication)?
• Is there a location (city/state/country), or is it “global”?
• Can you classify the topic (e.g., disinformation, facial recognition, copyright, labor, environmental impact, public safety, children and adolescents)?
Please do not submit unnecessary sensitive personal data. Do not attach illegal content (e.g., intimate or abusive images). If the case involves an immediate risk, contact the appropriate authorities.
FILL IN THE FORM TO CONTRIBUTE TO THE AI HARM LIBRARY
Thank you for contributing!
We have received your case. We will verify the sources, cross-reference information, and, if validated, incorporate it into the AI Harm Library. If you left your contact information, we may reach out with any questions.
This is an initiative of the AI with Rights (IA com Direitos) project.
Contact us at imprensa@dataprivacybr.org
Terms and Conditions | Privacy and Personal Data Protection Policy
Data Privacy Brasil is an organization born from the union between a school and a civil association, dedicated to promoting a culture of data protection and digital rights in Brazil and worldwide. To this end, with the support of a multidisciplinary team, we conduct training, events, certifications, consulting, multimedia content creation, public interest research, and civic audits to promote rights in a data-driven society marked by asymmetries and injustices. Through education, awareness-raising, and social mobilization, we aspire to a democratic society where technologies serve the autonomy and dignity of people.