Shortly after Colombian presidential candidate Miguel Uribe Turbay was shot at a political rally in June, hundreds of videos of the attack flooded social media. Some of these turned out to be deepfakes made with artificial intelligence, forcing police and prosecutors to spend hours checking and debunking them during the investigation. A teenager was eventually charged.
Increasing adoption of AI is transforming Latin America’s justice system by helping tackle case backlogs and improve access to justice for victims. But it is also exposing deep vulnerabilities through its rampant misuse, bias, and weak oversight as regulators struggle to keep up with the pace of innovation.
Law enforcement doesn’t yet “have the capacity to look at these judicial matters beyond just asking whether a piece of evidence is real or not,” Lucia Camacho, public policy coordinator of Derechos Digitales, a digital rights group, told Rest of World. This may prevent victims from accessing robust legal frameworks and judges with knowledge of the technology, she said.
Justice systems across the world are struggling to address harms from deepfakes, which are increasingly used for financial scams, in elections, and to spread nonconsensual sexual imagery. There are currently over 1,300 initiatives across 80 countries and international organizations to regulate AI, but not all of these are laws, nor do they all cover deepfakes, according to the Organisation for Economic Co-operation and Development.
Deepfake videos surged by 550% between 2019 and 2023 worldwide, according to a report by Security Hero, an independent platform that investigates data protection and digital security. Less than 1% of deepfakes are created in Latin America, compared to more than 70% in Asia, but countries including Mexico, Chile, Brazil, and Colombia have seen some of the highest growth rates, according to a separate study.
Some countries have acted. South Korea and Australia criminalize specific deepfake abuses, and the U.S. recently passed the “Take It Down Act,” which penalizes the nonconsensual publication of intimate images, including deepfakes. In Latin America, Brazil banned the use of deepfakes in electoral campaigns last year, while Peru and Colombia this year passed AI laws that treat deepfakes as an aggravating factor in a crime. Argentina recently proposed a bill criminalizing the misuse of AI-generated content, with prison sentences of up to six years.
But while some countries in Latin America have used the European Union’s framework as a model, local iterations aren’t as robust, Franco Giandana, a policy analyst for Latin America and the Caribbean at Access Now, a digital rights group, told Rest of World. Often, “the language is too abstract and there’s still little grasp of the national and regional challenges — not just to regulate AI but to build a coherent development strategy suited to our context,” he said.
Prosecutors in Chile, Argentina, and Mexico, where deepfakes are not yet regulated, have struggled in recent months to secure convictions in cases involving high school students who created and distributed explicit deepfake images of female classmates without consent.
In December, a judge in Mexico, which criminalizes the distribution of sexual content without consent, acquitted a 20-year-old man charged with using AI to create sexual images of more than 1,000 women and minors, citing a “lack of sufficient evidence to prove his involvement.” The victims’ legal team appealed the ruling, and the case remains open. The man was separately sentenced to five years in prison, in a case brought by the Mexico City prosecutor’s office, for possessing child sexual abuse material.
Last year in Argentina, an 18-year-old man was accused of creating pornographic deepfake videos of at least 16 of his female classmates and publishing them, alongside their real names, on pornography websites. Because creating such deepfakes is not a crime in Argentina, José M. D’Antona, the lawyer representing the victims, built his case around digital crimes legislation and the psychological harm inflicted on his clients.
While the prosecutor’s office ordered the videos de-indexed from the websites, the victims’ names can still be found on some pornography sites, D’Antona told Rest of World. “The damage persists,” he said.
The unregulated use of AI has caused other harms. Across the region, police use AI-based facial recognition systems to track suspects, and have inadvertently harmed innocent citizens.
Last year, João Antônio Trindade Bastos was watching a football game in Aracaju, Brazil, when military police dragged him from the stadium in front of 10,000 fans. A facial recognition system had misidentified him as a wanted fugitive. Weeks earlier, a woman was arrested during carnival celebrations for someone else’s crime. Both were released once authorities determined they were not the individuals in question.
Such cases may be inevitable: Most AI systems are trained primarily on data from white populations, generating “false positives” when scanning Indigenous, Afro-descendant, and female faces, Dilmar Villena, executive director of Hiperderecho, a Peruvian digital rights organization, told Rest of World. Similarly, Chile’s Urban Criminal Prediction System has been criticized for relying on what experts call “dirty data”: police records riddled with racial profiling and selective enforcement.
Chile’s National Police and the Deputy Ministry of Public Security did not respond to a request for comment from Rest of World.
Such use of facial recognition and other AI tools “should be heavily regulated because we’re talking about the state using weapons against its own citizens,” Felipe Rocha, digital coordinator at Lapin, a Brazilian justice-tech research organization, told Rest of World. In June, Brazil’s Ministry of Justice and Public Security issued a regulation that allows public security agencies to use AI in criminal investigations, and bans the use of remote biometric identification in public spaces except when searching for missing persons or when there is an imminent threat to life.
Even as regulators struggle to keep up, courts across Latin America are increasingly turning to AI to classify cases and automate repetitive tasks.
In 2023, Colombian judge Juan Manuel Padilla chose a simple case on his docket to test the application of AI: He used ChatGPT to help draft a ruling in the case of an autistic child who needed state-funded medical treatment.
The case set a precedent for the use of AI in judicial decisions in the country. Not long after, Colombia established AI courtroom guidelines, allowing AI for routine tasks while mandating human oversight. About 85% of judges in the country now use free versions of ChatGPT or Microsoft Copilot, according to a 2025 study by the Universidad de los Andes. But most judges receive minimal AI training, leaving them to work out on their own how to integrate algorithms into legal practice.
Elsewhere in the region, Brazil uses SAJ Digital, an AI tool, to speed up cases and streamline magistrates’ workloads. In Argentina, Prometea has cut the time needed to process a legal opinion from 190 days to one hour. Colombia’s Constitutional Court has deployed PretorIA, an AI tool based on Argentina’s technology. Each day, it sorts through more than 2,700 acciones de tutela, legal requests by Colombian citizens for the protection of their fundamental rights.
Padilla, the Colombian judge, relies on AI tools to streamline case documentation, but strict regulation alone will not be enough to confront the issue of deepfakes within the justice system, he told Rest of World.
“Self-regulation is the only efficient path,” he said. “No legal framework will ever keep pace with the speed of AI.”