From Rosalía to the high school: artificial intelligence is making the creation of non-consensual pornographic images commonplace

An Internet user browses a page of fake pornographic videos of famous women. Luis Sevillano

The detection of fake nude photographs of teenagers in Almendralejo (Badajoz) shows that the plague of sexist violence built on hyperrealistic artificial-intelligence creations for malicious, above all pornographic, purposes has reached Spain. The United States, the United Kingdom and South Korea lead in complaints, but the cases of the singer Rosalía, victim of another singer who circulated a fake nude of her, and the model Laura Escanes, who reported a similar attack, show that the scourge has also taken root in Spain and spread to every sphere, including groups of minors. The weapons are easily accessible artificial intelligence applications, despite the limits some companies try to impose.

The number of these contents on the internet, known as deepfakes, doubles every six months, and they garner more than 134 million views. In 95% of cases the intent is to create non-consensual pornography, and women are the victims in nine out of 10 creations, according to Sensity AI, a research company that tracks hyper-realistic fake videos on the internet.

A deepfake is a video or a static image (it can also be audio) in which the face in the original is replaced by that of any other person, famous or not. Deepfakes rely on deep machine-learning techniques based on complex algorithms that, once implemented in an application, are easy to use.

Artificial intelligence has made this process accessible. It recreates a sequence by learning from thousands of similar images, using generative adversarial networks (GANs) to extract patterns and enable hyper-realistic reproduction. A powerful personal computer is not even necessary, since there are resources on cloud platforms, in applications, on websites and even on Telegram. Some companies, such as Google, have released a database of thousands of manipulated videos to help develop tools that detect fakes.
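The adversarial idea behind GANs can be illustrated with a deliberately tiny sketch: a "generator" produces fake samples, a "discriminator" learns to tell them from real ones, and each improves by playing against the other. This is an assumption-laden toy in plain Python with scalar models and made-up numbers; real deepfake systems train deep convolutional networks on images, not anything like this.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Numerically safe logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    ex = math.exp(x)
    return ex / (1.0 + ex)

class Generator:
    """Maps random noise to fake samples; only the offset b is learned here."""
    def __init__(self):
        self.w, self.b = 0.1, 0.0  # starts far from the real data

    def sample(self, n):
        return [self.w * random.gauss(0, 1) + self.b for _ in range(n)]

class Discriminator:
    """Scores how 'real' a sample looks, as a probability in (0, 1)."""
    def __init__(self):
        self.a, self.c = 0.0, 0.0

    def score(self, x):
        return sigmoid(self.a * x + self.c)

g, d, lr = Generator(), Discriminator(), 0.05
for step in range(2000):
    real = [5.0 + random.gauss(0, 0.1) for _ in range(8)]  # "real" data near 5.0
    fake = g.sample(8)
    # Discriminator update: push score(real) toward 1, score(fake) toward 0.
    for x, label in [(x, 1.0) for x in real] + [(x, 0.0) for x in fake]:
        err = label - d.score(x)
        d.a += lr * err * x
        d.c += lr * err
    # Generator update: shift b toward samples the discriminator accepts as real.
    grad_b = sum((1.0 - d.score(x)) * d.a for x in fake) / len(fake)
    g.b += lr * grad_b
```

After training, the generator's output has drifted toward the real data: neither network is ever shown an explicit target, only its opponent's judgment, which is what makes the technique so easy to package into point-and-click applications.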

Corporate responsibility

DALL-E, one of the digital creation applications, says it has “limited the ability to generate violent, hateful or adult images” and has developed technologies to “prevent photorealistic generations of real individuals’ faces, including those of public figures.” A group of 10 companies has signed a catalog of guidelines on how to build, create and share AI-generated content responsibly.

But many applications circulate freely on the internet, on messaging platforms or as open source. Moreover, the same artificial intelligence used to create the images makes it easy to bypass the controls.

The best-known deepfakes involve celebrities; two of the first to be affected were the actresses Emma Watson and Natalie Portman. But deepfakes can be made of anyone by harvesting their images from social networks or stealing them from personal computers.

“It is crucial to keep images protected and ensure privacy, since many cybercriminals can access them through various techniques,” says Marc Rivero, security researcher at Kaspersky. However, according to a report by this company, 43.5% of Spaniards do not protect the cameras on any of their devices, nor their images.

Sexist cyber violence

The use of deepfakes of women “is a problem of sexist violence,” Adam Dodge, founder of EndTAB, a non-profit organization devoted to education in the use of technology, tells the Massachusetts Institute of Technology (MIT).

The European Institute for Gender Equality takes the same view, and its report on cyber violence against women includes these creations as another form of sexist aggression.

“Online violence is nothing more than a continuation of the violence that women and girls suffer on a daily basis. It is amplified, extended and worsened by the use of the internet and digital devices,” explains Iris Luarasi, president of the Group of Experts on Action against Violence against Women and Domestic Violence (Grevio).

The institute (EIGE, for its acronym in English) counts among the “emerging” attacks against women “the dissemination of false images or the receipt of unsolicited explicit sexual content.”

Experts such as Borja Adsuara, a university professor specializing in digital law, advocate unequivocally labeling such content as fake and deploying automatic trackers for these creations. In his view, what separates a deepfake from a personal creation protected by freedom of expression is its verisimilitude and its purpose. Adsuara explains that current regulation focuses on the result of the action and the intent of the offender. “If the scene never existed because it is fake, no secret is being revealed. It should be treated as a case of slander, or as a crime against moral integrity when it is spread with the intention of publicly humiliating another person,” he explains.

As a solution, the lawyer proposes incorporating the concept of pseudo-pornography, which is already applied in cases involving minors. “This would allow crimes against privacy to cover not only real videos, but also realistic ones with intimate images that resemble those of a person.”

Last May, Junts per Catalunya presented a bill to reform the Penal Code to cover the dissemination of deepfakes, but it was rejected and pushed to the next legislature for debate. The text proposed prison sentences of six months to two years for disseminating these creations by any means when doing so “seriously undermines honor or privacy.” It also proposed a fine of one to three months for anyone who, having received them through any platform, “disseminates, reveals or transfers them to third parties without the consent of the affected person.” Heavier penalties were reserved for spouses or those close to the victim, for cases with a profit motive, and for cases where the victim was a minor or a person with a disability.

You can follow EL PAÍS Tecnología on Facebook and Twitter or sign up here to receive our weekly newsletter.
