Tomás is the fictitious name of a real person. One day this week, the skilled professional had just arrived home from work when he received a message in English on his iPhone's Messages app, which does not require a phone number and can also work with email accounts.

What Tomás saw left him speechless. The sender showed him a screenshot of a sexual video in which Tomás's face had been superimposed onto another person's body using an artificial intelligence program.

“I did it with my talent,” boasted the blackmailer, who demanded a considerable sum of money in exchange for not distributing the video to Tomás's contacts on social networks.

Several of those friends' and family members' profiles appeared in a short list. A nightmare. Tomás decided not to give in to the blackmail and reported it at the police station closest to his home.

Realizing that he was not going to collect anything, the blackmailer told Tomás that he had just sent the sex video to 200 people who knew him. It was a bluff; he never did it. The police officers took note of the case, but beyond the extortion attempt, no further crime occurred.

The use of artificial intelligence to commit crimes is already a reality, and it does not only affect famous people: it can happen to any of us.

Recently, 22 families of girls aged 12 to 14 in Almendralejo (Badajoz) reported that pornographic images bearing their daughters' faces, created with artificial intelligence, had been circulated. The alleged authors of these manipulated images were boys from their own circle and of similar ages, who are criminally liable from the age of 14 under Spain's law on minors.

In a similar case, two families filed complaints in Alcalá de Henares (Madrid) over photos of minors manipulated with AI to turn them into pornographic material. It has also happened in Ayamonte (Huelva), where another minor allegedly distributed manipulated images of 20 classmates.

The popularization of generative AIs, capable of creating text, music, voices, images and even videos, has increased the frequency of these serious incidents, although police forces do not yet keep statistics on them; there has not been time to measure the phenomenon.

In these cases, the dissemination of false but realistic child pornography images is punishable by prison sentences. Anyone who discovers images of this type on the Internet can turn to the Priority Channel of the Spanish Data Protection Agency (AEPD), which can initiate proceedings for the immediate removal of the content.

Impersonating people in various ways is one of the main risks of this technology. Last year, the FBI detected cases of falsified videos and voice recordings in the United States.

In one of these cases, the criminals used artificial intelligence to recreate the voice of a company's CEO, perfectly imitating his characteristic accent, and asked a company executive to transfer 220,000 euros to an account that turned out to be false and from which the money disappeared without any possibility of being traced.

A 2020 study by the Dawes Centre for Future Crime at University College London (UCL) brought together 31 experts to rank the severity of 20 crimes they expect to emerge over the next 15 years through the use of artificial intelligence.

The UCL study rates the 20 types of crime by severity. At the highest level is the impersonation of people using audio and video, whether for fraud, extortion, reputational damage or security breaches.

Also in the most serious category are driverless vehicles used as weapons, phishing based on personal data, disruption of AI-controlled infrastructure (food logistics, public services, traffic control), large-scale blackmail and the creation of fake news.

The study’s medium threat level includes military robots, fraudulent services advertised as genuine, manipulation of data available to AI systems, and cyberattacks based on this technology.

Other possible crimes include attack drones (individually or in swarms), denying individuals access to computer-controlled resources, deceiving facial recognition systems, and flooding stock and financial markets with massive automated operations. The study also lists a number of lower-level threats.

The forthcoming European artificial intelligence law envisages that generative AIs will have to disclose which content was generated with their technology, prevent the generation of illegal content, and publish data on the copyrighted material used. Last June, the European Parliament approved the proposal, which is now being negotiated with the European Commission and the Council. It is expected to come into force in early 2025.