Time and again, the companies behind the design of artificial intelligence admit the possibility that it could be used against humanity. Beyond speculation about a possible Skynet moment, in which an AI makes its own decisions and threatens the existence of human beings, the nearest risks we face are those involving people. Yes, the bad guys also turn to technology for profit, and the first signs should keep us on alert.

One of the most disturbing things about the development of powerful AI models is that even those who have built them are unclear about their scope. OpenAI, the company behind ChatGPT, is constantly studying the reach of its technology. In one of its latest reports, it evaluated whether GPT-4, the language model that powers its chatbot, increases the risk that someone with bad intentions could create a biological weapon. The conclusions are not reassuring.

OpenAI admits that GPT-4 “can increase the ability of experts to access biological threat information, particularly with regard to task accuracy and completeness.” But what is most alarming is this phrase: “we are not sure of the importance of the observed increases.” As if that vagueness were not enough, the company recognizes that there is “a clear and urgent need to continue working in this area” and considers that, given the way cutting-edge AI systems are progressing, “it seems possible that future systems could provide considerable benefits to malicious actors.”

It would be enough for just one of those malicious actors – although this is no fiction, I like to personify them in the villain par excellence, Darth Vader – to manage to create a dangerous pathogen to put us all in trouble. Unlike the authors of the study, those trying to create a biological weapon have unlimited time to achieve a result while, day by day, the power of AI systems increases.

The OpenAI report recalls that last summer Dario Amodei, chief executive and co-founder of the AI company Anthropic, declared before a committee of the United States Senate that in the medium term artificial intelligence poses the “most alarming combination of imminence and severity” of risks, including the possibility of malicious actors using this advanced technology to produce biological weapons.

Whether or not that threat materializes, criminals’ uses of AI are already causing other kinds of havoc. This week the first multimillion-dollar scam carried out through a deepfake, a fake video that clones someone’s image and voice, was recorded. The Hong Kong authorities have explained that an employee of a multinational received an email from the (supposed) chief financial officer of his company summoning him to a video conference. To the worker, it was indeed that executive on the call, along with five other colleagues with whom he also met by video. In the course of those virtual meetings, the man was instructed to transfer 25 million dollars to a certain account, which he did. The people in those meetings were impersonated by AI. The line between real and fake has blurred. In New Hampshire, a company spread telephone messages with the cloned voice of the president of the United States, Joe Biden, recommending not to vote in the primary elections. Hold on tight, because there are curves ahead.