Artificial intelligence outsmarts itself in facial recognition

Engineers face yet another challenge tied to high-end artificial intelligence. Automated facial recognition has been advancing at a steady pace, but so have the tools behind the deepfakes now spreading everywhere: forgeries (fakes) produced with deep learning.

The same technology that helps companies and institutions identify human beings is thus being used to generate ultra-realistic fictitious images. Machines are often said to outperform people in speed and reliability; judging by the experts’ reading of the situation, that is not what is happening with facial recognition and deepfakes.

One of the most authoritative voices to have weighed in on the matter is analyst Akif Khan, a vice president at the consulting firm Gartner. “Organizations should start questioning whether the digital tools they are using to verify identities are trustworthy enough,” he says. “It is difficult to distinguish what is real from what is false,” he acknowledges of the controversy.

The forecast is that deepfake-based attacks on companies will increase. Gartner analysts estimate that a third of companies will have to replace these security systems, or else reinforce them with newer, better-integrated solutions. For Khan, deepfakes can be understood as “a turning point in artificial intelligence.”

The Gartner vice president points out that various “malicious actors” can undermine “biometric authentication.” This warning, he says, should worry anyone who until now had entrusted that task to specialized but fairly conventional programs. It would be yet another demonstration of artificial intelligence’s ability to slip past limits, even the ones it defines itself.

The most fashionable technology of recent years is outsmarting itself, and not in the service of making humanity any happier, but to fuel lies, deception and fraud. As Khan adds, protection standards are becoming obsolete in very little time: so-called “injection attacks,” which exploit vulnerabilities in a website or application, increased by 200% in 2023.