An open letter signed by hundreds of artificial intelligence experts and entrepreneurs calls on all AI labs to “immediately pause for at least six months the training of AI systems more powerful than GPT-4”. Among the signatories are well-known figures such as Elon Musk, one of the founders of OpenAI, and Steve Wozniak, co-founder of Apple; academics such as Stuart Russell and Ramón López de Mántaras; and writers such as Yuval Noah Harari. In total, 1,124 signatures.
The missive, published by the Future of Life Institute and titled Pause Giant AI Experiments, points out that “AI systems with an intelligence competitive with that of humans can pose profound risks to society and humanity, as shown by extensive research and acknowledged by the leading AI labs”.
The signatories question whether it is worth letting advances in AI impact society without any regulation, as they do now: “Should we let machines flood our information channels with propaganda and falsehoods? Should we automate away all jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber us, outsmart us, make us obsolete and replace us? Should we risk losing control of our civilization?”
The signatories respond that “these decisions should not be delegated to unelected technology leaders. Powerful AI systems should be developed only when we are sure that their effects will be positive and their risks controllable.”
The letter argues that we have reached a point in the development of artificial intelligence where it is important to “obtain an independent review before starting to train future systems”. The signatories even propose limiting “the growth rate of computing used to create new models.”
The signatories propose that the pause be used “to jointly develop and implement a set of shared safety protocols for the design and development of advanced AI that are rigorously audited and overseen by independent outside experts.”
With these shared safety protocols, to which all AI systems would have to adhere, the signatories argue it would be guaranteed that such systems are “safe beyond a reasonable doubt”. In their view, this halt would not mean pausing the development of artificial intelligence technology in general, but rather “a step back from the dangerous race towards ever-larger, unpredictable black-box models with emergent capabilities”.
The statement calls for this six-month pause to be “public and verifiable” and, failing that, for governments to step in and impose it. Its proponents believe that those developing AI today “must work with policymakers to dramatically accelerate the development of robust governance systems.”
This would require a series of elements: new regulatory authorities; more powerful oversight and tracking of AI systems; systems that make it possible to determine the provenance of content, a “watermark that helps distinguish what is real from what is synthetic”; auditing and certification; liability for damages caused by AI; robust public funding for technical research; and “resources to deal with the drastic economic and political disruptions (especially to democracy) that AI will cause”.