An open letter signed by hundreds of artificial intelligence experts and entrepreneurs calls on all AI labs to "immediately pause the training of AI systems more powerful than GPT-4 for at least 6 months." Among the signatories are well-known figures such as Elon Musk, one of the founders of OpenAI, and Steve Wozniak, co-founder of Apple; academics such as Stuart Russell and Ramón López de Mántaras; and writers such as Yuval Noah Harari. In total, the letter carries 1,124 signatures.

The letter, published by the Future of Life Institute and titled Pause Giant AI Experiments, warns that "AI systems with intelligence competitive with humans can pose profound risks to society and humanity, as demonstrated by extensive research and acknowledged by leading AI labs."

The signatories question whether it is worth letting advances in AI affect society without any regulation, as has happened so far: "Should we let machines flood our information channels with propaganda and falsehood? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, render us obsolete, and replace us? Should we risk losing control of our civilization?"

The signatories answer that "these decisions should not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." The letter argues that AI development has reached a point where it is important "to obtain independent review before beginning to train future systems." It even proposes limiting "the rate of growth of the computation used to create new models."

What good would a six-month moratorium do? The signatories propose that it be used "to jointly develop and implement a set of shared safety protocols for the design and development of advanced AI that are rigorously audited and overseen by independent outside experts."

These theoretical shared safety protocols, to which all AI systems would have to adhere, would guarantee, according to the signatories, that such systems are "safe beyond a reasonable doubt." In their view, the pause would not mean a halt to the development of artificial intelligence in general, but rather "a step back from the dangerous race toward ever-larger, unpredictable black-box models with emergent capabilities."

The statement calls for the six-month pause to be "public and verifiable" and, failing that, for governments to step in and impose a moratorium. Its proponents believe that AI developers today "must work with policymakers to dramatically accelerate the development of robust governance systems."

This would require a series of measures, such as new regulatory authorities; more powerful oversight and tracking of AI systems; provenance systems that make it possible to determine the origin of a piece of content, a "watermark that helps distinguish what is real from what is synthetic"; auditing and certification; liability for harm caused by AI; strong public funding for technical AI safety research; and "resources to cope with the drastic economic and political disruptions (especially to democracy) that AI will cause."