Anthropic, Google, Microsoft and OpenAI, companies developing some of the leading AI language models, have announced the Frontier Model Forum, "an industry body focused on ensuring the safe and responsible development of frontier AI models." Its goal is to "advance research on AI safety, identify best practices and standards, and facilitate information sharing between policymakers and the industry."
The initiative appears to be a direct response to the meeting that the leaders of these companies held last Friday with the President of the United States at the White House, where they made commitments along these lines.
The members of the Frontier Model Forum have explained that they will draw on the technical and operational expertise of their companies "to benefit the entire AI ecosystem," developing, among other initiatives, "a public library of solutions to support industry best practices and standards." The forum aims to "advance AI safety research to promote responsible development of frontier models, minimize risks, and enable independent and standardized assessments of capabilities and safety."
Another of its purposes will be to "identify best practices for the responsible development and deployment of frontier models," in a way that helps the general public "understand the nature, capabilities, limitations and impact of the technology." The signatories of this agreement also say they will collaborate with "policymakers, academia, civil society and companies to share knowledge about trust and safety risks."
The main areas in which the forum promises to support applications are those addressing "society's greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyberthreats."
The group will establish membership criteria so that other artificial intelligence companies can join. "Governments and industry agree that, while AI offers enormous potential to benefit the world, appropriate guardrails are needed to mitigate the risks," the forum noted, adding that "the governments of the US and the UK, the European Union, the OECD, the G7 (through the Hiroshima AI process) and others have already made important contributions to these efforts."
Over the next year, the Frontier Model Forum will focus on three aspects of developing the most advanced AI models: identifying best practices that mitigate risk, advancing AI safety research, and openly supporting the safety ecosystem by facilitating information sharing between companies and governments and independent access to research.
Kent Walker, president of global affairs for Google and its parent company Alphabet, has pointed out that "we are all going to need to work together to make sure that AI benefits everyone," while Microsoft president Brad Smith has commented that "companies that create AI technology have a responsibility to ensure that it is safe and remains under human control."
OpenAI Vice President of Global Affairs Anna Makanju said: "It is vital that AI companies, especially those working on the most powerful models, align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible." Anthropic's CEO has indicated that "the Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety."