Can Tech Giants Finally Play Nice on AI?

The Gist
- Tech titans. Four major tech companies form the Frontier Model Forum.
- Open invitation. Membership is available to organizations meeting specific criteria.
- Forum focus. The group concentrates on safety standards, research and communication.
Technology titans Anthropic, Google, Microsoft and OpenAI joined forces today to create the Frontier Model Forum, a significant move aimed at ensuring safety and responsibility on the rapidly evolving frontier of AI technology.
Within this collaborative platform, the companies will pool their expertise and resources in a collective endeavor to steer industry standards and promote best practices. The overarching goals are to bolster the AI ecosystem, establish technical assessments and benchmarks, and create a comprehensive public repository of solutions.
While the specific impact on marketing and customer experience professionals is hard to measure, the Forum aims to provide best practices around the usage and security of artificial intelligence. Those are efforts marketers and CX leaders should be examining now as they infuse AI into campaigns, content and customer support, and forthcoming outcomes from the Forum will likely help.
Brad Smith, vice chair and president at Microsoft, called the initiative a vital step in bringing the tech sector together to advance AI responsibly and tackle the challenges now so that it benefits all of humanity. “Companies creating AI technology have a responsibility to ensure that it is safe, secure and remains under human control,” Smith added.
Anna Makanju, VP of global affairs at OpenAI, said advanced AI technologies have the potential to profoundly benefit society; achieving this requires oversight and governance.
“It is vital that AI companies — especially those working on the most powerful models — align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible,” Makanju said. “This is urgent work, and this forum is well-positioned to act quickly to advance the state of AI safety.”
Related Article: FTC Won’t Tolerate Generative AI Deception in Marketing, Customer Service
Who Can Join the Frontier Model Forum?
While the initiative is spearheaded by four premier tech companies, the new forum is by no means exclusive: membership is open to any organization that meets certain criteria and wants to join the collective effort.
So, who can join?
- Organizations that develop “frontier models,” which the Forum defines as “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models and can perform a wide variety of tasks.”
- Organizations that exhibit a commitment to the safety of frontier models, employing both technical and institutional measures.
- Organizations that showcase a readiness to help propel the Forum’s efforts, including active participation in collaborative endeavors and support for the initiative’s development and operations.
What Are the Objectives of the Frontier Model Forum?
The Forum’s core objectives span multiple domains of AI development, including research on AI safety, identification and communication of best practices, knowledge sharing with various stakeholders and enabling applications for societal benefits in areas such as climate change, healthcare and cybersecurity.
While the Forum acknowledges that several entities, including the United States, United Kingdom, European Union, Organisation for Economic Co-operation and Development (OECD) and G7, among others, have made progress in setting AI standards, it aspires to go a step further by becoming a unifying platform that fosters cross-organizational dialogue and action centered on AI safety and responsibility.
Related Article: G7 World Leaders Want Global Standards in Generative AI
The Frontier Model Forum: Three Key Areas of Focus
Over the coming year, the Forum plans to concentrate on three key areas:
- Highlighting best practices. With a spotlight on ensuring safety standards and effective measures to counter a broad spectrum of potential risks, the goal is to foster a knowledge exchange and promote best practices among industries, governments, civil societies and academic institutions.
- Promoting AI safety research. The Forum seeks to bolster the AI safety ecosystem by pinpointing critical open research questions related to AI safety. Coordinating research efforts in areas like adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors and anomaly detection will be prioritized. An initial focal point will be the development and dissemination of a public repository filled with technical evaluations and benchmarks specifically for frontier AI models.
- Promoting inter-organizational communication. The Forum aims to set up trusted and secure channels for exchanging information among companies, governments and pertinent stakeholders on matters related to AI safety and associated risks. Following established responsible disclosure protocols, akin to those in fields such as cybersecurity, will be a guiding principle for the Forum.
What’s Next for the Frontier Model Forum?
Over the coming months, the Frontier Model Forum plans to set up an advisory board to guide its strategy, with members of diverse backgrounds and perspectives. Next, work begins to create a charter, establish governance and develop funding structures.
The Forum intends to collaborate with existing government and multilateral initiatives and build on the work of industry, civil society and research efforts across its focus areas. And it aims to explore ways to support and work with other multi-stakeholder efforts like the Partnership on AI and MLCommons.
“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation,” Kent Walker, president of global affairs at Google and Alphabet, said in a statement. “We’re all going to need to work together to make sure AI benefits everyone.”
Anthropic CEO Dario Amodei said he expects the Frontier Model Forum to play a vital role in coordinating best practices and sharing research on frontier AI safety.
“Anthropic believes that AI has the potential to fundamentally change how the world works,” Amodei said. “We are excited to collaborate with industry, civil society, government and academia to promote safe and responsible development of the technology.”