AI Titans Ink Pledge for Safer Tech Future
The Gist
- AI safety. Major AI firms sign Biden administration pledge to prioritize safety in technology development.
- Industry collaboration. Tech companies vow to work together to ensure AI transparency.
- Tech accountability. Leading AI businesses accept responsibility for secure AI practices.
At the invitation of President Biden, representatives from seven of the country’s top artificial intelligence companies gathered at the White House to add their signatures to a pledge confirming their commitment to “advancing the safe, secure, and transparent evolution of AI technology.”
On Friday, the Biden-Harris administration secured voluntary commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI, affirming their individual responsibility to assure safety, uphold the highest standards and “ensure that innovation doesn’t come at the expense of Americans’ rights and safety.”
“These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI — safety, security, and trust — and mark a critical step toward developing responsible AI,” the White House said in a statement. “As the pace of innovation continues to accelerate, the Biden-Harris administration will continue to remind these companies of their responsibilities and take decisive action to keep Americans safe.”
Beyond securing these commitments, the White House is also in the process of crafting an executive order it hopes will position America at the forefront of conscientious technological innovation.
Overall, these commitments will likely necessitate changes in how these companies develop, test and communicate about their AI models. So, exactly what did these leading AI companies agree to? The pledge focuses on three key issues: safety, security and trust.
Let’s take a closer look…
Related Article: Microsoft, Google, OpenAI Respond to Biden’s Call for AI Accountability
Safeguarding AI: Commitment to Thorough Safety Checks, Enhanced Transparency and Collaborative Standards
Under the “Safety” banner, the administration seeks commitments for comprehensive internal and external reviews (red teaming) of AI models. These reviews focus on guarding against significant sources of misuse and risk, including biological, chemical and cybersecurity threats to national security, as well as societal harms such as bias and discrimination. In practice, these companies may now need to involve independent domain experts in their red-teaming processes, or to disclose more about their safety procedures to the public.
Further, the pledge asserts a commitment to advance AI safety research, particularly in making AI decision-making more understandable. It also calls for collaboration among companies and with governments to share information about safety risks, emergent threats and attempts to bypass security measures. By signing, the companies promise to participate in (or establish) forums for developing, refining and adopting shared standards, which will serve as platforms for sharing information about frontier capabilities, emerging risks and threats, and to engage with governments, civil society and academia as needed.
Boosting AI Security: Investing in Cybersecurity Measures and Encouraging Third-Party Detection of Issues and Vulnerabilities
For AI models that have yet to be released, the companies have agreed to treat the underlying details like trade secrets, limiting access to only those whose work requires them.
The companies have also agreed to establish robust programs to detect insider threats and to store and handle these sensitive details securely, reducing the chance of an unauthorized leak. They also acknowledge that even after red teaming, AI systems may still contain flaws and vulnerabilities. To address this, they promise to set up incentive mechanisms, such as bounty programs, contests or prizes, for those who responsibly report these flaws or unsafe behaviors in their AI systems, promoting a culture of vigilance and proactive threat management.
Related Article: UK Authority Launches Initial Review of Artificial Intelligence Models
Trust in AI: Companies Pledge Collaboration and Transparency for AI-Generated Content
The White House believes it’s important for the public to be able to tell the difference between content made by humans and content made by AI. To enhance that trust, the companies agreed to the following commitments:
- Enable users to discern AI-generated audio or visual content through robust mechanisms such as provenance tracking and watermarking, including tools that can confirm whether content was generated by their systems (a toy illustration follows this list).
- Regularly disclose details about their AI models or systems, including capabilities, limitations and appropriate use scenarios, and share reports that discuss societal risks such as fairness and bias implications.
- Prioritize research on the societal risks associated with AI systems, with a focus on harmful bias and discrimination and on ensuring privacy protection.
- Use frontier AI systems to tackle pressing societal challenges such as climate change, early cancer detection and cyber threats, including initiatives that support public AI education and understanding.
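The pledge doesn’t prescribe any particular watermarking scheme, but a toy sketch can make the provenance idea concrete. The example below is a minimal, hypothetical illustration using only the Python standard library: a provider tags generated content with an HMAC over its bytes using a secret key, and a verifier confirms the tag later. Production approaches, such as statistical watermarks woven into model outputs or cryptographically signed metadata, are far more sophisticated; all names here are assumptions for illustration.

```python
import hmac
import hashlib

# Hypothetical secret held by the AI provider. Real schemes would use
# asymmetric signatures or statistical watermarks, not a shared secret.
PROVIDER_KEY = b"example-provider-signing-key"


def tag_content(content: bytes) -> str:
    """Produce a provenance tag for AI-generated content (toy HMAC scheme)."""
    return hmac.new(PROVIDER_KEY, content, hashlib.sha256).hexdigest()


def verify_content(content: bytes, tag: str) -> bool:
    """Check whether the content carries a valid tag from this provider."""
    expected = hmac.new(PROVIDER_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)


if __name__ == "__main__":
    output = b"Generated paragraph about climate policy."
    tag = tag_content(output)
    print(verify_content(output, tag))                # True: content is authentic
    print(verify_content(b"Edited paragraph.", tag))  # False: altered or untagged
```

In a real deployment, the tag would travel with the content as metadata, and verification would be exposed as a public tool, which is the kind of capability the companies committed to building.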
“The companies developing these pioneering technologies have a profound obligation to behave responsibly and ensure their products are safe,” the White House said in a statement. “The voluntary commitments that several companies are making today are an important first step toward living up to that responsibility. These commitments — which the companies are making immediately — underscore three principles that must be fundamental to the future of AI: safety, security, and trust.”