Crafting the Future With Microsoft, Google and OpenAI

The Gist
- Industry engagement. High-profile tech firms contribute to AI policy discussions.
- Regulatory balance. Need for nuanced, risk-based AI regulations underscored.
- Transparency advocacy. Increased AI accountability and transparency widely advocated.
Missed your shot at molding the future of AI? You’re not alone.
The US government’s nationwide call for public commentary, issued in April by the National Telecommunications and Information Administration (NTIA), drew 1,447 varied responses, including heavy hitters like Microsoft, Google and OpenAI, all aimed at chiseling the contours of America’s AI accountability landscape. That window of opportunity officially closed this month.
In the ever-expanding universe of artificial intelligence, President Biden’s administration is on a mission to maintain an equilibrium between breakneck innovation and prudent regulation, with the NTIA steering the ship. Nestled within the Department of Commerce, the NTIA is responsible for advising the President on matters of telecommunications and information policy.
The heart of the issue lies in the crafting of policies for AI audits, assessments, certifications and other trust-building mechanisms. The goal is a comprehensive framework that inspires confidence in AI systems and regulates their usage effectively.
Customer Experience Companies Weigh in on AI Regulation
Among the commenters were several companies focused on the impact of generative AI on customer experience, including Salesforce, a customer relationship management (CRM) platform, and Splyt.co, a B2B mobility marketplace.
Splyt.co: Heavy Regulations Stall Small Companies
In a short, one-page note, Luis Barrera, co-founder and CEO of Splyt.co, suggested that “using current data and privacy rules as a guide could help make this easier for everyone” and encouraged NTIA to think about copyright issues, saying, “we also need to respect the rights of those who create that content.”
“At Splyt, we use AI to help our business grow and to make our customers’ experiences better. We’ve seen first-hand how powerful it can be, but also know it’s important to use it responsibly,” said Barrera. “When it comes to new rules, we hope you’ll remember the small startups. Heavy regulations can slow us down a lot, and penalties can hit us hard. We need clear, fair rules that match the level of risk involved.”
Salesforce: AI Chatbots Must Be Transparent With Customers
Meanwhile, in its 11-page response, Salesforce shared its thoughts on regulation. The company advocates for clearly notifying users when they are interacting with AI and chatbot systems, and for a consent requirement for products that simulate a person. It believes users should have the ability to evaluate the fairness, safety and accuracy of AI recommendations, with mandatory human intervention for significant decisions.
Salesforce supports risk-based accountability mechanisms in AI, implemented at various development stages to ensure meaningful consumer protection. It added that AI regulations should be risk-based and context-specific, with responsibilities assigned based on roles within the AI ecosystem.
The company also emphasizes the need for consistency between existing sector-specific rules and new AI regulations, reiterating that AI and chatbot systems should inform users they’re interacting with an AI and that any product simulating a person must obtain consent or be clearly labeled as a simulation.
“Regulation has a critical role both to protect people and also to foster innovation. AI regulation should apply a risk-based framework to proportionately address a full spectrum of harms that might be caused by AI,” Salesforce said in its statement. “In a risk-based framework, the more rigorous AI regulatory obligations should focus on the high-risk AI applications that are more likely to cause the most significant impacts or harms on individuals. Similarly in a risk-based framework, AI regulations should have less-intensive obligations for low-risk applications.”
Now, let’s delve into the perspectives of three AI industry titans: Microsoft, Google and OpenAI.
Microsoft’s 6-Point Plan for AI Oversight
In its comprehensive 16-page commentary, Microsoft distilled six recommendations for AI regulation:
- Leverage government-led AI safety frameworks: Microsoft highlighted the AI Risk Management Framework (RMF) by NIST as a particularly promising foundation, offering a robust template for AI governance that can be immediately utilized to manage AI risks.
- Formulate legal and regulatory frameworks based on AI’s tech architecture: Microsoft stressed the necessity of a regulatory approach that considers all layers of the AI tech stack and the potential risks inherent at each level. The company advocates for the enforcement of existing laws and regulations, particularly at the application layer.
- Prioritize transparency for AI accountability: To enhance accountability, Microsoft suggests the creation of a public, inspectable registry of high-risk AI systems and the implementation of “know your content” requirements, enabling users to discern when content has been generated by AI.
- Invest in capacity building for lawmakers and regulators: Microsoft recognizes the importance of equipping lawmakers and regulators with the necessary knowledge and resources to effectively manage AI systems, which includes bolstering agency budgets, providing education on new technologies and fostering inter-agency coordination for a consistent regulatory regime.
- Promote research to address open socio-technical questions: Microsoft encourages further technical and human-centered research to refine AI governance. The main research priorities should include the development of real-world evaluation benchmarks, improvement of AI model explainability and emphasis on human-computer interactions that respect users’ values and goals.
- Develop and align with international standards and best practices: Microsoft stresses the need for ongoing development of foundational international AI standards and accelerating the adoption of government-led AI safety frameworks, citing the ISO 27001 series as an existing model of best practices for information security.
Google’s AI Regulation Guidebook: Value, Variety and Vigilance
Google’s 33-page response outlined several points related to AI regulation:
- Recognize AI’s value and diversity: Google underscores the importance of distinguishing between various types of AI, stating that generative AI and large language models are not synonymous with all AI. Google stresses that new regulations should not hinder systems like Google Search, Gmail, Maps and Translate, all of which use AI to deliver services to users.
- Adopt a multi-layered, multi-stakeholder approach: Google advocates for a collective approach to AI governance that involves industry, civil society and academic experts. The company recommends a “hub-and-spoke” model of national regulation and stresses the need for international coordination on regulatory approaches, geopolitical security, and competitiveness.
- Promote accountability and AI innovation: Google urges policymakers to eliminate legal barriers to AI accountability efforts and adopt legislation that supports innovation. They suggest a risk-based approach, where deployers of high-risk AI systems provide documentation about their systems and undergo independent risk assessments. They also stress the importance of defining accountability metrics and benchmarks and recognizing that even imperfect AI systems can enhance service levels, reduce costs and increase availability.
- Invest in capacity building and international policy alignment: Google emphasizes the need to build technical and human capacity for effective AI risk management. It also stresses the importance of working with allies and partners to develop common approaches to AI regulation and governance that reflect democratic values.
- Advocate for risk-based assessments for AI systems: Google supports risk-based assessments when developing or deploying AI systems. They assert that with the right policies supporting trustworthy AI and innovation, the United States can continue to lead in AI development.
OpenAI’s 6-Page Strategy: Unveiling Its Approach to AI Safety and Accountability
In its succinct six-page response, OpenAI provided suggestions, many of which are based on its own current processes:
- System cards: OpenAI emphasizes transparency in the development of AI systems. As part of this approach, they publish a document, referred to as a System Card, for new AI systems they deploy. This document aims to analyze and describe the impacts of a system, taking into consideration factors such as use case, context and real-world interactions, rather than focusing solely on the model itself.
- Qualitative model evaluations via red teaming: Red teaming involves qualitatively testing models and systems in various domains to gain a holistic view of their safety profile. This process is conducted internally, as well as with independent individuals, and involves methods such as stress testing and boundary testing.
- Quantitative model evaluations: In addition to qualitative evaluations, OpenAI also creates automated, quantitative evaluations for various capabilities and safety-oriented risks. These evaluations allow comparison between different versions of models, and act as an input into decision-making about which model versions to deploy.
- Usage policies: OpenAI has usage policies in place that disallow the use of their models and tools for certain activities and content. These policies aim to prevent the use of their models and tools in ways that cause individual or societal harm.
- Assessing potentially dangerous capabilities: OpenAI acknowledges that highly capable foundation models have both beneficial and potentially harmful capabilities. They are working on measures to evaluate these potentially dangerous capabilities and are collaborating with academic and industry experts to develop diverse evaluation suites.
- Open questions about independent assessments: OpenAI acknowledges the increasing value of independent assessments of models and systems, including third-party assessments, to enhance accountability and transparency. They are considering their own approach to these assessments, including the selection of auditors/assessors and the establishment of appropriate expectations for such assessments.
- Registration and licensing for highly capable foundation models: OpenAI supports the development of registration and licensing requirements for future generations of highly capable foundation models. They suggest that such models could be subject to disclosure and registration expectations for training processes, and AI developers could be required to receive a license to create such models.
NTIA’s AI Commentary Window Closes
As the NTIA’s call for public commentary concludes, we are left with a trove of insights and recommendations from some of the industry’s most influential players. The viewpoints shared by Microsoft, Google, OpenAI, Salesforce and Splyt.co shed light on the complexities of AI regulation and the delicate balance between innovation and accountability.
These perspectives will serve as a guiding light for the NTIA as it moves forward in the challenging task of drafting and issuing a report on AI accountability policy.