Crafting the Future With Microsoft, Google and OpenAI


The Gist

  • Industry engagement. High-profile tech firms contribute to AI policy discussions.
  • Regulatory balance. Need for nuanced, risk-based AI regulations underscored.
  • Transparency advocacy. Increased AI accountability and transparency widely advocated.

Missed your shot at molding the future of AI? You’re not alone.

The US government’s nationwide call from the National Telecommunications and Information Administration (NTIA) for public comment in April drew 1,447 varied responses, including from heavy hitters like Microsoft, Google and OpenAI, all aimed at shaping the contours of America’s AI accountability landscape. That window of opportunity officially closed this month.

In the ever-expanding universe of artificial intelligence, President Biden’s administration is on a mission to maintain an equilibrium between breakneck innovation and prudent regulation, with the NTIA steering the ship. Nestled within the Department of Commerce, the NTIA is responsible for advising the President on matters of telecommunications and information policy.

The heart of the issue lies in the crafting of policies for AI audits, assessments, certifications and other trust-building mechanisms. The goal is a comprehensive framework that inspires confidence in AI systems and regulates their usage effectively.


Customer Experience Companies Weigh in on AI Regulation

Among the commenters were several companies focused on the impact of generative AI on customer experience, including Salesforce, a customer relationship management (CRM) platform, and Splyt, a B2B mobility marketplace.

Heavy Regulations Stall Small Companies

In a short, one-page note, Luis Barrera, co-founder and CEO of Splyt, suggested that “using current data and privacy rules as a guide could help make this easier for everyone” and encouraged the NTIA to consider copyright issues, saying, “we also need to respect the rights of those who create that content.”

“At Splyt, we use AI to help our business grow and to make our customers’ experiences better. We’ve seen first-hand how powerful it can be, but also know it’s important to use it responsibly,” said Barrera. “When it comes to new rules, we hope you’ll remember the small startups. Heavy regulations can slow us down a lot, and penalties can hit us hard. We need clear, fair rules that match the level of risk involved.”

Salesforce: AI Chatbots Must Be Transparent With Customers

Meanwhile, in its 11-page response, Salesforce shared its views on regulation. The company advocates clear notification for users interacting with AI and chatbot systems, along with consent requirements for products that simulate a person. It believes users should be able to evaluate the fairness, safety and accuracy of AI recommendations, with mandatory human intervention for significant decisions.

Salesforce supports risk-based accountability mechanisms in AI, implemented at various development stages to ensure meaningful consumer protection. It added that AI regulations should be risk-based and context-specific, with responsibilities assigned based on roles within the AI ecosystem.

The company also emphasizes the need for consistency between existing sector-specific rules and new AI regulations, reiterating that AI and chatbot systems should disclose to users that they are interacting with an AI, and that any product simulating a person must obtain consent or be clearly labeled as a simulation.
