Stop Being Deceptive With Generative AI
The Gist
- Avoid manipulation. The FTC is concerned about brands using generative AI to steer people unfairly or deceptively into harmful decisions.
- Ensure transparency. The FTC emphasizes the importance of clearly distinguishing between organic content and advertisements when using AI in marketing.
- Assess risks. The FTC expects brands to conduct thorough risk assessments and provide adequate training for responsible AI use in marketing and customer experience campaigns.
The Federal Trade Commission (FTC) has issued another round of stern warnings to marketers and customer experience professionals about the use of AI in marketing campaigns and customer service.
Don’t be deceptive when customizing marketing and advertising with generative AI. And don’t be deceptive when using generative AI in customer service communications, such as when a customer wants to cancel a service.
What else constitutes a “bad look” in the eyes of the FTC? Firing the personnel devoted to AI ethics and responsible engineering. Microsoft, are you listening?
“If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look,” Michael Atleson, attorney for the FTC Division of Advertising Practice, wrote in a May 1 blog post.
FTC: We’re Watching Generative AI Very Closely
The FTC investigates and prevents unfair methods of competition, and unfair or deceptive acts or practices affecting commerce. What does its latest message about artificial intelligence mean to marketers and customer experience professionals?
For now, it means the US government is closely watching generative AI deployments in customer experience and marketing. It also signals that formal regulation may be on the way. And let us not forget: The FTC is being pressured by at least one policy group to shut down development of OpenAI’s GPT-4 language model and investigate the generative AI new kid on the block.
The FTC’s message this week is one of many calls for responsible use of AI in marketing and customer experience circles — all of which came on the heels of the meteoric rise of OpenAI’s chatbot, ChatGPT, everyone’s new favorite creative and analytical assistant that debuted in November.
In February, the FTC urged brands to “keep your AI claims in check” by asking questions:
- Are you exaggerating what your AI product can do?
- Are you promising that your AI product does something better than a non-AI product?
- Are you aware of the risks?
- Does the product actually use AI at all?
“If you think you can get away with baseless claims that your product is AI-enabled, think again. In an investigation, FTC technologists and others can look under the hood and analyze other materials to see if what’s inside matches up with your claims. Before labeling your product as AI-powered, note also that merely using an AI tool in the development process is not the same as a product having AI in it,” FTC officials wrote.
In March, the FTC, in a post titled “Chatbots, deepfakes, and voice clones: AI deception for sale,” urged brands to consider that the FTC Act’s prohibition on deceptive or unfair conduct can apply “if you make, sell, or use a tool that is effectively designed to deceive — even if that’s not its intended or sole purpose.”
Some questions the FTC asked brands to consider with generative AI and synthetic media include:
- Should you even be making or selling it?
- Are you effectively mitigating the risks?
- Are you over-relying on post-release detection?
- Are you misleading people about what they’re seeing, hearing or reading?
“While the focus of this post is on fraud and deception, these new AI tools carry with them a host of other serious concerns, such as potential harms to children, teens, and other populations at risk when interacting with or subject to these tools. Commission staff is tracking those concerns closely as companies continue to rush these products to market and as human-computer interactions keep taking new and possibly dangerous turns,” the FTC wrote.
Related Article: AI Policy Group Wants FTC to Investigate OpenAI, Shut Down GPT-4 Innovation
Avoid Customer Service Manipulation
This week’s message? It gets even more specific for marketers and customer experience professionals.
FTC’s Atleson focused his message on brands’ use of generative AI tools “to better persuade people and change their behavior.” And “when that conduct is commercial in nature,” he added, “we’re in FTC territory, a canny valley where businesses should know to avoid practices that harm consumers.”
Chatbots, he noted, are designed to provide information, advice, support and companionship and are “effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional.”
A key FTC concern? Brands that use generative AI in ways that, “deliberately or not, steer people unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing, and employment.”
“Companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups, should know that design elements that trick people into making harmful choices are a common element in FTC cases, such as recent actions relating to financial offers, in-game purchases, and attempts to cancel services,” Atleson added. “Manipulation can be a deceptive or unfair practice when it causes people to take actions contrary to their intended goals. Under the FTC Act, practices can be unlawful even if not all customers are harmed and even if those harmed don’t comprise a class of people protected by anti-discrimination laws.”
Marketing and Advertising: Transparency Is a Must
The FTC is also carefully monitoring how marketers place ads within a generative AI feature, just as they can place ads in search results. It warns against dark patterns (design practices that trick or manipulate users into making choices they would not otherwise have made and that may cause harm) and native advertising (digital advertising that resembles news, feature articles, product reviews, entertainment or other non-advertising online content).
“Among other things, it should always be clear that an ad is an ad, and search results or any generative AI output should distinguish clearly between what is organic and what is paid,” Atleson said. “People should know if an AI product’s response is steering them to a particular website, service provider, or product because of a commercial relationship. And, certainly, people should know if they’re communicating with a real person or a machine.”
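The guidance stops short of prescribing an implementation, but the disclosure points are concrete enough to sketch. The snippet below is a minimal, hypothetical illustration in Python of how a brand might surface those disclosures in a chatbot’s output; the ChatResponse payload and render helper are invented for this example and are not part of any FTC rule or real product.

```python
from dataclasses import dataclass, field

# Hypothetical response payload -- names invented for illustration,
# not an FTC-prescribed schema.
@dataclass
class ChatResponse:
    text: str
    ai_generated: bool = True                            # disclose machine authorship
    sponsored_links: list = field(default_factory=list)  # paid placements, kept separate

def render(response: ChatResponse) -> str:
    """Prepend the disclosures the FTC guidance calls for."""
    parts = []
    if response.ai_generated:
        # "People should know if they're communicating with a real person or a machine."
        parts.append("[AI-generated response]")
    parts.append(response.text)
    for link in response.sponsored_links:
        # An ad should always be labeled as an ad, distinct from organic output.
        parts.append(f"Sponsored: {link}")
    return "\n".join(parts)

print(render(ChatResponse(
    text="Here are some savings accounts to consider...",
    sponsored_links=["https://example.com/partner-bank"],
)))
```

The point is the pattern, not the particular code: paid placements and machine authorship are flagged explicitly before the user ever sees the text, rather than blended into what looks like organic output.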
Related Article: FTC Issues Stern Guidance to Marketers on AI Messaging
Get Serious About Training, Risk Assessment With Generative AI
So what will the FTC be looking for in terms of responsible uses of generative AI in marketing and customer experience campaigns?
Two big areas are risk assessment and training.
“Your risk assessment and mitigations should factor in foreseeable downstream uses and the need to train staff and contractors, as well as monitoring and addressing the actual use and impact of any tools eventually deployed,” Atleson said. “If we haven’t made it obvious yet, FTC staff is focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers. And for people interacting with a chatbot or other AI-generated content, mind Prince’s warning from 1999: ‘It’s cool to use the computer. Don’t let the computer use you.’”
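Atleson’s call for “monitoring and addressing the actual use and impact of any tools eventually deployed” can also be sketched in code, with the caveat that this is one hypothetical approach rather than anything the FTC prescribes. The function name and fields below are invented for illustration: a structured audit log of each chatbot interaction gives compliance and risk staff something concrete to review after launch.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai.audit")

def audit_interaction(prompt: str, response: str, sponsored: bool) -> None:
    # Keep a structured record of what the deployed tool actually told a
    # customer, so its real-world use and impact can be reviewed post-release.
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "sponsored": sponsored,  # flag paid steering for later review
    }))

# Example: log a cancellation exchange for the post-deployment risk review.
audit_interaction("How do I cancel my plan?", "Here's how to cancel...", sponsored=False)
```

However a brand wires this up, it is the record of what the tool actually said to customers that makes a post-deployment risk review possible.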