Stop Being Deceptive With Generative AI


The Gist

  • Avoid manipulation. The FTC is concerned about brands using generative AI to steer people unfairly or deceptively into harmful decisions.
  • Ensure transparency. The FTC emphasizes the importance of clearly distinguishing between organic content and advertisements when using AI in marketing.
  • Assess risks. The FTC expects brands to conduct thorough risk assessments and provide adequate training for responsible AI use in marketing and customer experience campaigns.

The Federal Trade Commission (FTC) has issued another round of stern warnings to marketers and customer experience professionals about the use of AI in marketing campaigns and customer service.

Don’t be deceptive when customizing marketing and advertising with generative AI. And don’t be deceptive when using generative AI in customer service communications, such as when a customer wants to cancel a service.

What else constitutes a “bad look” in the eyes of the FTC? Firing the personnel devoted to AI ethics and responsible engineering. Microsoft, are you listening?

“If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look,” Michael Atleson, attorney for the FTC Division of Advertising Practice, wrote in a May 1 blog post.

FTC: We’re Watching Generative AI Very Closely

The FTC investigates and prevents unfair methods of competition, and unfair or deceptive acts or practices affecting commerce. What does its latest message about artificial intelligence mean to marketers and customer experience professionals?

For now, it means the US government is closely watching generative AI deployments in customer experience and marketing, and it likely means some formal regulation may be in order. And let us not forget: The FTC is being pressured by at least one policy group to shut down development of OpenAI’s GPT-4 language model and to investigate the generative AI new kid on the block.

The FTC’s message this week is one of many recent calls for responsible use of AI in marketing and customer experience circles, all of which have come on the heels of the rapid ascent of OpenAI’s chatbot, ChatGPT, everyone’s new favorite creative and analytical assistant since its debut in November.

In February, the FTC urged brands to “keep your AI claims in check” by asking questions:

  • Are you exaggerating what your AI product can do?
  • Are you promising that your AI product does something better than a non-AI product? 
  • Are you aware of the risks?
  • Does the product actually use AI at all? 

“If you think you can get away with baseless claims that your product is AI-enabled, think again. In an investigation, FTC technologists and others can look under the hood and analyze other materials to see if what’s inside matches up with your claims. Before labeling your product as AI-powered, note also that merely using an AI tool in the development process is not the same as a product having AI in it,” FTC officials wrote.

In March, the FTC, in a post titled “Chatbots, deepfakes, and voice clones: AI deception for sale,” urged brands to consider that the FTC Act’s prohibition on deceptive or unfair conduct can apply “if you make, sell, or use a tool that is effectively designed to deceive — even if that’s not its intended or sole purpose.”

Some questions the FTC asked brands to consider with generative AI and synthetic media include:

  • Should you even be making or selling it?
  • Are you effectively mitigating the risks?
  • Are you over-relying on post-release detection?
  • Are you misleading people about what they’re seeing, hearing or reading?

“While the focus of this post is on fraud and deception, these new AI tools carry with them a host of other serious concerns, such as potential harms to children, teens, and other populations at risk when interacting with or subject to these tools. Commission staff is tracking those concerns closely as companies continue to rush these products to market and as human-computer interactions keep taking new and possibly dangerous turns,” the FTC wrote.

Related Article: AI Policy Group Wants FTC to Investigate OpenAI, Shut Down GPT-4 Innovation

Avoid Customer Service Manipulation 

This week’s message? It gets even more specific for marketers and customer experience professionals.
