5 AI Ethics Questions Marketers Must Ask


The Gist

  • FTC focus. The FTC has begun to outline what constitutes fraudulent AI use and has already settled an ecommerce case involving deceptive claims of AI usage.
  • Risk complexity. The fleeting nature of service delivery combined with the scale of AI makes identifying fraud risk complicated for marketers.
  • Ethical inquiry. Marketers should consider the ethical implications of their AI strategies to ensure customer trust and compliance.

It’s an understatement to say there has been a rapid introduction of AI-based products and services. The public’s adoption of tools such as ChatGPT, Propensity, and Bard — now Gemini — has created immense curiosity about learning to use almost anything AI-related or AI-infused, raising important AI ethics considerations.

AI Ethics: Balancing Curiosity and Truth

Marketers of products that include AI are looking to leverage that curiosity. But when does courting customers cross the line into false advertising? What ethical concerns become a risk to how customers experience an AI-based product or service?

With AI, marketers must be more direct in identifying and explaining the benefits of AI-based offerings. Campaign tactics that do not clearly explain those benefits can mislead customers by setting expectations the product cannot meet.

Related Article: AI, Privacy & the Law: Unpacking the US Legal Framework

An FTC Warning

Like many leaders across organizations and verticals, the FTC has been keeping an eye on AI developments — but with a particular concern for marketplace transparency. Last year Michael Atleson, an attorney in the Federal Trade Commission (FTC) Division of Advertising Practices, posted an FTC notice online raising the concern that AI-based products are overhyped and that enthusiasm should be balanced with a modicum of caution.


The FTC’s Four Key AI Ethics Questions

The notice outlines four key questions the FTC will use to examine the validity of AI-based solutions:

  • Are you exaggerating what your AI product can do?

  • Are you promising that your AI product does something better than a non-AI product?

  • Are you aware of the risks?

  • Does the product actually use AI at all? 

Related Article: Is AI Executive Order a Data Privacy Compass for Customer Experience?

AI Value: Proving Enhancement and Impact

All of these are terrific questions. I imagine the last question will come up frequently among marketers of AI-influenced solutions, as popular software products are incorporating AI tools at a feverish pace.

Yet proving the value of a product enhanced with AI will also be particularly difficult. How does a consumer know that an enhancement has made their experience with a product or service significantly better?

Related Article: FTC Won’t Tolerate Generative AI Deception in Marketing, Customer Service

FTC Cracks Down on AI Misrepresentation

One case already demonstrates the major risk of not delivering AI-based experiences as promised. This past February, three business coaches settled an FTC claim that they had deceived affluent ecommerce clients with unfounded promises of increased earnings from their consulting. Part of their offering involved operating online storefronts on clients’ behalf within marketplaces such as Walmart and Amazon, and the coaching was advertised to include AI-powered services for those sites. In the end, the vast majority of clients did not achieve the promised earnings, and Walmart and Amazon suspended many of the sites for policy violations. Under the FTC settlement, the coaches had to turn over nearly $21 million in assets and accept permanent bans from consulting in the ecommerce space.

Related Article: Executive Order on AI: A Needed Step or Kitchen-Sink AI Governance?

AI Ambiguity: Navigating Benefits and Claims

As described in Atleson’s notice, AI encompasses a number of different frameworks, which makes its benefits ambiguous to define. Bad actors often exploit this ambiguity to sell ineffective products to unsuspecting customers. Consumers must be able to measure or compare benefit claims. A health drink, for example, can claim to contain iron, but it may not contain enough iron to genuinely benefit the body; consumers can at least compare the quantity of iron in one drink against another by reading nutrition labels.

But many offerings are services, and customer experiences with a service are fleeting. It is far more difficult to verify that promised outcomes were delivered in such transient experiences.

Related Article: AI in Customer Experience: The Impact on Customer Journey
