AI Efficiency vs. Legal Rights: A Critical Look


The Gist

  • AI copyright infringement. Lawsuits highlight the clash between innovation and copyright, urging a legal reevaluation.
  • AI ethical issues. Scaling personal outreach into spam reveals the fine line between efficiency and ethical misuse.
  • Permission principle. Consent is key in distinguishing between innovation and intrusion in digital communication.

The New York Times and many other content creators and authors are suing generative AI pioneer OpenAI and Microsoft, claiming they trained their ChatGPT product on copyrighted material without permission or compensation. These AI copyright infringement lawsuits have many people asking some variation of the question:

Should we penalize a chatbot for doing what all human beings do — that is, assimilating information to inform responses — just because it does so more efficiently?

I’m not a lawyer or legal scholar, but it seems like the answer is almost assuredly yes. That’s because superhuman efficiency and scale routinely lead to abuse when it comes to AI copyright infringement and other issues. In fact, it’s easy to find compelling examples without even leaving the world of marketing.

Let’s discuss a few AI ethical issues.

[Image: A stack of dominoes falls into a golden copyright symbol. Credit: alexlmx on Adobe Stock Photos]

Cold Email vs. Spam

Sending one email to someone who hasn’t asked to hear from you is a cold email. That’s legal everywhere in the world: it’s just one person taking their precious time to reach out to another person.

However, if you do that same thing at scale, then you’re a spammer. Traditionally, that has meant using software to spend a small amount of your time sending the same (or largely the same) message to lots of people, collectively wasting a lot of theirs. Spam is a blight on the modern world, which is why the European Union, Canada, and many other nations have outlawed it.

The US, still operating under the highly permissive and very antiquated CAN-SPAM Act of 2003, sadly hasn’t made spamming illegal yet. However, inbox providers aggressively block spam, and consumers aggressively report unwanted email as spam.

In this new world of generative AI, I would argue that using generative AI to substantially automate the writing of cold emails is also spam, because it similarly lets you drastically increase the scale of your cold emailing.

Related Article: Is the Anti-Spam Law CAN-SPAM Now Meaningless?

Cold Calls vs. Spam Robocalls

When a person calls someone they don’t know personally and who hasn’t given them permission to call, that’s a cold call, which, like a cold email, is perfectly legal. However, when someone uses a little of their time to record a message and then uses an automated call system to push that message out to tons of people on behalf of an organization that doesn’t have permission to call those recipients, that’s a spam robocall.

Robocalls are among the top complaints to the Federal Communications Commission. And robocalls that try to sell you something are illegal, unless the organization has written permission from the phone number owner to call them with such offers, according to the Federal Trade Commission.

Generative AI will surely take robocalls to a new level of irritation and deceptiveness. A robocall impersonating Joe Biden that tried to discourage New Hampshire Democrats from participating in the state’s primary last month is a hint of what’s likely to come.

Related Article: Machine Learning and Generative AI in Marketing: Critical Differences

Individual Surveillance vs. Mass Spying

To depart from the marketing world for just a moment, most people don’t have a problem with individual surveillance, where a law enforcement officer shadows a person who’s suspected of wrongdoing to collect potential evidence. Limited personnel ensure that this tool is used with considerable discipline and reserved for the people who are most likely to commit a serious crime.

However, when machines engage in similar activities, it becomes mass surveillance, where everyone is presumed to be guilty of something until proved otherwise. Less than a quarter of Americans approve of their government spying on them, according to Amnesty International. And generative AI will turn mass surveillance into mass spying, where AI helps comb through all of that data for potential wrongdoing.

To bring things back to marketing, privacy has been a huge issue in the digital marketing space for years, and huge steps have been taken. Third-party cookies have been sunset by most browsers, with Google’s Chrome set to finally follow suit this year. Apple has launched Mail Privacy Protection, App Tracking Transparency, and Link Tracking Protection. And all of that is in the wake of the EU passing GDPR.

Related Article: Email Marketing’s Increasing Role as Third-Party Cookies Disappear

The Critical Value of Permission When It Comes to AI Ethical Issues

In all three of those cases, the essential element in turning something pernicious into something acceptable is permission from the person whose time you’re taking. We have clients that regularly email millions of people, and it’s fine because they secured permission from all of those email address owners. Robocalls are similarly fine when people opt in to get them. And tracking web, app, and email activity is OK with permission, helping brands send their customers the relevant messages they expect.

And, of course, the lack of permission is core to the lawsuits against OpenAI by The New York Times and others. With OpenAI admitting that it’s “impossible to train today’s leading AI models without using copyrighted materials,” the discussion seems to have shifted to attacking overly generous copyright laws.

For example, Yann LeCun, Meta’s chief AI scientist, argued that “most books” — those that don’t generate “significant money” — should be “freely available to download” because of the good it would do for society. (And by “society,” he seems to mean generative AI and other big tech companies.)

While copyright terms are absolutely bloated (thanks, Disney), experts also acknowledge that copyright laws chiefly benefit corporations rather than individuals, who rarely have the legal muscle to defend their rights. It’s hard not to see OpenAI, Meta and other tech companies as, in their own way, perpetuating that imbalance.
