So You Do AI, Too? Think Twice Before Saying Yes

8 Min Read By: Christopher Wlach

Artificial Intelligence: A Popular—and Fuzzy—Label

What isn’t branded “AI” these days? The past year has seen an explosion of companies, products, and services touting their ties to artificial intelligence (AI) technology. If they’re not “AI-powered,” they’re “AI-driven” or “AI-enabled.” In fact, so many organizations are itching to add “.ai” to their web addresses that Anguilla—the island controlling the domain name—is set to reap millions in domain registration fees this year. Nor is it just new market entrants adopting the term. When the company once known as Twitter announced its rebrand in July, its CEO declared that X too would be “[p]owered by AI.”

The term has jumped into the business world’s vocabulary with a swiftness that should feel familiar. Last year “metaverse” was rolling off executives’ tongues, with the word suddenly a fixture on corporate earnings calls. “Blockchain” experienced a similar phenomenon a few years earlier, with use of the term in company press releases growing more than twentyfold in the course of a year.

Driving this most recent marketing shift is AI’s enormous perceived economic potential. A June 2023 McKinsey report predicts that in the coming years generative AI—artificial intelligence that can generate original content—will add trillions in value to the global economy. Microsoft, too, is bullish on the technology, staking billions on OpenAI and its popular chatbot ChatGPT. With its promise of bounty and investor allure, “AI” is a label that every company wants to wear.

But commercial benefits alone don’t explain the speed or scale of industry’s marketing repositioning around AI. Countless businesses have been able to quickly trumpet their AI capabilities because . . . no one can say exactly what “AI” means.

“Artificial intelligence” is “an ambiguous term with many possible definitions,” the Federal Trade Commission (FTC) observed in guidance earlier this year. The two words sweep in a host of different technologies that serve different ends—from generative AI tools that artists prompt to create fantastic imagery, to machine-learning programs that financial companies can use to predict loan risk.

As Theodore Claypoole, a partner of Womble Bond Dickinson’s Intellectual Property Practice Group, notes in a September 2023 Business Law Today article, this “lumping together of disparate functionalities into a single unmanageable term” presents a challenge for those designing rules for the technologies. Yet for a company’s marketing team, the term’s broadness and flexibility may seem a grand thing. As AI investment swells, businesses whose products involve any of these various technologies may see only upside in raising their hand and saying: “Artificial intelligence? We do that, too!”

They should pause a moment before doing so. Others—just as interested in AI as the business world is—are watching whose hands are up.

The Marketing Concern: Truth in Advertising Laws

For companies planning to market their AI skills and tools, one immediate consideration isn’t unique to AI: It’s those darn truth in advertising laws. In February the FTC reminded businesses that laws around product claims apply to today’s advanced tech just as they do to traditional goods and services. If an advertiser claims that its product is “AI-enabled,” then it needs to be; merely using AI in the product development process won’t cut it. Similarly, advertisers must substantiate performance claims and comparative claims about automated technologies, and the claims must accurately reflect the technologies’ limitations. The FTC is coupling this guidance with action. In August the agency sued Automators AI for falsely and misleadingly promising consumers financial gains from using AI.

Companies selling assets made by AI also need to consider whether the AI-generated nature of their content warrants additional disclosures. The FTC has been clear that deceptively peddling AI-generated lookalikes and sound-alikes as the work of real artists or musicians violates the law. At the state level, the California Bolstering Online Transparency Act (BOT) similarly prohibits using a bot to deceive people in a sales- or election-related context. And the FTC has even suggested that companies offering generative AI products may need to disclose to what extent their training data includes copyrighted or protected materials.

Despite the FTC’s flurry of new guidance on the topic, the agency acknowledges that “artificial intelligence” remains a nebulous phrase. And the varied and vague meanings of the term may still allow many companies to claim—lawfully—that they’re indeed AI-driven.

The Bigger Worry: A Growing Web of Laws and Regulations

With guidance from their lawyers, companies should be able to ensure that their marketing claims about AI are truthful and evidence-based. But false advertising laws shouldn’t be the only considerations for organizations deciding whether to tread into the new territory. Businesses rushing toward AI’s gold-laden hills should realize that lawmakers, regulators, and others are heading to the same place.

For these other parties, the definitional fuzziness of the term “AI” isn’t a marketing opportunity; it’s a broad target. Take federal agencies, for example. In April, weeks after Bill Gates proclaimed that “[t]he Age of AI has begun,” the FTC and three other agencies—the Consumer Financial Protection Bureau, the Justice Department’s Civil Rights Division, and the Equal Employment Opportunity Commission—jointly pledged to “vigorously use our collective authorities” to monitor the emerging tech. The four agencies see their authority as extending over not just the already-expansive category of AI but across all “automated systems”—a term they use “broadly” to encompass any “software and algorithmic processes . . . used to automate workflows and help people complete tasks or make decisions.” Bottom line: If you’re doing anything involving AI or adjacent to AI, you’re on these regulators’ radar.

President Biden’s October Executive Order (EO) on artificial intelligence will only heighten the regulatory buzzing. The lengthy EO directs several agencies to propose regulations and provide guidance on AI. For instance, the Secretary of Commerce must issue guidelines and best practices for developing “safe, secure, and trustworthy AI systems”—guidance that will likely affect how private industry designs and deploys AI.

Legislators, too, have big plans for the tech. Indeed, some laws regulating AI and similar technology are already on the books. For instance, the California Privacy Rights Act of 2020 (CPRA) charges the California Privacy Protection Agency with issuing regulations on how businesses employ automated decision-making technology. The California Age-Appropriate Design Code Act, passed in 2022, likewise limits how businesses use algorithms in services and products likely to be accessed by minors. East of the Golden State, Colorado adopted a regulation effective in November that seeks to prevent life insurers from using algorithms and predictive models to racially discriminate. And across the Atlantic, the European Union’s General Data Protection Regulation (GDPR) has for years let individuals opt out of certain automated decision-making.

Other legislative changes are coming. On Capitol Hill, a bipartisan group of lawmakers is reportedly developing a “sweeping framework” to regulate AI, including licensing and auditing requirements, rules around data safety and transparency, and liability provisions. State legislatures are a step ahead of Congress, with several AI-focused laws proposed or already passed. And the EU has drafted what it calls “the world’s first rules on AI.” Its AI Act would impose comprehensive requirements—involving security, training, data governance, and transparency—on any company using, developing, or marketing “AI systems” in the EU. And while the European Parliament acknowledges industry groups’ concerns that the term “AI systems” is too wide-reaching, the EU still intends to adopt a “broad definition” to cover both current and future developments in AI technology.

In response to these changes, a new category of “AI governance” professionals has emerged to shepherd companies through the coming wave of lawmaking and regulation. Such roles may soon become must-haves for companies looking to commercialize AI.

Whatever one’s views on the need for state intervention into this new technology—and expectations for how effective it will be—this governmental pencil-sharpening should make any prudent company think a moment before slapping an “AI” sign on its storefront. Just as a complex and ever-growing regulatory regime governs privacy and personal data, a thicket of statutes and regulations has been sprouting around AI and anything like it. Stepping into that thicket shouldn’t be a hurried marketing move—it needs to be a calculated decision.

To Brand as “AI” or Not to Brand as “AI”

Many commentators foresee AI as the next internet or mobile phone—a revolutionary technology destined to be the bedrock of any modern business. And it may well be. Yet those predictions are made in a temporary Eden of minimal regulation. They fail to consider the horde of governmental actors poised to shake up the landscape.

For some companies, the costs of operating within a complex regulatory regime—and the risks of noncompliance—may outweigh any potential benefits from repositioning their business around AI. These companies may deliberately choose not to get into the AI game.

With the AI frenzy still in full effect, that may sound far-fetched. But it shouldn’t. Regulatory avoidance—that is, structuring one’s business to lawfully stay outside the reach of particular laws and regulations—is common today. Many organizations choose not to operate in certain jurisdictions, for instance, or decline to process sensitive data like biometrics or minors’ personal information, so that they can limit their legal obligations and business exposure. While that may mean some lost commercial opportunities, the organizations don’t see the value as justifying the additional regulatory burden.


The AI wave may be inevitable, with every business forced to swim along or else sink. But companies are naive if they think the “AI” tag attracts nothing but customers—governmental bodies around the world see it, too. Before rebranding, then, smart organizations should consider not just the immediate marketing benefits of being “powered by AI” but also its long-term costs and risks.
