
AI Claims, Like Any Other Ad, Need to Be Substantiated—FTC Warns


As artificial intelligence becomes a more significant part of our lives, it’s important that companies that sell AI-powered products remain accountable for the claims they make.

The Federal Trade Commission (FTC) is making sure its voice is heard on the issue, warning companies that any claims made about the benefits and effectiveness of their products must be supported with reliable evidence. Consumers need to be able to trust these claims without having to worry whether or not what they’re being told about a product has been properly validated.

Michael Atleson of the FTC’s Division of Advertising Practices wants AI developers, marketers, and advertisers to understand that “AI” is a hot marketing term, and it should not be exploited, overused, or abused.

With the popularity of generative tools like OpenAI’s ChatGPT and Microsoft’s Bing chat, AI hype is at an all-time high, fueled by our decades-long obsession with science fiction and the possibilities of AI.

We all know what happens when a word gets hyped: marketers throw it around to catch wandering eyes. Be careful not to make unsubstantiated claims, or you could face more than just fines.

What are AI claims?

AI claims are any representations made by artificial intelligence (AI) developers and marketers about their AI-powered products and services. These claims may cover benefits, performance indicators, or other facts related to the product.

Consumers must be able to trust these claims without having doubts as to their validity, which is why it’s important that companies using AI technology remain accountable for them. The Federal Trade Commission (FTC) recognizes this need and has issued warnings insisting that developers and marketers back up such assertions with reliable evidence.

This not only gives consumers confidence in the product they decide to purchase but also reinforces consumer protection laws around marketing practices more generally.

In addition, the FTC wants AI makers and promoters to clearly label certain features as “automated” or “assisted by automation” so customers understand how those features may differ from human labor in accuracy, reliability, speed, or judgment.

In doing so, customers can set realistic expectations when engaging with an automated feature, allowing them to make better-informed purchasing decisions based on trustworthy claims before entering into a contractual relationship for AI-powered goods or services.

Why do AI claims need to be substantiated?

AI has revolutionized the way brands reach their target audience, due to its ability to personalize messages and optimize advertising. Technology companies are increasingly using AI technology as a major selling point for their products, making claims about its supremacy when it comes to performance, accuracy, or dependability.

However, those claims need reliable evidence so that consumers can trust what they’re being told about the product. This is why the FTC is taking an active role in ensuring that AI-related claims are substantiated before they are made.

The FTC enforces truth-in-advertising rules that hold marketers and developers of AI technology responsible for any marketing or product representations (including advertising language such as “the most accurate” or “the first-ever”) claiming superiority over competitors.

As such, staged or simulated demonstrations cannot serve as adequate proof of success. Claims must be backed up with solid data from testing and consumer reviews, something no trustworthy business should have trouble providing if it believes in its product’s efficacy.

Questions to Ask as a Marketer

Here’s some food for thought about the AI claims you make about your product:

Do you claim the AI product you sell can do something it, in its current state, cannot?

Of course, every product has a roadmap. But you cannot make AI claims about the future of your product and the capabilities you hope it will have one day. You must make claims about what it can do at this moment. Nothing more.

Your claims also cannot hold only for a certain group of users or only under specific conditions; like any other ad claim, they have to apply to your audience as a whole. Results should be easily replicated by new users from any demographic.

Do you claim your product is better than a similar product that is not powered by AI?

It is standard practice to claim that your product’s features are superior to those of non-AI products. However, you must be able to prove that a significant difference in performance exists between your AI and the other products.

If that cannot be proven, then you should not make such unsupported claims, as they can easily lead to consumer deception and FTC enforcement action.

Does your product actually use AI?

There is a difference between selling an AI product and using AI to create a product.

The FTC has the power to check your backend and see what, exactly, is powering your product. So, if you make AI claims about using it to power your product when it doesn’t OR you make statements that can mislead consumers into believing your product has AI without saying so explicitly, the FTC will know.

Using an AI tool in the development process of your product DOES NOT mean your product includes AI. You cannot exaggerate the role AI plays in your product.

Are you aware of the risks?

You are the seller of the product, the merchant. You are the one who takes responsibility if your product does not work as advertised. Not the developers, not the marketing team, not the consumer—you.

These are reasonably foreseeable risks you must assume responsibility for at all times, whether you run a startup or a seven-figure business. The risk is the same for everyone.

What are the challenges to validating AI claims?

One of the key challenges to validating AI claims is determining how best to appraise the data these systems generate. Because artificial intelligence involves complex algorithms and processes, results can be difficult to validate: the systems don’t always provide understandable explanations for their decisions or conclusions.

Furthermore, some AI technologies are so intricate that companies may not fully understand how they function or why certain decisions were made. This lack of knowledge about a product makes claims about its efficacy hard for consumers to believe.

Additionally, traditional regulatory approaches are ill-suited to assessing AI performance given variables like dynamic operational parameters and the continuously changing datasets associated with natural language processing systems. Existing regulatory processes typically require a great deal of time and resources, which can delay enforcement action when false claims are encountered.

The FTC will likely need to adjust its rules and guidance to properly monitor promises about artificially intelligent products, along with other claims marketers make about new technologies. Tools must be developed that can provide accurate assessments while accounting for relevant context, such as user input types and the outcomes observed across different models.

Examples of Substantiated AI Claims

AI claims are no different from any other advertising and need to meet the same standards of credibility. Companies using AI technology need to ensure that the performance outcomes they promise consumers can be backed up by reliable evidence. If an advertisement does not accurately convey what a product can do, and the product fails to deliver on those promises, the FTC will view it as deceptive advertising.

Here are five examples of AI claims that can be substantiated:

  1. An AI-powered customer service chatbot can resolve 90% of customer inquiries within 5 minutes.
  2. An AI-powered translation tool can accurately translate complex legal documents from English to Spanish with an average accuracy rate of 95%.
  3. An AI-powered medical diagnosis system can accurately detect early-stage lung cancer with a sensitivity of 92% and a specificity of 95%.
  4. An AI-powered financial forecasting tool can accurately predict stock market trends with a margin of error of less than 1%.
  5. An AI-powered image recognition software can accurately identify objects in photos with an accuracy rate of 99%.

While these claims can be spruced up, each is specific and measurable, and there is evidence to support the accuracy of the claim (which you would link to).
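To see what a verifiable metric actually means, consider claim #3 above: “sensitivity” and “specificity” are standard diagnostic measures computed directly from documented test results. The sketch below (with illustrative counts invented for this example, not real study data) shows how those two figures would be derived from a validation study’s confusion matrix:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts.

    tp: true positives  (cancers correctly detected)
    fn: false negatives (cancers missed)
    tn: true negatives  (healthy patients correctly cleared)
    fp: false positives (healthy patients incorrectly flagged)
    """
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity, specificity

# Hypothetical counts chosen to match the figures in claim #3:
sens, spec = sensitivity_specificity(tp=92, fn=8, tn=95, fp=5)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
# sensitivity=92%, specificity=95%
```

A marketer quoting “92% sensitivity” should be able to point to the underlying counts like these from a real, documented study; that is the difference between a substantiated metric and an invented one.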

How Companies Can Ensure Compliance with FTC Guidelines for AI Claims

To ensure compliance with FTC guidelines for AI claims, companies need to examine the accuracy of their marketing material and any insights they draw from research. Specifically, marketers should carefully evaluate all empirical data used for comparisons and whether the AI technology can actually deliver on its stated goals. Marketers must never claim that results from a particular AI are guaranteed, and must never make claims that are not based on solid scientific evidence.

Another important aspect of compliance is transparency about the algorithms and methods that power AI products, as well as openly discussing any issues customers might encounter using them. This also includes debunking popular myths about artificial intelligence, such as the suggestion that “more raw data automatically leads to better performance,” which ignores issues like overfitting on excessive training data.

If changes have been made after launch which could potentially lead to differences in predicted outcomes, then these updates should always be communicated.

One More Major Risk of Selling AI Products Online

AI is exciting and new. We’re in a phase of figuring out how tech like this should be regulated. While many factors may still remain up in the air, substantiation will always be part of the requirements for any ad, no matter the industry.

Selling any digital product, including AI-powered products, is high risk. As you’ve read above, the FTC has you in its sights. But it’s not just the FTC, it’s payment processors as well. What’s the point in selling a million-dollar-idea product if you can’t accept the payment?

DirectPayNet will help. We hook you up with a high-risk merchant account linked with a payment processor that accepts your business model. Get in touch today to secure your business.

About the author

As President of DirectPayNet, I make it my mission to help merchants find the best payment solutions for their online business, especially if they are categorized as high-risk merchants. I help set up localized payment methods and have tons of other tricks to increase sales! Prior to starting DirectPayNet, I was a Director at MANSEF Inc. (now known as MindGeek), where I led a team dedicated to managing merchant accounts for hundreds of product lines as well as customer service and secondary revenue sources. I am an avid traveler, conference speaker, and love to attend any event that allows me to learn about technology. I am fascinated by anything related to digital currency, especially Bitcoin and the blockchain.