1) Vague Claims with No Technical Detail
If a tool promises “AI-powered everything” but can’t explain how it works (even at a high level), be cautious. Real builders can describe inputs, outputs, limits, and data needs.
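In practice, that means you should be able to fill in a simple technical fact sheet from any demo call. A minimal sketch in Python; the fields and example answers are hypothetical, not drawn from any real vendor:

```python
from dataclasses import dataclass

@dataclass
class VendorFactSheet:
    """Answers any credible AI vendor should give, at least at a high level."""
    inputs: str          # What goes in? (formats, size limits)
    outputs: str         # What comes out? (structure, guarantees)
    known_limits: str    # Where does it fail? (edge cases, scale)
    data_needs: str      # What data does it train on or retain?

# Fill this in during the call; fields the vendor can't answer are red flags.
sheet = VendorFactSheet(
    inputs="PDF invoices up to 20 pages",
    outputs="JSON line items with confidence scores",
    known_limits="",  # vendor had no answer -- note it
    data_needs="",    # "proprietary" with zero detail -- note it
)
unanswered = [name for name, value in vars(sheet).items() if not value]
print("Unanswered questions:", unanswered)
```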
One such case is: FTC vs. DoNotPay
The Parties:
Plaintiff: Federal Trade Commission (FTC)
Defendant: DoNotPay, Inc.
The Claim:
DoNotPay marketed itself as "the world's first robot lawyer" and promised to use its AI to provide legal services that could substitute for the expertise of a human lawyer. The company's marketing was full of vague, sweeping claims, such as:
- Allowing consumers to "sue for assault without a lawyer."
- Generating "perfectly valid legal documents in no time."
- Promising to "replace the $200-billion-dollar legal industry with artificial intelligence."
The Outcome:
The FTC's complaint alleged that these claims were false and unsubstantiated. The company had not conducted testing to prove that its AI chatbot's output was equivalent to that of a human lawyer, and it did not even employ or retain any attorneys. The FTC's action was part of a larger enforcement sweep called "Operation AI Comply," which targeted companies using AI hype to mislead consumers. DoNotPay settled the charges, agreeing to pay a fine and, crucially, to stop making claims about its ability to substitute for any professional service without evidence to back them up. This case is a perfect illustration of the government's crackdown on "AI washing," where a company makes broad, technically unsupported claims about its AI capabilities to attract customers. Whether consumers were unhappy with the services or the provider simply made vague claims is for you to decide, but "DoNotPay, Inc." ended up paying the fine.
2) Over-Polished Demos with No Live Trial
Highly produced videos can hide manual steps. Ask for a sandbox or freemium tier. If you can’t test it, you can’t trust it.
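Where a sandbox exists, a five-minute scripted test with your own data tells you more than any produced video. A minimal sketch, assuming a hypothetical REST endpoint and sandbox key (substitute whatever the vendor's docs actually specify):

```python
import time
import requests

API_URL = "https://api.example-vendor.com/v1/generate"  # hypothetical endpoint
API_KEY = "sandbox-key-from-vendor"                     # hypothetical sandbox key

# Use your own data, not the vendor's cherry-picked sample.
payload = {"input": "a real document/image/request from your own workflow"}

start = time.time()
resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
elapsed = time.time() - start

print(f"status={resp.status_code} latency={elapsed:.1f}s")
print(resp.text[:500])

# Red flags: multi-minute "processing" (a human in the loop?), identical
# output for different inputs, or no programmatic access offered at all.
```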
One such case is: SEC vs. Albert Saniger (Nate App)
The Parties:
Plaintiff: Securities and Exchange Commission (SEC) and U.S. Department of Justice (DOJ)
Defendant: Albert Saniger, founder and former CEO of Nate, a mobile shopping app.
The Claim:
Nate was marketed as a revolutionary AI-powered tool that allowed users to make purchases on any e-commerce site with "one click." The company created and distributed highly produced videos and demonstrations showing this seamless, automated process, which it claimed was powered by its "AI, machine learning, and neural networks." However, according to the SEC and DOJ, these videos and claims were fraudulent. The core functionality demonstrated in the polished videos was not automated by AI at all. Instead, it was performed manually by a team of human contract workers in the Philippines, who were instructed to process the orders by hand. The company allegedly fabricated success metrics and instructed engineers to be on standby during demonstrations to manually complete test orders, creating the false impression of a fully functional AI system.
The Outcome:
The SEC and DOJ filed parallel actions against Saniger, alleging securities fraud. The case is a prime example of regulators cracking down on companies that make false and misleading statements about their AI capabilities, especially when those statements are used to raise money from investors. The SEC's complaint specifically cited the deceptive product demonstrations and fabricated metrics, highlighting that the company was faking its AI capabilities.
3) “Lifetime Deal” on Heavy Compute
Inference costs money. If a tool offers unlimited generations for a one-time fee, the economics likely won’t hold—or quality will suffer later.
The FTC has been vocal about its concerns regarding "unlimited" promises in the AI space as part of a broader crackdown on deceptive marketing practices. The core claim is that these "unlimited" or "lifetime" deals are inherently misleading because the underlying technology is not free to run. AI models, especially large generative ones, require significant and continuous GPU compute, which carries a direct, per-use cost.
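You can sanity-check those economics yourself with back-of-envelope arithmetic. The numbers below are illustrative assumptions, not any vendor's actual costs:

```python
# Can a one-time fee cover ongoing inference costs? (all figures assumed)
lifetime_fee = 99.00        # one-time "lifetime deal" price
cost_per_generation = 0.02  # GPU/API cost per image or long completion
gens_per_month = 300        # a fairly light user

monthly_cost = cost_per_generation * gens_per_month  # $6.00/month
months_until_loss = lifetime_fee / monthly_cost      # ~16.5 months
print(f"Vendor is underwater after {months_until_loss:.1f} months "
      f"(~${monthly_cost:.2f}/month in compute per user)")
# Heavy users flip the deal underwater in weeks, which is why caps,
# throttling, or quality cuts tend to follow.
```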
Companies offering these deals often resort to:
Hidden Caps and Throttling:
They advertise "unlimited" but quietly implement usage caps, fair-use policies, or severe throttling after a certain amount of use. Users start with high-quality, fast service, but as usage grows, performance degrades or they hit a hard limit.
Degraded Quality:
To manage costs and keep the lights on, companies may downgrade the model used for "lifetime" customers or run it on lower-quality settings. The "unlimited" generation you get at the start may not be the quality you receive a year later.
Bait and Switch:
The company may later change the terms of the deal, forcing "lifetime" customers onto a new subscription model for continued access to the features they were promised.
The Outcome:
The FTC has not yet brought a landmark case that solely rests on the failure of an AI "lifetime deal," but it has issued stern warnings and has pursued cases that demonstrate the underlying principles. The agency's actions against companies like DoNotPay and the general guidance it provides to businesses underscore that any promise, whether "unlimited," "free," or "lifetime," must be truthful and substantiated. The FTC can and does take action for deceptive marketing under the FTC Act.
The business reality is that the financial model of a "lifetime deal" on a service with variable, high-cost computing is unsustainable. Many companies that have tried this have either gone out of business, been acquired and had their terms changed, or had to throttle their services to a point where the "deal" becomes useless to heavy users.
4) No Real Customers, Only Affiliates
Look for case studies and logos. If every mention is an affiliate post, it’s a signal the product hasn’t earned organic adoption yet.
While AI adds a new layer to this, the underlying legal issues have been the subject of numerous successful enforcement actions by the Federal Trade Commission (FTC). The FTC's primary focus is on whether a multi-level marketing (MLM) or affiliate program is a legitimate business or an illegal pyramid scheme. A key factor in that determination is whether revenue is derived from real, retail sales or primarily from the recruitment of new participants.
Case Example: FTC vs. Vemma
The Parties:
Plaintiff: Federal Trade Commission (FTC)
Defendant: Vemma Nutrition Company, and its founder B.K. Boreyko
The Claim:
Vemma marketed itself as a health and wellness company selling nutritional drinks. However, the FTC's complaint alleged that it was an illegal pyramid scheme. The "Vemma Affiliates" (the term they used for their distributors) were incentivized to recruit new members and buy large "affiliate packs" of products, which they were encouraged to sell to other affiliates rather than to outside customers. The company's compensation plan was structured so that affiliates earned money primarily from recruiting new members and their initial purchases, not from sales to the general public. The FTC argued that the vast majority of Vemma's revenue was generated from the required purchases of its affiliates, not from genuine, retail sales. The compensation was tied to a "recruitment-based pyramid" where new money from new members was used to pay commissions to those at the top of the pyramid. The company had no real, sustainable customer base outside of its own network of distributors.
The Outcome:
The FTC successfully shut down Vemma, and the court issued a temporary restraining order freezing its assets. The case was ultimately settled, and Vemma was ordered to pay a significant sum and was banned from operating as a pyramid scheme. The court’s findings were clear: Vemma's business model was a pyramid scheme because its participants were compensated primarily for recruitment, not for selling products to real customers.
This case is a textbook example. While Vemma wasn't an "AI" company, the fraudulent business model, relying on new "affiliates" instead of real customers, is directly analogous to what you might see in an AI context. An AI tool might be marketed with a lucrative "affiliate program" where the real product is the fee to join, and the only "customers" are the next wave of affiliates. The Vemma case provides the legal precedent for how regulators would pursue such a scheme.
5) Hallucinated Integrations & Partners
Verify claimed partnerships on both sides. If a vendor lists “Works with X/Y/Z,” check the other platforms’ pages for confirmation.
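Part of that verification is even scriptable: fetch each claimed partner's public integrations page and check whether the vendor actually appears. A rough sketch; the vendor name and URLs are hypothetical placeholders:

```python
import requests

VENDOR = "ExampleAITool"  # hypothetical vendor being vetted
# Claimed partners mapped to their public integration directories (hypothetical URLs)
partner_pages = {
    "PartnerX": "https://partnerx.example.com/integrations",
    "PartnerY": "https://partnery.example.com/marketplace",
}

for partner, url in partner_pages.items():
    try:
        html = requests.get(url, timeout=15).text
    except requests.RequestException as err:
        print(f"{partner}: could not fetch page ({err})")
        continue
    if VENDOR.lower() in html.lower():
        print(f"{partner}: vendor is listed")
    else:
        print(f"{partner}: NOT listed -- ask the vendor for written proof")
```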
While specific, named cases focusing on this exact issue for an AI company are still emerging, the legal principles are long-standing. The SEC and FTC have a history of pursuing companies that make fraudulent partnership claims to mislead investors and consumers. Here is a classic example of this type of fraud, even before the AI boom, that directly applies to our point:
One such case is: SEC vs. Lernout & Hauspie Speech Products N.V.
The Parties:
Plaintiff: U.S. Securities and Exchange Commission (SEC)
Defendant: Lernout & Hauspie (L&H)
The Claim:
L&H was a Belgian company that was a major player in the voice recognition and translation technology space in the late 1990s. To inflate its revenues and stock price, the company engaged in a massive accounting fraud. A key component of this fraud was fabricating partnerships and sham transactions. The company claimed to have numerous lucrative partnerships and software licensing deals with a wide array of companies, especially in Asia. These were often described as "major technology integrations." However, the SEC's investigation revealed that many of these claimed partnerships were with shell companies or entities that were either controlled by or had no real business relationship with L&H. The revenue from these "partnerships" was entirely fictitious. L&H created a network of "partners" that would "buy" software licenses from L&H, but the company would secretly lend the money to these partners to pay for the licenses, creating a circular flow of cash that generated no real revenue.
The Outcome:
The SEC filed charges against L&H and its executives for securities fraud. The company's stock collapsed, and it filed for bankruptcy. The executives were criminally prosecuted. This case is a perfect example of how a company can use "hallucinated integrations and partners" to deceive the market. Investors and auditors failed to verify the legitimacy of these claimed partnerships, relying on the company's word and the appearance of a fast-growing business. In an AI context, this practice is even more insidious. A company could claim to have a partnership with a major cloud provider (e.g., "powered by AWS") or a large-scale data provider, when in reality, they are only using a standard API with no formal partnership, or they are using a small, public data set. This case provides a strong legal precedent for why such claims are not just bad business, but a form of illegal fraud.
6) “Proprietary Model” That’s Just a Wrapper
Using APIs is fine, pretending you trained the model isn’t. Ask direct questions about architecture, vendors, rate limits, and safety layers.
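If you get API access, a few simple probes can surface a thin wrapper. A heuristic sketch, assuming a hypothetical endpoint; matching signatures proves nothing by itself, but it tells you which direct questions to press on:

```python
import requests

API_URL = "https://api.example-vendor.com/v1/chat"  # hypothetical "proprietary" endpoint

resp = requests.post(API_URL, json={"prompt": "ping"}, timeout=30)
body = resp.json() if resp.ok else {}

# Heuristic only: thin wrappers often pass upstream responses through unchanged.
hints = []
if {"choices", "usage"} <= set(body):
    hints.append("response schema matches OpenAI's API format")
if any(str(v).startswith(("gpt-", "claude-")) for v in body.values()):
    hints.append("an upstream model name leaks through in the response")

if hints:
    print("Possible wrapper:", "; ".join(hints))
else:
    print("No obvious upstream signature; keep pressing on architecture, "
          "vendors, rate limits, and safety layers.")
```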
Passing off an API wrapper as a proprietary, internally developed AI model is a form of fraud. It's a key area of scrutiny because claims of a proprietary model often directly influence a company's valuation and ability to raise capital.
The most direct and recent examples of this come from the SEC's crackdown on AI-related fraud.
One such case is: SEC vs. Delphia (USA) Inc. and Global Predictions Inc.
The Parties:
Plaintiff: U.S. Securities and Exchange Commission (SEC)
Defendants: Delphia (USA) Inc. and Global Predictions Inc., two investment advisory firms.
The Claim:
This is a landmark case because it's one of the first where the SEC has taken direct action against companies for making false and misleading claims about their use of AI. The core of the complaint against both firms was that they misrepresented their AI capabilities to clients and investors.
Delphia:
Claimed it used "a predictive algorithmic model" and "machine learning to analyze the collective data shared by its members to make intelligent investment decisions." In reality, the SEC found that the company did not actually use its clients' data in its investment process as claimed. It was using a more traditional, rules-based approach, and the "AI" claims were a form of marketing puffery with no substance.
Global Predictions:
Claimed to be the "first regulated AI financial advisor" and to use "Expert AI-driven forecasts." The SEC alleged that these claims were false and misleading. The company was not a proprietary AI financial advisor as claimed.
The Outcome:
Both firms settled with the SEC, paying civil penalties and agreeing to cease and desist from future violations. The significance of this case is that it established a clear precedent: the SEC will use existing securities laws to prosecute AI misrepresentation rather than wait for new, AI-specific regulations. The action sends a message that companies must be truthful and transparent about their technological claims, especially when those claims are used to attract investment. It also shows what happens when investors forget to ask direct questions about architecture and vendors: an investor in Delphia might have been led to believe the firm had a unique, proprietary model for data analysis, when in fact it was an empty claim. The SEC's enforcement action highlights that such vague, unsubstantiated claims are not just marketing fluff but can be a form of fraud.
Quick Due Diligence Checklist
- Test a live demo (or free plan) with your own data.
- Read recent changelogs and roadmap; is it alive?
- Search “[tool name] downtime” or “[tool name] pricing” on X/Reddit (a scriptable version of this check follows the list).
- Compare output quality vs. cost to existing leaders.
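The Reddit search in particular is easy to script. A minimal sketch using Reddit's public JSON search endpoint (unauthenticated requests are rate-limited, so treat this as a spot check; the tool name is a placeholder):

```python
import requests

tool = "ExampleAITool"  # hypothetical tool name -- substitute the one you're vetting

for topic in ("downtime", "pricing", "refund"):
    r = requests.get(
        "https://www.reddit.com/search.json",
        params={"q": f'"{tool}" {topic}', "limit": 5, "sort": "new"},
        headers={"User-Agent": "due-diligence-script/0.1"},  # Reddit rejects blank UAs
        timeout=15,
    )
    posts = r.json().get("data", {}).get("children", []) if r.ok else []
    print(f"\n{topic}: {len(posts)} recent posts")
    for p in posts:
        print(" -", p["data"]["title"][:80])
```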