The MEDVi case is a masterclass in what happens when AI-powered growth outpaces compliance — and a warning to every business leveraging AI in 2026.
Key Takeaways:
- Regulatory actions against AI-using businesses make all payment processors nervous — not just the ones serving the company in trouble. When one high-profile case hits the news, risk thresholds tighten for every merchant leveraging AI.
- There are two distinct AI compliance risks: selling an AI product or service (where processors now evaluate your safety guardrails, not just your business model) versus using AI to run or market your business (where you’re liable for every claim AI generates on your behalf).
- AI-generated marketing is held to the same legal standard as human-created marketing. The FTC, FDA, and state attorneys general don’t care whether a human or a chatbot wrote the deceptive ad copy.
- AI chatbots that hallucinate pricing, invent products, or make false promises create direct chargeback liability that can get your merchant account terminated.
- Compliance failures cascade into payment processing shutdowns. The typical path runs: regulatory scrutiny → processor review → fund holds → MATCH listing → business death.
- A dedicated merchant account built for your industry is the single best protection against sudden shutdowns by aggregators like Stripe, PayPal, and Square.
A two-person telehealth startup projects $1.8 billion in revenue. It built its website, marketing, customer service, and analytics dashboards almost entirely with AI tools. ChatGPT, Claude, MidJourney — the full stack. The New York Times profiles it as the future of business.
Six weeks earlier, the FDA had already sent a warning letter for misbranding violations on the company’s website.
That company is MEDVi. And while it operates in healthcare, the fallout doesn’t stay in healthcare.
Every time a high-profile regulatory action like this hits the news, it sends a signal to payment processors. The message is clear: businesses using AI carry a new category of risk. Stripe, PayPal, and Square start looking more closely at merchants leveraging AI. So do traditional payment processors and the acquiring banks behind them. Everyone in the payments chain starts asking harder questions.
That’s the part most founders miss. You don’t have to be doing anything wrong. But when regulators crack down on AI-generated deceptive marketing in one vertical, the ripple effect hits everyone. Compliance reviews get stricter. Risk thresholds get lower. Account freezes happen faster and with less explanation.
In this AI era, understanding the dos and don’ts of AI compliance isn’t just about staying on the right side of the FTC or the FDA. It’s about staying in your payment processor’s good graces. Because losing your ability to accept payments is the fastest way to kill a business, no matter how good your product is.
What Happened With MEDVi: The Short Version
MEDVi was founded by Matthew Gallagher in September 2024 with $20,000 in capital. He used AI tools across the entire operation to sell compounded GLP-1 weight-loss medications. Rather than building clinical infrastructure, the company partnered with CareValidate and OpenLoop Health. Those partners handled the doctors, pharmacies, and shipping.
The growth was undeniable. MEDVi reported $401 million in 2025 revenue. The New York Times independently verified that number.
But federal regulators found problems. In February 2026, the FDA sent MEDVi a warning letter for misbranding violations. The agency found that the company’s website falsely suggested MEDVi compounded the drugs it sold. On top of that, certain marketing claims implied FDA approval of the compounded products. No such approval existed.
The FDA warned that failure to fix the violations could lead to seizure or injunction. It also noted that the violations listed were not exhaustive.
Investigative reporting revealed even more issues. MEDVi’s advertising operation reportedly used AI-generated before-and-after photos and fabricated video content. It also ran social media profiles posing as doctors who didn’t appear to be real medical professionals.
As a result, the National Consumers League named MEDVi in a request for an FTC investigation. Their argument: the company’s marketing practices violated the FTC Act.
Meanwhile, the company’s AI customer service chatbot was making up drug prices. It also invented product lines that didn’t exist. The founder had to manually correct those errors.
Two Types of AI Risk — and Most Businesses Don’t Realize They Have Both
Here’s a distinction that most coverage of this story has missed. MEDVi is not an AI company. It doesn’t sell AI software or AI-powered tools. It’s a telehealth company that used AI tools — ChatGPT, Claude, MidJourney, Runway — to run its operations at a scale that would normally require dozens of employees.
That distinction matters. There are two fundamentally different categories of AI-related compliance risk. And they compound on top of each other.
Category A: You sell an AI product or service. Think SaaS marketing tools, AI content generators, or AI-powered analytics platforms. Your product is the AI. The compliance risks center on what your AI product does and whether the claims you make about it are truthful.
Category B: You use AI to run or market your business. This is MEDVi. It’s also every e-commerce brand, supplement company, coaching business, and service provider using ChatGPT for ad copy or MidJourney for creative assets. The compliance risks center on whether the output of those AI tools meets the same legal standards as if a human had created it.
Many businesses fall into both categories at the same time. That’s where things get especially dangerous. Let’s break down the specific risks.
Risk 1: Businesses That Sell AI Products and Services
If you’re building or selling an AI-powered tool, you face a unique set of compliance pressures. These pressures directly affect your ability to process payments. Here’s how.
The overpromising problem
The FTC’s “Operation AI Comply” initiative has already targeted businesses that exploit AI hype to mislead consumers. The agency’s position is clear: using AI tools to trick, mislead, or defraud people is illegal.
The FTC has also drawn a line between selling an AI product and using AI to create a product. Making unsubstantiated claims about your AI’s capabilities counts as deceptive advertising. Saying your tool “uses AI” when it doesn’t? That’s a violation. Overpromising what your AI can deliver? Also a violation.
You’re the merchant. If your product doesn’t work as advertised, the responsibility lands on you.
The guardrails problem (the one most AI founders miss)
Overpromising is only half the equation. Payment processors and acquiring banks now also scrutinize what your AI product cannot do. More specifically, they want to know what you’ve done to prevent misuse.
If you sell an AI tool, processors want to see guardrails (a minimal sketch of one follows the list below):
- It can’t generate content that causes harm to users
- It can’t produce output that breaks the law — copyright-infringing material, defamatory content, or regulated health and financial claims
- It can’t be weaponized for fraud, harassment, or illegal activity by your end users
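What does a guardrail look like in practice? Below is a minimal sketch of a pre-delivery output filter in Python. It is illustrative only: the pattern lists and the `moderate_output` helper are assumptions, not a specific processor requirement, and a production system would pair pattern checks with a trained moderation model rather than rely on regexes alone.

```python
import re
from dataclasses import dataclass

# Hypothetical pattern lists. A real system would layer a trained
# moderation model on top of simple pattern checks like these.
BLOCKED_PATTERNS = {
    "regulated_health_claim": [
        r"\bcures?\b", r"\bFDA[- ]approved\b", r"\bguaranteed weight loss\b",
    ],
    "phishing_indicator": [
        r"\bverify your (password|account)\b", r"\burgent: your account\b",
    ],
}

@dataclass
class ModerationResult:
    allowed: bool
    category: str | None = None
    matched: str | None = None

def moderate_output(text: str) -> ModerationResult:
    """Screen AI-generated text before it reaches an end user."""
    for category, patterns in BLOCKED_PATTERNS.items():
        for pattern in patterns:
            if re.search(pattern, text, re.IGNORECASE):
                return ModerationResult(False, category, pattern)
    return ModerationResult(True)

if __name__ == "__main__":
    draft = "Our supplement is FDA-approved and cures fatigue."
    result = moderate_output(draft)
    if not result.allowed:
        # Block delivery and log the hit for your audit trail.
        print(f"Blocked ({result.category}): matched {result.matched!r}")
```

What underwriters care about is less the specific patterns and more that a documented filtering step exists, runs on every output, and leaves an audit trail.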
This shift catches a lot of AI founders off guard. Your payment processor isn’t just evaluating your business model and chargeback ratio anymore. They’re evaluating your product’s safety architecture.
Think about it this way. Can your AI writing tool generate phishing emails? That’s your liability. Can your AI image generator produce deepfakes with no restrictions? That’s your liability too. Does your AI marketing platform let users create ads with unsubstantiated health claims? That liability flows uphill to you. Your processor knows it.
The absence of content moderation, output filtering, and usage policies is itself a red flag. It can get your merchant account denied or terminated.
The subscription billing multiplier
On top of all this, many AI SaaS products run on subscription billing. That alone puts you in a higher-risk category for chargebacks.
Now add regulatory scrutiny and product safety concerns on top of subscription churn disputes. The result? A fast track to getting shut down by Stripe or any other mainstream payment service provider.
Risk 2: Using AI to Create Marketing Materials (The MEDVi Problem)
This is the risk category that MEDVi makes impossible to ignore. And it applies to every business using AI for advertising — not just telehealth companies.
What MEDVi actually did
MEDVi didn’t get in trouble because it used AI. It got in trouble because its marketing was deceptive. The AI just made it easier to produce deception at scale.
The company reportedly used AI to generate fake before-and-after photos. It fabricated video content. It created social media profiles posing as doctors who didn’t appear to be real people.
In response, a bipartisan coalition of 35 state attorneys general wrote to Meta about AI-generated deceptive weight-loss advertising. They called it “particularly dangerous.”
The FDA warning letter identified a separate but related problem. The company’s website implied FDA approval of its compounded products. No such approval existed. Whether a human or an AI wrote that copy doesn’t matter to the FDA. The claims were false or misleading. The products were misbranded.
The rule that applies to everyone
Here’s the core lesson for any business using AI to produce marketing content. The compliance standard doesn’t change because AI wrote the copy, generated the image, or created the video.
Every claim still needs to be truthful and substantiated. Every image still needs to represent reality. Every endorsement still needs to come from a real person who actually uses your product.
These are the scenarios that can trigger regulatory action or payment processor shutdowns:
- AI-generated testimonials or reviews that don’t represent real customers
- AI-created before-and-after images for any product or service
- AI-fabricated endorsements or expert profiles
- Marketing copy where AI has made claims you haven’t verified
- Affiliate marketers using AI to create deceptive ads on your behalf
The affiliate blind spot
That last point deserves special attention. The FTC requires advertisers to have “reasonable programs” to oversee their affiliates. Health-related marketing requires even more supervision.
MEDVi reportedly ran around 30% of its advertising through affiliates. Some of those affiliates used what appeared to be AI-generated content. At one point, over 5,000 active ad campaigns mentioning MEDVi were live on Meta’s platform.
Here’s the bottom line: if your affiliates use AI to create misleading ads for your products, you face the consequences. The FTC comes after you. Your payment processor shuts you down. Your customers file chargebacks against you.
High-risk verticals get hit hardest
For businesses in the supplement, telehealth, or wellness space, the stakes are even higher.
Stripe explicitly prohibits nutraceuticals, telemedicine providers, and supplement companies from using its platform. So when your AI-generated marketing draws regulatory scrutiny in one of these verticals, your payment processing doesn’t just become difficult. It disappears overnight.
Risk 3: AI Chatbots Making Promises Your Business Can’t Keep
MEDVi’s AI chatbot made up drug prices and invented non-existent product lines. In a healthcare business, that could directly affect patient treatment decisions. The founder had to manually correct these errors. As volume scales, that becomes unmanageable.
However, this risk extends far beyond healthcare. Any business deploying an AI chatbot for customer service is letting an AI make representations on the company’s behalf. When that chatbot quotes a price, describes a product feature, or promises specific refund terms, those statements carry the same weight as if a sales rep said them.
From a payment processing perspective, AI chatbot errors lead directly to customer disputes. When a customer gets charged based on wrong information from your chatbot, they will dispute that charge. If your chargeback ratio climbs above 0.5%, processors like Stripe will shut you down within 24 hours.
This is a compliance risk that many businesses aren’t thinking about yet. Your AI customer service tool makes binding representations on behalf of your business. And you’re liable for every one of them.
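One practical defense is to never let the model state a price or product fact on its own. Instead, the chatbot layer answers from a source of truth and refuses when there is no match. Here is a minimal sketch of that grounding pattern; the `CATALOG` dict is a hypothetical stand-in for your real product database.

```python
# Minimal grounding layer: prices and product names come from a source
# of truth, never from the model's own text. CATALOG is a hypothetical
# stand-in for a real product database.
CATALOG = {
    "starter-plan": {"name": "Starter Plan", "price_usd": 49.00},
    "pro-plan": {"name": "Pro Plan", "price_usd": 149.00},
}

FALLBACK = ("I don't have verified pricing for that. "
            "Let me connect you with a team member.")

def answer_price_question(product_id: str) -> str:
    """Quote a price only if it exists in the catalog."""
    product = CATALOG.get(product_id)
    if product is None:
        return FALLBACK  # refuse rather than hallucinate
    return f"{product['name']} is ${product['price_usd']:.2f} per month."

print(answer_price_question("pro-plan"))    # verified quote
print(answer_price_question("ultra-plan"))  # unknown product, so fallback
```

The design choice is deliberate: a refusal costs you a support handoff, while an invented price costs you a chargeback.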
The Payment Processing Domino Effect
Here’s how compliance failures typically cascade — whether the gap was caused by AI-generated marketing, a chatbot gone rogue, or unsubstantiated claims about an AI product.
Stage 1: Regulatory scrutiny. The FDA, FTC, or state attorneys general identify violations in your marketing or business practices. For MEDVi, this started when the FDA reviewed the company’s website in December 2025.
Stage 2: Payment processor review. When regulatory actions go public — or when chargebacks spike — your payment processor takes notice. Platforms like Stripe, PayPal, and Square operate with a “better safe than sorry” mentality. They freeze or terminate accounts with minimal explanation.
Stage 3: Fund holds. Even after termination, processors typically hold your remaining funds for 90 to 180 days. If you’re processing significant volume, this can mean hundreds of thousands of dollars trapped in your account. Meanwhile, you have no way to accept new payments.
Stage 4: MATCH listing. In severe cases, you could end up on the MATCH list (Member Alert to Control High-Risk Merchants). This is effectively a payment industry blacklist. Getting approved for a new merchant account becomes nearly impossible.
Stage 5: Business death. Without the ability to accept payments, your business stops generating revenue. This is exactly why Stripe isn’t a merchant account — it’s a shared service that can disappear in seconds. Businesses in high-risk verticals need a dedicated merchant account as their primary payment infrastructure.
The Bigger Picture: AI Compliance in 2026
The MEDVi situation isn’t happening in a vacuum. The regulatory landscape is tightening for both businesses that sell AI products and businesses that use AI to operate. Payment processors are responding accordingly.
The FTC has made AI deception enforcement a priority. State attorneys general are coordinating enforcement actions across state lines. The EU AI Act’s high-risk system requirements take full effect in August 2026. And payment processors are updating their prohibited business lists to include categories with the highest AI-related compliance risks.
For digital marketing agencies and SaaS companies, the message is clear. The old playbook of “move fast and break things” now comes with real financial consequences. When your growth runs on AI but your compliance doesn’t, you’re building on a foundation that can collapse at any moment.
Meta’s advertising platform has also come under scrutiny for permitting deceptive AI-generated health ads. Reports suggest a significant portion of its ad revenue comes from questionable advertisers. On top of that, the platform has introduced new restrictions targeting health and beauty advertising. That creates additional compliance layers for businesses that depend on social media ads.
How to Protect Your Business
Whether you sell an AI product (Category A), use AI to market and operate your business (Category B), or both — here’s what you need to do. These steps will help you avoid the compliance-to-payment-shutdown pipeline.
Audit every AI-generated customer-facing output. This includes marketing copy, ad creatives, chatbot responses, and email campaigns. If AI created it and customers see it, a human needs to review it before it goes live. MEDVi’s chatbot was inventing prices and products. That kind of error is preventable with basic quality controls.
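One way to enforce that rule is a publish gate that refuses any AI-generated draft lacking a named human sign-off. Here is a minimal sketch; the field names are illustrative, not a standard schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Human-in-the-loop gate: nothing AI-generated ships until a named
# reviewer approves it. Field names are illustrative.
@dataclass
class Draft:
    content: str
    source: str                       # e.g. "chatbot" or "ad-copy-model"
    approved_by: str | None = None
    approved_at: datetime | None = None

    def approve(self, reviewer: str) -> None:
        self.approved_by = reviewer
        self.approved_at = datetime.now(timezone.utc)

def publish(draft: Draft) -> bool:
    """Refuse to publish anything without a human sign-off."""
    if draft.approved_by is None:
        print(f"Blocked: unreviewed {draft.source} content.")
        return False
    print(f"Published (reviewed by {draft.approved_by}).")
    return True

ad = Draft(content="New features ship this spring.", source="ad-copy-model")
publish(ad)                              # blocked: no reviewer yet
ad.approve("compliance@yourco.example")  # hypothetical reviewer
publish(ad)                              # passes the gate
```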
Substantiate every claim. The FTC requires advertising claims to be truthful and backed by evidence. This standard doesn’t change because AI wrote the copy. If your AI generates a claim you can’t prove, remove it. Review the FTC’s .com Disclosures guidance and apply it to everything your AI produces.
Monitor your affiliates. If you work with affiliate marketers, you need a documented oversight program. The FTC expects advertisers to supervise their affiliates — especially in health-related categories. Automated monitoring of affiliate ad content is table stakes in 2026. This is critical if those affiliates use AI to create their ads.
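As a sketch of what automated monitoring can look like: scan each live affiliate creative against a list of prohibited claim patterns and produce a takedown report. The `PROHIBITED_CLAIMS` list and the ad-feed format below are assumptions; in practice you would pull creatives from an ad library API or a monitoring vendor.

```python
import re

# Illustrative claim patterns; tune these to your vertical and your
# counsel's guidance. The ad-feed format is an assumption.
PROHIBITED_CLAIMS = [
    r"\bFDA[- ]approved\b",
    r"\bdoctor[- ]recommended\b",
    r"\bguaranteed results\b",
]

def scan_affiliate_ads(ads: list[dict]) -> list[dict]:
    """Return a takedown report: one entry per ad matching a
    prohibited claim pattern."""
    flags = []
    for ad in ads:
        for pattern in PROHIBITED_CLAIMS:
            if re.search(pattern, ad["copy"], re.IGNORECASE):
                flags.append({"affiliate": ad["affiliate"],
                              "ad_id": ad["ad_id"],
                              "pattern": pattern})
    return flags

live_ads = [
    {"affiliate": "partner-12", "ad_id": "a1",
     "copy": "Guaranteed results in 30 days!"},
    {"affiliate": "partner-07", "ad_id": "a2",
     "copy": "Try our new flavor."},
]
for flag in scan_affiliate_ads(live_ads):
    print(f"Takedown {flag['ad_id']} ({flag['affiliate']}): {flag['pattern']}")
```

A report like this also doubles as documentation of the "reasonable program" the FTC expects you to run.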
Know your regulatory landscape. If you sell supplements, health products, or anything that touches FDA compliance, learn the specific rules that apply to your marketing claims. The same goes for financial services, legal services, and any other regulated industry.
Get a dedicated merchant account. This is the single most important step for business continuity. Third-party processors like Stripe will cut you off the moment your risk profile changes. A dedicated high-risk merchant account gives you stability. The processor understands your industry. It has underwritten your specific business. And it won’t shut you down based on an algorithm’s assessment of aggregate risk.
Build payment redundancy. Even with a dedicated merchant account, smart businesses diversify their payment processing across multiple processors. If one relationship gets disrupted, you can keep operating while you resolve the issue.
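In code, redundancy is a routing decision. Here is a minimal failover sketch; `primary` and `backup` are placeholders for real gateway integrations, which would go through each provider's SDK and follow its retry guidelines.

```python
# Minimal failover routing. The processor callables are placeholders
# for real gateway integrations.
class ProcessorUnavailable(Exception):
    pass

def charge_with_failover(amount_cents: int, processors: list) -> str:
    """Try each processor in priority order, failing over on outages
    or account disruptions."""
    for processor in processors:
        try:
            return processor(amount_cents)
        except ProcessorUnavailable as exc:
            print(f"Failover: {exc}")
    raise RuntimeError("All payment routes exhausted")

def primary(amount_cents: int) -> str:
    raise ProcessorUnavailable("primary account frozen")  # simulated outage

def backup(amount_cents: int) -> str:
    return f"Charged {amount_cents} cents via backup processor"

print(charge_with_failover(4900, [primary, backup]))
```

One design note: fail over on outages and account disruptions, never on a legitimate card decline. Automatically retrying declined cards across processors tends to inflate exactly the dispute and fraud metrics you are trying to protect.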
Implement chargeback prevention. AI-related customer disputes are rising. Whether it’s a chatbot that gave wrong information or marketing that overpromised, chargebacks are the fastest path to losing your payment processing. Invest in fraud prevention and clear billing descriptors so customers recognize your charges.
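The ratio itself is simple arithmetic: disputes divided by transactions in the same window, checked against the 0.5% line cited earlier. A minimal monitoring sketch follows; your processor's exact formula and measurement window may differ.

```python
# Chargeback ratio = disputes in a window / transactions in that window.
# The 0.5% alert line mirrors the threshold cited earlier; processors
# vary in how they define the window.
ALERT_RATIO = 0.005

def chargeback_ratio(disputes: int, transactions: int) -> float:
    if transactions == 0:
        return 0.0
    return disputes / transactions

def check(disputes: int, transactions: int) -> None:
    ratio = chargeback_ratio(disputes, transactions)
    status = "ALERT" if ratio >= ALERT_RATIO else "ok"
    print(f"{disputes}/{transactions} = {ratio:.3%} [{status}]")

check(12, 4_000)  # 0.300%: ok, but worth watching the trend
check(25, 4_000)  # 0.625%: above the 0.5% line
```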
The Bottom Line
MEDVi’s story isn’t about one telehealth company making mistakes. It’s about what happens when AI-powered growth outpaces compliance. And it shows how fast the consequences compound when payment processors, regulators, and customers all lose confidence at the same time.
The company built a $401 million business with two employees and AI tools. But the FDA warning letter, the FTC investigation request, the deceptive advertising allegations, and the chatbot errors all point to the same problem. Compliance infrastructure didn’t scale with revenue.
Every business using AI faces some version of this challenge. The question isn’t whether you’re using AI. It’s whether your compliance, your marketing oversight, and your payment infrastructure can handle the scrutiny that AI-powered growth attracts.
If your business is in a high-risk industry, uses AI for marketing or customer service, or is growing faster than your compliance team can manage, you need a payment processing partner who understands these risks. Contact DirectPayNet to discuss how a dedicated merchant account can protect your business from the compliance domino effect.
DirectPayNet specializes in high-risk merchant accounts for businesses in regulated industries. From chargeback prevention to compliance consulting, we help merchants build payment infrastructure that supports growth without the risk of sudden shutdowns.