The AI MFA Arms Race: Why the Placement Quality Problem Is Getting Harder
Made-for-advertising sites existed before ChatGPT. Content farms, link farms, scraped news aggregators — the model of building cheap web properties to attract programmatic ad spend has been around since display advertising started. What changed in 2023 and 2024 was the cost structure.
Generating a convincing-looking article used to require a human writer, even a cheap one. At $5–10 per article, running a content farm meant real operational costs. Large language models collapsed that cost to fractions of a cent per article. The result was a step-change in the scale and sophistication of MFA site production.
What Changed
Volume. The number of new MFA domains entering the ecosystem accelerated sharply after mid-2023. Ad fraud detection firm Pixalate reported a 23% increase in invalid traffic in programmatic display between Q2 2023 and Q2 2024. This isn't coincidental.
Quality. Old MFA content was obviously bad. Grammatical errors, incoherent sentences, keyword stuffing so aggressive it read like nonsense. Advertisers and their tools could catch it with basic text analysis. AI-generated content reads cleanly. It passes grammar checks. It sounds authoritative. Brand safety classifiers trained on the old signals miss it.
Speed. A fraudster who previously needed weeks to stand up a convincing-looking content site can now do it in hours. Google's enforcement — which requires human review, legal processes, and policy adjudication — operates on a timeline that can't match automated site creation.
The Detection Gap
Brand safety technology is playing catch-up. Most contextual classification systems were trained to identify problematic content — hate speech, misinformation, adult material. They weren't designed to detect fake publishers — sites that look legitimate but exist only to collect ad impressions.
The signals that used to work are weaker now:
- Content quality signals — AI content passes them
- Domain age — fraudsters have learned to buy aged domains
- Traffic patterns — sophisticated operations use real paid traffic
- Author bylines — AI can generate fake author personas with headshots and bios
What still works: behavioral signals (extremely high bounce rates, very low session duration, anomalous time-on-site relative to the content category) and structural analysis (ad density ratios, internal link depth, absence of real editorial infrastructure).
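To make the distinction concrete, the surviving signals can be combined into a simple suspicion score. Everything below — the field names, thresholds, and weights — is an illustrative sketch, not a production classifier:

```python
from dataclasses import dataclass

@dataclass
class PlacementStats:
    bounce_rate: float             # fraction of single-page sessions, 0-1
    avg_session_seconds: float     # mean time on site per session
    ads_per_page: float            # visible ad units per article page
    words_per_page: float          # article length
    max_internal_link_depth: int   # deepest page reachable from the homepage

def mfa_suspicion_score(p: PlacementStats) -> float:
    """Return 0-1; higher is more MFA-like. Thresholds are hypothetical."""
    score = 0.0
    if p.bounce_rate > 0.85:
        score += 0.3   # behavioral: almost no one views a second page
    if p.avg_session_seconds < 15:
        score += 0.2   # behavioral: time-on-site far below content-category norms
    ad_density = p.ads_per_page / max(p.words_per_page, 1)
    if ad_density > 0.01:
        score += 0.3   # structural: more than roughly one ad per 100 words
    if p.max_internal_link_depth <= 2:
        score += 0.2   # structural: thin site, no real editorial tree
    return score

suspect = PlacementStats(bounce_rate=0.92, avg_session_seconds=9.0,
                         ads_per_page=12, words_per_page=600,
                         max_internal_link_depth=1)
print(mfa_suspicion_score(suspect))
```

Note that none of these checks look at the text itself — which is the point, since the text is now the least reliable signal.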
The Economic Reality
The incentive to run MFA sites is strong and unlikely to go away. A moderately successful MFA site can generate $10,000–50,000/month in ad revenue with near-zero ongoing operating costs. That's an extremely attractive return on a few days of setup work.
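A back-of-envelope calculation shows why. All of the inputs here are assumptions for illustration — real CPMs, ad loads, and traffic-arbitrage costs vary widely — but they land in the revenue range described above:

```python
# Hypothetical MFA site economics (illustrative numbers only).
monthly_pageviews = 2_000_000
ads_per_pageview = 8            # MFA pages are heavily ad-stuffed
cpm_dollars = 1.50              # low-quality programmatic display CPM
paid_traffic_cost = 12_000      # arbitrage: cheap clicks bought to feed the site

ad_revenue = monthly_pageviews * ads_per_pageview / 1000 * cpm_dollars
profit = ad_revenue - paid_traffic_cost
print(f"revenue ${ad_revenue:,.0f}/mo, profit ${profit:,.0f}/mo")
# prints "revenue $24,000/mo, profit $12,000/mo"
```

Even a mediocre CPM turns into meaningful profit when the ad load is high and the content costs nothing to produce.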
The ad networks' incentive to aggressively clean up the ecosystem is weaker than it might appear. Every impression removed is revenue lost. Policy enforcement happens, but at a pace that preserves the majority of inventory.
What This Means for Your Campaigns
The arms race doesn't resolve in your favor automatically. Display campaigns running on broad targeting or auto-managed placements will continue to accumulate MFA spend at roughly the same rate unless you actively manage it.
The practical implication: placement audits need to happen more frequently than they used to, and the signals you're looking for have shifted. An unfamiliar domain with clean-looking content is no longer safe to assume legitimate. Check the structural signals — ad density, navigation depth, traffic sources, social presence — not just the content itself.
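An audit along these lines can be partly mechanized: flag any domain not on a known-good list whose structural signals look MFA-like, then review the flagged domains by hand. The placement-report rows, field names, and thresholds below are hypothetical:

```python
# Sketch of a recurring placement audit. Known-good domains pass through;
# unfamiliar domains are screened on structural signals, not content.
known_good = {"example-news.com"}

placements = [
    {"domain": "example-news.com",  "ads_per_page": 4,  "nav_depth": 6, "spend": 310.0},
    {"domain": "fresh-mfa-site.net", "ads_per_page": 14, "nav_depth": 1, "spend": 95.0},
]

def needs_review(row: dict) -> bool:
    if row["domain"] in known_good:
        return False
    # Structural signals: ad density and navigation depth (thresholds illustrative).
    return row["ads_per_page"] > 8 or row["nav_depth"] <= 2

exclusion_candidates = [r["domain"] for r in placements if needs_review(r)]
print(exclusion_candidates)  # prints "['fresh-mfa-site.net']"
```

The output is a candidate list for the exclusion list, not an automatic block — the point is to make the manual review queue small enough to actually work through on a regular cadence.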
The technology behind MFA sites got better. The countermeasure is active exclusion list management, not passive reliance on platform-level protections.