March 31, 2026

Smart Bidding Got Smarter. Your Placement List Didn't.

Google's automated bidding has improved significantly over the past few years. The models are better, the signal processing is faster, and for high-volume accounts with clear conversion signals, Smart Bidding genuinely outperforms manual bidding for most practitioners.

Here's the problem: automated bidding doesn't clean your placement list. It optimizes on top of whatever inventory it's given.

What Smart Bidding Actually Does

Smart Bidding predicts conversion likelihood for each auction based on user signals: device, location, time of day, recent search behavior, audience list membership. It raises bids when conversion probability is high and lowers them when it's low.
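The shape of that mechanism can be sketched as a toy model. This is not Google's actual model; the signal names, weights, and baseline rate below are illustrative assumptions, chosen only to show how per-auction signals scale a bid up or down:

```python
# Toy sketch of value-based bidding. NOT Google's real model: the baseline
# rate, signal names, and multipliers here are invented for illustration.

def predicted_cvr(signals: dict) -> float:
    """Crude conversion-probability estimate from per-auction user signals."""
    base = 0.02  # assumed baseline conversion rate
    multipliers = {
        ("device", "desktop"): 1.3,        # hypothetical signal weights
        ("in_market_audience", True): 1.8,
        ("work_hours", True): 1.2,
    }
    p = base
    for (signal, value), mult in multipliers.items():
        if signals.get(signal) == value:
            p *= mult
    return min(p, 1.0)

def bid(signals: dict, target_cpa: float = 40.0) -> float:
    """Bid up to predicted value: p(convert) x what a conversion is worth.
    Note what is absent: nothing here evaluates the publisher. A real user
    with strong signals on an MFA page scores the same as one on a real site.
    """
    return round(predicted_cvr(signals) * target_cpa, 2)

high_intent = {"device": "desktop", "in_market_audience": True, "work_hours": True}
low_intent = {"device": "mobile", "in_market_audience": False, "work_hours": False}
print(bid(high_intent))  # 2.25
print(bid(low_intent))   # 0.8
```

The point of the sketch is the omission: every input is a user signal, so publisher quality never enters the bid.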

This is powerful when the signal is clean. A user who has been researching your product category, is browsing on a desktop during work hours, and is on a known high-intent audience list — Smart Bidding can identify that person and bid appropriately.

What it doesn't do: evaluate the quality of the publisher. Smart Bidding sees user signals. It doesn't independently assess whether the site the ad will appear on is a real publisher or an MFA farm.

The Dirty Inventory Problem

MFA sites have traffic patterns that look legitimate. The users are real. They navigate to the site from a content recommendation network or a search query. Smart Bidding sees a real human with reasonable behavioral signals and bids accordingly.

The commercial failure of that impression — the fact that users on these sites have zero intent, bounce immediately, and never convert — shows up later in aggregate performance data. By then, hundreds of auctions have already been won.

Smart Bidding will eventually adjust downward on domains where post-click performance is consistently bad. But the learning is slow, and new MFA domains enter the ecosystem constantly. The system never reaches a stable clean state; it's perpetually catching up.

The Compounding Effect

Here's what makes this worse: the better your Smart Bidding optimization, the more budget it concentrates. A well-optimized campaign on a Target ROAS strategy will push spend toward placements it believes are performing. If some of those placements are MFA sites with inflated engagement signals — high session counts, decent click-through from the content recommendation network — Smart Bidding may actively favor them.

You end up with an efficiently optimized campaign leaking budget to the exact placements it shouldn't be on.

The Fix Is Manual

Automated systems handle what they're designed for. Placement quality management isn't what Smart Bidding is designed for — it's what you're designed for.

The intervention is straightforward: pull your placement report, identify placements with high spend and no conversions over a meaningful window (60–90 days minimum), and add them to an exclusion list. Review the list monthly.
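That monthly pass can be scripted. A minimal sketch, assuming the placement report has been exported as a CSV with columns named `placement`, `cost`, and `conversions` (your export's actual column names may differ, and the $50 spend floor is an assumed threshold, not a recommendation):

```python
# Minimal exclusion-pass sketch. Assumes a placement report exported as CSV
# with "placement", "cost", and "conversions" columns; run it against a
# 60-90 day window. Column names and the spend floor are assumptions.
import csv

SPEND_FLOOR = 50.0  # assumed: enough spend to judge the placement

def find_exclusions(report_path: str) -> list[str]:
    """Return placements with meaningful spend and zero conversions."""
    exclusions = []
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            spend = float(row["cost"])
            conversions = float(row["conversions"])
            # High spend, no conversions: candidate for the exclusion list.
            if spend >= SPEND_FLOOR and conversions == 0:
                exclusions.append(row["placement"])
    return exclusions
```

The output is a candidate list, not an automatic action: eyeball it before uploading, since a legitimate publisher can have a bad quarter.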

This isn't opposed to Smart Bidding — it feeds it. Removing junk inventory from the pool gives the algorithm a cleaner distribution to optimize over. You see better signal quality, more conversion data from real placements, and faster learning.

On the Question of AI Targeting Generally

AI-powered audience targeting is a different question from bid optimization, and worth separating.

Automated audience expansion and similar audience targeting can be valuable. They can also expose you to placements you'd never have chosen manually. When Google expands your targeting to reach users "similar to your converters," those users land on pages across the entire Display network — including the long tail of MFA sites.

The answer isn't to avoid AI targeting. It's to run it with a maintained exclusion list as a guardrail. The targeting finds the users; the exclusion list ensures those users are on placements worth paying for.

AI improves the demand side of the equation. The supply side — what inventory you're actually buying — remains your responsibility.