You can’t optimize what you can’t measure.
But the deeper problem is that most companies think they’re measuring when they’re actually guessing. The dashboard says ROAS is 4x. The ad platform says conversions are up. But revenue isn’t growing the way those numbers suggest it should.
Something is wrong, but nobody knows what.
This is why we start every engagement with discovery. Not an audit that produces a report destined to sit on a shelf. A structured process that establishes a baseline - an honest answer to “what is the state of things today” - that becomes the reference point for every decision that follows.
Without a baseline, you’re optimizing in the dark.
You make a change. Results go up. Was it the change, or was it seasonality? You scale a channel. Efficiency drops. Was it saturation, or was the efficiency never real to begin with?
A baseline gives you something to measure against. It separates signal from noise. It lets you actually learn from your experiments instead of just reacting to whatever number moved most recently.
The companies that grow efficiently aren’t the ones running the most experiments. They’re the ones who can actually interpret the results.
Our discovery process moves through five layers, each building on the one before it.
We start with strategic alignment. Before you can measure anything, you need to know what matters. This sounds obvious, but it’s rarely clear. The CEO cares about revenue. The growth lead is measured on new customer acquisition. The CFO wants to see contribution margin. The ad buyer is optimizing for in-platform ROAS. These aren’t the same thing. And when different people are optimizing for different metrics, you get a marketing program that’s pulling in multiple directions.
So we establish the north star metric - the single number that best represents whether the business is winning. Then we identify the indicator metrics that predict movement in the north star. Everything else is diagnostic at best, vanity at worst.
Once we know what we’re supposed to be measuring, we ask whether we’re actually measuring it correctly. This is where most marketing infrastructure falls apart. Events fire multiple times. Conversion pixels are misconfigured. The CRM doesn’t match the ad platform doesn’t match Google Analytics. Server-side tracking was implemented but never validated. Someone changed the purchase event definition six months ago and didn’t tell anyone.
The question we’re trying to answer: can we account for the exact source of every dollar generated? If we can’t, then every optimization decision downstream is built on a foundation of bad data.
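To make that concrete: the fastest way to answer it is to reconcile what the platform claims against what your own backend recorded. A minimal sketch in Python, assuming two hypothetical exports that share an `order_id` (file names and columns here are illustrative, not a standard schema):

```python
# Reconcile platform-reported conversions against backend orders.
# File names and columns are illustrative assumptions, not a standard schema.
import pandas as pd

platform = pd.read_csv("platform_conversions.csv")  # order_id, platform_revenue
backend = pd.read_csv("backend_orders.csv")         # order_id, revenue

merged = platform.merge(backend, on="order_id", how="outer", indicator=True)

# Conversions the platform claims but the backend never saw (misfires, test events)
phantom = merged[merged["_merge"] == "left_only"]
# Real orders the platform never tracked (broken pixels, lost attribution)
untracked = merged[merged["_merge"] == "right_only"]
# The same order reported more than once (double-firing events)
dupes = platform[platform.duplicated("order_id", keep=False)]

print(f"platform-only conversions: {len(phantom)}")
print(f"untracked orders:          {len(untracked)}")
print(f"double-counted orders:     {dupes['order_id'].nunique()}")
```

If all three numbers are zero, you can account for every dollar. They almost never are.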
From there, we look at infrastructure efficiency. Most marketing stacks grow by accretion. Someone added a tool three years ago. Someone else added a different tool that does 80% of the same thing. There’s a CDP that’s half-implemented, a tag manager with 200 tags (half of which are orphaned), and three different analytics platforms that all show different numbers.
Tool bloat isn’t just expensive. It actively degrades data quality. More tools means more places for data to get lost, duplicated, or transformed incorrectly. We map the stack, identify redundancy, and assess whether the tools that exist are actually serving the measurement needs we’ve established.
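The orphaned-tag problem, at least, is checkable. A rough sketch, assuming a GTM-style container export (the field names are assumptions about that export format, not a documented API - adjust to whatever your tag manager actually produces):

```python
# Flag tags that can never fire in a tag manager container export.
# Assumes a GTM-style JSON structure; field names are assumptions.
import json

with open("container_export.json") as f:
    container = json.load(f)["containerVersion"]

trigger_ids = {t["triggerId"] for t in container.get("trigger", [])}

for tag in container.get("tag", []):
    firing = set(tag.get("firingTriggerId", []))
    if not firing:
        print(f"no firing trigger: {tag.get('name')}")
    elif not firing & trigger_ids:
        # Built-in triggers may not appear in the export, so treat
        # these as leads to investigate, not verdicts.
        print(f"triggers not found in container: {tag.get('name')}")
```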
Then we look at acquisition performance - what’s actually happening in the ad accounts. How are campaigns structured? What’s the creative testing cadence? How is budget allocated across channels and objectives? What does performance look like when you cut it by audience, placement, and creative?
Most importantly: where is revenue actually coming from? Not where the platforms say it’s coming from - where it’s actually coming from when you trace dollars through your own systems.
This is where we typically find the first major “aha” moment. Most companies discover that the vast majority of their revenue comes from a small number of sources. That’s not necessarily bad, but it’s important to know. You can’t ride concentration forever. Understanding where revenue actually originates tells you where you’re exposed and where your next wins might come from.
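Concentration is easy to quantify once the tracing is done. A sketch, assuming a hypothetical orders table where each row carries its traced source:

```python
# What share of revenue comes from the top few sources?
# The file and columns are illustrative assumptions.
import pandas as pd

orders = pd.read_csv("orders_with_source.csv")  # order_id, source, revenue

share = (
    orders.groupby("source")["revenue"].sum()
    .sort_values(ascending=False)
)
share = share / share.sum()

print(share.head(5))  # top five sources by revenue share
print(f"top 3 sources: {share.head(3).sum():.0%} of revenue")
```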
The final layer is where discovery diverges from a standard PPC audit or analytics review.
We call it activation and retention reality. Everything before this tells you what’s happening at the top of the funnel. This layer tells you what happens after.
Which acquisition sources produce customers who actually activate? Which produce customers who retain? Which channels look efficient on a CAC basis but bring in users who churn within 30 days? Which channels look expensive but deliver customers with 3x the LTV?
This is the layer that most engagements skip entirely. PPC specialists stay in the ad accounts. Retention specialists work on retention without connecting it back to acquisition source. Nobody sits in the middle.
But the middle is where the insight lives.
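Mechanically, sitting in the middle is mostly a join: acquisition records on one side, downstream behavior on the other. A sketch, assuming two hypothetical tables keyed by `customer_id` (every name here is illustrative):

```python
# Join acquisition source to downstream behavior, then compare channels
# on what actually matters. All names are illustrative assumptions.
import pandas as pd

acq = pd.read_csv("acquisitions.csv")  # customer_id, source, cac
beh = pd.read_csv("behavior.csv")      # customer_id, activated, ltv_90d

joined = acq.merge(beh, on="customer_id", how="left")
joined["activated"] = joined["activated"].fillna(False).astype(bool)
joined["ltv_90d"] = joined["ltv_90d"].fillna(0.0)

summary = joined.groupby("source").agg(
    customers=("customer_id", "count"),
    avg_cac=("cac", "mean"),
    activation_rate=("activated", "mean"),
    avg_ltv_90d=("ltv_90d", "mean"),
)
# A channel can look cheap on CAC and still lose on LTV per dollar spent.
summary["ltv_per_cac"] = summary["avg_ltv_90d"] / summary["avg_cac"]
print(summary.sort_values("ltv_per_cac", ascending=False))
```

Nothing exotic: a left join and a groupby. What’s rare is having the acquisition source and the behavioral data in the same place at all.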
Each layer has value in isolation. You can fix your tracking and have better data. You can clean up tool bloat and save money. You can optimize your ad accounts and improve efficiency.
But acquisition performance without activation and retention reality is dangerous.
You’re optimizing to a proxy - acquisition cost, ROAS, conversion volume - without validating that the proxy correlates with the actual outcome you care about. You might be scaling a channel that fills your funnel with customers who never activate. You might be starving a channel that looks expensive but produces your best customers.
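That correlation is checkable, not a matter of opinion. A quick sketch, assuming a hypothetical per-channel summary like the one above:

```python
# Does the proxy actually track the outcome?
# Column names are illustrative assumptions.
import pandas as pd

channels = pd.read_csv("channel_summary.csv")  # source, roas, avg_ltv_90d

# Rank correlation: do channels that rank well on ROAS also rank well on LTV?
corr = channels["roas"].corr(channels["avg_ltv_90d"], method="spearman")
print(f"Spearman correlation, ROAS vs 90-day LTV: {corr:+.2f}")
# Near zero or negative means the proxy is steering budget toward the
# wrong channels, however good the dashboard looks.
```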
When you connect acquisition source to downstream behavior, the picture changes. Sometimes dramatically.
We’ve seen channels that looked like heroes become villains when you factor in activation rates. We’ve seen “inefficient” channels turn out to be the primary source of high-LTV customers. We’ve seen creative variations that performed identically on CTR and CPA produce completely different retention curves.
You can’t see any of this if you stop at the ad accounts.
At the end of discovery, you have a baseline: a clear-eyed view of where things stand today across all five layers.
It’s not a grade. It’s not a judgment. It’s a starting point.
From the baseline, you can make informed decisions about what to fix first. You can design experiments that will actually teach you something. You can measure future performance against a reference point that you trust.
And when someone asks “is this working?” you can answer with something better than “the dashboard says so.”
Want to understand how we’d approach discovery for your business? Get in touch →