Switching dunning tools, onboarding a new recovery process, or simply reviewing your performance over time — these are all moments when you'll want to measure results. That's a good instinct. But recovery rate is one of those metrics where the wrong approach to measurement can lead you to confident conclusions that are flat-out wrong.
We've seen this play out hundreds of times. A team switches tools, compares "before and after," sees a number they don't like, and starts making changes — when the data was never telling them what they thought it was.
Here are the most common measurement pitfalls we see, and how to avoid them.
The most common mistake we see during migrations is comparing dashboard numbers side by side: you pull a recovery rate from your previous tool, pull one from your new tool, and line them up. The problem is that different tools define and calculate recovery rate differently.
Some tools exclude cancellations from the denominator. Some count a "recovery" in the period the recovery occurred, not the period when the payment initially failed. Some track recovery rate by revenue rather than by count. Others exclude in-progress campaigns from their headline number, which inflates results early and deflates them later.
Even if two tools use the same formula on paper, the underlying data can differ — which charges are included, how campaign start dates are assigned, whether stopped or manually resolved campaigns are counted.
The takeaway: if you're comparing performance across tools, you need one consistent data source and one consistent calculation, applied identically to both time periods. Dashboard numbers from two different platforms are not comparable.
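To see how much the formula alone can move the number, here's a minimal sketch in Python (the records and field names are hypothetical, not any tool's actual schema) applying three common definitions to the same four failed payments:

```python
# Hypothetical failed-payment records; field names are illustrative only.
payments = [
    {"amount": 50, "outcome": "recovered"},
    {"amount": 50, "outcome": "recovered"},
    {"amount": 200, "outcome": "canceled"},
    {"amount": 50, "outcome": "churned"},
]

recovered = [p for p in payments if p["outcome"] == "recovered"]

# Formula A: recoveries / all failed payments (by count)
rate_by_count = len(recovered) / len(payments)            # 2/4 = 50%

# Formula B: cancellations excluded from the denominator
non_canceled = [p for p in payments if p["outcome"] != "canceled"]
rate_excl_cancels = len(recovered) / len(non_canceled)    # 2/3 = 67%

# Formula C: recovered revenue / failed revenue
rate_by_revenue = (sum(p["amount"] for p in recovered)
                   / sum(p["amount"] for p in payments))  # 100/350 = 29%

print(f"{rate_by_count:.0%} vs {rate_excl_cancels:.0%} vs {rate_by_revenue:.0%}")
```

Same data, three defensible formulas, three very different headline numbers. That's why the formula has to travel with the comparison.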
When a recovery campaign is still running, its outcome is undetermined. If you include these campaigns in your analysis, you're mixing completed results with incomplete ones — and the result is a number that doesn't represent actual performance.
Here's what this looks like in practice. Say you have 1,000 failed payments in a 20-day campaign window. On day 7, 400 have been recovered, 100 have canceled, and 500 are still in progress. If you calculate recovery rate as 400 out of 1,000, you get 40% — which is an honest snapshot of where things stand so far. But if you exclude the 500 in-progress campaigns and calculate 400 out of 500, you get 80% — a number that looks great but dramatically overstates your actual performance.
The best practice is to exclude entire time periods that still have any active campaigns. This ensures you're only analyzing complete cohorts where every campaign has reached a final outcome.
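As a rough illustration, here's one way that filtering might look, assuming each campaign record carries a start date and a final status that is empty while the campaign is still running (both the data shape and field names are invented for this sketch):

```python
from datetime import date

# Hypothetical records: (campaign start date, final status or None if still running).
campaigns = [
    (date(2024, 1, 3), "recovered"),
    (date(2024, 1, 3), "canceled"),
    (date(2024, 1, 25), None),  # in progress: its whole cohort day gets excluded
]

def complete_cohorts(campaigns):
    """Keep only start dates where every campaign has reached a final outcome."""
    by_day = {}
    for start, status in campaigns:
        by_day.setdefault(start, []).append(status)
    return {day: statuses for day, statuses in by_day.items()
            if all(s is not None for s in statuses)}

resolved = [s for statuses in complete_cohorts(campaigns).values() for s in statuses]
rate = resolved.count("recovered") / len(resolved)
print(f"recovery rate over complete cohorts only: {rate:.0%}")  # 1/2 = 50%
```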
Recovery rate fluctuates naturally. If your 30-day rolling recovery rate over the past year falls between 62% and 74%, then a single month at 64% isn't a crisis — it's within your normal range. But if you only looked at last month (70%) versus this month (64%), you might conclude something went wrong.
This is why rolling analysis matters so much. Instead of comparing two cherry-picked time periods (like January vs. February), rolling analysis calculates your metric across every overlapping window — Jan 1–30, Jan 2–31, Jan 3–Feb 1, and so on. This reveals the full distribution of your performance and helps you distinguish a genuine trend from normal fluctuation.
A monthly chart can actually hide what's really happening. Jan 1–31 and Feb 1–28 are arbitrary calendar windows; the period of Jan 15–Feb 15 might tell a completely different story. Rolling analysis eliminates this blind spot.
If you haven't mapped your natural variance yet, it's hard to know whether any change in performance is meaningful. Establishing your baseline range is one of the most valuable things you can do before drawing conclusions.
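If you want to map that baseline, a rolling calculation is only a few lines. The sketch below uses fabricated daily totals purely to show the mechanics; what matters is the overlapping windows and the resulting range, not the numbers:

```python
import statistics
from datetime import date, timedelta

# Fabricated daily totals: payments that failed each day, and how many of those
# were eventually recovered once their campaigns completed.
daily_failed = {date(2024, 1, 1) + timedelta(days=i): 100 + (i % 7) * 10
                for i in range(120)}
daily_recovered = {d: round(n * (0.62 + 0.12 * (d.toordinal() % 11) / 10))
                   for d, n in daily_failed.items()}

def rolling_rates(window=30):
    """Recovery rate for every overlapping window, not just calendar months."""
    days = sorted(daily_failed)
    rates = []
    for i in range(len(days) - window + 1):
        span = days[i:i + window]
        failed = sum(daily_failed[d] for d in span)
        won = sum(daily_recovered[d] for d in span)
        rates.append(won / failed)
    return rates

rates = rolling_rates()
print(f"baseline range: {min(rates):.1%} to {max(rates):.1%}, "
      f"median {statistics.median(rates):.1%}")
```

Once you know your range, a single soft month stops looking like an emergency.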
Recovery rate is a useful summary metric, but it can mask important shifts happening underneath. Every failed payment ends in one of four outcomes: a card update, a successful retry, a cancellation, or passive churn. Looking at the overall recovery rate alone can hide changes in these individual components.
For example, your recovery rate might hold steady at 68% — but if card updates dropped from 35% to 25% while retries increased from 33% to 43%, that's a meaningful change worth understanding. Or your recovery rate might dip by a few points, and the entire drop is explained by a spike in cancellations — which has nothing to do with your dunning process and everything to do with customer sentiment.
Charting each outcome over time gives you a much clearer picture of what's actually happening and where to focus your attention. Learn more about the four outcomes here.
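As a toy illustration of why the breakdown matters, the sketch below reuses the numbers from the example above (the outcome labels are ours, not a standard taxonomy) to show a recovery rate that holds at 68% while the mix underneath shifts:

```python
from collections import Counter

# Hypothetical final outcomes, keyed by the month the payments originally failed.
outcomes = {
    "Jan": Counter(card_update=350, retry=330, cancel=180, passive_churn=140),
    "Feb": Counter(card_update=250, retry=430, cancel=180, passive_churn=140),
}

for month, counts in outcomes.items():
    total = sum(counts.values())
    recovery = (counts["card_update"] + counts["retry"]) / total
    shares = ", ".join(f"{name} {n / total:.0%}" for name, n in counts.items())
    print(f"{month}: recovery {recovery:.0%} ({shares})")
# Both months print a 68% recovery rate, but the card-update share fell 35% -> 25%.
```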
Recovery rate doesn't exist in a vacuum. It's influenced by everything that affects whether your customers want to remain subscribed. Some of the most common external factors we see:
Seasonality. Holidays, end-of-year, and summer months can all shift recovery behavior. From late November through December, more cards fail and it's harder to get subscribers' attention. In January, subscribers acquired over the holidays begin their recurring payments and weigh whether to stick with your subscription program. We see these patterns across brands consistently.
Acquisition surges. If you ran a major promotion or had a viral moment two or three months ago, you may now be seeing a wave of first- and second-renewal subscribers entering your recovery funnel. These newer subscribers fail at higher rates and recover at lower rates, which can drag your overall numbers down even though nothing about your process changed.
Product or pricing changes. A price increase, a change to your subscription offering, a redesigned website — any of these can shift customer retention behavior, which surfaces in your passive churn numbers.
Broader economic conditions. Macroeconomic factors can influence payment failure rates and recovery behavior across your entire customer base.
Before attributing a change in recovery rate to your dunning process, it's worth asking: did anything else change during this period?
Your recovery rate is an average across every type of customer running through your funnel at any given time. But not all customers are equal in this context.
First-order subscribers — those failing on their second charge — recover at significantly lower rates than long-tenured customers. If your subscriber mix shifts toward more new customers (which often follows a high-acquisition period), your overall recovery rate will drop even if each segment's performance is unchanged.
Similarly, customers on annual billing cycles behave differently from monthly subscribers. Customers acquired through deep discounts churn at higher rates than those who subscribed at full price.
Segmenting your data — even at a basic level, like separating first-renewal customers from the rest — can explain a surprising amount of the variance you're seeing. It also helps you avoid making broad process changes in response to a shift that's really about who's in the funnel, not how the funnel is performing.
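The arithmetic behind that mix effect is worth seeing once. In this sketch the segment volumes and rates are invented, and each segment's recovery rate is identical before and after, yet the blended rate drops five points simply because more new subscribers entered the funnel:

```python
def blended_rate(segments):
    """Overall recovery rate as a volume-weighted average of segment rates."""
    total = sum(volume for volume, _ in segments.values())
    return sum(volume * rate for volume, rate in segments.values()) / total

# Invented (volume, recovery rate) pairs; neither segment's rate changes.
before = {"first_renewal": (200, 0.45), "established": (800, 0.72)}
after = {"first_renewal": (500, 0.45), "established": (800, 0.72)}  # post-promo mix

print(f"before the acquisition surge: {blended_rate(before):.1%}")  # 66.6%
print(f"after the acquisition surge:  {blended_rate(after):.1%}")   # 61.6%
```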
Sometimes a spike in failures or a dip in recovery rate isn't about customer behavior at all — it's a processing glitch.
We regularly see situations where a specific charge error type suddenly spikes. A processor might start returning a higher volume of vague "declined" errors that are actually a temporary system issue, not real card problems. Or a batch of charges might get reprocessed from weeks or months ago, flooding your funnel with old failures that have a very low probability of recovery.
Before drawing conclusions from a metric shift, it's worth checking the distribution of charge errors for the period in question. If you see an unusual concentration of a specific error type, or charges that were already attempted multiple times before entering your campaign, that's a signal to investigate further before changing your strategy.
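A quick distribution check doesn't require anything fancy. Here's a sketch (the error codes and threshold are illustrative) that compares each error type's share of failures against a baseline period and flags large jumps:

```python
from collections import Counter

def flag_error_spikes(current, baseline, ratio=2.0):
    """Flag error types whose share of failures jumped versus a baseline period."""
    now, base = Counter(current), Counter(baseline)
    n_now, n_base = sum(now.values()), sum(base.values())
    flagged = {}
    for code, count in now.items():
        share_now = count / n_now
        share_base = base.get(code, 1) / n_base  # treat unseen codes as rare
        if share_now / share_base >= ratio:
            flagged[code] = f"{share_base:.0%} -> {share_now:.0%}"
    return flagged

baseline = ["insufficient_funds"] * 60 + ["do_not_honor"] * 25 + ["expired_card"] * 15
current = ["insufficient_funds"] * 35 + ["do_not_honor"] * 50 + ["expired_card"] * 15
print(flag_error_spikes(current, baseline))  # {'do_not_honor': '25% -> 50%'}
```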
When you migrate between tools or make significant changes to your recovery settings — campaign length, retry timing, email cadence — there's typically a transition period where data from both the old and new configurations are in flight.
During a migration, you might have campaigns that started under one system and finished under another. You might have a period where settings were being adjusted and hadn't settled into their final state. Including this transition data in your analysis contaminates both the "before" and "after" picture.
The cleanest approach is to identify the transition period and exclude it entirely. Wait until new campaigns have fully completed under the new configuration — which, for a typical 20–30 day campaign, means waiting at least that long after the switchover before you have even one day of clean, completed data. And from there, you'll want at least 30–60 days of completed cohorts to smooth out natural variance and draw meaningful conclusions.
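One simple way to enforce that exclusion is a date filter on cohort start dates. In this sketch, the switchover date and campaign length are placeholders you'd swap for your own:

```python
from datetime import date, timedelta

SWITCHOVER = date(2024, 3, 1)   # placeholder migration date
CAMPAIGN_DAYS = 25              # placeholder campaign length

def clean_after_cohort(campaign_start, today):
    """True only if the cohort started under the new configuration and its
    full campaign window has already elapsed."""
    return (campaign_start >= SWITCHOVER
            and campaign_start + timedelta(days=CAMPAIGN_DAYS) <= today)

print("first day with clean completed data:",
      SWITCHOVER + timedelta(days=CAMPAIGN_DAYS))
print(clean_after_cohort(date(2024, 2, 20), date(2024, 5, 1)))  # False: straddles the switch
print(clean_after_cohort(date(2024, 3, 5), date(2024, 5, 1)))   # True: fully post-migration
```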
Patience here is genuinely one of the hardest parts. But rushing to evaluate during a transition period almost always leads to misleading results.
Good measurement isn't about finding a single number that tells you whether things are working; it's about building a clear picture over time, using consistent methodology, and understanding the context around your data.
If you're evaluating recovery performance — whether after onboarding, a tool migration, or just a routine check-in — here's a solid foundation:
- Use a consistent data source and calculation method.
- Exclude days with in-progress campaigns.
- Look at rolling time periods, not arbitrary snapshots.
- Break results down by outcome type.
- Factor in what's happening outside your dunning process.
- Segment your customers when possible.
- Check for processing anomalies.
- Give transitions enough time to produce clean data before drawing conclusions.
Recovery rate analysis done well is a valuable exercise in subscription retention. Done poorly, it leads to expensive mistakes and unnecessary anxiety. We'd rather help you get it right.
Have questions about measuring your recovery performance?