Resource
Choosing impact metrics that won't collapse after the pilot
Characteristics of strong metrics, leading vs lagging indicators, and a template for metric definition.
Many organisations choose impact metrics during pilot phases that work well initially but become unsustainable over time. The metrics that survive are the ones that balance measurement quality with practical feasibility. Understanding what makes a metric sustainable helps you choose metrics that will serve you long-term, not just during the pilot.
Characteristics of strong metrics
Strong metrics share five characteristics:
1. Clear definition
Anyone on your team can measure it the same way. The definition is specific enough that two people would get the same result.
Example: "Number of participants who score below 10 on PHQ-9 (indicating no significant depression) at 6-month follow-up" is clear. "Improved mental health" is not.
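A definition this specific can even be encoded so that every team member measures it identically. A minimal sketch, assuming the PHQ-9 cutoff from the example above (the function and variable names are illustrative, not part of any particular system):

```python
# Sketch: encode the metric "participants scoring below 10 on PHQ-9
# at 6-month follow-up" so everyone counts it the same way.
PHQ9_CUTOFF = 10  # scores below this indicate no significant depression

def meets_metric(phq9_score: int) -> bool:
    """True if a participant counts toward the metric."""
    return phq9_score < PHQ9_CUTOFF

def count_meeting_metric(followup_scores: list[int]) -> int:
    """Number of participants below the cutoff at 6-month follow-up."""
    return sum(meets_metric(s) for s in followup_scores)

scores = [4, 12, 9, 10, 7]
print(count_meeting_metric(scores))  # 3 of the 5 participants score below 10
```

Because the cutoff lives in one place, two people running the count on the same data cannot disagree about the result.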
2. Measurable with available resources
You can actually collect the data with your current capacity. If it requires external evaluators or expensive tools, it may not be sustainable.
3. Relevant to decisions
The metric informs decisions you actually make. If you wouldn't change anything based on the metric, it's not useful.
4. Timely
You get results when you need them. If a metric takes years to show results, it may not be useful for ongoing decision-making.
5. Sustainable to collect
The measurement burden is manageable long-term. If it requires significant extra work every time, it will eventually be dropped.
Metrics that lack any of these characteristics tend to collapse after pilots. They become too burdensome, too unclear, or too irrelevant to maintain.
Leading vs lagging indicators
Understanding the difference between leading and lagging indicators helps you choose metrics that provide timely information:
Lagging indicators
Measure outcomes after they occur. They tell you what happened, but often too late to change it.
Examples: Employment rates 12 months after program completion, long-term health outcomes, system-level changes
Use when: You need to demonstrate long-term impact, or when outcomes take time to manifest.
Leading indicators
Measure early signals that predict later outcomes. They tell you what's likely to happen, giving you time to adjust.
Examples: Engagement levels during program, skill acquisition, behaviour changes, confidence measures
Use when: You need timely feedback for program improvement, or when you can't wait for long-term outcomes.
Most organisations need both: leading indicators for ongoing improvement, lagging indicators for demonstrating impact. The key is balancing them so you have timely feedback without losing sight of long-term goals.
A common mistake is choosing only lagging indicators because they seem more impressive. This leaves you without timely feedback and makes it harder to improve programs during implementation.
Measurement burden and sustainability
Measurement burden is the time, effort, and resources required to collect a metric. It's the main reason metrics collapse after pilots. During pilots, extra effort is acceptable. Long-term, it's not.
To assess measurement burden, ask:
- How much staff time does it take? If it requires significant extra work beyond normal operations, it may not be sustainable.
- Does it require external resources? If it needs evaluators, consultants, or expensive tools, consider whether you can maintain this long-term.
- How often must it be collected? More frequent collection means more burden. Can you reduce frequency without losing value?
- Does it disrupt service delivery? If measurement interferes with your core work, it will eventually be dropped.
Sustainable metrics integrate into normal operations. They don't require special effort or external resources. They're collected as part of doing the work, not as an add-on.
Avoiding vanity metrics
Vanity metrics look impressive but don't inform decisions. They're easy to collect and make you feel good, but they don't help you understand what's working or what to change.
Common vanity metrics
- Total numbers served: Impressive but doesn't tell you about impact. 1,000 people served poorly is worse than 100 served well.
- Satisfaction scores: Easy to collect but often don't correlate with outcomes. People can be satisfied with services that don't help them.
- Activity counts: Number of sessions, events, or activities. These are inputs, not outcomes. They don't tell you what changed.
- Social media engagement: Likes, shares, comments. These measure awareness, not impact.
The test for whether a metric is vanity: Would you change your program based on this metric? If the answer is no, it's probably a vanity metric.
This doesn't mean you should never collect these metrics. They can be useful for different purposes (e.g., satisfaction for service improvement, numbers served for resource planning). But they shouldn't be your primary impact metrics.
A template for metric definition and review cadence
Documenting metrics clearly helps ensure they're measured consistently and can be reviewed for sustainability. Use this template for each metric:
Metric Definition Template
- Name: Clear, descriptive name
- Definition: Specific definition that anyone could follow
- Collection method: How it's collected, what tool is used, who collects it
- Frequency: How often (e.g., monthly, quarterly, at program completion)
- Measurement burden: Time required, resources needed, who is responsible
- Decision relevance: What decisions does this inform? How would results change your approach?
- Review cadence: How often to review whether this metric is still useful and sustainable (e.g., annually)
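The template can also be kept as a structured record so no field is skipped when a new metric is defined. A minimal sketch (the class and field names are illustrative assumptions, not part of any specific tool):

```python
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    """One filled-in copy of the metric definition template."""
    name: str                # clear, descriptive name
    definition: str          # specific enough that anyone could follow it
    collection_method: str   # how it's collected, tool used, who collects it
    frequency: str           # e.g. "monthly", "quarterly", "at completion"
    measurement_burden: str  # time required, resources, who is responsible
    decision_relevance: str  # what decisions this informs
    review_cadence: str      # how often to revisit usefulness, e.g. "annually"

# Example entry, using the PHQ-9 metric from earlier in this resource:
phq9_metric = MetricDefinition(
    name="PHQ-9 below cutoff at 6 months",
    definition="Participants scoring below 10 on PHQ-9 at 6-month follow-up",
    collection_method="PHQ-9 questionnaire administered by programme staff",
    frequency="At 6-month follow-up",
    measurement_burden="About 10 minutes per participant; staff-administered",
    decision_relevance="Signals whether programme content needs revision",
    review_cadence="Annually",
)
print(phq9_metric.name)
```

Because a dataclass requires every field, an incomplete definition fails loudly at creation time rather than surfacing later as an ambiguous metric.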
Review cadence is important. Metrics that were useful during pilots may become less useful over time, or measurement burden may increase. Regular reviews (at least annually) help you identify metrics that need adjustment or retirement.
During reviews, ask:
- Is this metric still informing decisions?
- Has measurement burden increased?
- Is the data quality still good?
- Would we start collecting this if we were starting fresh?
- Can we simplify the measurement method without losing value?
Choosing metrics that last
The metrics that survive are the ones that balance quality with feasibility. They're clearly defined, relevant to decisions, timely, and sustainable to collect. They integrate into normal operations rather than requiring special effort.
CIIS helps you manage metrics by providing a structured way to define, collect, and review them. The system supports both leading and lagging indicators, and helps you track measurement burden so you can identify metrics that may become unsustainable over time.
Next Steps
If this topic resonates with challenges you're facing, consider: