Activity reporting vs outcomes reporting: the difference that changes decisions

Why activity reporting dominates, what outcomes reporting requires, and how to implement an outcomes ladder.

Most impact reports focus on activities: what organisations did, how many people they served, how many sessions they delivered. This is activity reporting. Outcomes reporting focuses on what changed as a result: improved wellbeing, increased employment, reduced isolation. The difference matters because only outcomes reporting can inform decisions about what works and what doesn't.

Definitions with examples

Activity reporting

Activity reporting describes what you did:

  • "We delivered 50 counselling sessions"
  • "We provided training to 200 participants"
  • "We distributed 1,000 food parcels"
  • "We ran 12 community events"

Outcomes reporting

Outcomes reporting describes what changed:

  • "75% of participants reported improved mental wellbeing (measured using validated scale)"
  • "40% of participants secured employment within 6 months"
  • "Food security improved for 80% of recipients (measured via household survey)"
  • "Community connectedness increased (measured via social network analysis)"

Activities are the processes you run. Outcomes are changes in people's lives, communities, or systems. Activities are necessary but not sufficient for impact; outcomes are what matter for decision-making.
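
To make the distinction concrete, here is a minimal sketch in Python using hypothetical participant records. The Participant fields and the numbers are illustrative only; the point is that the same records support both an activity count and an outcome rate, but only the second tells you whether anything changed.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    """One hypothetical service-user record."""
    sessions_attended: int
    employed_at_6_months: bool  # follow-up status, collected separately

participants = [
    Participant(sessions_attended=8, employed_at_6_months=True),
    Participant(sessions_attended=5, employed_at_6_months=False),
    Participant(sessions_attended=10, employed_at_6_months=True),
    Participant(sessions_attended=3, employed_at_6_months=False),
    Participant(sessions_attended=7, employed_at_6_months=False),
]

# Activity reporting: count what was delivered.
total_sessions = sum(p.sessions_attended for p in participants)
print(f"Activity: delivered {total_sessions} counselling sessions")

# Outcomes reporting: compute what changed for participants.
employed = sum(p.employed_at_6_months for p in participants)
print(f"Outcome: {employed / len(participants):.0%} employed within 6 months")
```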

Why activity reporting dominates

Activity reporting is easier. It requires counting what you did, which is straightforward. Outcomes reporting requires measuring change, which is harder. This creates a natural bias toward activity reporting, even when funders ask for outcomes.

Other reasons activity reporting dominates:

  • It's what organisations can control: You can control how many sessions you deliver. You cannot directly control whether someone's wellbeing improves.
  • It's easier to measure: Counting sessions is simple. Measuring wellbeing requires validated tools and baseline data.
  • It feels safer: Activity numbers are factual and defensible. Outcome claims require evidence and can be challenged.
  • It's what reporting templates ask for: Many funder templates focus on activities because they're easier to standardise.

The problem is that activity reporting cannot answer the questions that matter: Is this intervention working? Should we continue funding it? What should we change? Only outcomes reporting can answer these questions.

What outcomes reporting requires

Outcomes reporting requires three things that activity reporting doesn't:

  1. Baseline data: You need to know the starting point to measure change. This means collecting data before your intervention begins, not just after.
  2. Validated measurement tools: You need reliable ways to measure outcomes. This might mean using established scales (e.g., PHQ-9 for depression) or developing your own validated measures.
  3. Attribution logic: You need a clear link between your activities and the outcomes. This doesn't require experimental design, but it does require a logical argument about why your activities would cause the observed changes.

These requirements make outcomes reporting more resource-intensive than activity reporting. The payoff is decision-making capability: without outcomes data, you are making decisions based on what you did, not on what changed.
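
As a concrete illustration of the first two requirements, here is a minimal sketch with hypothetical baseline and follow-up scores, loosely modelled on PHQ-9 totals (0-27, lower is better). A real analysis would apply a validated reliable-change threshold rather than counting any reduction as improvement.

```python
# Hypothetical PHQ-9 totals keyed by participant ID.
baseline = {"p1": 15, "p2": 12, "p3": 18, "p4": 9}   # before the intervention
follow_up = {"p1": 9, "p2": 11, "p3": 10, "p4": 10}  # six months later

# Change is only measurable because a baseline exists for every participant.
changes = {pid: baseline[pid] - follow_up[pid] for pid in baseline}

improved = sum(1 for delta in changes.values() if delta > 0)
print(f"{improved / len(changes):.0%} of participants improved")
print(f"Mean reduction: {sum(changes.values()) / len(changes):.1f} points")
```

Note that the improvement figure this produces is only as credible as the attribution logic behind it (requirement 3).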

Common failure modes and how to avoid them

Organisations attempting outcomes reporting often fail in predictable ways:

Failure mode 1: Measuring too late

Starting outcome measurement after the intervention begins means you have no baseline. You cannot measure change without a starting point.

Solution: Collect baseline data before or at the start of your intervention, even if the measure is simple.

Failure mode 2: Measuring the wrong thing

Measuring what's easy to measure rather than what matters. For example, measuring satisfaction (easy) instead of behaviour change (harder but more meaningful).

Solution: Start with your theory of change. What outcomes matter? Then find ways to measure those, even if it's harder.

Failure mode 3: No attribution logic

Observing outcomes without being able to explain why your activities would cause them. Without that logic, you cannot credibly claim that the changes you observe are connected to your work.

Solution: Develop a clear logic model or theory of change that explains the pathway from activities to outcomes.

Failure mode 4: Overcomplicating

Trying to measure everything perfectly rather than measuring a few things well. This leads to measurement fatigue and poor data quality.

Solution: Start with 2-3 key outcomes. Measure them well. Add more only if you have capacity.

A simple outcomes ladder and how to implement it

An outcomes ladder helps you move from activities to outcomes gradually. It recognises that you cannot measure everything at once, but you can build capability over time.

Level 1: Activities (where most organisations start)

What you did, how many, when.

Example: "Delivered 50 counselling sessions in Q1"

Level 2: Outputs (immediate results)

What was produced or delivered as a direct result of activities.

Example: "50 people completed counselling sessions"

Level 3: Short-term outcomes (changes in knowledge, skills, attitudes)

Changes that occur during or immediately after the intervention.

Example: "Participants reported increased understanding of coping strategies (measured via pre/post survey)"

Level 4: Medium-term outcomes (changes in behaviour)

Changes in what people do, how they act, or how they engage.

Example: "Participants reported using coping strategies more frequently (measured via 3-month follow-up)"

Level 5: Long-term outcomes (changes in condition or status)

Fundamental changes in people's lives, communities, or systems.

Example: "Participants showed improved mental wellbeing scores (measured via validated scale at 6 months)"

To implement an outcomes ladder:

  1. Start where you are: If you're at Level 1 (activities), that's fine. Document it well.
  2. Add one level at a time: Don't try to jump to Level 5 immediately. Add Level 2 (outputs) first, then Level 3 (short-term outcomes), and so on.
  3. Build measurement capability: Each level requires different measurement approaches. Build this capability gradually.
  4. Use the data you collect: Don't collect outcomes data just for reporting. Use it to make decisions about what to continue, what to change, and what to stop.

The goal is not to reach Level 5 immediately. The goal is to move up the ladder over time, building capability and improving decision-making as you go.
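
One way to operationalise the ladder is as a simple data structure. The sketch below is a hypothetical Python model rather than a prescribed schema: each measure records its ladder level, indicator, measurement tool, and timing, so moving up the ladder is just a matter of appending measures as capability grows.

```python
from dataclasses import dataclass

@dataclass
class Measure:
    level: int      # 1 = activities ... 5 = long-term outcomes
    indicator: str  # what is tracked
    tool: str       # how it is measured
    timing: str     # when data is collected

# Hypothetical measures mirroring the ladder levels above.
ladder = [
    Measure(1, "counselling sessions delivered", "session log", "continuous"),
    Measure(2, "people completing counselling", "attendance records", "at exit"),
    Measure(3, "understanding of coping strategies", "pre/post survey", "start and end"),
    Measure(4, "use of coping strategies", "follow-up survey", "3 months"),
    Measure(5, "mental wellbeing score", "validated wellbeing scale", "6 months"),
]

# The report simply walks the ladder in order, whatever the top level is.
for m in sorted(ladder, key=lambda m: m.level):
    print(f"Level {m.level}: {m.indicator} ({m.tool}, {m.timing})")
```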

Moving from activities to outcomes

The shift from activity reporting to outcomes reporting is a capability journey, not a one-time change. It requires building measurement systems, developing attribution logic, and using outcomes data for decision-making.

CIIS supports this journey by providing a structured way to collect and view both activity and outcome data. You can start with activities and gradually add outcome measures as your capability grows. The system helps you see the relationship between activities and outcomes, making it easier to understand what's working and what isn't.

Next steps

If this topic resonates with challenges you're facing, consider: