Reported vs Estimated Hours

The Dashboard and Reports both show two numbers for hours saved:

  • Reported — the literal sum of what your team told us, no extrapolation
  • Estimated — that number scaled up, with caps, to account for survey cadence

Both numbers come from the same data, but they answer different questions. Here's how each is calculated and when to use which.

How the numbers are produced

When a user answers Q1 of the standard AI Pulse — "How much time did you save on tasks today?" — they pick one of:

  Answer                        Counted as
  None                          0 minutes
  Up to 30 mins today           15 minutes
  30–60 mins today              45 minutes
  1–2 hours today               90 minutes
  More than 2 hours             150 minutes
  I lost time because of it     −30 minutes
We sum those minutes across every response in the period to produce reported minutes.
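In code terms, the tally is a simple lookup-and-sum. This is an illustrative sketch (the mapping comes from the table above; the function name is ours, not the platform's):

```python
# Minutes counted for each pulse answer, per the table above.
ANSWER_MINUTES = {
    "None": 0,
    "Up to 30 mins today": 15,
    "30-60 mins today": 45,
    "1-2 hours today": 90,
    "More than 2 hours": 150,
    "I lost time because of it": -30,  # lost time counts as a negative
}

def reported_minutes(responses):
    """Sum counted minutes across every pulse response in the period."""
    return sum(ANSWER_MINUTES[answer] for answer in responses)

# Example: three responses in a period, one of them a lost-time report.
responses = ["1-2 hours today", "Up to 30 mins today", "I lost time because of it"]
print(reported_minutes(responses))  # 90 + 15 - 30 = 75
```

Note that a lost-time answer subtracts from the total rather than being ignored.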

Why estimated is higher (and capped)

If your pulse fires once a week per tool per user (the most common cadence), then a single response, say "1–2 hours today", represents just one day's saving. There are roughly 22 workdays in a month, so naively scaling that single sampled day up to a full month would mean a 22× multiplier, and a wildly inflated number that nobody would believe.

So we apply a capped extrapolation factor. The maximum scale-up is 2.5×, regardless of how infrequently your team responded. This means:

  • For a daily-cadence pulse with high response rates, estimated ≈ reported (factor close to 1×)
  • For a weekly-cadence pulse, estimated ≈ 2.5× reported (the cap kicks in)
  • For users who respond once a fortnight, estimated is still 2.5× reported (cap)

The cap is intentionally conservative. It produces an "informed estimate" rather than an "extrapolation that breaks the rules of physics".
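The cap behaves roughly like the sketch below. This assumes the raw scale-up factor is workdays in the period divided by days actually sampled; the platform's exact internal formula isn't documented here, so treat this as an approximation:

```python
WORKDAYS_PER_MONTH = 22  # approximate, as above
MAX_FACTOR = 2.5         # the cap on the scale-up

def estimated_minutes(reported, days_sampled, workdays=WORKDAYS_PER_MONTH):
    """Scale reported minutes up to the full period, never beyond 2.5x."""
    raw_factor = workdays / days_sampled if days_sampled else 0
    factor = min(raw_factor, MAX_FACTOR)
    return reported * factor

# Daily cadence, every workday answered: factor is exactly 1x.
print(estimated_minutes(900, days_sampled=22))  # 900.0
# Weekly cadence (~4 sampled days): raw factor 5.5x, capped at 2.5x.
print(estimated_minutes(180, days_sampled=4))   # 450.0
```

The weekly case shows why the cap matters: without it, the same 180 reported minutes would be inflated to 990.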

Which number should I use?

Use reported when you're:

  • Defending the number to Finance or your board ("we have evidence of 888 hours saved")
  • Comparing periods and want pure data
  • Worried about being seen as overclaiming

Use estimated when you're:

  • Telling the higher-level story to executives ("our AI investment is saving us 2,200 hours a month")
  • Translating into business value (because reported only covers what was captured during pulses, not the savings in between)
  • Building the case for renewing or expanding the program

The platform always shows both, side by side, so you (or anyone reading the dashboard) can make an informed call.

What about negative values?

If your team says they lost time on a tool, that gets counted as a negative. It rarely changes the headline materially, but it makes the number more honest — and a tool with a lot of "lost time" responses will show up in your insights as a fit problem.

Wider implications

The 2.5× cap also affects:

  • Dollar value of hours saved — calculated from the same capped estimated minutes
  • ROI multiplier — uses estimated value in the numerator
  • Per-tool hours columns — same methodology

So everywhere you see "estimated", the same conservative cap is in play.
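To make the knock-on effect concrete, here is an illustrative sketch. The blended hourly rate and program cost are made-up inputs, and the ROI formula shown (estimated dollar value over program cost) is an assumption based on "uses estimated value in the numerator":

```python
def dollar_value(estimated_mins, blended_hourly_rate):
    """Convert capped estimated minutes into a dollar figure."""
    return (estimated_mins / 60) * blended_hourly_rate

def roi_multiplier(estimated_value, program_cost):
    """Estimated value in the numerator; the denominator is an assumption."""
    return estimated_value / program_cost

# 132,000 estimated minutes = 2,200 hours; $50/hr blended rate (hypothetical).
value = dollar_value(132_000, blended_hourly_rate=50)
print(value)                          # 110000.0
print(roi_multiplier(value, 25_000))  # 4.4
```

Because both figures sit downstream of estimated minutes, the 2.5× cap flows through to them automatically.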


Related: Understanding ROI · The standard AI Pulse explained · Setting your blended hourly rate

