The Three Core Pulses Explained
Every new organisation in The GAiGE ships with three built-in pulses — one for each question you actually care about when measuring AI adoption. You don't need to configure anything. They run out of the box, on sensible cadences, and feed every headline number in your Dashboard and Reports.
This article covers why there are three (rather than one long survey), what each one asks, and when it fires.
Why three, not one
The obvious question. Why not one big survey that covers everything?
Because different questions need different moments. A long survey at the end of the month gets forgotten answers. A short pulse right after someone uses a tool gets honest, specific answers. But some questions, like "how are you developing with this tool over time?", don't make sense in a one-off moment; they need a longer horizon.
So we split the questions across three pulses, each with its own job, its own cadence, and its own answer format. Together they give you a full picture; individually, each one is small enough that people actually respond.
Pulse 1 · The AI Pulse (3× per week, in-the-moment)
The headline pulse. Short, fast, specific to the moment. Fires after someone has used an AI tool in their browser — so the answer is about the session they just had, not a vague recollection.
What it asks (Q1-Q4 follow a yes on Q0; Q5 follows a no):
| Q | Question | Purpose |
|---|---|---|
| Q0 | Did you use {tool} today? (yes/no) | Drives active users + wasted seats |
| Q1 | How much time did {tool} save you? | Drives hours saved + ROI |
| Q2 | Which areas has it helped with? | Drives "what's working" breakdown |
| Q3 | Satisfaction (1-5) | Drives satisfaction gauge |
| Q4 | Any other comments? (optional) | Verbatim feedback |
| Q5 | (If no) Why didn't you use it? | Drives adoption blockers signal |
Fires up to 3 times per week per user per tool. Most users answer five questions (Q0-Q4); a no on Q0 means just two. Either way, it takes 15-30 seconds.
This is the pulse the old article called "The Standard AI Pulse". It's the same data, just renamed to reflect that it's one of three now.
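If it helps to see the Q0 branching concretely, here's a minimal sketch of the flow as a data model. The field names and types are illustrative assumptions for your own tooling, not The GAiGE's actual schema:

```typescript
// Illustrative model of the AI Pulse's branching: Q0 gates which
// questions follow. All names here are hypothetical, not The GAiGE's schema.
type AiPulseResponse =
  | {
      usedToolToday: true;             // Q0 = yes
      minutesSaved: number;            // Q1: drives hours saved + ROI
      helpedWith: string[];            // Q2: "what's working" breakdown
      satisfaction: 1 | 2 | 3 | 4 | 5; // Q3: satisfaction gauge
      comments?: string;               // Q4: optional verbatim feedback
    }
  | {
      usedToolToday: false;            // Q0 = no
      reasonNotUsed: string;           // Q5: adoption blockers signal
    };
```

The yes branch carries the five questions most users see; the no branch stops after Q5.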
Pulse 2 · The AI Review (fortnightly)
A slower, reflective pulse. Not tied to a specific session — fires every two weeks on a tool-by-tool basis and asks your team to step back.
What it asks:
- Value — slider from "waste of money" to "indispensable"
- Quality — single-choice: is the output you get reliable?
- NPS — would you recommend this tool to a colleague? (0-10)
- Wishes — free text: what would you change about this tool?
This is where you find the early signal on renewal risk. A tool with strong AI Pulse numbers but a falling NPS on the Review is telling you something important.
Fires fortnightly per user per tool. Takes about 45 seconds.
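To make that renewal-risk pattern concrete, here's a hypothetical check you could run on your own exported numbers. The interface, field names, and thresholds are all assumptions for illustration, not anything The GAiGE computes for you:

```typescript
// Hypothetical renewal-risk flag: strong in-the-moment satisfaction
// (AI Pulse Q3) combined with a falling Review NPS. Thresholds and
// field names are illustrative; compute them from your own exports.
interface ToolSignals {
  avgSatisfaction: number; // mean of AI Pulse Q3 over the period, 1-5
  npsDelta: number;        // change in Review NPS vs. the prior period
}

function renewalRiskFlag({ avgSatisfaction, npsDelta }: ToolSignals): boolean {
  // People like using it day to day, but wouldn't recommend it as before.
  return avgSatisfaction >= 4 && npsDelta <= -10;
}
```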
Pulse 3 · The AI Capability Check (monthly)
The slowest pulse, and the most forward-looking. Not about a specific tool — about the person.
What it asks:
- Confidence — rating (1-5): how confident do you feel using AI tools to do your job?
- Help needed — multi-select: what kind of training or support would help?
- Wish list — free text: is there an AI tool you wish your org would invest in?
- New use case — yes/no: have you found a new way to use AI this month?
This powers the capability development story. You can see your team's confidence rising over time, track where they're asking for help, and spot AI use cases surfacing organically.
Fires monthly per user. Takes about a minute.
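Putting the three cadences side by side, here's a minimal sketch of a due-date check. The intervals mirror the cadences described above; the code itself is an illustration, not the extension's implementation:

```typescript
// Illustrative cadence check: is a given pulse due for this user?
// Intervals mirror the cadences above; names are assumptions.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

const minIntervalDays = {
  aiPulse: 7 / 3,      // up to 3x per week per tool
  aiReview: 14,        // fortnightly per tool
  capabilityCheck: 30, // monthly, not tied to a tool
} as const;

function isDue(
  pulse: keyof typeof minIntervalDays,
  lastShown: Date | null,
  now = new Date()
): boolean {
  if (lastShown === null) return true; // never shown before
  const elapsedDays = (now.getTime() - lastShown.getTime()) / MS_PER_DAY;
  return elapsedDays >= minIntervalDays[pulse];
}
```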
When the pulses actually appear
The Chrome extension shows a pulse only when all of the following are true (there's a sketch of this check after the list):
- The user is on a website that matches one of your tools' domains
- They've been on the page for at least 30 seconds (so they've actually used the tool, not just opened a tab)
- A pulse is due for them (per the cadence above)
- They haven't already skipped or completed a pulse in the last 4 hours
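Here's that check expressed as a single function. It's a sketch of the four rules above, assuming an `isDue` helper like the one sketched earlier; the names are made up and the real extension's code may differ:

```typescript
// Sketch of the four trigger conditions as one eligibility check.
// Field and function names are illustrative, not the extension's real code.
interface TriggerContext {
  currentDomainMatchesTool: boolean; // page matches a configured tool domain
  secondsOnPage: number;             // dwell time on the current page
  pulseIsDue: boolean;               // per the cadences above (see isDue)
  lastInteraction: Date | null;      // last skipped/completed pulse, if any
}

const COOLDOWN_MS = 4 * 60 * 60 * 1000; // 4-hour cooldown

function shouldShowPulse(ctx: TriggerContext, now = new Date()): boolean {
  const outsideCooldown =
    ctx.lastInteraction === null ||
    now.getTime() - ctx.lastInteraction.getTime() >= COOLDOWN_MS;

  return (
    ctx.currentDomainMatchesTool &&
    ctx.secondsOnPage >= 30 && // actually used the tool, not just opened a tab
    ctx.pulseIsDue &&
    outsideCooldown
  );
}
```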
If no pulse is due, nothing appears. No emails. No notifications. No nagging. That's deliberate — survey fatigue is the fastest way to kill response rates.
Can I edit or disable them?
All plans can hide a pulse from the Dashboard view (per-user, per-browser — doesn't affect data collection).
Standard and Pro plans can edit the wording of the core pulse questions. The underlying answer types are locked to the ones above, because they're what feed the gauges and metrics.
Pro plans can clone any core pulse and build a customised version from it, or add entirely new pulses with branching logic. See Building a custom pulse.
Expected response rates
Short, in-context pulses typically achieve 70-90% response rates across the orgs we've measured, considerably higher than the 40-60% typical of annual engagement surveys. The three-pulse cadence is deliberately calibrated so no one user feels surveyed more than once every couple of days on average: with a single tool, that's 3 AI Pulses plus roughly half an AI Review and a quarter of a Capability Check per week, about 3.75 pulses in all, or one every two days or so.
If your response rate drops below 60%, see Why is my response rate low? (the cause is almost always an extension rollout issue).
Related: The 2.5× rule — why our ROI numbers are defensible (blog) · Reading your dashboard · Reported vs. estimated hours saved · Building a custom pulse