Drag Test
Drag through the checkpoints, then review your control, route completion and time.
About this test
These mouse tools are more useful for repeatable comparison than for one-off bragging rights. Use the same surface, device and posture when you want clean before-and-after checks.
Who this test is for
- Users checking whether a mouse, wheel or touchpad behaves roughly as expected in the browser.
- People comparing device settings, movement feel or obvious stability changes after setup tweaks.
- Anyone who wants a quick browser-side sanity check without turning the page into a hardware lab.
Common mistakes
- Treating a browser-side estimate as if it were a hardware certification.
- Changing device settings and test posture at the same time, which ruins comparison value.
- Reading one noisy sample as a final verdict instead of checking several clean passes.
How to read the score
- Use these values to compare settings and device behaviour, not as absolute lab-grade numbers.
- Repeated similar readings usually matter more than one standout sample.
- A stable mid-range result can be more useful than a noisy peak that never repeats.
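The preference for repeated similar readings over one standout sample can be sketched as a simple run summary. This is an illustrative sketch, not the page's actual scoring code; all names here are assumptions.

```javascript
// Summarise repeated runs: the median is robust to one noisy peak,
// while the best value rewards a single outlier that may never repeat.
function summariseRuns(runs) {
  const sorted = [...runs].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  const median = sorted.length % 2
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
  return { median, best: Math.max(...runs), samples: runs.length };
}

// Five passes with one standout: the median tells the steadier story.
const result = summariseRuns([71, 74, 73, 92, 72]);
// result.median === 73, result.best === 92
```

Comparing the median across before-and-after sessions is usually a fairer check than comparing single best scores.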
FAQ
Are these numbers hardware-grade measurements?
No. They are browser-side estimates or interaction checks meant for quick comparison and obvious sanity testing.
Why should I repeat the same tool several times?
Repeated samples make it easier to spot stable behaviour and ignore one noisy pass.
What is the best way to compare device changes?
Keep the same surface, movement style and browser when checking before-and-after readings.
What this mode actually tests
- Pointer control along the route: which checkpoints you hit, how much of the route you complete and how long it takes, read as browser-side estimates rather than lab measurements.
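The kind of scoring described above can be sketched from pointer samples. This is a minimal, hypothetical sketch assuming samples collected from `pointermove` events (`x: e.clientX, y: e.clientY, t: e.timeStamp`); the function and field names are illustrative, not the tool's real implementation.

```javascript
// Minimal drag-test style scoring from pointer samples.
// A checkpoint counts as hit once the pointer passes within its radius.
function scoreDrag(samples, checkpoints, radius) {
  const hit = new Array(checkpoints.length).fill(false);
  for (const s of samples) {
    checkpoints.forEach((c, i) => {
      if (!hit[i] && Math.hypot(s.x - c.x, s.y - c.y) <= radius) hit[i] = true;
    });
  }
  const hits = hit.filter(Boolean).length;
  return {
    completion: hits / checkpoints.length, // route completion, 0..1
    timeMs: samples.length
      ? samples[samples.length - 1].t - samples[0].t // elapsed time
      : 0,
  };
}
```

Because the score depends on how densely the browser delivers pointer events, the same run can read slightly differently across browsers and system loads, which is why repeated passes matter.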
When to use this mode
- Use repeated runs under the same conditions when you want a believable comparison.
How to compare it with nearby modes
- The number becomes more useful when you compare neighbouring modes or related tools instead of reading it in isolation.
Recommended next steps
- Use the linked guides and methodology page to understand what the score can and cannot tell you.
Methodology notes
- Browser-based scores depend on device input, focus state, browser timing and system load.
- Comparisons are strongest when you repeat the same setup, posture and timer family.
- Public saved results are filtered for suspicious or duplicate values, but your own local history is still the best place to judge repeatability.
Related tests
Mouse Polling Rate Test
Estimate mouse polling rate from movement events in the browser.
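A browser-side polling-rate estimate of this kind usually works from the gaps between movement events. The sketch below assumes timestamps taken from `e.timeStamp` on `mousemove` events; the function name is illustrative, and browsers may coalesce events, so treat the result as a rough, lower-bound style estimate.

```javascript
// Rough polling-rate estimate: median interval between movement
// events, inverted to Hz. Zero-length gaps are ignored.
function estimatePollingHz(timestamps) {
  const deltas = [];
  for (let i = 1; i < timestamps.length; i++) {
    const d = timestamps[i] - timestamps[i - 1];
    if (d > 0) deltas.push(d);
  }
  if (!deltas.length) return 0;
  deltas.sort((a, b) => a - b);
  const medianMs = deltas[Math.floor(deltas.length / 2)];
  return Math.round(1000 / medianMs);
}

// Events arriving every ~8 ms suggest roughly a 125 Hz device.
// estimatePollingHz([0, 8, 16, 24, 32]) → 125
```

Using the median rather than the mean keeps one delayed frame or coalesced burst from skewing the whole estimate, which matches the advice above about preferring stable readings over single outliers.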
Why nearby pages matter
The most useful comparison is usually not against a random peak score, but against a neighbouring timer or related input family on the same setup.
Popular guides
Mouse Polling Rate in the Browser: What a Web Estimate Can and Cannot Tell You
A stronger guide to browser polling-rate estimates, including what you can compare, what you should not compare, and how to read noisy samples.
Browser-Based Tests vs Hardware-Level Measurements: Where the Difference Matters
A fuller explanation of when browser tools are enough, when hardware-style methods matter, and how to avoid mixing the two unfairly.
Aim Practice Basics: Building a Short Browser Routine That Still Teaches Control
A more practical beginner guide to aim practice, target control, sensible routines and what browser aim drills can and cannot teach you.
CPS and Clicking Basics: What Browser Click Tests Really Tell You
A grounded introduction to CPS testing, timer families, repeatability and the difference between a useful benchmark and a random peak.