Methodology & How Scores Work

Understand how browser-based scores are produced, filtered and interpreted on the platform.

This page explains how browser-side scores are produced, what the site filters out before displaying public results, and how to compare runs without drawing misleading conclusions.

1. What the tools measure

The tools measure browser-detected input behavior: clicks counted during a timed round, time between a cue and response, words entered under a timer, or browser-observed movement samples. The site focuses on practical, repeatable testing rather than hardware certification.
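The clicks-per-second case can be reduced to a small calculation: count the click events that land inside the timed window, then divide by the round length. This is an illustrative sketch with hypothetical names (`clicksPerSecond`, `clickTimesMs`), not the site's actual code.

```typescript
// Illustrative sketch: derive CPS from browser-recorded click timestamps
// (milliseconds, relative to round start) during a timed round.
function clicksPerSecond(clickTimesMs: number[], roundMs: number): number {
  // Only clicks inside the round window count toward the score.
  const counted = clickTimesMs.filter((t) => t >= 0 && t < roundMs).length;
  return counted / (roundMs / 1000);
}

// 10 evenly spaced clicks in a 2-second round: 0, 200, ..., 1800 ms.
const times = Array.from({ length: 10 }, (_, i) => i * 200);
console.log(clicksPerSecond(times, 2000)); // 5 CPS
```

The other test families follow the same shape: a browser-observed event stream reduced to one number over a fixed window.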

2. Why scores vary

  • different mice, keyboards, switches, polling rates or displays
  • browser timing differences, focus state and system load
  • short-burst versus long-endurance timer design
  • fatigue, posture, grip and practice familiarity

3. Local history versus saved public history

Local history helps you compare your own recent attempts even when you do not save results publicly. Public saved blocks are more selective: entries that look suspicious, impossible or duplicated are filtered out so the visible score layer stays more trustworthy.

4. Public result sanity filtering

The site uses sanity thresholds by test family, suppresses suspect bot traffic and hides near-duplicate public entries from the same session window. Existing stored rows can also be re-checked retroactively and excluded from public blocks if they fail those rules.

Practical example: if a saved five-second click result implies a CPS level far beyond the family threshold, or if the same session keeps posting near-identical public rows in a tight timestamp window, those rows can be excluded from the public layer even if they remain in historical storage.
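The threshold part of that example can be sketched as a per-family ceiling check. The field names and the 25 CPS limit below are illustrative assumptions, not the site's real schema or values.

```typescript
// Illustrative per-family sanity check with an assumed CPS ceiling.
interface SavedRow {
  family: string;      // e.g. "click"
  durationSec: number; // round length
  score: number;       // raw count (clicks, etc.)
}

const maxCpsByFamily: Record<string, number> = { click: 25 }; // assumed limit

function passesSanity(row: SavedRow): boolean {
  const limit = maxCpsByFamily[row.family];
  if (limit === undefined) return true; // no rule defined for this family
  return row.score / row.durationSec <= limit;
}

passesSanity({ family: "click", durationSec: 5, score: 80 });  // 16 CPS -> true
passesSanity({ family: "click", durationSec: 5, score: 400 }); // 80 CPS -> false
```

A row that fails the check is hidden from the public layer rather than deleted, which matches the storage-versus-display split described above.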

5. How duplicate public entries are treated

The public layer is not meant to show every repeated save from the same short session window. When several rows look like near-identical repeats from the same source, the strongest representative can remain while weaker duplicates are excluded from public blocks. That keeps leaderboards readable and reduces clutter.
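One way to model "keep the strongest representative" is to collapse rows from the same session that fall inside a tight timestamp window, retaining the best score. This is a hedged sketch under assumed field names (`sessionId`, `tsMs`), not the real dedup pipeline.

```typescript
// Illustrative dedup: within one session, near-simultaneous public rows
// collapse to a single representative carrying the best score.
interface PublicRow {
  sessionId: string;
  tsMs: number;  // save timestamp in ms
  score: number;
}

function dedupe(rows: PublicRow[], windowMs: number): PublicRow[] {
  const kept: PublicRow[] = [];
  const sorted = [...rows].sort((a, b) => a.tsMs - b.tsMs);
  for (const row of sorted) {
    const prev = kept.find(
      (k) => k.sessionId === row.sessionId && row.tsMs - k.tsMs <= windowMs
    );
    if (!prev) {
      kept.push(row); // first row from this session window
    } else if (row.score > prev.score) {
      Object.assign(prev, row); // stronger duplicate replaces the weaker one
    }
  }
  return kept;
}
```

Rows from different sessions, or from the same session outside the window, are untouched; only tight repeats are merged.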

6. What happens when older rows or legacy content survive

The platform can rerun maintenance passes on existing stored rows and guide records after a deployment. That matters when an older impossible score, duplicate public entry or legacy guide row was created before the newer cleanup rules existed. In other words, the public layer is allowed to become stricter over time.
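A maintenance pass of this kind can be pictured as a re-scan of stored rows that flips visibility without deleting anything. The names and the CPS limit here are illustrative assumptions only.

```typescript
// Illustrative backfill pass: re-check already-stored rows against the
// current rules and hide failures from the public layer, keeping storage.
interface StoredRow {
  id: number;
  publicVisible: boolean;
  cps: number; // pre-computed clicks per second for this row
}

function backfillVisibility(rows: StoredRow[], maxCps: number): StoredRow[] {
  return rows.map((r) =>
    r.cps > maxCps ? { ...r, publicVisible: false } : r
  );
}
```

A legacy row saved before the rule existed fails the new check on the next pass and drops out of public blocks, while still remaining in historical storage.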

7. Fair-comparison method

  • Compare neighboring timers, not only the biggest or smallest mode.
  • Repeat the same setup before judging improvement.
  • Use medians or a narrow repeatable range rather than one peak outlier.
  • When switching devices, note the keyboard, mouse, browser and comfort conditions.

Wrong comparison: a 1-second click peak versus a 60-second endurance run on a different mouse and browser.

Better comparison: 1s to 2s, 2s to 5s, or 60-second typing to 2-minute typing on the same keyboard and same browser session.
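The "medians over peaks" advice above amounts to a one-line summary statistic. A minimal sketch, assuming a plain array of repeated scores from the same setup:

```typescript
// Illustrative: summarise repeated runs with the median so one
// outlier peak does not define "your score".
function median(scores: number[]): number {
  const s = [...scores].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

median([6.8, 7.1, 7.0, 12.4, 6.9]); // 7.0 - the 12.4 peak is ignored
```

Comparing medians across two sessions on the same device and browser is a far more stable signal of improvement than comparing single best attempts.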

8. What the site does not claim

The site does not claim to measure certified esports skill, medical reflex health or exact hardware-lab latency. It is built for browser-based practice, comparison and interpretation.