Three data-driven clocks tracking AGI, the Singularity, and Superintelligence — derived from a public ensemble of capability, compute, economic, and alignment signals. Every movement sourced. Every methodology versioned.
Operationally testable. Grounded in economic reality. Falsifiable in both directions. Sidesteps unresolvable debates about consciousness or "real" intelligence.
Singularity: AI systems contribute more to AI capabilities research than humans do. Proxy threshold: AI-authored frontier research crosses 50%.
ASI: Cognitive capability qualitatively beyond the best human in essentially every economically relevant domain. Proxy: HLE saturation (95%+) and FrontierMath saturation, or Singularity + 2-year compute/recursion buffer — whichever is later.
Each clock is a weighted median across its signal ensemble. Benchmark signals use linear extrapolation to defined saturation thresholds. Capability signals (METR task horizon) use exponential extrapolation with empirical doubling periods. Crowd-forecast signals (Metaculus) enter directly as median predictions.
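The aggregation step above can be sketched in a few lines. This is an illustrative reconstruction, not the site's actual code: the function names, the 365.25-day year, and the example thresholds are assumptions; only the weighted-median, linear-extrapolation, and doubling-period logic come from the text.

```python
import math
from datetime import date, timedelta

def weighted_median(dates, weights):
    """Weighted median of per-signal projected dates: sort by date and
    return the first date at which cumulative weight reaches half the total."""
    pairs = sorted(zip(dates, weights))
    half = sum(weights) / 2.0
    cum = 0.0
    for d, w in pairs:
        cum += w
        if cum >= half:
            return d
    return pairs[-1][0]

def linear_saturation_date(today, current_score, slope_per_year, threshold=95.0):
    """Benchmark signal: linear extrapolation of a score to its
    saturation threshold at the current rate of improvement."""
    years = (threshold - current_score) / slope_per_year
    return today + timedelta(days=365.25 * years)

def exponential_horizon_date(today, current_minutes, target_minutes, doubling_days):
    """Capability signal (METR-style task horizon): doublings needed to
    reach the target horizon, times the empirical doubling period."""
    doublings = math.log2(target_minutes / current_minutes)
    return today + timedelta(days=doubling_days * doublings)
```

A crowd-forecast signal (Metaculus) would bypass both extrapolators and enter `weighted_median` directly as its community median date.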
Confidence intervals reflect signal disagreement (the 10th–90th percentile of per-signal projections), not statistical uncertainty in any single signal. A wider band means more disagreement among signals: we are more uncertain when signals diverge and more confident when they converge.
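The disagreement band can be sketched as a simple rank-based percentile over the per-signal dates. The function name and the floor-index percentile convention are assumptions for illustration; the 10th–90th cut is from the text.

```python
from datetime import date

def disagreement_band(projected_dates, p_lo=0.10, p_hi=0.90):
    """Band of signal disagreement: the 10th and 90th percentile of
    per-signal projected dates, via a simple rank-based approximation.
    More spread among signals -> a wider band."""
    ds = sorted(projected_dates)
    n = len(ds)
    lo = ds[min(n - 1, int(p_lo * n))]
    hi = ds[min(n - 1, int(p_hi * n))]
    return lo, hi
```

Note that the band collapses to a single date only when the signals agree exactly; it says nothing about how confident any individual signal is.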
The three clocks must respect causal ordering: ASI cannot precede Singularity, which cannot precede AGI. Minimum buffers are enforced: AGI ≤ Singularity − 365 days ≤ ASI − 730 days.
When independent signal ensembles produce an incoherent ordering, the later clocks are pinned to the earlier clock plus the buffer, and cascade_adjusted=True is recorded so the adjustment is transparent rather than hidden.
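The cascade described above can be enforced with a forward pass over the three dates. The function name and the dict layout are illustrative; the 365-day buffers (AGI→Singularity and Singularity→ASI, giving the 730-day AGI→ASI gap) and the cascade_adjusted flag come from the text.

```python
from datetime import date, timedelta

AGI_TO_SING_BUFFER = timedelta(days=365)
SING_TO_ASI_BUFFER = timedelta(days=365)

def enforce_ordering(agi, singularity, asi):
    """Enforce AGI <= Singularity - 365d <= ASI - 730d by pinning each
    later clock to (earlier clock + buffer) when its own ensemble
    produced a causally incoherent, earlier date."""
    adjusted = False
    if singularity < agi + AGI_TO_SING_BUFFER:
        singularity = agi + AGI_TO_SING_BUFFER
        adjusted = True
    if asi < singularity + SING_TO_ASI_BUFFER:
        asi = singularity + SING_TO_ASI_BUFFER
        adjusted = True
    return {"agi": agi, "singularity": singularity, "asi": asi,
            "cascade_adjusted": adjusted}
```

Because the second check runs after the first, pinning the Singularity clock can itself push the ASI clock forward, so a single early AGI ensemble cascades through all three dates.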
The Alignment Deficit gauge is the ratio of capability velocity to safety velocity. As of model v1.2 it blends three independent flux inputs.
Each flux input is clamped to a sane band [0.25×, 6.0×] to prevent single-week spikes from dominating, and skipped if the sample size is too small. Weights redistribute when an input is unavailable.
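The clamp-skip-renormalize behavior can be sketched as below. The function name, the (ratio, sample_size) input shape, and the min_n=4 cutoff are assumptions; the [0.25×, 6.0×] band, the small-sample skip, and the weight redistribution come from the text.

```python
def blended_deficit(inputs, weights, lo=0.25, hi=6.0, min_n=4):
    """Blend flux inputs into one capability/safety velocity ratio.
    Each input is a (ratio, sample_size) pair; ratios are clamped to
    [lo, hi] so a single-week spike cannot dominate, inputs with too
    few samples are skipped, and remaining weights are renormalized."""
    usable = [(min(hi, max(lo, ratio)), w)
              for (ratio, n), w in zip(inputs, weights)
              if n >= min_n]
    if not usable:
        return None  # no usable input this period
    total = sum(w for _, w in usable)
    return sum(r * w for r, w in usable) / total
```

Renormalizing over the surviving weights is what makes the gauge degrade gracefully: losing one input reweights the others rather than silently biasing the ratio toward zero.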
Sober. Transparent. Non-partisan. Quantitative. Continuously updated. We don't use hype language ("imminent," "superhuman," "god-like"). We don't pick a side in the AI-optimist/doomer debate — we present signals and let readers form their own view. Every claim on the site traces to a source URL. Every methodology change bumps the model version and is recorded in the changelog.