The Singularity Clock.
LIVE · BUILT 14 APR 2026 · 18:00 UTC
Edition 2026-04-14 · Model v1.2.0 · AGI · Singularity · ASI

The world has never had a calibrated instrument for the most consequential transition in human history. This one runs in real time.

Three data-driven clocks tracking AGI, the Singularity, and Superintelligence — derived from a public ensemble of capability, compute, economic, and alignment signals. Every movement sourced. Every methodology versioned.

Clock 1 — The AGI Clock PRIMARY
v1.2.0 · ensemble of 5 signals · updated 2026-04-14
8 mo 5 d
EST. ARRIVAL Dec 15, 2026
80% CI AUG 2026 – APR 2027
SIGNALS 5
Top weighted signals
METR task horizon (→40hr) · projects APR 2027
SWE-bench Verified (→95%) · projects SEP 2026
Humanity's Last Exam (→80%) · projects NOV 2026
ARC-AGI-2 (→90%) · projects AUG 2026
FrontierMath (→75%) · projects DEC 2026
Arrival window
AUG 2026 – APR 2027
Clock 2 — The Singularity
6 yr 7 mo
EST. NOV 2032 · 2 SIGNALS
Recursive self-improvement inflection — the moment AI contributes more to AI capabilities research than humans do.
Clock 3 — Superintelligence
7 yr 7 mo
EST. NOV 2033 · 3 SIGNALS
Cognitive capabilities qualitatively beyond the best human in essentially every economically relevant domain.
Alignment Deficit
Keeping pace
Capability vs. safety velocity ratio: 1.19×
Scale: Keeping pace · Moderate · Elevated · Critical
Capability research is currently outpacing interpretability and alignment research by a factor of 1.19×.

Live Signal Readings

11 signals tracked · open source methodology
METR Task Horizon
11.4 hr
+38% QoQ
SWE-bench Verified
88.2%
+4.1pp QoQ
GPQA Diamond
94.8%
+2.3pp QoQ
ARC-AGI-2
62.4%
+18pp QoQ
FrontierMath
41.7%
+12pp QoQ
Humanity's Last Exam
38.9%
+16pp QoQ
Frontier training FLOPs
8.2e26
2.1× YoY
AI R&D capex (global)
$512B
+64% YoY
AI-authored research
14.2%
+5.3pp YoY
Interpretability index
0.31
-0.02 QoQ
Frontier lab safety headcount
12.4%
-1.1pp YoY

Signals from the Field

200 arXiv papers · 178 lab/org posts · LLM-classified
2026-04-13
CAPABILITY LEAP
LangFlow: Continuous Diffusion Rivals Discrete in Language Modeling
First continuous diffusion language model rivaling discrete counterparts; achieves competitive perplexity on benchmarks via novel ODE bounds and flow matching.
arXiv · arXiv
Mag 4
2026-04-13
CAPABILITY LEAP
SWE-AGILE: A Software Agent Framework for Efficiently Managing Dynamic Reasoning Context
New SOTA on SWE-Bench-Verified for 7B-8B models; novel dynamic reasoning context management solving context-explosion problem for agentic reasoning.
arXiv · arXiv
Mag 4
2026-04-13
CAPABILITY LEAP
AffordSim: A Scalable Data Generator and Benchmark for Affordance-Aware Robotic Manipulation
First scalable simulation framework for affordance-aware robotic manipulation; novel integration of 3D affordance prediction enables previously unsolvable manipulation tasks.
arXiv · arXiv
Mag 4
2026-04-13
CAPABILITY LEAP
Playing Along: Learning a Double-Agent Defender for Belief Steering via Theory of Mind
Novel emergent bidirectional relationship between Theory of Mind and deception capability; frontier models struggle while RL-trained agents systematically outperform them.
arXiv · arXiv
Mag 4
2026-04-13
CAPABILITY LEAP
RationalRewards: Reasoning Rewards Scale Visual Generation Both Training and Test Time
Structured reasoning rewards unlock latent capabilities in visual generators; test-time critique-refine matches RL fine-tuning without parameter updates; significant efficiency and capability advance.
arXiv · arXiv
Mag 4
2026-04-13
CAPABILITY LEAP
Continuous Adversarial Flow Models
Adversarial flow models substantially improve ImageNet FID (8.26→3.63) and text-to-image generation; meaningful generative capability advance.
arXiv · arXiv
Mag 4
2026-04-02
MODEL LAUNCH
Gemma 4: Byte for byte, the most capable open models
Major open model family release with advanced reasoning and agentic capabilities.
Lab · Google DeepMind
Mag 4
2026-04-02
MODEL LAUNCH
Welcome Gemma 4: Frontier multimodal intelligence on device
Gemma 4 frontier multimodal model launch; major version advancement.
Lab · Hugging Face
Mag 4
2026-04-01
MODEL LAUNCH
Holo3: Breaking the Computer Use Frontier
Launch of Holo3, a frontier agentic model with computer-use capability.
Lab · Hugging Face
Mag 4
2026-03-31
POLICY
Accelerating the next phase of AI
Major funding round announcement with strategic investment and expansion commitment.
Lab · OpenAI
Mag 4

Recent Clock Movements

Last 90 days
Apr 14, 2026
ALIGN
Alignment Deficit reading moved Elevated → Keeping pace (3.0× → 1.29×). This is a methodology effect from the v1.2 blend introduction, not a real-world improvement: the structured ratio (1.28×) is now diluted with arXiv flux (1.35×) and frontier-release flux (1.28×) at 20% weights each. Three known biases all pull the blend low:
  • the structured formula compares capability flow to safety stock, hiding that safety headcount is shrinking 1.1pp/yr while capex grows 64% YoY;
  • frontier-release flux undercounts the safety side because Anthropic, Apollo, METR, Epoch, Meta, Mistral, and xAI lack discoverable RSS feeds;
  • arXiv flux applies a 4× discount to incremental capability work that is asymmetric with the alignment-side discount.
A more honest reading is likely Moderate (~1.8–2.4×). Restoring safety-feed coverage and symmetrizing the arXiv discount are tracked for v1.2.1; the structured-formula rework is tracked for v1.3. Source: Methodology doc: alignment blend transparency · 2026-04-14
gauge change
Apr 14, 2026
ALIGN
Frontier-feed coverage update: dropped 7 lab feeds that 404 with no discoverable RSS (Anthropic, Apollo, Epoch, Meta, Mistral, xAI, plus broken METR URL). Added 3 working safety-side feeds: METR (correct URL: metr.org/feed.xml), MIRI, and LessWrong curated. Added Google Research blog. Net effect: safety-side feed coverage roughly doubles, partially offsetting the structural undercount flagged in the prior entry. Source: data/frontier_feeds.json · 2026-04-14
gauge change
Apr 14, 2026
ALIGN
Methodology v1.2.0: Alignment Deficit blend now includes LLM-classified frontier-lab and safety-org RSS feeds (trailing 90 days). Captures launches, capability claims, and safety/policy posts that don't appear on arXiv. Final blend: 60% structured + 20% arXiv flux + 20% frontier-release flux. New 'Signals from the Field' site section surfaces highlighted papers and lab posts. Source: Methodology doc: frontier release monitor · 2026-04-14
gauge change
Apr 14, 2026
ALIGN
Methodology v1.1.0: Alignment Deficit now blends LLM-classified arXiv capability/alignment flux (trailing 30 days, cs.AI/cs.LG/cs.CL) with existing structured inputs at 30% weight. Grounds the gauge in observed research output, not only lab-reported headcount and spend. Clock projections unchanged. Source: Methodology doc: arXiv classifier · 2026-04-14
gauge change
Apr 12, 2026
AGI
AGI Clock moved 11 days closer following METR task-horizon update — autonomous task completion length crossed 11 hours, passing the 10-hour inflection 4 months ahead of model expectations. Source: METR Task Horizon v14 release · 2026-04-12
-11 days
Mar 28, 2026
SING
Singularity Clock moved 6 days closer after Anthropic publication on automated red-teaming showed AI-driven interpretability contributions at frontier lab level. Source: Anthropic research post · 2026-03-28
-6 days
Mar 19, 2026
ALIGN
Alignment Deficit gauge moved from Moderate to Elevated — capability/safety velocity ratio crossed 3.0× for first time since index inception. Source: TSC composite index · 2026-03-19
gauge change
Mar 4, 2026
ASI
ASI Clock moved 3 days closer following DeepMind announcement of novel algorithm discovery in combinatorial optimization — first confirmed superhuman contribution to an active research frontier. Source: DeepMind Nature paper · 2026-03-04
-3 days
Feb 21, 2026
AGI
AGI Clock moved 8 days closer after SWE-bench Verified saturation projections updated — extrapolation model now projects 95% by Q2 2027. Source: Epoch AI forecast update · 2026-02-21
-8 days

Methodology

v1.2.0 · Open methodology · Versioned model

What we count to — AGI

A single AI system that can autonomously perform the full scope of economically valuable cognitive work of a median human knowledge worker — sustained across a standard 40-hour work-week — at or above human quality, without task-specific fine-tuning and without human correction.

Operationally testable. Grounded in economic reality. Falsifiable in both directions. Sidesteps unresolvable debates about consciousness or "real" intelligence.

What we count to — Singularity & ASI

Singularity: AI systems contribute more to AI capabilities research than humans do. Proxy threshold: AI-authored frontier research crosses 50%.

ASI: Cognitive capability qualitatively beyond the best human in essentially every economically relevant domain. Proxy: HLE saturation (95%+) and FrontierMath saturation, or Singularity + 2-year compute/recursion buffer — whichever is later.

How we project

Each clock is a weighted median across its signal ensemble. Benchmark signals use linear extrapolation to defined saturation thresholds. Capability signals (METR task horizon) use exponential extrapolation with empirical doubling periods. Crowd-forecast signals (Metaculus) enter directly as median predictions.

Confidence intervals reflect signal disagreement (10th–90th percentile of per-signal projections), not statistical uncertainty in any single signal. The band widens when signals diverge and tightens when they converge.
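As a hedged sketch of the two steps above (function and variable names are illustrative, not from the project's codebase), benchmark signals extrapolate linearly to their saturation threshold, then the ensemble reduces to a weighted median with a percentile disagreement band:

```python
import numpy as np

def linear_eta(t0, v0, t1, v1, threshold):
    """Linear extrapolation: the date (as an ordinal) at which a
    benchmark reaches its saturation threshold, from two readings."""
    rate = (v1 - v0) / (t1 - t0)          # score points per day
    return t1 + (threshold - v1) / rate   # days remaining at current rate

def ensemble_projection(arrival_ordinals, weights):
    """Weighted median of per-signal arrival dates, plus the
    10th-90th percentile band used as the disagreement interval."""
    d = np.asarray(arrival_ordinals, dtype=float)
    w = np.asarray(weights, dtype=float)
    order = np.argsort(d)
    d, w = d[order], w[order]
    cum = np.cumsum(w) / w.sum()
    median = d[np.searchsorted(cum, 0.5)]  # first date with cum. weight >= 0.5
    lo, hi = np.percentile(d, [10, 90])    # band reflects signal spread
    return median, (lo, hi)
```

Exponential signals such as METR task horizon would substitute a doubling-period model for `linear_eta`; crowd forecasts enter `ensemble_projection` directly as dates.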

Cascade ordering

The three clocks must respect causal ordering: ASI cannot precede Singularity, which cannot precede AGI. Minimum buffers are enforced: AGI ≤ Singularity − 365 days ≤ ASI − 730 days.

When independent signal ensembles produce an incoherent ordering, the later clocks are pinned to the earlier clock plus the buffer, and cascade_adjusted=True is recorded so the adjustment is transparent rather than hidden.
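A minimal sketch of that pinning rule (names are illustrative; the buffers are read off the inequality above, with each clock trailing the previous by at least 365 days):

```python
from datetime import date, timedelta

# AGI <= Singularity - 365 d <= ASI - 730 d
AGI_SING_BUFFER = timedelta(days=365)
SING_ASI_BUFFER = timedelta(days=365)

def enforce_cascade(agi: date, sing: date, asi: date):
    """Pin later clocks to the earlier clock plus its buffer when the
    independent ensembles produce an incoherent ordering; record the
    adjustment so it is transparent rather than hidden."""
    cascade_adjusted = False
    if sing < agi + AGI_SING_BUFFER:
        sing = agi + AGI_SING_BUFFER
        cascade_adjusted = True
    if asi < sing + SING_ASI_BUFFER:
        asi = sing + SING_ASI_BUFFER
        cascade_adjusted = True
    return agi, sing, asi, cascade_adjusted
```

A coherent ordering passes through untouched with `cascade_adjusted=False`; only incoherent ensembles are adjusted.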

Alignment Deficit blend (v1.2)

The Alignment Deficit gauge is the ratio of capability velocity to safety velocity. As of model v1.2 it blends three independent inputs:

  • 60% — structured composite (R&D capex YoY, frontier-lab safety headcount %, interpretability index)
  • 20% — arXiv flux (LLM-classified cs.AI/cs.LG/cs.CL papers, trailing 30 days)
  • 20% — frontier-release flux (LLM-classified frontier-lab and safety-org RSS, trailing 90 days)

Each flux input is clamped to a sane band [0.25×, 6.0×] to prevent single-week spikes from dominating, and skipped if the sample size is too small. Weights redistribute when an input is unavailable.
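Those rules can be sketched as follows (function name and signature are illustrative assumptions, not the project's actual code). With the figures in the Apr 14 changelog entry (structured 1.28×, arXiv 1.35×, frontier 1.28×), this sketch reproduces the published 1.29× reading:

```python
FLUX_CLAMP = (0.25, 6.0)  # sane band for flux inputs

def blend_alignment_deficit(structured, arxiv_flux=None, frontier_flux=None):
    """v1.2 blend: 60% structured + 20% arXiv flux + 20% frontier-release
    flux. Flux inputs are clamped to [0.25, 6.0]. Pass None for an input
    that is unavailable or under-sampled; its weight then redistributes
    proportionally across the remaining inputs."""
    lo, hi = FLUX_CLAMP
    clamp = lambda x: max(lo, min(hi, x))
    parts = [(0.60, structured)]
    if arxiv_flux is not None:
        parts.append((0.20, clamp(arxiv_flux)))
    if frontier_flux is not None:
        parts.append((0.20, clamp(frontier_flux)))
    total = sum(w for w, _ in parts)
    return sum(w * v for w, v in parts) / total
```

Dropping both flux inputs falls back to the structured composite alone, which is the redistribution behavior described above.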

Editorial posture

Sober. Transparent. Non-partisan. Quantitative. Continuously updated. We don't use hype language ("imminent," "superhuman," "god-like"). We don't pick a side in the AI-optimist/doomer debate — we present signals and let readers form their own view. Every claim on the site traces to a source URL. Every methodology change bumps the model version and is recorded in the changelog.

© 2026 THE SINGULARITY CLOCK · MODEL v1.2.0 · DATA CC-BY 4.0
LAST UPDATED 14 APR 2026 · 18:00 UTC