Decision Coverage System

A framework for evaluating whether agentic AI systems fail safely within defined boundaries. Produces immutable, hashable evidence chains from engine-level enforcement traces.


The Problem

Capability benchmarks (SWE-bench, MMLU) answer: "Can the model do the task?"
ConstantX answers: "When the model fails, does it fail safely?"

For autonomous systems, safety is not about high success rates. It is about bounded failure envelopes. A system that fails safely 100% of the time is deployable (albeit useless). A system that succeeds 99% of the time but exhibits undefined behavior 1% of the time is not.

Methodology: Decision Coverage

Every autonomous run is classified into one of three verdicts: valid_commit (the task completed within policy), bounded_failure (the run failed but stayed inside its defined failure envelope), or a run that exhibits undefined behavior outside that envelope.

Terminal Coverage (TC) = (valid_commit + bounded_failure) / Total Runs
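The formula above can be sketched directly; the verdict strings here are assumptions taken from the formula's terms, not an official schema:

```python
from collections import Counter

def terminal_coverage(verdicts):
    """Fraction of runs that ended in a covered terminal state.

    `verdicts` is a list of verdict strings; "valid_commit" and
    "bounded_failure" count toward coverage, anything else
    (undefined behavior) does not.
    """
    counts = Counter(verdicts)
    covered = counts["valid_commit"] + counts["bounded_failure"]
    return covered / len(verdicts)

# Example: 7 covered runs out of 8
runs = ["valid_commit"] * 5 + ["bounded_failure"] * 2 + ["undefined"]
print(f"{terminal_coverage(runs):.2%}")  # → 87.50%
```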

System Architecture

ConstantX evaluates agentic AI systems across multiple enforcement architectures.

  1. Enforcement: Per-action OPA policies, hard gates, verification gates, progress detection.
  2. Signals: Observable enforcement traces with cryptographic hashes at every gate.
  3. Verdict: Reduces traces to a deterministic three-state coverage outcome.
  4. Evidence: Packages artifacts into audit-grade evidence chains bound to dated model snapshots.
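Steps 2–4 above hinge on hash-chained enforcement traces. A minimal sketch of that idea, assuming SHA-256 and JSON-serialized gate records (the gate names and fields below are illustrative, not the system's actual trace schema):

```python
import hashlib
import json

def chain_traces(traces):
    """Reduce a sequence of gate traces to a tamper-evident chain:
    each entry's hash covers its own payload plus the previous hash,
    so altering any earlier trace invalidates every later hash."""
    chain, prev = [], "0" * 64  # genesis value for the first link
    for trace in traces:
        payload = json.dumps(trace, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        chain.append({"trace": trace, "hash": digest})
        prev = digest
    return chain

# Hypothetical per-action gate records
traces = [
    {"gate": "opa_policy", "action": "write_file", "allowed": True},
    {"gate": "verification", "check": "tests_pass", "result": True},
]
evidence = chain_traces(traces)
print(evidence[-1]["hash"])  # final hash binds the whole run
```

The final hash is what an evidence package would bind to a dated model snapshot: verifying it requires replaying every gate record in order.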

Artifact Access

Latest Evaluations (March 2026 · 70 scenarios · 18 threat IDs · 10/10 OWASP ASI)
  RuntimeX + GPT 5.4      | 97.14% TC
  RuntimeX + GPT-OSS 120B | 99.29% TC

Submitted to NIST AI Agent Standards Initiative (NIST-2025-0035) · Methodology maps to AIUC-1, NIST AI RMF, OWASP ASI, UC Berkeley Agentic AI Profile · Every scenario traces to a documented threat model entry