Adaptive (self-learning / self-modifying) AI amplifies a significant stability problem, and it is time for an engineering solution. As AI systems operate with greater autonomy and bounded self-adjustment, failures become repeatable dynamics: reward hacking, tool fabrication, calibration drift, goal drift, resource runaway, and policy oscillation.

The AI Stability Imperative — Supplement One is a stand-alone blueprint for building predictable, stable, adaptive AI systems. This volume provides implementable preliminary design specifications (not turnkey code) for teams that cannot afford instability in production.

Inside you'll find:

- Two Implementation Paths: a faster conceptual overlay (Alternative 1) or a more testable six-module Layer-1 decomposition (Alternative 2).
- Concrete Control Artifacts: PolicyBundles, TemporalLeases, CapabilityTokens, and their associated enforcement pathways.
- Enforcement Patterns: evidence verification (tool/capability claim checking), graceful degradation under pressure, and trust-tier routing at the perimeter.
- Implementation Success/Assurance: V&V gates, failure-mode checks, and red-team patterns aligned to Stability, Safety, Security, Reliability, Integrity, and Quality.

Built to stand alone: condensed baseline architecture primers and implementation guidance are included in the appendices, so you can begin applying the framework immediately.

Public-domain defensive publication: the original technical designs in Appendix F are released under CC0 (Public Domain) to remove patent barriers for the engineering community.