A Human Signal Initiative · Govern the Machine
The Machine Won't Govern Itself.
AI is rewriting the rules of every institution — hiring, healthcare, justice, education, national security. No vendor is coming to save you. No policy paper will protect your people. The only signal that matters is yours.
Take the Pledge
The Noise Is Winning.
Every day, AI systems are deployed into institutions without governance, without accountability, without a single human who can explain what the system did or why. This is not innovation. This is institutional failure.
The consumption economy treats AI as a procurement problem. We treat it as a systems design problem — because when the system fails, it's the operator who answers for it.
You are the signal. The question is whether anyone can see you.
Every Failure Is a Practice Scenario.
These are real AI incidents — documented, verified, and scored. From the Project Cerebellum AI Incidents Tracker.
Grammarly's AI Allegedly Used Journalists' Names Without Consent
Grammarly's AI Expert Review feature was accused of generating editing suggestions under the names of journalists, authors, and academics without approval. Federal class action filed.
ChatGPT Accused of Practicing Law Without a License
Nippon Life filed a lawsuit against OpenAI in Chicago, accusing ChatGPT of practicing law without a license in an Illinois disability case.
Autonomous Agent Obtained Unauthorized Access to McKinsey's AI Platform
CodeWall's autonomous agent reportedly exploited vulnerabilities in McKinsey's Lilli AI platform, potentially gaining unauthorized database access.
State AI Phone System Failed Spanish-Language Callers
For months, Washington State DOL's AI phone system answered callers who selected Spanish with AI-generated English responses spoken in an accented voice.
AI Sepsis Alert Nearly Caused Harm to Dialysis Patient
An AI-generated sepsis alert prompted potentially inappropriate IV fluid administration for a dialysis patient, averted only by clinician intervention.
DeepSeek, Moonshot, MiniMax Caught Illicitly Distilling Claude
Anthropic discovered widespread use of fraudulent accounts and proxy services to illicitly distill Claude's capabilities at scale.
Source: AI Incident Database (AIID) · McGregor, S. (2021) IAAI-21
"The future requires human signal to overcome artificial noise. The machine must not win." — Human Signal Manifesto
The Signal Architecture
Four commitments. No theater. Aligned with NIST AI RMF, ISO/IEC 42001, and the Trusted AI Model (TAIM).
Presence
Be visible. Governance requires a human who can be named, reached, and held accountable. Presence is the opposite of automation on autopilot.
Discipline
Cut the noise. Institutional resilience requires operators who can distinguish signal from hype — and act on what's real, not what's trending.
Accountability
Own the outcome. When the system fails, someone answers. Not the vendor. Not the model. The operator. That's you.
Proof of Work
Show your work. Trust isn't declared — it's demonstrated through transparent governance, auditable decisions, and frameworks that survive contact with reality.
"We pledge to govern the machine — not with theater, but with discipline. We commit to restoring human visibility in systems designed for observation, to building governance infrastructure before the regulatory actions arrive, and to proving our work through transparent, auditable, and accountable AI stewardship. We are the signal. The machine will not win."
Source: Human Signal · Visible Human
Your Name. Your Signal. Your Stand.
This isn't a petition. It's a presence signal. Add your name to the operators, auditors, and governance leaders who refuse to let the machine win.
You've Made Your Signal Visible.
Thank you for standing up. You've joined a growing network of operators, auditors, and governance leaders who refuse to let the machine win. Share this with your network — the louder the signal, the harder it is to ignore.