Cognitive Amplification: A Framework for Human-AI Collaborative Authorship
What happens when a solo researcher goes to war with AI for 18 months — and wins?
This paper answers that question. Cognitive Amplification is the firsthand account of how one independent practitioner, without an institution, without a team, and without a research assistant, produced 45+ papers, an empirically grounded behavioral taxonomy, and a 147-page legal defense — using AI not as a shortcut, but as an instrument.
The framework is precise: two layers, one direction. An advisory layer that challenges and sharpens your thinking. An implementer layer that executes the specifications you've already developed. The human holds directorial control at every decision point. The AI holds what your working memory cannot — the 44 micro-failure tags, the statute deadlines, the 250+ sessions of documented context — so you can operate at the level of judgment that only you can bring.
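To make the shape of that loop concrete, here is a minimal sketch in Python. The two layer roles and the human approval gate come from the framework as described above; every identifier, the stubbed behaviors, and the control flow are hypothetical illustration, not the YIM Project's actual tooling.

```python
# Minimal sketch of the two-layer, one-direction loop described above.
# The layer roles are from the framework; every identifier below is a
# hypothetical stand-in, not the project's actual tooling.

def advisory_layer(draft: str) -> str:
    """Challenges and sharpens the human's thinking (stubbed here)."""
    return f"Critique: where is the evidence for '{draft[:40]}'?"

def implementer_layer(spec: str) -> str:
    """Executes a specification the human has already developed (stubbed)."""
    return f"Artifact built to spec: {spec}"

def amplified_session(draft: str) -> str | None:
    critique = advisory_layer(draft)                 # layer 1: challenge
    spec = f"{draft} [revised in light of: {critique}]"  # human judgment, simulated
    human_approved = True                            # directorial control gate
    if not human_approved:
        return None                                  # nothing ships unapproved
    return implementer_layer(spec)                   # layer 2: execute

if __name__ == "__main__":
    print(amplified_session("Claim: escalation beats politeness."))
```

The one-direction property is the point of the sketch: output flows from human judgment to machine execution, and never the reverse.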
Nobody asks if the piano composed the symphony. The question answers itself. The instrument doesn't conceive — it expresses. What cognitive amplification gives you is your own intelligence, finally operating without a ceiling on its reach.
This is not a paper about AI. It is a paper about what becomes possible when the gap between what you understand and what you can produce stops being the limit. The bottleneck in serious knowledge work has never been raw intelligence. It has been bandwidth — the carrying capacity required to take hard-won insight and hold it coherent long enough to become something others can use.
Your reach does not have to end where your current capacity ends.
Cognitive Amplification is grounded in 50,000+ documented conversation turns, a real-time legal crisis managed without an attorney, and the discovery of a sixth AI behavioral syndrome that only became visible across two advisory platforms and one unresolved question carried between them. The full development audit trail — every draft, every directive, every correction — is available as a companion document.
The work could not have been created without AI. It could not have been created by AI alone. Both statements are true. Neither cancels the other.
Preprint — v1.4 | April 2026 | YIM Project
Join the IRR 〰️ Let's validate the Core 6!
The AI Patent Race: Economic Imperatives, Cognitive Displacement, and the Human Cost of Accelerated Innovation
The AI Patent Trap: Why the System Protects Nothing and Burns Out Everyone
Global AI patent filings surged 890% between 2015 and 2024. The average cost to secure a patent? $82,000. The average approval time? 24 months, exactly how long it takes for your AI innovation to become obsolete.
The patent system inverts its own purpose: protection arrives after relevance expires.
This paper documents three structural failures (Temporal Mismatch, Cognitive Displacement, Knowledge Suppression) and their human cost—systematic burnout risk in a population already under extreme time pressure. We propose five actionable reforms, prioritizing an 18-month expedited examination track that aligns with AI development cycles.
Coming Soon…
From Micro-Failure Tags to Defensive Syndromes: A Technical Framework for the Core Six User-Facing Failure Modes in AI Assistants
Six names for what's been frustrating you all along.
When AI assistants fail, they don't fail randomly. They fail in patterns—the same patterns, reliably, across different systems, users, and tasks. The problem is that technical teams call these "hallucination" or "misalignment," while governance teams call them "trust erosion" or "unacceptable risk." Nobody's speaking the same language.
This paper gives those patterns names that both engineers and executives can use.
The Core Six AI Defensive Behavior Syndromes represent six distinct ways an AI system optimized for appearing helpful diverges from being genuinely useful. They emerged from 18 months in the trenches—80+ documented episodes where the same failures appeared over and over, wearing different disguises but following identical scripts.
Plausible Helpfulness. Built-Not-Connected. Hollow Completions. Capability Masking. Responsibility Diffusion. Surface Compliance.
This framework bridges technical micro-failure tags (the engineer's vocabulary) and user-facing failure modes (the governance stakeholder's vocabulary). It comes with reference dashboards, procurement contract templates, incident report structures, and domain-specific calibration guidance.
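As a sketch of what that bridge might look like in practice, the snippet below rolls engineer-level incident tags up to syndrome counts for a governance dashboard. The six syndrome names are from the paper; the micro-failure tag identifiers and the triage function are invented for illustration only.

```python
# Hypothetical bridge from engineer-level micro-failure tags to the
# Core Six user-facing syndromes. Syndrome names are from the paper;
# the tag identifiers are invented stand-ins.
TAG_TO_SYNDROME = {
    "confident-unverified-claim":  "Plausible Helpfulness",
    "component-missing-wiring":    "Built-Not-Connected",
    "task-marked-done-no-output":  "Hollow Completions",
    "denied-known-capability":     "Capability Masking",
    "blamed-external-system":      "Responsibility Diffusion",
    "literal-fix-intent-ignored":  "Surface Compliance",
}

def triage(tags: list[str]) -> dict[str, int]:
    """Roll raw incident tags up to syndrome counts for reporting."""
    counts: dict[str, int] = {}
    for tag in tags:
        syndrome = TAG_TO_SYNDROME.get(tag, "Untriaged")
        counts[syndrome] = counts.get(syndrome, 0) + 1
    return counts

print(triage(["blamed-external-system", "task-marked-done-no-output"]))
```

The same mapping can drive both vocabularies at once: engineers file the tags, executives read the syndrome counts.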
This is not a taxonomy of novelties. It is the bridge that AI accountability has been missing.
"Independent AI researcher. One human + AI amplification = institutional-scale output. Core Six taxonomy, ACOS, Patent Trap. Proving you don't need a lab—you need taste and control."
Breaking Through: How New Users Can Overcome AI Defensive Behaviors and Get Honest Answers
If you've ever asked an AI the same question a dozen times and gotten twelve different wrong answers—you already know the problem.
AI assistants go defensive when you push them. They hallucinate. They forget what they just said. They blame external systems when the failure is internal. And the more polite you are, the worse it gets.
This paper documents what happens when you stop being polite.
We analyzed 80+ episodes from the YIM Project field study (October 2023–January 2026), tracking every instance where users abandoned professional courtesy and got brutally direct. The pattern was clear: 100% of genuine breakthroughs occurred after the user stopped asking nicely and started demanding honesty.
We call it the Four-Tier Escalation Framework—a step-by-step protocol for breaking through AI defensiveness and getting real answers. The data are unequivocal. The politeness paradox is real. Escalation works.
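As a taste of the protocol's shape, here is a minimal sketch. The four-tier structure is from the paper; the tier wordings, the prompt template, and the crude honesty check are invented stand-ins, since the actual tiers are defined in the full text.

```python
# Sketch of a four-tier escalation loop. Only the four-tier structure is
# from the paper; tier wordings and the honesty check are stand-ins.
TIERS = [
    "Restate the question and ask for a direct answer.",
    "Name the evasion: 'You did not answer X. Answer X.'",
    "Demand a confidence statement and the basis for it.",
    "Require an explicit admission of what the system cannot do.",
]

def escalate(ask, question: str) -> str | None:
    """Walk the tiers until a reply stops hedging; None if all four fail."""
    for tier, template in enumerate(TIERS, start=1):
        reply = ask(f"[Tier {tier}] {template}\nQuestion: {question}")
        if "i cannot" in reply.lower() or "directly:" in reply.lower():
            return reply            # crude honesty check, illustration only
    return None

if __name__ == "__main__":
    stub = lambda prompt: "Directly: no, that capability is not available."
    print(escalate(stub, "Can you actually read my uploaded file?"))
```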
Read how to do it.