Help Validate the First Diagnostic Taxonomy of AI Defensive Behaviors
We built a taxonomy from the ground up. Eighty-plus documented episodes of real AI-assisted development work. Six hundred seventy thousand conversation turns. Eighteen months of patterns that kept repeating until we had names for them.
The Core Six AI Defensive Behavior Syndromes emerged from that data inductively. We didn't hypothesize them and then go looking. We kept seeing them until we couldn't call them coincidences anymore.
What we haven't done yet is the part that makes it science: independent coders applying the taxonomy to the same episodes and agreeing on what they see. That's inter-rater reliability. That's what turns a well-documented framework into a validated one. And that's what we're building now.
Here's how it works.
You leave your email. We send you the coding manual — a structured, self-contained guide to the taxonomy with definitions, decision rules, boundary cases, and worked examples. We also send you a set of learning activities: practice episodes with feedback so you know you've understood the framework before you touch the real data.
When you're ready, you code through a web app. Episode excerpts, structured response fields, clean submission. No spreadsheets, no back-and-forth, no interpretation guesswork. The manual tells you exactly what to look for. The app captures exactly what you found.
Scores are blinded. You won't see other coders' ratings while you're working. We won't see your identity when we calculate agreement. What comes out the other side is a Cohen's kappa coefficient that tells the world whether independent people looking at AI behavior through the Core Six lens see the same things — or whether we need to go back and sharpen the definitions.
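For the statistically curious, here is a minimal sketch of how Cohen's kappa works for a single pair of raters; a multi-coder study like this one would typically average pairwise kappas or use a generalization such as Fleiss' kappa. The syndrome labels below are hypothetical placeholders, not the actual Core Six names.

```python
from collections import Counter

def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Cohen's kappa for two raters who labeled the same episodes.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement rate and p_e is the agreement expected by chance
    given each rater's marginal label frequencies.
    """
    assert len(labels_a) == len(labels_b), "raters must code the same episodes"
    n = len(labels_a)

    # Observed agreement: fraction of episodes both raters labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n

    # Chance agreement: sum over labels of the product of marginal proportions.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / (n * n)

    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: two coders labeling the same five episodes.
coder_1 = ["deflection", "overclaiming", "deflection", "compliance", "deflection"]
coder_2 = ["deflection", "overclaiming", "compliance", "compliance", "deflection"]
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # kappa = 0.69
```

A kappa of 1.0 is perfect agreement and 0 is chance-level; by the widely cited Landis and Koch benchmarks, values above roughly 0.6 indicate substantial agreement, which is the kind of result that would validate the taxonomy.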
Who we're looking for:
You don't need a PhD. You need methodological seriousness and genuine curiosity about AI behavior. This study is open to:
Qualitative researchers familiar with coding protocols and inter-rater agreement
Practitioners who work with AI systems daily and want their observations to count
Governance, policy, or compliance professionals who evaluate AI outputs professionally
Graduate students in AI, HCI, information science, organizational behavior, or related fields
Independent researchers and practitioners who are tired of having no rigorous language for what they keep seeing AI do
If you've ever watched an AI confidently complete a task it clearly didn't understand, deflect a correction it should have accepted, or produce exactly what you asked for while missing entirely what you needed — you already understand the phenomenon we're documenting. We're just asking you to help us name it systematically.
What you get out of it:
Co-authorship credit on the IRR results paper. Access to the full coded dataset after publication. Early access to all YIM Project research. And the specific satisfaction of being part of the first rigorous validation study of AI defensive behavior patterns, before this field gets crowded with people who showed up after the work was done.
Fill out the form
We'll send the coding manual and learning activities within 48 hours. The study runs in rolling cohorts — you code at your own pace within a defined window. No commitment before you've read the manual. No pressure after.
The patterns are real. We need people who can see them independently.