Why Drift Is Inevitable (And How the Architecture Protects Itself)
Jan 27, 2026
I'm writing this before the first session happens. That timing matters.
Most people build governance after problems emerge. They wait until something breaks, then scramble to fix it. By then, the institution has already learned the wrong habits. Corrective governance becomes damage control. It tries to restore something that was never properly protected in the first place.
I've built enough systems over 35 years to know what happens when you wait. The drift starts immediately. Not because of bad intentions. Not because of incompetence. Drift is what institutions do when they encounter operational reality. Technical problems demand attention. Practical decisions accumulate. What's measurable becomes what matters. The technology colonizes the structure it was meant to serve.
So I'm designing the protections now, before pressure arrives, when principles can still be embedded architecturally rather than enforced reactively. This is the only time you can build governance properly. After drift starts, you're already defending. Before drift starts, you're building foundations.
What Drift Looks Like
I've watched it happen in every system I've operated. The pattern is consistent.
You start with a clear mission. Parents need a place where understanding forms. Coaches need structure to articulate their reasoning. Players need capacity for self-observation. The interpretive layer is missing. You're building it. The purpose is sharp.
Then the first technical question arrives. What kind of recording equipment should we use. Seems innocent. Seems practical. You research options. Cameras. Microphones. Audio processing. Suddenly you're comparing specs and thinking about capture quality. The conversation shifts from "how do we protect conditions where understanding forms" to "how do we document what happens."
The technology becomes the point instead of the tool.
A few weeks later, someone asks about session format. Should we standardize the structure. Should we create templates. Should we develop metrics for tracking progress. All reasonable questions. All dangerous. Because the moment you optimize for consistency, you've stopped protecting emergence. The moment you measure outcomes, you've started managing to the measurement instead of attending to the reality beneath it.
I watched this exact pattern destroy a coaching development program I built in 2012. Started with deep observation and individualized mentorship. Worked beautifully for the first cohort. Then we tried to scale it. Created rubrics for evaluation. Standardized the curriculum. Built assessment tools. Within two years, coaches were performing for the rubric instead of developing their own judgment. The program succeeded by every metric we created. It failed at the only thing that mattered.
Six months in, you look around and realize the Founders' Room has become a performance capture system with structured facilitation protocols and outcome tracking. It works. It's professional. It's measurable. And it has nothing to do with what you built it for.
That's drift. It doesn't announce itself. It arrives through reasonable decisions that accumulate into mission corruption.
The Forces That Cause Drift
Three pressures push every institution away from its founding purpose.
Technical problems demand solutions. Real operational challenges require real answers. The microphone stops working. You need to fix it. The scheduling system gets confusing. You need to clarify it. Each technical decision feels small. But technical thinking is a language. The more you speak it, the more you think in it. Eventually, technical optimization replaces mission protection without anyone noticing the substitution.
Measurable outcomes dominate attention. Sessions happen. Participants leave. Did it work. You want to know. So you start tracking things. Session counts. Participant satisfaction. Return rates. Framework adoption. All useful data. But data creates gravity. What gets measured gets managed. What gets managed gets optimized. Soon you're designing sessions to improve metrics rather than protecting conditions where understanding forms.
Comfortable certainty pulls harder than productive ambiguity. The Founders' Room is designed to hold tension without resolving it prematurely. That's uncomfortable. Participants want answers. Facilitators want to help. Parents want reassurance. Coaches want validation. The pull toward resolution is constant. Giving people what they want feels like good service. But if you resolve tension before understanding forms, you've turned the room into expensive therapy. The interpretive layer disappears.
These forces don't require malice. They're structural. Any institution that doesn't protect itself from these pressures will drift. The question is not whether drift will happen. The question is whether you've built protection before it arrives.
The Moral Compass
Five principles protect the Founders' Room from these forces. I call them the moral compass. Not because they're aspirational. Because they're constitutional. They define what this institution is and is not. Violating them doesn't make you a bad facilitator. It means you're doing something other than running a Founders' Room.
The learner remains whole. No fragmentation by skill, no reduction to deficit, no splitting body from mind from spirit. The developmental complexity we're navigating is integrated. The moment we start treating players as collections of skills to be optimized, we've abandoned the mission. Wholeness is not optional. It's definitional.
Attention is sacred. The scarcest resource in development is not information or technique. It's sustained, quality attention from adults who can hold complexity without collapsing it. Technology serves attention. The moment technology demands attention instead of amplifying it, the architecture has inverted. We build systems that protect human attention, not systems that consume it.
Technology serves reflection. AI provides bandwidth for human judgment. It tracks context. It surfaces patterns. It holds memory. It does not make interpretive decisions. The constitutional principle holds here: AI may propose frames; only humans make commitments. The moment AI starts deciding what matters, we've recreated the factory model with better technology. The role is fixed architecturally, not just procedurally. I'll sketch what that means in code after these five principles.
Dialogue over instruction. Understanding forms through conversation, not transmission. The facilitator does not solve problems. The AI does not prescribe solutions. The structure protects productive tension long enough for insight to emerge through multiple perspectives combining. If sessions become advice delivery, we've failed. The room exists to create conditions where understanding forms, not to provide answers.
The human arbiter decides. Every significant decision involves human judgment exercising moral reasoning. Not just technical competence. Not just pattern recognition. Moral reasoning about what matters, what's at stake, what's worth protecting. That capacity cannot be automated, optimized, or scaled through technology. It remains analog by design. The friction is the safeguard.
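For readers who think in code, here is one way "fixed architecturally, not just procedurally" could look. This is a minimal sketch, not the actual system; every name in it is hypothetical. The point is the shape: the AI's interface produces proposals and nothing else, and a commitment can only be constructed through a named human supplying their own reasoning.

from dataclasses import dataclass, field
from datetime import datetime


@dataclass(frozen=True)
class Frame:
    # An interpretive frame the AI may surface. It carries no authority.
    pattern: str          # e.g. "player self-corrects after silence"
    evidence: list[str]   # the session moments that suggested the pattern


@dataclass(frozen=True)
class Commitment:
    # A decision about what matters. Only the human path below creates one.
    frame: Frame
    decided_by: str       # a named human, never a system identifier
    reasoning: str        # the moral reasoning, in the human's own words
    decided_at: datetime = field(default_factory=datetime.now)


class SessionAssistant:
    # The AI's entire role: propose frames. There is no commit() method
    # on this class. The absence is the architecture.
    def propose_frames(self, transcript: str) -> list[Frame]:
        ...  # pattern-surfacing logic; whatever it returns, it returns Frames


def commit(frame: Frame, human: str, reasoning: str) -> Commitment:
    # The only door to a Commitment. It demands a human and their reasoning.
    if not reasoning.strip():
        raise ValueError("A commitment without human reasoning is not a commitment.")
    return Commitment(frame=frame, decided_by=human, reasoning=reasoning)

The safeguard isn't a rule the AI is asked to follow. It's a method the AI doesn't have.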
These five principles protect against the three forces. Technical problems will arrive. We'll solve them. But technology remains servant, not master. Measurable outcomes will tempt us. We'll track some things. But measurement doesn't dictate mission. Comfortable certainty will pull us. We'll resist. The architecture holds tension by design.
The Central Question
When I feel drift starting, there's one question that brings me back.
How do we scale attention without flattening the individual.
That's the entire challenge in one sentence. Alcott could give full attention to 30 students. That attention produced remarkable learning. But the method couldn't scale because human attention has limits. Mann could scale education to millions. But the system flattened individuals to make them processable at scale.
The Founders' Room solves this by using AI to provide bandwidth for human attention rather than replacing human judgment with automated instruction. But that solution only works if the architecture protects it. The moment we start flattening individuals to make them measurable, trackable, and optimizable, we've abandoned the solution and recreated the problem with fancier tools.
So when technical decisions accumulate, when metrics start dominating, when resolution becomes more comfortable than tension, I'll return to that question. How do we scale attention without flattening the individual. If a proposed change makes scaling easier but flattening more likely, it fails the test. The architecture is designed to serve attention at scale while protecting the wholeness of individual development.
That's not a slogan. It's a design constraint. Everything bends to it or it doesn't belong here.
Why Governance Is Architecture
Governance is not something you add after an institution exists. Governance is how you design the institution so it can resist the natural forces that corrupt mission.
Most people think of governance as oversight. Boards. Policies. Reviews. That's management. Important, but insufficient. Management responds to problems. Governance prevents them by making certain kinds of drift structurally impossible.
The Concord Compact, which I'll detail in the next piece, is governance as architecture. It doesn't just establish rules. It creates constraints that protect mission even when commercial pressure mounts, even when technical optimization tempts, even when comfortable certainty pulls.
Public trustees with veto power. That's not oversight. That's structural protection. A decision that serves commercial interests over public benefit can't happen. Not because everyone promises to be good. Because the architecture blocks it.
Twenty percent minimum public access funded by commercial operations. That's not aspiration. That's structural commitment. Mission drift toward pure commercial operation can't happen. The institution is legally structured to serve public benefit first.
Data ethics embedded in consent design. That's not policy. That's architectural protection. Exploitative data practices can't happen. The system is built to prevent them, not just prohibit them. There's a code sketch of that difference below.
This is governance as architecture. You build the institution so certain kinds of corruption are structurally impossible rather than morally prohibited. You design protection before pressure arrives. You make mission drift difficult rather than relying on people to resist temptation indefinitely.
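Once more in code, because the gap between prohibiting and preventing is easy to see there. A hypothetical sketch, not the real system: the raw session data sits behind a single access method, so the consent check isn't a policy someone could skip under pressure. It sits on the only path in.

class ConsentError(PermissionError):
    pass


class SessionRecord:
    # Raw session data is kept behind one access method, so every read
    # passes through the consent check by design.
    def __init__(self, participant: str, data: dict, consented_uses: set[str]):
        self._participant = participant
        self._data = data
        self._consented_uses = consented_uses  # e.g. {"facilitator_review"}

    def read(self, purpose: str) -> dict:
        # Access is granted per purpose, checked at the only entrance.
        if purpose not in self._consented_uses:
            raise ConsentError(
                f"{self._participant} has not consented to '{purpose}'."
            )
        return dict(self._data)

With this shape, record.read("facilitator_review") succeeds if that use was consented to, and record.read("marketing") raises, every time, no matter who asks or why. That's the difference between a policy and an architecture.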
That's why I'm writing this now. Before the first session. Before operational reality arrives with its reasonable questions and practical pressures. The principles are embedded now, when they can shape structure rather than constrain behavior.
What Comes Next
The next piece details the Concord Compact itself. Not as vision or aspiration, but as legal architecture. How reasoning becomes public good. How commercial operations fund educational access. How governance protects mission from the drift that's coming.
Because drift is coming. It's not a possibility. It's a certainty. The question is whether the institution is built to resist it.
I've built the architecture to resist. Now I'm building the governance to protect the architecture. That's the work of designing an institution that can outlast its founder without abandoning its founding purpose.
The Founders' Room will face pressure to optimize, measure, and resolve. The Compact ensures it can't. Not without breaking itself. That's intentional. That's protection.
Next in this series: "The Concord Compact: Governance as Architecture"
About the author: Duey Evans has coached junior tennis since 1989. He's building the Founders' Room to solve a problem he's watched families navigate without institutional support for more than 35 years. This is his attempt to design that support properly before operational reality has a chance to corrupt it.