What Happens When Systems Decide What's True
Mar 27, 2026
The first essay in this series was about tennis coach education. What it was actually about was something older, wider, and considerably less comfortable to examine. Tennis was just the room where the pattern happened to be visible. The pattern itself is this: a system built to develop people gradually reorganizes itself around something other than development, and the people inside it lose the ability to see that the reorganization has occurred. That dynamic does not belong to tennis. Change the room and it stays the same. Change the field, the credential, the institution, the governing body. The outcome does not change in any way that matters, because the mechanism producing it has nothing to do with sport and everything to do with what happens inside any system that attempts to organize how people learn, develop, and improve over time.
Every one of those systems eventually has to answer a question it rarely states directly but always resolves in practice. Who decides what is true? Not what is taught, not what is measured, not what gets funded. What is actually true, in the environments where development is supposed to happen, about what works and what does not. That question sits underneath everything else in every field that deals with human growth. And the way a system answers it tells you more about that system's architecture than any mission statement, curriculum document, or certification rubric ever will.
The answer, in most mature systems, is not reality. It is structure.
When a field is young, reality has significant authority. A new approach to teaching reading either produces readers or it does not, and the gap between those two outcomes closes fast enough that ideas cannot hide for long. A training methodology either develops athletes or it does not, and its advocates have to account for the gap. The feedback is close to the action. The people generating ideas are usually the same people standing in the environments where those ideas land, which means they cannot easily look away when the results do not match the theory. In early systems, being wrong is expensive, and that expense creates a pressure that is actually useful: it forces adjustment. The system learns because the consequences of not learning are visible to the people with the authority to change direction.
Scale dissolves that feedback loop faster than any other single force in institutional development. When a field expands, when participation multiplies and programs spread across more contexts than any centralized body can directly observe, the direct connection between the idea and the environment it is supposed to serve becomes harder to maintain. Nobody decides to abandon it. The logistics of maintaining it just become impossible under the weight of growth, and the system responds the way every system responds to a logistics problem it cannot solve through direct management. It creates proxies.
Credentials stand in for demonstrated competence. Affiliation stands in for credibility. Alignment with established frameworks stands in for demonstrated results. These substitutions are not cynical and they are not accidental. They reduce the cognitive load of evaluating thousands of practitioners, programs, and ideas across contexts no small group of people could ever directly observe. For a time, they function well enough that the system appears to be learning. The proxies were built from real observations, real patterns, real accumulated knowledge about what tends to produce good outcomes in the environments the system was designed to serve. The problem is not that proxies exist. The problem is that they drift.
The drift is slow, which is why it is so rarely named before it has already done significant damage. The connection between a proxy and the reality it was designed to represent requires active maintenance. It requires people with sufficient proximity to real development environments to keep asking whether the credential still measures what it was designed to measure, whether the affiliation still carries the weight it once carried, whether the framework still accounts for what is actually happening when practitioners apply it. That maintenance is expensive, and as systems grow, the infrastructure built around the proxies becomes valuable in its own right. Accreditation bodies, certification programs, conference structures, publication pipelines: all of these invest in the proxy's continued authority, which creates a structural resistance to examining whether that authority is still earned. The proxy stops being a representation of reality and starts being the reality the system manages itself against.
Once that transition completes, ideas inside the system are no longer evaluated primarily based on what they produce in real environments. They are evaluated based on how well they fit within the existing framework of proxies. Does the idea use the accepted language? Does it come from a source the system already recognizes? Can it be integrated without forcing a structural adjustment? An idea that scores well on those questions moves quickly regardless of whether it is producing anything in the environments it claims to serve. An idea that scores poorly on those questions stalls regardless of what it is actually producing, because the system has no reliable way to see what is happening outside its own structures.
What that looks like in practice is rarely dramatic. In September 2009 my co-founder, communications coach and former television executive Kim Kurth, wrote a letter to Patrick McEnroe, then the USTA's General Manager of Player Development. The timing was precise. Serena Williams had just delivered her U.S. Open outburst at a line judge, and Novak Djokovic, in the same tournament, was demonstrating the opposite: strategically disarming the New York crowd, rehabilitating a struggling reputation through calculated public interaction. Kim's proposal addressed that gap directly, drawing on five years of coaching professional athletes on communication, a track record of turning around a major network morning show's ratings, and active management of tennis centers that would later earn a national USTA award. The credentials were real. The problem was visible on national television. The letter was never answered. Not rejected. Not declined with explanation. Simply not answered. The system did not need to evaluate the idea. It did not recognize the source, and that was sufficient.
This is how organizations that were built to improve human development gradually become organizations that are primarily engaged in managing their own coherence. Coherence is genuinely useful. It creates shared language, shared expectations, and enough stability that the system can coordinate across large populations. But coherence optimized for internal consistency starts to look different from coherence built around what is actually true in development environments. It starts to produce outcomes that feel increasingly difficult to explain from inside the framework: players who develop unevenly despite technically sound instruction, students who cannot apply what they have been certified to know, professionals whose performance in real environments does not match the promise of their credentials.
The signals these outcomes generate do not disappear. They appear as practitioner frustration, as parental confusion, as the persistent gap between what the system says it is producing and what people experience inside it. What the system does with those signals reveals whether it is still capable of learning. Systems that remain close to reality treat anomalous outcomes as data. Systems that have separated from reality treat anomalous outcomes as noise, as exceptions, as individual failures that the existing framework adequately explains. They become fluent in the work of framing inconvenient evidence in ways that do not require structural adjustment. That fluency is not dishonesty. It is the natural output of a system that has been optimizing for coherence long enough that coherence and accuracy have become indistinguishable from the inside.
What makes the pattern so durable across fields is that it produces no single visible moment of failure. Medical credentialing, corporate leadership development, academic certification, athletic coach education, every domain that has built formal structures around preparing people to do consequential work with other people: they all follow the same arc, and they all produce the same outcome at the far end of it. A system that is still operational, still producing credentialed practitioners, still publishing research, still running conferences and issuing certificates, but that has lost meaningful accountability to the actual environments where development happens. It does not collapse. It persists, which is almost the more significant problem, because persistence without accountability trains everyone inside it to mistake the system's continuation for evidence that it is working.
Outside the boundary of every such system, there are people doing the actual work. They are building environments where the idea has to meet the player, the student, the patient, the person, honestly, without the protection of a framework that can explain away the gap. Those environments are not recognized by the system because the system cannot read them. They do not produce the proxies the system has learned to trust. They produce development, but development is harder to see than credentials, and a system that has replaced reality with structure as its primary authority cannot easily use what it cannot see. That is not a flaw in the people running the system. It is the designed output of an architecture that was built to manage proxies, not to learn from environments.
The question this series is building toward is what kind of structure would actually be required for a learning system to learn. That question does not have a simple answer, and this essay is not pretending otherwise. But the shape of the problem has to be clear before the shape of the solution can be legible. The shape of the problem is this: when structure becomes the authority for deciding what is true, the system loses the capacity to be corrected by what is actually happening in the environments it was built to serve. That loss is not catastrophic. It is gradual, invisible from the inside, and self-reinforcing in ways that make it more difficult to address the longer it continues. Every field that reaches sufficient scale faces this problem. The ones that solve it do not solve it by reforming from within. They solve it because something was already being built in the space the system could not see.
This is the second essay in a series examining what happens when reality is no longer the primary authority in human development systems.