Why Most People Abandon Thinking Partners Too Early
Mar 13, 2026
Human to the Power of AI — Essay Three
The most common mistake people make when beginning to work with a new thinking partner is treating the first interaction as a performance test. They ask for something practical. A lesson plan, an analysis of a situation they have been puzzling over, a summary of a concept they want to pressure-test. The output comes back and it feels slightly off. The examples do not quite fit the environment they actually work in. The framing is close but not exact. The specificity they were hoping for is absent. They read through the result, recognize that editing it into usable shape would take longer than doing the work themselves, and file the experience under interesting but not practical. The experiment is declared unsuccessful before it has really begun.
That conclusion is understandable. It is also a fundamental misreading of what just happened. The first interaction with a thinking partner is not a performance test. It is a calibration. The system does not yet know the environment the person operates in, the frameworks they use to interpret what they see, the terminology they have developed over years of working through problems in a particular domain. When it responds, it draws on general patterns rather than the specific architecture of the user's thinking. The output reflects that gap accurately. The gap is real. The mistake is concluding that the gap is permanent.
This is not a new problem. The same misreading happens at the beginning of every serious mentorship relationship, and coaching environments have been living with the consequences of it for as long as organized coaching has existed. When Michael Canavan began bringing questions to those early conversations at Midcourt, the exchanges were not polished. They were exploratory. He described situations from practice and tried to articulate what he thought had happened. The responses he received were not elegant either. They were probing, occasionally frustrating, and deliberately incomplete. He was not receiving answers. He was being introduced to the architecture of a questioning process that would eventually become his own. The first several months of that work would have looked, to anyone measuring efficiency, like a poor investment of time.
What made those early exchanges valuable was not their immediate output. It was that they were establishing something that would pay off across every situation Michael would encounter afterward. The pattern was being built before it could be used. The relationship was being calibrated before it could produce the kind of insight both people were working toward. If Michael had evaluated those early conversations the way most people evaluate their first interactions with AI systems, he would have concluded that the process was not worth the effort and returned to working things out alone.
Imagine a player beginning with a new coach. The coach asks them to hit a series of balls and observes carefully. The player leaves expecting the coach to already understand their game in full. At the next session the coach offers suggestions that feel somewhat generic, not quite wrong but not specific in the way the player was hoping for. The player decides the coach does not understand them and stops working with them after two or three sessions. From inside that experience the decision might feel rational. From the outside it looks like impatience interrupting a process that needed more time to develop. Coaches learn about players through accumulated exposure. They see how someone reacts under pressure, how they respond to failure, how they process instruction while competition is making simultaneous demands on their attention. Only after that pattern develops does the relationship begin producing the kind of insight the player was looking for at the beginning.
Artificial intelligence follows the same arc when it is used as a thinking partner rather than a search tool. Early interactions are calibration. The user is introducing the system to their environment, their language, the way they frame problems, and the specific constraints of the domain they are working inside. The system reflects those inputs imperfectly at first because it has not yet accumulated enough context to do otherwise. The user adjusts the framing, adds detail, clarifies what actually matters in this situation rather than in general. Over enough cycles the conversation stabilizes into something that feels less like querying a database and more like testing an idea against a perspective that has learned how to push back productively. At that point the thinking partnership begins to function the way a mature mentorship relationship functions.
Most people exit the process during the calibration stage, which is the stage that looks the least like what they were hoping to find.
The deeper issue is a persistent misunderstanding about where insight actually comes from. People tend to believe it comes from better answers. In most serious domains it comes from better questions, and the quality of those questions depends entirely on how well the thinking partner understands the environment being examined. A mentor who has worked with a learner for six months asks different questions than a mentor who met them last week. Not because the mentor has become smarter but because they have accumulated enough context to know which questions will expose the structure of the thinking rather than just evaluate the result. That contextual accumulation does not happen instantly in human mentorship, and it does not happen instantly in AI collaboration. The process requires the same thing in both cases: deliberate introduction of the frameworks, terminology, and reasoning patterns that define how the person actually works.
What artificial intelligence makes possible that human mentorship cannot is the retention of that context without degradation. A human mentor forgets. They carry the weight of other relationships and responsibilities into every session. They have days when their capacity for deep questioning is diminished by things that have nothing to do with the learner in front of them. The contextual architecture that has been built over months of deliberate exchange is always at some risk of being compromised by the ordinary limitations of human attention. An AI thinking partner that has been properly introduced to someone's intellectual framework retains that framework without those vulnerabilities. The calibration work that was done in the early sessions does not erode between conversations. The questions it can ask remain available at the moments when they are most needed, including the moments when pressure makes internal reflection least reliable.
The environments responsible for developing judgment in young people have not yet seriously engaged with what this distinction makes possible. Most of the conversation about artificial intelligence in sport still concentrates on the surface of the work: technique analysis, physical performance metrics, pattern recognition in competitive outcomes. Those applications may be useful in narrow ways. The deeper layer of development, the layer that shapes how athletes and coaches interpret the experiences they live through, has always depended on thinking partners who ask better questions. Treating AI as an answer-production tool in environments built around judgment development is the equivalent of handing a coach a scouting report and concluding that the coaching is done.
The technology itself is not the determining factor in whether this changes anything. The determining factor is whether the people responsible for building learning environments understand the patience that calibration requires, and whether they are willing to stay inside a process long enough to discover what it can actually produce. That patience is the same thing that shaped every serious mentorship relationship before any of this technology existed. It is what Michael Canavan exercised in filling notebooks and showing up for conversations that were valuable but slow. The tools have changed. The patience required to use them well has not.
Next: What it actually takes to train a thinking partner so it understands how you reason, not just what you ask.