Intentional Deskilling
What happens when the skills that made you competent are the ones you need to release
Progress is often described as the accumulation of skills. Much less attention is given to the art of letting them go.

I used to be able to read a paper map easily.
Not glance at one — read one. Contour lines, gradient, aspect, the way elevation changes told you where water would pool and where wind would funnel. I learned it from playing around in a backyard that faced a river and from hiking with friends when I was younger. We needed that knowledge to survive (and to drive to any unknown location).
I recently opened a map online for an unrelated behavioral research topic I was investigating and realized I couldn’t do it anymore. Not fluently. I had to think about what used to be automatic. The skill hadn’t vanished — it had rusted. A couple of decades of GPS will do that to you.
I wasn’t upset about it, exactly. I was something else. Unsettled. Because I hadn’t decided to lose that skill. It just... left. It migrated out of me and into a system that does it better, and I didn’t notice until the system wasn’t there.
This is the thing nobody talks about: deskilling is already happening. The only question is whether we do it on purpose or let it happen to us.
The Three Modes
For decades, organizations have talked about upskilling — teaching workers new capabilities to keep pace with technological change. That language was expanded to include cross-skilling — training people across multiple domains to increase adaptability.
But a third category is quietly emerging. One that most organizations are uncomfortable naming.
Intentional deskilling.
Every industrial transition taught organizations how to train people in new skills. The AI transition may be the first in which organizations must also manage the release of skills at scale — for cognitive work, not just procedural work.
Not the accidental atrophy I experienced with my old-school maps. Something deliberate. The organizational capacity to identify which skills should be released — consciously, strategically — because the environment has changed and holding onto them costs more than letting go.
As I explored in The Doing Was the Knowing, the doing is where expertise lives. Strip the verbs from a professional and you don’t get a strategist — you get someone staring at a dashboard they can no longer read. That’s the risk of accidental deskilling.
Intentional deskilling is different. It asks: which of these verbs should I still own, and which should I deliberately hand over — not because they’re unimportant, but because holding all of them prevents me from developing the ones that matter more now?
When Skills Become Infrastructure
Many professions contain skills that were once essential and later became embedded in tools, systems, or infrastructure.
I’ve been calling this pent-up organizational wisdom for a while now — the knowledge that’s so embedded in a system’s structure that it survives every attempt to reorganize it (it is especially persistent in government and other large, bureaucratic organizations). You can restructure the department, replace the technology, rewrite the process documentation. And somehow the same patterns re-emerge. Not because anyone planned it, but because the structure learned something that no individual remembers deciding.
Ant colonies do something like this. Many species exhibit emergent spatial organization — separate brood areas, refuse zones, and corpse disposal behaviors — without any individual ant understanding the colony-wide plan.1 The organization knows something that no single ant knows. That’s what pent-up organizational wisdom looks like. And it’s what makes deskilling so tricky: you’re not just removing a skill from a person. You’re pulling a thread from a structure that may have reasons you can’t see.
Physicians once performed manual calculations for medication dosing that are now automated through digital systems. Engineers once wrote large portions of code directly in machine language. Pilots once navigated by celestial observation and analog instrumentation.2 Mathematicians once spent hours performing calculations now completed instantly by software.
These changes often did not eliminate expertise. They redistributed it — between people, tools, and organizations.3 The doctor’s skill migrated from arithmetic to clinical judgment. The engineer’s skill migrated from syntax to architecture. The pilot’s skill migrated from navigation to systems management and decision-making under uncertainty.
Research in organizational behavior has long shown that technology doesn’t simply eliminate jobs — it reorganizes skill distributions within them.4 Routine tasks migrate toward automation while human roles move toward supervision, judgment, and interpretation.
Artificial intelligence accelerates this dynamic.5 Many skills that took years to develop are becoming embedded in systems. Drafting code. Generating documentation. Structuring presentations. Analyzing datasets.6 The question for organizations isn’t only what new skills to teach. It’s which old skills to release — and how to release them without losing the judgment they once carried.
There’s a specific mechanism worth naming: automation bias. When decision-support tools become reliable enough to earn trust, practitioners can become less likely to independently verify their outputs — and less likely to catch errors when the system is wrong. The failure mode isn’t that the tool gets it wrong. It’s that no one notices when it does.
From first principles, skill is a closed loop: perception, action, and feedback. When a system takes over the action layer, the human often loses the feedback that kept judgment calibrated. That’s what makes automation bias dangerous — it doesn’t just remove a task, it breaks the loop that maintained competence.
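The broken loop can be made concrete with a toy model (all numbers here are hypothetical, chosen only to illustrate the mechanism): suppose a tool is wrong on some fraction of tasks, and the human independently verifies only some fraction of its outputs. Errors reach the world only when a wrong output goes unchecked, so the undetected-error rate is the product of the two — and it climbs as trust rises and verification falls, even though the tool itself never gets worse.

```python
def undetected_error_rate(tool_error_rate: float, verification_rate: float) -> float:
    """Toy model of automation bias: an error slips through only when
    the human skips independent verification of that output.
    Both parameters are hypothetical illustration values."""
    return tool_error_rate * (1.0 - verification_rate)

# Same 2% tool error rate in both cases; only the human's checking changes.
early = undetected_error_rate(0.02, verification_rate=0.90)  # skeptical new user
late = undetected_error_rate(0.02, verification_rate=0.10)   # trusting veteran user

print(f"early adoption:     {early:.4f} undetected errors per task")  # 0.0020
print(f"after trust builds: {late:.4f} undetected errors per task")   # 0.0180
```

The tool’s reliability is constant; the ninefold jump comes entirely from the human side of the loop. That is the point of the paragraph above: the danger isn’t the action layer moving to the machine, it’s the verification — the feedback that kept judgment calibrated — quietly going with it.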

The Identity Problem
This is where it gets hard. Not technically. Emotionally.
Expertise is deeply tied to identity. When professionals master a skill, it becomes part of how they understand themselves. The engineer who spent years learning to write efficient code is that skill in some fundamental way. Asking them to stop writing code — to let the agent do it while they focus on architecture and intent — can feel less like a promotion and more like an erasure.
Think about the professional who memorized the regulatory code — not because they had to look it up often, but because knowing it cold was the thing that made them them. Or the analyst who could build a financial model from a blank spreadsheet faster than anyone in the room. The skill wasn’t just useful. It was load-bearing for their sense of self.
I understand why technological transitions are often resisted not because people oppose innovation, but because innovation threatens the self. “I am the person who can do this” is one of the deepest sentences a professional carries. Change the “this” and you change the person.
There’s a term re-emerging in the research literature for what’s at stake here: epistemic autonomy — the capacity to form your own judgments through your own reasoning. It’s not just a skill. It’s the thing that makes all other skills yours. When you hand over a capability you’ve mastered, you’re not just outsourcing a task. You’re renegotiating your relationship with your own competence. The question isn’t whether the tool does it better. It’s whether you still know enough to judge.
Intentional deskilling has to reckon with this. It can’t be a memo. It can’t be a training module. It has to treat skill transition as both a technical process and a psychological one. You’re not just changing what people do. You’re changing who they believe they are.

The Organizational Blind Spot
Most organizations know how to train people to do new things. Far fewer know how to help people stop doing things that no longer matter.
This reveals a persistent organizational blind spot. I’ve watched teams cling to legacy workflows not because the new system is worse, but because the old workflow is the team’s identity. The weekly manual report that takes six hours and could be automated in minutes persists not because it’s useful but because the person who creates it has built twenty years of self-worth around being the person who creates it.
There’s an old bit of folk biology — the idea that hair and nails keep growing after death. They don’t. What actually happens is that the skin dehydrates and recedes, making it look like growth. The appearance of vitality is really the surrounding tissue retreating.7
The manual report is hair and nails.
The original function may be dead.
The identity built around it persists.
And without a way to mourn it — to acknowledge what’s ending and honor what it meant — people will keep producing the report, and the organization will keep accepting it, and everyone will pretend the structure is still alive.
The organizations that navigate this transition best won’t be the ones that automate fastest. They’ll be the ones that build rituals for letting go. That treat deskilling as a human process, not just a technical one. That understand the difference between stripping a skill and releasing one.
The Gradient
I keep thinking about what I’ve started calling the Agency Gradient — a simple way to describe how human roles shift as machines gain capability. As automation improves, organizations often try to move human roles upward toward judgment, direction, and meaning. Whether that migration succeeds depends on whether people retain enough understanding to evaluate the systems they now oversee. Intentional deskilling is how you move up the gradient without falling off it.
You can see the cognitive stack beginning to rearrange:
Before: human memory → human reasoning → human production
After: human judgment → human direction → human meaning → AI reasoning → AI production
Evidence from early generative-AI deployments suggests this shift is uneven. In one large customer-support study, introducing a generative-AI assistant increased worker productivity by about 14% on average, but the gains were concentrated among less experienced agents, while top performers saw smaller improvements.8 The gradient isn’t uniform. It tends to help those still learning the verbs more than those who already mastered them.
Moving up that stack means letting go of the lower rungs. Not because they’re unimportant — they built the ladder. But because clinging to them while the environment shifts means you can’t reach the next one.
The paradox from The Doing Was the Knowing applies: the doing built the knowing. The skills you’re releasing are the ones that made you competent enough to know they should be released. That’s not a contradiction. It’s the structure of growth. Every stage of development involves letting go of what got you here to reach what’s next.
The real question is whether the letting go is something that happens to you — the way my map-reading skills rusted while I wasn’t looking — or something you choose. Intentionally. With your eyes open. Knowing what you’re losing and why.
The Three Skills of Letting Go
Upskilling teaches us how to grow.
Cross-skilling teaches us how to adapt.
Intentional deskilling teaches us how to evolve.
The future of work may not belong to those who accumulate the most skills. It may belong to those who know which skills to release — and who have the judgment to know the difference between a skill that should be preserved and one that should be set down.

Somewhere, someone still carries a compass they don’t need. And somewhere, their kids have never held one.
They’ll survive without it. That’s not the question.
The question is whether they’ll be the same kind of people without the experience of needing one.
I’m starting to think the answer is no. And I’m starting to think that’s okay — as long as we know what we’re trading.
The dangerous version isn’t letting go. It’s letting go without noticing.
Part of an ongoing exploration of human agency in the age of intelligent systems.
Further Reading
The Doing Was the Knowing — On CRUD, the verb layer, and what happens to expertise when agents take over the operations.
The Fire We Carry — On AI as environmental condition, cognitive offloading, and the quiet rewiring of the human mind.
Let the Robot Wars Begin! — On compiled intent, agency erosion, and who owns the verbs in 2026.
Skin Deep — On the 200 milliseconds between decision and awareness, and what lives in the gap.
Footnotes
Leaf-cutting ants and other species show emergent spatial organization — compartmentalization of brood, refuse, and corpse handling — arising from simple local interaction rules without central planning. The implication for organizational theory: some structural knowledge is not held by any individual but is a property of the system itself. Hart, A. G., & Ratnieks, F. L. W. (2001). “Task partitioning, division of labour and nest compartmentalisation collectively isolate hazardous waste in the leafcutting ant Atta cephalotes.” Behavioral Ecology and Sociobiology, 49(5), 387-392. See also: Gordon, D. M. (1999). Ants at Work: How an Insect Society Is Organized. Free Press.
Research on automation and skill degradation in aviation has shown that reduced recent manual flying experience is associated with worse manual handling performance. The critical finding is not that automation is dangerous, but that the relationship between automation and skill requires active management. Ebbatson, M., Harris, D., Huddlestone, J., & Sears, R. (2010). “The relationship between manual handling performance and recent flying experience in air transport pilots.” Ergonomics, 53(2), 268-277.
Brynjolfsson and McAfee argue that the productive response to capable machines is not resistance but complementarity — racing with the machine rather than against it. The corollary they don’t fully explore: racing with the machine sometimes means dropping capabilities you used to carry alone. Brynjolfsson, E. & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W.W. Norton.
David Autor’s influential work demonstrates that technology does not simply replace jobs but reorganizes the task content within them. Routine tasks — both cognitive and manual — tend to be automated, while abstract reasoning and interpersonal tasks become more prominent. The pattern holds across decades and technologies. Autor, D. H. (2015). “Why Are There Still So Many Jobs? The History and Future of Workplace Automation.” Journal of Economic Perspectives, 29(3), 3-30.
Acemoglu and Restrepo’s framework for understanding automation distinguishes between the “displacement effect” (tasks moved from labor to capital) and the “reinstatement effect” (new tasks created that require human labor). Intentional deskilling operates in the gap between these two effects — the period when old tasks are leaving but new ones haven’t fully arrived. Acemoglu, D., & Restrepo, P. (2019). “Automation and New Tasks: How Technology Displaces and Reinstates Labor.” Journal of Economic Perspectives, 33(2), 3-30.
The OECD’s research on skills for the AI transition emphasizes that workforce adaptation requires not only learning new capabilities but actively managing the transition away from tasks that are being automated — a process the report frames as “task reorganization” but which might be more honestly called managed deskilling. OECD (2023). OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market.
Hair and nails do not continue to grow after death. Growth requires metabolically active cells in the hair follicle and nail matrix, processes that cease shortly after circulation and oxygen delivery stop. The familiar appearance of post-mortem growth arises from dehydration and desiccation of the skin and surrounding soft tissue. As the tissue shrinks and retracts, more of the existing hair shafts and nail plates become exposed, creating the illusion that they have lengthened even though no new biological growth occurs. We will explore this phenomenon — and its broader metaphor for institutional persistence after functional decline — in more detail in a future article. Vreeman, R. C., & Carroll, A. E. (2007). “Medical myths.” BMJ, 335, 1288-1289.
The study examined a large call-center deployment and found that AI assistance increased productivity by roughly 14% overall, with the largest gains among novice workers and smaller effects among the most experienced agents. Brynjolfsson, E., Li, D., & Raymond, L. (2023). “Generative AI at Work.” Working paper, National Bureau of Economic Research. Later published in Quarterly Journal of Economics, 140(2), 889-938.

