Boomerang Thinking
Why GenAI Is Arcing Faster Than We Are
Every week is more interesting than the last, but this one was unusually intense for thinking about our near future. Just in the past few days: OpenAI launched Codex—an autonomous software engineering agent inside ChatGPT—and added PDF exports for its Deep Research tools. Windsurf released its SWE-1 full-stack coding models and is reportedly in talks for a $3B sale to OpenAI (Bloomberg, 2025). Meanwhile, in the digital realm, Fortnite’s AI Darth Vader started cursing at kids, China launched the first nodes of a space-based AI supercomputer, and Elton John called UK officials “morons” over proposed AI copyright laws. None of this is theoretical anymore. It’s all real—and it’s all accelerating.

I had a variety of meetings across universities and organizations this week, and each group was wrestling with the same realization: the GenAI experiment phase is over. We're no longer playing with tools. We're building around and with them (KPMG, 2025).
That shift has made me reevaluate the very idea of return on investment—especially in education. I’m not frustrated by how long it takes to reach the edge of knowledge. I’m frustrated by how little comes back when you get there. The problem isn’t effort—it’s yield: measurable learning or output per unit of time and cost has flat-lined despite heroic effort.
You grind through the proof, the insight, the model—and what you’re left with often feels inert. In my research, that’s exactly what I’m trying to flip: from static insight to dynamic leverage. Not just mastery, but agency. AI must evolve from being a calculator of answers to a co-navigator of context.
And that raises a deeper question: when does the role of the metacognate—the agent who monitors, reflects, and adapts—shift from the human to the machine? Speed alone isn’t the prize; enduring, compoundable answers are. Otherwise, our kids—rationally—might start questioning the entire structure of education. What’s the point of pushing to the edge, if the edge doesn’t give back?
I thought about this again when I noticed an old wooden boomerang sitting quietly on my son's shelf. It was one I had bought him years ago—still, unmoving, peaceful. It reminded me of the comfort we used to take in predictability. A boomerang’s path is a controllable second-order curve¹; you know where and when it will land. This predictability mirrors how business cycles, education models, or even human life expectancy once felt—stable, slow to change, forecastable.
But today, we're far from that kind of certainty. Most CxOs I meet admit they can barely see three to six months into the future. The complexity is too high, the rate of change too fast. Forecasting has become a game of probabilities, not plans: leaders now speak in confidence bands—“70% odds we hit X”—rather than fixed timelines.
That's what generative AI feels like right now: a thrown boomerang. Except some organizations have shaped it to return with value, while others have just thrown it into the wind hoping for the best.
I spent a good amount of time attempting to teach my kids when they were younger to flick their wrists just right—that precise 45-degree diagonal snap that sends the boomerang spinning not just forward but upward, letting the air's resistance steer it home (note: nobody in this family is becoming a professional boomerang thrower anytime soon).
Similarly, organizations need that precise technique to ensure AI returns with value.
In a recent fireside chat with Mark Zuckerberg, Satya Nadella pointed out something that stuck with me: after electricity was first introduced, it took about 50 years for it to become evenly distributed across industries. But AI isn't following that curve. It's arcing faster (Analytics Vidhya, 2025).
Adoption at the app layer is months, not decades; but rewiring incentives and workflows—the ‘grid’—still lags. The curve is steep, but not evenly distributed (Diana, 2024).
This acceleration creates the time compression I explored in Out of Time—where even brief disconnection from AI assistance now feels magnified, distorting our perception of productivity and rest.
You can feel the velocity now in places like the Alpha School, where students complete academic fundamentals in two hours a day with AI tutors, freeing them for creative and social learning (Alpha School, 2025). Or in the World Bank's study of Microsoft Copilot, where six weeks of AI-augmented instruction produced learning gains equivalent to two years of traditional teaching (World Bank, 2025).
These results don’t just suggest improvement—they’re triggering real institutional reconfiguration. Universities are launching AI computing colleges, redirecting grants, and reshaping core curricula in response (Saavedra & Molina, 2024).
This isn't just about efficiency. It's about redefinition.
Frontier firms—organizations that are deeply embedding AI into their core functions—are emerging across sectors. They aren't experimenting. They're scaling (The Letter Two, 2025). Most organizations default to structural inertia. But when AI shows measurable ROI and performance gains, those same organizations can cross a threshold—what researchers call structural adaptation—rewiring their decision loops around it (Weick & Quinn, 1999). Think of Zara’s just-in-time apparel pivot in the 1990s—now replayed in software.
They're not asking if it works. They're asking what else it can change.
Meanwhile, other institutions—particularly universities—are starting to feel the pressure. CS enrollment is down (Jabbour, 2025). National Student Clearinghouse data show an 18% year-over-year drop in first-year Computer & Information Sciences majors in the U.S., while UCAS reports a similar 17% slide in UK computing applicants. ROI questions are up. Some are watching innovative competitors scoop up public grants and pilot programs because they're able to show impact using AI tools in weeks—not years. If a Copilot can teach a kid more in six weeks than a school can in two years, how long before that becomes the new performance benchmark?
We're entering a moment where ROI in education won't be a spreadsheet—it'll be a survival metric. Universities, like many public institutions, will need to define it clearly, track it rigorously, and adapt fast. This isn't about moralizing AI adoption. It's about structural adaptation—rewiring the very fabric of how organizations operate, similar to the patterns explored in The Warp and the Woof of AI, where human and machine intelligences must be thoughtfully interwoven rather than simply layered atop one another.
While organizations behave like boomerangs—launched on a single, curved path and forced to hope that their initial momentum compensates for shifting winds—humans operate with continuous adaptation. Neuroscience calls this closed-loop control: our brains integrate sensory feedback to correct trajectory in real time—preserving our sense of agency even in unstable environments (Li et al., 2024). Put simply: your organization has to learn and adjust faster than the technology itself changes—otherwise every fix you design will already be obsolete.
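The contrast between the boomerang's open-loop flight and closed-loop control can be sketched in a few lines. This is an illustrative toy, not a model from the cited neuroscience: the target, gain, and step values are all assumptions chosen to make the contrast visible.

```python
def open_loop(start, step, n):
    """Boomerang mode: commit to the throw, then apply the same step n times."""
    state = start
    for _ in range(n):
        state += step  # no mid-flight correction
    return state


def closed_loop(start, target, gain, n):
    """Feedback mode: sense the remaining error each step and correct a fraction of it."""
    state = start
    for _ in range(n):
        error = target - state   # sense the gap
        state += gain * error    # act on it, then sense again next step
    return state


# A plan calibrated for 20 steps of 0.5 misses the target of 10.0
# when conditions stretch the flight to 25 steps; the feedback loop
# converges on the target regardless.
print(open_loop(0.0, 0.5, 25))                    # overshoots 10.0
print(round(closed_loop(0.0, 10.0, 0.3, 25), 3))  # lands near 10.0
```

The open-loop plan is only as good as its initial calibration; the closed-loop version shrinks its error geometrically because every iteration re-measures reality before acting.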
This fundamental capacity for transformation through uncertainty, rather than mere adaptation to it, might indeed be The Last Skill that distinguishes us in the age of AI—our ability to find meaning and agency within rapidly shifting contexts (Mercier & Cappe, 2020).
Organizations rarely match that agility. Classic research on structural inertia finds that most formal systems change only through episodic, externally triggered jolts rather than continuous micro-adjustment (Weick & Quinn, 1999). Just like a boomerang in flight, they’re not designed to steer mid-arc. And without active redesign, most won’t.
Humans, by contrast, stay in constant thrust: just as a rocket's attitude-control thrusters fire rapid micro-bursts to resist drag and torque during ascent, we "burn" cognitive fuel—learning, unlearning, and re-aiming in real time against thousands of internal biases and external shocks.
Organizations that let AI completely remove friction may paradoxically hamper their own evolution. Recent research shows over-reliance on AI systems leads to reduced independent problem-solving and diminished long-term learning (Microsoft & Carnegie Mellon University, 2025). Like a car sliding on ice: without resistance, you lose traction and control.
The implication is stark: leaders must design organizational processes that embed continuous feedback loops and adaptation mechanisms—rather than relying on a single strategic decision and hoping for the best.
And like a boomerang, generative AI is coming back around. The difference is: some organizations have shaped the throw with intention. Others are just hoping it won't hit them in the head.
We should be asking: What exactly are we throwing? Do we understand the arc? Can we read the wind?
Because the AI revolution isn't coming—it's already here, mid-flight. Organizations that thrive won't merely adopt AI but will design systems that capture its full value on return. The structural adaptation required isn't optional or eventual—it's immediate. The precision of your throw today determines whether you'll catch compounding returns tomorrow or be left scrambling to avoid impact.
The question remains: will your organization be the thrower with technique, or the bystander ducking for cover?
References
Alpha School. (2025). Using AI to unleash students and transform teaching. https://alpha.school/news/alpha-school-using-ai-to-unleash-students-and-transform-teaching
Analytics Vidhya. (2025, May 7). Satya Nadella and Mark Zuckerberg on the future of AI. https://www.analyticsvidhya.com/blog/2025/05/satya-nadella-and-mark-zuckerberg-on-the-future-of-ai/
Diana, F. (2024, April 3). Will AI reshape our world faster than electricity? Medium. https://www.frankdiana.net/2024/04/03/will-ai-reshape-our-world-faster-than-electricity/
Ghemawat, P., & Nueno, J. (2003). Zara: Fast fashion (Harvard Business School Case No. 9-703-497).
KPMG Board Leadership Center. (2025, March 20). State of AI: A boardroom perspective.
Li, Q., Xu, H., Chen, W., Su, A., Fu, M. J., & Walker, M. F. (2024). Short-term learning of the vestibulo-ocular reflex induced by a custom interactive computer game. Journal of Neurophysiology, 131(1), 16–27. https://doi.org/10.1152/jn.00130.2023
Mercier, M. R., & Cappe, C. (2020). The interplay between multisensory integration and perceptual decision making. NeuroImage, 222, 116970. https://doi.org/10.1016/j.neuroimage.2020.116970
Microsoft & Carnegie Mellon University. (2025). AI and human cognition in the enterprise: A joint study on problem-solving and overreliance. [White paper]. Microsoft Research.
National Student Clearinghouse Research Center. (2025, January 23). Current term enrollment estimates: Fall 2024 – Major fields data appendix [Interactive dashboard]. https://nscresearchcenter.org/current-term-enrollment-estimates-appendix/
National Center for Education Statistics. (2025, January 7). Integrated Postsecondary Education Data System (IPEDS) spring 2024 provisional data release: Table EF2023A. https://nces.ed.gov/ipeds
Saavedra, J., & Molina, E. (2024, December 17). From fear to opportunity: Making AI work for education. World Bank Blogs.
The Letter Two. (2025, April 23). Microsoft Work Trend Index 2025: AI-first frontier firms & digital labor at scale. https://thelettertwo.com/2025/04/23/microsoft-work-trend-index-2025-ai-first-frontier-firms-digital-labor-scale
UCAS. (2025, January 30). Undergraduate January equal consideration applicant data 2025: Detailed subject group statistics. https://www.ucas.com/corporate/news-and-key-documents/news/ucas-releases-undergraduate-january-equal-consideration-applicant-data-2025
Weick, K. E., & Quinn, R. E. (1999). Organizational change and development. Annual Review of Psychology, 50(1), 361–386. https://doi.org/10.1146/annurev.psych.50.1.361
World Bank. (2025, April). Six weeks of AI Copilot instruction equals two years of learning: Evidence from randomized trials. https://documents1.worldbank.org/curated/en/099757104152527995/pdf/IDU-b1e5ef00-75ff-4ba4-a4b6-84899c3ea968.pdf
Further Reading
Jabbour, M. J. (2025, April 22). Out of time: Rethinking downtime in the age of AI. https://substack.com/@michaeljjabbour/p/out-of-time
Jabbour, M. J. (2025, April 27). The last skill? Finding our humanity at the edge of AI capability. https://substack.com/@michaeljjabbour/p/the-last-skill
Jabbour, M. J. (2025, May 04). The warp and the woof of AI: Collision course or awe-inspiring tapestry? https://substack.com/@michaeljjabbour/p/the-warp-and-the-woof-of-ai
¹ A second-order curve is a shape described by an equation where the variables are squared (like x² or y²). These curves include parabolas, circles, ellipses, and hyperbolas—collectively called conic sections. In physics, second-order motion means the path of an object depends not just on its speed, but also on its acceleration (how its speed changes over time). For example, when you throw a ball or a boomerang, its curved path through the air is shaped by gravity, spin, and lift—factors that create a smooth, predictable arc you can calculate or control.
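As a minimal worked example of that footnote (ignoring the spin and lift that actually steer a boomerang home, so this is the thrown-ball case under gravity alone):

```latex
% Projectile under constant gravity: position depends on initial
% velocity (first order) and on acceleration (second order).
x(t) = v_0 \cos\theta \, t, \qquad
y(t) = v_0 \sin\theta \, t - \tfrac{1}{2} g t^2
```

Eliminating t gives y as a quadratic in x: the parabola whose landing point you can calculate before the object ever leaves your hand.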

