The Rhythm Engine
When AI Learns to Dance: Moving from Clockwork to Jazz
Last week, in "The Showroom and the Stack," I painted a picture of software's future: elegant curation replacing exhausting construction. But as I closed that piece, a friend's question left me ruminating: Will we get thoughtful showrooms or algorithmic chaos? Carefully curated composition or endless Instagram feeds?
The answer, I've realized, lies not in the tools themselves but in how they learn to move with us.
Picture two musicians. One reads sheet music with mechanical precision—every note exactly as written, every rest counted to the second. The other plays jazz, reading the room, adjusting tempo to match their partner's energy, creating something alive in the moment. Both are reliable. Only one creates that magical and unpredictable flow.
This is the shift happening right now in AI: from deterministic clockwork to what I call "dependable rhythm"—the ability to sync with human timing, context, and flow without rigid repetition. And if we get it right, it's what will determine whether our AI curators become true partners or just sophisticated vending machines.

1. When Perfect Became the Enemy of Good
For decades, we worshipped at the altar of determinism. Same input, same output, every single time. Our software was Victorian clockwork—precise, predictable, and utterly rigid. It gave us the comfort of fixed inputs producing the same output every time, as if repeatability itself were a virtue.
Then ChatGPT arrived. DALL·E exploded. Claude began completing our sentences. These tools "can't guarantee byte-for-byte sameness, yet they've become indispensable" once we learned to judge them by a different standard: not perfection, but partnership—definitionally not an autopilot, but a Copilot.
The shift was visceral. I remember the first time GitHub Copilot suggested code that wasn't just syntactically correct but understood what I was trying to build. It wasn't deterministic—run the same prompt twice and you'd get variations. I've had the same experience with ChatGPT and Microsoft Copilot when generating content. Yet it was dependable in a deeper way: it caught my rhythm.
2. Why Rhythm Beats Raw Repetition
Research shows that "interactive behavior among humans is governed by the dynamics of movement synchronization" (Mörtl, Lorenz, & Hirche, 2014)—we naturally match each other's tempo in conversation, in movement, in thought. The best AI systems are learning this dance.
Consider Microsoft's meeting-note Copilot. It doesn't transcribe with robotic precision. Instead, it follows the flow of human conversation, knowing when to summarize, when to capture verbatim, when to highlight action items. Miss the rhythm, and the entire system feels off—like a drummer playing in 3/4 time while everyone else is in 4/4.
Tavus's Sparrow model—an AI that reads facial expressions and body language during video conversations—exemplifies this approach: built with a transformer-based turn-taking engine, it understands rhythm, intent, and pacing to engage naturally in conversation. Not reacting to every sound, but reading the room.
This is why I've etched this truth on my mind:
"When rhythm is random, no one can dance—human or machine."
3. The Memory That Makes Us Human
But rhythm alone isn't enough. A jazz musician who forgets the key, the progression, the shared history of the song? That's not improvisation—that's noise.

Current AI systems (and by current, I mean as of this week) are largely "stateless," meaning each query is processed in isolation, without inherent reference to previous interactions. Imagine working with a colleague who forgot every conversation the moment it ended. You'd spend more time re-explaining than creating.
This creates what developers call "prompt engineering overhead"—the need to constantly re-insert context into every prompt. We become parrots, repeating ourselves endlessly to machines that should remember.
Hence my second principle:
"An AI that forgets turns creators into parrots."
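The overhead is easy to see in code. Here is a minimal sketch of the statelessness problem; `fake_complete` is a hypothetical stand-in for an LLM call, not a real API:

```python
# Sketch of "prompt engineering overhead": a stateless model sees only
# what you send it, so the caller must replay the context every turn.
# `fake_complete` is a toy stand-in, not a real model or API.

def fake_complete(prompt: str) -> str:
    """Stateless stand-in for an LLM call: no memory between calls."""
    return f"[answer based on {len(prompt)} chars of prompt]"

def ask_stateless(history: list[str], question: str) -> str:
    # Every prior turn must be re-inserted, or the model "forgets" it.
    prompt = "\n".join(history + [question])
    answer = fake_complete(prompt)
    history += [question, answer]
    return answer

history: list[str] = []
for q in ["What stack are we using?", "Why Postgres?", "Show the schema"]:
    print(ask_stateless(history, q))
# The prompt grows with every turn: the parrot problem in code.
```

Each call re-sends everything that came before, so the caller, not the model, carries the memory.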
The solution is emerging through Long-Term Memory-like systems that provide models with the ability to accumulate historical experiences and knowledge, enabling continuous optimization through long-term interaction. Not just storing facts, but understanding context, relationships, the why behind the what.
4. The Architecture of Trust
So how do we build systems that can truly dance with us? Three pillars emerge:
1. Embrace Statistical Dependability
The old deterministic approach provides clarity and precision, as every rule is explicitly defined, but lacks the flexibility and adaptability needed for creative collaboration.
Instead, we need systems that are dependable within acceptable bounds—what engineers might express as an error tolerance of ε < 0.02 for creative systems—reflecting thresholds similar to human attention drift, trust tolerance in UI latency, or acceptable ambiguity in conversation. Not perfect, but predictably responsive.
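As a toy illustration of dependability within bounds: individual outputs vary, but the aggregate failure rate stays under a tolerance ε. The simulated 1% miss rate and trial count are assumptions for demonstration, not measurements from any real model.

```python
import random

# Toy illustration of "dependable within bounds": a stochastic system
# whose individual outputs vary, but whose failure rate stays under a
# tolerance epsilon. Numbers here are illustrative assumptions.

EPSILON = 0.02  # acceptable failure rate for a "creative" system

def creative_step(rng: random.Random) -> bool:
    """Returns True if this output lands inside acceptable bounds."""
    return rng.random() > 0.01  # simulated 1% miss rate

def dependable(trials: int = 10_000, seed: int = 0) -> bool:
    """Judge the system in aggregate, not call by call."""
    rng = random.Random(seed)
    misses = sum(not creative_step(rng) for _ in range(trials))
    return misses / trials < EPSILON

print(dependable())  # varies per call, dependable in aggregate
```

The point of the sketch: we stop asking "is every output identical?" and start asking "does the behavior stay inside bounds we can trust?"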
2. Design for Human Rhythms
Stanford's Jann Spiess and his colleagues have shown that "the best algorithm is the one that takes into account how a human will interact with the information it provides" (Spiess & McLaughlin, 2022). This means:
Matching the natural cadence of human work (not interrupting flow)
Understanding context shifts (knowing when we've changed topics)
Providing breathing room (not responding instantly to every keystroke)
3. Build Memory That Matters
Modern systems are developing six fundamental memory operations—consolidation, updating, indexing, forgetting, retrieval, and compression. Yes, forgetting is a feature, not a bug. Remembering everything is hoarding; remembering what matters is intelligence.
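A toy sketch of three of those operations (consolidation, retrieval, and forgetting) might look like this; the importance counter is my simplification, not a production memory design:

```python
from dataclasses import dataclass, field

# Toy memory store illustrating consolidation, retrieval, and
# forgetting. A simple importance counter stands in for whatever
# scoring a real system would use; this is a sketch, not a design.

@dataclass
class Memory:
    text: str
    importance: int = 1  # bumped on each retrieval (consolidation)

@dataclass
class MemoryStore:
    items: list[Memory] = field(default_factory=list)

    def remember(self, text: str) -> None:
        self.items.append(Memory(text))

    def retrieve(self, keyword: str) -> list[str]:
        hits = [m for m in self.items if keyword.lower() in m.text.lower()]
        for m in hits:
            m.importance += 1  # retrieval strengthens the memory
        return [m.text for m in hits]

    def forget(self, keep: int) -> None:
        # Forgetting as a feature: drop everything but the `keep`
        # most-retrieved memories.
        self.items.sort(key=lambda m: m.importance, reverse=True)
        del self.items[keep:]

store = MemoryStore()
store.remember("Prefers Postgres over MySQL")
store.remember("Mentioned a cat once")
store.retrieve("postgres")
store.forget(keep=1)
print([m.text for m in store.items])
```

What survives the `forget` call is the memory that got used, which is the whole argument in miniature: retention driven by relevance, not by hoarding.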
5. A Tale of Two Futures

Stand at this crossroads and two paths diverge:
Path One: The Clockwork Curator
Deterministic. Predictable. Safe. Every interaction scripted, every response pre-programmed. It's the luxury showroom where every piece is bolted to the floor. Impressive to look at, impossible to live with.
Path Two: The Jazz Ensemble
Statistical. Adaptive. Alive. Your AI curator doesn't just remember your preferences—it senses your mood, matches your energy, knows when you need options versus opinions. It's the showroom where furniture rearranges itself based on how you move through the space.
Studies show that behavioral and inter-brain synchronization is enhanced after human–machine interaction tasks—when we dance with machines that match our rhythm, we actually get better at the dance.
The Third Principle
This brings me to my final truth:
"More output is just more noise unless it moves the needle."
A showroom can display a thousand perfect chairs. But if none of them fit the way you sit, what's the point? As AI systems reach parity with humans on benchmark after benchmark, the question shifts from "Can it?" to "Should it?"
Not can it generate a hundred variations, but can it sense which one resonates with your vision? Not can it remember every interaction, but can it surface the memory that unlocks your current challenge?
Activity ≠ Outcomes.
⸻
From Principles to Practice
These aren't abstract concepts. They're design requirements for the AI systems we're building right now.
The shift is already happening. Companies are discovering that RAG—Retrieval-Augmented Generation, where AI searches your actual data before responding—can enhance business intelligence by correlating trends in tabular data with context found in unstructured data. It's not just retrieving information but understanding its rhythm within your work.
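A stripped-down sketch of the retrieval step: score stored documents against the question, then prepend the best matches to the prompt. Real systems use vector embeddings; word overlap is a deliberately simple stand-in, and the documents below are invented for illustration.

```python
import re

# Minimal retrieval-augmented generation (RAG) sketch. Real systems
# embed and rank by vector similarity; here, word overlap stands in.

DOCS = [
    "Q3 revenue grew 12% driven by the APAC region.",
    "The support backlog doubled after the pricing change.",
    "Office plants were rotated in June.",
]

def tokens(text: str) -> set[str]:
    """Lowercased alphanumeric words, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(question: str, doc: str) -> int:
    # Crude relevance: how many question words appear in the document.
    return len(tokens(question) & tokens(doc))

def build_prompt(question: str, k: int = 2) -> str:
    # Retrieve the k most relevant documents, then ground the prompt.
    top = sorted(DOCS, key=lambda d: score(question, d), reverse=True)[:k]
    return f"Context:\n" + "\n".join(top) + f"\n\nQuestion: {question}"

print(build_prompt("What drove revenue growth in Q3?"))
```

The model never answers from thin air; it answers from the context that retrieval judged relevant, which is what makes the response feel grounded in your work rather than generic.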
Learning the New Dance
Next week, we’ll discuss one of many ways to actually build in this new paradigm. How to compose software through conversation, not construction. How to work with AI that remembers your rhythm, not just your syntax.
But first, we needed to understand why the shift from determinism to dependability isn't just a technical evolution—it's a philosophical revolution. We're not building better calculators. We're teaching machines to dance.
The showroom is ready. The components are waiting. But without rhythm, without memory, without the ability to read the room and match the mood, we're just rearranging furniture in the dark.
The tools that win won't be the ones with perfect recall or flawless repetition. They'll be the ones that make you feel like you're jamming with a talented partner who knows your style, remembers your riffs, and pushes you to play better than you thought you could.
Because in the end, that's what separates curation from chaos: not perfection, but rhythm. Not determinism, but dependability. Not clockwork, but jazz.
What rhythms are emerging in your work with AI? When has a tool surprised you by catching your flow rather than disrupting it?
References:
Mörtl, A., Lorenz, T., & Hirche, S. (2014). "Rhythm patterns interaction - synchronization behavior for human-robot joint action." PLOS ONE, 9(4), e95195.
Spiess, J., & McLaughlin, B. (2022). "Algorithmic assistance with recommendation-dependent preferences." Working paper, Stanford Graduate School of Business.
Tavus. (2025, March 6). Tavus Introduces Phoenix‑3, Raven‑0, and Sparrow‑0: A Family of Models Powering the First AI Agents That Truly See, Hear, and Engage in Real‑Time, Face‑to‑Face Interaction.
Further Reading:
Jabbour, M.J. (2025). "The Showroom and the Stack: Why software development now looks more like interior design than engineering."

