Out of Time
Rethinking Downtime in the Age of AI
The Rhythm That Worked—for 25 Years
For as long as I can remember, I’ve thrived on structure. Wake up clear-headed. Write down what matters. Scratch through the list item by item. End the day with a few good checkmarks and a rough sketch of tomorrow.
That rinse-and-repeat cycle carried me through a quarter century of work, reinvention, and impact. Productivity was linear, tangible, and mine to own—every step of the way.
Then the Tasks Changed
Today’s task list? It’s different. Where it used to be linear, it’s now dimensional.
When I write down something like “research the impact of saying please and thank you to LLMs”, I know I’m signing up for:
Canvassing the literature
Digging through dense academic papers
Synthesizing themes
Forming opinions
Writing
And maybe—if it sparks something—telling my best friend about what I found
Only then, maybe, will I cross it off the list. One line now contains a dozen micro-missions.
Enter the Assistant
Here’s the thing: I don’t have to do it all anymore. Not alone.
My AI assistant can carry some of it. Not just support me, but run with it. That one-liner isn’t just a personal promise—it’s a collaboration request.
But if that assistant doesn’t have memory, metacognition, or a sense of evolving context, it’ll choke. On the first try, the second, and the hundredth. Unless it learns to ask, clarify, experiment, and loop back. That’s when the magic kicks in.
When agents spin up in parallel while I’m still sipping coffee, something shifts. My work expands. My productivity compounds.
What I Found When I Asked “Politely”
That one task—“research the impact of saying please and thank you to LLMs”—led me into an unexpectedly deep rabbit hole.
Several sources explore whether tone and politeness affect LLM responses, and the findings are surprisingly nuanced.
OpenAI’s own best practices emphasize clarity and specificity over charm. You don’t need to say “please”—you just need to be clear about what you want. Reducing ambiguity is key to consistent outputs.
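As a toy illustration of that advice (my own sketch, not anything from OpenAI’s guide), “clarity over charm” amounts to stating the task and its constraints explicitly rather than softening the request with pleasantries:

```python
def build_prompt(task: str, constraints: list[str]) -> str:
    """Compose a direct prompt: state the task plainly, then list
    explicit constraints instead of relying on tone or politeness."""
    lines = [task.strip()]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# The vague-but-polite version of the same request, for contrast:
polite_but_vague = "Could you please summarize this when you get a chance? Thanks!"

clear_and_specific = build_prompt(
    "Summarize the article below in three bullet points.",
    ["Each bullet under 20 words",
     "Focus on the main findings",
     "Neutral, factual tone"],
)
print(clear_and_specific)
```

The second prompt gives the model nothing to guess at: the format, length, and tone are all spelled out, which is what makes outputs consistent.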
But then, there’s a twist.
A 2024 cross-lingual study by Yin et al. revealed that impolite prompts can actually degrade performance. In tasks across English, Chinese, and Japanese, rude phrasing increased error rates, led to more omissions, and amplified bias. The ideal level of politeness? It varied by language and model, suggesting LLMs reflect deeper cultural and linguistic norms.
So no, you don’t need to flatter your AI. But if you want better outcomes? Be direct. Be clear. And maybe don’t be a jerk about it.
The Time Cost of Doing Nothing
And here’s the part I didn’t expect: downtime hits differently now.
If I take a few hours to rest, it feels like stepping away for a week. Take two or three days? It could cost me a month. The rate of work, the velocity of output—it’s no longer calibrated to human pace. It’s something else entirely.
This isn’t just about personal efficiency. It’s about recalibrated expectations—what people expect from me, and what I now expect from others.
A Friend’s Quiet Counterpoint
After I shared some of this with my friend Sam Schillace, he offered a perspective that’s been sitting with me:
“I suspect this isn’t sustainable in that form. I would say more: use the time the AI is thinking to have quiet space yourself, rather than spinning up a new AI task, if you can. I spent a bunch of time on the wheel this weekend. Usually I play a podcast or something. This time I just worked on the wheel. My head feels better, AND it was more productive.”
There’s a real truth here: rest isn’t just about recovering from doing. It’s about creating room for insight, rhythm, and renewal. AI might be tireless, but we’re not meant to be. And maybe, part of the real work now is learning how to pause—on purpose.
Rest Like You Mean It
So here’s my advice to anyone navigating this shift:
When you rest, really rest. Disconnect. Go off-grid.
Because time is compressing. And presence—true, undistracted presence—is about to become one of the rarest, most valuable currencies we have.
Sources & Further Reading
Yin, Z., Wang, H., Horio, K., Kawahara, D., & Sekine, S. (2024).
Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance.
Polite prompts yielded more accurate and less biased outputs across English, Chinese, and Japanese, though excessive politeness didn’t always help. Cultural nuance matters.
OpenAI. (2023).
Best Practices for Prompt Engineering with GPT.
platform.openai.com/docs/guides/gpt-best-practices
Clarity, structure, and specificity are the main levers of reliable prompt performance—politeness is optional.


