Of Port and Purpose
Will direction beat speed in the age of AI?
"If a man knows not to which port he sails, no wind is favorable."
— Seneca, Letters to Lucilius
The Question in the Storm
Over the last few months, I've been talking to my parents about a new name for their travel company. We discussed the value of travel, differences between sea and land travel, and even the ease of navigating all the information available today. My brain, likely over-indexing on the practical, kept circling back to one question: "But what's the point of the journey if you don't have a sense of where you're going?"
That conversation has been nagging at me. Not because of the travel business, but because it captures something essential about where we all are right now—caught in the AI current, swimming hard but not always sure toward what shore.
The wind is howling. Models are launching daily. Benchmarks in crucial industries are being shattered weekly. School boards are panicking about the future of teaching and learning. Companies are mandating AI use, some under threat of termination. Engineers may be automating themselves out of their own jobs while philosophers debate whether AIs are conscious.
In the first half of 2025 alone, over 20 frontier models were released. The second half promises even more.
In the middle of it all, I keep hearing the same question in different forms:
Where exactly are we trying to go?
When the Philosopher Knew the Storm
Seneca wrote those words—"Ignoranti quem portum petat, nullus suus ventus est"—not from some ivory tower, but from the eye of his own hurricane. Tutor to Nero. Advisor to madness. A Stoic trying to practice wisdom while navigating palace intrigue that would eventually kill him.
He understood something we're learning now: in times of radical uncertainty, the first question isn't "How fast can we move?" It's "Where are we going?"
This connects directly to what I explored in "The First & Last Principle"—that agency isn't output, it's authorship. And you can't author a journey without knowing your destination.
The Memory That Clarifies
A couple of months ago, I was facilitating an AI strategy session with a leadership team. The AI lead had prepared over 80 slides on capabilities. The CFO had ROI projections extending to 2030. The CHRO had talent frameworks that would make McKinsey jealous.
But when I asked, "What port are you sailing toward?"—silence.
Finally, the CEO spoke: "We're implementing AI because everyone else is."
That's not a port. That's drift.
I've seen this pattern everywhere I've worked these past two years—from hospitals delaying the deployment of diagnostic AI without ever asking what "better diagnosis" could mean for their community, to schools adopting AI tutors without defining what "education" should produce in 2025.
We're all feeling the wind. But surprisingly few of us have named our port.
The Three Camps in the Harbor
As I've navigated hundreds of these conversations, I see three distinct responses to our moment:
The Wind Chasers chase every gust. Latest model drops? They're on it. New benchmark? Already testing. They mistake motion for progress, speed for direction. They'll sail fast—but in circles.
The Anchor Droppers see the storm and respond by battening down. "We need more ethics committees." "Let's wait for regulations." "This too shall pass." They mistake stillness for safety. But as I wrote in "Boomerang Thinking," standing still in transformation is its own form of movement—backward.
The Navigators—rarer but essential—ask different questions. Not "How fast?" but "Where to?" Not "What can it do?" but "What should we become?" They understand what I discovered in "The Rhythm Engine": sustainable progress requires synchronization between human purpose and machine capability.

My Compass
I learned navigation from a compass I was given as a kid at sleepaway camp. On hikes, and out on the water in boats, you develop an urge to sense your direction. That physical compass taught me something I still use: every major decision benefits from three simple questions. Where am I? Where do I want to be? What would have to be true to get there?
A real notebook for that purpose sits on my desk now. And I've been using this framework recently to think about our collective AI journey.
Where are we? It's a landscape I barely recognize from two years ago. Technical capabilities are expanding faster than our ability to integrate them meaningfully. Our institutions, built for stability, are trying to adapt to exponential change while humans experience that peculiar mix of exhilaration and dread. We're in the middle of weaving human and machine intelligence together without a clear pattern.
Where do we want to be? This is where most organizations stop thinking. But in my work across sectors, I'm seeing clear ports emerge. Healthcare that's both high-tech and high-touch. Education that amplifies human potential rather than replacing human connection. Workplaces where AI handles the mechanical so humans can be more human. Communities where technology serves belonging, not just efficiency.
What would have to be true to get there? We'd need to fundamentally rethink our approach. Design for human rhythm, not just machine speed. Welcome friction in the right places—remembering that growth requires resistance. We'd need pragmatic optimism—neither naive about challenges nor paralyzed by them. Most importantly, we'd need to stop implementing AI and start integrating it.
The Velocity We Didn't See Coming
Here's what keeps me up at night: failing to perceive how fast a technology is moving can be your undoing.
Remember how many people doubted self-driving cars? "Too complex." "Too dangerous." "Decades away." Yet as of June 2025, robotaxis are operating in Texas, and a software update could turn a million existing cars into robotaxis almost overnight.
Tesla's latest safety report, drawn from billions of miles of US driving, should make everyone pause:
One crash per 7.44 million miles with Autopilot engaged
One crash per 1.51 million miles without Autopilot
That's roughly a fivefold difference within Tesla's own fleet, and against the broader US crash rate the gap approaches an order of magnitude. The technology we doubted now appears safer than human drivers by a wide margin.¹
Or consider healthcare. Microsoft just released an AI diagnostician that's over 4x more accurate than physicians at diagnosing complex cases.² Not marginally better. Four times better. How many lives could that save if deployed thoughtfully? How many won't be saved if we delay out of fear?
This isn't about Tesla or Microsoft. It's about our inability to perceive exponential change while we're inside it. What other transformations are we dismissing because they don't fit our linear expectations?
Finding Your Path When You Can't Sense It
But what if you genuinely can't sense your direction? What if the fog is too thick, the change too rapid, the options too overwhelming?
You've probably been there. Standing in front of a whiteboard covered in possibilities, feeling that familiar tightness in your chest that whispers: you're lost.
There are ways to navigate even when your internal compass spins.
Sometimes following the friction helps. Where do you feel the most resistance in your organization or life? That friction often points to where transformation is most needed—and most valuable. Friction isn't failure; it's information.
Other times, tracking the energy reveals the path. In workshops I run, there's always one use case that makes everyone lean forward. Their eyes light up. The room's temperature changes. That energy is data—your organization's unconscious knowledge of where it wants to go.
When everything feels too big, starting small and real can break the paralysis. You don't need to see the whole port to take the first bearing. Pick one real problem that matters to real people. Solve it. The next direction often reveals itself in the solving.
And sometimes, using time as a compass provides clarity. Ask yourself: "What would I be proud to have built in five years?" Then work backward. This isn't prediction—it's intention-setting. It's choosing to create the conditions for the story you want to tell.
The Pragmatic Optimist's Chart
You can't navigate by extremes. The pure optimists crash on rocks they refused to see. The pure pessimists never leave harbor.
The middle way—pragmatic optimism—means acknowledging the real risks without being paralyzed by them. Job displacement is real. Algorithmic bias is real. The fragmentation of our attention is real. But so are the opportunities: democratized expertise, augmented creativity, accelerated discovery.
The hardest part? Taking responsibility for steering between them. I think of it like teaching my kids to ride bikes. I don't pretend they won't fall. I don't forbid them from trying. I run alongside, ready to steady but not to carry.
Three Navigational Tools
From two years of helping organizations find their ports, three practices consistently stand out.
The first is starting with stories, not strategies. I learned this the hard way after watching too many teams suffocate under hundred-slide decks. Now I ask them to imagine: "What story do you want to tell in five years?" Something shifts when they stop planning "AI implementation" and start imagining "the story we'll tell about how we stayed human while becoming more capable."
The second is designing for descendants. Every significant choice should pass the grandchild test: Will I be proud to explain this decision to my grandchild? It's not about predicting the future—it's about taking responsibility for creating it.
The third is harder to measure but equally vital: tracking meaning alongside metrics. Yes, we need efficiency gains and cost savings. But I've learned to also ask: Are people more fulfilled? Are we solving problems that matter? Are we creating the world we want to inhabit? As I discovered while vibe coding, the best tools amplify human intention; they don't just add speed.
The Choice Before Us
Last week alone brought new announcements and pushed boundaries I didn't know existed. The wind isn't just blowing—it's accelerating.
But here's what Seneca knew that we must remember: the wind is neutral. It's neither good nor evil, neither progress nor peril. It's power waiting for direction.
In "The Showroom and the Stack," I wrote about software development becoming more like interior design—curation over construction. The same is true for our collective future. We're not building from scratch; we're choosing what kind of world to compose from the pieces already in motion.
Your Port, Our Journey
So I'll ask you what I ask every team I work with:
What's your port?
Not your organization's AI strategy. Not your implementation roadmap. Your actual destination—the future you're willing to work toward, the story you want to be part of telling.
Because here's the truth beneath Seneca's wisdom: in the absence of a chosen port, the current chooses for you. And currents, as any swimmer knows, usually flow toward the rocks.
We have the wind. History will ask what we did with it.
The time for drifting is over. The time for navigation has begun.
Name your port. Adjust your sails. And let's sail toward something worthy of the power we've been given.
What port are you sailing toward? Share your navigation story in the comments. Sometimes the best way to clarify our own direction is to help others find theirs.
P.S. Hope everyone had a great 4th of July weekend! Happy birthday, America. I'm grateful every day for the opportunity this country gives us to explore, think, and express freely—not something I take for granted.
Further Reading:
"The First & Last Principle" – On agency as authorship, not just output
"The Rhythm Engine" – Finding sustainable patterns in human-AI collaboration
"Boomerang Thinking" – Why transformation requires intentional direction
"The Overwhelm" – Dancing with chaos in an accelerated world
References:
Tesla Vehicle Safety Report (2025). Q1 2025 data shows 7.44 million miles per crash with Autopilot vs 1.51 million without. Available at tesla.com/VehicleSafetyReport
Microsoft AI (2025). "The Path to Medical Superintelligence." MAI-DxO achieves 85.5% diagnostic accuracy vs 20% for physicians. Available at microsoft.ai/new/the-path-to-medical-superintelligence/

