Built by Choice, Not Destiny: The Myth of the Inevitable Future

Jun 17, 2025

The word "inevitable" has for me become the most troubling four syllables in modern language.

It rolls off the tongues of tech executives with the confidence of prophets delivering divine revelation. Sam Altman calls AGI inevitable. Jensen Huang calls the AI revolution inevitable. Every startup pitch deck, every venture capital thesis, every breathless tech journalist preaches the same gospel: resistance is futile, adaptation is survival, and the future is a freight train with no brakes.

But step back from the noise for a moment and look at what's actually happening. We're straining electrical grids to generate synthetic Shakespeare. We're strip-mining decades of human creative output to train systems that may ultimately replace the very people who created it. We're watching entire industries restructure around technologies whose long-term implications we barely understand, while being told that questioning the pace or direction of this change is tantamount to standing in front of history itself. And somehow, we've accepted this as progress.

The Architecture of Inevitability

The myth of inevitability isn't new. It's a story as old as power itself: a way of making the choices of the few feel like the destiny of the many.

In the 1850s, Southern plantation owners called slavery an inevitable economic necessity. In the 1920s, industrial barons called worker exploitation the inevitable cost of progress. In the 1980s, economists called trickle-down theory the inevitable logic of market forces. Each time, the word "inevitable" served the same function: to transform policy choices into natural laws, human decisions into cosmic forces, and resistance into foolishness.

Today's version is more sophisticated but follows the same script. Silicon Valley has perfected the art of reframing corporate strategy as historical destiny. When Mark Zuckerberg announced the metaverse, he didn't present it as one possible future among many. He positioned it as the inevitable next chapter of human communication. When Elon Musk pushes for neural interfaces, he frames it not as a business venture but as humanity's necessary evolution to remain relevant in an AI-dominated world.

The language is carefully chosen. "Disruption" makes destruction sound creative. "Innovation" makes experimentation on society sound heroic. "Inevitable" makes corporate roadmaps sound like physics.

But physics doesn't require marketing campaigns.


The Gold Rush Mentality

Walk through the halls of any major tech conference today, and you'll hear echoes of 1849. The same breathless excitement, the same promises of fortune, the same conviction that those who don't join the rush will be left behind forever.

In California's gold rush, most prospectors went broke. The real fortunes went to Levi Strauss selling dry goods, to Samuel Brannan selling mining supplies, to the merchants who understood that hope itself was the most valuable commodity. They didn't need to find gold; they just needed to convince others that gold was there for the taking.

Today's AI gold rush follows the same pattern with disturbing precision. Billions pour into generative AI companies promising to "democratize creativity" while charging subscription fees that price out the very creators they claim to serve. We're told this technology will free us from mundane tasks so we can focus on "higher-level thinking," but what happens when the algorithm becomes better at higher-level thinking too?

OpenAI's ChatGPT can write marketing copy, legal briefs, and college essays. Midjourney can create art that wins competitions. GitHub Copilot can generate code faster than most programmers can type. Each breakthrough is celebrated as a victory for human potential, but few are asking the harder question: if machines can think, write, create, and code, what exactly are humans supposed to do in this brave new world?

The uncomfortable truth is that we're experimenting with the fundamental nature of human work and creativity in real time, at scale, with no safety net and no pause button. We're automating away the experiences that give people purpose, identity, and economic security.

And we're calling it inevitable.


The Atrophy of Mastery

I recently watched a young designer defend their heavy reliance on AI by saying, "Why should I spend hours perfecting typography when AI can do it in seconds?"

The question haunted me for days.

Here's why those hours matter: because the process is the product. When we outsource the struggle, we outsource the learning. When we skip the craft, we lose the intuition that only comes from thousands of small decisions, failed attempts, and hard-won insights. We're automating away the very experiences that make us creative, thoughtful, and human.

Consider what we lose when we delegate our thinking to machines. The carpenter who only uses power tools never develops the sensitivity to feel the grain of the wood, to know by touch where it wants to split and where it will hold. The writer who relies on AI assistance may lose the ability to sit with uncertainty, to wrestle with language, to discover what they actually think through the painful, essential act of writing.

Skills don't just atrophy from disuse; they transform us through practice. The surgeon's hands know things the surgeon's mind cannot articulate. The pianist's fingers understand rhythm in ways that transcend conscious thought. The programmer who has debugged thousands of lines of code develops an intuitive sense for where errors hide that no AI can replicate.

When we shortcut these experiences, we don't just lose skills. We lose the wisdom that comes from engaging deeply with resistance, from solving problems through sustained attention rather than algorithmic assistance. We risk becoming dependent on systems we don't understand, controlled by companies whose interests may not align with our own.

This isn't anti-technology sentiment. It's the recognition that every tool changes the person who uses it. The question we should be asking is: what kind of people do we become when we no longer have to think our way through problems?


The Broader Surrender

AI is just the most visible symptom of a much deeper issue: our collective surrender to the belief that complex problems are bound by destiny rather than shaped by decision.

We're told that wealth inequality is inevitable, a natural consequence of globalization and technological progress. But inequality is a policy choice. Countries like Denmark and Germany run on technological infrastructure comparable to America's, yet maintain far more equitable wealth distribution. That outcome isn't due to market forces alone. It's the result of deliberate decisions about taxation, social services, and corporate accountability.

We're told that climate change has reached a point of no return, that we've crossed irreversible thresholds. But every degree of warming still matters. Every delay worsens the outcome, but the outcome is not prewritten. Renewable energy, regenerative agriculture, and carbon-aware economics are not futuristic ideas. They're present-day possibilities. The tools exist. The capital exists. What's missing is commitment.

We're told that political polarization is inevitable in diverse societies. But polarization is often manufactured. Algorithms reward outrage. Media ecosystems thrive on division. Electoral systems and civic structures can either deepen these divides or create conditions for common ground. We still have a say in which direction we move.

Across each of these examples, the pattern is the same: the myth of inevitability benefits those already in power.

When pharmaceutical companies price insulin at $300 a vial, they frame it as market logic. When tech platforms extract user data without compensation, they call it innovation. When fossil fuel companies push forward in the face of climate warnings, they call it economic necessity.

The language of inevitability turns human decisions into natural laws. It recasts agency as futility. And in doing so, it shields those making the decisions from accountability.


The Questions That Cut Through

What if we started every technological announcement, every economic forecast, every political claim with a simple question: Who benefits from this version of the future?

When a tech CEO promises that AI will revolutionize education, ask: who profits from automating teachers? When a company claims their algorithm will eliminate bias, ask: who defines bias, who benefits from that definition, and who gets harmed when the definition is wrong? When platforms promise to democratize content creation, ask: if this is democracy, why are we paying subscription fees to participate?

These aren't cynical questions. They're clarifying ones. They don't assume bad intentions, but they do assume that intentions matter less than outcomes, and outcomes are shaped by incentives.

The most revealing moment in any tech presentation isn't the demo of what the technology can do. It's the pause when someone asks who controls it. Watch how quickly the conversation shifts from possibility to property, from potential to profit models, from "what if" to "who pays."


In Practice: How to Reclaim Agency


The Daily Intention

Recognizing inevitability as a myth is the first step. Living as if you have choices is the practice that follows. Here are concrete ways to reclaim agency in a world designed to convince you that you have none.

Start with the Language

When someone says "inevitable," pause. Count to three. Then ask: "Who decided this?" The pause breaks the spell. The question reveals the choice.

Replace "have to" with "choose to." Instead of "I have to use this AI tool to keep up," try "I'm choosing to use this tool." The shift from compulsion to decision changes everything. You're no longer a victim of circumstances…you're someone making a strategic choice.

Question the word "disruption." When a company claims to be disrupting an industry, ask what they're actually destroying and who benefits from that destruction. Disruption is rarely neutral. It's usually extraction and destruction dressed up as innovation.

Before You Automate, Interrogate

Ask what you're trading away. Before adopting any AI tool, spend five minutes listing what skills, insights, or experiences you might lose. The carpenter who only uses power tools never learns to feel the grain of the wood. What's your equivalent?

Practice the hard way once a week. Write something by hand. Navigate without GPS. Calculate something without a calculator. Debug code line by line instead of asking AI. These aren't nostalgic exercises—they're resistance training for your mind.

Sit with problems longer. Before reaching for automated solutions, spend an extra ten minutes wrestling with the problem yourself. The wrestling is where insights live. The struggle is where you develop intuition that no algorithm can replicate.

Follow the Money

Every "free" innovation has a business model. If you can't see how something makes money, you're probably the product. Ask: "If this is free, what am I paying with?" Usually, it's your data, your attention, or your dependence.

Look for the subscription creep. Notice how tools that start free gradually introduce paid tiers, then make the free version barely functional. This isn't an accident; it's strategy. Plan for it.

Ask who gets replaced. When a technology promises to "democratize" something, ask who currently does that work and what happens to them. Democratization often means "cheaper labor through automation."

Create Spaces for Deep Work

Defend your attention. Choose specific times when you engage with tools that think for you, and specific times when you don't. Your brain needs practice being the primary processor of complex information.

Build things the slow way sometimes. Take on projects that can't be automated. Grow something. Build something with your hands. Write something that requires multiple drafts. The goal isn't efficiency—it's maintaining your capacity for sustained, difficult work.

Practice saying "I don't know" more often. Resist the urge to immediately search for everything. Sit with uncertainty. Let your mind work on problems in the background. The discomfort of not-knowing is where original thinking begins.

Question the Narrative

Ask whose version of progress you're buying. When someone promises that new technology will make life better, ask: "Better for whom?" Progress for shareholders often looks different than progress for workers, communities, or the environment.

Look for the pattern. Notice how similar promises have played out in the past. The internet was supposed to democratize information; instead, it concentrated power in a few platforms. Social media was supposed to connect us; instead, it often isolates and polarizes us. What pattern might repeat with AI?

Find the people saying no. For every technological "inevitability," someone is choosing differently. Find them. Learn from their choices. You're not alone in questioning the default path forward.

Start Small, Think Big

The goal isn't to reject all technology or live in the past. The goal is to remember that you have agency in how you engage with the tools that shape your life. Every time you pause to question, every time you choose the harder path, every time you ask "who benefits?", you're practicing the radical act of thinking for yourself.

These small acts of resistance compound. They strengthen your capacity to make intentional choices rather than react reflexively. They remind you that the future isn't something that happens to you; it's something you help create.

The train has brakes. But you have to remember to use them.


The Reclamation Begins

The most radical act in 2025 isn't building a better algorithm or launching a more efficient startup. It's remembering that we have choices.

We can choose to regulate AI development before it reshapes society rather than after. We can choose to preserve spaces for human skill development even as we adopt new tools. We can choose transparency over convenience, community ownership over corporate control, sustainable innovation over extractive growth.

We can choose to ask different questions about progress itself. Instead of "How fast can we go?" we might ask "Where do we want to end up?" Instead of "What's technically possible?" we might ask "What's actually beneficial?" Instead of "How do we compete?" we might ask "How do we thrive together?"

These choices require us to reject the story that any particular version of the future is inevitable. They require us to remember that the future is not a destination we discover but a place we build.

The train has brakes. The tracks can be rebuilt. The destination is still up to us.

Every time someone tells you that change is impossible, that resistance is futile, that adaptation is the only option, remember: they're not describing reality. They're describing their preferred reality, fueled by their own desires. And preferences can be changed.


What We're Really Reclaiming

This conversation goes beyond technology. It's about agency: the recognition that human beings have the capacity to shape their circumstances rather than simply react to them.

The future doesn't belong to the companies with the most data or the biggest models. It belongs to the people who refuse to accept that any version of the future is beyond negotiation. It belongs to the communities that choose cooperation over competition, sustainability over extraction, wisdom over efficiency.

It belongs to anyone willing to ask: What if we built something different?

The question itself is the beginning of reclamation. The moment we start asking it, inevitability loses its power, and possibility sparks.

What are we waiting for?

The word "inevitable" has for me become the most troubling four syllables in modern language.

It rolls off the tongues of tech executives with the confidence of prophets delivering divine revelation. Sam Altman calls AGI inevitable. Jensen Huang calls the AI revolution inevitable. Every startup pitch deck, every venture capital thesis, every breathless tech journalist preaches the same gospel: resistance is futile, adaptation is survival, and the future is a freight train with no brakes. But step back from the noise for a moment. Look at what's actually happening. We're burning through electrical grids to generate synthetic Shakespeare. We're strip-mining decades of human creative output to train systems that may ultimately replace the very people who created that output. We're watching entire industries restructure around technologies whose long-term implications we barely understand, while being told that questioning the pace or direction of this change is tantamount to standing in front of history itself. And somehow, we've accepted this as progress.

The Architecture of Inevitability

The myth of inevitability isn't new. It's a story as old as power itself: a way of making the choices of the few feel like the destiny of the many.

In the 1850s, Southern plantation owners called slavery an inevitable economic necessity. In the 1920s, industrial barons called worker exploitation the inevitable cost of progress. In the 1980s, economists called trickle-down theory the inevitable logic of market forces. Each time, the word "inevitable" served the same function: to transform policy choices into natural laws, human decisions into cosmic forces, and resistance into foolishness.

Today's version is more sophisticated but follows the same script. Silicon Valley has perfected the art of reframing corporate strategy as historical destiny. When Mark Zuckerberg announced the metaverse, he didn't present it as one possible future among many. He positioned it as the inevitable next chapter of human communication. When Elon Musk pushes for neural interfaces, he frames it not as a business venture but as humanity's necessary evolution to remain relevant in an AI-dominated world.

The language is carefully chosen. "Disruption" makes destruction sound creative. "Innovation" makes experimentation on society sound heroic. "Inevitable" makes corporate roadmaps sound like physics.

But physics doesn't require marketing campaigns.


The Gold Rush Mentality

Walk through the halls of any major tech conference today, and you'll hear echoes of 1849. The same breathless excitement, the same promises of fortune, the same conviction that those who don't join the rush will be left behind forever.

In California's gold rush, most prospectors went broke. The real fortunes went to Levi Strauss selling sturdy pants, to Samuel Brannan selling mining supplies, to the merchants who understood that hope itself was the most valuable commodity. They didn't need to find gold; they just needed to convince others that gold was there for the taking.

Today's AI gold rush follows the same pattern with disturbing precision. Billions pour into generative AI companies promising to "democratize creativity" while charging subscription fees that price out the very creators they claim to serve. We're told this technology will free us from mundane tasks so we can focus on "higher-level thinking," but what happens when the algorithm becomes better at higher-level thinking too?

OpenAI's ChatGPT can write marketing copy, legal briefs, and college essays. Midjourney can create art that wins competitions. GitHub Copilot can generate code faster than most programmers can type. Each breakthrough is celebrated as a victory for human potential, but few are asking the harder question: if machines can think, write, create, and code, what exactly are humans supposed to do in this brave new world?

The uncomfortable truth is that we're experimenting with the fundamental nature of human work and creativity in real time, at scale, with no safety net and no pause button. We're automating away the experiences that give people purpose, identity, and economic security.

And we're calling it inevitable.


The Atrophy of Mastery

I recently watched a young designer defend their heavy reliance on AI by saying, "Why should I spend hours perfecting typography when AI can do it in seconds?"

The question haunted me for days.

Here's why those hours matter: because the process is the product. When we outsource the struggle, we outsource the learning. When we skip the craft, we lose the intuition that only comes from thousands of small decisions, failed attempts, and hard-won insights. We're automating away the very experiences that make us creative, thoughtful, and human.

Consider what we lose when we delegate our thinking to machines. The carpenter who only uses power tools never develops the sensitivity to feel the grain of the wood, to know by touch where it wants to split and where it will hold. The writer who relies on AI assistance may lose the ability to sit with uncertainty, to wrestle with language, to discover what they actually think through the painful, essential act of writing.

Skills don't just atrophy from disuse; they transform us through practice. The surgeon's hands know things the surgeon's mind cannot articulate. The pianist's fingers understand rhythm in ways that transcend conscious thought. The programmer who has debugged thousands of lines of code develops an intuitive sense for where errors hide that no AI can replicate.

When we shortcut these experiences, we don't just lose skills. We lose the wisdom that comes from engaging deeply with resistance, from solving problems through sustained attention rather than algorithmic assistance. We risk becoming dependent on systems we don't understand, controlled by companies whose interests may not align with our own.

This isn't anti-technology sentiment. This is recognition that every tool changes the person who uses it. The question we should be asking: what kind of people do we become when we no longer have to think our way through problems?


The Broader Surrender

AI is just the most visible symptom of a much deeper issue: our collective surrender to the belief that complex problems are bound by destiny rather than shaped by decision.

We're told that wealth inequality is inevitable, a natural consequence of globalization and technological progress. But inequality is a policy choice. Countries like Denmark and Germany operate with similar levels of technological infrastructure yet maintain far more equitable wealth distribution. That outcome isn't due to market forces alone. It's the result of deliberate decisions around taxation, social services, and corporate accountability.

We're told that climate change has reached a point of no return, that we've crossed irreversible thresholds. But every degree of warming still matters. Every delay worsens the outcome, but the outcome is not prewritten. Renewable energy, regenerative agriculture, and carbon-aware economics are not futuristic ideas. They're present-day possibilities. The tools exist. The capital exists. What's missing is commitment.

We're told that political polarization is inevitable in diverse societies. But polarization is often manufactured. Algorithms reward outrage. Media ecosystems thrive on division. Electoral systems and civic structures can either deepen these divides or create conditions for common ground. We still have a say in which direction we move.

Across each of these examples, the pattern is the same: the myth of inevitability benefits those already in power.

When pharmaceutical companies price insulin at $300 a vial, they frame it as market logic. When tech platforms extract user data without compensation, they call it innovation. When fossil fuel companies push forward in the face of climate warnings, they call it economic necessity.

The language of inevitability turns human decisions into natural laws. It recasts agency as futility. And in doing so, it shields those making the decisions from accountability.


The Questions That Cut Through

What if we started every technological announcement, every economic forecast, every political claim with a simple question: Who benefits from this version of the future?

When a tech CEO promises that AI will revolutionize education, ask: who profits from automating teachers? When a company claims their algorithm will eliminate bias, ask: who defines bias, who benefits from that definition, and who gets harmed when the definition is wrong? When platforms promise to democratize content creation, ask: if this is democracy, why are we paying subscription fees to participate?

These aren't cynical questions. They're clarifying ones. They don't assume bad intentions, but they do assume that intentions matter less than outcomes, and outcomes are shaped by incentives.

The most revealing moment in any tech presentation isn't the demo of what the technology can do. It's the pause when someone asks who controls it. Watch how quickly the conversation shifts from possibility to property, from potential to profit models, from "what if" to "who pays."


In Practice: How to Reclaim Agency


The Daily Intention

Recognizing inevitability as a myth is the first step. Living as if you have choices is the practice that follows. Here are concrete ways to reclaim agency in a world designed to convince you that you have none.

Start with the Language

When someone says "inevitable," pause. Count to three. Then ask: "Who decided this?" The pause breaks the spell. The question reveals the choice.

Replace "have to" with "choose to." Instead of "I have to use this AI tool to keep up," try "I'm choosing to use this tool." The shift from compulsion to decision changes everything. You're no longer a victim of circumstances…you're someone making a strategic choice.

Question the word "disruption." When a company claims to be disrupting an industry, ask what they're actually destroying and who benefits from that destruction. Disruption is rarely neutral. It's usually extraction and destruction dressed up as innovation.

Before You Automate, Interrogate

Ask what you're trading away. Before adopting any AI tool, spend five minutes listing what skills, insights, or experiences you might lose. The carpenter who only uses power tools never learns to feel the grain of the wood. What's your equivalent?

Practice the hard way once a week. Write something by hand. Navigate without GPS. Calculate something without a calculator. Debug code line by line instead of asking AI. These aren't nostalgic exercises—they're resistance training for your mind.

Sit with problems longer. Before reaching for automated solutions, spend an extra ten minutes wrestling with the problem yourself. The wrestling is where insights live. The struggle is where you develop intuition that no algorithm can replicate.

Follow the Money

Every "free" innovation has a business model. If you can't see how something makes money, you're probably the product. Ask: "If this is free, what am I paying with?" Usually, it's your data, your attention, or your dependence.

Look for the subscription creep. Notice how tools that start free gradually introduce paid tiers, then make the free version barely functional. This isn't accident—it's strategy. Plan for it.

Ask who gets replaced. When a technology promises to "democratize" something, ask who currently does that work and what happens to them. Democratization often means "cheaper labor through automation."

Create Spaces for Deep Work

Defend your attention. Choose specific times when you engage with tools that think for you, and specific times when you don't. Your brain needs practice being the primary processor of complex information.

Build things the slow way sometimes. Take on projects that can't be automated. Grow something. Build something with your hands. Write something that requires multiple drafts. The goal isn't efficiency—it's maintaining your capacity for sustained, difficult work.

Practice saying "I don't know" more often. Resist the urge to immediately search for everything. Sit with uncertainty. Let your mind work on problems in the background. The discomfort of not-knowing is where original thinking begins.

Question the Narrative

Ask whose version of progress you're buying. When someone promises that new technology will make life better, ask: "Better for whom?" Progress for shareholders often looks different than progress for workers, communities, or the environment.

Look for the pattern. Notice how similar promises have played out in the past. The internet would democratize information but instead, it concentrated power in a few platforms. Social media would connect us, instead, it often isolates and polarizes us. What pattern might repeat with AI?

Find the people saying no. For every technological "inevitability," someone is choosing differently. Find them. Learn from their choices. You're not alone in questioning the default path forward.

Start Small, Think Big

The goal isn't to reject all technology or live in the past. The goal is to remember that you have agency in how you engage with the tools that shape your life. Every time you pause to question, every time you choose the harder path, every time you ask "who benefits?", you're practicing the radical act of thinking for yourself.

These small acts of resistance compound. They strengthen your capacity to make intentional choices rather than reflexive reactions. They remind you that the future isn't something that happens to you, it's something you help create.

The train has brakes. But you have to remember to use them.


The Reclamation Begins

The most radical act in 2025 isn't building a better algorithm or launching a more efficient startup. It's remembering that we have choices.

We can choose to regulate AI development before it reshapes society rather than after. We can choose to preserve spaces for human skill development even as we adopt new tools. We can choose transparency over convenience, community ownership over corporate control, sustainable innovation over extractive growth.

We can choose to ask different questions about progress itself. Instead of "How fast can we go?" we might ask "Where do we want to end up?" Instead of "What's technically possible?" we might ask "What's actually beneficial?" Instead of "How do we compete?" we might ask "How do we thrive together?"

These choices require us to reject the story that any particular version of the future is inevitable. They require us to remember that the future is not a destination we discover but a place we build.

The train has brakes. The tracks can be rebuilt. The destination is still up to us.

Every time someone tells you that change is impossible, that resistance is futile, that adaptation is the only option, remember: they're not describing reality. They're describing their preferred reality, fueled by their own desires. Preferences can be changed.


What We're Really Reclaiming

This conversation goes beyond technology. It's about agency: the recognition that human beings have the capacity to shape their circumstances rather than simply react to them.

The future doesn't belong to the companies with the most data or the biggest models. It belongs to the people who refuse to accept that any version of the future is beyond negotiation. It belongs to the communities that choose cooperation over competition, sustainability over extraction, wisdom over efficiency.

It belongs to anyone willing to ask: What if we built something different?

The question itself is the beginning of reclamation. The moment we start asking it, inevitability loses its power, and possibility sparks.

What are we waiting for?

Let’s Make
Magic :)

Got a big idea, product, or message that needs to land? I work with teams ready to build things that connect and last. Reach out and let’s talk.

Contact us

Let’s Make
Magic :)

Got a big idea, product, or message that needs to land? I work with teams ready to build things that connect and last. Reach out and let’s talk.

Contact us

Let’s Make
Magic :)

Got a big idea, product, or message that needs to land? I work with teams ready to build things that connect and last. Reach out and let’s talk.

Contact us