AI was sold to us as a productivity multiplier.
Automation, copilots, and “10x efficiency” promised a future where humans could focus on higher-level thinking.
Yet, paradoxically, it feels like we’re working more than ever.
As a Product Manager working in tech, I’ve been observing a growing disconnect between what AI promised and how it’s actually changing our behavior. This post isn’t an anti-AI manifesto. It’s an attempt to slow down and ask a more fundamental question:
What are we actually optimizing for?
Table of Contents
- 1. AI Seems to Be Making People Work More, Not Less
- 2. But What Are We Actually Running Toward?
- 3. When the Means Become the Goal (AI as the Wrong Center)
- 4. The Confusion AI Is Creating (And Amplifying)
- 5. AI and the Drift Toward More Instinctive Behavior
- 6. Not AI Skepticism, But a Call for Caution
- 7. What Actually Matters, and Where to Focus
1. AI Seems to Be Making People Work More, Not Less
Across the tech industry, work hours don’t appear to be shrinking. In many cases, they’re expanding.
- Workdays are getting longer, not shorter
- Context switching has increased
- Expectations around output velocity have quietly risen
In the US tech scene, there’s a noticeable admiration, sometimes explicit, sometimes implicit, for extreme work cultures. Narratives similar to China’s “996” (working 9 a.m. to 9 p.m., six days a week) are being reframed as necessary sacrifices for staying competitive in the AI race.
This raises uncomfortable questions:
- Is this driven by an existential fear—the idea that humans might be fully replaced?
- Or is it a national and economic anxiety—fear of losing technological dominance?
- Or, more realistically, a blend of both?
AI has become not just a tool, but a pressure amplifier.
2. But What Are We Actually Running Toward?
What’s more troubling than longer hours is the lack of clarity around why we’re investing so much of ourselves physically, mentally, and emotionally.
Yes, every investment carries risk.
But increasingly, it feels like:
- We’re sprinting without a finish line
- Speed is valued over direction
- “Being productive” is confused with “doing meaningful work”
In many situations, it’s hard to tell what the end goal is.
It’s even harder to see how all this effort translates into something genuinely constructive for users, for businesses, or for ourselves.
From a product perspective, this is alarming.
Execution without purpose is just well-organized chaos.
3. When the Means Become the Goal (AI as the Wrong Center)
There are cases where AI truly acts as a powerful lever.
If a company already has:
- A well-structured operating model
- Clear problem definitions
- Strong feedback loops
Then AI can absolutely accelerate growth in a meaningful way.
But those cases are not the majority.
Instead, I find myself asking questions like:
- “Isn’t this AI product basically just wrapping ChatGPT or Claude?”
- “If we automate something inefficient, aren’t we just doing inefficient work at a massive scale?”
These aren’t cynical takes; they’re valid product questions.
The real issue is that many organizations don’t have solid foundations to begin with:
- The core problem isn’t well-defined
- Processes are broken or unclear
- Metrics don’t reflect real value
Yet AI is being treated as a universal remedy.
This is a classic case of means and ends being reversed.
Instead of:
“What problem are we solving, and how can technology help?”
We see:
“We have AI; what can we apply it to?”
Whether in business or individual work, technology is increasingly used to mask the absence of clarity rather than resolve it.
4. The Confusion AI Is Creating (And Amplifying)
Beyond inefficiency, AI is also introducing a surprising amount of confusion.
I frequently see people and teams:
- Using AI performatively, driven by fear of falling behind
- Prioritizing the appearance of being AI-savvy over real outcomes
The identity of “someone who is studying AI” or “an AI-forward organization” starts to overshadow actual goals.
This distortion isn’t limited to work.
On social media:
- Low-effort, bizarre AI-generated content is flooding feeds
- Misinformation, manipulation, and outright fabrication are spreading faster than before
- The number of people who believe and amplify these outputs is growing at an alarming rate
More unsettlingly, I’ve seen moments where agency itself is being outsourced:
- People copy-pasting AI-generated responses without review
- Mistakes being brushed off with “ChatGPT got it wrong”
At that point, AI isn’t a tool anymore — it’s a convenient scapegoat.
And in some cases, it starts to look like people are anchoring parts of their identity to AI systems rather than using them consciously.
5. AI and the Drift Toward More Instinctive Behavior
One unexpected pattern I’ve noticed is how AI seems to be pushing parts of society toward more instinct-driven, even animalistic behavior.
In milder forms, this shows up as:
- The explosive growth of influencer-driven ecosystems
- Content optimized purely for attention, not interaction
Ironically, while “social” platforms grow larger, real human-to-human interaction appears to be shrinking.
In more extreme cases:
- Many self-proclaimed “AI startups” quietly pivot toward sexualized chatbots
- Businesses that already operated in these domains (for example, adult subscription platforms) grow even faster
The common thread is hard to ignore.
People are increasingly treated not as humans to engage with, but as objects to stimulate, extract from, or optimize against.
When interaction becomes transactional and mediated entirely through systems, it’s easy to forget that there’s a person on the other side.
6. Not AI Skepticism, But a Call for Caution
This isn’t an argument against AI.
It’s an argument for being more deliberate.
From a career perspective
The ability to:
- Understand business fundamentals
- Identify what actually matters quickly
- Operate within (or choose) organizations with solid operating models
will matter more than ever.
Historically, this has always been true. AI just makes the gap wider.
From a work execution perspective
Speed is no longer the bottleneck.
- Moving fast is easy
- Failing fast is even easier
Without strong review loops and judgment, speed simply accelerates failure.
From a human perspective
The deeper question is existential:
In a world where AI can do more and more, what makes humans more human? And what roles should we consciously take on?
At times, this moment feels like *Everything Everywhere All at Once*:
- Will we lean into an optimistic nihilism, accepting uncertainty while still choosing meaning?
- Or slide into a destructive nihilism, where nothing matters and everything becomes disposable?
7. What Actually Matters, and Where to Focus
Technological history tends to repeat a familiar pattern:
- Explosive experimentation
- Convergence
- Commoditization
- A new paradigm
What we’re seeing now looks like a massive phase of divergence.
Personally, I don’t want to be swept away by it blindly.
Many current experiments don’t need more exploration; they need stronger fundamentals:
- Clear problem framing
- Better system design
- Sharper judgment
It’s also worth noting:
- We’re still largely within the bounds of probabilistic transformer-based models
- Reports increasingly suggest diminishing returns from raw computational scaling
Just as the release of ChatGPT in late 2022 and GPT-4 in 2023 triggered a major paradigm shift, it’s entirely possible that today’s dominant approaches will be disrupted sooner than we expect.
That doesn’t mean current efforts are meaningless.
But it does mean:
- Understand the broad direction
- Avoid over-investing in narrow details
- Focus on what’s genuinely needed right now
I’m not a mathematical genius who can push the boundaries of model architecture.
So instead, I choose a simpler path:
Calmly identify what matters, and work on solving that.
Final Note
If AI gives us anything, it should be clarity, not panic.
As product builders, leaders, and humans, our responsibility isn’t to run faster, but to run in the right direction.