AI is already working. The question is whether you're working with it.

Data shows AI has already transformed work for those who adopted it. But most companies are still firing in anticipation, not based on results.

Tags: artificial intelligence, productivity, career, future of work, technology

I don’t go a single day without using AI anymore. Claude, ChatGPT, it doesn’t matter. I have an idea, want to validate a concept, need to structure an argument, and the first impulse is to open the tool. It wasn’t planned. It became part of the rhythm.

This didn’t bother me. But it made me reflect. Because the dynamics change. And they change very fast. In September 2025, Anthropic released Claude Sonnet 4.5. Less than five months later, in February 2026, Claude Opus 4.6 arrived with a context window of 1 million tokens, five times larger than the previous version. Five times more context means five times more capacity to understand, analyze, and connect information in a single conversation. And this is a change measured in months, not years.

When the tool you use daily evolves at this speed, it’s not just your work that changes. It’s your relationship with work. The way you think, what you consider feasible, what seems “too much” to do alone. Everything recalibrates in real time.

Months ago, I wrote here that the direction was clear. That the question wasn’t whether AI would replace tasks, but what would remain of us when it did. Recent data confirms: it’s no longer direction. It’s displacement. Things have already moved. And most people are still debating whether they should move.

The numbers nobody is reading right

There’s a dominant narrative about AI and work that swings between two extremes: blind enthusiasm (“it’ll solve everything!”) and widespread panic (“it’ll destroy all jobs!”). The data tells a more complex story than either side admits.

The Anthropic Economic Index analyzed real AI usage in the labor market, and the findings are revealing: 49% of existing jobs can use AI in at least 25% of their tasks. Not in theory. In practice, with tools available today.

But another detail caught my attention even more: 52% of real Claude usage is for augmentation, not automation. People are using AI to amplify what they do, not to replace what they do. The tool works as a multiplier, not a substitute.

The gains are real and measurable. Tasks requiring university-level reasoning are being accelerated by up to 12x. High school-level tasks by 9x. The potential macroeconomic impact is significant: estimates point to a productivity increase in the US that could add between 1.0 and 1.2 percentage points to annual growth. To put that in perspective, average American productivity growth over the past two decades has been around 1.5% per year. We’re talking about nearly doubling that number.
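The "nearly doubling" claim above is simple arithmetic, and it's worth checking. A minimal back-of-the-envelope sketch, using only the figures cited in this paragraph (1.5% baseline growth, 1.0–1.2 percentage points of estimated AI uplift):

```python
# Back-of-the-envelope check on the "nearly doubling" claim.
# All figures come from the estimates cited in the text above.
baseline = 1.5                        # avg US productivity growth, % per year
uplift_low, uplift_high = 1.0, 1.2    # estimated AI contribution, percentage points

low = baseline + uplift_low           # 2.5% per year
high = baseline + uplift_high         # 2.7% per year

print(f"Projected growth: {low:.1f}%-{high:.1f}% per year")
print(f"That is {low / baseline:.2f}x-{high / baseline:.2f}x the baseline")
```

Running it shows the projected range is roughly 1.7x to 1.8x the historical baseline — not quite a full doubling, but close enough that "nearly doubling" is a fair characterization of the upper bound.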

Now the other side. The tech sector accumulated roughly 245,000 layoffs in 2025. In the US, approximately 55,000 were directly attributed to AI adoption. And worker anxiety about AI jumped from 28% in 2024 to a projected 40%.

The dissonance is striking. Data shows that augmentation works, that AI amplifies more than it replaces. But the public narrative is dominated by fear of replacement. And that fear, as we’ll see, has real and measurable consequences.

The paradox: companies fire out of fear, not results

Here’s what I consider the most important data point in this discussion: according to the Harvard Business Review, 60% of organizations that reduced headcount citing AI did so in anticipation. Not because they measured results. Not because AI proved it could replace those professionals. They did it out of fear of falling behind.

Meanwhile, only 2% of companies made significant layoffs directly tied to actual AI implementation. Two percent. The gap between these numbers reveals something beyond poor management. It reveals a market dynamic driven by narrative, not evidence.

The fallout from this rushed race is showing up. Forrester reported that 55% of companies that made AI-based cuts regretted it. Gartner projects that half of these companies will need to rehire by 2027. The cycle is so predictable it should have a name: fire on hype, rehire on reality.

The most emblematic case is Klarna. The Swedish fintech replaced 700 employees with AI systems, publicly celebrated the decision as an innovation case study, became a reference in conference panels. And then had to rehire because service quality plummeted. The enthusiasm got ahead. Reality caught up.

I’ve seen this pattern before. During the early waves of outsourcing, companies cut entire teams to “save money” without understanding what those people actually did. The result was always the same: loss of tacit knowledge, quality decline, costly rehiring months later. With AI, the script is repeating. The technology changes, but the management mistake is identical: reacting to the hype before understanding the tool.

And most concerning: 44% of hiring managers already expect to make AI-related layoffs in 2026. The cycle of poorly informed decisions is far from over.

What I’ve gained (and what I’m losing)

I’ll be direct. The productivity gain I’ve had with AI over the past year is hard to overstate.

I use Claude daily as a work partner. To write code, review architecture, explore ideas, structure documents, navigate between completely different contexts in the same day. An internal Anthropic study revealed that their own engineers use Claude in 60% of their work, with productivity gains of approximately 50% year over year. My personal experience confirms that order of magnitude. I can do in an afternoon what used to take days.

But another data point impressed me more: 27% of Claude-assisted tasks were tasks that “wouldn’t have been done otherwise.” It’s not just doing things faster. It’s doing things that simply wouldn’t have fit in the day before. Projects that would stay on the shelf, experiments that weren’t worth the time investment, analyses nobody would have the patience to execute manually. AI isn’t just accelerating existing work. It’s expanding what’s viable.

It’s liberating. And at the same time, concerning.

Because there’s a real risk in this. It’s similar to the calculator. People who get used to doing math in their heads develop an agility that goes beyond arithmetic: quick reasoning, estimation, a sense of proportion. But when you start using the calculator for everything, you reach a point where you’re punching in 8 + 5. Not because you can’t add. Because the reflex changed.

With AI, the pattern is the same. When the tool solves problems before you even finish formulating them, something changes in your thinking process. You start delegating not just execution, but parts of the reasoning itself.

I try to balance this in my day-to-day. I use AI heavily for complex tasks, for navigating dense technical contexts, for exploring paths that would take hours alone. I spend a good part of my time reading and critiquing what the AI produces, not blindly accepting it. But I also force myself to step out of that mode. To write without assistance. To structure arguments from scratch. To exercise the muscle that the tool threatens to atrophy.

Still, sometimes I wonder if I’m getting worse at certain things while getting better at others. And maybe that’s the most honest question anyone using AI daily can ask themselves.

The adaptation that matters (and the one that doesn’t)

Dario Amodei, CEO of Anthropic, recently said that 50% of entry-level white-collar jobs could be at risk within the next one to five years. He used the phrase “unusually painful disruption” and called this moment the “adolescence of technology”: too powerful to ignore, too unpredictable to blindly trust.

The phrase captures well where we are. “Learning to use ChatGPT” isn’t adaptation. It’s the bare minimum. The adaptation that truly matters is different: developing judgment, critical thinking, and the ability to define problems worth solving. Knowing how to operate the tool is commodity. Knowing what to ask of it is the differentiator.

In Brazil, this picture gains an extra layer of complexity. AI investments already exceed R$13 billion, and 87% of business leaders plan to increase them. But adoption is deeply uneven. While large companies in urban centers integrate AI into their processes, most of the country still faces basic digitization challenges. Tertiary education rates are low, and the qualification infrastructure can’t keep pace with the speed of change.

In the US, the debate is productivity versus displacement. In Brazil, the barrier is more fundamental: access, education, and uneven digitization. Before debating whether AI will replace jobs, we need to discuss who has the conditions to use it. It’s a different conversation, and no less urgent.

If I could distill what I’ve learned navigating this transition, it would be four points:

  1. Use AI daily, but pay attention to what happens to your thinking. The tool is powerful. The risk is outsourcing reasoning along with execution.
  2. Periodically practice without AI to maintain fundamental skills. Just as athletes train fundamentals, professionals need to maintain the ability to think without assistance.
  3. Focus on the work AI doesn’t do well: judgment under ambiguity, relationship building, problem definition. These are the skills that gain value as AI absorbs the rest.
  4. Be skeptical of both hype and panic. The data tells a more complex story than any headline suggests. Look for the numbers before forming an opinion.

AI won’t wait for anyone to adapt. It’s already in the office, in the terminal, in the workflow of those who chose to pay attention. The question isn’t whether it will change work. It’s whether you’re changing with it.


If this topic interests you, I’d love to exchange ideas. Find me on LinkedIn.
