Anthropic’s December 2025 internal report on how its own engineers use AI tools paints a complicated picture of productivity gains, shifting workflows, and growing anxiety about long-term skill erosion.
The headline finding is striking: 27% of all Claude-assisted tasks were pieces of work that “would not have been done otherwise.”
These weren’t trivial tasks. Engineers used AI to scale in-progress projects, revive abandoned ideas, and build nice-to-have internal tools like interactive dashboards, experimental pipelines, and data visualizations that previously weren’t worth the time investment.
AI isn’t just making existing work faster; it’s expanding the scope of what engineers can reasonably build at all. Yet even as teams report increased output and greater “full-stack flexibility,” the report highlights a growing concern inside Anthropic’s walls: engineers fear that the key skills they built their careers on are slowly dulling.

Source: Anthropic’s “How AI is transforming work at Anthropic” study
Based on Anthropic’s internal survey of its engineers and researchers, much of Claude’s workload today sits in familiar territory. Engineers used the tool for a variety of coding tasks, with debugging the most common (55%), followed by interpreting unfamiliar codebases, proposing refactors, fixing small “papercut” issues, and building internal tools or dashboards that would’ve been deprioritized in the pre-AI era.
The study notes that engineers are most comfortable delegating “easily verifiable or low-stakes tasks,” where errors are quickly detected and corrections are cheap.
Moreover, AI’s responsibilities are already expanding beyond simple tasks. Engineers report using Claude for more complex work: code design and architectural planning jumped from 1% to 10% of usage, while implementing new features rose from 14% to 37%. In other words, AI has shifted from mere assistant to daily collaborator.
Anthropic engineers now use Claude in about 60% of their work, and many estimate they are roughly 50% more productive than a year ago. But the study also notes a caveat: some engineers reported a net increase in time spent debugging and cleaning up Claude-assisted code.
As the scope grows, so do the trade-offs, and they go beyond time savings and output quality.
Despite the surge in output, many engineers worry that AI is slowly eroding their foundational skills. With Claude handling more of the mundane work (and increasingly, the complex work), opportunities for hands-on practice shrink.
The result, they fear, is a creeping loss of deep craftsmanship. Some feel alienated from what used to be difficult yet fulfilling work, with one engineer describing it outright as “skill atrophy.”
Others lament the loss of the satisfaction that came from manually writing, debugging, or refactoring code. Several are also concerned that their role will narrow into supervising, validating, and editing AI-generated code rather than building things from scratch.
One engineer said their work had shifted “70%+ to being a code reviewer/reviser rather than a net-new code writer.” Another imagined a future of “taking accountability for the work of 1, 5, or 100 Claudes.”

Another finding: this shift in workflow is also changing team dynamics. AI often becomes the “first stop,” reducing the need for peer collaboration, mentorship, and the serendipitous learning that comes from code reviews and engineering discussions. As one engineer put it in the study:
“I like working with people and it’s sad that I ‘need’ them less now… More junior people don’t come to me with questions as often.”
In essence, short-term productivity may be up, but the long-term trajectory of engineering careers feels less certain.
What’s happening at Anthropic is likely a preview of what many engineering teams will face over the next few years. As AI systems become more capable, tasks once considered essential training grounds—debugging, writing boilerplate, wrestling with unfamiliar codebases—may become automated or AI-assisted by default.
That could mean fewer opportunities for juniors to build core experience, widening the gap between “AI-supervising engineers” and “AI-agnostic” or traditional engineers. A previous analysis of vibe coding raised similar concerns about ownership, accountability, and eroding technical quality, noting that AI-generated errors and hallucinations can require more effort to debug and clean up.
There are cultural risks, too, from less pair programming to fewer mentorship loops, that can endanger the talent pipeline. As more people work through an AI instead of through each other, institutional knowledge transfer could weaken, threatening the long-term sustainability of companies and organizations.
All of this raises the question:
If Anthropic, one of the world’s most AI-native engineering teams, is already seeing skill dilution and collaboration drop, what will this mean for the rest of the industry?
In the end, this shift is neither purely good nor purely bad. AI opens doors to new types of work, faster iteration, and broader experimentation. But it also forces developers to reconsider what the core of their craft looks like in an era where writing code manually may become optional.