Tech company executives are confident that AI will completely transform the economy and point to the changes they see in-house to prove that this change is coming fast. At Meta, Google, Microsoft, and others, leadership says that AI generates a growing share of the overall code, which makes it cheaper and faster to produce. The implication is that if this AI is good enough that tech companies are using it internally to improve efficiency and reduce headcount, it’s only a matter of time until every other industry is similarly transformed.
Developers who are told to use AI whether they like it or not, however, tell a different story. On Reddit, Hacker News, and other places where people in software development talk to each other, more and more people are becoming disillusioned with the promise of code generated by large language models. Developers talk not just about how the AI output is often flawed, but also about how using AI to get the job done is often a more time-consuming, harder, and more frustrating experience, because they have to go through the output and fix its mistakes. More concerning, developers who use AI at work report feeling that they are de-skilling themselves and losing the ability to do their jobs as well as they used to.
“We’re being told to use [AI] agents for broad changes across our codebase. There’s no way to evaluate whether that much code is well-written or secure—especially when hundreds of other programmers in the company are doing the same,” a UX designer at a midsized tech company told me. 404 granted all the developers we talked to for this story anonymity because they signed non-disclosure agreements or because they fear retribution from their employers. “We’re building a rat’s nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now…).”
The actual quality of output doesn’t matter as much as our willingness to participate.
Tech company executives love to brag about how much of the code at their company is AI-generated. In April, Google said that three quarters of new code at the company was generated by AI. Last year, Microsoft CEO Satya Nadella said up to 30 percent of the company’s code was generated by AI. Microsoft’s CTO Kevin Scott said he expects 95 percent of all code at the company to be AI-generated by 2030. Meta’s Mark Zuckerberg said last year he expects AI to write most of the code improving AI within 12-18 months. Anthropic says 90 percent of the code written by most of its team is AI-generated. Tech companies have also been bragging about their “tokenmaxxing,” or how much money they’re spending on AI tools instead of human employees.
Predictably, the huge spike in productivity that these companies claim their own AI products have enabled hasn’t resulted in more or better products, shorter work weeks, or better consumer experiences. Mostly, AI implementation in tech companies has been used to justify multiple massive rounds of layoffs. To name just a few recent examples where tech companies said they reduced headcount because of AI use: Meta said it would cut 10 percent of its workforce (around 8,000 people); Microsoft said it would offer voluntary retirement to 7 percent of its American workforce (around 125,000 people); and Snapchat said it would lay off 16 percent of its full-time staffers (about 1,000 people).
The developers I talked to contradicted the narrative about AI’s utility in coding in many ways, but the most glaring issue with the pitch from AI company executives is that the adoption of AI tools they see internally isn’t voluntary or organic. Developers say they are either explicitly ordered to use AI tools or heavily pressured to use them.
“AI in some shape or form is all but explicitly mandated,” a software engineer at a FAANG company that brags publicly about its internal AI adoption told me. “Its usage is part of our performance review criteria and most (maybe all?) of us have been reorganized into AI focused ‘pods.’ We’re absolutely flooded with AI tooling and it feels like the answer to every problem is ‘use AI first.’”
“We’ve been told performance evaluations are tied to AI adoption,” the UX designer told me. “This has led to most of my teammates using it performatively, even if most of us implicitly know that the output is flawed. The actual quality of output doesn’t matter as much as our willingness to participate.”
Another software engineer, at a financial technology company, told me that he was never forced to use LLMs, but that the companies where he worked changed in ways that encouraged their use. His previous employer didn’t demand that developers use AI, but it was encouraged, and developers were given access to Cursor, one of the leading AI coding tools.
“It started as a ‘who wants to try it’ and I volunteered. Later it was slowly, due to costs, that we stopped renewing our JetBrains IDE and forced everyone to move to Cursor (though the editor itself doesn’t force you to use AI),” he said. JetBrains makes a family of integrated development environments (IDEs) widely used by software developers. “Adoption came mostly from inside the engineering team, with a single engineer manager trying to champion it and writing project based rules for Cursor to try to make the output better.”
All the developers I talked to were excited to try using LLMs at work at first, or were at least curious about them. Their feelings about the tools, based on their personal experience, are now overwhelmingly negative.
“There were almost no productivity gains using IDE-based AI tools. AI-generated code ended up with more bugs because I am working on distributed web apps, highly complex multi-system things, so giving the LLM context is very difficult,” a software developer at a small web design firm told me. “Another developer on a contract working with me at the moment generates massive amounts of code, leaving me with 1000+ lines of pull requests to review and it takes massive amounts of time to do this. This leads to me feeling more tired and burned out than I’ve ever felt in my entire life. The cognitive overhead of switching between prompting, coding, checking the LLM’s output is a massive energy drain. It has not been a productivity booster at all, it feels like a speedrun towards severe mental exhaustion.”
The developer in fintech I talked to also said that one major problem with LLMs is that they can generate more code than developers can properly vet or explain. “The sheer breadth of code makes it impossible to be critical enough and then you’re either throwing it away or submitting it and feeling scared there might be really low quality stuff that if someone notices will make you embarrassed (and even more embarrassing to say: ‘oh i don’t know what that is, the AI did that’),” he said. “Or worse, you ship it without someone noticing and that is really hit or miss.”
“I have gotten stuck on bug fixes where, when I run out of Anthropic tokens in Claude Code, I couldn’t work anymore. The current system I am working on started to become a monstrosity of complexity where I didn’t even know what most of it does anymore, and when I had to fix a bug, it took longer than I would have taken in the past to debug,” the software developer at a small web design firm told me.
The developers I talked to found AI useful for some tasks. Several developers said that it was good for experimentation, allowing them to quickly prototype an idea or to implement something in a domain they’re unfamiliar with. One developer said it was a good information interface. Specifically, he said, the AI helped him find where on the server a certain request is handled, summarize logs, or find documentation related to code changes.
The problem all the developers I talked to agreed on is that the more they relied on AI to code, the more the skills they’d honed for years deteriorated. This is by now a well-studied phenomenon, sometimes referred to as “cognitive debt” or “cognitive atrophy.” The idea is that people who use AI to automate certain parts of their job lose the ability to do those tasks well, thereby de-skilling themselves.
“I had some issues where I forgot how to implement a Laravel API and it scared the shit out of me. I went to university for this, I’ve been a software engineer for many years now and it feels like I am back before I ever wrote a single line of code,” the software developer at a small web design firm told me.
“It’s making me dumber for sure,” the fintech software developer told me. “It’s like when we got cellphones and stopped remembering phone numbers, but it’s grown to me mentally outsourcing ‘thinking’ in general. I feel my critical thinking and ability to sit and reason about a problem or a design has degraded because the all-knowing-dalai-llama is just a question away from giving me his take. And supposedly I tell myself ill just use it for inspiration but it ends up being my only thought. It gives you the illusion of productivity and expertise but at the end of the day you are more divorced from the output you submit than before.”
“When I was using it for code generation, I found myself having a lot of trouble building and maintaining a mental model of the code I was working with,” the software engineer at the FAANG told me. “Another aspect is that I joined late last year and [the company’s] codebase is massive. As a new hire, part of my job is to learn how to navigate the codebase and use the established conventions, but I think the AI push really hampered my ability to do that.”
The developers I talked to agreed that LLMs will stick around and play a role in programming in the future in some fashion, but worried about how the industry will adapt to executives’ current obsession with the technology, especially when it comes to fostering future generations of developers.
“Older programmers will be fine if there are any jobs left in a few years, but I worry for people early in their careers,” the UX designer told me. “We are hiring junior programmers who rely on AI to complete the simplest tasks. They don’t have the knowledge or experience to know when AI output is error-laden or inefficient.”
“I wish I had a crystal ball for this one, but my gut feeling is that this method of building software will be unsustainable either economically or in terms of tech debt,” the software engineer at the FAANG company said. “There’s a pretty clear split on my team between people who love AI coding and those who just do it because it’s what the company wants, and generally speaking I find that the people who are still [technically focused individual contributors] with their nose in code all the time are less likely to be big AI boosters. I think the tech and its outputs start to really break down the more you question them and those who are doing that day in and day out tend to have a worse opinion of the tech.”
“I think there will be a ‘reckoning’ or ‘awakening’ from the industry notion that now everyone can code and that vibe coding is viable for a real production app and software companies are dead,” the developer in fintech said. “I think we will grow to find the patterns and industry best practices that will balance the negatives of LLM development (hallucination, unstructured code) with better techniques to verify the output’s correctness at scale, and the hype and techno optimism of AI will get to a saner middle ground.”