Curiosity is often seen as a key trait of great developers. However, in an age of generative AI tools that can write code, generate tests, and even review themselves, the act of asking questions is at risk of being outsourced. While these tools may speed up delivery, some in the developer community warn that they also threaten developers' long-term growth.
An MIT study also hinted at a decline in cognitive capabilities when using an LLM. This isn't just about juniors or seniors, fast delivery, or clean code. It's about what happens when understanding is replaced by imitation, and how that could slowly diminish a developer's capacity to create truly exceptional software.
The Fragility of Copy-Paste Knowledge
“GenAI is potentially dangerous to the long-term growth of developers. If you pass all the thinking to GenAI, then the result is that the developer isn’t doing any thinking,” said Ben Hoskin, a Dynamics 365 solution architect at Kainos, in a conversation with AIM.
Additionally, Hoskin notes in his Medium blog that developers who don't understand the logic behind sound practices miss out on their real benefits. He warns that blindly following principles without understanding their purpose leads to fragile knowledge that easily breaks down in unfamiliar contexts.
Hoskin draws a line between clean code and correct code. He notes that developers “might already be wrong, but don’t know it yet,” particularly when they follow requirements without understanding the underlying logic.
This was echoed when AIM asked for his view, based on his experience, on code that is correct but built for the wrong job. He notes that if the requirements are wrong, no matter how good the code is, it will still be incorrect.
He believes the company often only finds out during demos or user acceptance testing (UAT), which causes problems because dependent software has already been built on top of the faulty foundation.
Chaitanya Choudhary, CEO of Workers IO, echoed this sentiment and told AIM that he once dedicated days to developing a well-designed authentication system, only to discover that users were abandoning it due to its complexity. Now, he emphasises validating the problem first, often by applying the simplest solution possible.
Choudhary believes the solution-first mindset is being amplified by GenAI. “It can create a mentality where you build because you can, not because you should,” he said. The issue is not a lack of capability, but curiosity. Or rather, the lack of it when machines do the proposing.
Experiment, Don’t Just Execute
Hoskin explained that developers learn more through experimenting with various solutions and experiencing failures, rather than just creating solutions. He mentions that this kind of thinking is being gradually replaced by automation. He encourages developers to approach their work as if conducting an experiment, finding a healthy balance between meeting requirements and fostering growth.
Choudhary echoes this experimental mindset, emphasising the importance of being flexible and adaptable. He told AIM, “The best engineers I know approach each feature like a hypothesis to be tested. They ask ‘What if we’re wrong about this?’ and build in ways that make it easy to pivot.” This shared perspective highlights a common theme among innovative developers: the value of iterative testing and learning.
Building on this idea, Choudhary also stresses matching the level of investment to the problem at hand. He adds that while a rapid prototype may suffice in some cases, others require resilient infrastructure. This experimental approach enables deliberate trade-offs, preventing the default tendency to over-engineer every solution.
All agree GenAI can play a positive role, but only when used deliberately.
“The way to use GenAI while learning is to ask GenAI lots of questions and get it to come up with ideas that you then take time to understand and develop,” said Hoskin.
However, GenAI isn’t always helpful in that regard. Hoskin warns that a weakness of GenAI is the need to review everything it creates. It’s too easy to assume it has done things correctly, he adds, because reviewing code, documents, and unit tests is boring.
Considering this, it’s crucial to adopt a cautious approach when using GenAI. It’s essential to treat your work as an experiment and remain open to refining your solutions. As Alex Dunlop, a senior engineer at Popp AI, said, “It’s vital to see your work as an experiment and avoid becoming too attached to your initial solution, as this can lead to defensiveness.”
Curiosity Is the Long Game
The concern isn’t that GenAI will produce poor developers, but that it will foster complacent ones: developers who avoid the struggle of debugging, stop asking why, and place too much trust in the system.
When asked about the issues developers face in the current era of GenAI, Hoskin said, “Without understanding the purpose behind the requirements, development teams have no idea if they are building the right software.”
Dunlop explained that the initial excitement of not knowing everything, and the constant urge to learn, tends to fade as one becomes a senior engineer, replaced by a sense of duty to have all the answers. A recent shift in outlook, however, has rekindled his explorer’s curiosity: viewing everything as an unknown to be uncovered.
For those willing to stay curious, GenAI can be an accelerant rather than a crutch. Choudhary builds “curiosity projects”—small tools that solve real problems—just to keep learning. “I also make it a practice to understand what the AI is doing,” he adds. “Asking ‘why did it choose this approach?’ keeps me learning even when using powerful tools.”
As GenAI improves at delivering code, the best developers may not be the fastest builders, but rather the most profound thinkers who retain their curiosity and critical thinking.
The post ‘GenAI is potentially dangerous to the long-term growth of developers’ appeared first on Analytics India Magazine.