Developers Should Have a Choice of Not Using AI

Across the tech industry, a quiet tension is emerging between productivity and autonomy. As AI becomes embedded in developer workflows through tools like Copilot and ChatGPT, many organisations are encouraging—and in some cases even mandating—their use. 

The move raises a difficult question: should developers have the freedom to choose whether to use AI tools at work, or has the modern software industry’s relentless pursuit of efficiency made such a choice obsolete?

This debate gained prominence after a widely shared blog post by programmer Prahlad Yeri, who warned of a worrying shift from AI assistance to AI enforcement. For many, this represents a fundamental threat to the craft of programming itself, a shift from creating to merely reviewing and approving machine-generated work.

The Freedom to Choose or the Cost of Refusal

Brijesh Patel, founder and CTO of SNDK Corp, believes the answer lies in empowerment, not enforcement. “Developers should be empowered to choose the tools that best align with their work style and the project’s needs,” he told AIM.

“Forcing developers to use AI could stifle innovation and lead to the over-reliance on machines for tasks that require human insight.”

Patel explained that at SNDK Corp, no developer is mandated to use AI assistants. Instead, they are encouraged to make informed decisions about integrating new technologies. 

“We don’t mandate tools like AI for the sake of efficiency. Instead, we empower our developers…ensuring their creativity and expertise are always at the core of what we build.”

He added that AI should be viewed as a collaborative enhancement rather than an imposed requirement. For him, balance is key. When AI is introduced thoughtfully, it can increase consistency and reduce time spent on repetitive tasks. However, when imposed without context, it risks alienating developers who value their creative autonomy.

Neeti Sharma, CEO of TeamLease Digital, offered a balanced view. While she agreed that “developers should have the choice to choose AI tools that they believe will help them in their area of work”, she added that organisational governance must take precedence in certain cases.

“Companies are required to define governance and security best practices, which may exempt a few tools from being accessed by developers,” she said.

Sharma explained that companies must weigh flexibility against compliance. Allowing unrestricted tool use can pose data privacy and intellectual property challenges, she pointed out.

At the same time, developers need space to experiment and decide which tools best fit their workflow. She believes that striking this balance will define how comfortably organisations navigate the next phase of AI adoption.

This tension between individual autonomy and organisational control is becoming increasingly visible in tech workplaces. For companies that prioritise productivity metrics, mandating AI tools may seem logical. But for developers, it risks reducing craftsmanship to compliance.

Accountability in the Age of AI Code

Another contentious issue is responsibility. If AI-generated code fails, who should bear the blame: the developer who approved it or the company that pushed the tool?

Patel believes accountability must be shared. “Developers must take responsibility for the code they produce, but the company that implements AI tools should also be held accountable for ensuring that the tool is reliable, effective and transparent,” he said. 

According to him, shared responsibility models could help reduce friction between developers and management, making it clear where tool oversight ends and human judgement begins.

Sharma, too, agreed that while developers bear primary responsibility for any errors in their code, the company must also ensure the right processes and support are in place.

“Since the code is being prepared on behalf of the company, it also has a key role to play, upskilling, reviewing systems and ensuring developers get enough time for coding such that they provide error-free code,” she explained.

“So while the developer owns the code, the company owns the process.”

Liu Tang, head of product for APAC at TiDB, compared AI to a knife—useful but potentially dangerous if misused. 

“AI can suggest, generate or refactor code, but it doesn’t understand the full business logic or context,” Tang said. “The engineer remains the decision-maker; the one who decides whether to accept or reject what the AI proposes.”

Tang elaborated that companies must ensure developers are trained to understand the limitations of AI. He observed that while AI can speed up tasks, it can also introduce subtle logic errors if used without critical evaluation. In his view, good engineers should treat AI as a junior collaborator, not an unquestioned authority.

Losing the Craft or Redefining It

While Yeri fears that AI could turn programmers into “rubber stamps” approving machine-written code, the experts see nuance. All three agree that the danger lies not in the tool itself, but in how it is used.

Sharma cautioned that developers relying entirely on AI for coding could lose their grasp of logic, debugging skills and creative problem-solving. However, she also noted that when used effectively, AI can serve as a coding partner, handling repetitive tasks and allowing developers to concentrate on architecture, performance and innovation.

Tang shared a similar optimism, describing AI as a creative accelerator rather than a dependency. Developers who embrace AI as a collaborative partner, he said, can enhance their creativity and strategic thinking. He added that at TiDB, some of the most productive engineers use AI to automate routine work, freeing up time to experiment and innovate.

Patel echoed that sentiment, noting that “coding is as much an art as it is a science”. For him, developers who balance human insight with AI efficiency will remain the real innovators. He believes the goal should not be to resist AI, but to use it as a tool to unlock higher levels of problem-solving.

The Future Developer

As AI becomes an integral part of software engineering, the definition of a developer may evolve. Tang suggested that tomorrow’s engineers might look very different from those of today. He believes that as AI progresses, the traditional developer role may become obsolete, replaced by a new kind of developer specialising in problem definition, intelligent system orchestration, quality and safety governance, and high-level human-AI workflow design.

Both Sharma and Patel believe that adaptability will define future success. Developers who treat AI as a collaborator rather than a competitor are likely to thrive in this evolving environment.

For now, the debate on whether AI should be a helping hand or a mandatory rule continues. The industry may not yet have a consensus, but as Patel summed up, autonomy remains at the heart of true innovation.
