a red, box-shaped machine being worked on by robots

AI + Human Hybrid Developers - The Worst of Us?

The current interface for software development still has shortcomings, and LLM-integrated IDEs draw backlash from the engineering community for pushing developers out of jobs and for producing broken or poor-quality code.

These IDEs introduce new features that attract developers at every stage of their careers: not just hobbyists and tech-savvy business owners, but senior engineers at big-name startups whose work shapes products used by millions of people. Every time X releases a new Grok version, Cursor ships a new feature, or Deepseek outperforms another model on a benchmark, we hear arguments on both sides about how these tools should be applied safely and responsibly.

For enterprises, the concern is the risk taken on when vital decisions and investments depend on AI tools or their advice.

There are plenty of solid applications in code documentation, summarization, git commit automation, and integration automation, but code generation hasn't yet convinced the wider community of its value.
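To make the "solid applications" side concrete, here's a minimal sketch of git commit automation. The git plumbing is standard; the `summarize_diff()` function is a hypothetical placeholder for whichever LLM client you actually use, not a real API.

```python
import subprocess

def staged_diff() -> str:
    # Standard git plumbing: grab whatever is currently staged.
    return subprocess.run(
        ["git", "diff", "--staged"],
        capture_output=True, text=True, check=True,
    ).stdout

def summarize_diff(diff: str) -> str:
    # Hypothetical placeholder: wire this to an LLM of your choice
    # (hosted API, local model, etc.) and return a one-line message.
    raise NotImplementedError("connect this to your LLM client")

if __name__ == "__main__":
    diff = staged_diff()
    if diff.strip():
        message = summarize_diff(diff)
        subprocess.run(["git", "commit", "-m", message], check=True)
```

Even here the human stays in the loop: the model drafts the message, but the developer still decides whether the diff and the summary actually match.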

A well-placed criticism of LLM-assisted coding is that many developers use generated snippets and chatbots without fully understanding the code before running it, leading to preventable breakage and security vulnerabilities in critical components. This is especially concerning when developers use the same AI system to generate both the code and its integration or end-to-end tests, which makes debugging more challenging.
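Here's a hypothetical sketch (not taken from any particular tool) of why that's dangerous: when the same assistant writes both the function and its test, the test tends to encode the same wrong assumption, so it passes and the bug ships.

```python
# Hypothetical AI-generated helper: meant to apply a percentage discount
# (e.g. 10 for 10% off), but it actually treats the discount as a fraction,
# so a caller passing 10 multiplies prices by -9 instead of 0.9.
def apply_discount(prices, discount):
    return [p * (1 - discount) for p in prices]

# Test generated by the same assistant: it shares the fraction assumption
# (0.10 instead of 10), so it passes without ever exposing the mismatch.
def test_apply_discount():
    assert apply_discount([100.0], 0.10) == [90.0]

# A caller following the intended interface gets nonsense:
# apply_discount([100.0], 10) -> [-900.0]
```

When the code and its tests share an author, human or model, they also share blind spots; an independent reviewer, or a spec written before generation, is what actually catches this class of bug.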

This makes sense. Entry-level developers, tech-savvy founders, and even senior engineers must evaluate whether AI tools assist them or hinder their ability to write "good code"—whatever that may mean to them and their business needs.

I used Perplexity to research how LLM-driven IDEs affect software developers. I've linked it here:

a link to an article written by Perplexity on the SDLC and LLM-driven IDEs

It found some interesting numbers:

However, it comes with some downsides:

The most impactful issues stem from:

The technical debt incurred in correcting these issues is a serious concern, and developers at the beginning of their journey may find it harder to judge where to focus without a clear understanding of good software development practices.

More senior engineers may find that generated code only automates the most basic tasks in their workflow, and prefer instead to have it document and explain the code they write, or to restrict it to producing boilerplate.

Here's where it hits the hardest, too, when it comes to fixing AI code:

So developers end up with harder problems to fix, the problems themselves are stranger, and the longer they lean on AI-generated code, the less they understand the codebase they're debugging.

a robot slipping down the side of a mountain, gripping the grass

What can developers do to use it safely?

What can companies do to bring it into the SDLC?

What now, though?

Well, this post is referencing AI-written research, so shouldn't that also be validated?

What do you think?