The largest LLM use case is writing code. LLMs are good at writing code, but not great. Not even Claude Opus 4.1. LLMs make mistakes. Lots of them.
LLMs do not always follow instructions, even when the prompt spells them out explicitly.
LLMs regularly hallucinate, inventing functions, APIs, and libraries that do not exist.
LLMs lack common sense, often taking the circuitous route rather than the simpler, cleaner, more obvious path.
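To make the "circuitous route" complaint concrete, here is a hypothetical sketch of my own (not output from any particular model): the first function is the kind of roundabout code I routinely see generated, the second is the obvious path a developer would take.

```python
from collections import Counter

# The roundabout version: manual bookkeeping instead of using the standard library.
def word_counts_roundabout(text):
    words = text.lower().split()
    counts = {}
    for word in words:
        if word in counts:
            counts[word] = counts[word] + 1
        else:
            counts[word] = 1
    # Rebuild and sort the pairs by hand instead of asking for the top items.
    pairs = []
    for key in counts:
        pairs.append((counts[key], key))
    pairs.sort(reverse=True)
    return [(word, count) for count, word in pairs]

# The simpler, cleaner, more obvious path.
def word_counts_simple(text):
    return Counter(text.lower().split()).most_common()
```

Both return the same word frequencies; one takes twenty lines to do what the other does in one.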
I simply can’t trust an LLM to write a clean, efficient block of code, to say nothing of the risk of the LLM injecting harmful code into mine. LLMs pull material from the Internet, bad guys are posting malicious code online precisely so it gets ingested, and that poisoned material is increasingly finding its way into production code. So yes, there is an enormous security risk in having LLMs write code.
Given these issues, why would I ever hand the keys over to an AI agent to write code autonomously on my behalf? I would only be asking for trouble. People and companies will eventually figure this out, if they haven’t already.
No, software developers are not going away, despite what the LLM promoters may say, and yes, the LLM companies will have a day of reckoning. Those models with $100 billion build costs are not happening.