Ask HN: LLMs enhance productivity so why don't we have more/better software?

3 points by dboreham 6 hours ago

Lifelong AI skeptic now turned LLM proponent here. I'm almost in the Adrian Cockcroft/Joe Magerramov camp: for me, today's LLMs are by far the most productivity-increasing tool for software development since the compiler. Yes, I'm so old that I remember making software without a compiler. Reinforcing my non-koolaid-consuming cred: I've been using LLM tools to write programs I'd never have had time to write, find bugs that would have taken me much longer to track down myself, and understand the design of large, complex codebases I'd previously have left as mystery meat. The new tools are helping me work through the seemingly endless pile of stuff that always needed doing but never got done.

Although the media narrative and previous HN discussions focus on developer layoffs supposedly due to AI adoption, I'm wondering about the inverse perspective. If LLM tools significantly improve software developer productivity, why haven't we seen much better software? Why haven't we seen startups shipping new, useful applications? Is there something about the wider business context that prevents improved productivity from being applied to increase capacity and/or improve quality? After all, when Walmart discovered how to optimize retail, they didn't use that capability to make one super-efficient store. They built stores everywhere. Are we somehow stuck in a crappiness equilibrium where there's no overall benefit to improving software? Was that always the case, and we just never realized it because by chance we had just enough developers to get by?

verdverm 6 hours ago

1. Productivity (quantity) does not equal quality. How is productivity even measured? (the jury is still out on this one)

2. LLMs & agents do not automatically create better software. They often prefer to write code from scratch rather than use the library sitting right next to the code they reimplement. While they are good at narrowly scoped tasks, they are not good at large-perspective work. They also make lots of mistakes, just like us.

3. Why haven't we seen the things you expect? Because of (1) hype: as with stock market trades, people only share their wins, not their losses; and (2) the tools are not as capable as their proponents would have you believe.

PaulHoule 6 hours ago

Whenever some new software development tool comes around, people remember this classic Fred Brooks essay

https://en.wikipedia.org/wiki/No_Silver_Bullet

which points out that software development involves many different tasks, let's say

   20%    requirements gathering
   20%    design
   20%    coding
   20%    test
   20%    deployment
Let's say that some huge innovation cuts the time to code down to zero. You still have to do 80% of the work! If a "no code" system is going to radically improve the situation, it has to take a big chunk out of all of those things.
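Brooks's point is just Amdahl's law applied to the phase split above. A minimal sketch (the even 20% split is the illustrative figure from the comment, not measured data):

```python
# Amdahl's law: speeding up one phase leaves total time bounded
# by the phases the innovation doesn't touch.
phases = {
    "requirements": 0.20,
    "design": 0.20,
    "coding": 0.20,
    "test": 0.20,
    "deployment": 0.20,
}

def overall_speedup(shares, phase, factor):
    """Whole-project speedup when one phase is sped up by `factor`."""
    untouched = sum(s for p, s in shares.items() if p != phase)
    return 1.0 / (untouched + shares[phase] / factor)

# Coding made 10x faster: the project as a whole is only ~1.22x faster.
print(overall_speedup(phases, "coding", 10))
# Coding time driven to (nearly) zero: the ceiling is 1/0.8 = 1.25x.
print(overall_speedup(phases, "coding", 1e9))
```

So even an infinitely fast coding tool caps the overall gain at 1.25x under this split, which is the "you still have to do 80% of the work" observation in numeric form.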

We don't have a failing video game industry or disasters like iOS 26 because low-level coders are making little mistakes; we have them because of poor productivity and quality in deciding what software gets made and what characteristics that software has. If you were able to (a) fire everybody at Microsoft except Satya Nadella or (b) fire Satya Nadella, (b) would be the change that affects what gets made; in case (a), I'm sure Satya Nadella could come up with the bad ideas all on his own.

---

The bright spot is that there's a certain kind of person who could make AI-enhanced software for their own personal use or for small-group use. If you can take the business out of it entirely, AI software development could be revolutionary. If the goal is to make polished software that serves the needs of a large number of people, you run into all the old business problems (e.g. https://en.wikipedia.org/wiki/Enshittification).

incomingpain 6 hours ago

Because coding LLMs have only been around for about a year or so. Do you expect the entire landscape of software to change in just a year? Most of the AI activity is on fixing and improving what we already have.

  • dboreham 6 hours ago

    Fair point perhaps, although I'd note that I first heard colleagues boasting about having ChatGPT write all their tests sometime in 2022.