The paradox of shipping more while coding less

The tension between writing no code and contributing significantly more

I can’t remember the last time I typed a line of code in our company’s repo. Yet I feel like I’ve been contributing significantly more to it since I stopped writing the actual code.

There’s a tension in that, which suggests a thought experiment:

  1. If you attached a keylogger to my computer, would you observe more or fewer keystrokes since I adopted agents? I would wager I type significantly more now than when I wrote the code myself.

  2. Would each entry be more or less information-dense? I would argue that what I type now carries much more information per keystroke; e.g. a single prompt or code-planning session conveys far more of the core idea than a session where I write the code myself.

I think the paradox might be solved by the following concept:

I was never contributing just code, I was always contributing ideas.

My hypothesis is that each input I write today is significantly more information-dense than the code I used to write. I can also produce inputs significantly faster, since I don’t need to worry about syntax and grammar. Coding agents can then materialize approximations of my ideas far faster than I could ever type them out. My input also tends to be much more ambiguous, hence the continued importance of verifying that the agents have translated it into outputs that match my intent.

Similarly, I have never written a single line of our automatically generated machine code. I have no idea where the code we ship actually gets served in production (some data center somewhere in a specified region). Yet I know these systems well enough to know what they can do and how far I can trust them. Once configured, they work for me. Some need occasional attention; others are abstractions I never think about anymore.

Contributing ideas is hard, but iterating on them has become much easier. I don’t think idea contribution will change any time soon; we’ve just stepped up an abstraction layer. Agents today aren’t good at thinking outside the box, beyond their context window. They excel in closed-loop systems, where a result can be instantly compared against a verifiable reward. But new ideas don’t work that way. At least not yet.
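The closed-loop point can be made concrete with a toy sketch. Everything here is invented for illustration (the function names and the arithmetic “reward” are not any real agent API): the defining feature is that every candidate can be instantly scored against a verifiable check, which is exactly what open-ended idea generation lacks.

```python
def verifiable_reward(candidate: str) -> bool:
    # A toy "reward": the candidate expression must evaluate to the
    # expected result. Real systems use compilers, tests, or benchmarks.
    try:
        return eval(candidate) == 4
    except Exception:
        return False

def closed_loop(candidates):
    # Each proposal is scored the moment it's produced; the loop stops
    # as soon as one passes. No human judgment is needed anywhere.
    for attempt, candidate in enumerate(candidates, start=1):
        if verifiable_reward(candidate):
            return candidate, attempt
    return None, len(candidates)

best, tries = closed_loop(["2 + 1", "5 - 2", "2 * 2"])
# best is the first candidate that satisfies the check
```

A new idea has no `verifiable_reward` to call: whether it is any good only becomes apparent after someone commits to building on it, which is why the loop above cannot (yet) be closed around ideas themselves.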