Knowing how much time is spent changing code in component X lets us reason about entropy in the system. Showing your teammates that your swanky new AI co-worker makes you faster could help improve overall team productivity. Maybe you want to see whether you're more effective in Vim or VSCode; how else would you know, other than by measuring over time?
Time-tracking as a fuzzy element of over-zealous Agile management turns many of us off. But when time-tracking is a tool developers can use for their own benefit, I’m all for it.
Measure Beats Estimate
Good software development is not about making perfect software. It is about making software that’s good enough for its purpose and its maintainability, without increasing entropy. Neglecting those last two leaves free development cadence on the table.
It's taken a lot of trial and error to find a measure that feels like a genuine reflection of time spent 'writing code', while wrapping it in a process that doesn't add any overhead.
My measurement spans from creating a new branch to merging that branch. A git hook automatically writes a log of the time taken to an annotated tag. That time is, objectively, how long I spent on implementation.
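The article doesn't show the hook itself, so here is a minimal sketch of what it could look like: a function a post-merge hook might call, which takes the branch's oldest reflog entry as its creation time and records the elapsed seconds on an annotated tag. The `time/<branch>` tag naming and the reflog-based start time are my assumptions, not the author's actual implementation.

```shell
# Sketch of a branch-time recorder, intended to run right after merging.
# Assumptions: the branch ref still exists, and tags are named "time/<branch>".
record_branch_time() {
  branch="$1"
  # The oldest reflog entry for the branch approximates its creation time.
  # With --date=unix, the reflog selector (%gd) renders as "branch@{<epoch>}".
  start=$(git log -g --date=unix --format='%gd' "refs/heads/$branch" \
          | tail -n 1 | sed 's/.*{\([0-9]*\)}.*/\1/')
  end=$(date +%s)
  elapsed=$(( end - start ))
  # The annotated tag on the merge commit carries the measurement.
  git tag -a "time/$branch" -m "branch time: ${elapsed}s" HEAD
}
```

Wired into a `post-merge` hook, this runs with no manual step, which is what keeps the process overhead-free.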
Now, bear in mind that this is still a vague measure of development time. But it's definitely less vague than guessing before starting the work. It’s meant to highlight the effort of making a change, excluding engineering/thinking time. All things being relatively equal, over a longer period, anomalies pop out.
Anomalies and trends in the data are things we can talk about in practical terms. Such as, “since we started using AI, average branch time is 10% less.” Or, “component X is getting progressively slower to maintain.”
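A claim like "average branch time is 10% less" falls straight out of the tags. As a sketch, assuming annotated tags named `time/*` whose messages look like `branch time: 5400s` (my hypothetical naming, not confirmed by the article), the average is one pipeline away:

```shell
# Sketch: average the recorded branch times, in seconds.
# Assumes annotated tags named "time/*" with messages like "branch time: 5400s".
average_branch_time() {
  git tag -l 'time/*' --format='%(contents:subject)' \
    | sed -n 's/.*branch time: \([0-9]*\)s.*/\1/p' \
    | awk '{ sum += $1; n++ } END { if (n) print int(sum / n) }'
}
```

Run it over two date ranges (or two components, via a naming convention in the tag) and the before/after comparison is just two numbers.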