As 2019 was wrapping up I found myself wanting to get some kind of high-level picture of what I've spent the year doing. First, I'm just curious. Second, I want to know how much time I've spent building out different engine systems and if that time matches my initial estimates. Third, I want to make sure that I've spent my time doing "the right" things.
I think all good programmers have something of an obsessive nature and it's sometimes easy to get carried away with refactors and optimizations that in the big picture maybe don't matter that much. With the small team we have at The Machinery, working on building a complete engine that outshines the competition, we can't afford to mess around. I'm usually pretty good at staying focused, but I'm always striving to improve.
I also want to get better at making time estimates. This is a notoriously difficult thing that we programmers get asked to do all the time by our managers. Often we balk and try to squirm away: "That's impossible! I can't possibly know how much time it takes before I've done it!"
It's easy to scoff at this attitude, but I actually think it is pretty reasonable. Estimating how long it takes to code something is hard because programming is fundamentally different from most other human endeavors. If you are building bridges, doing woodworking or cleaning the house, you typically have a relatively short planning phase where you decide what to do and then a long execution phase where you hunker down and do it. In the planning phase, you can make a reasonable estimate of how long this execution phase is going to take.
However, programming, at least the way I see it, is pretty much all planning. The bulk of programming is deciding how to solve something: what the APIs should be, how different components should work together, etc. When you have decided exactly what to do, writing out the code is trivial. (For this reason, I also don't really like splitting coding into a planning and a coding phase. In my view, coding is planning.) Since planning is the bulk of the work, estimating the size of a coding task means trying to plan how long the planning is going to take. At that level of meta-analysis, things become pretty sketchy. Many programmers don't want to give management numbers they're not sure about, out of (an often valid) fear that management will take those numbers and run with them.
But since I'm an owner and a founder at Our Machinery, I'm the one asking myself for these estimates. And I'm not willing to let myself get away that easily!
As a planning instrument, time estimates are crucial. We want to know how much time something will take to implement so that we can make an appropriate cost-benefit analysis. We also want to be able to present some kind of road map of upcoming features, both for our own sake and the sake of our customers. So we need to tackle the unpleasant task of making estimates.
The first step to getting better estimates is getting time tracking right. If we're not even sure how long something actually took, how can we ever know how long something is going to take? With time tracking as calibration, we can hopefully learn to improve our estimates, even if it's always going to be a difficult task. For example, I've noticed that I often overestimate the time it takes to complete small tasks and underestimate the time it takes to complete big tasks. The reason, I think, is that it always seems reasonable to say that something will take 1-2 days, even though many small tasks can be completed in hours. Whereas for big tasks, it is just hard to think of everything that goes into them. I end up missing stuff and underestimating.
One other thing I think is important is that you don't punish or reward employees for their estimates. For example, you shouldn't force people to work overtime because they underestimated or praise them because they overestimated and are "done early". Neither should you shame them for making estimates that are too long: "What, it'll take you three months to do this?" All of this will just push your employees to make worse estimates.
Enough said about this tricky topic. In the rest of this post, I'll focus on time tracking.
Looking back at 2019, I found it pretty hard to discern exactly where and how I had spent my time. This, even though I actually had three different and pretty detailed records of the work I had been doing:
1. I have a big todo-list in Workflowy, where I keep track of all the things on my plate. Workflowy sends me a summary email every day of tasks I have completed. So this is a pretty detailed and complete record of everything I've been doing, except it doesn't cover exactly how much time I've spent on each task.
2. For time tracking I use toggl.com. Whenever I switch to a new task I start a Toggl timer. This way I can see both how much time I've spent on each individual task and how many hours I've worked total. Each week, Toggl gives me a report which I summarize in a Slack post to let the rest of the team know what I'm working on. So this gives timing information, but less detail, because I typically just make an entry called Web Site when I'm working on the web site without specifying exactly what I did.
3. The git commit history contains a lot of information too. I subscribe to the "commit early, commit often" philosophy, which means I usually commit changes multiple times a day. So the git log can tell me what systems I've been working on (from the files I touched) and give an idea of how much time I've spent on each (the time since the last commit). And, of course, the commit comments themselves give more detailed information. But it doesn't tell me anything about non-coding tasks.
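The "time since last commit" heuristic can be sketched in a few lines of Python. This is just an illustration, not a tool I actually use: it sums the gaps between consecutive commit timestamps, skipping gaps longer than a cutoff on the assumption that a long gap means a break rather than continuous work. The 2-hour cutoff is a made-up value; you'd tune it to your own commit cadence.

```python
# Rough estimate of hands-on-keyboard time from commit timestamps:
# sum the gaps between consecutive commits, ignoring gaps longer than
# a cutoff (which probably mean a break, not continuous work).
import subprocess

def commit_timestamps(repo_path="."):
    """Unix timestamps of every commit in the repo, oldest first."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%ct"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sorted(int(t) for t in out.split())

def estimated_hours(timestamps, max_gap_hours=2.0):
    """Sum of inter-commit gaps no longer than the cutoff, in hours."""
    cutoff = max_gap_hours * 3600
    gaps = (b - a for a, b in zip(timestamps, timestamps[1:]))
    return sum(g for g in gaps if g <= cutoff) / 3600

# Usage (path is hypothetical):
# print(estimated_hours(commit_timestamps("path/to/engine")))
```

It's crude - it can't see the planning time before the first commit of the day, for instance - which is part of why the git log alone doesn't answer the high-level questions.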
So the problem is not a lack of data. Rather, the problem is too much data! It's hard to see the forest for the trees! Going through these logs and trying to summarize them into high-level information like "How long did the physics system take to write?" would mean slogging through thousands of entries.
I suppose I could sit down and sift through all of this, write some script to automatically analyze the logs and the email archives and merge the information from all three sources to get a more complete picture, but that would likely take days. And considering that the whole point of this was to find out how to spend my time more efficiently, that does not seem like a prudent thing to do.
So for 2020, I've decided that I need to do time tracking differently. If the goal is to have high-level, big picture information available at-a-glance, then that is what I should be collecting all the time - not trying to piece it together at the end of the year.
I have three main things that I want to accomplish with time tracking.
First, I want to know the total amount of hours I work each week. Since I'm working from home it's easy to work both too little and too much. Too little, because there's always other stuff that needs to be done around the house. Too much, because you can always pick up the laptop and do some more work.
For me, I find the optimum to be about 30-40 hours of work per week. (Measured when I'm actually sitting down at my computer - hands on keyboard - so it doesn't include things like coffee breaks or chit-chat with coworkers that would be a part of a normal office workday.) If I work <30 hours I get restless and frustrated and feel like I'm not getting anything done. If I work >40 hours I tend to get "brain frazzle". I find it hard to relax, make poorer judgment calls when coding and can easily get distracted by social media, blog posts, etc.
So I want to measure my total work hours to make sure that as far as possible I stay in this 30-40 hour sweet zone.
Second, I want to know how much of that time I'm actually spending on the specific engine feature I'm primarily trying to implement. Sometimes I will counter-productively start to stress out about how I "didn't get anything done this week", only to realize that the main reason is that I've been occupied by other important tasks. Maybe I got derailed by a tricky bug that took two days to fix. Maybe I needed to spend a lot of time planning out the roadmap, answering questions on the forum or writing a blog post.
In addition to not feeling too bad, I also want to track this to make sure I keep a healthy balance between feature work and all the other tasks that also need to be done. If this engine is to be a success, I can't spend all my time fixing bugs, taking meetings, updating the web site or answering forum questions. I'd like to be able to spend at least 50 % of my time on feature work, preferably more. If the time tracking shows me consistently dipping below that, I know that I need to reconsider how I'm spending my time.
And finally, as I said in the beginning, I want to know how long a feature took to implement so that I can get better at estimating costs and planning. This information needs to be at a high level and not bogged down with minutiae. For example, I want to know how many weeks it took to implement the physics system, not how many hours it took to add collision filtering to the character controller. I also want this information to be available at-a-glance all the time and not require batch processing of logs or something like that.
Having the information always available means I can react to it in real-time: "Huh, seems like prettifying this part of the UI is taking a lot longer than I expected. Since this is not a crucial task, it does not make sense to spend a week or more on it. I should switch to doing something more important for our success."
Side note: I use trunk-based development and avoid feature branches so that if I have to abort a task early I will still have delivered value (in this case - prettified some parts of the UI).
Note that what I'm interested in is the real, wall-clock time it took to implement the feature. Not the time it would have taken in an ideal world where I could focus 100 % on feature work, was never interrupted, didn't have any vacation days or sick kids, etc. It's the time in the real world that matters, and there will always be things like this.
Given these goals, here's the system I've come up with:
I'll use Toggl to keep track of the time spent on each individual task I'm working on (as before).
I'll assign each task to one of three categories (Toggl calls them Projects), so that Toggl can tell me, not only how many hours I've worked each week, but also how much time I've spent on each category:
- Feature - Work on the primary feature I'm currently adding to the engine (physics, animation, etc).
- Maintenance - Fixing bugs, regressions, addressing user feedback, etc. Basically, any coding work that is not a direct part of developing the feature.
- Admin - Anything else I need to do for the company. Blogging, answering forum posts, working on the website, emailing, distributing new alpha versions, raising money, planning, etc.
Each week, I send a note to my coworkers showing what percentage of my time has been spent on each category, together with a short description of what I've done. Note that I only send percentage numbers to my coworkers, not the actual hours worked. I record the hours worked each week for my own sake and I want to avoid a culture where workers get shamed for low numbers or praised for high numbers since that just tends to lead to the numbers getting fudged. "I read all these blog posts vaguely related to work, so I'll just record that as 5 worked hours." I think things work better when everybody is assumed to be a responsible adult until proven otherwise. Also, I'd rather encourage people to be efficient - to get more done in less time - than to work long hours. There's enough of that in this industry.
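Computing the percentage split from a week's entries is straightforward. Here's a minimal sketch, assuming each entry is a (category, hours) pair - the data format is my own invention, and the sample numbers are made up:

```python
# Turn a week's time entries into the percentage split per category.
def category_split(entries):
    """entries: list of (category, hours). Returns {category: percent}."""
    total = sum(hours for _, hours in entries)
    if total == 0:
        return {}
    per_category = {}
    for category, hours in entries:
        per_category[category] = per_category.get(category, 0.0) + hours
    return {c: round(100 * h / total, 1) for c, h in per_category.items()}

week = [("Feature", 18.0), ("Maintenance", 6.0), ("Admin", 12.0)]
print(category_split(week))
# {'Feature': 50.0, 'Maintenance': 16.7, 'Admin': 33.3}
```

Only those percentages go in the weekly note; the total hours stay private to me.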
I record these numbers in a spreadsheet, together with one line describing the main feature that I worked on that week. That way, it's simple to get an overview of how many weeks a feature took by looking at the spreadsheet. I don't really care about recording tasks with a more detailed granularity than that. If I need more detailed information, I can always look at the logs.
Here's what the spreadsheet might look like:
| Week # | Feature | Maintenance | Admin | Main Feature | Notes |
| --- | --- | --- | --- | --- | --- |
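With one main feature recorded per week, summarizing "how many weeks did X take" is just a tally of the Main Feature column. A tiny sketch (the feature names here are made up):

```python
# Count how many weeks each feature took, given the "Main Feature"
# cell of each weekly row in the spreadsheet.
from collections import Counter

def weeks_per_feature(main_feature_column):
    """main_feature_column: one feature name per week."""
    return Counter(main_feature_column)

weeks = ["Physics", "Physics", "Physics", "Animation"]
print(weeks_per_feature(weeks))
# Counter({'Physics': 3, 'Animation': 1})
```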
It's not always 100 % clear how to draw the line between Feature and Maintenance work. For example, while I'm working on a feature I often discover that another system needs to be refactored or has bugs. Typically I record that as Feature work. My reasoning behind this is that the refactoring and API changes that need to be made are an integral part of the work on that feature (without the feature, they wouldn't be needed - YAGNI). Also, discovering and fixing minor bugs is just a natural part of coding.
The only exception is if I discover a particularly hairy bug in another system. Say a bug that takes more than 1-2 hours to fix. In that case, I will record it as Maintenance work instead.
I aim to keep this time tracking system up for the rest of 2020 and see how it works.