Some developers are more productive than others. And not just two or three times as productive, but much more productive. Some studies (for example Sackman, Erikson and Grant, 1968) claim that the factor is ten (and as much as 28 in how fast developers can write code). I don’t know whether the factor is 10 or 28 – and I’m sure it’s not about how fast you can write code – but I know that the factor is big enough to make it worthwhile to dive into what makes great developers and how you can measure performance.
There have been many attempts to simplify the task of measuring performance and finding great developers. Performance has, for example, been measured by the number of lines of code the developer produced, how many bugs he or she introduced into the code, or how many function points the developer delivered per month.
But the number of code lines says nothing about the quality or the features delivered. You can write hundreds of lines of code a day, but if the code has poor quality, introduces a lot of bugs, is unreadable to the poor developer who has to maintain it, and performs badly, it wasn’t very productive, was it? Then you’d rather have the ten lines of code that just did the job right!
The number of bugs introduced isn’t a very good measurement either. Of course fewer bugs are better than many bugs – but developers who write code make mistakes. A developer who does nothing makes no mistakes. And a developer can write fantastic code with no bugs – but if he or she didn’t understand the business and implemented the wrong solution, then it really doesn’t matter how few bugs the code had.
And finally, Function Points (my favourite aversion) do not measure productivity. They measure features and take no account of whether the code is readable, bug-free, or conforms to the architecture. For example, you would get a lower function-points-per-man-month score if you actually implemented a SOA or did a reasonable amount of layering in your code than if you just called the database from the GUI.
If one combines these measurements – and adds others like the amount of documentation, compliance with standards and guidelines and so forth – you might end up with a measurement that makes a little sense. But it still wouldn’t tell you anything about how well the business needs were met, how easy the software was to maintain, or to what extent it implemented the right architecture.
I fully understand the need to measure how well the individual developer is performing. How else can the organization hire and develop the right people? But since you get what you measure, introducing oversimplified measurements of performance does more harm than good to the productivity of the developers.
So if you need to understand the performance of the individual developer (and you do need that if you want to increase productivity!), then look at his or her code and see how well structured, readable, maintainable and architecture-compliant it is. Go sit with the developer and do some pair programming; listen to the developer’s communication with fellow developers, business people and testers. Discuss best practices in processes, tools and principles. In essence: understand how well the developer is performing instead of implementing simple measurements.
Software development is a craft. And you have to rely on the craftsmanship of great developers to get great performance. Introducing simple measurements of developer performance is a poor substitute for a real understanding of this craft and of what makes great craftsmen. So… go introduce that understanding in your organization instead.