To have manual testers, or not to have manual testers, that is the question. Typically I have one per team: fewer than many teams, but more than some organisations get away with.
Testing is one of those areas where I’m still very much experimenting. I’m a huge fan of automated testing, but I see this as a developer thing. I also hire manual testers – usually one per software team, but sometimes fewer. There is no science behind this ratio; it is just what I’ve ended up finding to be about right.
Other people have a higher ratio of testers. A few years ago, for example, Microsoft had teams with a 1-2-4 shape: 1 program manager (not a Programme Manager as the rest of the world would view it – more like a Scrum Product Owner), two testers, and four developers. So quite a high proportion of the team were testers.
But I go with one tester per team, and apparently I’ve landed on about the right balance. Independently, Rally Software, in their Impact of Agile Quantified – A De-Mystery Thriller, found that:
- More testers lead to better Quality
- But they also generally lead to worse Productivity and Responsiveness
The nice chart in the slide pack suggests that one manual tester provides the optimum overall balance of quality, productivity and responsiveness. For me, that balance is preferable to maximising any one attribute in isolation.
But I do have a nagging doubt. I hear rumours of teams that rely entirely on automated testing. USwitch, for example, got rid of their testers entirely – they became developers or left. They took this step to enable continuous delivery. Perhaps I have too many manual testers.
In fact, Rally’s Impact of Agile Quantified found that teams that self-identify as having no testers have:
- the best Productivity
- almost as good Quality (although a much wider variation in Quality)
So should I bite the bullet and dispense with my manual testers? I admit this just feels a bit too scary for me. At least today.