To have manual testers, or not to have manual testers, that is the question. Typically I have one per team: fewer than many teams have, but more than some organisations get away with.
Testing is one of those areas where I’m still very much experimenting. I’m a huge fan of automated testing, but I see that as a developer thing. I also hire manual testers: usually one per software team, but sometimes fewer. There is no science behind this ratio – it is just what I’ve ended up finding to be about right.
Other people have a higher ratio of testers. A few years ago, for example, Microsoft had teams with a 1-2-4 shape: one program manager (not a Programme Manager as the rest of the world would view it – more like a Scrum Product Owner), two testers, and four developers. So quite a high proportion of the team were testers.
But I go with one tester per team. Apparently, I’ve landed on about the right balance. Independently, Rally Software, in their Impact of Agile Quantified – A De-Mystery Thriller, found that:
- More testers lead to better Quality
- But they also generally lead to worse Productivity and Responsiveness
The chart in the slide pack suggests that one manual tester provides the optimum overall balance of quality, productivity and responsiveness. For me, that balance is preferable to maximising any one attribute in isolation.
But I do have a nagging doubt. I hear rumours of teams that rely entirely on automated testing. USwitch, for example, got rid of their testers entirely – they became developers or left. They took this step to enable continuous delivery. Perhaps I have too many manual testers.
In fact, Rally’s Impact of Agile Quantified found that teams that self-identify as having no testers have:
- the best Productivity
- almost as good Quality (although with a much wider variation)
So should I bite the bullet and dispense with my manual testers? I admit this just feels a bit too scary for me. At least today.
We’ve been pushing Continuous Delivery and BDD for a while now, with some success. Manual testers CAN add quality, but subject to the usual law of diminishing returns. One thing I’ve seen is that a person in “Tester mode” can see gaps in scenarios that Devs or business SMEs miss. This is good, and empowering for a Tester. Another thing I’ve seen is that Testers and Automated tests can both miss serious bugs. So neither has the perfect answer.
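To make the “gaps in scenarios” point concrete, here is a hypothetical sketch (the code and the shipping rule are invented, not from the post): a developer covers the obvious paths of a rule in pytest, and someone in Tester mode adds the boundary and invalid cases that tend to get missed.

```python
import pytest


def shipping_cost(order_total: float) -> float:
    """Hypothetical rule: orders of 50.00 or more ship free, otherwise 4.99."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return 0.0 if order_total >= 50.00 else 4.99


@pytest.mark.parametrize(
    "order_total, expected",
    [
        (60.00, 0.0),    # happy path a developer typically covers
        (10.00, 4.99),   # happy path
        (50.00, 0.0),    # Tester-mode addition: the exact boundary
        (49.99, 4.99),   # Tester-mode addition: just below the boundary
        (0.00, 4.99),    # Tester-mode addition: an empty order
    ],
)
def test_shipping_cost(order_total, expected):
    assert shipping_cost(order_total) == expected


def test_negative_total_is_rejected():
    # Tester-mode addition: invalid input the original scenarios never mentioned
    with pytest.raises(ValueError):
        shipping_cost(-1.00)
```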
I agree that having somebody in “Tester Mode” is useful. Particularly in a Three Amigos Meeting.
At the end of the day, most apps are going to be used by humans, thus a manual (human) tester is always necessary.
But the people writing the automated tests are also human. They are just testing using a tool rather than manually.
Agree. Our experience is that both automated tests and human testers can miss things. Automated tests driven by feature files are the gift that keeps on giving. Continuous regression… Traceability… Always extensible… One advantage people put forward in favour of human testers is “exploratory testing”. I think this is looking at things backwards. Testers should be free to use exploratory techniques to contribute to the automated tests.
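As an illustration of feature-file-driven automation, here is a minimal sketch assuming pytest-bdd (the post doesn’t name a tool, and the feature, step wording and discount rule are invented): the Gherkin scenario acts as the traceable specification, and the step definitions turn it into a continuously runnable regression test.

```python
# A minimal pytest-bdd sketch. Assume a file "discount.feature" next to this
# test module containing:
#
#   Feature: Discounts
#     Scenario: Ten percent discount code
#       Given a basket worth 100.00
#       When a 10 percent discount code is applied
#       Then the basket total is 90.00

from pytest_bdd import scenarios, given, when, then, parsers

scenarios("discount.feature")  # every scenario in the file becomes a pytest test


@given(parsers.parse("a basket worth {amount:g}"), target_fixture="basket")
def basket(amount):
    return {"total": amount}


@when(parsers.parse("a {percent:d} percent discount code is applied"))
def apply_discount(basket, percent):
    basket["total"] = basket["total"] * (100 - percent) / 100


@then(parsers.parse("the basket total is {expected:g}"))
def check_total(basket, expected):
    assert basket["total"] == expected
```

Every new scenario added to the feature file, including ones found through exploratory testing, immediately becomes part of the regression suite.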