Bugs slip into production despite the best efforts of designers, coders, and testers. While testers may not be responsible for introducing bugs into the system, they bear some responsibility for letting bugs reach the user (I know many of my tester friends will disagree with this point). But testing can be adjusted to reduce the number of bugs that pass through to production – without necessarily requiring more resources.
Some testers may argue: “Well, testers don’t make bugs.” But then again, hardly anyone does. So it is perhaps more accurate to say that “testers can’t avoid bugs” – testers have to find them.
Testers are often all too happy to point out flaws in other people’s work, and are even happy to suggest ways those people could change their work to avoid introducing similar bugs elsewhere. However, testers are not immune to failure.
Sometimes, testers do make mistakes. Testers make flawed decisions. Testers fail to notice that within their actions lie dormant problems. When dealing with the disarmingly simple question ‘How did the testers miss this?’ the glib misdirection ‘Testers don’t make bugs’ is no answer whatsoever.
To have a bug to log, the test team must trigger it and observe it. Testers miss some bugs because those bugs are never triggered in testing. Testers miss other bugs because, although the bug is triggered, nobody notices.
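The trigger-versus-observe distinction can be illustrated with a small hypothetical sketch (the function and test names below are invented for illustration, not taken from any real project):

```python
def apply_discount(price, percent):
    # Hypothetical buggy implementation: the sign is wrong,
    # so it ADDS the percentage instead of subtracting it.
    return round(price * (1 + percent / 100), 2)

def weak_test():
    # Triggers the bug but does not observe it: the assertion only
    # checks the result's type, so the wrong value slips through.
    result = apply_discount(100.0, 10)
    assert isinstance(result, float)  # passes even though result is 110.0

def strong_test():
    # Observes the bug: asserting on the actual value makes it visible.
    result = apply_discount(100.0, 10)
    assert result == 90.0  # fails, exposing the sign error
```

Both tests trigger the bug, but only the second observes it – which is exactly how a bug can pass through an entire test phase unnoticed.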
In most cases, the number of bugs caught during a testing phase reflects not the effectiveness of the test strategy but sheer coincidence. For any reasonable system, possible tests far outnumber actual bugs. Bugs are often found during tests that are not designed to look for them – but which trigger them by coincidence.
Conversely, consciously designed tests do not reliably reveal the bugs that surface in production. It is entirely possible (and not uncommon) to have a supposedly exhaustive set of tests that fail to find bugs that appear on the very first day of live use.
Big bugs are often found by coincidence, because the more ubiquitous the bug, or the greater its impact, the easier it is to see – even when using a dumbed-down and mostly-blinded tool. Bugs found by coincidence rather than design also tend to seem bigger because they are more of a surprise.
Test design often concentrates on ways to act on the system to trigger bugs. It is vital to also consider the opportunities for observing bugs – whether triggered by design, or by coincidence. So it is the emphasis on test design at the expense of observation that allows bugs to drop through to production.
This above article is inspired by James Lyndsay's excellent article "Things Testers Miss".