I am back with the final part of the interview with Elisabeth Hendrickson, who writes at Test Obsessed and is a world-renowned expert in Software Testing and Quality Assurance. In case you missed the first part, I highly suggest reading it before proceeding to this one.
So here we go:
Debasis: What do you think are the most essential skills that make a great tester?
Elisabeth: Great software testers are investigators, able to apply their technical, analytical, and observational skills to uncover information that has value to their stakeholders, and communicate effectively about their discoveries. That means that great testers:
- Use a variety of analysis techniques (such as modeling states and events in a system) to design tests [see the sketch after this list]
- Have the technical skills necessary to dig below the surface of whatever they're testing
- See things--clues to system behavior--that other people tend to miss
- Have an understanding of how the software or system serves the business so they understand what information is significant and what is not
- Have the communication skills necessary to explain their findings both verbally and in writing
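To make the state-and-event modeling idea a bit more concrete, here is a minimal sketch in Python. The door state machine, its transition table, and the checks derived from it are my own invented illustration, not something from Elisabeth's work:

```python
# A minimal sketch of deriving tests from a state/event model.
# The "door" system and its transition table are invented for
# illustration; a real model would come from analyzing the product.

# Model: (current state, event) -> expected next state
TRANSITIONS = {
    ("closed", "open"):   "opened",
    ("opened", "close"):  "closed",
    ("closed", "lock"):   "locked",
    ("locked", "unlock"): "closed",
}

class Door:
    """Toy system under test. In a real project the implementation
    would be independent of the model, so the comparison means something."""
    def __init__(self):
        self.state = "closed"

    def handle(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = TRANSITIONS[key]

def test_every_modeled_transition():
    # One check per arc in the model: set up the starting state,
    # fire the event, verify the resulting state.
    for (start, event), expected in TRANSITIONS.items():
        door = Door()
        door.state = start
        door.handle(event)
        assert door.state == expected, (start, event)

def test_illegal_events_are_rejected():
    # Negative tests fall out of the model too: pairs missing from
    # the table should be refused by the system.
    door = Door()
    door.state = "locked"
    try:
        door.handle("open")  # not a modeled transition while locked
    except ValueError:
        pass
    else:
        raise AssertionError("'open' should be illegal while locked")

if __name__ == "__main__":
    test_every_modeled_transition()
    test_illegal_events_are_rejected()
    print("all modeled transitions behave as expected")
```

The nice property of designing tests this way is that both the positive and the negative cases fall straight out of the model.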
Debasis: Tell me about the most fascinating bug that you have encountered in your entire testing career.
Elisabeth: Probably the most fascinating behavior I got to see turned out not to be a bug at all.
We were running a series of automated tests against software that ran on custom hardware. It was an interesting project because everything was custom: the hardware, the firmware, and the end-user client were all being developed at the same time.
I was part of a small group that was creating early end-to-end test automation to exercise the whole system.
One day the end-to-end test automation started failing for no apparent reason. This was a huge surprise because we were running the test automation just to confirm what tests we'd automated so far. We expected all the tests to pass. They had all passed just the day before and nothing in the system under test had changed. But the tests that had passed just 24 hours before were now failing left and right. Clearly something had changed.
We could find nothing that was different from the previous passing test run. The client software was the same version. The firmware was the same version. And the custom hardware, sitting on the rack, had not been touched in a couple of weeks. So what had changed? Those of us on the test automation team investigated, pondered, poked, and prodded with no results. Finally we consulted an integration engineer who was responsible for maintaining the test lab.
The integration engineer investigated, poked, and prodded.
Finally he looked at the rack of custom hardware. Noticing an unshielded coil of high voltage wires, he moved the coil 12 inches away from the custom boards we were testing against and said, "try it again." We did, and the tests all passed. It turned out to be magnetic interference from the unshielded coil.
That project taught me that there can be a variety of factors that affect software behavior that have nothing whatsoever to do with the software itself. And thus analyzing the variables - the things we can vary that affect the behavior of the software - turns out to be really important.
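Elisabeth's closing point suggests a practical habit: enumerate the variables around the software and run the same scenario across them, not just across inputs. Here is a minimal illustrative sketch; the factor names are invented and pytest is an assumed tool choice, not something mentioned in the interview:

```python
# A minimal sketch of varying the factors around the software, not just
# its inputs. The factor names are invented, and pytest is an assumed
# tool choice -- neither comes from the interview.
import itertools
from types import SimpleNamespace

import pytest

# Things we can vary that might affect behavior, including factors
# outside the software itself (the story above adds "nearby unshielded
# wiring" to that list!).
FIRMWARE_VERSIONS = ["1.0", "1.1"]
CLIENT_BUILDS = ["nightly", "release"]
HARDWARE_RIGS = ["rack-a", "rack-b"]

def run_scenario(firmware, client, rig):
    """Stub standing in for 'provision this combination and drive the
    whole system through a representative end-to-end scenario'."""
    return SimpleNamespace(passed=True)

@pytest.mark.parametrize(
    "firmware,client,rig",
    list(itertools.product(FIRMWARE_VERSIONS, CLIENT_BUILDS, HARDWARE_RIGS)),
)
def test_end_to_end_smoke(firmware, client, rig):
    # Run the same scenario against every combination of factors, so a
    # failure points at the variable that changed.
    result = run_scenario(firmware=firmware, client=client, rig=rig)
    assert result.passed, f"failed on {firmware}/{client}/{rig}"
```

When a combination fails, the parameter values in the test report do the first round of debugging for you.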
Debasis: How do you see software testing as a career, let's say, after a decade? What would be the biggest challenges for the field, and what would be the biggest advancements?
Elisabeth: To understand what I think represents the biggest ongoing challenge in testing, let's look at the frequency with which a typical organization ships software. Back in the 1990s it was typical for software companies to do just one major release a year. Ten years ago, it was common to release every month or two. Today the hot new thing is Continuous Deployment, where businesses are able to roll out tiny incremental enhancements multiple times per day.
For example, a blog post from Chuck Rossi, a Facebook engineer, claims that at Facebook, "changes you make in the code will be in front of your mom and 175 million other people in less than 60 minutes" (http://www.facebook.com/note.php?note_id=59150988919). And Timothy Fitz at IMVU explains that they push new code into production six times every hour (http://timothyfitz.wordpress.com/2009/02/10/continuous-deployment-at-imvu-doing-the-impossible-fifty-times-a-day/).
In fact, I think that the need for speed is helping to drive widespread Agile adoption. Agile development practices like Continuous Integration enable implementation teams to keep pace with business demands.
And now testing has to keep pace. There just isn't time for six-week manual regression cycles anymore. Organizations need faster feedback.
We have some answers. I'm delighted to see widespread adoption of automated unit testing, automated functional testing, and Exploratory Testing. We have better tools now, including FIT, FitNesse/Slim, Robot Framework, Twist, Concordion, Cucumber, and a host of other Agile-friendly test automation frameworks. And in many organizations, the silo walls around independent QA organizations are melting as testers integrate their efforts with the rest of the implementation team.
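Those frameworks all aim at the same thing: checks fast and repeatable enough to run on every build. As a rough illustration of that shift (not tied to any particular framework above), here is a tiny regression check in plain Python; the discount function is an invented stand-in for real product code:

```python
# A minimal sketch of the kind of fast, automated regression check that
# can run on every commit (say, from a CI job) instead of in a six-week
# manual cycle. The discount function is an invented stand-in for real
# product code; only the standard library is used.
import unittest

def apply_discount(price, percent):
    """Hypothetical product code: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    # Each test pins down behavior that a human once re-verified by hand.

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_out_of_range_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # In a Continuous Integration setup this runs on every push,
    # turning feedback time from weeks into seconds.
    unittest.main()
```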
But there's still more work to be done. And ten years isn't enough to do it all.
Debasis: What single thing would you want to tell every newbie who is struggling in the early stages of building a software testing career?
Elisabeth: I would want to tell them, "Your job is to provide information; work with your stakeholders to make sure you understand what information they will find valuable."
Too many new testers think their job is to do something other than provide information, like assure quality (impossible), or find bugs (too narrow; bugs are just one kind of information), or execute pre-defined test cases (again, too narrow).
So I recommend asking your stakeholders, "What information can I provide you that will help you move the project forward?"
Debasis: Is there anything else that you would like to say?
Elisabeth: Thanks very much for including me in your interview series!
For more such interviews with testing experts, check out the older articles, and feel free to suggest a testing guru you'd like me to interview next time. Happy testing...