In Defense of Human-Powered QA
Anthroware is an agile, boutique software shop that offers a special blend of strategy, design, and engineering. We build many custom, new-to-market products that demand quick responses to shifts in the market and a lean allocation of resources. We keep a low ratio of testers to developers, and our team must be cross-functional, tightly collaborative, and effective at managing multiple concurrent projects. We work hard to tailor our services to the specific needs of each project, and to keep overhead low and value high. Under these conditions, our approach to QA has evolved into a creative process with strong undercurrents of improvisation; as in playing jazz or writing poetry, our most inspired work comes from mastering standard procedures well enough to skillfully deviate from the formulaic.
Exploratory testing is an art.
Let the record show: our engineering team loves automated tests. Computers will perform high-volume, high-specificity tests more accurately, more consistently, and more efficiently than humans, and won’t balk at the tedium of repeating the process for every build. Automated test engineering offers engaging technical puzzles and a steady stream of dazzling new tools. It’s a necessary component of a complete testing strategy, and in a parallel universe where we created tests only for fun, it’s probably all we’d do; but in our universe, it’s not the whole picture. No test plan is complete without exploratory testing, and highly dynamic projects will reward adopting this as the primary strategy. Performing exploratory testing and writing test code are related skills, but they’re not equivalent, and a balanced engineer will employ them situationally, as any craftswoman does with her tools. For those less comfortable with the art of exploratory testing, here are some thoughts on how to get higher return on your time and effort.
Keep it lightweight.
As thermodynamic systems tend toward entropy, so do unchecked test plans tend toward complexity. A conscientious engineer will naturally identify an ever-expanding list of potential susceptibilities in an application, and potential experiments to test them. But it’s crucial to remember that it’s the performance of tests that finds bugs, not the generation of test cases. Since we can’t test everything, we remember that some tests are more valuable than others, and that some bugs have no meaningful impact on the user experience. We prioritize components based on their value to users and their risk of failure, as the sketch below illustrates. We want to maximize product delight without sinking resources into test infrastructure that will never pay for itself. We must set high standards, and then meet them (but not exceed them) as efficiently as possible.
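To make that prioritization concrete, here is a minimal Python sketch of the kind of value-times-risk scoring a team might use to decide where exploratory time goes first. The component names, the 1–5 scales, and the simple product score are illustrative assumptions, not a description of our actual process.

    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str
        user_value: int    # 1-5: how much users depend on this working
        failure_risk: int  # 1-5: how likely it is to break (complexity, churn, novelty)

    def priority(component: Component) -> int:
        # Spend exploratory effort first where value and risk are both high.
        return component.user_value * component.failure_risk

    backlog = [
        Component("checkout flow", user_value=5, failure_risk=4),
        Component("settings page", user_value=2, failure_risk=2),
        Component("new reporting export", user_value=4, failure_risk=5),
    ]

    for component in sorted(backlog, key=priority, reverse=True):
        print(f"{component.name}: priority {priority(component)}")

The exact formula matters far less than the habit: make the value and risk judgments explicit, then let them decide where the limited testing hours go.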
Hold expectations loosely.
In his article “The Ongoing Revolution in Software Testing,” testing guru Cem Kaner poses the following scenario:
“…suppose we test a program that adds numbers. Give it 2+3 (the intended inputs) and get back 5 (the monitored output). This looks like a passing test, but suppose that the test took 6 hours, or that there was a memory leak, or that it erased a file on the hard disk. If all we compare against is the expected result (5), we won’t notice these other bugs.”
It’s expensive to document every expected and actual result of every test, and it can be misleading. A thoughtful and attentive engineer will develop an intuition for expected behavior as he comes to understand a product, and will recognize when something seems “off”; test documentation should supplement, rather than replace, this intuition. Every test we perform on an application is an opportunity to better learn its behavior. To maximize the value of our testing, we must maximize the amount we learn from each test we perform. If we fixate on expected results generated a priori, we limit ourselves to gleaning only that information for which we foresee utility. This is antithetical to the Agile movement, which is built on the assumption that our foresight is imperfect and we must learn as we build.
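As a contrived but concrete illustration, here is a pytest-style Python sketch of the difference. The first test encodes only the oracle value, as in Kaner’s scenario; the second also watches elapsed time and peak memory. The add function, the thresholds, and the specific checks are illustrative assumptions, not a recommendation to assert on timing everywhere.

    import time
    import tracemalloc

    def add(a, b):
        return a + b  # stand-in for the system under test

    def test_add_expected_value_only():
        # The "passing" test from Kaner's scenario: it compares against 5 and nothing else.
        assert add(2, 3) == 5

    def test_add_with_wider_observation():
        # Widen what we observe: did the call take absurdly long, or allocate wildly?
        tracemalloc.start()
        start = time.perf_counter()
        result = add(2, 3)
        elapsed = time.perf_counter() - start
        _, peak_bytes = tracemalloc.get_traced_memory()
        tracemalloc.stop()

        assert result == 5
        assert elapsed < 1.0            # a six-hour addition should fail loudly
        assert peak_bytes < 10_000_000  # so should a runaway allocation

Even the wider test can only report on what it was told to watch; it still won’t notice the erased file. That remaining gap is exactly where the attentive human tester’s intuition earns its keep.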
Be curious.
The art of exploratory testing lies in blending expectations and inquiry, strategy and opportunism. An artful engineer can digest a product’s behavior specification into an explicit test case, but can also respond to unanticipated behavior with a mindset of curiosity. She recognizes the discrepancy between her expectation and her observation as an indicator of incomplete understanding, and embraces the opportunity to better learn the application’s actual behavior. In Kaner’s words, “being willing to change what you do, to follow up what you're seeing as you test, is the mark of a thoughtful tester.” [1] The engineer’s capacity to test thoughtfully is precisely what makes exploratory testing the appropriate tool when automated testing is not.
Model the problem.
It may never fall to the tester to fix the problems he uncovers in his testing, but he should still engage with the question of what the problem might be. An abstract model will help him think through what to test, and how, as he attempts to isolate the problem. It’s neither expected nor necessary that his model accurately represent what’s really going on; the purpose is to guide his thinking and inform his experiments. It’s not the tester’s responsibility to diagnose the problem for the developer—but an inability to model the problem is an indicator of incomplete understanding, and resulting reports will likely be more ambiguous and less useful to the engineer charged with the fix.
Designing intelligent test cases requires a firm comprehension of behavior specification and of experimental design. But intelligent test cases are only one component of valuable testing, and the manner in which the tests are performed is equally important. More automation means less overhead for a given run of tests, but it also means less room to perceive, adapt, and respond. Testing with an exploratory approach means a more variable test environment, so it demands a higher degree of attention and critical thinking. An effective engineer will use intellect, intuition, and imagination to discern patterns, hypothesize models, and sanity-check her conclusions. She’s mastering exploratory testing when each test teaches her something new about the software, and unexpected results guide her toward a more complete understanding.

At Anthroware, we study the art of exploratory testing. We craft our testing strategies to the needs of each project, and we make it our business to learn the best practices the industry has to offer, to follow them when they take us where we want to go, and to write our own playbook when they don’t.