I had always done ad-hoc manual testing, until I stumbled upon the articles by James Bach (the guru) on exploratory testing some two years ago. Since then I have greatly enhanced and organized my testing process. Exploratory testing techniques have given my manual testing process form and meaning, and there is no longer any place for the word ad-hoc.
“Exploratory testing is simultaneous learning, test design, and test execution.”
- James Bach
Based on this definition, exploratory testing can be categorized as follows:
- Learning the product
- Test design
- Test execution
Following are the test heuristics that I use while doing exploratory testing, built from the articles by James Bach.
§ The process of “Learning the product” is the most important part of exploratory testing. To learn the product – to understand it – work through the following “Learning the product” lessons:
- Visual lesson: Get familiar with the product, the general UI, the general process flow etc.
- Feature lesson: List all the features and their functions
- Variability lesson: List all the things you can change in the product and their effects, expected and unexpected
- Complexity lesson: List the 5 most complex things in the product (the number varies depending on the size of the project). Find ways to get into the system that the developers may not want you to use
- Claims lesson: List all information that tells you what the product does. You are looking not only to identify what the claims are, but also to find out which claims are vague, possibly incomplete, or even inconsistent
1. Identify reference materials that include claims about the product (implicit or explicit)
2. If you’re testing from an explicit specification, expect it and the product to be brought into alignment
- Structure lesson: Answer the following questions
1. What files does it have?
2. Do I know anything about how it was built?
3. Is it one program or many?
4. What physical material comes with it?
5. Can I test it module by module?
- Data lesson: Identify the major data elements of the application; what are they, and where are they coming from?
1. Input: any data that is processed by the product
2. Output: any data that results from processing by the product
3. Preset: any data that is supplied as part of the product, or otherwise built into it, such as prefabricated databases, default values, etc.
4. Persistent: any data that is stored internally and expected to persist over multiple operations. This includes modes or states of the product, such as options settings, view modes, contents of documents, etc.
5. Temporal: any relationship between data and time, such as the number of keystrokes per second, date stamps on files, or synchronization of distributed systems
6. Big and little: variations in the size and aggregation of data
7. Invalid: any data or state that should trigger an error handling function
8. Identify how the data can be accessed, created, modified or deleted
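As a small illustration of item 7 above (invalid data should trigger an error handling function), here is a sketch in Python; the `parse_quantity` input handler is invented for the example:

```python
def parse_quantity(text):
    """Hypothetical input handler: accepts positive whole numbers only."""
    value = int(text)  # raises ValueError for non-numeric input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

assert parse_quantity("3") == 3          # valid data is processed normally
for bad in ["abc", "-1", "0", "2.5"]:    # invalid data must be rejected, not absorbed
    try:
        parse_quantity(bad)
        raise AssertionError(f"{bad!r} was silently accepted")
    except ValueError:
        pass  # error handling triggered, as expected
```

The point is that every category of invalid data you listed should have a visible, deliberate failure path; silent acceptance is itself a bug worth reporting.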
- Configuration Lesson
1. Related to data lesson and variability lesson
2. Look for persistence and how it affects product feature
3. Attempt to find all the ways you can change settings such that the application retains those settings
- Compatibility lesson: Answer the following questions
1. What does the application interact with?
2. How well does it work with external components and configurations?
3. What does it depend on to function properly?
§ Test design
- Testability: Find all features you can use as testability feature and identify tools you have available
- User: Imagine 5 users of the product and the information they want from it, or the major features they would be interested in
1. Identify categories and roles of users.
2. Determine what each category of user will do (use cases), how they will do it, and what they value.
- Scenario: Try to imagine 5 realistic scenarios for how the users identified earlier would use the product
1. Begin by thinking about everything going on around the product.
2. Design tests that involve meaningful and complex interactions with the product.
3. A good scenario test is a compelling story of how someone who matters might do something that matters with the product.
§ Test execution: Based on ‘learning the product’ and ‘test design’, execute the tests
- Function Testing
1. Test each function, one at a time.
2. See that each function does what it’s supposed to do and not what it isn’t supposed to do
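A minimal sketch of function testing in Python, with a hypothetical `discount` function standing in for one feature under test; the check covers both what the function should do and what it should refuse:

```python
# Hypothetical feature under test: a discount calculator.
def discount(price, percent):
    """Return the price reduced by percent; reject out-of-range percentages."""
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Positive check: the function does what it is supposed to do.
assert discount(200.0, 25) == 150.0

# Negative check: the function refuses what it is not supposed to do.
try:
    discount(200.0, 150)
    raise AssertionError("expected a ValueError for an invalid percent")
except ValueError:
    pass
```

One function at a time, one pair of checks at a time, keeps failures easy to localize.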
- Claims Testing – Based on claims lesson
1. Analyze individual claims, and clarify vague claims
2. Verify that each claim about the product is true
- Domain Testing
1. Decide which particular data to test with. Consider things like boundary values, typical values, convenient values, invalid values, or best representatives.
2. Consider combinations of data worth testing together.
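The boundary-value idea above can be sketched in Python; the age-field range is a made-up example:

```python
def boundary_values(lo, hi):
    """Classic boundary candidates for an inclusive numeric range [lo, hi]:
    just outside each edge, each edge, just inside, and a typical mid value."""
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

# Suppose a hypothetical age field accepts 18..65 inclusive.
candidates = boundary_values(18, 65)
print(candidates)  # [17, 18, 19, 41, 64, 65, 66]
```

Each candidate is a best representative of its sub-domain: the two invalid values should be rejected, the rest accepted.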
- Stress Testing
1. Look for sub-systems and functions that are vulnerable to being overloaded or “broken” in the presence of challenging data or constrained resources.
2. Identify data and resources related to those sub-systems and functions.
3. Select or generate challenging data, or resource constraint conditions to test with: e.g., large or complex data structures, high loads, long test runs, many test cases, low memory conditions.
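One way to generate the kind of challenging data step 3 describes, sketched in Python; using the standard-library JSON parser as the stand-in subsystem under stress:

```python
import json

def nested_json(depth):
    """Generate a challenging input: a JSON array nested `depth` levels deep."""
    s = "1"
    for _ in range(depth):
        s = "[" + s + "]"
    return s

# A shallow document parses fine...
assert json.loads(nested_json(10)) is not None

# ...but a very deep one can exhaust the parser's recursion budget.
try:
    json.loads(nested_json(100000))
    print("parsed without error")
except RecursionError:
    print("recursion limit hit -- a stress finding worth reporting")
```

The same pattern applies to any generator of large or complex structures: grow one dimension of the data until something interesting breaks.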
- Flow Testing
1. Define test procedures or high level cases that incorporate multiple activities connected end-to-end.
2. Don’t reset the system between tests.
3. Vary timing and sequencing, and try parallel threads.
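A sketch of flow testing against a toy in-memory shopping cart, chaining activities end-to-end without resetting state between steps; the `Cart` API is invented for illustration:

```python
class Cart:
    """Toy shopping cart standing in for the application under test."""
    def __init__(self):
        self.items = {}
    def add(self, name, qty):
        self.items[name] = self.items.get(name, 0) + qty
    def remove(self, name):
        self.items.pop(name, None)
    def total_items(self):
        return sum(self.items.values())

# One flow, many connected activities -- state carries over between steps.
cart = Cart()
cart.add("book", 2)   # step 1: add a product
cart.add("pen", 1)    # step 2: add a second product
cart.remove("book")   # step 3: remove the first
cart.add("pen", 3)    # step 4: add more of an existing product
assert cart.total_items() == 4  # the residue of the whole sequence, not a fresh cart
```

Bugs that only appear when earlier operations leave residue behind are exactly what the no-reset rule is designed to catch.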
- User Testing
1. Get real user data, or bring real users in to test.
2. Otherwise, systematically simulate a user (be careful—it’s easy to think you’re like a user even when you’re not).
3. Powerful user testing involves a variety of users and user roles, not just one
- Scenario Testing: Test to a compelling story
1. Based on the users and their roles, and the scenarios listed during test design, execute your tests
- Risk Testing: Imagine a problem, and then look for it.
1. What kinds of problems could the product have?
2. Which kinds matter most? Focus on those.
3. How would you detect them if they were there?
4. Make a list of interesting problems and design tests specifically to reveal them.
5. It may help to consult experts, design documentation, past bug reports, or apply risk heuristics.
- Random Testing: Run a million different tests
1. Look for opportunities to automatically generate a lot of tests.
2. Develop an automated, high speed evaluation mechanism.
3. Write a program to generate, execute, and evaluate the tests.
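The generate–execute–evaluate loop in step 3 can be sketched in Python with a simple oracle; the buggy leap-year function is a deliberately planted illustration:

```python
import calendar
import random

def is_leap_buggy(year):
    """Hypothetical function under test, seeded with a classic bug:
    it ignores the century rule (e.g. 1900 is not a leap year)."""
    return year % 4 == 0

random.seed(7)  # reproducible run
failures = []
for _ in range(1_000_000):                 # "run a million different tests"
    year = random.randint(1, 9999)         # generate
    expected = calendar.isleap(year)       # oracle: an independent reference
    if is_leap_buggy(year) != expected:    # evaluate automatically, at high speed
        failures.append(year)

print(f"{len(failures)} failing years found, e.g. {sorted(set(failures))[:3]}")
```

The whole approach stands or falls on the evaluation mechanism: without a trustworthy oracle, a million generated tests tell you nothing.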
The process of learning the product, designing the tests, and executing the tests is ever-improving. Each one is modified when the others change. While you execute the tests you learn more about the product; you change your test design accordingly, and hence the execution.
Result: In freestyle exploratory testing, the only official result that comes from a session of ET is a set of bug reports.
The exploratory tester must watch for anything unusual or mysterious.
Exploratory testers must also be careful to distinguish observation from inference.
Excellent exploratory testers are able to review and explain their logic, looking for errors in their own thinking.
Excellent exploratory testers build a deep inventory of tools, information sources, test data, and friends to draw upon. While testing, they remain alert for opportunities to apply those resources to the testing at hand.