What is Exploratory Testing?
Posted by Steve Green on 10 March 2012.
This post was initially written in response to the question "What exactly is exploratory testing?" in the Exploratory Testing group on LinkedIn.
Opinions vary, and exploratory testing is often misrepresented. We follow a number of principles, some of them our own and some adopted from the exploratory testing community, such as:
- Test what the system 'can' do, not what it 'should' do. It can be a good strategy to ignore all the design documentation when you first start testing, because your thinking can become blinkered when you know what the system is supposed to do. You can always compare your findings with the documentation later.
- Test what users 'can' do to the system, not what they 'should' do. There is no limit to the number of tests you can conceive of when you think in these terms. Use cases in the design documentation typically follow the happy path through the system once, taking the most linear, optimal route possible (i.e. the route that absolutely no one will ever take). In reality, users take all sorts of different paths through the system many times over a period of days or even years.
- Good testing is 90% skill, 10% process. Only do anything to the extent that it is useful to do so.
- Test the system, not the software.
- Don't necessarily trust what you're told about the system. Create inventories to find out what it is you are being asked to test (a minimal crawler sketch follows this list). Inventories might include a list of pages or screens, a list of functions or features, a list of states the system can be in, a list of technologies used to build it etc. Is this what you were expecting? Is there more or less than you were told? Do you have specifications or other oracles for everything? Do you have time to test it all to a reasonable depth? Do you know how to test it, and do you have the resources to do so?
- Find oracles. Whether there is any design documentation or not, there may be many other oracles to help you decide if the system is working correctly, such as FAQ and Help pages, comments in the source code, industry standards, competitive products, previous versions of your product, packaging, marketing material etc. In the absence of all these, there is always 'reasonable user expectation'.
- Implementation is important. Run tests to find out how the system does what it does. This knowledge may prompt test ideas that you would not think of when taking a black-box approach. For instance, a feature implemented using JavaScript requires some different tests than the same feature implemented using Flash (see the second sketch after this list).
- Look for self-consistency. If you can create, edit or enter a piece of data in more than one place, the data validation rules should be consistent even if you don't know what they are supposed to be. If you can create a username with 16 characters and the login form only accepts 8, there's a problem no matter what the specification says (see the final sketch below).
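
To make the inventory idea concrete, here is a minimal Python sketch that builds a page inventory by crawling same-site links. The base URL is a hypothetical placeholder, and a real inventory would also cover features, states and technologies, not just pages.

```python
# A minimal sketch of building a page inventory, per the "create
# inventories" principle above. BASE is a hypothetical placeholder.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

BASE = "http://www.example.com/"  # hypothetical system under test

class LinkCollector(HTMLParser):
    """Collects the href of every anchor tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.add(value)

def crawl(base, limit=50):
    """Breadth-first crawl of same-site pages, capped at `limit` pages."""
    seen, queue = set(), [base]
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url).read().decode("utf-8", errors="replace")
        except OSError:
            continue  # unreachable pages are worth noting separately
        parser = LinkCollector()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if urlparse(absolute).netloc == urlparse(base).netloc:
                queue.append(absolute.split("#")[0])  # drop fragments
    return sorted(seen)

if __name__ == "__main__":
    for page in crawl(BASE):
        print(page)
```

Comparing the crawled list with the pages you were told exist is a quick way to answer the "is there more or less than you were told?" question.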
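
Along the same lines, here is a minimal sketch of probing how a page is implemented. The markers below are illustrative assumptions; real fingerprinting would also inspect response headers, cookies and linked resources.

```python
# A minimal sketch of checking which technologies a page appears to
# use, per the "implementation is important" principle. The URL and
# marker strings are illustrative, not a definitive fingerprint.
from urllib.request import urlopen

MARKERS = {
    "JavaScript": "<script",
    "Flash": "application/x-shockwave-flash",
    "frames": "<iframe",
    "forms": "<form",
}

def fingerprint(url):
    """Return the technologies whose markers appear in the page source."""
    html = urlopen(url).read().decode("utf-8", errors="replace").lower()
    return [tech for tech, marker in MARKERS.items() if marker in html]

if __name__ == "__main__":
    print(fingerprint("http://www.example.com/"))  # hypothetical URL
```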
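
Finally, a minimal sketch of the username-length consistency check described in the last principle. The URLs and field name are hypothetical, and this only compares client-side maxlength attributes; server-side validation rules would need their own tests.

```python
# A minimal sketch of a self-consistency check: does the username
# field allow the same maximum length on the registration form and
# the login form? URLs and the field name are hypothetical.
from html.parser import HTMLParser
from urllib.request import urlopen

class MaxLengthFinder(HTMLParser):
    """Finds the maxlength attribute of a named input field."""
    def __init__(self, field_name):
        super().__init__()
        self.field_name = field_name
        self.maxlength = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("name") == self.field_name:
            self.maxlength = a.get("maxlength")

def field_maxlength(url, field_name):
    finder = MaxLengthFinder(field_name)
    finder.feed(urlopen(url).read().decode("utf-8", errors="replace"))
    return finder.maxlength

if __name__ == "__main__":
    register = field_maxlength("http://example.com/register", "username")
    login = field_maxlength("http://example.com/login", "username")
    # A mismatch is a bug no matter what the specification says.
    assert register == login, f"inconsistent maxlength: {register} vs {login}"
```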