Video: Chewing the Fat Review - Testing with Luke Cook and Piers Sinclair (7 min)
Watch the extended cut (32 min).
If you ask your manager, developers, clients or other stakeholders whether they think testing is important, you're very likely to get a resounding "yes, of course!". The question of why they believe it's important is usually more difficult for them to answer, though.
Everyone thinks they know what "testing" is. In reality, though, there is no shared understanding across the IT industry of what testing really means.
Distinguishing between "testing" and "checking" is a great way to build this shared understanding when we talk about this critical part of the software development process.
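To make the distinction concrete, here's a minimal sketch of a "check": a machine-decidable assertion with a known input and a predefined expected result. The `calculateGst` function and the Jest setup are illustrative assumptions, not from any particular codebase.

```ts
// A "check": a machine can run it and decide pass/fail on its own.
// calculateGst is a hypothetical function used purely for illustration.
import { describe, it, expect } from "@jest/globals";

function calculateGst(amountExGst: number): number {
  return Math.round(amountExGst * 0.1 * 100) / 100; // 10% GST, rounded to cents
}

describe("calculateGst", () => {
  it("charges $10 GST on a $100 sale", () => {
    expect(calculateGst(100)).toBe(10);
  });
});
```

Testing is everything around that check: a human questioning whether rounding to cents is the right policy, what should happen for negative amounts, and which checks are worth writing at all.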
Without a good understanding of testing and its limitations, it's easy for clients and customers to believe that we "test everything" - but there's a problem with this belief:
Complete (or 100% or exhaustive) testing is impossible.
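A quick back-of-the-envelope calculation shows why. Consider a pure function with just two 32-bit integer inputs; the throughput figure below is an optimistic assumption.

```ts
// Exhaustively checking every input pair of a two-argument 32-bit function.
const combinations = 2 ** 64;            // ~1.8 x 10^19 possible input pairs
const checksPerSecond = 1_000_000_000;   // assume a generous billion checks/sec
const seconds = combinations / checksPerSecond;
const years = seconds / (60 * 60 * 24 * 365);
console.log(Math.round(years));          // ~585 years of non-stop checking
```

And that ignores state, timing, environment, and concurrency, which multiply the space further.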
There is a common misconception that all testing can be automated.
While there can be great value in using automation in testing, human capabilities are still required for key testing skills such as evaluation, experimentation, and exploration.
"Critical distance" refers to the difference between one perspective and another, the difference between two ways of thinking about the same thing.
You know how easy it is for someone else to spot things - both good and bad - about your work that you haven't noticed. You're "too close" to the work to be truly critical of it.
Developers naturally have a builder mindset focused on success, while testers are likely to have a more skeptical, critical-thinking mindset.
The critical distance between the mindset of a developer and that of a tester is important for excellent testing.
We know that complete testing is impossible, so how do we decide which finite set of tests to perform out of the infinite cloud of possible tests for a story, feature, or release?
This problem can seem overwhelming, but focusing on risk is a good approach, so let's look at risk-based testing.
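As a sketch of the idea, score each candidate test area by likelihood × impact and work down from the highest score. The areas and numbers below are made up for illustration.

```ts
// Risk-based prioritisation: risk score = likelihood x impact.
interface RiskItem {
  area: string;
  likelihood: number; // 1 (rare) to 5 (almost certain)
  impact: number;     // 1 (trivial) to 5 (severe)
}

const backlog: RiskItem[] = [
  { area: "Payment processing", likelihood: 3, impact: 5 },
  { area: "User profile page",  likelihood: 2, impact: 2 },
  { area: "New search filters", likelihood: 4, impact: 3 },
];

const prioritised = backlog
  .map(item => ({ ...item, score: item.likelihood * item.impact }))
  .sort((a, b) => b.score - a.score);

prioritised.forEach(i => console.log(`${i.score}  ${i.area}`));
// 15  Payment processing
// 12  New search filters
//  4  User profile page
```

The arithmetic is trivial on purpose; the value is in making the team's risk judgments explicit and comparable before spending limited testing time.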
In a Scrum team, every member of the team has some responsibility for the quality of the deliverables of the team.
If you have a dedicated tester embedded in the Scrum team, they are not solely responsible for performing all of the types of testing required to help build a quality deliverable.
We know that complete testing is impossible, so we need ways to help us decide when to stop testing... aka when we've done "enough" testing.
> "Genius sometimes consists of knowing when to stop." — Charles de Gaulle
Exploratory testing is an approach to testing that fits very well into agile teams and maximises the time testers spend interacting with the software in search of problems that threaten its value.
Exploratory testing is often confused with random ad hoc approaches, but it has structure and is a credible and efficient way to approach testing.
Let's dig deeper, look into why this approach is so important, and dispel some of the myths around this testing approach.
A big suite of various levels of automated tests can be a great way of quickly identifying problems introduced into the codebase.
As your application changes and the number of automated tests increases over time, though, it becomes more likely that some of them will fail.
It's important to know how to handle these failures appropriately.
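One common convention, assumed here rather than prescribed, is that a failing automated test is either fixed immediately or explicitly quarantined with a tracked reason - never silently tolerated.

```ts
// The function under test is stubbed so this sketch is self-contained.
import { describe, it, expect } from "@jest/globals";

const applyDiscount = (total: number, code: string): number =>
  code === "SAVE10" ? total * 0.9 : total;

describe("checkout", () => {
  it("applies a discount code", () => {
    expect(applyDiscount(100, "SAVE10")).toBe(90);
  });

  // Quarantined, not ignored: this test fails intermittently on CI.
  // A (hypothetical) issue tracks it so it gets fixed or deleted.
  it.skip("sends a confirmation email", () => {
    // flaky end-to-end behaviour under investigation
  });
});
```

A skipped test with a linked issue stays visible; a red build that everyone learns to ignore trains the team to distrust the whole suite.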
- Do you understand why testing is important?
- Do you understand what "testing" really means?
- Do you know that "complete testing" is impossible?
- Do you understand why testing cannot be completely automated?
- Do you recognize the importance of critical distance?
- Do you take a risk-based approach to test planning?
- Do you know the whole Scrum team is responsible for quality?
- Do you know when you've done "enough" testing?
- Do you know what "exploratory testing" is?
- Do you understand the dangers of tolerating automated test failures?
- Do you know the different types of testing?
- Do you mix user research methods to capture the full picture?
- Do you test high-risk features with real users before launch?
- Do you treat your automated test code as a first-class citizen?
- Do you remember to use automated UI testing sparingly?
- Do you know whether a test is a good candidate for automation?
- Do you understand the "testing pyramid" models?
- Do you regularly review your automated tests?
- Do you know how to manage and report on exploratory testing?
- Do you know how to decide what to test?
- Do you understand how you know when you've found a problem?
- Do you use port forwarding to test local builds?
- Do you use the EF Core In-Memory provider to simplify your tests?
- Do you use Ephemeral environments for clean and isolated testing?
- Do you handle Multi-OS dev teams in source control?
- Do you know which environments you need to provision when starting a new project?
- Do you use Logging Fakes?
- Do you canary deploy your new features using a spreadsheet?
- Do you know you should only Roll Forward?