Rules to Better User Acceptance Tests (UAT) for Bug Management - 15 Rules
Since 1990, SSW has supported the developer community by publishing all our best practices and rules for everyone to see.
If you still need help, visit SSW Consulting Services and book in a consultant.
Never assume automatic Gold Plating.
Most teams are getting the hang of User Stories that have subtasks. Unfortunately, the same can't be said about Acceptance Criteria. Acceptance Criteria matter because they tell the team when a task is done.
Also, Product Owners should not get heartburn because ‘obvious’ functionality was not included. All requirements should be specified in the Acceptance Criteria.
For example, Product Owners should not assume things like:
- They will get a message that says ‘no records found’ or
- The grid will support features such as pagination or sorting
They must be specified in the Acceptance Criteria if required.
There are 2 parts to getting this right: The Acceptance Criteria, then the Acceptance Tests.
Figure: You need a common language to communicate in
Acceptance Criteria (from the Product Owner) define the exact requirements that must be met for the story to be completed. They answer the question, "How will I know when I'm done with the story?"
Figure: A User Story with Acceptance Criteria (MSF Agile Template)
When I enter ‘Adam’ in the search box and click 'Search' I will see all entries starting with 'Adam' in the grid
Figure: Bad Example of Acceptance Criteria - Incomplete
Positive Test - When I enter ‘Adam’ in the Search box and click ‘Search’ I will see all entries starting with Adam in the Grid
Negative Test - When I enter ‘zzz’ in the Search box and click ‘Search’ I will see *no* entries in the Grid
Figure: OK Example of Acceptance Criteria
Positive Test - When I enter ‘Adam’ in the Search box and click ‘Search’ I will see all entries starting with Adam in the Grid
Negative Test - When I enter ‘zzz’ in the Search box and click ‘Search’ I will see *no* entries in the Grid
Gold Plating - If no results are returned, show a message box ‘No results found’
Gold Plating – Validation: If no search text is entered, the ‘Search’ button should be disabled
Gold Plating – Right-clicking on a column header should provide ‘Sort’ functionality
Gold Plating – If a large set of results is returned, display pagination with page numbers and ‘Prev’, ‘Next’ links
Figure: Good Example of Acceptance Criteria – Including Gold Plating
Note: For tiny User Stories, you can omit Acceptance Criteria. Sometimes you just need a screenshot, or, even better, a video.
When you use Microsoft Test Manager to create test plans and test suites for your team project, there are several approaches that you can take.
One approach is to create a single test plan and use it for all milestones, adding test suites and tests as you progress. This is bad because you have no historical data for your test pass rates from previous milestones.
By creating test plans for each Sprint, you can see when a Sprint is complete, based on your testing goals. You can also prepare the test plan for the next Sprint while you complete your testing for the current Sprint.
By using this approach, you can track your testing progress for each of your test plans and see that the quality of your application is improving.
Tip: If you add both manual and automated tests to your test suites, you can view the overall quality based on both of these types of tests for your test suites and test plans.
Follow these steps to create a Test Case in TFS.VisualStudio.com:
Figure: Double click the Product Backlog Item that you want to create a Test Case for to open it
Figure: Open the "TEST CASES" tab and click on the "New linked work item" button
Figure: Ensure that the link type is 'Tested By', that the work item type is 'Test Case' and enter a title for the Test Case. Click OK
Figure: Select the correct iteration, and update the Status and Details sections. Click on the 'Click here to add a step' and proceed to add the steps required to test the user story
Figure: After entering each action, along with its expected result, click Save and Close
This is how you assign a tester:
How to assign a tester
This is how you configure which environments to use for a particular test:
Figure: From the Plan menu choose the Test Suite. Click on the test plan and then the Configurations button
Figure: To view the available configurations, click in the configurations column for the test and then select the arrow at the end of the field. Select configurations and click the Apply button
Once the coding is done by the developers, the functionality must then be stepped through in the required browsers. You can do this manually or automate it using a great tool like Microsoft Test Manager.
The first step in getting automated tests is to set up Acceptance Tests:
Figure: Run each 'test case' with a prescribed configuration
Figure: As you progress through each step, 'Pass' or 'Fail' the expected results. Take screen captures or video as appropriate
Figure: Bad Example - After checking all the ‘Expected’ results in your MTM test, do not forget to 'Pass' or 'Fail' the Test Case
Figure: Good example - After all 'Test Steps' have been checked off, choose the overall status for the test. Otherwise it will continue to show as 'Active' on the reports
Tip: You can pass a test from the test list. Select the Test menu, then the Test Suite. Choose the Test Case to pass and then click the green button ‘Pass Test’.
The next step is to review the Statistics of the Sprint.
Developers think they are done when they finish coding and check in.
Wrong. It is much better to use Microsoft Test Manager (MTM) and step through the Acceptance Tests.
Once you are doing that, this is how you check the status of the current Sprint:
Figure: Good example - This Sprint currently has 2 'Failed' tests (red), and 1 'Active' test (blue). (This 'Results' view is new in MTM 2012)
Key:
- The red is work remaining for the developers, and
- The blue is work remaining for the testers (unfinished testing)
Acceptance Tests (built by the developers) verify that the Acceptance Criteria are met.
The goal is for teams to move beyond manual testing and implement automated testing, e.g. Coded UI tests, Telerik tests, etc. (a Playwright sketch follows the Acceptance Tests example below).
Test cases answer the question, "How do I test and what are the test steps?"
Figure: Test Cases in a User Story (MSF for Agile Template)
Positive Test - When I enter ‘Adam’ in the Search box and click ‘Search’ I will see all entries starting with Adam in the Grid
Negative Test - When I enter ‘zzz’ in the Search box and click ‘Search’ I will see no entries in the Grid
Gold Plating - If no results are returned, show a message box ‘No results found’
Gold Plating – Validation: If no search text is entered, the ‘Search’ button should be disabled
Gold Plating – Validation: If the button is disabled and search text is entered, the ‘Search’ button becomes enabled
Gold Plating – Right-clicking on a column header and using the ‘Sort’ functionality sorts the data by that column
Gold Plating – If a large set of results is returned, clicking the pagination page numbers shows the correct data
Gold Plating – If a large set of results is returned and we are on page > 1, clicking the ‘Prev’ button goes to the previous page
Gold Plating – If a large set of results is returned and we are on page 1, the ‘Prev’ button does not error
Gold Plating – If a large set of results is returned and we are on page < MaxPage, clicking the ‘Next’ button goes to the next page
Gold Plating – If a large set of results is returned and we are on page = MaxPage, clicking the ‘Next’ button does not error
Figure: Good example - Acceptance Tests
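Jumping ahead a little: once these Acceptance Tests are automated with a UI testing tool (Playwright is covered later in this document), the first two might look something like the sketch below. The URL, the 'Search' label and the '#results-grid' selector are hypothetical placeholders - substitute your app's real ones.

```typescript
import { test, expect } from '@playwright/test';

test("Positive Test - entries starting with 'Adam'", async ({ page }) => {
  await page.goto('https://your-app.example.com/search'); // hypothetical URL
  await page.getByLabel('Search').fill('Adam');           // hypothetical label
  await page.getByRole('button', { name: 'Search' }).click();

  // Every result in the grid should start with 'Adam'
  const names = await page.locator('#results-grid tbody tr td:first-child').allTextContents();
  expect(names.length).toBeGreaterThan(0);
  for (const name of names) {
    expect(name.startsWith('Adam')).toBe(true);
  }
});

test("Negative Test - 'zzz' returns no entries", async ({ page }) => {
  await page.goto('https://your-app.example.com/search');
  await page.getByLabel('Search').fill('zzz');
  await page.getByRole('button', { name: 'Search' }).click();

  // The grid should contain no rows at all
  await expect(page.locator('#results-grid tbody tr')).toHaveCount(0);
});
```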
Figure: The tester sees the Test Cases in Test Manager
Figure: The tester follows each instruction (aka the Test Steps), and gives it a tick or cross
In an agile team, pre-planning all your tests is not always the most efficient use of time for testers. PBIs can change direction, scope, and priority, and pre-planned tests are likely to change.
Exploratory testing provides the best way to create repeatable tests from the acceptance criteria - as you need them.
There are two ways to run an exploratory test in Microsoft Test Manager.
Figure: Bad Example - go to the Test tab, choose Do Exploratory Testing, choose a PBI, then click Explore. Too many steps
Figure: Good Example - Right-click on a requirement in your test suite and choose "Explore requirement"
Note: You should always run an exploratory test against a PBI. This will automatically relate any bugs and test cases to that PBI (not to mention the exploratory test run).
When you start an Exploratory test, you don't see any test steps, but you can click on the title of the requirement to see its Acceptance Criteria.
Figure: Clicking on the title will show you the Acceptance Criteria
If you find a bug while testing, click the Create bug button to add a bug related to the PBI.
Figure: Creating a bug from exploratory test links to the PBI
By default, the reproduction steps will be populated with the last 10 actions you took (you can change this and other defaults with configuration). You can cut this down to just the relevant actions by clicking Change steps.
Figure: You can change the repro steps captured in the bug very easily
Now that you have a bug, you should create a matching test case so you can verify when the bug is fixed. This also gives you a handy regression test to help ensure the problem doesn't reappear later.
Figure: Click Save and create test to create a matching test case
Again, the steps are prepopulated from your bug steps.
Figure: The test steps are prepopulated from the action recording
Use Microsoft's Exploratory Testing - Test & Feedback extension - to perform exploratory tests on web apps directly from the browser.
Capture screenshots, annotate them and submit bugs as you explore your web app - all directly from the Chrome (or Firefox) browser. Test on any platform (Windows, Mac or Linux), on different devices. No need for predefined test cases or test steps. Track your bugs in the cloud with Azure DevOps.
Video: Ravi walks Adam through the exploratory testing extension - You can also watch on SSW TV
Video: Ravi Shanker and Adam Cogan talk about the test improvements in Azure DevOps and the Chrome Test & Feedback extension - You can also watch on SSW TV
- Go to Visual Studio Marketplace and install "Test & Feedback".
Figure: Microsoft Test & Feedback (was Exploratory Testing) extension
- Click "Add to Chrome" to add the extension to the browser on your computer.
Figure: Chrome Web Store page for Test & Feedback extension
- Go to Chrome.
- Start a session by clicking the Chrome extension icon, then clicking the 'Start session' button.
Figure: Chrome extension icon
Figure: Test & Feedback start session button
- Upload the screenshot to a PBI.
Figure: PBI in Azure DevOps showing the screenshot
More info: Azure DevOps - Explore work items with the Test & Feedback extension
You organize your Test Cases by adding them to Test Suites within a Test Plan.
We have one Test Plan per sprint.
How to add a test case to a test plan
Running tests with MTM allows you to keep track of your testing progress.
Problem Steps Recorder is a useful tool that allows anyone to capture details of a bug. Best of all, it's already installed on your Windows PC!
Bug reports can come from anywhere and anyone - the more people reporting back, the better.
Once a problem has been discovered, it helps to collect as much information as possible. Although there are many useful tools you can give to your test team, Problem Steps Recorder has one big advantage: it has shipped with Windows since Windows 7, so anyone can record the details of their issue with no preinstalled software.
Figure: To start Problem Steps Recorder, type PSR into the Start | Run box
Figure: Then click 'Start Record'
Once recording, the user can step through whatever actions reproduce the issue.
Figure: User behavior is captured along with full screenshots. This can be saved and attached to a Bug PBI
In the old days, reading and understanding test cases was something only developers could do. Behavior-Driven Development (BDD) starts to solve this problem by enabling organizations to define their use cases in plain language and integrate these aspects with testing frameworks.
Using Gherkin syntax and a BDD framework like SpecFlow, you can write test scenarios in plain language using a few keywords (Given, When, Then). Plain language makes the test scenarios easy to understand, even for non-technical team members.
First, think about the different scenarios that you want to test, then write them out in plain language using Gherkin syntax.
Feature: Greeting Message
  Participant sees a greeting message

Scenario: Participant sees a greeting message
  Given I visit the website
  When I navigate to the greeting screen
  Then I see the greeting message
Figure: Good example - Gherkin syntax scenarios (Given, When, Then)
Once you have your scenarios lined up, you should begin to write the test steps for each scenario.
[Given(@"I visit the website")] public async Task VisitTheWebsite() { await HomePage.NavigateAsync(); } [When(@"I navigate to the greeting screen")] public async Task NavigateToWelcome() { await HomePage.NavigateToGreeting(); } [Then(@"I see the greeting message")] public async Task ThenISeeTheGreetingMessage() { var message = await HomePage.GetGreetingMessage(); Assert.IsTrue(message == GreetingMessage); }
Figure: Good example - Test steps to run, matching the Gherkin Syntax
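If your team writes its UI tests in TypeScript rather than .NET, the same Given/When/Then structure can be approximated with Playwright's test.step (Playwright is introduced in the next section). This is a rough sketch, not SpecFlow's API - the URL, link name and greeting text are all hypothetical:

```typescript
import { test, expect } from '@playwright/test';

test('Participant sees a greeting message', async ({ page }) => {
  await test.step('Given I visit the website', async () => {
    await page.goto('https://your-app.example.com'); // hypothetical URL
  });

  await test.step('When I navigate to the greeting screen', async () => {
    await page.getByRole('link', { name: 'Greeting' }).click(); // hypothetical link
  });

  await test.step('Then I see the greeting message', async () => {
    await expect(page.getByText('Welcome!')).toBeVisible(); // hypothetical message
  });
});
```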
Automated UI testing (aka end-to-end testing) is an awesome way to automate browser-based testing.
In the old days, Selenium was the gold standard, but these days it has been overtaken by Playwright, which lets you write tests in many popular languages including .NET, Java, Python and Node.js.
Playwright has a few advantages over Selenium:
- Actionability (see the sketch after this list)
- Performance
- Stability
- Switching browser contexts for parallel testing
- and more...
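To illustrate the first of these: actionability means Playwright automatically waits for an element to be attached, visible, stable and enabled before interacting with it, so most of the explicit waits that Selenium tests need simply disappear. A minimal sketch, assuming a hypothetical page with a 'Search' button and a results grid:

```typescript
import { test, expect } from '@playwright/test';

test('auto-waiting in action', async ({ page }) => {
  await page.goto('https://your-app.example.com'); // hypothetical URL

  // No explicit wait needed - click() auto-waits until the button
  // is attached, visible, stable and enabled
  await page.getByRole('button', { name: 'Search' }).click();

  // Web-first assertion - retries until the grid appears or the test times out
  await expect(page.locator('#results-grid')).toBeVisible();
});
```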
//Store the ID of the original window
const originalWindow = await driver.getWindowHandle();

//Check we don't have other windows open already
assert((await driver.getAllWindowHandles()).length === 1);

//Click the link which opens in a new window
await driver.findElement(By.linkText('new window')).click();

//Wait for the new window or tab
await driver.wait(
  async () => (await driver.getAllWindowHandles()).length === 2,
  10000
);

//Loop through until we find a new window handle
const windows = await driver.getAllWindowHandles();
windows.forEach(async handle => {
  if (handle !== originalWindow) {
    await driver.switchTo().window(handle);
  }
});

//Wait for the new tab to finish loading content
await driver.wait(until.titleIs('Selenium documentation'), 10000);
Figure: Bad Example - Selenium only lets you have one window focused at a time, meaning you can't do parallel testing easily
const { chromium } = require('playwright');

// Create a Chromium browser instance
const browser = await chromium.launch();

// Create two isolated browser contexts
const userContext = await browser.newContext();
const adminContext = await browser.newContext();

// Create pages and interact with contexts independently
const userPage = await userContext.newPage();
const adminPage = await adminContext.newPage();
Figure: Good Example - Playwright makes it easy to spin up independent browser contexts for parallel testing
Playwright codegen
Playwright offers a super cool feature that lets developers record actions in the browser to automatically generate the code for tests.
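For example, running `npx playwright codegen demo.playwright.dev/todomvc` opens a browser alongside an inspector window and records every click and keystroke as test code. The generated test looks something like the sketch below (the exact output depends on what you do in the browser):

```typescript
import { test, expect } from '@playwright/test';

test('test', async ({ page }) => {
  // Recorded by 'playwright codegen' while adding a todo item on the demo app
  await page.goto('https://demo.playwright.dev/todomvc/');
  await page.getByPlaceholder('What needs to be done?').fill('Write UI tests');
  await page.getByPlaceholder('What needs to be done?').press('Enter');
});
```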