
MoJ NOMS APVS 0000 Test Strategy



Delivery Pipeline

The testing activities described in this document form part of the definition of a feature being 'production ready'.

Testing Pyramid

The different classes of test in the testing pyramid each exercise functional qualities of the system in different ways.

They may also test non-functional qualities - for example, some performance testing may be automated at the unit and integration test levels, while other performance qualities may still be assessed through exploratory testing.

Generally, an effective test plan incorporates a range of testing activities rather than favouring just one approach - i.e. purely automated versus purely manual.

In-Sprint Activities

The test activities that should take place during each sprint ceremony are listed below.

Principles

  • Testing is not split into a separate phase or stage
  • Agile testing is a 'whole-team' approach - the whole team must take responsibility for quality on the project
  • Automate wherever possible. This includes unit and integration testing, functional and non-functional
  • Test driven development. Tests should specify desired behavior and drive what is implemented. Testing and coding should proceed in parallel as a single activity as development proceeds
  • Testing will provide continuous feedback to the team in the form of automated test results, discoveries made during exploratory testing and observations made by business users of the system
  • Collaborate with the users/business stakeholders to clarify and prioritise requirements

Backlog Grooming

  • Assure that the user stories can be adequately tested
  • Highlight testing impacts on user story estimation

Sprint Planning

  • Establish clear understanding of the test approach for each story among the group
  • Understand how new features/functionality will affect existing functionality and regression testing
  • Size stories and provide estimates from a test perspective with a view on whether or not the team have capacity to cover the anticipated test effort
  • Establish a clear view and plan of testing tasks that need to be completed for the sprint (e.g., manual tests, automated tests, non-functional tests, test data prep, test environment prep etc.)

Test Prep and Execution

  • Complete/co-ordinate any test preparation activities that may be required for the sprint (e.g., test data, test environment, deciding on appropriate candidates for automation)
  • Conduct exploratory testing to investigate unknown and unexpected behavior within the application
  • Design and develop automated functional unit and integration tests for all stories
  • Design and prepare non-functional tests – Performance, Security, Accessibility for stories which require them
  • Provide feedback to the PO on the functionality so that they can make informed decisions on how to proceed with delivery

Issues

  • The person identifying a defect should raise it in the first instance with the developer of the feature (face-to-face communication)
  • Bugs which are discovered during development of features being delivered in-sprint should be fixed in-sprint
  • Bugs discovered later will be added to the backlog
  • These will be triaged, and each sprint will have reserved time set aside to fix bugs
  • If there is an occasion where a defect has still not been resolved by the end of a sprint and the Product Owner has confirmed acceptance of the related user story, then:
    • The user story will be modified by the PO to reflect any deviations from acceptance criteria
    • A defect will be formally logged and will follow the normal process described above

Template

Defects should at a minimum capture the following information (this can be used as a template for submission to the backlog):

  • Description: Succinct high-level description of the issue
  • Environment: (Dev/Test/Staging/Production)
  • Build number: Identify the build against which the issue was produced
  • Steps to reproduce: Numbered sequence of steps taken to cause the issue (include any data, screenshots etc. which may be required to facilitate this)

Other ceremonies

  • Ensure that the team remains clear on test activities that need to be completed during, and before the end of, the sprint
  • Show and Tell is an opportunity to elicit feedback to be incorporated into the test approach for subsequent sprints
  • Retrospectives will provide input from a test perspective and look to continuously improve the team's test approach in future sprints

Exploratory testing

Exploratory testing is effectively manual testing which is unguided by a pre-defined set of scripts.

Manual testing is human-present testing. A human tester comes up with scenarios that will cause software to fail using their experience, creativity and intuition.

This gives a better chance of finding bugs related to the underlying business logic of the application.

Manual testing has some issues. It can be slow, too ad hoc, not repeatable and not reproducible.

To mitigate these issues, we propose the following strategy for any manual exploratory testing activities which are carried out in-sprint.

Guidelines

  • Any new feature of the solution completed in-sprint should have exploratory testing carried out against it
  • The feature should always be tested by a person other than the one who developed it
  • Focus on the goals of the system - think like a user of the system
  • Tests are unscripted, but should still be planned and the outcome accurately captured (See below)

A popular model for exploratory testing is the Tourist metaphor. At a minimum, the Landmark and Intellectual tour models listed in the above link are good approaches to use in combination, though I recommend having a read through all of the approaches listed.

Supporting Evidence

Exploratory testing generates documentation as tests are being performed instead of ahead of time in the form of a rigid test plan.

We will initially adopt a light-weight approach to capturing the output of an exploratory test in the form of a simple one-page report, which states:

  • Name of the feature tested
  • Goal of the exploratory test
  • Notes on how the test was conducted
  • Any issues identified, in enough detail to reproduce (i.e. a sequential, numbered set of steps)
  • Any attached test data or screenshots that might be useful

There are many tools which can be used to support exploratory testing, such as screen recorders and loggers; however, we will adopt the simple proposal above and re-assess if need dictates.

Information on the scope of an exploratory test can be found here: Gov Exploratory Testing Tips

Unit testing

JavaScript Unit Testing

It is crucial that all of our custom JavaScript code is covered by effective unit tests.

Each component of the solution which is written as a Node.js process should have a source structure which incorporates a test/ directory, for example:

external-api-node/
    app/
        routes/
    test/
        routes/
    ...

The test directory will contain sub-directories and files corresponding to each source code file under app/. This makes it easy to navigate to the test files that match any given module of the application.

Test files will use dash-case naming based on the folder path and original file name, e.g. app/routes/index.js has test file test/routes/test-route-index.js. This is necessary to uniquely identify the test file and original file in outputs which do not display full paths.

The example above shows two corresponding routes/ directories, representing the route definitions for the express application and the matching test cases for those routes.

We will adopt a Test Driven Development approach to unit testing JavaScript code. This represents a philosophy of tests as specification - i.e. a test should be written before the code which would make that test pass. All non-trivial parts of the solution should be written in such a way that they are testable, as code which is difficult to test creates gaps in test coverage or tests which are hard to maintain.

Test Packages

Unit level tests should isolate the code directly under test from any external dependencies such as database connections or other external resources.

To support this, we will use a number of supporting npm packages which add various mocking capabilities to our test approach:

Proxyquire

proxyquire is an npm package which provides an unobtrusive proxy for the node require mechanism.

This allows us to selectively mock functions from required modules within the code under test.

See the proxyquire documentation above for examples of usage.
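
As a minimal sketch of the pattern (the ./db module and its get function are hypothetical, for illustration only), a route module's database dependency could be replaced like this:

var proxyquire = require('proxyquire');

// Hypothetical stub standing in for a ./db module required by the route under test
var dbStub = {
  get: function () { return { name: 'stubbed claimant' }; }
};

// Load app/routes/index.js with its ./db require replaced by the stub
var route = proxyquire('../../app/routes/index', { './db': dbStub });

The tests then exercise route with the stubbed dependency in place, keeping the unit test isolated from the real database.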

Sinon

Sinon is a stubbing/mocking library which works with proxyquire to replace dependencies.
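
For example, a stub can replace a method on a collaborator and dictate its return value (claimService and its lookup method are made up for illustration):

var sinon = require('sinon');

// Hypothetical collaborator whose behaviour we want to control in a test
var claimService = {
  lookup: function () { /* real implementation */ }
};

// Replace lookup with a stub that returns a canned value
var stub = sinon.stub(claimService, 'lookup').returns('canned value');

claimService.lookup();            // the code under test would normally make this call
sinon.assert.calledOnce(stub);    // verify the interaction
stub.restore();                   // put the original method back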

Mocha

Mocha provides the main framework for structuring BDD/TDD style unit tests for our JavaScript code.

There are a few main constructs in Mocha tests which you should be aware of:

  • describe is used to group a set of tests, commonly known as a 'test suite'
  • it represents an individual test case within a describe test suite
  • beforeEach will execute a piece of code before each test case (used for prerequisite setup)
  • before will execute code before the first test case (i.e. only once for the suite)

For example, the following would represent two test cases within a single test suite:

describe('a test suite', function(){
  it('sheep should baa', function(){ /** ... */ })
  it('cows should moo', function(){ /** ... */ })
})
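
As a minimal, self-contained sketch of the hooks, before and beforeEach can be combined with a suite like this:

describe('a suite with hooks', function(){
  var fixture;

  before(function(){
    // Runs once, before the first test case in the suite
    fixture = [];
  })

  beforeEach(function(){
    // Runs before every test case (prerequisite setup)
    fixture.push('fresh item');
  })

  it('sees the item added by beforeEach', function(){ /** ... */ })
  it('sees a new item on the next run', function(){ /** ... */ })
})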
Supertest

supertest is a module designed specifically for testing HTTP and setting expectations for the result of test calls to our HTTP methods.

var request = require('supertest');
var express = require('express');

var app = express();

app.get('/user', function(req, res) {
  res.status(200).json({ name: 'tobi' });
});

request(app)
  .get('/user')
  .expect('Content-Type', /json/)
  .expect('Content-Length', '15')
  .expect(200)
  .end(function(err, res) {
    if (err) throw err;
  });

The basic example taken from the supertest docs is shown above. This illustrates using the module to wrap our express app object and then setting .expect assertions against the results of the GET request under test.

Chai

chai is a BDD/TDD framework for node which works well in conjunction with Mocha, supertest and many other JavaScript testing libraries.

For example, we can easily integrate Chai's .expect BDD assertion syntax into our Mocha tests.

Assertions support a fluent API, for example:

expect(answer).to.equal(42);
Extension packages

The related package sinon-chai extends chai to provide fluent assertion syntax for use with sinon stubs and spies.

Some examples:

mySpy.should.have.been.calledWith("foo");
mySpy.should.have.been.calledOnce

sinon-bluebird is a similar plugin which adds Bluebird helper methods to sinon.

// Stub a function that returns a resolved bluebird Promise
sinon.stub(obj, 'foo').resolves('hello world!');

Integration testing

Integration tests are automated tests written by the developer which do not mock or stub dependencies on other components or external services, but instead run against (ideally) physically deployed components or test harnesses.

The Development environment will act as the integration environment against which these tests will run.

This will include a development database and any other instances of services against which the tests should run.

Integration tests which create data (in the form of DB records, file uploads or similar) should include a teardown step to ensure that the environment is left in a consistent state comparable to before the test was executed.

In Mocha tests, this can be achieved through use of the after hook, for example:

after(function () {
  db.clean()
})
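
A slightly fuller sketch of the same pattern, assuming a hypothetical helper module that inserts and removes test records:

var testData = require('../helpers/test-data'); // hypothetical helper

describe('claim queries', function(){
  before(function () {
    // Insert the records this suite depends on
    return testData.insertClaims();
  })

  after(function () {
    // Remove everything the suite created so the environment is left clean
    return testData.removeClaims();
  })

  it('returns the inserted claims', function(){ /** ... */ })
})

Returning a promise from the hook lets Mocha wait for the clean-up to complete before reporting the suite as finished.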

UI testing

UI Testing is focussed on flow through the application rather than on functionally testing the application.

The UI test purely tests:

  1. Page Content - expected elements are visible
  2. Forms - user can complete all fields they need to and can click buttons
  3. Navigation - the user is brought to the expected page following a specific action

IMPORTANT: data, including the values entered on forms and any results returned from the server, is NOT part of a UI test.

Testing of values is the responsibility of integration testing.

UI tests will be written using Selenium WebDriver.

Full details are documented in the section below on cross-browser testing.
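
As a hedged sketch of the style of test using the selenium-webdriver npm package (the URL, selectors and page names below are assumptions for illustration, not the real application routes):

var webdriver = require('selenium-webdriver');
var By = webdriver.By;
var until = webdriver.until;

var driver = new webdriver.Builder().forBrowser('firefox').build();

driver.get('http://localhost:3000/start')                       // assumed local URL
  .then(function () {
    // Page content: the expected heading is visible
    return driver.wait(until.elementLocated(By.css('h1')), 5000);
  })
  .then(function () {
    // Forms: the user can click the button they need to
    return driver.findElement(By.css('#continue')).click();     // assumed button id
  })
  .then(function () {
    // Navigation: the user lands on the expected next page
    return driver.wait(until.urlContains('/next-step'), 5000);  // assumed next page
  })
  .then(function () {
    return driver.quit();
  });

Note that the test asserts only that elements are visible and navigation happens; no form values or server results are checked.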

Cross-browser and multi-platform testing

TODO: Sprint 3 - Spike 243

Accessibility

As part of non-functional testing, accessibility audits will be conducted on a continuous basis by the team.

It is important to note that Accessibility Auditing and Accessibility Testing are not the same.

Accessibility testing

Accessibility testing should be conducted by a group of disabled users (e.g. users who have a visual impairment).

The users conduct testing against each of the application screens and provide feedback.

All feedback is captured by a designated person who is facilitating the test session.

Accessibility auditing

Accessibility auditing is performed either manually by the tester or automatically by a tool as part of the continuous integration process.

NOMS require that web applications be developed to meet the AA WCAG 2.0 accessibility standard.

All web content, including both the internal and external facing sites, must meet all of the criteria defined in the standard which are required for Level AA.

The aim of auditing is to highlight any areas of the application screens that may not be compliant with the guidelines.

This audit is the responsibility of the developer who added the screens under examination.

Developers will consider accessibility requirements as they design the screens for the application.

We will use the pa11y node module to help automate the audit of WCAG AA compliance against the screens.

For any new screens that are added to the application, developers are expected to confirm that this audit check passes without any errors as part of the review process.

The proof-of-concept application contains an example audit module which runs the pa11y WCAG AA check and outputs a sample HTML report.

Change directory to alpha/external-web/ and run npm test.

The report output will be generated in external-web/test/wcag/html.
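
For reference, a hedged sketch of invoking pa11y programmatically against a locally running screen (this assumes the promise-based API of more recent pa11y versions and a local URL; the bundled audit module may differ):

var pa11y = require('pa11y');

// Audit a single screen against the WCAG 2.0 AA standard
pa11y('http://localhost:3000/start', { standard: 'WCAG2AA' })   // assumed local URL
  .then(function (results) {
    results.issues.forEach(function (issue) {
      console.log(issue.code + ': ' + issue.message);
    });
  });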

Security testing

Developers are responsible for ensuring that new web application features developed meet the OWASP security guidelines as a minimum standard.

The OWASP Top Ten is a useful awareness document which gives a broad overview of where most web application security flaws will be found.

Code Security

"Using Components With Known Vulnerabilities" is now part of the OWASP Top 10 - insecure libraries can pose a large risk to web applications.

Using automated tools to receive continuous feedback on the security of our node applications is good practice which mitigates the risk of introducing security flaws early in delivery.

Retire JS is an npm package which provides a scanning tool that checks node web applications for known vulnerabilities. It does this by checking against a variety of security databases such as the NIST NVD.

Retire will check both packages installed via npm and any locally included JavaScript files (e.g. jQuery).

Install Retire globally:

npm install retire --global

Running retire in the node application directory will report any errors found with information on the nature of the vulnerability and remediation.

See Usage Options on how to narrow or expand the scope of the scan.

A .retireignore file can be included in a project to selectively ignore modules when performing the check.

Retire JS ranks vulnerabilities from Low to High. As a minimum, all Medium and High vulnerabilities should be resolved as part of feature development.

The Grunt package grunt-retire can be configured in the application Gruntfile to automate the execution of the audit.
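
A hedged sketch of how this might be wired up in the Gruntfile (the file paths are assumptions; see the grunt-retire README for the full set of options):

module.exports = function (grunt) {
  grunt.initConfig({
    retire: {
      js: ['app/**/*.js'],   // locally included JavaScript files to scan (assumed path)
      node: ['.']            // node project directories (containing package.json) to scan
    }
  });

  grunt.loadNpmTasks('grunt-retire');

  // Run the vulnerability audit as a named task, e.g. as part of CI
  grunt.registerTask('audit', ['retire']);
};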

Penetration Testing

As with any other element of the testing strategy, variety is good when it comes to security testing a running application.

Kainos have developed a standalone docker image containing several common security testing tools which can scan a web application to assess a number of different attack vectors.

watchdog

Included tools are:

  • Arachni - scanner and penetration testing framework
  • sslyze - identify SSL misconfigurations
  • SQLMap - automate SQL injection flaw detection
  • Garmr - Inspect responses for basic security requirements

Instructions for how to download, install and run Watchdog are included in the README. A worked example is shown below.

Example Usage

The following steps can be used to run a local manual test against the external web application (for example):

  • cd into APVS alpha source
  • ./docker-compose up # Start alpha applications
  • Clone the watchdog repo locally
  • cd watchdog/vagrant
  • vagrant up build
  • vagrant ssh run
  • sudo -i
  • docker run --rm -it -e "GAUNTLT_ATTACK_SUBJECT=localhost:3000" -v /attacks:/data moomzni/gauntlt

Installing and running the docker image as part of a CI process is possible and this is also documented in the README.

Design Security

The techniques outlined above focus on the identification of issues with the basic structure of the web application itself.

However, at a higher level there exists the possibility that security flaws will exist within the design of the application, and this type of error is usually both harder to detect and to fix.

This is where security must be considered as part of either automated unit and integration tests or the manual exploratory testing conducted during the sprint. Common issues to look out for include:

  • Bypassing required navigation - In a sequence of screens, can the pages be requested out of order? Can certain steps be bypassed? This can be easily tested by examining the appropriate URLs and then using this information to navigate to an out-of-sequence URL on subsequent attempts (see the sketch after this list)
  • Attempting privileged operations - Catalog all the links for actions accessible only as an admin user. As a regular user or guest, attempt to access each in turn to check for privilege escalation
  • Abusing predictable identifiers - If an ID in a resource URL is easily predictable (e.g. sequential) then it may be possible to access resources which the user should not be able to see
  • Abusing repeatability - Any given action should bear the question - what if I do this again? If you can do it again, how many times can you do it and what happens? This type of test is a good candidate for automation.
  • Abusing high-load actions - Actions like image upload, loading in files, or any other action which could incur a high resource cost are candidates for DoS attacks.
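
As an example of automating the first of these checks, a hedged sketch using supertest (the application path, URLs and redirect target are assumptions for illustration):

var request = require('supertest');
var app = require('../../app/app');        // assumed path to the express app

describe('required navigation', function(){
  it('redirects when a later step is requested out of sequence', function (done) {
    request(app)
      .get('/claim/confirmation')           // assumed out-of-sequence URL
      .expect(302)                          // expect a redirect rather than the page
      .expect('Location', '/start', done);  // assumed entry point of the journey
  })
})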

Performance testing

Web performance budgeting

TODO: Backlog - Spike 241

Acceptance Testing

The purpose of User Acceptance Testing (or UAT) is to validate that a system is of sufficient quality to be accepted by the users, and in particular the Product Owner.

The acceptance/pre-production environment will be used for the purpose of testing and signing off features with the product owner.

The testing required to ensure acceptance of a story is to be defined by APVU / NOMS.

Environments

This section lists each of the environments which need to exist to support an automated delivery pipeline, their purpose within the pipeline and the rules for promoting artefacts from one environment to the next. This is summarised in the diagram below.

(Diagram: delivery pipeline environments)

Each of the environments listed below is a full, shared environment hosted in Azure.

Gated/Manual steps represent a checkpoint where the team will manually promote changes to the target environment once all Gateway criteria (as described below) have been met.

The Acceptance / Pre-production environment is an exact match for Production in every possible way; each earlier test environment maintains less parity but still aims to match production as closely as possible.


Development

This is the first environment to which code is delivered by the Kainos development team and is exclusively for their use.

Integration to external dependencies will be against test versions of those services if possible, or stubs and test harnesses.

Gateway

Fully completed features which are committed to the develop branch are promoted to this environment by the CI server once all automated testing for that task has passed.

Changes to this environment therefore occur at the granularity of an individual pull request, representing either a task which is part of a story or a logical sub-component of the task as determined by the developer.

Tests will include automated smoke, unit, integration and UI tests.

Access

Kainos development team.

Data

Development data, plus data automatically created and destroyed by automated integration tests.


Testing

The testing environment represents a more stable environment for the purpose of testing at the level of complete user stories.

Testing activities are carried out by the Kainos team.

Gateway

Changes are manually promoted to this environment whenever all tasks comprising the story have been integrated and tested in the development environment, all automated tests pass, and manual exploratory testing has been conducted.

Access

Kainos development team.

Data

This environment should have a more stable data set to allow for reproduction of testing efforts and issues: an obfuscated dataset which simulates production volumes.


Acceptance

This environment should be a mirror of production in all respects. This enables deployments to be vetted in an environment which is identical to the live system, creating confidence in the quality of the delivered artefacts for promotion to production.

Testing will be conducted by the Kainos team and for user acceptance tests, the APVU team.

Gateway

All tests pass in Testing environment.

Access

Kainos and APVU teams.

Data

Either obfuscated or full production data.


Production

Live environment in active use by the users of the system.

Gateway

All tests pass within the pre-production environment. This test suite will include all smoke, unit, integration, UI, performance/security, manual and user acceptance tests.

Access

Individuals with administrative access to production need to be agreed with APVU.

Data

Full production data.


Performance / Security

Using Azure container service as our deployment platform offers us the opportunity to spin up environments for one-off activities.

This may include Performance and Security testing events.

Typically these environments are created on demand and torn down once the activity is complete.

Gateway

All tests pass in Testing environment

Access

Kainos team.

Data

Either obfuscated or full production data. For particular classes of performance test, such as soak or load testing, this may require a greater volume of data than the expected production load.

