Every year, we see headlines about critical system failures: the Healthcare.gov roll-out, stock exchange crashes, and fund pricing application failures, to name a few. While only the ‘high profile’ incidents hit the national headlines, many other system fiascos occur and oftentimes fly under the radar. System issues can be caused by any number of factors – hardware, software, malware, human error, etc. However, a key underlying theme frequently points back to ‘testing’. Was the system adequately tested from an ‘end to end’ perspective? Was the ‘test environment’ a good proxy for the real ‘production’ environment? These questions come up time and time again.
So why does testing get the blame, and what are the key challenges? Testing is the last stop before a system goes live, so it’s the last set of hands to touch the system prior to moving to production. Testing is where formal sign-off typically occurs. And the commencement of testing is typically where “the rubber meets the road.” After a long journey through requirements, design, and development, testing is where everything comes together. And, as usual with most large projects, deadlines are committed to at the project onset, while the analysis, design, and development stages take longer than planned. So testing becomes time-boxed and squeezed.
It’s also no secret that testing is difficult. Many separately tested components have to be integrated and tested as a whole, which can be complex, expensive, and time consuming. A variable can change in one component, and a previously successful test in another component can fail. When you consider all the different permutations and combinations of variables in a highly complex system, the scope of testing can balloon into an enormous effort. A test environment also needs to closely mimic the actual production environment; if the two differ, the test results may not reflect how the system will actually behave in production. Creating separate test environments can be very expensive, especially when they consist of multiple applications serving multiple business teams.
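One inexpensive guardrail here is to check for configuration drift between environments automatically rather than by eye. Below is a minimal Python sketch of that idea; the file names, the JSON format, and the flat key/value structure are all hypothetical placeholders, not a prescription for any particular stack.

```python
# Minimal sketch: flag configuration drift between test and production.
# The file names and flat JSON structure here are hypothetical placeholders.
import json

def load_config(path: str) -> dict:
    """Load a flat key/value configuration file (JSON, for illustration)."""
    with open(path) as f:
        return json.load(f)

def config_drift(test_cfg: dict, prod_cfg: dict) -> dict:
    """Return settings that differ, or that exist in only one environment."""
    all_keys = set(test_cfg) | set(prod_cfg)
    return {
        key: (test_cfg.get(key, "<missing>"), prod_cfg.get(key, "<missing>"))
        for key in all_keys
        if test_cfg.get(key, "<missing>") != prod_cfg.get(key, "<missing>")
    }

if __name__ == "__main__":
    # Hypothetical file names; point these at your real environment configs.
    drift = config_drift(load_config("test_env.json"), load_config("prod_env.json"))
    for key, (test_val, prod_val) in sorted(drift.items()):
        print(f"{key}: test={test_val!r} prod={prod_val!r}")
```

Running a check like this on every deployment to the test environment turns “is the environment a good proxy for production?” from a judgment call into a report.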
Below are seven common testing pitfalls I’ve seen, with some suggestions on how to avoid them:
- No single testing owner – This leads to a lack of accountability for ‘end to end’ testing efforts. There should be a single Test Manager responsible for all test phases (e.g., functional, integration, regression) across all in-scope systems.
- A rush to commence test execution – This often follows a failure to complete proper test planning. Start early: lay out your test strategy, approach, scenarios, scripts, expected results, etc., before actual testing begins.
- Inadequate test environment – Often the test environment is not reflective of the actual production environment. Ensure the test environment is well controlled and coordinated in terms of code base, data, configuration settings, etc. (the drift check sketched above is one cheap way to catch mismatches).
- Too many testers playing in the same sandbox – Many companies try to save money by sharing the same test environment(s) across multiple projects and initiatives, but too many testers just causes sand to get in everyone’s eyes. When required, fight for your own sandbox.
- Too much focus on testing the new functionality – Firms often focus on new initiatives at the expense of regression testing. Your test plan must allow adequate time to verify that existing functionality continues to work as expected alongside the new functionality (a minimal regression-suite sketch follows this list).
- Test scenarios not well defined – If test scenarios aren’t understood upfront, untested conditions often result. Work with business experts to ensure complex business scenarios (including upstream / downstream impacts) are properly accounted for and tested (see the parametrized-scenario sketch after this list).
- Staffing the test team with non-testing professionals – Testing is both an art and a skill; leveraging people who have done it before helps the test team achieve efficiencies, avoid common pitfalls, and quickly address unexpected issues.
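To make the regression point concrete, here is a minimal pytest sketch of that discipline. The `calc_fee` function and its rates are hypothetical, not from any real codebase; the idea is simply that the regression test and the new-feature test live in the same suite and run together on every build.

```python
# Minimal sketch (pytest): keep regression tests for existing behavior in the
# same suite as tests for the new feature, so both run on every build.
# `calc_fee` is a hypothetical fee calculator, purely for illustration.

def calc_fee(amount: float, premium: bool = False) -> float:
    """Flat 1% fee; the 0.5% premium rate is the 'new functionality'."""
    rate = 0.005 if premium else 0.01
    return round(amount * rate, 2)

def test_standard_fee_unchanged():
    # Regression: pre-existing behavior must survive the new release.
    assert calc_fee(1000.0) == 10.00

def test_premium_discount():
    # New functionality introduced in this release.
    assert calc_fee(1000.0, premium=True) == 5.00
```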
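And for the scenario-definition pitfall, one lightweight way to capture the scenarios agreed with business experts is a parametrized table, so every permutation becomes an explicit, named test case rather than an untested condition. This sketch reuses the hypothetical `calc_fee` from above; the scenario names and expected values are illustrative, not real business rules.

```python
# Minimal sketch (pytest): encode business scenarios agreed with domain
# experts as a parametrized table, one explicit, named case per permutation.
import pytest

def calc_fee(amount: float, premium: bool = False) -> float:
    """Same hypothetical fee calculator as in the sketch above."""
    return round(amount * (0.005 if premium else 0.01), 2)

SCENARIOS = [
    # (scenario name,          amount,   premium, expected fee)
    ("standard small trade",    100.0,   False,     1.00),
    ("standard large trade",  50000.0,   False,   500.00),
    ("premium small trade",     100.0,   True,      0.50),
    ("premium large trade",   50000.0,   True,    250.00),
]

@pytest.mark.parametrize("name,amount,premium,expected", SCENARIOS,
                         ids=[s[0] for s in SCENARIOS])
def test_business_scenarios(name, amount, premium, expected):
    # Each row is a scenario signed off by the business experts.
    assert calc_fee(amount, premium) == expected
```

The table format also gives the business experts something they can review directly, which closes the loop on whether the upstream / downstream cases were actually captured.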