Software failures cost businesses millions of pounds every year, with much of this wasted money directly attributable to inadequate testing. The business case for thorough and timely software testing could hardly be more compelling: the stark truth is that the later in the product development lifecycle a failure is identified and fixed, the more expensive the fix, and the greater the risk of damage to an organisation’s reputation and of non-compliance with legal or safety regulations. Should an application or system go live with a bug, the potential damage could be huge, so why is early involvement of testing not the norm?
Most developers dislike being told that the software baby they have nurtured from inception through design and build is ugly. It’s a natural defensive instinct; after all, in fast-paced software development, teams are already under tremendous pressure. Alarmingly, this can mean that critical bugs found within a program are deprioritised to meet a tight release deadline or specific criteria. This may also occur if perceived risks are low – “the user would never do that” – or an “unpalatable” or costly fix is proposed.
When faced with apparent criticism from the testing team, developers can dismiss or close a defect complaint. This “them and us” mentality needs to be put aside to reach a point where development and testing teams acknowledge that they are building a product together and have a shared responsibility to deliver quality and success. A strong defect management process created by both teams is critical.
This shared responsibility can help organisations to avoid the kind of headline-making IT disasters that seem to dog the financial services and public sectors, with their complex IT infrastructures and multiple stakeholders. This negative publicity is compounded by the costs of fixing a defect, which rise exponentially the further on in the project the defect is identified.
During the requirements, design, development and testing phases, costs of fixing defects are often “manageable”. These costs start to increase if fixes are made during systems integration, user acceptance testing or live phases.
Although teamwork is key, ideally the testing team should be independent for a much-needed unbiased view. This avoids scenarios where, for instance, outsourced developers provide testing; in effect their own teams “mark their own homework”.
Team dynamics are not the only source of friction. Software testing is often planned far too late: traditional software development methodologies focus on collecting requirements and the initial stages of design. Testing requirements evaluation conducted too far into a project can reveal that more testing is required than originally thought, which can lead to de-scoped testing to keep costs and schedule under control.
Industries prone to software failures are often those that migrate “bedroom coding” or “spreadsheets and Access databases” into full-blown enterprise architectures. In the railway industry, for instance, spreadsheets and personal databases formed the core of working processes, and this pattern of usage only changed after several major rail disasters, including Hatfield.
The agile development process can also become an excuse for software failure. An agile team may use the philosophy to avoid documentation: “It’s agile, we didn’t do documentation” can become the justification when a defect is found. At every stage of an agile project, developers and testers need to agree an adequate level of documentation. Like agile itself, software quality needs to be a continual process, with testing teams working side by side with developers and being included in the development process early.
It’s only then that high-profile software glitches will fail to make the headlines.
Michael Vessey is a senior consultant at software testing and quality management consultancy SQS
• Do you agree that developers and software testers are often at loggerheads? Or is the role of testers becoming more appreciated, especially in light of recent high-profile software disasters? Let us know by leaving a comment