Tuesday, 23 November 2010
A question and answer session with Chandana Ranasinghe, Director Quality Assurance, and Waruna De Silva, Manager Quality Assurance at Virtusa
According to the Handbook of Software Quality Assurance, 3rd Edition, by G. Gordon Schulmeyer and James I. McManus, the software industry has witnessed a dramatic rise in the impact and effectiveness of software quality assurance in recent years.
From its infancy, when a handful of software pioneers explored the first applications of quality assurance to the development of software, software quality assurance has become integrated into all phases of software development. Quality has long been the Achilles' heel of software development. All too often, Software Quality Assurance (SQA) specialists are called in after the fact to spot defects.
Typically, SQA efforts begin only after the code is completed, with design and architectural mistakes already set in stone, a situation where delays and cost overruns become inevitable. The root of the problem is that most organisations view quality as a way to fix problems, rather than make better products. Most IT organisations fail to pay adequate attention to avoiding problems. Instead, they assume that bugs and defects are inevitable, and that SQA is necessary to identify those problems so they can be fixed.
The drawback with conventional SQA approaches is simple: they waste time, cost money and don’t produce better software. But what if IT organisations decided to develop software right, the first time?
Over 20 years ago, manufacturers learned that lesson the hard way, as they faced a rising tide of high quality, low‐cost imports from rivals who embedded quality, not as an afterthought, but directly into the entire product life cycle, from concept through design, production and delivery.
Measuring software quality involves two aspects: measuring how well the software is designed (quality of design) and measuring how well the software conforms to that design (quality of conformance). The first aspect measures the validity of the design and the requirements for creating a worthwhile software product. The second aspect is concerned with the implementation of the software product.
Software testing provides an objective, independent view of the software. Testing techniques include executing a program or application with the intent of finding defects in functionality and usability, evaluating conformance to good software development practices, and testing the application for performance issues and its ability to manage varying loads.
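The first technique mentioned above, executing a program with the intent of finding defects, is typically expressed as automated test cases. The following is a minimal sketch in Python's standard `unittest` framework; the function under test (`apply_discount`) and its behaviour are hypothetical examples, not anything from the interview.

```python
import unittest

# Hypothetical function under test: a simple discount calculator.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        # Functional testing: verify the expected outcome for normal input.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_no_discount(self):
        # Boundary case: a 0% discount should leave the price unchanged.
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # Negative testing: the function should reject out-of-range input.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Note how the suite mixes typical, boundary and negative cases, reflecting the intent of finding defects rather than merely confirming that the code runs.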
In the interview, Virtusans Chandana Ranasinghe and Waruna De Silva expressed their views on the following areas: designing, controlling and improving the software testing process as a vital element in assuring software quality; effectively monitoring the software development lifecycle and capturing the right statistics; choosing the right modelling and analysis techniques, including a business goal-driven measurement and metrics model for the software process; and how software product and services organisations must develop an appropriate measurement process according to their business goals.
Q: Quality has different definitions. What in your opinion is “quality”?
A: Simply stated, quality is achieved through meeting user expectations, meeting the true needs of the user and occurs as a result of four activities:
- Understanding user requirements.
- Designing products and services that satisfy those requirements.
- Developing processes that are capable of producing those products and services.
- Controlling and managing those processes so they consistently deliver to their capabilities.
Q: What gets us talking about quality? Quality gaps? How do quality gaps occur?
A: Quality gaps can be introduced into any product or service in many ways, but there are two main ones. The first occurs when users or customers are unable to explain what they expect from the product or service, so their needs are not articulated well. The second occurs when the producers of the product or service fail to understand user needs, so products and services are not designed and developed to meet the exact need.
Q: How do we deal with this challenge?
A: Primarily there are two ways, quality assurance and quality control.
Through Quality Assurance and Process Improvements, products and services are delivered through a set of planned and systematic activities that are implemented to provide adequate confidence and fulfill requirements for quality.
Quality Control, on the other hand, comprises the operational techniques and activities used to compare the outcome of products and services against the user requirements, verifying that the requirements for quality are fulfilled.
Q: What is the role metrics and measurements play in Quality Assurance and Quality Control?
A: There is a very famous saying: "When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind" – William Thomson (Lord Kelvin), 1824-1907.
Measuring the attributes of the product and service quality is important for both quality assurance and control. Usually metrics are generated through the process of quality control.
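Metrics generated through quality control are typically simple ratios over counted defects. As an illustration only (the specific metrics and figures below are common industry examples, not ones cited by the interviewees), two widely used measures are defect density and defect removal efficiency:

```python
# Two simple quality metrics commonly captured during quality control.
# All figures below are illustrative.

def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def defect_removal_efficiency(found_before_release: int,
                              found_after_release: int) -> float:
    """Percentage of total defects caught before release."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total

# A release with 45 defects in a 30 KLOC codebase:
print(defect_density(45, 30.0))           # 1.5 defects per KLOC

# 90 defects caught in testing, 10 escaped to production:
print(defect_removal_efficiency(90, 10))  # 90.0 percent
```

Expressing quality in numbers like these is what allows teams to compare releases, set targets and demonstrate improvement, in the spirit of the Kelvin quotation above.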
Q: How often does the industry use metrics for quality management and what is the way forward?
A: The current use of metrics is at an average level. However, it has significantly improved in the last two years due to the increasing demand for higher quality and competitive markets.
We can expect a further growth due to various factors:
- The recent recession was a great opportunity for both producers and users to stop and think about the products and services they produce and consume. Quality is now not a differentiator; it is a must.
- With the increased focus on resource conservation, producers are forced to improve quality so that the efficiency of resource usage increases.
- Markets are becoming more aware of quality, so more objectivity is expected in comparing products and services.
(Chandana Ranasinghe heads the Test Engineering team at Virtusa Corporation in Colombo. He joined Virtusa in 1999 and has assumed many leadership roles, delivering many projects in the capacity of a QA Manager and architect, both onshore and offshore. He has also held many organisational roles at Virtusa, including Head of the Test Engineering Competency Excellence Group at the Colombo Advanced Technology Centre, Senior QA Manager for the Banking, Financial Services and Insurance Business Unit, and Lead QA Analyst for the Global Enterprise Business Unit. Chandana holds a bachelor's degree in Computer Engineering from Moratuwa University, Sri Lanka, and a Master of Business Administration (MBA) from the Postgraduate Institute of Management (PIM) at the University of Sri Jayewardenepura, Sri Lanka, and is also a Certified Software Quality Analyst (CSQA) of the Quality Assurance Institute Worldwide (QAI). He is actively involved in industry and academia. He leads the Quality Forum of the Sri Lanka Association of Software and Services Companies (SLASSCOM) and is a founding member and Senior Consultant of the Quality Assurance Institute (QAI) Worldwide, Sri Lankan Chapter. He is an external lecturer on Software Quality Assurance and Testing at the Engineering Faculty, University of Moratuwa, the School of Computing, University of Colombo, and the Sri Lanka Institute of Technology.)
(Waruna De Silva heads the Testing Competency Excellence Group at Virtusa Corporation in Colombo. He has more than ten years of experience in software test engineering. His experience spans software test team management, test process improvement, domain testing, test automation and performance testing, through to specialised testing such as usability and security. He is also associated with the Quality Assurance Institute (QAI) Worldwide, Sri Lankan Chapter. Waruna holds an MBA in Project Management and a B.Sc in Engineering.)
Connect with Chandana & Waruna on http://www.facebook.com/VirtusaCorp