So you want to implement Quality Assurance… or should it be Quality Control?

By Bill Ferrarini, Senior Quality Assurance Analyst at SunGard Public Sector, and CISQ Member

 

Most companies use these terms interchangeably, but the truth is that Quality Assurance is a prevention method while Quality Control is a detection method.

 

Don't go shooting the messenger on this one; I know that each and every one of us has a different point of view when it comes to quality. The truth of the matter is we all have the same goal, but defining how we get there is the difficult part.

 

Let’s take a look at the different definitions taken from ASQ.org.

 

Quality Assurance: "The planned and systematic activities implemented in a quality system so that quality requirements for a product or service will be fulfilled." Quality Assurance is a failure prevention system that predicts almost everything about product safety, quality standards, and legality that could possibly go wrong, and then takes steps to control and prevent flawed products or services from reaching the advanced stages of the supply chain.

Quality Control: "The observation techniques and activities used to fulfill requirements for quality." Quality Control is a failure detection system that uses testing techniques to identify errors or flaws in products, testing the end products at specified intervals to ensure that they meet the requirements defined during the earlier QA process.

 

 

The definitions differ, and so does their scope.

 

To define a company's Quality Assurance strategy is to specify the process, artifacts, and reporting structure that will assure the quality of the product. To define a company's Quality Control is to specify the business and technical specifications, release criteria, test plan, use cases and test cases, and configuration management of the product under development.

 

It is important for a company to agree on the differences between Quality Assurance (QA) and Quality Control (QC). Both of these processes will become an integral part of the company's quality management plan. Without this delineation, a company's quality system could suffer from late deliveries, budget overruns, and a product that does not meet the customer's criteria.

 

Quality Assurance

The ISO 9000 standard for best practices states that Quality Assurance is “A part of quality management focused on providing confidence that quality requirements will be fulfilled.”

 

Quality Assurance focuses on processes and their continuous improvement. The goal is to reduce variance in processes in order to predict the quality of an output.
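
One common way to quantify process variance is a simple control-chart calculation. The sketch below is a minimal illustration, assuming weekly defect counts as the monitored output and the classic ±3-sigma control limits; it is not a mandated QA method.

```python
import statistics

# Illustrative weekly defect counts from a build process (assumed data).
weekly_defects = [12, 9, 14, 11, 10, 13, 8, 12]

mean = statistics.mean(weekly_defects)
sigma = statistics.stdev(weekly_defects)

# Classic +/-3-sigma control limits: points outside them suggest the
# process is out of control, so output quality cannot be predicted.
upper = mean + 3 * sigma
lower = max(0.0, mean - 3 * sigma)

print(f"mean={mean:.1f}, sigma={sigma:.1f}, limits=[{lower:.1f}, {upper:.1f}]")
for week, count in enumerate(weekly_defects, start=1):
    if not lower <= count <= upper:
        print(f"week {week}: {count} defects is outside the control limits")
```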

 

To measure a company's success in a Quality Assurance implementation, you would do well to monitor the following areas:

  • Best Practices
  • Code
  • Time to Market

Quality Control

The ISO 9000 standard for best practices states that Quality Control is “A part of quality management focused on fulfilling quality requirements.”

 

While QA is built around known best practices and processes, QC is a bit more complicated. To Control Quality, at a minimum you need to know two pieces of information:

  • The Customer’s view of Quality
  • Your company’s view of Quality

There are certain to be gaps between these two opposing views. How well you close those gaps will determine the quality of your product.

 

Other metrics that come into play within a Quality Control environment would be:

  • Number of defects found vs. fixed in an iteration
  • Number of defects found vs. fixed in a release
  • Defects by severity level

These are just some of the metrics you would use to measure the success of your Quality Control implementation.
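
For illustration, here is a minimal Python sketch of how the found-vs-fixed and severity metrics above might be computed. The defect-record structure and its field names are assumptions made for this example, not a prescribed schema.

```python
from collections import Counter

# Illustrative defect records; the field names are assumptions
# for this sketch, not a prescribed schema.
defects = [
    {"id": 1, "severity": "high",   "iteration": 3, "fixed": True},
    {"id": 2, "severity": "medium", "iteration": 3, "fixed": False},
    {"id": 3, "severity": "low",    "iteration": 4, "fixed": True},
]

def found_vs_fixed(defects, iteration):
    """Count defects found vs. fixed in one iteration."""
    found = [d for d in defects if d["iteration"] == iteration]
    fixed = [d for d in found if d["fixed"]]
    return len(found), len(fixed)

def defects_by_severity(defects):
    """Tally defects by severity level."""
    return Counter(d["severity"] for d in defects)

found, fixed = found_vs_fixed(defects, iteration=3)
print(f"Iteration 3: {found} found, {fixed} fixed")
print("By severity:", dict(defects_by_severity(defects)))
```

The same found-vs-fixed function works per release by tagging records with a release field instead of an iteration number.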

 

Summary

Neither QA nor QC focuses on the “whose fault is it?” question. The goal of a good QA and QC implementation should be to make things better by continuously improving your quality from start to finish. This requires good communication between the QA/QC groups.

 

Key attributes for success are:

  • Participation: Both process owners and users need to provide their expert input on how things “should” work, and define that in a fashion that allows your Quality Control to monitor the function.
  • Transparency: Open communication and the ability to look at all aspects of the process are critical to fully understand and identify both what works and what doesn’t.
  • Clear Goals: The entire team should know the intended results.

So if your company is implementing a Quality Management System, your first priority should be to understand the differences between QA and QC; once they are established, measure and improve at every chance you get.

 

About the Author

Bill Ferrarini is a Senior Quality Assurance Analyst at SunGard Public Sector. Bill has over 25 years of experience testing software, hardware, and web browser-based systems. After beginning his career as a software developer, Bill has devoted himself solely to furthering the Quality Management movement. He holds a diploma in Quality Management and a degree in Video and Audio Production, is a former certified ISO internal auditor, and is an accomplished musician.

Gartner Application Architecture, Development & Integration Summit 2014

Gartner Application Architecture, Development & Integration Summit 2014 will be held December 8 – 10 in Las Vegas, NV. Mark your calendar now and stay up to date on the must-attend event for AADI professionals.

 

Don’t miss out on a robust agenda of the hottest topics in AADI, industry-defining keynotes, top solution providers and the opportunity to network with industry experts and peers. CISQ representatives will be there to speak about the importance of software quality.

 

For more information click here.

CISQ Executive Lunch – Software Quality and Size Measurement in Government Sourcing

Where: Marriott Grand Hotel Flora, Via Veneto, 191, Rome, Italy

When: July 11, 2014

 

Government and industry have been plagued by expensive and inconsistent measures of software size and quality. The Consortium for IT Software Quality has responded by creating an industry standard measurement specification for Automated Function Points that adheres as closely as possible to the IFPUG counting guidelines, in addition to automated quality measures for Reliability, Performance Efficiency, Security, and Maintainability. Dr. Bill Curtis will describe these specifications and how they can be used to manage the risk and cost of software developed for government and industry use.

What Software Developers Can Learn From the Latest Car Recalls

By Sam Malek, CTO / Co-Founder of Transvive Inc., and CISQ Member

 

If you have been following the news these days, you have probably heard about the recall of some General Motors cars because of an ignition switch issue. The recall is estimated to cover 2.6 million cars (1) and to cost around $400 million (2), roughly $154 per vehicle. That is a steep price for a 57-cent part that could have been easily replaced on the assembly line.

 

As we enter the third wave of the industrial revolution (Toffler), where information technology is starting to dominate major parts of everyday life, software is becoming a critical component of day-to-day activities: from the coffee machine that might be running a small piece of code to the control unit that governs vehicles, and everything else in between. 

 

However, amid today's flood of news about applications that have made millions – even billions – of dollars for their developers, the stories we hear about the development life cycle do not necessarily highlight its quality aspects. Even in some recent documentaries and blog articles, software development is portrayed as a mad rush to get software out the door without proper attention to quality.

 

The case for higher software quality becomes even more important when an application touches a critical aspect of our daily lives. For example, drivers do not expect the onboard instrumentation to shut down or crash due to a software malfunction – especially while driving on a highway.

 

While software defects have many origins (design defects, requirements defects, coding defects, etc.), coding defects account for the highest percentage when measured as defects per function point. Dr. Capers Jones estimates that about 35% of any software application's defects are attributable to code defects (3). These code defects can be easily detected and fixed while the software is being manufactured, at a cost that is very small compared to fixing them later – as GM found out with its ignition switch issue.

 

The process of developing software is maturing. In the past, the focus was on extensive testing in areas such as user acceptance testing, regression testing, and scalability testing. Today there are quality tools that enable early detection of coding, structural, security, and reliability defects during the manufacturing process. These tools highlight potential issues within the software, reducing the cost and risk of fixing defects at later stages.

 

Late inspection of software causes rework and exposes technical debt, potentially making the cost of fixing a defect or making a change anywhere from 40 to 100 times greater than the cost of fixing that same defect when it was first created (Boehm, 2004). This alone can make the case for implementing early quality monitoring tools.
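
To make that multiplier concrete, here is a back-of-the-envelope Python sketch combining the 35% code-defect share cited above with Boehm's 40–100x range. The defect count and the fix-at-creation cost are illustrative assumptions, not measured data.

```python
# Back-of-the-envelope cost of early vs. late defect removal.
total_defects = 1000             # assumed defects injected in a release
code_defect_share = 0.35         # Capers Jones estimate cited above
cost_to_fix_early = 50.0         # assumed cost (USD) to fix at creation
late_multiplier_low, late_multiplier_high = 40, 100  # Boehm's range

code_defects = total_defects * code_defect_share
early_cost = code_defects * cost_to_fix_early

print(f"Code defects: {code_defects:.0f}")
print(f"Fixed at creation: ${early_cost:,.0f}")
print(f"Fixed late: ${early_cost * late_multiplier_low:,.0f} "
      f"to ${early_cost * late_multiplier_high:,.0f}")
```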

 

Early inspection is not new. In fact, it has been an integral part of manufacturing's quality revolution, especially in the 1980s through the work of the late W. Edwards Deming, who was well known at that time as the father of quality.

 

While software practices such as waterfall methodologies have focused on detecting defects in later cycles, we can learn from the quality revolution and harness the "Mistake Proofing" technique to automatically prevent defects from happening. This is called "Poka-Yoke", a Japanese term meaning "mistake proofing". Its purpose is to prevent defects, and the rework they cause, before the final product reaches the hands of its users.

 

Over the past few years, we have seen many IT shops implement proactive diagnosis only on the operational side of IT, such as proactive network and security monitoring. A smaller number of development shops have also integrated proactive defect tracking and fixing into software application development. As a result, these shops tend to deliver higher-quality work and earn higher customer satisfaction.

 

Auto manufacturing in the United States began in the 1890s, yet it took the industry almost 90 years – until imported automobiles gained significant market share – to start learning the true meaning of Total Quality, even though it was already practiced elsewhere, especially in Japan. The key question is: how many years will the software industry take to reach the same conclusion?

 

About the Author

Sam Malek is CTO and Co-Founder of Transvive Inc., an application modernization consulting firm. Sam has a track record of aligning business and IT strategies and a passion for helping organizations transform, improve service delivery and achieve operational excellence. Sam has been working with enterprises to design and implement strategies to deliver innovative solutions to complex problems in the Enterprise Architecture and Application Portfolio Management areas, specifically the field of application modernization.

 

References

(1) GM ignition switch probe finds misjudgment but no conspiracy – http://www.cbc.ca/news/business/gm-ignition-switch-probe-finds-misjudgment-but-no-conspiracy-1.2664803

(2) Chevy Aveo Recall Brings GM Total To 13.8 Million – http://washington.cbslocal.com/2014/05/21/chevy-aveo-recall-brings-gm-recall-total-to-13-8-million/

(3) Software Defect Origins and Removal Methods – http://namcookanalytics.com/software-defect-origins-and-removal-methods/

Automating Function Points – ICTscope.ch (SwiSMA/SEE)

Speaker: Massimo Crubellati, CISQ Outreach Liaison, Italy

Location: swissICT Vulkanstrasse, Zurich, Switzerland

 

Abstract:

IT executives have complained about the cost and inconsistency of counting Function Points manually. The Consortium for IT Software Quality was formed as a special interest group of the Object Management Group (OMG), co-sponsored by the Software Engineering Institute at Carnegie Mellon University, for the purpose of automating the measurement of software attributes from source code.

 

One of the measures the founding members of CISQ requested was Automated Function Points, specified as closely as possible to the IFPUG counting guidelines. David Herron, a noted FP expert, led the effort, which has resulted in Automated Function Points becoming an Approved Specification of the OMG. This talk will discuss the specification and report on experience with its use, including comparisons with manual counts. It will also present methods for using AFPs to calibrate FP estimating methods early in a project, as well as how to integrate automated counts into development and maintenance processes.

 

For more information, click here.

Productivity Challenges in Outsourcing Contracts

By Sridevi Devathi, HCL Estimation Center of Excellence, and CISQ Member

 

In an ever more competitive market, year-on-year productivity gains and output-based pricing models are standard 'asks' in most outsourcing engagements. Mature and accurate SIZING is the KEY to addressing them!

 

For successful implementation, it is essential that the challenges stated below are clearly understood and addressed in outsourcing contracts.

 

Challenge 1 – NATURE OF WORK

Not all IT services provided by IT vendors are measurable using ISO-certified Functional Sizing Measures like IFPUG FP, NESMA FP, or COSMIC FP (referred to as Function Points hereafter). While pure application development and large application enhancement projects are covered by Function Points, there are no industry-standard SIZING methods for projects/work units that are purely technology driven, such as the following:

  • Pure technical projects like data migration, technical upgrades (e.g. VB version x.1 to VB version x.2)
  • Performance fine tuning and other non-functional projects
  • Small fixes in business logic, configuration to enable a business functionality
  • Pure cosmetic changes
  • Pure testing projects
  • Pure agile projects

 

Challenge 2 – NEWER TECHNOLOGIES

  • The applicability of Function Points to certain technologies like Data Warehousing, Business Intelligence, and Mobility is not established.
  • While COSMIC is supposed to be the most suitable for such technologies, there is not enough awareness of it, nor are there enough data points.

 

Challenge 3 – TIME CONSUMING AND COMPETENCY ISSUES

  • It is of utmost importance to ensure that IFPUG/COSMIC certified professionals are involved in SIZING; hence there is a dependency on subject matter experts.
  • Also, appropriate additional effort needs to be budgeted upfront for SIZING of applications, releases, and projects.

 

Conclusions and Recommendations

Challenges 1 and 2 could lead to situations where more than 50% of the work done in a given engagement is not 'SIZE'able. Most clients do not foresee this gap and often expect the SIZE delivered by a vendor to be in proportion to the effort paid for. It is critical to have these challenges documented and agreed upon with the client upfront.

 

Challenge 3 could be addressed through the use of tools. For example, CAST provides automated FP counts based on code analysis, so it would be worthwhile for IT vendors to validate and ratify CAST's automated FP counts across various technologies, architectures, and types of work. While there will be exception scenarios that CAST does not address, the dependency on FP subject matter experts could be significantly reduced. CAST supports the Automated FP standard – http://www.castsoftware.com/news-events/press-release/press-releases/cast-announces-support-for-the-omg-automated-function-point-standard

 

Various other IFPUG FP tools, like Total Metrics, could also be used if manual FP counting is required. While these tools do not remove the dependency on FP subject matter experts, they significantly reduce the overall SIZING effort and also enable faster impact analysis of changes made to existing applications.

 

 

About the Author

Sridevi Devathi has 19 years of IT experience in the areas of Estimation Center of Excellence, Quality Management & Consulting, IT Project Management, and Presales. She has been with HCL for the past 16 years and currently leads the HCL Estimation Center of Excellence. She holds various certifications, including CFPS®, PMP®, IQA, CMM ATM, and Six Sigma Yellow Belt. She has taken part in external industry forums such as the CISQ Size Technical Work Group in 2010 (http://it-cisq.org), the IFPUG CPM Version 4.3 review in 2008 (http://www.ifpug.org), and the BSPIN SPI SIG during 2006-2007 (http://www.bspin.org).

Software Quality Challenges in Healthcare Systems – OMG (Boston, MA USA)

Model Based Systems Engineering (MBSE) in Healthcare Summit. Wednesday, June 18, 2014, Boston, MA

 

The OMG Technical Meeting provides IT architects, business analysts, government experts, vendors and end-users a neutral forum to discuss, develop and adopt standards that enable software interoperability for a wide range of industries.

 

On Wednesday, June 18, 2014, Dr. Bill Curtis will be hosting a session on Software Quality Challenges in Healthcare Systems. Here is the abstract:

 

The recent Healthcare.gov debacle highlighted the challenges of software quality in healthcare systems. However, these challenges extend far beyond badly managed government projects. Healthcare has lagged other industry segments in adopting recent advances for improving software quality, such as continuous improvement of both process and product. Generally organizations building embedded software for medical devices have been ahead of those building business software for administering medical operations and billing. This talk will review how continuous process improvement coupled with lean principles has dramatically improved software in other industry segments and will include a short case study from a medical device manufacturer. It will then discuss the more recent focus on the structural quality of software, which cannot be ensured through traditional testing methods. Structural issues related to Reliability, Performance, Security, and Maintainability will be discussed along with the costs and risks they affect.

 

More information can be found here.

 

CISQ Seminar: Measuring and Managing Software Risk, Security, and Technical Debt

Hosted By: Consortium for IT Software Quality (CISQ) in cooperation with the Center for Advanced Research in Software Engineering (ARiSE) at The University of Texas, IT Metrics & Productivity Institute (ITMPI), Object Management Group (OMG), and the Software Engineering Institute (SEI) at Carnegie Mellon University.

 

Join us for the next CISQ Seminar at the OMG Technical Meeting on Wednesday, September 17, 2014 at the Sheraton Austin Hotel at the Capitol (701 East 11th Street) in Austin, TX USA.

 

The rising number of multi-million dollar computer outages and security breaches has made software quality a boardroom topic because of the risk and cost of these embarrassing failures. The Measuring and Managing Software Risk, Security, and Technical Debt 1-day master seminar will feature Dr. Bill Curtis and other national experts to address the measurement and management of software risk, security, technical debt, and related areas of software quality. 

 

This seminar is intended for IT Executives, application managers, software measurement and improvement specialists, quality assurance professionals, and others interested in using automated software measures.

 

Registration is US $50. Registration is now closed.

 

CISQ members can access presentations under “Event & Seminar Presentations.”

 

“If you’re concerned about technical debt, software quality, and software security, you need to come to this event!” – Dr. Bill Curtis, Director, CISQ

 

 

 

 

PROGRAM AGENDA

 

  8:00 – 9:00 am: Registration

 

 

  9:00 – 9:15 am: Welcome and Introductions to CISQ and ARiSE

Dr. Bill Curtis, Director, Consortium for IT Software Quality (CISQ)

Herb Krasner, Principal Researcher, ARiSE, University of Texas

 

  9:15 – 10:15 am: The State of Software Process and Quality in the State of Texas

Herb Krasner, Principal Researcher, ARiSE, University of Texas
Mr. Krasner will describe his work with Texas state government to assess the maturity of their development practices and establish improvement programs. He will report on the quality and cost of ownership of the portfolio of applications in several state agencies and what is being done to manage and reduce it.

 

  10:30 – 11:30 am: Technical Liability and Self-Insuring Software

Dr. Israel Gat, Director, Agile Product and Project Management Practice, Cutter Consortium
Dr. Murray Cantor, IBM Distinguished Engineer

By shipping software, an executive assumes the risk that it will cause a future event creating significant liability. Thus, the organization is essentially self-insuring against future liabilities. A fair price for this insurance, the technical liability, reduces the economic value of the software. This talk discusses how to price this self-insurance and how to use it in deciding whether to ship or to invest further in improving quality.

 

  11:30 am – 12:00 pm: The Global State of Software Structural Quality: Do Method and Source Matter?

Dr. Bill Curtis, SVP and Chief Scientist, CAST Software

Dr. Curtis will discuss results from the structural analysis of 1316 software systems from 4 continents comprising 700 million lines of code, including the effects of technology, development method, industry sector, and sourcing and shoring choices on the quality factors of robustness, security, performance, and changeability.

 

  12:00 – 1:00 pm: Lunch

 

 

  1:00 – 1:45 pm: Measuring and Managing Technical Debt

Dr. Bill Curtis, SVP and Chief Scientist, CAST Software

The various components of the technical debt metaphor will be defined and examples provided (principal, interest, liability, opportunity cost). An automated measure for estimating technical debt will be described, along with empirical results from over 700 commercial applications. A process for managing technical debt will be presented, along with several empirical case studies of successful cost reduction from controlling and removing technical debt principal.
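
As a rough illustration of how an automated technical-debt estimate is often formulated, here is a generic Python sketch: principal is summed over open violations as count times average fix time times a blended hourly rate. The violation categories, counts, fix times, and rate are all assumptions for the example; this is not the specific measure the talk describes.

```python
# Generic technical-debt-principal sketch (illustrative figures only).
HOURLY_RATE = 75.0  # assumed blended cost per developer-hour (USD)

# (violation category, open count, assumed average hours to fix)
violations = [
    ("reliability",     120, 1.5),
    ("security",         45, 3.0),
    ("performance",      60, 1.0),
    ("maintainability", 300, 0.5),
]

principal = sum(count * hours * HOURLY_RATE
                for _category, count, hours in violations)
print(f"Estimated technical debt principal: ${principal:,.0f}")
```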

 

  1:45 – 2:30 pm: New Findings on Measuring the Effectiveness and Quality of Agile Projects

 Dr. William Nichols, Software Engineering Institute, Carnegie Mellon University

This session will present new research being released by the Software Engineering Institute (SEI) on the measurement of agile projects. The featured results present conclusions from a study of transactional data collected from an Agile life-cycle management platform. Results will be contrasted with data from Team Software Process (TSP) projects. Findings include observations on difficulties and limitations in measuring agile projects and on the consistency of agile practices.

 

  2:30 – 2:45 pm: Break

 

 

  2:45 – 3:45 pm: Advances in Measuring and Preventing Software Security Weaknesses

Robert Martin, Director, Common Weakness Enumeration Repository, Mitre Corp.

Mr. Martin will describe the latest developments in the national cyber-security community to identify and measure security threat vectors and the weaknesses they exploit. He will describe the actions taken by this community to improve the state of software security and spread best security practices to the development community.

 

  3:45 – 4:00 pm: Standards and Automated Software Measurement

Dr. Bill Curtis, Director, Consortium for IT Software Quality (CISQ)

Dr. Curtis will briefly describe the work of CISQ to supplement ISO standards with standards for automating the measurement of functional size and source code structural quality. Future work on standards for measuring technical debt and quality-adjusted productivity will be described.

 


 

 

Thank you to CISQ Partners

 

Advanced Research in Software Engineering (ARiSE)

The Center for Advanced Research in Software Engineering (ARiSE) was established to create cutting-edge basic and domain-specific software engineering research. ARiSE integrates research in the Departments of Electrical & Computer Engineering, Computer Science, and Civil Engineering, and the School of Information Sciences at The University of Texas at Austin. ARiSE produces significant advances in software engineering paradigms, methods, techniques, and technologies, and empirically evaluates new concepts. http://arise.utexas.edu


 

IT Metrics & Productivity Institute (ITMPI)

The IT Metrics and Productivity Institute (ITMPI) has built the largest repository of online, on-demand, mobile-friendly educational lectures anywhere in the world – specifically for IT and software professionals with an interest in metrics, quality, and process improvement. With hundreds of expert presenters and hundreds of different topics, you will find everything you need – in one place – to meet all your continuing education needs. Your one-year membership to the ITMPI is FREE with your CISQ-ARiSE conference registration. That's unlimited access for a period of one year – at no cost! Your coupon code for free membership will be included in your registration bag. Good luck and best wishes for your continued success! http://www.itmpi.org/
