CISQ Interviewed by SD Times – Dr. Bill Curtis (CISQ) and Dr. Richard Soley (OMG) Cited

Read About CISQ’s Mission, Standards Work, and Future Direction

 

Tracie Berardi, Program Manager, Consortium for IT Software Quality (CISQ)

 

Rob Marvin published an article in the January issue of SD Times that details the work of the Consortium for IT Software Quality (CISQ). Rob interviewed Dr. Richard Soley, CEO of the Object Management Group (OMG), and Dr. Bill Curtis, Executive Director of CISQ. The article sheds light on the state of software quality standards in the IT marketplace.

 

For CISQ members, I can supplement what's covered in the article.

 

CISQ was co-founded by the Object Management Group (OMG) and the Software Engineering Institute (SEI) at Carnegie Mellon University in 2009.

 

Says Richard Soley of OMG, “Both Paul Nielsen (CEO, Software Engineering Institute) and I were approached to try to solve the twin problems of software builders and buyers (the need for consistent, standardized quality metrics to compare providers and measure development team quality) and SIs (the need for consistent, standardized quality metrics to lower the cost of providing quality numbers for delivered software). It was clear that while CMMI is important to understanding the software development process, it doesn’t provide feedback on the artifacts developed. Just as major manufacturers agree on specific processes with their supply chains, but also test parts as they enter the factory, software developers and acquirers should have consistent, standard metrics for software quality. It was natural for Paul and I to pull together the best people in the business to make that happen.”

 

Richard Soley reached out to Dr. Bill Curtis to take the reins at CISQ. Bill Curtis is well known in software quality circles, having led the creation of the Capability Maturity Model (CMM) and the People CMM while at the Software Engineering Institute. He has published five books and more than 150 articles, and he was elected a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for his career contributions to software process improvement and measurement. He is currently SVP and Chief Scientist at CAST Software.

 

“Industry and government badly need automated, standardized metrics of software size and quality that are objective and computed directly from source code,” he says.

 

Bill Curtis organized CISQ working groups to start work on specifications. The Automated Function Point (AFP) specification was led by David Herron of the David Consulting Group and became an officially supported standard of the OMG in 2013. Currently, Software Quality Measures for Security, Reliability, Performance Efficiency, and Maintainability are undergoing standardization by the OMG.

 

The SD Times article in which Dr. Curtis and Dr. Soley are cited – CISQ aims to ensure industry wide software quality standards – summarizes these specifications and their adoption. It is worth a read.

 

A media reprint of the article has been posted to the members' area of the CISQ website.

 

You can also watch this video with Dr. Bill Curtis.

 

Later this year, CISQ will start work on specifications for Technical Debt and Quality-Adjusted Productivity.

 

CISQ to Start Work on Automated Enhancement Function Point Specification

By Tracie Berardi, Program Manager, Consortium for IT Software Quality (CISQ)

 

In January 2013, CISQ published a specification for Automated Function Points (AFP) that enables the automated sizing of software in function points. The spec was developed by an international team led by David Herron of the David Consulting Group. The CISQ AFP spec was designed to be as similar as possible to the IFPUG Counting Guidelines, but also to be objective, so that counts are consistent (the same every time) and can be automated in tools. You can learn more about AFP here.
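
To make the idea concrete, here is a minimal sketch (illustrative only, not the CISQ specification itself) of how an unadjusted count can be computed once automated analysis has classified an application's data functions and transactional functions and rated their complexity. The weights are the standard IFPUG values; the classification step is assumed to have already happened.

    # Illustrative sketch only -- not the CISQ AFP specification.
    # Assumes a static-analysis step has already classified data functions
    # (ILF, EIF) and transactional functions (EI, EO, EQ) and rated their
    # complexity. Weights are the standard IFPUG unadjusted weights.
    IFPUG_WEIGHTS = {
        "ILF": {"low": 7, "average": 10, "high": 15},
        "EIF": {"low": 5, "average": 7, "high": 10},
        "EI":  {"low": 3, "average": 4, "high": 6},
        "EO":  {"low": 4, "average": 5, "high": 7},
        "EQ":  {"low": 3, "average": 4, "high": 6},
    }

    def unadjusted_function_points(elements):
        """elements: (function_type, complexity) pairs from automated analysis."""
        return sum(IFPUG_WEIGHTS[ftype][cplx] for ftype, cplx in elements)

    # Example: two average internal logical files and three simple external inputs.
    print(unadjusted_function_points([
        ("ILF", "average"), ("ILF", "average"),
        ("EI", "low"), ("EI", "low"), ("EI", "low"),
    ]))  # -> 29

Because the same elements are always classified and weighted the same way, the count is repeatable, which is the property the specification emphasizes.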

 

In 2015, CISQ begins work on a specification for Automated Enhancement Function Points (AEFP). The existing AFP specification is not suitable for productivity analysis because it does not capture maintenance work: substantial changes to the code can leave the application's total function point count unchanged.

 

The primary challenge is to identify a counting or weighting method for AEFPs that correlates with maintenance effort: small changes should produce small AEFP counts, while large changes should produce large AEFP counts. The method and weighting schemes will be consistent with the theory and methods of function point counting, with a primary focus on consistency with the IFPUG Counting Guidelines.
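
As a purely hypothetical illustration of that intent (the AEFP counting rules are still to be defined), an enhancement count might sum the function points added, modified, and deleted by a change, so that the result scales with the amount of work rather than with the application's total size:

    # Hypothetical illustration of the AEFP intent, not the eventual
    # CISQ specification: count the function points touched by an
    # enhancement so the result scales with the size of the change.
    def enhancement_function_points(added_fp, modified_fp, deleted_fp):
        """Arguments are unadjusted FP counts for elements touched by the change."""
        return added_fp + modified_fp + deleted_fp

    # A small fix touching one simple external input (3 FP) stays small...
    print(enhancement_function_points(added_fp=0, modified_fp=3, deleted_fp=0))    # -> 3
    # ...while a large rework of files and transactions produces a large count.
    print(enhancement_function_points(added_fp=25, modified_fp=40, deleted_fp=12)) # -> 77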

 

The objective for CISQ is to create a specification that can be approved as a supported specification of the Object Management Group (OMG).

 

The project team consists of CISQ sponsors from CAST Software, Huawei, Wipro, Accenture, and Atos.

 

For questions or to get involved, you can contact me directly at tracie.berardi@it-cisq.org or contact CISQ through our website.

CISQ Executive Lunch – Software Quality and Size Measurement in Government Sourcing

Where: Marriott Grand Hotel Flora, Via Veneto, 191, Rome, Italy

When: July 11, 2014

 

Government and industry have been plagued by expensive and inconsistent measures of software size and quality. The Consortium for IT Software Quality has responded by creating an industry-standard measurement specification for Automated Function Points that adheres as closely as possible to the IFPUG counting guidelines, in addition to automated quality measures for Reliability, Performance Efficiency, Security, and Maintainability. Dr. Bill Curtis will describe these specifications and how they can be used to manage the risk and cost of software developed for government and industry use.

Automating Function Points – ICTscope.ch (SwiSMA/SEE)

Speaker: Massimo Crubellati, CISQ Outreach Liaison, Italy

Location: swissICT Vulkanstrasse, Zurich, Switzerland

 

Abstract:

IT executives have complained about the cost and inconsistency of counting Function Points manually. The Consortium for IT Software Quality was formed as a special interest group of the Object Management Group (OMG), co-sponsored by the Software Engineering Institute at Carnegie Mellon University, for the purpose of automating the measurement of software attributes from source code.

 

One of the measures the founding members of CISQ requested was Automated Function Points, specified as closely as possible to the IFPUG counting guidelines. David Herron, a noted FP expert, led the effort, which has now resulted in Automated Function Points becoming an Approved Specification of the OMG. This talk will discuss the specification and report on experience with its use, including comparisons with manual counts. It will also present methods for using AFPs to calibrate FP estimating methods early in a project, as well as ways to integrate automated counts into development and maintenance processes.
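
By way of example (an assumed sketch of the general calibration approach, not material from the talk), automated counts from completed projects can be combined with recorded effort to derive a delivery rate, which is then applied to an early AFP count for a new project:

    # Assumed sketch of calibrating effort estimates with automated FP counts;
    # the figures below are hypothetical.
    def hours_per_fp(history):
        """history: (automated_fp_count, actual_effort_hours) pairs from
        completed projects measured with the same AFP tooling."""
        total_fp = sum(fp for fp, _ in history)
        total_hours = sum(hours for _, hours in history)
        return total_hours / total_fp

    def estimate_effort(new_project_afp, history):
        return new_project_afp * hours_per_fp(history)

    past_projects = [(420, 5100), (310, 3650), (650, 7900)]  # hypothetical data
    print(round(estimate_effort(500, past_projects)))        # -> about 6033 hours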

 

For more information, click here.

Productivity Challenges in Outsourcing Contracts

By Sridevi Devathi, HCL Estimation Center of Excellence, and CISQ Member

 

In an increasingly competitive market, year-on-year productivity gains and output-based pricing models are standard ‘asks’ in most outsourcing engagements. Mature and accurate sizing is the key to meeting them.

 

For successful implementation, it is essential that the challenges below are clearly understood and addressed in outsourcing contracts.

 

Challenge 1 – NATURE OF WORK

Not all IT services provided by vendors can be measured using ISO-certified functional sizing measures such as IFPUG FP, NESMA FP, or COSMIC FP (referred to hereafter as Function Points). While pure application development and large application enhancement projects are covered by Function Points, there are no industry-standard sizing methods for projects or work units that are purely technology driven, such as the following:

  • Pure technical projects like data migration, technical upgrades (e.g. VB version x.1 to VB version x.2)
  • Performance fine tuning and other non-functional projects
  • Small fixes in business logic or configuration changes to enable a business function
  • Pure cosmetic changes
  • Pure testing projects
  • Pure agile projects

 

Challenge 2 – NEWER TECHNOLOGIES

  • The applicability of Function Points to certain technologies, such as Data Warehousing, Business Intelligence, and Mobility, is not established.
  • While COSMIC is considered the most suitable method for such technologies, there is not enough awareness of it or enough supporting data.

 

Challenge 3 – TIME-CONSUMING AND COMPETENCY ISSUES

  • It is of utmost importance that IFPUG- or COSMIC-certified professionals are involved in sizing, which creates a dependency on subject matter experts.
  • Appropriate additional effort also needs to be budgeted upfront for sizing applications, releases, and projects.

 

Conclusions and Recommendations

Challenges 1 and 2 could lead to situations where more than 50% of the work done in a given engagement cannot be sized. Most clients do not foresee this gap and often expect the size delivered by a vendor to be proportional to the effort paid for. It is critical to document these challenges and agree on them with the client upfront.

 

Challenge 3 could be addressed with tools. For example, CAST provides automated FP counts based on code analysis, so it would be worthwhile for IT vendors to validate and ratify CAST's automated FP counts for various technologies, architectures, and types of work. While there will be exception scenarios that CAST does not address, the dependency on FP subject matter experts could be significantly reduced. CAST supports the Automated FP standard: http://www.castsoftware.com/news-events/press-release/press-releases/cast-announces-support-for-the-omg-automated-function-point-standard

 

Various other IFPUG FP tools, such as those from Total Metrics, could also be used if manual FP counting is required. While these tools do not remove the dependency on FP subject matter experts, they significantly reduce the overall sizing effort and speed up impact analysis of changes to existing applications.

 

 

About the Author

Sridevi Devathi has 19 years of IT experience in the areas of Estimation Center of Excellence, Quality Management & Consulting, IT Project Management, and Presales. She has been with HCL for the past 16 years and currently leads the HCL Estimation Center of Excellence. She holds certifications including CFPS®, PMP®, IQA, CMM ATM, and Six Sigma Yellow Belt, and has participated in external industry forums such as the CISQ Size technical work group in 2010 (http://it-cisq.org), the IFPUG CPM Version 4.3 review in 2008 (http://www.ifpug.org), and the BSPIN SPI SIG during 2006-2007 (http://www.bspin.org).