Survey on Time-to-Fix Technical Debt

CISQ is working on a standard measure of Technical Debt. Technical debt is a measure of the software cost, effort, and risk attributable to defects remaining in code at release. Like financial debt, technical debt incurs interest over time in the form of the extra effort and cost required to maintain the software. Technical debt also represents the level of risk to which the business is exposed through the increased cost of ownership.

 

Completing the measure requires estimates of the time required to fix software weaknesses included in the definition of Technical Debt.
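
To make the shape of the calculation concrete, here is a minimal sketch in Python. It is not the official CISQ specification; it simply assumes the measure sums, over each detected weakness, a survey-derived time-to-fix estimate times a labor rate. The weakness names, hours, and rate below are hypothetical.

```python
# Minimal sketch of a remediation-effort style Technical Debt estimate.
# NOT the official CISQ formula; all names and numbers are hypothetical.

# Hypothetical time-to-fix estimates (hours per occurrence), the kind of
# data the survey above is collecting.
WEAKNESS_FIX_HOURS = {
    "sql_injection": 4.0,
    "unreleased_resource": 1.5,
    "unvalidated_input": 2.0,
}

def technical_debt(occurrences: dict, hourly_rate: float) -> float:
    """Estimated cost ($) to remediate the detected weaknesses."""
    return sum(
        count * WEAKNESS_FIX_HOURS[weakness] * hourly_rate
        for weakness, count in occurrences.items()
    )

# 12 SQL injection flaws and 40 unreleased resources at $75/hour:
print(technical_debt({"sql_injection": 12, "unreleased_resource": 40}, 75.0))
# 12*4.0*75 + 40*1.5*75 = $8,100 of debt
```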

 

Please take our Technical Debt Survey

 

The survey is a PDF form that is posted to the CISQ website. To take the survey:

  • Download the PDF form
  • Fill in your responses
  • Press the “send survey” button on the last page of the survey
  • Alternatively, you can save the PDF file to your desktop and email it directly to: coordinator@it-cisq.org

 

As a “thank you” for your time, we are giving away $20 Amazon gift cards to the first 50 respondents.

 

To download the survey (PDF): http://it-cisq.org/technical-debt-remediation-survey/

 

Thank you for contributing to this initiative.

 

For any questions:

 

Tracie Berardi
Program Manager
Consortium for IT Software Quality (CISQ)
tracie.berardi@it-cisq.org
781-444-1132 x149


 

Takeaways from the 2016 Software Risk Summit

Tracie Berardi, CISQ Program Manager

 

Business leaders have historically overlooked software risk as a security concern, and companies have paid a high price as a result. Remember the debacle earlier this year when HSBC services went down, leaving customers unable to access their online banking? The outage hit during the peak of tax season, causing a flurry on social media and deeply damaging the company’s reputation with customers.

 

Companies have also had to pay out large sums to compensate their customers. RBS paid £231 million for its IT failures a few years ago, and the Target breach cost the retailer $152 million in addition to chief executive turnover. More recently, hackers took over the controls of a Jeep, and a similar incident left Toyota-Lexus fixing a software bug that disabled cars’ GPS and climate control systems.

 

Poor structural quality of IT systems and software risk are not just IT issues. They are business problems that can lead to lost revenue and a decline in consumer confidence. So I was thrilled that the topic of the annual Software Risk Summit in New York was exactly that: software risk.

 

Panel guests from BNY Mellon, the Software Engineering Institute at Carnegie Mellon, the Boston Consulting Group, and CAST shared interesting “real world” insights. But beforehand, I was able to sit in on the keynote by Rana Foroohar, a regular commentator on CNN and a global economics analyst for TIME Magazine, among other outlets.

 

Rana made a very important connection between America’s post-recession recovery and the role software risk will play in companies’ ability to create real, sustainable growth. According to Rana and her book Makers & Takers, we are entering a period of volatility with lower long-term growth, an unstable U.S. election cycle and a growing wealth divide. Because of this, the private sector is going to take on a bigger role in turning technology and infrastructure into tangible value that will carry the country through a period of “public sector slump.”

 

She shared an interesting statistic, noting that pre-2008, companies and consumers held the majority of the country’s debt. Now that paradigm has shifted, with consumers and corporations becoming more debt-averse, leaving the U.S. government to carry the vast majority of our debt burden. In this coming era of increased dependence on the private sector to create and sustain a thriving economy, it is more important than ever for business executives to take software risk seriously, take stock of their technology investments and prepare for future waves of innovation.

 

Following Rana’s inspiring keynote, the panel discussion dove head-first into the tactical application of software risk mitigation. Here is a brief summary of the interactive Q&A:

 

Why is Software Risk a Problem?

 

Benjamin Rehberg, Managing Director, BCG: The biggest responsibility lies at the CEO and board level. Many leaders may realize they’re becoming a technology company, but they’re not quite sure what to do about it. Most CEOs want to focus on boosting revenue, but they fail to recognize technology as a strategic enabler of the business.

 

Technology was originally used to run internal systems, so the incentive for developers to write resilient code was very low. Only about 20 years ago, when systems first became exposed to the Internet, did we start to see the need to worry about risk in systems that directly face end customers. So there’s still a lot of digital risk buried in millions of lines of code.

 

However, with the increased publicity of big software glitches, there is more pressure to keep the business running and customers satisfied. For example, board members and CEOs are starting to think about what will happen to them if big security issues and breaches continue to plague their companies. Their company performance and jobs are at stake, so personal incentives are becoming more important and are starting to drive change.

 

Kevin Fedigan, Head of Asset Servicing and Broker Dealer Services, BNY Mellon: Leadership must take a progressive attitude toward risk and treat it as a core organizational value. For example, BNY manages risk at three levels: 1) general employees, 2) traditional compliance roles, and 3) internal and external auditing. The financial services industry, in particular, has a reputation to uphold. We need to ensure customer trust in our systems.

 

Dr. Paul Nielsen, CEO of SEI: Some CEOs are uncomfortable with risk, so they delegate it to their CIO. But even then, they can’t rid themselves of the responsibility. This creates more of a stigma around risk and fosters an environment where it can grow and lead to bigger problems down the line. It’s interesting to see us all rushing to the Internet of Things, but most of the technology supporting this shift was designed with code written before the Internet. We clearly still have some catching up to do.

 

Vincent Delaroche, CEO of CAST: This may seem like a paradox because there is such high demand for security, but the root causes of many software catastrophes are actually resiliency and efficiency issues – not security flaws. Security gets the glitz and the glory, but the press sometimes misses the true root cause of many software issues, thereby misleading executives into seeking out security tools rather than solutions that help with resiliency and efficiency. I believe we are reaching a tipping point where there will be a spike in demand from the Fortune 500 to assess their real risk exposure.

 

What does culture have to do with software risk? Do we have a communication issue? What is IT not doing to get the business and the board’s attention?

 

BNY Mellon: We make the business own the risk, so risk is not removed from business outcomes. For example, with high-priority items, that risk must be remediated within a 30-day window. Our CEOs report to the Chief Risk Officer to ensure we aren’t letting risk and security fall by the wayside. We’re doing the best we can to remove those communication barriers and increase transparency between operations and the business.

 

SEI: There are too many risks to address them all, so you have to figure out what really matters. By setting benchmarks, it’s easier to measure your investment and prove ROI. The Consortium for IT Software Quality (CISQ), an organization we’re involved in, has identified a set of specific risk issues as important, along with a standard measurement framework.

 

BCG: Financial risk is a big topic for many of our clients. Financial institutions are constantly being hammered by regulators to comply, but they have such a broad range of technologies to manage. Because of this, we’re seeing that most outages are actually due to the “plumbing in between” various components of core systems and business processes. Very few technologists are actually looking at the transaction level between the technologies, and that’s where we most often see clients get tripped up.

 

What is the correlation between costs and risk?

 

SEI: Effective leaders will balance cost with productivity. It’s important to determine what is vitally important to your business and make sure those systems don’t degrade, but you must also prioritize where the investment goes. Many leaders still don’t understand where the risk comes from. The industry would benefit from a “genetic testing of software.”

 

BCG: The earlier you catch risks, the less they cost to repair. What we’ve seen work well is creating a culture of incentives for high-quality code. There’s a big push for IT organizations to become more agile and set up peer code reviews to help create more robust software.

 

CAST: Let’s say a development team for a large enterprise has about 1,000 engineers. Each year, an IT department this size will have to deal with about 20,000 software defects in production. When we look at these defects, we typically see that 90% cost very little, maybe a few hundred dollars. 9% of defects cost the business an average of $5,000, and only 1% of defects are severe enough to cost the business upwards of $50,000. So, individually, these errors are small. But when we look at them together, we see enterprise CIOs writing off upwards of $20 million per year and not thinking twice.
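
The arithmetic behind that write-off is easy to check. Here is a quick back-of-the-envelope verification, assuming “a few hundred dollars” averages roughly $300 per minor defect:

```python
# Sanity-check of the defect-cost figures quoted above.
# Assumption: "a few hundred dollars" is taken as ~$300 per minor defect.
defects_per_year = 20_000
cost_buckets = [
    (0.90,    300),   # 90% of defects: minor
    (0.09,  5_000),   # 9%: average $5,000 each
    (0.01, 50_000),   # 1%: severe, $50,000+ each
]
total = sum(share * defects_per_year * cost for share, cost in cost_buckets)
print(f"${total:,.0f}")  # $24,400,000 -- "upwards of $20 million per year"
```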

 

Conversely, if you look at the top 1% of that 1% of severe defects, this is where you see the massive breaches and glitches that sometimes end up in the press (like RBS, HSBC, and others). These outages cost companies an average of $600,000, according to the most recent KPMG Risk Radar, and very quickly catch the attention of senior leaders and CEOs.

 

At CAST, we help illuminate and prevent the 1% of catastrophic risks and some of the “hidden” costs of the 99%, showing CIOs how to get more from their IT departments. Left unchecked, these common issues can consume more than 20% of the application development and maintenance (ADM) budget and keep developers from focusing on delivering new, innovative value to the business.

 

The good news is that we have concrete data points and studies that show the correlation between production defects and software flaws. There are currently about 60 critical flaws documented by the Consortium for IT Software Quality that need to be addressed, so this is manageable for CIOs and IT departments. And the same set of flaws that reduce the risk of newsworthy incidents also lower the unseen cost of glitches.

 

As the Software Risk Summit indicated, it’s clear that some companies are leading the way forward by integrating IT innovation with strategic business outcomes. But many are still stuck trying to justify IT expenditures that don’t clearly correlate to growth. Organizations that link software risk performance with executive objectives will fare better than others.

 

If history tells us anything, it’s that it is only a matter of time before another cataclysmic glitch takes down core banking systems, exposes consumers to identity or credit card theft, and costs corporations millions of dollars. Establishing an effective software risk framework will pay dividends.

The Relationship Between Unit and System Level Issues

Bill Dickenson, Independent Consultant, Strategy On The Web

 

Dr. Richard Soley, Chairman and CEO of OMG, published a paper for CISQ titled How to Deliver Resilient, Secure, Efficient, and Easily Changed IT Systems in Line with CISQ Recommendations, which outlines the software quality standard for IT business applications. He classified software engineering best practices into two main categories:

  • Rules of good coding practice within a program at the Unit Level without the full Technology or System Level context in which the program operates, and
  • Rules of good architectural and design practice at the Technology or System level that take into consideration the broader architectural context within which a unit of code is integrated.

Correlating programming defects with production defects revealed something really interesting and, to some extent, counter-intuitive. Basic Unit Level errors account for 92% of the total errors in the source code. That’s a staggering number. It implies that coding at the individual program level is much weaker than expected, even with quality checks built into the IDE. However, these code-level issues eventually account for only 10% of the defects in production. There is no question that they drive up the cost of support and maintenance and decrease flexibility, but they translate into fewer production defects than might be expected. This also calls into question the effectiveness of development-level IDE checks at eliminating production defects.

 

On the other hand, bad software engineering practices at the Technology and System Levels account for only 8% of total defects, but they consume over half the effort spent on fixing problems and eventually lead to 90% of the serious reliability, security, and efficiency issues in production. This means that tracking and fixing bad programming practices at the Unit Level alone may not deliver the anticipated business impact, since many of the most devastating defects can only be detected at the Technology and System Levels.
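
Putting those two ratios side by side shows just how lopsided the picture is. The sketch below assumes the quoted percentages describe the same populations of code violations and production defects:

```python
# How likely is a violation to surface as a production defect,
# unit level vs. system level? (Percentages from the text above.)
unit_violations, unit_prod_defects = 0.92, 0.10  # 92% of errors -> 10% of defects
sys_violations, sys_prod_defects = 0.08, 0.90    # 8% of errors -> 90% of issues

unit_yield = unit_prod_defects / unit_violations  # ~0.11
sys_yield = sys_prod_defects / sys_violations     # 11.25

print(f"ratio: ~{sys_yield / unit_yield:.0f}x")   # roughly 100x
```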

 

When we review the information in the CRASH database, this is not wholly unexpected: many of the more serious defects remain undetected until the components interact.

CISQ Interviewed by SD Times – Dr. Bill Curtis (CISQ) and Dr. Richard Soley (OMG) Cited

Read About CISQ’s Mission, Standards Work, and Future Direction

 

Tracie Berardi, Program Manager, Consortium for IT Software Quality (CISQ)

 

Rob Marvin published an article in the January issue of SD Times that details the work of the Consortium for IT Software Quality (CISQ). Rob interviewed Dr. Richard Soley, CEO of the Object Management Group (OMG), and Dr. Bill Curtis, Executive Director of CISQ. The article sheds light on the state of software quality standards in the IT marketplace.

 

I can supplement what’s covered in the article for CISQ members.

 

CISQ was co-founded by the Object Management Group (OMG) and the Software Engineering Institute (SEI) at Carnegie Mellon University in 2009.

 

Says Richard Soley of OMG, “Both Paul Nielsen (CEO, Software Engineering Institute) and I were approached to try to solve the twin problems of software builders and buyers (the need for consistent, standardized quality metrics to compare providers and measure development team quality) and SI’s (the need for consistent, standardized quality metrics to lower the cost of providing quality numbers for delivered software). It was clear that while CMMI is important to understanding the software development process, it doesn’t provide feedback on the artifacts developed. Just as major manufacturers agree on specific processes with their supply chains, but also test parts as they enter the factory, software developers and acquirers should have consistent, standard metrics for software quality. It was natural for Paul and I to pull together the best people in the business to make that happen.”

 

Richard Soley reached out to Dr. Bill Curtis to take the reins at CISQ. Bill Curtis is well known in software quality circles, as he led the creation of the Capability Maturity Model (CMM) and the People CMM while at the Software Engineering Institute. Bill has published five books and over 150 articles, and he was elected a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for his career contributions to software process improvement and measurement. He is currently SVP and Chief Scientist at CAST Software.

 

“Industry and government badly need automated, standardized metrics of software size and quality that are objective and computed directly from source code,” he says.

 

Bill Curtis organized CISQ working groups to start work on specifications. The Automated Function Point (AFP) specification was led by David Herron of the David Consulting Group and became an officially supported standard of the OMG in 2013. Currently, Software Quality Measures for Security, Reliability, Performance Efficiency, and Maintainability are undergoing standardization by the OMG.

 

The SD Times article in which Dr. Curtis and Dr. Soley are cited – CISQ aims to ensure industry wide software quality standards – summarizes these specifications and their adoption. Please give it a read.

 

A media reprint of the article has been posted to the members’ area of the CISQ website.

 

You can also watch this video with Dr. Bill Curtis.

 

Later this year CISQ will start work on specs for Technical Debt and Quality-Adjusted Productivity.

 

The Aging IT Procurement Processes of the Pentagon

About two months ago, a blog article written for the NDIA exposed the difficulties the Defense Department faces in buying new IT systems. Pentagon acquisitions chief Frank Kendall was on the hot seat during an April 30th hearing. Senate Armed Services Committee Chairman Carl Levin, D-Mich., said that the track record for procurement has been “abysmal.” Sen. Claire McCaskill, D-Mo., angrily said, “You’re terrible at it, just terrible at it.”

 

Yet the Pentagon requested $30.3 billion for unclassified IT programs in fiscal year 2015 (a drop of $1 billion, or 3.3 percent, from fiscal 2014). So what are the issues? One of them is the complex approval process. “I think we’re imposing too much burden on people and we’re micromanaging,” said Kendall. “We have a tendency in the department, I think, to try to force the business systems that we acquire to do things the way we’ve historically done business.” And there is little incentive to change.

 

David Ahearn, a partner at BluestoneLogic, wrote in a blog post that “Military-specific IT systems acquisition — not to be confused with email platforms and other commodity IT platforms — needs to be a completely different approach than hardware platform acquisition.” Often, IT projects start off with a requirements “black hole,” where non-technical people can “dream up” anything they like, and contractors are motivated to pursue programs based on their complexity, as those programs create a longer support tail and a better bottom line for traditional defense contractors, Ahearn noted.

 

There needs to be cooperation between science fiction and reality. Half the battle is ensuring clearly stated requirements that can be used to optimally build the systems required to meet stated goals. Innovation is one thing, and pushing the envelope with pie-in-the-sky requirements could achieve some of it. But a line needs to be drawn where the path that developers and implementers are going down is stopped, rethought, and corrected before all the taxpayer money is spent.

 

By using standard quality metrics such as those created by CISQ, government can help define how to spot that line and ultimately ensure a hardened system in production for a lower Total Cost of Ownership (TCO).  Procurement practices for large, software intensive programs should be managed in this way.

 

One of the Pentagon’s most ambitious IT projects is to “create one interoperable medical record that will transition seamlessly from the Defense Department to the Department of Veterans Affairs as service members go from active duty to veteran status,” according to Lloyd McCoy, a market intelligence consultant with immixGroup. Hmm…does this remind you of healthcare.gov?

 

The referenced article can be found here.

CISQ Seminar – Software Quality in Federal Acquisitions

CISQ hosted its latest seminar at the Hyatt Reston Town Center in Reston, VA, USA. The topic for this installment was “Software Quality in Federal Acquisitions,” and the program included the following speakers:

 

  • David Herron, David Consulting Group
  • Robert Martin, Project Lead, Common Weakness Enumeration, MITRE Corp.
  • John Keane, Military Health Systems
  • Dr. Bill Curtis, Director, CISQ
  • John Weiler, CIO Interop. Clearinghouse
  • Joe Jarzombek, Director for Software & Supply Chain Assurance, DHS
  • Dr. William Nichols, Software Engineering Institute

 

Over 75 senior leaders from public and private sector organizations such as BASF, MITRE, the US Department of Defense, Northrop Grumman, the NSA, Fannie Mae, the US Army, and NIST were in attendance, listening to presentations, engaging in discussions, and networking with peers.

 

Dr. Curtis began the day by discussing the recent changes in the regulatory environment at the Federal level, especially as they relate to software risk prevention. Kevin Jackson (IT-AAC) stressed how innovation cannot be adopted if it cannot be measured.

 

Mr. Herron introduced the uses of productivity analysis and Function Points to manage portfolios more effectively and efficiently. He noted that a baseline provides a “stake in the ground” measure of performance, helps to identify opportunities for optimized development practices, and enables the organization to manage risks and establish reasonable service level measures. Mr. Herron also discussed how automation will change the game with respect to software sizing and Function Points, including tighter coupling with structural quality and improved vendor management.

 

Mr. Martin led a lively session on identifying and eliminating the causes of security breaches through the development of the Common Weakness Enumeration repository. He described best practices for using information in the repository to improve the security of software, noting that everything is based on software today and that any flaws in that software will be magnified in today’s highly connected universe. Different assessment methods are effective at finding different types of weaknesses; some are good at finding the cause, while others find the effect. So it makes sense to use several methods together.

 

Mr. Keane then spoke about the tools and processes his team uses to measure and manage structural quality on DoD contracts. He noted the importance of strong vendor contract language dictating the quality and performance standards required. Performing static code analysis correctly has great benefits, and Mr. Keane stated that static analysis prior to testing is very quick and about 85% efficient. His team measures code quality using technical debt, architectural standards, and architectural dependencies.

 

Mr. Jarzombek showed how security automation, software assurance, and supply chain risk management can enable enterprise resilience. He noted that there is increased risk from supply chains due to:

  • Increasing dependence on commercial ICT for mission critical systems
  • Increasing reliance on globally-sourced ICT hardware, software, and services
  • Residual risk, such as counterfeit products and products tainted with malware, passed to the end-user’s enterprise
  • Growing technological sophistication among adversaries

Mr. Jarzombek also noted that the ICT/software security risk landscape is a convergence between “defense in depth” and “defense in breadth.”

 

Dr. Nichols presented new research on the measurement of agile projects, noting that the agile community lacks hard data regarding which agile practices provide the best outcomes. He identified some trends and distributions but determined that there were no strong correlations between specific agile practices and measures. Great care must be taken when combining agile metrics because the variables often combine in ways that are not intuitively obvious and can easily become misleading when used in different contexts.

 

Dr. Curtis then concluded the event by talking about the importance of software productivity and quality measures in every kind of contract, and discussing the important work that CISQ is doing on creating specifications for automated software quality measures. He noted the need to curb technical debt in order to reduce the need for future rework, and made some recommendations for acquisition which include:

 

  • Setting structural quality objectives
  • Using a common vocabulary
  • Measuring quality at the system level
  • Evaluating contract deliverables
  • Using rewards and penalties wisely

 

A fantastic cocktail social followed the event, facilitating great networking between the speakers and attendees. We received plenty of positive feedback from attendees throughout the event, noting the wealth of valuable information disseminated through engaging presentations and Q&A.

 

Materials from the event are now posted in the Member Page under the “Event & Seminar Presentations” category. For more information regarding upcoming CISQ events, visit our Events page.