Measuring IT Project Performances in Texas: House Bill (HB) 3275 Implications

CISQ Advisory Board member Herb Krasner has released a position paper for Texas state CIOs and IT leaders seeking guidance on House Bill (HB) 3275, passed in June 2017, which requires the reporting of software quality measurements in Texas state IT projects. Krasner drafted the legislation, which was signed into law by Texas Governor Greg Abbott. Its directives go into effect on January 1, 2018.


The new law, HB 3275, is available on the CISQ website for review.


Abstract from the position paper, Measuring IT Project Performances in Texas: House Bill (HB) 3275 Implications:


“Texas’ usage of IT is big and getting bigger, but past project performances have a “checkered” history. In June 2017 HB 3275 became law in Texas. It requires state agencies to improve the measuring and monitoring of large IT projects to collect and report on performance indicators for schedule, cost, scope, and quality. If these indicators go out of bounds, more intense scrutiny is then triggered, potentially requiring corrective action. These indicators will be made visible to the public via an online, user-friendly dashboard, and will be summarized annually in a report to state leaders. This new law facilitates the early detection of troubled projects, and helps establish baselines for improvement initiatives. This position paper discusses the implications and challenges of implementing this new law for state and agency IT leadership.”


Professor Herb Krasner recently retired from the University of Texas at Austin, where he was the Director of Outreach Services for the UT Center for Advanced Research in Software Engineering (ARiSE) and founder and CTO of the UT Software Quality Institute (SQI). As a systems excellence consultant, his mission, spanning five decades, has been to enable the development of superior software-intensive systems and to stamp out poor-quality software wherever it is found. Mr. Krasner is active in Texas state legislature IT improvement initiatives. Full bio







How Outsourcing Can Mitigate Cyberrisks in DevOps


Dr. Erik Beulen, Principal, Amsterdam office; Dr. Walter W. Bohmayr, Senior Partner, Vienna office; Dr. Stefan A. Deutscher, Associate Director, Berlin office; and Alex Asen, Senior Knowledge Analyst, Boston office


DevOps agility requires organizational adjustments and additional tooling to ensure cybersecurity. At the same time, the challenges of the cybersecurity labor market drive the need to increase tooling’s impact and to consider outsourcing. In turn, these require carefully focusing on cybersecurity governance, including the assignment of accountability and responsibility.


In DevOps, the business is in the driver’s seat. DevOps characteristics (such as iterative prioritizing and deployment), plus the combined responsibility for development and operations, present cybersecurity risks. They also create opportunities: DevOps tools, infrastructure, processes, and procedures can be used to fully automate patch deployments and to continuously monitor, for example, open ports. Best practice is to automate information security platforms, at a minimum through programmable APIs and preferably with fully automated access control, and to combine containers and container orchestration with hypervisors or physical separation to limit the impact of an attack on the OS kernel layer.
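The continuous open-port monitoring mentioned above can be sketched in a few lines. This is a minimal illustration only: the allow-list policy and the idea of comparing scan results against it are assumptions for the example, not a specific product's behavior.

```python
import socket

def scan_open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Hypothetical policy: only SSH and HTTPS are allowed to listen on this host.
ALLOWED_PORTS = {22, 443}

def policy_violations(host, ports_to_check):
    """Open ports that are not on the allow list (candidates for alerting)."""
    return [p for p in scan_open_ports(host, ports_to_check)
            if p not in ALLOWED_PORTS]
```

In a real DevOps pipeline, a check like this would run on a schedule or on every deployment, feeding violations into the team's alerting tooling rather than simply returning a list.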


Market Developments


Our analysis of global startup activity in cybersecurity products reveals about 1,000 firms that represent more than $20 billion of investments. This explosion of competing cybersecurity products has driven enterprise reliance on best-of-breed solutions, which requires a lot of coordination and increases the risk of gaps in the cybersecurity landscape. Consolidation of cybersecurity product portfolios through mergers and acquisitions will still take some time—about three to five years. In the enterprise segment, we have to accept best-of-breed solutions and the associated increased complexity for the years to come.


Meanwhile, the service market is also evolving but remains scattered. Managed security service providers (MSSPs) provide end-to-end protection, stabilize infrastructure, optimize IT operations, and respond rapidly to security breaches. On one hand, MSSPs can be used to scale up required capabilities, reduce complexity, and innovate to achieve cyberresilience. On the other hand, the service market is not yet mature, so prior to contracting with an MSSP, companies should rigorously assess a solution’s robustness and vision. Companies should also determine the number and seniority level of the cybersecurity experts at an MSSP.




Accountability for cyberresilience can never be outsourced. Organizations need to build a cybersecurity competence center that oversees the design and maintenance of strategy and requirements, assesses cybersecurity compliance, and evangelizes cybersecurity. (See Exhibit 1.) This competence center manages the business demands. It also directs in-house cybersecurity and MSSPs’ strategy and policies, including standards, frameworks, certification, risk tolerance levels, and attack procedures. The number of MSSPs a company should engage depends on the size of the organization, its cybersecurity requirements, and its capability to manage suppliers. Organizations rarely engage more than three MSSPs, in order to avoid coordination challenges and ensure unambiguous responsibilities.


Exhibit 1: Cybersecurity Competence Center Responsibilities





Responsibilities for cyberresilience have to be embedded from the board level down to each DevOps team. This is not straightforward; it requires a constant, intense dialogue embedded in governance structures and involving all stakeholders. At the application level, product owners and scrum masters have to ensure cybersecurity is respected and embraced by the DevOps teams (“cybersecurity by design”). This doesn’t mean developers must become security experts. Rather, product owners must assign dedicated security experts to each DevOps team. This will not be a full-time role, and security experts can be allocated to multiple DevOps teams. However, cybersecurity remains a team responsibility. Scrum masters have to explicitly address cybersecurity in each step of the DevOps lifecycle. This starts with creating cybersecurity awareness by training developers using gamification (such as Microsoft’s Elevation of Privilege (EoP) game[1]). Furthermore, continuously monitoring and measuring cybersecurity performance (service levels) is important. The end goal is to champion cybersecurity by deploying and maintaining software in accordance with the set risk tolerance levels and applicable security standards.




Ensure cybersecurity in DevOps by taking these steps: empower your product owners and scrum masters, build a competence center, partner with no more than three MSSPs, use automation, and, of course, make cybersecurity a business agenda item. Also follow the World Economic Forum working group,[2] which kicked off its cyberresilience work with a brainstorming session.








Survey on Time-to-Fix Technical Debt

CISQ is working on a standard measure of technical debt. Technical debt is a measure of software cost, effort, and risk due to defects remaining in code at release. Like financial debt, technical debt incurs interest over time in the form of extra effort and cost to maintain the software. Technical debt also represents the level of risk to which the business is exposed through increased cost of ownership.


Completing the measure requires estimates of the time required to fix software weaknesses included in the definition of Technical Debt.
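In principle, the measure works like this minimal sketch. The weakness types, fix times, and labor rate below are hypothetical placeholders for exactly the figures the survey is meant to collect:

```python
# Illustrative only: every number here is an assumed placeholder.
HOURLY_RATE = 75.0  # assumed blended developer cost, USD per hour

TIME_TO_FIX_HOURS = {          # assumed average hours to fix one occurrence
    "sql_injection": 4.0,
    "unreleased_resource": 1.5,
    "empty_exception_handler": 0.5,
}

def technical_debt_principal(detected):
    """detected: mapping of weakness type -> number of occurrences found.

    Returns the estimated cost (USD) of fixing every detected weakness,
    i.e. the 'principal' of the technical debt.
    """
    return sum(TIME_TO_FIX_HOURS[w] * count * HOURLY_RATE
               for w, count in detected.items())

print(technical_debt_principal({"sql_injection": 3, "unreleased_resource": 10}))
# 3*4.0*75 + 10*1.5*75 = 2025.0
```

The realism of the result hinges entirely on the time-to-fix estimates, which is why practitioner input on those figures matters.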


Please take our Technical Debt Survey


The survey is a PDF form that is posted to the CISQ website. To take the survey:

  • Download the PDF form
  • Fill in your responses
  • Press the “send survey” button on the last page of the survey
  • Alternatively, you can save the PDF file to your desktop and email it directly to:


As a “thank you” for your time, we are giving away $20 Amazon gift cards to the first 50 respondents.


To download the survey (PDF):


Thank you for contributing to this initiative.


For any questions:


Tracie Berardi
Program Manager
Consortium for IT Software Quality (CISQ)
781-444-1132 x149



Takeaways from the 2016 Software Risk Summit

Tracie Berardi, CISQ Program Manager


Software risk has historically been overlooked as a security concern by business leaders, and companies have paid a high price as a result. Remember the debacle earlier this year when HSBC services went down, leaving customers unable to access their online banking? That was during the peak of tax season, causing a flurry on social media and deeply damaging the company’s reputation with customers.


Companies have also had to pay out large sums to compensate their customers. RBS paid £231 million for its IT failures a few years ago, and the Target breach cost the retailer $152 million in addition to chief executive turnover. Most recently, hackers took over the controls of a Jeep, and a similar incident with Toyota-Lexus left the manufacturer fixing a software bug that disabled cars’ GPS and climate control systems.


Poor structural quality of IT systems and software risk are not just IT issues. They are big problems that can lead to lost revenue and a decline in consumer confidence. So I was thrilled to learn that the topic for the annual Software Risk Summit in New York was exactly that: software risk.


Panel guests from BNY Mellon, the Software Engineering Institute at Carnegie Mellon, the Boston Consulting Group, and CAST shared interesting “real world” insights. But beforehand, I was able to sit in on the keynote by Rana Foroohar, a regular commentator on CNN and a global economics analyst for TIME Magazine, among other outlets.


Rana made a very important connection between America’s post-recession recovery and the role software risk will play in companies’ ability to create real, sustainable growth. According to Rana and her book Makers & Takers, we are entering a period of volatility with lower long-term growth, an unstable U.S. election cycle and a growing wealth divide. Because of this, the private sector is going to take on a bigger role in turning technology and infrastructure into tangible value that will carry the country through a period of “public sector slump.”


She shared an interesting statistic, noting that pre-2008, companies and consumers held the majority of the country’s debt. Now that paradigm has shifted, with consumers and corporations becoming more debt-averse, leaving the U.S. government to carry the vast majority of our debt burden. In this coming era of increased dependence on the private sector to create and sustain a thriving economy, it is more important than ever for business executives to take software risk seriously, take stock of their technology investments and prepare for future waves of innovation.


Following Rana’s inspiring keynote, the panel discussion dove head-first into the tactical application of software risk mitigation. Here is a brief summary of the interactive Q&A:


Why is Software Risk a Problem?


Benjamin Rehberg, Managing Director, BCG: The biggest responsibility lies at the CEO and board level. Many leaders may realize they’re becoming a technology company, but they’re not quite sure what to do about it. Most CEOs want to focus on boosting revenue, but they fail to recognize technology as a strategic enabler of the business.


Early technology was originally used to run internal systems, so the incentive for developers to write resilient code was very low. Only 20 years ago with initial exposure to the Internet did we start to see the need to worry about risk in systems that are directly end-customer facing. So there’s still a lot of digital risk buried in millions of lines of code.


However, with the increased publicity of big software glitches, there is more pressure to keep the business running and customers satisfied. For example, board members and CEOs are starting to think about what will happen to them if big security issues and breaches continue to plague their companies. Their company performance and jobs are at stake, so personal incentives are becoming more important and are starting to drive change.


Kevin Fedigan, Head of Asset Servicing and Broker Dealer Services, BNY Mellon: Leadership must take a progressive attitude toward risk and treat it as a core organizational value. For example, BNY measures three levels of risk: 1) general employees, 2) traditional compliance roles, 3) internal and external auditing. The financial services industry, in particular, has a reputation to uphold. We need to ensure customer trust in our systems.


Dr. Paul Nielsen, CEO of SEI: Some CEOs are uncomfortable with risk, so they delegate it to their CIO. But even then, they can’t rid themselves of the responsibility. This creates more of a stigma around risk and fosters an environment where it can grow and lead to bigger problems down the line. It’s interesting to see us all rushing to the Internet of Things, but most of the technology supporting this shift was designed with code written before the Internet. We clearly still have some catching up to do.


Vincent Delaroche, CEO of CAST: This may seem like a paradox because there is such high demand for security, but the root causes of many software catastrophes are actually resiliency and efficiency issues, not security flaws. Security gets the glitz and the glory, but the press sometimes misses the true root cause of many software issues, thereby misleading executives to seek out security tools rather than solutions that help with resiliency and efficiency. I believe we are reaching a tipping point where there will be a spike in demand by the Fortune 500 to assess their real risk exposure.


What does culture have to do with software risk? Do we have a communication issue? What is IT not doing to get the business and the board’s attention?


BNY Mellon: We make the business own the risk, so risk is not removed from business outcomes. For example, high-priority risk items must be remediated within a 30-day window. Our CEOs report to the Chief Risk Officer to ensure we aren’t letting risk and security fall by the wayside. We’re doing the best we can to remove those communication barriers and increase transparency between operations and the business.


SEI: There are too many risks to address them all, so you have to figure out what really matters. By setting benchmarks, it’s easier to measure your investment and prove ROI. There is a set of specific risk issues that have been identified as important by the Consortium for IT Software Quality (CISQ), an organization we’re involved in, along with a standard measurement framework.


BCG: Financial risk is a big topic for many of our clients. Financial institutions are constantly being hammered by regulators to comply, but they have such a broad range of technologies to manage. Because of this, we’re seeing that most outages are actually due to the “plumbing in between” various components of core systems and business processes. Very few technologists are actually looking at the transaction level between the technologies, and that’s a big place we see clients get messed up.


What is the correlation between costs and risk?


SEI: Effective leaders will balance cost with productivity. It’s important to determine what is vitally important to your business and make sure those systems don’t degrade, but you must also prioritize where the investment goes. Many leaders still don’t understand where the risk comes from. The industry would benefit from a “genetic testing of software.”


BCG: The earlier you catch risks, the less it will cost to repair. What we’ve seen work well is creating a culture of incentives for high quality code. There’s a big push for IT organizations to become more agile and set up code peer reviews to help create more robust software.


CAST: Let’s say a development team for a large enterprise has about 1,000 engineers. Each year, an IT department this size will have to deal with about 20,000 software defects in production. When we look at these defects, we typically see that 90% cost very little, maybe a few hundred dollars. 9% of defects cost the business an average of $5,000, and only 1% of defects are severe enough to cost the business upwards of $50,000. So, individually, these errors are small. But when we look at them together, we see enterprise CIOs writing off upwards of $20 million per year and not thinking twice.
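The arithmetic behind these figures can be checked directly. The only assumption added here is reading “a few hundred dollars” as an average of $300 for the 90% bucket:

```python
defects_per_year = 20_000  # annual production defects for a ~1,000-engineer shop

# (number of defects in the bucket, assumed average cost per defect in USD)
cost_buckets = [
    (int(defects_per_year * 0.90), 300),     # 90%: "a few hundred dollars" (assume $300)
    (int(defects_per_year * 0.09), 5_000),   # 9%: average $5,000
    (int(defects_per_year * 0.01), 50_000),  # 1%: upwards of $50,000
]

annual_write_off = sum(count * cost for count, cost in cost_buckets)
print(f"${annual_write_off:,}")  # $24,400,000 -- consistent with "upwards of $20 million"
```

Most of the total comes from the rare, expensive defects: the 1% bucket alone contributes $10 million, which is why the severe tail dominates the write-off.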


Conversely, if you look at the top 1% of that 1% of severe defects, this is where you see the massive breaches and glitches that sometimes end up in the press (like RBS, HSBC, and others). These outages can cost companies an average of $600,000, according to the most recent KPMG Risk Radar, and very quickly catch the attention of senior leaders and CEOs.


At CAST, we help illuminate and prevent the 1% catastrophic risks and some of the “hidden” costs of the 99%, showing CIOs how to get more from their IT departments. Left unchecked, these common issues can consume more than 20% of the application development and maintenance (ADM) budget and keep developers from focusing on delivering new, innovative value to the business.


The good news is that we have concrete data points and studies that show the correlation between product defects and software flaws. There are currently about 60 critical flaws documented by the Consortium for IT Software Quality that need to be addressed, so this is manageable for CIOs and IT departments. And, the same set of flaws that reduce the risk of newsworthy incidents also lower the unseen cost of glitches.


As the Software Risk event would indicate, it’s clear that some companies are leading the way forward by integrating IT innovation with strategic business outcomes. But many are still stuck trying to justify IT expenditures that don’t necessarily correlate to growth. Organizations that link software risk performance with executive objectives will fare better than others.


If history tells us anything, it’s that it is only a matter of time before another cataclysmic glitch takes down core banks, exposes consumers to identity or credit card theft, and costs corporations millions of dollars. Establishing an effective software risk framework will pay dividends.

Software Risk Management

By David Gelperin, CTO, ClearSpecs Enterprises


40-60% of larger projects fail. Fewer smaller projects fail. Therefore, do smaller projects.


It’s safer to do projects you have done successfully before, e.g., build another ecommerce website. Therefore, repeat successful projects.


If you must do something larger and unfamiliar, identify its hazards and how you plan to mitigate them.


Functions are the goals that customers care about and focus on. Developers are told to focus on customer value. Qualities like security, privacy, reliability, and robustness are goals that customers rarely think about. 


Functions are easy. Qualities are hard. When system failures make the news, e.g., security breaches, it is rarely because of a functional failure. Qualities are commonly missing from software estimates and inadequately supported in operational software. 


Quality may be free, but qualities need investment. Providing a quality is nothing like providing a function. Qualities are dangerous because they are unfamiliar and out of focus.


Current Agile development ignores qualities or treats them like functions. Qualities are incompatible with iterative development. Therefore, current Agile development is dangerous when used on larger and unfamiliar projects.


There is a hybrid Agile process that retains the power of Agile, but mitigates its quality risk.