IT Modernization Best Practices Repository

The IT Modernization Best Practices Repository wiki was created for the Cyber Resilience Summit series. Here you will find meeting notes, presentations, policy updates, press coverage and more.

The IT Modernization Best Practices Repository is managed by

UPCOMING MEETING

Cyber Resilience Summit: The Crossroads of IT Modernization & Cybersecurity

October 16, 2018 at the Army Navy Country Club in Arlington, VA, USA

Registration is now open! Admission is complimentary for government employees, elected officials, not-for-profit standards-developing organizations, and universities; admission for industry attendees is $250.

MEETING NOTES

Download meeting notes from the March 20, 2018 Cyber Resilience Summit

Download meeting notes from the October 19, 2017 Cyber Resilience Summit

PRESENTATIONS

Standards for Managing Cybersecurity, Risk and Technical Debt
Dr. Bill Curtis, Executive Director, Consortium for IT Software Quality (CISQ)
Cyber Resilience Summit, March 20, 2018

Using Software Quality Standards with Outsourced IT Vendors – a Fortune 100 Case Study
Marc Cohen, Vendor Management practitioner at Fortune 100 institution
Cyber Resilience Summit, March 20, 2018

Security Risk Management
Adam Isles, Principal, Chertoff Group
Cyber Resilience Summit, March 20, 2018

Bugcrowd – The Pentagon Opened Up to Hackers and Fixed Thousands of Bugs
Michael Chung, Head of Government Solutions, Bugcrowd
Cyber Resilience Summit, March 20, 2018

Risk Management Standards in Practice
Robert Martin, Senior Principal Engineer, MITRE
Cyber Resilience Summit, March 20, 2018

Getting IT Quality Standards into Practice – Confessions of a Texas IT Champion
Herb Krasner, University of Texas at Austin (ret.), Texas IT Champion
Cyber Resilience Summit, March 20, 2018

UL 2900 Security Standards
Jeff Barksdale, Principal Security Advisor, Underwriters Laboratories (UL)
Cyber Resilience Summit, March 20, 2018

Roadmap for IT Modernization and Cyber Resilience
John Weiler, Vice Chair, IT Acquisition Advisory Council (IT-AAC)
Cyber Resilience Summit, October 19, 2017

Supply Chain Risk Management (SCRM) for Continuous Diagnostics and Mitigation (CDM) Products

Emile Monette, Senior Cybersecurity Strategist and Acquisition Advisor, DHS OCISO

Cyber Resilience Summit, October 19, 2017

PRESS COVERAGE

Resource-strapped agencies are leaving networks vulnerable to cyberattack
Jessie Bur, Federal Times, March 21, 2018

Tony Scott calls IT workforce drain a “creeping” crisis bigger than Y2K
Carten Cordell, FedScoop, October 20, 2017

Report: DHS Tests Cyber Tech Acquisition Management Model
Nichols Martin, ExecutiveGov, October 20, 2017

DHS piloting agile cyber acquisition, CDM for cloud, CISO says
Carten Cordell, FedScoop, October 19, 2017

DHS to Stand Up CDM Cloud Services for Small Agencies
Morgan Lynch, MeriTalk, October 19, 2017

Learn to Deal With Cyber Risk
Morgan Lynch, MeriTalk, October 19, 2017

POLICY

GSA is weighing “multiple initiatives” for the next wave of IT Modernization Centers of Excellence (CoE) projects in 2019, reports FedScoop. The CoE program, announced in December 2017, is built on five teams of IT talent specializing in cloud adoption, IT infrastructure optimization, customer experience, contact center services and service delivery analytics. Those teams are paired with contractors, as well as personnel at target agencies, to carry out IT modernization projects based on their skill sets. They kicked off work in April. The USDA was selected to be the “lighthouse” agency for the rollout of all five CoE teams.

The Technology Modernization Fund (TMF), which supports the transformation of agency IT to improve mission execution and delivery of services to the American public, has awarded funding for three projects (see https://tmf.cio.gov/projects/ for more information). The TMF website, https://tmf.cio.gov/, has launched and will carry project updates.

The White House Office of Management and Budget published the Federal Cybersecurity Risk Determination Report and Action Plan on May 20, 2018, in accordance with Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, and OMB Memorandum M-17-25, Reporting Guidance for Executive Order on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure.

The President’s Management Agenda was released on March 20, 2018 and focuses on three drivers: IT modernization; a modern workforce; and data transparency and accountability. “A key part of the President’s Management Agenda is establishing cross-agency priority goals, or what we call CAP goals, to complement the broad vision and get into execution and on-the-ground tactics,” says Office of Management and Budget Deputy Director for Management Margaret Weichert. “Each CAP goal will be led by an interagency team of senior federal leaders.” Read more on Federal Times. Says the White House, “Because accountability is an important part of the PMA, CAP goal results will be tracked publicly each quarter online at www.performance.gov/PMA.”

OMB’s user guide to the MGT Act – February 6, 2018 on FCW

The Office of Management and Budget is working on a rules-of-the-road document to cover how agencies can seek funds under the Modernizing Government Technology Act. In a 19-page draft memorandum to agency heads obtained by FCW, OMB lays out what information agencies should include in their project proposals to receive money from the centralized modernization fund, housed by the General Services Administration, as well as how to navigate using their IT working capital funds.

Gen. Burke “Ed” Wilson was appointed to lead cyber policy within the Office of the Secretary of Defense (OSD). Read the announcement published January 29, 2018 on www.defense.gov.

Suzette Kent, a principal at Ernst & Young, was appointed the new Federal CIO by President Donald Trump. Read “Trump picks federal CIO” (FCW, January 26, 2018).

The final White House IT Modernization Plan was delivered to President Trump in December 2017, outlining plans to accelerate the modernization of legacy systems. See https://itmodernization.cio.gov/.

The IT-AAC Federal IT Modernization Report, signed September 20, 2017, was submitted to the White House American Technology Council (ATC) in response to Executive Order 13800.

The IT-AAC Recommendations for Embracing Commercial Cloud in DoD, signed November 17, 2017, were submitted to the DoD Cloud Executive Steering Group.

CYBER RESILIENCE STANDARDS

Consortium for IT Software Quality (CISQ) www.it-cisq.org/standards

Also see related standards and guidelines including NIST, ISO, CMM, etc.

WEBINARS

New Automated Technical Debt Standard

The CISQ measure of Automated Technical Debt has just been approved by the OMG® as a standard for measuring the future cost of defects remaining in system source code at release. Technical Debt hinders innovation and puts businesses at unacceptable levels of risk, including high IT maintenance costs, outages, breaches, and lost business opportunities. Dr. Bill Curtis, CISQ Executive Director, delivers an overview of the specification.

Using Software Quality Standards with Outsourced IT Vendor Engagements – a Fortune 100 Case Study

Marc Cohen led IT vendor management at American Express and discusses how to use software quality standards from CISQ in outsourcing engagements. He explains how to derive better software, better development resources, and better vendor relationships by leveraging software quality standards.

Using Software Quality Standards at Scale in Agile and DevOps Environments

Over the past two years, Fannie Mae IT has transformed from a waterfall organization to a lean culture enabled by Agile and DevOps. Barry Snyder, DevOps Product Manager at Fannie Mae, discusses how to use software measurement standards from CISQ to demonstrate significant improvements in code quality and development productivity. Executive management monitors the organization’s Agile-DevOps transformation by reviewing quality, productivity, and delivery speed.

IT ACQUISITION ADVISORY COUNCIL (IT-AAC) DOCUMENTS

DoD’s acquisition and sustainment chief, Ellen Lord, shares path forward for new office, envisioning an agile acquisition framework, reports Federal News Radio on May 25, 2018.

ADDITIONAL RESOURCES

A Useful Point of Reference for Critical Infrastructure Resilience
Don O’Neill, Independent Consultant

Presentations from OMG® Modernization Summit, March 21, 2018 in Reston, VA

PHOTOS

View more photos from the Cyber Resilience Summit here

CISQ Automated Source Code Green Measure

Problem statement

IT operations run on electricity.

kWh production leads to CO2 emissions.

Inefficiency in IT operations wastes energy, simply because unnecessary CPU cycles are equivalent to unnecessary kWh consumption.

The efficiency of IT operations is largely conditioned by the way the software was developed.

People have grown used to ever-expanding computing resources, overlooking the environmental impact of the energy consumed, resulting in software that is far from optimal.

In addition to suboptimal software development that amounts to “pipe leaks”, there are also “pipe ruptures” that can be avoided, saving the resources needed to recover/restart/resume the activity.

Energy can be saved now by making software more efficient.

The urgency of this initiative comes from the spread of software across billions of devices. Every small gain can make a difference.

Opportunity

To identify pieces of software that could be optimized to require fewer CPU resources
  • Focus on “pipe leaks”
    • data access efficiency
    • algorithmic costs
    • resource economy
  • Focus on “pipe ruptures” – avoiding failures

Using selected patterns from:

  • Automated Source Code Performance Efficiency Measure (http://www.omg.org/spec/ASCPEM/)
  • Automated Source Code Reliability Measure (http://www.omg.org/spec/ASCRM/)
  • Automated Source Code Security Measure (http://www.omg.org/spec/ASCSM/)

Objectives

  • Perform the selection of the applicable patterns
  • Validate the coverage of salient aspects
    • Or identify the “uncovered” ones and specify applicable patterns

Limitations

  • No direct kWh measure
  • No direct CO2 equivalent

Development

OMG Measure | In ASCGM?
ASCMM-MNT-1: Control Flow Transfer Control Element outside Switch Block |
ASCMM-MNT-2: Class Element Excessive Inheritance of Class Elements with Concrete Implementation |
ASCMM-MNT-3: Storable and Member Data Element Initialization with Hard-Coded Literals |
ASCMM-MNT-4: Callable and Method Control Element Number of Outward Calls |
ASCMM-MNT-5: Loop Value Update within the Loop |
ASCMM-MNT-6: Commented Code Element Excessive Volume |
ASCMM-MNT-7: Inter-Module Dependency Cycles |
ASCMM-MNT-8: Source Element Excessive Size |
ASCMM-MNT-9: Horizontal Layer Excessive Number |
ASCMM-MNT-10: Named Callable and Method Control Element Multi-Layer Span |
ASCMM-MNT-11: Callable and Method Control Element Excessive Cyclomatic Complexity Value |
ASCMM-MNT-12: Named Callable and Method Control Element with Layer-skipping Call |
ASCMM-MNT-13: Callable and Method Control Element Excessive Number of Parameters |
ASCMM-MNT-14: Callable and Method Control Element Excessive Number of Control Elements involving Data Element from Data Manager or File Resource |
ASCMM-MNT-15: Public Member Element |
ASCMM-MNT-16: Method Control Element Usage of Member Element from other Class Element |
ASCMM-MNT-17: Class Element Excessive Inheritance Level |
ASCMM-MNT-18: Class Element Excessive Number of Children |
ASCMM-MNT-19: Named Callable and Method Control Element Excessive Similarity |
ASCMM-MNT-20: Unreachable Named Callable or Method Control Element |
ASCPEM-PRF-1: Static Block Element containing Class Instance Creation Control Element |
ASCPEM-PRF-2: Immutable Storable and Member Data Element Creation | TRUE
ASCPEM-PRF-3: Static Member Data Element outside of a Singleton Class Element |
ASCPEM-PRF-4: Data Resource Read and Write Access Excessive Complexity | TRUE
ASCPEM-PRF-5: Data Resource Read Access Unsupported by Index Element | TRUE
ASCPEM-PRF-6: Large Data Resource ColumnSet Excessive Number of Index Elements | ?
ASCPEM-PRF-7: Large Data Resource ColumnSet with Index Element of Excessive Size | ?
ASCPEM-PRF-8: Control Elements Requiring Significant Resource Element within Control Flow Loop Block | TRUE
ASCPEM-PRF-9: Non-Stored SQL Callable Control Element with Excessive Number of Data Resource Access | ?
ASCPEM-PRF-10: Non-SQL Named Callable and Method Control Element with Excessive Number of Data Resource Access | ?
ASCPEM-PRF-11: Data Access Control Element from Outside Designated Data Manager Component | TRUE
ASCPEM-PRF-12: Storable and Member Data Element Excessive Number of Aggregated Storable and Member Data Elements | ?
ASCPEM-PRF-13: Data Resource Access not using Connection Pooling capability | TRUE
ASCPEM-PRF-14: Storable and Member Data Element Memory Allocation Missing De-Allocation Control Element | ?
ASCPEM-PRF-15: Storable and Member Data Element Reference Missing De-Referencing Control Element | ?
ASCRM-CWE-120: Buffer Copy without Checking Size of Input | TRUE
ASCRM-CWE-252-data: Unchecked Return Parameter Value of named Callable and Method Control Element with Read, Write, and Manage Access to Data Resource | TRUE
ASCRM-CWE-252-resource: Unchecked Return Parameter Value of named Callable and Method Control Element with Read, Write, and Manage Access to Platform Resource | TRUE
ASCRM-CWE-396: Declaration of Catch for Generic Exception | ?
ASCRM-CWE-397: Declaration of Throws for Generic Exception | ?
ASCRM-CWE-456: Storable and Member Data Element Missing Initialization | TRUE
ASCRM-CWE-674: Uncontrolled Recursion |
ASCRM-CWE-704: Incorrect Type Conversion or Cast | TRUE
ASCRM-CWE-772: Missing Release of Resource after Effective Lifetime |
ASCRM-CWE-788: Memory Location Access After End of Buffer | TRUE
ASCRM-RLB-1: Empty Exception Block | ?
ASCRM-RLB-2: Serializable Storable Data Element without Serialization Control Element | FALSE
ASCRM-RLB-3: Serializable Storable Data Element with non-Serializable Item Elements | FALSE
ASCRM-RLB-4: Persistent Storable Data Element without Proper Comparison Control Element | TRUE
ASCRM-RLB-5: Runtime Resource Management Control Element in a Component Built to Run on Application Servers |
ASCRM-RLB-6: Storable or Member Data Element containing Pointer Item Element without Proper Copy Control Element |
ASCRM-RLB-7: Class Instance Self Destruction Control Element |
ASCRM-RLB-8: Named Callable and Method Control Elements with Variadic Parameter Element |
ASCRM-RLB-9: Float Type Storable and Member Data Element Comparison with Equality Operator | TRUE
ASCRM-RLB-10: Data Access Control Element from Outside Designated Data Manager Component |
ASCRM-RLB-11: Named Callable and Method Control Element in Multi-Thread Context with non-Final Static Storable or Member Element |
ASCRM-RLB-12: Singleton Class Instance Creation without Proper Lock Element Management | ?
ASCRM-RLB-13: Inter-Module Dependency Cycles |
ASCRM-RLB-14: Parent Class Element with References to Child Class Element |
ASCRM-RLB-15: Class Element with Virtual Method Element without Virtual Destructor |
ASCRM-RLB-16: Parent Class Element without Virtual Destructor Method Element |
ASCRM-RLB-17: Child Class Element without Virtual Destructor unlike its Parent Class Element |
ASCRM-RLB-18: Storable and Member Data Element Initialization with Hard-Coded Network Resource Configuration Data |
ASCRM-RLB-19: Synchronous Call Time-Out Absence |
ASCSM-CWE-22: Path Traversal Improper Input Neutralization |
ASCSM-CWE-78: OS Command Injection Improper Input Neutralization |
ASCSM-CWE-79: Cross-site Scripting Improper Input Neutralization |
ASCSM-CWE-89: SQL Injection Improper Input Neutralization |
ASCSM-CWE-99: Name or Reference Resolution Improper Input Neutralization |
ASCSM-CWE-120: Buffer Copy without Checking Size of Input |
ASCSM-CWE-129: Array Index Improper Input Neutralization |
ASCSM-CWE-134: Format String Improper Input Neutralization |
ASCSM-CWE-252-resource: Unchecked Return Parameter Value of named Callable and Method Control Element with Read, Write, and Manage Access to Platform Resource |
ASCSM-CWE-327: Broken or Risky Cryptographic Algorithm Usage |
ASCSM-CWE-396: Declaration of Catch for Generic Exception |
ASCSM-CWE-397: Declaration of Throws for Generic Exception |
ASCSM-CWE-434: File Upload Improper Input Neutralization |
ASCSM-CWE-456: Storable and Member Data Element Missing Initialization |
ASCSM-CWE-606: Unchecked Input for Loop Condition |
ASCSM-CWE-667: Shared Resource Improper Locking |
ASCSM-CWE-672: Expired or Released Resource Usage |
ASCSM-CWE-681: Numeric Types Incorrect Conversion |
ASCSM-CWE-772: Missing Release of Resource after Effective Lifetime |
ASCSM-CWE-789: Uncontrolled Memory Allocation |
ASCSM-CWE-798: Hard-Coded Credentials Usage for Remote Authentication |
ASCSM-CWE-835: Loop with Unreachable Exit Condition (Infinite Loop) |

CISQ Automated Technical Debt Measure – May 26 workshop – Proposition description

(The following is the description of the proposition presented during the May 26 workshop.)

Proposed principles for Automated Technical Debt Measure (ATDM) specifications:

  1. Automated Technical Debt items whose removal / remediation cost is to be quantified and qualified are the occurrences of patterns defined in all four Automated Security / Reliability / Performance Efficiency / Maintainability Measure specifications
    1. It positions itself alongside a Project Technical Debt Measure or a Contextual Technical Debt Measure, which would follow the same computation principles except for the selection of patterns to consider: that selection would be project- or organization-specific to account for the specifics of the measured software, …, thus delivering values better adapted to the context but preventing their use for benchmarking outside the project or organization scope
  2. Technical Debt Risk items quantification (normative)
    1. considers the remediation cost as follows (aligned with the Agile Alliance proposition):
      1. occurrence removal from the source code (a.k.a. coding), 
      2. unit testing creation or adaptation, 
      3. non regression testing adaptation
    2. i.e., the following costs are not taken into account:
      1. integration testing creation or adaptation
      2. system testing creation or adaptation
      3. issue tracking management: ticket handling
      4. source code management considerations: check-in, check-out of necessary source code elements
    3. results in 5 measures: 
      1. the main measurement: Automated Technical Debt Measure (ATDM)
      2. the focused measurements:
        1. Automated Security Debt Measure (ASDM)
          or Automated Security Remediation Effort Measure (ASREM)
          to avoid spoiling the “Debt” wording 
        2. Automated Reliability Debt Measure (ARDM)
          or Automated Reliability Remediation Effort Measure (ARREM)
        3. Automated Performance Efficiency Debt Measure (APEDM)
          or Automated Performance Efficiency Remediation Effort Measure (APEREM)
        4. Automated Maintainability Debt Measure (AMDM)
          or Automated Maintainability Remediation Effort Measure (AMREM)
  3. Technical Debt items qualification (normative)
    • is composed of the following pieces of information:
      1. system-level exposure of the occurrence of the pattern to the rest of the software, to assess operational risk of not fixing it as well as the destabilization risk of fixing it, measured as
        1. the number of distinct call paths through a software to the targeted code elements (that is, McCabe Cyclomatic Complexity applied to call paths instead of Control Flow)
        2. the number of distinct direct callers (that is, Fan-In)
      2. concentration of occurrences of any Automated Security / Reliability / Performance Efficiency / Maintainability Measure patterns in the same code element(s)
        1. same pattern
        2. different patterns
      3. evolution status of both 
        1. the occurrences 
        2. and the code elements supporting the occurrences
      4. technological diversity of occurrences, measured as the number of involved technologies
    1. results in intelligence about the Structural Debt designed to
      1. better understand the risk associated with Technical Debt items remediation (or non remediation for that matter)
        • e.g.: expect large overheads in integration and system testing activities due to the exposure of the occurrence
      2. help setting priorities of Technical Debt items remediation
        • e.g.: focus on high exposure Technical Debt items to maximize the RoI of their repayment
        • e.g.: focus on low-exposure Technical Debt items to minimize destabilization risks
        • e.g.: focus on added occurrences
        • e.g.: focus on high concentration Technical Debt items to consider replacement / rerouting / reengineering / …
      3. support more accurate predictive models (Cf. usage scenarios (informative))
        • e.g.: penalty factor for multi-technology fixes
        • e.g.: penalty factor for high impact fixes
        • e.g.: reward factor for concentration
        • e.g.: reward factor for status 
    2. is required by the fact that
      1. organizations are often at a loss when facing large amounts of Technical Debt
      2. providing the removal cost alone is misleading, as some situations lead to sharply higher costs due to the context of the occurrences to remove in the software
  4. Usage scenarios (informative):
    1. Collection/estimation of application to support analytics
      1. demographics
      2. associated risk in case of failure (e.g.: loss of trading time), poor response time (e.g.: decrease in trading capability and partial loss of trading activity), security breach (e.g.: theft of trading data), …
      3. functional size measurement (e.g.: AFP)
    2. Deliver more accurate predictive models thanks to envelope factors / contextual modifiers
      1. to account for the fact that:
        1. concentration of occurrences in the same code elements can be cheaper to fix (reengineering, rerouting, …)
        2. highly exposed code elements carrying occurrences can be more expensive to fix (destabilization risk, …)
        3. multi-technology occurrences can be more expensive to fix (multi-team coordination, …)
      2. delivered as
        1. Option #1: a range defined by
          1. a low value to count one remediation cost per code element carrying the occurrences, as opposed to count one remediation cost per individual occurrence of the violation
          2. a high value to factor in exposure (with a logarithmic transformation, due to the combinatoric nature of the number of distinct call paths through a software, to get human-friendly, benchmarkable values; this leads, for instance, to considering that a propagating fix that nominally costs 15′ costs 2h when there exist 255 distinct call paths leading to it; see the sketch after this list) and technological diversity
        2. Option #2: a couple of multipliers
          1. a concentration multiplier, lower than 1
          2. an exposure and technological diversity multiplier, greater than 1
        3. Option #3: contextual error margins, knowing that the “error” comes from the unknown development organization characteristics (e.g.: fully automated integration / system testing or not) 
          1. a “minus” margin, based on concentration
          2. a “plus” margin, based on exposure and technological diversity
  5. About conformance to the specifications
    1. only the normative part is concerned: sections 2 and 3 above
      1. nominal and focused ATDM and ASREM/ARREM/APEREM/AMREM values, along with the drill-down to the finer-grain data per ASCSM/ASCRM/ASCPEM/ASCMM pattern and the number of occurrences per code element (so that one can re-compute ATDM and ASREM/ARREM/APEREM/AMREM values with different effort settings per pattern per technology, as well as compute subset/superset measures from these)
      2. qualification information per ASCSM/ASCRM/ASCPEM/ASCMM pattern, per occurrence
      3. in human readable format starting with formatted files
    2. implementation of measurement/indicators/… from section 4 above is not required
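
To make the Option #1 range concrete, here is a minimal sketch, in Java, of the low/high bound computation described above. The class and method names are invented for illustration; only the 15′-to-2h example and the logarithmic transformation of the call-path count come from the text of this list.

    // Hypothetical sketch of the Option #1 range; names and signatures are
    // assumptions, not part of any CISQ/OMG specification.
    public final class AtdmRange {

        // Low bound: count one remediation cost per code element carrying
        // occurrences, rather than one per individual occurrence.
        static double lowBoundMinutes(double nominalCostMinutes, int codeElementsWithOccurrences) {
            return nominalCostMinutes * codeElementsWithOccurrences;
        }

        // High bound: factor in exposure with a logarithmic transformation,
        // since the number of distinct call paths grows combinatorially.
        static double highBoundMinutes(double nominalCostMinutes, int occurrences, int distinctCallPaths) {
            double exposureFactor = Math.log(distinctCallPaths + 1) / Math.log(2); // log2
            return nominalCostMinutes * occurrences * exposureFactor;
        }

        public static void main(String[] args) {
            // The example from the text: a propagating fix that nominally costs
            // 15' is considered to cost 2h when 255 distinct call paths lead to
            // it, because log2(255 + 1) = 8 and 15' x 8 = 120' = 2h.
            System.out.println(highBoundMinutes(15, 1, 255)); // ~120.0 minutes
        }
    }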

Illustrations of the Option #3 from section 4.2.2.3 above:

  • WebGoat ATDM evolution
    WG_ATDM showing an upward trend (due to the fact that WebGoat purpose is to showcase Security issues) with an even-faster increase of the “plus” error margin (on account of increased exposure of issues) 
    • WegGoat ASDM+ARDM+APEDM evolution
      WG_ASDM_ARDM_APEDM
    • WebGoat AMDM evolution
      WG_AMDM
  • eCommerce ATDM evolution
    eC_ATDM
    showing some slight downward trend on the last three release with slightly faster decrease of the “plus” error margin, indicating a diminution of exposure of issues
    • eCommerce ASDM+ARDM+APEDM evolution
      eC_ASDM_ARDM_APEDM
  • Explanation of the “plus” error marging
    • WebGoat 5.2 distribution per exposure range
      WG_52
      showing from left to right the ATDM with exponentially-increasing exposure, with a significant peak for bin #07 (indicating issues that can propagate to 255 distinct call paths in the software, potentially causing significative overhead if the testing capability is not mature enough)
  • ATDM reimbursement guidance information
    • eCommerce ATDM split per object status
      eC_OS
    • eCommerce ATDM split per violation status
      eC_VS
    • eCommerce ATDM of added and updated violation, split per rule
      eC_R_AU
    • eCommerce ATDM of Top 100 exposed objects
      eC_O

ASCTDM Project 2: Normative Measure(s) (in progress)

Project 2 will develop specifications for one or more measures:

  • At least one measure should be based on CISQ quality characteristic measures
  • Not all proposed TD measures must become CISQ measures
  • Weighting schemes to be applied to individual violations in CISQ measures
  • Measurement frameworks can be developed for interest, liability, etc.

Introduction

This project concerns the definition of normative measure(s) to support IT executives in managing application development and maintenance activities with objective, repeatable, and verifiable software measures and metrics.

As laid out in “ASCTDM Project 1: Conceptual Framework”, the Technical Debt landscape is vast and most of it is not subject to objective, repeatable, and verifiable measurement of the software source code.

Therefore, this project limits itself to the measurement of Automated Source Code Technical Debt Items (although the measurement can be extended to the measurement of Project Source Code Technical Debt Items).

In addition to this limitation on the locations of Technical Debt Items to consider, this project also limits itself to some of the consequences on Cost and Value:

  • the Initial Principal,
  • and, as much as possible, the Accruing Interest

Automated Source Code Technical Debt Items – cost structure

Illustration

In order to propose a satisfactory model, let us look at some of the patterns defined in the ASC*M specifications. 

According to the categories presented by Dr. Richard Soley (Chairman and CEO, Object Management Group) in “How to Deliver Resilient, Secure, Efficient, and Adaptable IT Systems in Line with CISQ Recommendations”, the ASC*M patterns will require one of the three following analysis levels:

  1. Unit-level, the analysis of a single unit of code
  2. Technology-level, the analysis of an integrated collection of code units written in the same language, by taking into account the dependencies across programs, components, files, or classes
  3. System-level, the analysis of all code units and different layers of technology to get a holistic view of the integrated business application.

Unit-level patterns

Sample presentation

ASCSM-CWE-396: Declaration of Catch for Generic Exception

Pattern Category: ASCSM_Security

Objective: Avoid failure to use dedicated exception types.

Consequence: Software unaware of accurate execution status control incurs the risk of bad data being used in operations, possibly leading to a crash or other unintended behaviors.

Measure Element: Number of instances where the named callable control element or method control element contains a catch unit which declares to catch an exception parameter whose data type is part of a list of overly broad exception data types.

Description: This pattern identifies situations where the <ControlElement> named callable control element (code:CallableUnit with code:CallableKind ‘regular’, ‘external,’ or ‘stored’) or method control element (code:MethodUnit) contains the <CatchElement> catch unit (action:CatchUnit) which declares to catch the <CaughtExceptionParameter> exception parameter (code:ParameterUnit with code:ParameterKind ‘exception’) whose datatype (code:DataType) is part of the <OverlyBroadExceptionTypeList> list of overly broad exception datatypes. As an example, with JAVA, <OverlyBroadExceptionTypeList> is {‘java.lang.Exception’}.

Descriptor: ASCSM-CWE-396(ControlElement: controlElement, CatchElement: catchElement, CaughtExceptionParameter: caughtExceptionParameter, OverlyBroadExceptionTypeList: overlyBroadExceptionTypeList)
Variable input: <OverlyBroadExceptionTypeList> list of overly broad exception datatypes.
Comment: Measure element contributes to Security and Reliability

List of Roles: ControlElement, CatchElement, CaughtExceptionParameter, OverlyBroadExceptionTypeList

 It implies that a measurement solution will have to report the following elements for each occurrence of this pattern:

  1. ControlElement,
    the named callable control element (code:CallableUnit with code:CallableKind ‘regular’, ‘external,’ or ‘stored’) or method control element (code:MethodUnit) which contains the
  2. CatchElement,
    the catch unit (action:CatchUnit) of the following exception
  3. CaughtExceptionParameter,
    the exception parameter (code:ParameterUnit with code:ParameterKind ‘exception’) whose datatype (code:DataType) is in the following list
  4. OverlyBroadExceptionTypeList
    the list of overly broad exception datatypes

Sample Analysis

The way to remove the TD item is generally quite simple. It would require:

  • Pattern occurrence localisation: not complex in this case, thanks to the measurement report
  • Pattern occurrence understanding
  • Pattern occurrence removal: changing the type of the exception to a more accurate one
  • Removal validation via unit testing

Yet the change of behavior caused by the removal of this occurrence is not limited to the internal behavior of the code element: it will impact the reference graph as well, i.e., it will cause Accruing Interest.

  • Understanding overhead: yes, to understand the reference graph and the impact of a new type of exception
  • Removal validation overhead
    • Integration testing: yes, of the reference graph
    • System testing: n/a
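
To make the reported roles concrete, here is a small, hypothetical Java example of an occurrence of this pattern and of its removal; the file-reading scenario is invented for illustration.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class CatchExample {

        // Occurrence: the catch parameter's type, java.lang.Exception, belongs
        // to <OverlyBroadExceptionTypeList>, so this method would be reported
        // as the ControlElement, the catch unit as the CatchElement, and 'e'
        // as the CaughtExceptionParameter.
        static String readBroad(Path p) {
            try {
                return Files.readString(p);
            } catch (Exception e) { // overly broad exception type
                return "";
            }
        }

        // After removal: the exception type is narrowed to the one actually
        // thrown, so the execution status is controlled accurately.
        static String readNarrow(Path p) {
            try {
                return Files.readString(p);
            } catch (IOException e) { // dedicated exception type
                return "";
            }
        }
    }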

Technology-level patterns

Sample presentation

 ASCRM-RLB-14: Parent Class Element with References to Child Class Element

Pattern Category: ASCRM_Reliability

Objective: Avoid parent class references to child class(es).

Consequence: Software that does not follow the principles of inheritance and polymorphism results in unexpected behaviors.

Measure Element: Number of instances where a parent class element that is used in the ‘to’ association of an Extends class relation, references the child class element used in the ‘from’ association of an Extends class relation, directly or indirectly through a parent and child class element, using a callable or data relation (the reference statement is made directly to the child class element or to any one of its own method or member elements).

Description: This pattern identifies situations where the <ParentClass> parent class element (code:StorableUnit of code:DataType code:ClassUnit) that is used in the ‘to’ association of the Extends class relation, references the <ChildClass> child class element (code:StorableUnit of code:DataType code:ClassUnit) used in the ‘from’ association of the Extends class relation (code:Extends), directly or indirectly through parent and child class elements, with the <ReferenceStatement> callable or data relations (action:CallableRelations or action:DataRelations). The reference statement is made directly to the child class element or to any one of its own method or member elements (code:MethodUnit and code:MemberUnit).

Descriptor: ASCRM-RLB-14(ParentClass: parentClass, ChildClass: childClass, ReferenceStatement: referenceStatement)

Variable input (none applicable)

Comment (none applicable) 

List of Roles: ParentClass, ChildClass, ReferenceStatement 

It implies that a measurement solution will have to report the following elements for each occurrence of this pattern: 

  1. ParentClass, 
  2. ChildClass, 
  3. ReferenceStatement between the ParentClass and the ChildClass

Sample Analysis

The way to remove the TD item is generally moderately simple. It would require:

  • Pattern occurrence localisation: not complex in this case, thanks to the measurement report
  • Pattern occurrence understanding: understanding if the reference is a simple mistake or if it was “on purpose” to get a sophisticated expected behavior
  • Pattern occurrence removal: in the latter case (“on purpose”), the class design must be updated
  • Removal validation via unit testing

Yet the change of behavior caused by the removal of this occurrence is not limited to the internal behavior of the code element: it will impact the reference graph as well, i.e., it will cause Accruing Interest.

  • Understanding overhead: n/a
  • Removal validation overhead
    • Integration testing: yes, of the reference graph, to spot side effects on other components that must now rely on a different class design
    • System testing: n/a
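
A small, hypothetical Java example of an occurrence of this pattern may help; the Shape/Circle design is invented for illustration.

    // Occurrence: Shape is the ParentClass, Circle the ChildClass, and the
    // instanceof test plus the cast form the ReferenceStatement, since the
    // parent references its child instead of relying on polymorphism.
    class Shape {
        double area() { return 0.0; }

        String describe() {
            if (this instanceof Circle) {
                return "circle of radius " + ((Circle) this).radius; // parent -> child reference
            }
            return "shape of area " + area();
        }
    }

    class Circle extends Shape {
        final double radius;
        Circle(double radius) { this.radius = radius; }
        @Override double area() { return Math.PI * radius * radius; }
        // Remediation sketch: override describe() here so that Shape no longer
        // needs to know that Circle exists; this is the class design update
        // mentioned above when the reference was made "on purpose".
    }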

System-level patterns

Sample presentation

ASCPEM-PRF-8: Control Elements Requiring Significant Resource Element within Control Flow Loop Block 

Pattern Category: ASCPEM_Performance_Efficiency

Objective: Avoid resource consuming operations found directly or indirectly within loops.

Consequence: Software that is coded so as to execute expensive computations repeatedly (such as in loops) requires excessive computational resources when the usage and data volume grow.

Measure Element: Number of instances where a control element that causes platform resource consumption is directly or indirectly called via an execution path starting from within a loop body block or within a loop condition.

Description: This pattern identifies situations where the <ExpensiveControlElement> control element (code:ControlElement), whose nature is known to cause platform resource consumption (platform:PlatformActions with platform:ResourceType), is directly or indirectly called via the <ExecutionPath> execution path (action:BlockUnit composed of action:ActionElements with action:CallableRelations to code:ControlElements), starting from within the loop body block (action:BlockUnit starting as the action:TrueFlow of the loop action:GuardedFlow and ending with an action:Flow back to the loop action:GuardedFlow) or within the loop condition (action:BlockUnit used in the action:GuardedFlow).

Descriptor: ASCPEM-PRF-8(LoopStatement: loopStatement, ExpensiveOperation: expensiveOperation, ExecutionPath: executionPath)

Variable input (none applicable)

Comment (none applicable)

List of Roles: LoopStatement, ExpensiveOperation, ExecutionPath

It implies that a measurement solution will have to report the following elements for each occurrence of this pattern:

  1. LoopStatement,
  2. ExpensiveOperation,
  3. ExecutionPath,
    the execution path (action:BlockUnit composed of action:ActionElements with action:CallableRelations to code:ControlElements), starting from within the loop body block (action:BlockUnit starting as the action:TrueFlow of the loop action:GuardedFlow and ending with an action:Flow back to the loop action:GuardedFlow) or within the loop condition (action:BlockUnit used in the action:GuardedFlow) and leading to the ExpensiveOperation

Sample analysis

The way to remove the TD item is generally not so simple. It would require:

  • Pattern occurrence localisation: not complex in this case, thanks to the measurement report
  • Pattern occurrence understanding
  • Pattern occurrence removal: new design of the feature to mutualize the operation (e.g.: a new SQL query to process all necessary rows instead of a loop running a SQL query for one row at a time)
  • Removal validation via unit testing

Then the change of behavior caused by the removal of this occurrence is not limited to the internal behavior of a single code element: it will impact the execution graph, as well as the references to elements of the execution graph, i.e., it will cause Accruing Interest.

  • Understanding overhead: yes
  • Removal validation overhead
    • Integration testing: yes
    • System testing: very likely
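
As an illustration of this last case, here is a hypothetical Java/JDBC sketch of an occurrence of this pattern and of the kind of redesign suggested above (a single set-based SQL query instead of one query per loop iteration); the table and column names are invented.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.List;

    class OrderTotals {

        // Occurrence: the for loop is the LoopStatement, the query execution
        // is the ExpensiveOperation, reached through a one-step ExecutionPath.
        static double totalPerCustomer(Connection c, List<Integer> customerIds) throws SQLException {
            double total = 0;
            for (int id : customerIds) {
                try (PreparedStatement ps =
                         c.prepareStatement("SELECT amount FROM orders WHERE customer_id = ?")) {
                    ps.setInt(1, id);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) total += rs.getDouble(1);
                    }
                }
            }
            return total;
        }

        // Remediation sketch: mutualize the operation into one aggregate query
        // that processes all the necessary rows at once (assuming the business
        // need is the total over all customers).
        static double totalAllCustomers(Connection c) throws SQLException {
            try (PreparedStatement ps = c.prepareStatement("SELECT SUM(amount) FROM orders");
                 ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getDouble(1) : 0;
            }
        }
    }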


ASCTDM Project 1: Conceptual Framework (in progress)

Project 1 will set the structure for defining measures:

  • Define the concept of Technical Debt and its component parts
  • Establish a conceptual framework for defining CISQ and related measures
  • Publish and publicize the framework to make it a common foundation
  • Develop a sustainable product improvement model for managing/removing TD

Introduction

As part of the CISQ mission to define computable metric standards for measuring software quality and size, so as to support IT executives in managing application development and maintenance activities, the Technical Debt metaphor from Ward Cunningham (“The WyCash Portfolio Management System”, OOPSLA ’92 Experience Report) is a natural candidate for a new computable metric standard. On the one hand, the metaphor has proved successful in supporting communication between technical and executive audiences. On the other hand, the metaphor is used to designate such a wide and unspecified range of items that using any Technical Debt value is challenging as soon as the underlying technical facts are abstracted.

Therefore, CISQ proposes a conceptual framework to remove the confusion and ambiguity from the metaphor, by clearly identifying the elements contributing to Technical Debt and, among them, the ones that can be automatically measured so as to produce management indicators.

In other words, CISQ had to respond to the “call for action” from “Reducing friction in software development” (Paris Avgeriou, Philippe Kruchten, Robert L. Nord, Ipek Ozkaya, Carolyn Seaman, IEEE Software, January 2016) by providing a new software metric standard in this area.

Proposed TD Conceptual Framework

Technical Debt vs. Features & Defects 

In agreement with “Technical Debt: From Metaphor to Theory and Practice” (Philippe Kruchten, Robert L. Nord, Ipek Ozkaya, IEEE Software, November/December 2012), Technical Debt is composed of items that are invisible to software end-users but visible to the development and maintenance teams alone.

Technical Debt items

In agreement with the sidebar glossary from “Reducing friction in software development” (Paris Avgeriou, Philippe Kruchten, Robert L. Nord, Ipek Ozkaya, Carolyn Seaman, IEEE Software, January 2016), a Technical Debt item is one atomic element of technical debt connecting:

  1. a set of development artifacts, with
  2. consequences on the quality, value and cost of the system, and triggered by
  3. some causes related to process, management, context, or business goals

With

  • consequence: the effect on the value, quality or cost of the current or the future state of the system associated with technical debt items
    • Cost: the financial burden of developing or maintaining the product, which is mostly paying the people working on it
    • Value: the business value derived from the ultimate consumers of the product: its users, or acquirers, the people who are going to pay good money to use it, and the perceived utility of the product
    • Quality: the degree to which a system, component, or process meets customer or user needs or expectations (from IEEE Std 610)
  • cause: the process, decision, action, lack of action, or external event that triggers the existence of a technical debt item

Development artifacts 

In agreement with “Technical Debt: From Metaphor to Theory and Practice” (Philippe Kruchten, Robert L. Nord, Ipek Ozkaya, IEEE Software, November/December 2012), development artifacts composing a Technical Debt item can be found in various locations:

  • Source Code, including implemented Software Structure and Architecture
  • Build Scripts
  • Test Scripts
  • Documentation (not in source code)
  • Technology
  • Decisions or Design, including Architecture Decisions (different from implemented Software Architecture)

In the context of CISQ software metric standard definition, it is critical to understand that only the first location can be covered. This limitation does not prevent the definition of valuable measurements, as:

  • benchmarking can leverage the comparison of software on a subset of the Technical Debt, as long as the audience is well aware of the limitation
  • correlation between source code Technical Debt items and total Technical Debt (still to be validated by empirical data) can be beneficial to IT executives

Consequences on Cost and Value

In agreement with the sidebar glossary from “Reducing friction in software development” (Paris Avgeriou, Philippe Kruchten, Robert L. Nord, Ipek Ozkaya, Carolyn Seaman, IEEE Software, January 2016), consequences on Cost and Value can be split between

  • Principal, with the subdivision between
    • Initial Principal, the cost savings gained by taking some initial approach or “shortcut” in development
    • Current Principal, the cost it would take now to develop a different or “better” solution
  • Interest, with the subdivision between
    • Accruing Interest, the additional costs incurred by building new software depending on an element of technical debt (a non-optimal solution); these accrue over time onto the initial principal to lead to the current principal
    • Recurring Interest, the additional costs incurred by the project in the presence of technical debt, due to reduced productivity (or velocity), induced defects, or loss of quality (maintainability and evolvability); these are sunk costs that are not recoverable
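
Schematically (a paraphrase of the glossary above, not a formula from the source): Current Principal = Initial Principal + accumulated Accruing Interest, while Recurring Interest is paid on top of this and is never recovered.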

In agreement with “Paying Down the Interest on Your Applications – A Guide to Measuring and Managing Technical Debt” (Bill Curtis), one must also consider:

  • Liability Cost: the cost to the business resulting from operational problems caused by flaws in production code.
  • Opportunity Cost: the benefits that could have been achieved had resources been committed to developing new capability rather than being assigned to retire Technical Debt. Opportunity cost represents the tradeoff that application managers and executives must weigh when deciding how much effort to devote to retiring Technical Debt.

Even if not measurable, these additional external costs are key to consider when it is time to manage the Technical Debt and decide which Technical Debt items to repay (first).

Focus on Source Code Technical Debt items

When it comes to Technical Debt items that can be located in software source code, the following considerations are key:

  • are there source code patterns (including source code structure and architecture patterns) to locate TD items?
    • if so, which ones?
    • then, is there a one-to-one relation between occurrences of these source code patterns and actual Technical Debt items? 
      • In other words, are all individual occurrences of these source code patterns TD items? 
      • Do all individual occurrences of these source code patterns have “enough” consequences on the Cost, Value, or Quality of the software that they unequivocally outweigh the cost of removing them?
      • Opposing the one-to-one relation is the argument that each single occurrence will not [have to] be removed from the software and a debt that will never [have to] be repaid is not a debt.
      • In favor of the one-to-one relation are the arguments that
        • Automated Source Code * Measures were specified with a “zero tolerance” in mind
        • The decision to not remove each single occurrence is team- or expert-specific and would therefore make benchmarking difficult.
      • The proposed solution to this dilemma is discussed in the next section “Considerations about benchmarking” 
  • the impact of the removal / remediation of the occurrences of source code patterns. The following situations generally exist
    • Automatable remediation
    • Remediation only (no impact on compilation)
    • Unit testing required (as defined in SWEBOK V3.0 http://www.computer.org/web/swebok/v3)
    • Integration testing required
    • System testing required
    • Another way to present these situations is to relate them to the Current Principal and Accruing Interest concepts defined above.
      • Remediation effort/cost and, if applicable, unit testing handling estimate the Current Principal, that is, “the cost it would take now to develop a different or ‘better’ solution”
      • Integration and system testing, if applicable, estimate some of the Accruing Interest, “the additional costs incurred by building new software depending on an element of technical debt, a non-optimal solution.”

 

Considerations about benchmarking

The CISQ context puts a strong emphasis on the need to define software metric standards that can be used to compare software.

This implies a level of objectivity which is not always compatible with the theoretical definition of Technical Debt. This means there is a need for Automated Source Code Technical Debt items, as distinct from Project Source Code Technical Debt items, which would accept the subjectivity of the decision to include a given Source Code Technical Debt item or not.

Similarly, the wide variability of organization/department/team maturity regarding testing makes it difficult to come up with agreeable effort/cost estimations of testing-related additional costs. Automated Source Code Technical Debt items would come with an objective benchmarking configuration, while Project Source Code Technical Debt items would take the organization/department/team specifics into account.

The Project vs. Automated distinction is likely to also apply to other locations of TD items. The introduction of this concept in association with source code TD alone comes from the existence of the Automated Source Code Reliability, Performance Efficiency, Security, and Maintainability Measures, to support an objective automatable measure.

[Figure: TD Conceptual Framework]

Cost structure of Source Code TD Items

Using the cost framework laid out in the section above, we can look at the cost structure of Source Code Technical Debt Items.

Regarding the

  • Initial Principal, we can consider the cost of removing the occurrences of the Source Code TD pattern right away:
    • localization,
    • understanding,
    • removal
    • and removal validation at the component level, through unit testing.
  • Accruing Interest, we can consider the overhead costs due to the fact that other components of the software are built upon / using the components involved in the occurrence of the Source Code TD pattern:
    • understanding overhead, as one must understand a larger set of components to do the job properly,
    • removal validation overhead, as there are more components affected by the update of the components involved in the occurrence of the Source Code TD pattern, through
      • integration testing
      • system testing
  • Recurring Interest
    • when security, reliability, performance efficiency source code patterns are involved, they would lead to induced defects
    • when maintainability source code patterns are involved, they would lead to reduced productivity or velocity
  • Opportunity and Liability cost
    • when security, reliability, performance efficiency source code patterns are involved, they can lead to data theft, software outage, software unresponsiveness (or unacceptable response times), … 
    • when maintainability, security, reliability, performance efficiency  source code patterns are involved, they would lead to missed opportunities

[Figure: Automated Source Code TD Items cost structure]

Technical Debt Wiki

Meeting Presentations

Posted Resources

Three Project Teams

Project 1 will set the structure for defining measures:

  • Define the concept of Technical Debt and its component parts
  • Establish a conceptual framework for defining CISQ and related measures
  • Publish and publicize the framework to make it a common foundation
  • Develop a sustainable product improvement model for managing/removing TD
  • Should strive for conceptual definition in February

Members: Bill Curtis (CISQ), Philippe-Emmanuel Douziech (CAST), Robert Nord (SEI), Ipek Ozkaya (SEI), LiGuo Huang (SMU), Qiao Zhang (SMU), Dan Tucker (BAH), Emily Leung (BAH)

Project 2 will develop specifications for one or more measures:

  • At least one measure should be based on CISQ quality characteristic measures
  • Not all proposed TD measures must become CISQ measures
  • Weighting schemes to be applied to individual violations in CISQ measures
  • Measurement frameworks can be developed for interest, liability, etc.
  • Should strive for initial measure specification by May, earlier if possible

Members: Bill Curtis (CISQ), Philippe-Emmanuel Douziech (CAST), Luc Béasse (CAST), Robert Nord (SEI), Ipek Ozkaya (SEI), LiGuo Huang (SMU), Qiao Zhang (SMU), Jennifer Attanasi (BAH), Keith Tayloe (BAH)

Project 3 will try to provide empirical support and possible validation for the measures:

  • Apply GQM to the Project 1 conceptual framework and include Project 2 measures
  • Define measures of TD and outcomes, as well as methods of collection
  • Approach IT organizations for data collection
  • Publicize results and benefits

Members: Bill Curtis (CISQ), Philippe-Emmanuel Douziech (CAST), David Zubrow (SEI), Emily Leung (BAH), Jennifer Attanasi (BAH)

Questions? Contact Tracie at tracie.berardi@it-cisq.org.

Technical Debt computation – Introduction

Objectives

Define a formula to compute the Technical Debt linked to CISQ Quality Characteristics violations so as to be able

  1. to adjust the productivity measurement of IT system software development projects
  2. to monitor and benchmark the quality of IT system software

Approach

  • Turn quality issues into effort or cost to adjust the amount of resources invested in the development project
  • Rely on CISQ Quality Characteristics measure elements
  • Model the Technical Debt Principal only, as the Technical Debt Interest Rates are highly context-dependent

Proposition

  • Split the Technical Debt Principal into
    • the Remediation Effort
    • the Unit Testing overhead, when applicable
    • the Integration Testing overhead, when applicable
    • the Complete Validation overhead, when applicable
  • Model the Remediation Effort for a violation of a given measure element as a measure-dependent, technology-dependent Remediation Effort duration to fix the first occurrence of the violation pattern in the object in violation, plus a measure-dependent, technology-dependent additional Remediation Effort duration for each additional occurrence of the violation pattern in the same object (see the sketch after this list)
    • E.g.: 30′ to fix the first occurrence in a method, plus 15′ to fix each additional occurrence in the same method
  • Model the Unit Testing overhead, when the fix to a violation requires it, as a measure-dependent, technology-dependent Unit Testing duration to unit-test the object in violation, once all occurrences of the violation pattern of the measure element have been fixed
    • E.g.: 30′ more to unit-test the method where 1+ occurrences of the violation pattern of the measure element have been removed
  • Model the Integration Testing overhead, when the fix to a violation requires it, as a technology-dependent Integration Testing duration to integration-test the object in violation, once all occurrences of the violation pattern of one or more measure elements have been fixed, factoring in the number of distinct call paths to the object (so as to account for the difficulty of managing integration tests for objects which impact many other components in the software)
    • E.g.: 30′ x 1.5 = 45′ more to integration-test the method where 1+ occurrences of the violation pattern of 1+ measure elements have been removed and which is used in X call paths
  • Model the Complete Validation overhead, when the fix to a violation requires it, as a Complete Validation duration to validate the fixed object in violation, once all occurrences of the violation pattern of one or more measure elements have been fixed, to be run once after fixing a batch of X objects
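
A minimal sketch, in Java, of how the proposed model composes; the class and method names are invented, and the sample durations (30′, 15′, the 1.5 call-path factor) are taken from the examples above and are not normative values.

    // Hypothetical sketch of the proposed Technical Debt Principal model.
    public final class TdPrincipal {

        // Remediation Effort: the first occurrence in an object costs more
        // than each additional occurrence of the same pattern in that object.
        static double remediationMinutes(int occurrences, double firstMinutes, double additionalMinutes) {
            if (occurrences <= 0) return 0;
            return firstMinutes + (occurrences - 1) * additionalMinutes;
        }

        // Unit Testing overhead: one duration per fixed object, when applicable.
        static double unitTestingMinutes(boolean required, double perObjectMinutes) {
            return required ? perObjectMinutes : 0;
        }

        // Integration Testing overhead: per fixed object, scaled by a factor
        // derived from the number of distinct call paths to the object.
        static double integrationTestingMinutes(boolean required, double perObjectMinutes, double callPathFactor) {
            return required ? perObjectMinutes * callPathFactor : 0;
        }

        public static void main(String[] args) {
            // Example from the text: 30' for the first occurrence plus 15' for
            // one additional occurrence, 30' of unit testing, and
            // 30' x 1.5 = 45' of integration testing: 45' + 30' + 45' = 120'.
            double principalMinutes = remediationMinutes(2, 30, 15)
                                    + unitTestingMinutes(true, 30)
                                    + integrationTestingMinutes(true, 30, 1.5);
            System.out.println(principalMinutes + " minutes"); // 120.0 minutes
        }
    }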

Next steps

  • Validate the formulas “on paper”
  • Share a prototype to compute these values on real applications to validate the formulas “in the field” (prototype is ready, mailto:p.douziech@castsoftware.com)
  • Validate a default configuration to be used for CISQ objective benchmarking