Cyber Resilience Summit – Knowledge Repository

This Knowledge Repository (wiki) was created for the October 19 Cyber Resilience Summit: Modernizing and Securing Government IT (http://it-cisq.org/cyber-resilience-summit-oct-2017/)

 

EXECUTIVE SUMMARY

Download meeting notes

 

PRESENTATIONS

Technical Debt Findings and a Standard
Dr. Bill Curtis, Executive Director, Consortium for IT Software Quality (CISQ)
Cyber Resilience Summit, October 19, 2017

 

Roadmap for IT Modernization and Cyber Resilience
John Weiler, Vice Chair, IT Acquisition Advisory Council (IT-AAC)
Cyber Resilience Summit, October 19, 2017

 

Vision for Improving Performance in Texas State IT Projects: Measuring Quality and Cybersecurity
Herb Krasner, University of Texas at Austin (ret.), Texas IT Champion
Cyber Resilience Summit, October 19, 2017

 

Supply Chain Risk Management (SCRM) for Continuous Diagnostics and Mitigation (CDM) Products

Emile Monette, Senior Cybersecurity Strategist and Acquisition Advisor, DHS OCISO

Cyber Resilience Summit, October 19, 2017

 

Software Security and CISQ
Dr. Bill Curtis, Executive Director, Consortium for IT Software Quality (CISQ)
OMG Cybersecurity Workshop, September 28, 2017

 

IT Acquisition Workshop: Leveraging Executive Order 13636, CCA & FITARA to Drive Down Cyber Risk
CISQ, IT-AAC, GSA
Cyber Resilience Summit, March 15, 2016

 

 

PRESS COVERAGE

Tony Scott calls IT workforce drain a “creeping” crisis bigger than Y2K
Carten Cordel, fedscoop, October 20, 2017

 

Report: DHS Tests Cyber Tech Acquisition Management Model
Nichols Martin, ExecutiveGov, October 20, 2017

 

DHS piloting agile cyber acquisition, CDM for cloud, CISO says
Carten Cordel, fedscoop, October 19, 2017

 

DHS to Stand Up CDM Cloud Services for Small Agencies
Morgan Lynch, Meritalk, October 19, 2017

 

Learn to Deal With Cyber Risk
Morgan Lynch, Meritalk, October 19, 2017

 

 

POLICY

IT-AAC Federal IT Modernization Report signed September 20, 2017
Submitted to the White House American Technology Council (ATC) in response to Executive Order 13800.

See https://itmodernization.cio.gov/.

 

IT-AAC Recommendations for Embracing Commercial Cloud in DoD signed November 17, 2017
Submitted to the DoD Cloud Executive Steering Group.

 

 

CYBER RESILIENCE STANDARDS

Consortium for IT Software Quality (CISQ) www.it-cisq.org/standards

Software sizing: Automated Function Points, Automated Enhancement Points

Structural quality: Automated Quality Characteristic Measures, Technical Debt

 

WEBINAR

New Automated Technical Debt Standard
January 16, 2018 from 11:00 – 11:30am ET
Dr. Bill Curtis, CISQ Executive Director

 

The CISQ measure of Automated Technical Debt has just been approved by the OMG® as a standard for measuring the future cost of defects remaining in system source code at release. The ripple effects from Technical Debt can hinder innovation and put businesses at unacceptable levels of risk, including high IT maintenance costs, outages, breaches, and lost business opportunities.

 

 

PHOTOS

View more photos from the Cyber Resilience Summit here!

CISQ Automated Source Code Green Measure

Problem statement

IT operations run on electricity.

kWh production leads to CO2 emission.

Inefficient IT operations waste energy: unnecessary CPU cycles are equivalent to unnecessary kWh consumption.

The efficiency of IT operations is, to a large extent, conditioned by the way the software was developed.

People have grown used to ever-growing computing resources, overlooking the environmental impact of their energy consumption, resulting in software that is far from optimal.

In addition to suboptimal software development that amounts to “pipe leaks”, there are also avoidable “pipe ruptures”, whose prevention saves the resources needed to recover, restart, and resume the activity.

Energy can be saved now by making software more efficient.

The urgency of this initiative comes from the spread of software across billions of devices: every small gain can make a difference.

 

Opportunity

To identify pieces of software that could be optimized to require fewer CPU resources
  • Focus on “pipe leaks”
    • data access efficiency
    • algorithmic costs
    • resource economy
  • Focus on “pipe ruptures” – avoiding failures

Thanks to selected patterns from:

  • Automated Source Code Performance Efficiency Measure (http://www.omg.org/spec/ASCPEM/)
  • Automated Source Code Reliability Measure (http://www.omg.org/spec/ASCRM/)
  • Automated Source Code Security Measure (http://www.omg.org/spec/ASCSM/)

Objectives

  • Perform the selection of the applicable patterns
  • Validate the coverage of salient aspects
    • Or identify the “uncovered” ones and specify applicable patterns

Limitations

  • No direct kWh measure
  • No direct CO2 equivalent

Development

OMG Measure | In ASCGM?
ASCMM-MNT-1: Control Flow Transfer Control Element outside Switch Block |
ASCMM-MNT-2: Class Element Excessive Inheritance of Class Elements with Concrete Implementation |
ASCMM-MNT-3: Storable and Member Data Element Initialization with Hard-Coded Literals |
ASCMM-MNT-4: Callable and Method Control Element Number of Outward Calls |
ASCMM-MNT-5: Loop Value Update within the Loop |
ASCMM-MNT-6: Commented Code Element Excessive Volume |
ASCMM-MNT-7: Inter-Module Dependency Cycles |
ASCMM-MNT-8: Source Element Excessive Size |
ASCMM-MNT-9: Horizontal Layer Excessive Number |
ASCMM-MNT-10: Named Callable and Method Control Element Multi-Layer Span |
ASCMM-MNT-11: Callable and Method Control Element Excessive Cyclomatic Complexity Value |
ASCMM-MNT-12: Named Callable and Method Control Element with Layer-skipping Call |
ASCMM-MNT-13: Callable and Method Control Element Excessive Number of Parameters |
ASCMM-MNT-14: Callable and Method Control Element Excessive Number of Control Elements involving Data Element from Data Manager or File Resource |
ASCMM-MNT-15: Public Member Element |
ASCMM-MNT-16: Method Control Element Usage of Member Element from other Class Element |
ASCMM-MNT-17: Class Element Excessive Inheritance Level |
ASCMM-MNT-18: Class Element Excessive Number of Children |
ASCMM-MNT-19: Named Callable and Method Control Element Excessive Similarity |
ASCMM-MNT-20: Unreachable Named Callable or Method Control Element |
ASCPEM-PRF-1: Static Block Element containing Class Instance Creation Control Element |
ASCPEM-PRF-2: Immutable Storable and Member Data Element Creation | TRUE
ASCPEM-PRF-3: Static Member Data Element outside of a Singleton Class Element |
ASCPEM-PRF-4: Data Resource Read and Write Access Excessive Complexity | TRUE
ASCPEM-PRF-5: Data Resource Read Access Unsupported by Index Element | TRUE
ASCPEM-PRF-6: Large Data Resource ColumnSet Excessive Number of Index Elements | ?
ASCPEM-PRF-7: Large Data Resource ColumnSet with Index Element of Excessive Size | ?
ASCPEM-PRF-8: Control Elements Requiring Significant Resource Element within Control Flow Loop Block | TRUE
ASCPEM-PRF-9: Non-Stored SQL Callable Control Element with Excessive Number of Data Resource Access | ?
ASCPEM-PRF-10: Non-SQL Named Callable and Method Control Element with Excessive Number of Data Resource Access | ?
ASCPEM-PRF-11: Data Access Control Element from Outside Designated Data Manager Component | TRUE
ASCPEM-PRF-12: Storable and Member Data Element Excessive Number of Aggregated Storable and Member Data Elements | ?
ASCPEM-PRF-13: Data Resource Access not using Connection Pooling capability | TRUE
ASCPEM-PRF-14: Storable and Member Data Element Memory Allocation Missing De-Allocation Control Element | ?
ASCPEM-PRF-15: Storable and Member Data Element Reference Missing De-Referencing Control Element | ?
ASCRM-CWE-120: Buffer Copy without Checking Size of Input | TRUE
ASCRM-CWE-252-data: Unchecked Return Parameter Value of named Callable and Method Control Element with Read, Write, and Manage Access to Data Resource | TRUE
ASCRM-CWE-252-resource: Unchecked Return Parameter Value of named Callable and Method Control Element with Read, Write, and Manage Access to Platform Resource | TRUE
ASCRM-CWE-396: Declaration of Catch for Generic Exception | ?
ASCRM-CWE-397: Declaration of Throws for Generic Exception | ?
ASCRM-CWE-456: Storable and Member Data Element Missing Initialization | TRUE
ASCRM-CWE-674: Uncontrolled Recursion |
ASCRM-CWE-704: Incorrect Type Conversion or Cast | TRUE
ASCRM-CWE-772: Missing Release of Resource after Effective Lifetime |
ASCRM-CWE-788: Memory Location Access After End of Buffer | TRUE
ASCRM-RLB-1: Empty Exception Block | ?
ASCRM-RLB-2: Serializable Storable Data Element without Serialization Control Element | FALSE
ASCRM-RLB-3: Serializable Storable Data Element with non-Serializable Item Elements | FALSE
ASCRM-RLB-4: Persistent Storable Data Element without Proper Comparison Control Element | TRUE
ASCRM-RLB-5: Runtime Resource Management Control Element in a Component Built to Run on Application Servers |
ASCRM-RLB-6: Storable or Member Data Element containing Pointer Item Element without Proper Copy Control Element |
ASCRM-RLB-7: Class Instance Self Destruction Control Element |
ASCRM-RLB-8: Named Callable and Method Control Elements with Variadic Parameter Element |
ASCRM-RLB-9: Float Type Storable and Member Data Element Comparison with Equality Operator | TRUE
ASCRM-RLB-10: Data Access Control Element from Outside Designated Data Manager Component |
ASCRM-RLB-11: Named Callable and Method Control Element in Multi-Thread Context with non-Final Static Storable or Member Element |
ASCRM-RLB-12: Singleton Class Instance Creation without Proper Lock Element Management | ?
ASCRM-RLB-13: Inter-Module Dependency Cycles |
ASCRM-RLB-14: Parent Class Element with References to Child Class Element |
ASCRM-RLB-15: Class Element with Virtual Method Element without Virtual Destructor |
ASCRM-RLB-16: Parent Class Element without Virtual Destructor Method Element |
ASCRM-RLB-17: Child Class Element without Virtual Destructor unlike its Parent Class Element |
ASCRM-RLB-18: Storable and Member Data Element Initialization with Hard-Coded Network Resource Configuration Data |
ASCRM-RLB-19: Synchronous Call Time-Out Absence |
ASCSM-CWE-22: Path Traversal Improper Input Neutralization |
ASCSM-CWE-78: OS Command Injection Improper Input Neutralization |
ASCSM-CWE-79: Cross-site Scripting Improper Input Neutralization |
ASCSM-CWE-89: SQL Injection Improper Input Neutralization |
ASCSM-CWE-99: Name or Reference Resolution Improper Input Neutralization |
ASCSM-CWE-120: Buffer Copy without Checking Size of Input |
ASCSM-CWE-129: Array Index Improper Input Neutralization |
ASCSM-CWE-134: Format String Improper Input Neutralization |
ASCSM-CWE-252-resource: Unchecked Return Parameter Value of named Callable and Method Control Element with Read, Write, and Manage Access to Platform Resource |
ASCSM-CWE-327: Broken or Risky Cryptographic Algorithm Usage |
ASCSM-CWE-396: Declaration of Catch for Generic Exception |
ASCSM-CWE-397: Declaration of Throws for Generic Exception |
ASCSM-CWE-434: File Upload Improper Input Neutralization |
ASCSM-CWE-456: Storable and Member Data Element Missing Initialization |
ASCSM-CWE-606: Unchecked Input for Loop Condition |
ASCSM-CWE-667: Shared Resource Improper Locking |
ASCSM-CWE-672: Expired or Released Resource Usage |
ASCSM-CWE-681: Numeric Types Incorrect Conversion |
ASCSM-CWE-772: Missing Release of Resource after Effective Lifetime |
ASCSM-CWE-789: Uncontrolled Memory Allocation |
ASCSM-CWE-798: Hard-Coded Credentials Usage for Remote Authentication |
ASCSM-CWE-835: Loop with Unreachable Exit Condition (Infinite Loop) |

CISQ Automated Technical Debt Measure – May 26 workshop – Proposition description

 

(Here follows the description of the proposition presented during the May 26 workshop)

Proposed principles for Automated Technical Debt Measure (ATDM) specifications:

  1. Automated Technical Debt items whose removal / remediation cost is to be quantified and qualified are the occurrences of patterns defined in all four Automated Security / Reliability / Performance Efficiency / Maintainability Measure specifications
    1. It positions itself alongside a Project Technical Debt Measure or a Contextual Technical Debt Measure, which would follow the same computation principles except for the selection of patterns to consider: that selection would be project- or organization-specific, to account for the specifics of the measured software, …, thus delivering better-adapted values but preventing their use for benchmarking outside the project or organization scope
  2. Technical Debt Risk items quantification (normative)
    1. considers the remediation cost as follows (aligned on Agile Alliance proposition): 
      1. occurrence removal from the source code (a.k.a. coding), 
      2. unit testing creation or adaptation, 
      3. non regression testing adaptation
    2. i.e., following costs are not taken into account
      1. integration testing creation or adaptation
      2. system testing creation or adaptation
      3. issue tracking management: ticket handling
      4. source code management considerations: check-in, check-out of necessary source code elements
    3. results in 5 measures: 
      1. the main measurement: Automated Technical Debt Measure (ATDM)
      2. the focused measurements:
        1. Automated Security Debt Measure (ASDM)
          or Automated Security Remediation Effort Measure (ASREM)
          to avoid spoiling the “Debt” wording 
        2. Automated Reliability Debt Measure (ARDM)
          or Automated Reliability Remediation Effort Measure (ARREM)
        3. Automated Performance Efficiency Debt Measure (APEDM)
          or Automated Performance Efficiency Remediation Effort Measure (APEREM)
        4. Automated Maintainability Debt Measure (AMDM)
          or Automated Maintainability Remediation Effort Measure (AMREM)
  3. Technical Debt items qualification (normative)
    • is composed of the following pieces of information:
      1. system-level exposure of the occurrence of the pattern to the rest of the software, to assess operational risk of not fixing it as well as the destabilization risk of fixing it, measured as
        1. the number of distinct call paths through a software to the targeted code elements (that is, McCabe Cyclomatic Complexity applied to call paths instead of Control Flow)
        2. the number of distinct direct callers (that is, Fan-In)
      2. concentration of occurrences of any Automated Security / Reliability / Performance Efficiency / Maintainability Measure patterns in the same code element(s)
        1. same pattern
        2. different patterns
      3. evolution status of both 
        1. the occurrences 
        2. and the code elements supporting the occurrences
      4. technological diversity of occurrences, measured as the number of involved technologies
    1. results in intelligence about the Structural Debt designed to
      1. better understand the risk associated with Technical Debt items remediation (or non remediation for that matter)
        • e.g.: expect large overheads in integration and system testing activities due to the exposure of the occurrence
      2. help setting priorities of Technical Debt items remediation
        • e.g.: focus on high exposure Technical Debt items to maximize the RoI of their repayment
        • e.g.: focus on low exposure Technical Debt items to minimize destabilization risks
        • e.g.: focus on added occurrences
        • e.g.: focus on high concentration Technical Debt items to consider replacement / rerouting / reengineering / …
      3. support more accurate predictive models (Cf. usage scenarios (informative))
        • e.g.: penalty factor for multi-technology fixes
        • e.g.: penalty factor for high impact fixes
        • e.g.: reward factor for concentration
        • e.g.: reward factor for status 
    2. is required by the fact that
      1. organizations are often at a loss when facing large amounts of Technical Debt
      2. providing removal cost alone is misleading as some situations lead to sharply higher cost due to the context of the occurrences to remove in the software
  4. Usage scenarios (informative):
    1. Collection/estimation of application to support analytics
      1. demographics
      2. associated risk in case of failure (e.g.: loss of trading time), poor response time (e.g.: decrease in trading capability and partial loss of trading activity), security breach (e.g.: theft of trading data), …
      3. functional size measurement (e.g.: AFP)
    2. Deliver more accurate predictive models thanks to envelope factors / contextual modifiers
      1. to account for the fact that:
        1. concentration of occurrences in the same code elements can be cheaper to fix (reengineering, rerouting, …)
        2. highly exposed code elements carrying occurrences can be more expensive to fix (destabilization risk, …)
        3. multi-technology occurrences can be more expensive to fix (multi-team coordination, …)
      2. delivered as
        1. Option #1: a range defined by
          1. a low value to count one remediation cost per code element carrying the occurrences, as opposed to count one remediation cost per individual occurrence of the violation
          2. a high value to factor in exposure (with a logarithmic transformation, due to the combinatoric nature of the number of distinct call paths through a software, to get human-friendly, benchmarkable values, leading for instance to consider that a propagating fix that nominally costs 15 minutes costs 2 hours when there exist 255 distinct call paths leading to it) and technological diversity
        2. Option #2: a pair of multipliers
          1. a concentration multiplier, lower than 1
          2. an exposure and technological diversity multiplier, greater than 1
        3. Option #3: contextual error margins, knowing that the “error” comes from the unknown development organization characteristics (e.g.: fully automated integration / system testing or not) 
          1. a “minus” margin, based on concentration
          2. a “plus” margin, based on exposure and technological diversity
  5. About conformance to the specifications
    1. only the normative part is concerned: sections 2 and 3 above
      1. nominal and focused ATDM and ASERM/ARREM/APEREM/AMREM values, along with the drill-down to the finer-grain data per ASCSM/ASCRM/ASCPEM/ASCMM pattern, number of occurrences per code elements (so that one can re-compute ATDM and ASERM/ARREM/APEREM/AMREM values with different effort settings per pattern per technology as well as compute subset/superset measures from these)
      2. qualification information per ASCSM/ASCRM/ASCPEM/ASCMM pattern, per occurrence
      3. in human readable format starting with formatted files
    2. implementation of measurement/indicators/… from section 4 above is not required
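The quantification in section 2 and the Option #1 high value from section 4 can be sketched as follows. This is a minimal illustration, not a normative implementation: the per-pattern effort values are hypothetical, and the specification leaves such settings configurable per pattern and per technology.

```java
import java.util.List;
import java.util.Map;

// Minimal sketch of the ATDM computation described above
// (hypothetical per-pattern remediation efforts, in minutes,
// covering coding + unit-test + non-regression-test adaptation).
public class AtdmSketch {

    // One detected occurrence of an ASC*M pattern.
    public record Occurrence(String pattern, int distinctCallPaths) {}

    static final Map<String, Integer> EFFORT_MINUTES = Map.of(
            "ASCSM-CWE-396", 15,
            "ASCRM-RLB-14", 45,
            "ASCPEM-PRF-8", 120);

    // Nominal ATDM: one remediation cost per individual occurrence.
    public static int atdmMinutes(List<Occurrence> occurrences) {
        return occurrences.stream()
                .mapToInt(o -> EFFORT_MINUTES.getOrDefault(o.pattern(), 0))
                .sum();
    }

    // Option #1 high value: scale each fix by log2(callPaths + 1) to
    // factor in exposure (255 call paths -> factor 8, i.e. 15' -> 2h).
    public static double atdmHighMinutes(List<Occurrence> occurrences) {
        return occurrences.stream()
                .mapToDouble(o -> EFFORT_MINUTES.getOrDefault(o.pattern(), 0)
                        * Math.max(1.0,
                                Math.log(o.distinctCallPaths() + 1) / Math.log(2)))
                .sum();
    }

    public static void main(String[] args) {
        List<Occurrence> found = List.of(
                new Occurrence("ASCSM-CWE-396", 255),
                new Occurrence("ASCRM-RLB-14", 1));
        System.out.println(atdmMinutes(found));      // → 60
        System.out.println(atdmHighMinutes(found));  // exposure-adjusted value
    }
}
```

With these illustrative settings, the nominal and exposure-adjusted values bound the range of Option #1 for the measured occurrences.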

 

Illustrations of the Option #3 from section 4.2.2.3 above:

  • WebGoat ATDM evolution
    WG_ATDM showing an upward trend (because WebGoat’s purpose is to showcase security issues) with an even faster increase of the “plus” error margin (on account of increased exposure of issues)
    • WebGoat ASDM+ARDM+APEDM evolution
      WG_ASDM_ARDM_APEDM
    • WebGoat AMDM evolution
      WG_AMDM
  • eCommerce ATDM evolution
    eC_ATDM
    showing a slight downward trend over the last three releases with a slightly faster decrease of the “plus” error margin, indicating a reduction in the exposure of issues
    • eCommerce ASDM+ARDM+APEDM evolution
      eC_ASDM_ARDM_APEDM
  • Explanation of the “plus” error margin
    • WebGoat 5.2 distribution per exposure range
      WG_52
      showing from left to right the ATDM with exponentially increasing exposure, with a significant peak for bin #07 (indicating issues that can propagate to 255 distinct call paths in the software, potentially causing significant overhead if the testing capability is not mature enough)
  • ATDM reimbursement guidance information
    • eCommerce ATDM split per object status
      eC_OS
    • eCommerce ATDM split per violation status
      eC_VS
    • eCommerce ATDM of added and updated violation, split per rule
      eC_R_AU
    • eCommerce ATDM of Top 100 exposed objects
      eC_O

 

ASCTDM Project 2: Normative Measure(s) (in progress)

Project 2 will develop specifications for one or more measures:

  • At least one measure should be based on CISQ quality characteristic measures
  • Not all proposed TD measures must become CISQ measures
  • Weighting schemes to be applied to individual violations in CISQ measures
  • Measurement frameworks can be developed for interest, liability, etc.

Introduction

This project is about the definition of normative measure(s) to support IT executives’ management of application development and maintenance activities with objective, repeatable, and verifiable software measures and metrics.

As laid out in “ASCTDM Project 1: Conceptual Framework”, the Technical Debt landscape is vast and most of it is not subject to objective, repeatable, and verifiable measurement of the software source code.

Therefore, this project will limit itself to the measurement of Automated Source Code Technical Debt Items (although the measurement can be extended to the measurement of Project Source Code Technical Debt Items).

In addition to this limitation on the locations of Technical Debt Items to consider, this project will also limit itself to some of the consequences on Cost and Value:

  • the Initial Principal,
  • and, as much as possible, the Accruing Interest

Automated Source Code Technical Debt Items - cost structure

 

Illustration

In order to propose a satisfactory model, let us look at some of the patterns defined in the ASC*M specifications. 

According to the categories presented by Dr. Richard Soley (Chairman and CEO, Object Management Group) in “How to Deliver Resilient, Secure, Efficient, and Adaptable IT Systems in Line with CISQ Recommendations”, the ASC*M will require one of the three following analysis levels:

  1. Unit-level, the analysis of a single unit of code
  2. Technology-level, the analysis of an integrated collection of code units written in the same language, by taking into account the dependencies across programs, components, files, or classes
  3. System-level, the analysis of all code units and different layers of technology to get a holistic view of the integrated business application.

Unit-level patterns

Sample presentation

ASCSM-CWE-396: Declaration of Catch for Generic Exception

Pattern Category: ASCSM_Security

Objective: Avoid failure to use dedicated exception types.

Consequence: Software unaware of accurate execution status control incurs the risk of bad data being used in operations, possibly leading to a crash or other unintended behaviors.

Measure Element: Number of instances where the named callable control element or method control element contains a catch unit which declares to catch an exception parameter whose data type is part of a list of overly broad exception data types.

Description: This pattern identifies situations where the <ControlElement> named callable control element (code:CallableUnit with code:CallableKind ‘regular’, ‘external,’ or ‘stored’) or method control element (code:MethodUnit) contains the <CatchElement> catch unit (action:CatchUnit) which declares to catch the <CaughtExceptionParameter> exception parameter (code:ParameterUnit with code:ParameterKind ‘exception’) whose datatype (code:DataType) is part of the <OverlyBroadExceptionTypeList> list of overly broad exception datatypes. As an example, with JAVA, <OverlyBroadExceptionTypeList> is {‘java.lang.Exception’}.

Descriptor: ASCSM-CWE-396(ControlElement: controlElement,CatchElement: catchElement, CaughtExceptionParameter: caughtExceptionParameter, OverlyBroadExceptionTypeList: overlyBroadExceptionTypeList)
Variable input: <OverlyBroadExceptionTypeList> list of overly broad exception datatypes.
Comment: Measure element contributes to Security and Reliability

List of Roles: ControlElement, CatchElement, CaughtExceptionParameter, OverlyBroadExceptionTypeList

 It implies that a measurement solution will have to report the following elements for each occurrence of this pattern:

  1. ControlElement,
    the named callable control element (code:CallableUnit with code:CallableKind ‘regular’, ‘external,’ or ‘stored’) or method control element (code:MethodUnit) which contains the
  2. CatchElement,
    the catch unit (action:CatchUnit) of the following exception
  3. CaughtExceptionParameter,
    the exception parameter (code:ParameterUnit with code:ParameterKind ‘exception’) whose datatype (code:DataType) is in the following list
  4. OverlyBroadExceptionTypeList
    the list of overly broad exception datatypes
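As an illustration, one reported occurrence could be serialized as follows. This is a hypothetical format with invented element names; the specification mandates which roles must be reported, not how they are serialized.

```json
{
  "pattern": "ASCSM-CWE-396",
  "controlElement": "com.example.billing.InvoiceService.process",
  "catchElement": "action:CatchUnit at line 142",
  "caughtExceptionParameter": "e (code:ParameterUnit, datatype java.lang.Exception)",
  "overlyBroadExceptionTypeList": ["java.lang.Exception"]
}
```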

Sample Analysis

The way to remove the TD item is generally quite simple. It would require:

  • Pattern occurrence localisation: not complex in this case, thanks to the measurement report
  • Pattern occurrence understanding
  • Pattern occurrence removal: changing the type of the exception to a more accurate one
  • Removal validation via unit testing
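The removal step can be illustrated with a minimal Java sketch around a hypothetical parsing routine (the method and its fallback value are invented); the only change is narrowing the declared exception type:

```java
public class Cwe396Sample {

    // Occurrence of ASCSM-CWE-396: the catch unit declares the overly
    // broad java.lang.Exception, hiding the actual failure mode.
    public static int parsePortBefore(String text) {
        try {
            return Integer.parseInt(text);
        } catch (Exception e) {              // overly broad exception type
            return -1;
        }
    }

    // After removal: catch the accurate, dedicated exception type only,
    // so that unrelated failures are no longer silently swallowed.
    public static int parsePortAfter(String text) {
        try {
            return Integer.parseInt(text);
        } catch (NumberFormatException e) {  // dedicated exception type
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(parsePortAfter("8080"));  // → 8080
        System.out.println(parsePortAfter("oops"));  // → -1
    }
}
```

Narrowing the type is the coding part; unit-test adaptation then validates the removal.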

Yet, the change of behavior caused by the removal of this occurrence is not limited to the internal behavior of the code element. It will impact the reference graph as well, i.e., it will cause Accruing Interest.

  • Understanding overhead: yes, to understand the reference graph and the impact of a new type of exception
  • Removal validation overhead
    • Integration testing: yes, of the reference graph
    • System testing: n/a

Technology-level patterns

Sample presentation

 ASCRM-RLB-14: Parent Class Element with References to Child Class Element

Pattern Category: ASCRM_Reliability

Objective: Avoid parent class references to child class(es).

Consequence: Software that does not follow the principles of inheritance and polymorphism results in unexpected behaviors.

Measure Element: Number of instances where a parent class element that is used in the ‘to’ association of an Extends class relation, references the child class element used in the ‘from’ association of an Extends class relation, directly or indirectly through a parent and child class element, using a callable or data relation (the reference statement is made directly to the child class element or to any one of its own method or member elements).

Description: This pattern identifies situations where the <ParentClass> parent class element (code:StorableUnit of code:DataType code:ClassUnit) that is used in the ‘to’ association of the Extends class relation, references the <ChildClass> child class element (code:StorableUnit of code:DataType code:ClassUnit) used in the ‘from’ association of the Extends class relation (code:Extends), directly or indirectly through parent and child class element, with the <ReferenceStatement> callable or data relations (action:CallableRelations or action:DataRelations). The reference statement is made directly to the child class element or to any one of its own method or member elements (code:MethodUnit and code:MemberUnit).

 

Descriptor: ASCRM-RLB-14(ParentClass: parentClass,ChildClass: childClass, ReferenceStatement: referenceStatement)

Variable input (none applicable)

Comment (none applicable) 

List of Roles: ParentClass, ChildClass, ReferenceStatement 

It implies that a measurement solution will have to report the following elements for each occurrence of this pattern: 

  1. ParentClass, 
  2. ChildClass, 
  3. ReferenceStatement between the ParentClass and the ChildClass

Sample Analysis

The way to remove the TD item is generally moderately simple. It would require:

  • Pattern occurrence localisation: not complex in this case, thanks to the measurement report
  • Pattern occurrence understanding: understanding if the reference is a simple mistake or if it was “on purpose” to get a sophisticated expected behavior
  • Pattern occurrence removal: in the latter case (“on purpose”), the class design must be updated
  • Removal validation via unit testing
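As a minimal, hypothetical Java illustration of the “on purpose” case, where the class design must be updated (all class and member names are invented):

```java
// Occurrence of ASCRM-RLB-14: the parent class references its child,
// making the inheritance hierarchy point both up and down.
class Shape {
    double area() { return 0.0; }
    // Parent reaching down to a specific child class element:
    boolean isLargeCircle() { return this instanceof Circle && area() > 100.0; }
}

class Circle extends Shape {
    final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}

// After removal: the parent exposes a polymorphic method instead of
// referencing the child, so references only point upward.
class Shape2 {
    double area() { return 0.0; }
    boolean isLarge() { return area() > 100.0; }  // no child reference
}

class Circle2 extends Shape2 {
    final double radius;
    Circle2(double radius) { this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}

public class Rlb14Sample {
    public static void main(String[] args) {
        System.out.println(new Circle2(10).isLarge());  // → true
    }
}
```

The updated design keeps all references pointing upward in the hierarchy, which is what the integration testing of the reference graph then has to validate.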

Yet, the change of behavior caused by the removal of this occurrence is not limited to the internal behavior of the code element. It will impact the reference graph as well, i.e., it will cause Accruing Interest.

  • Understanding overhead: n/a
  • Removal validation overhead
    • Integration testing: yes, of the reference graph, to spot side effects on other components that must now rely on a different class design
    • System testing: n/a

System-level patterns

Sample presentation

ASCPEM-PRF-8: Control Elements Requiring Significant Resource Element within Control Flow Loop Block 

Pattern Category: ASCPEM_Performance_Efficiency

Objective: Avoid resource consuming operations found directly or indirectly within loops.

Consequence: Software that is coded so as to execute expensive computations repeatedly (such as in loops) requires excessive computational resources when the usage and data volume grow.

Measure Element: Number of instances where a control element that causes platform resource consumption is directly or indirectly called via an execution path starting from within a loop body block or within a loop condition.

Description: This pattern identifies situations where the <ExpensiveControlElement> control element (code:ControlElement), whose nature is known to cause platform resource consumption (platform:PlatformActions with platform:ResourceType), is directly or indirectly called via the <ExecutionPath> execution path (action:BlockUnit composed of action:ActionElements with action:CallableRelations to code:ControlElements), starting from within the loop body block (action:BlockUnit starting as the action:TrueFlow of the loop action:GuardedFlow and ending with an action:Flow back to the loop action:GuardedFlow) or within the loop condition (action:BlockUnit used in the action:GuardedFlow).

Descriptor: ASCPEM-PRF-8(LoopStatement: loopStatement,ExpensiveOperation: expensiveOperation, ExecutionPath: executionPath)

Variable input (none applicable)

Comment (none applicable)

List of Roles: LoopStatement, ExpensiveOperation, ExecutionPath

It implies that a measurement solution will have to report the following elements for each occurrence of this pattern:

  1. LoopStatement,
  2. ExpensiveOperation,
  3. ExecutionPath,
    the execution path (action:BlockUnit composed of action:ActionElements with action:CallableRelations to code:ControlElements), starting from within the loop body block (action:BlockUnit starting as the action:TrueFlow of the loop action:GuardedFlow and ending with an action:Flow back to the loop action:GuardedFlow) or within the loop condition (action:BlockUnit used in the action:GuardedFlow) and leading to the ExpensiveOperation

Sample analysis

The way to remove the TD item is generally not so simple. It would require:

  • Pattern occurrence localisation: not complex in this case, thanks to the measurement report
  • Pattern occurrence understanding
  • Pattern occurrence removal: new design of the feature to mutualize the operation (e.g.: new SQL query to process all necessary rows instead of a loop running a SQL query for one row at a time)  
  • Removal validation via unit testing
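A minimal Java sketch of this redesign, with the data resource simulated by an in-memory map instead of a real SQL database (names and data are invented):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class Prf8Sample {
    // Simulated data resource (stands in for a database table).
    static final Map<Integer, String> CUSTOMER_TABLE =
            Map.of(1, "Ada", 2, "Grace", 3, "Edsger");

    // Occurrence of ASCPEM-PRF-8: one "query" per loop iteration.
    public static String namesBefore(List<Integer> ids) {
        StringBuilder sb = new StringBuilder();
        for (int id : ids) {
            // Expensive resource access inside the loop body
            String name = queryOneRow(id);
            sb.append(name).append(';');
        }
        return sb.toString();
    }

    // After removal: a single bulk query processing all necessary rows.
    public static String namesAfter(List<Integer> ids) {
        Map<Integer, String> rows = queryAllRows(ids);  // one access
        return ids.stream().map(rows::get)
                .collect(Collectors.joining(";", "", ";"));
    }

    static String queryOneRow(int id) { return CUSTOMER_TABLE.get(id); }

    static Map<Integer, String> queryAllRows(List<Integer> ids) {
        return ids.stream().collect(
                Collectors.toMap(id -> id, CUSTOMER_TABLE::get));
    }

    public static void main(String[] args) {
        System.out.println(namesBefore(List.of(1, 2)));  // → Ada;Grace;
        System.out.println(namesAfter(List.of(1, 2)));   // → Ada;Grace;
    }
}
```

Both versions return the same result, but the redesigned one performs a single resource access regardless of the number of rows.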

However, the change of behavior caused by the removal of this occurrence is not limited to the internal behavior of a single code element. It will impact the execution graph, as well as the references to elements of the execution graph, i.e., it will cause Accruing Interest.

  • Understanding overhead: yes
  • Removal validation overhead
    • Integration testing: yes
    • System testing: very likely

 


 

ASCTDM Project 1: Conceptual Framework (in progress)

Project 1 will set the structure for defining measures:

  • Define the concept of Technical Debt and its component parts
  • Establish a conceptual framework for defining CISQ and related measures
  • Publish and publicize the framework to make it a common foundation
  • Develop a sustainable product improvement model for managing/removing TD

Introduction

As part of CISQ's mission to define computable metrics standards for measuring software quality and size, in support of IT executives' management of application development and maintenance activities, the Technical Debt metaphor from Ward Cunningham (“The WyCash Portfolio Management System”, OOPSLA ’92 Experience Report) is a natural candidate for a new computable metrics standard. On the one hand, the metaphor has proved successful in supporting communication between technical and executive audiences. On the other hand, the metaphor is used to designate such a wide and unspecified range of items that using any Technical Debt value is challenging as soon as the underlying technical facts are abstracted away.

Therefore, CISQ proposes a conceptual framework to remove the confusion and ambiguity from the metaphor by clearly identifying the elements contributing to Technical Debt and, among them, those that can be automatically measured to produce management indicators.

In other words, CISQ responds to the “call for action” from “Reducing friction in software development” (Paris Avgeriou, Philippe Kruchten, Robert L. Nord, Ipek Ozkaya, Carolyn Seaman, IEEE Software, January 2016) by providing a new software metrics standard in this area.

Proposed TD Conceptual Framework

Technical Debt vs. Features & Defects 

In agreement with “Technical Debt: From Metaphor to Theory and Practice” (Philippe Kruchten, Robert L. Nord, Ipek Ozkaya, IEEE Software, November/December 2012), Technical Debt is composed of items that are invisible to software end users but visible to the development and maintenance teams.

Technical Debt items

In agreement with the sidebar glossary from “Reducing friction in software development” (Paris Avgeriou, Philippe Kruchten, Robert L. Nord, Ipek Ozkaya, Carolyn Seaman, IEEE Software, January 2016), a Technical Debt item is one atomic element of technical debt connecting:

  1. a set of development artifacts, with
  2. consequences on the quality, value, and cost of the system, and triggered by
  3. some causes related to process, management, context, or business goals

Where:

  • a consequence is the effect on the value, quality, or cost of the current or future state of the system associated with technical debt items
    • Cost: the financial burden of developing or maintaining the product, which is mostly paying the people working on it
    • Value: the business value derived from the ultimate consumers of the product: its users, or acquirers, the people who are going to pay good money to use it, and the perceived utility of the product
    • Quality: the degree to which a system, component, or process meets customer or user needs or expectations (from IEEE Std 610)
  • a cause is the process, decision, action, lack of action, or external event that triggers the existence of a technical debt item

Development artifacts 

In agreement with “Technical Debt: From Metaphor to Theory and Practice” (Philippe Kruchten, Robert L. Nord, Ipek Ozkaya, IEEE Software, November/December 2012), development artifacts composing a Technical Debt item can be found in various locations:

  • Source Code, including implemented Software Structure and Architecture
  • Build Scripts
  • Test Scripts
  • Documentation (not in source code)
  • Technology
  • Decisions or Design, including Architecture Decisions (different from implemented Software Architecture)

In the context of CISQ software metrics standard definition, it is critical to understand that only the first location can be covered. This limitation does not prevent the definition of valuable measures, as:

  • benchmarking can leverage the comparison of software on a subset of the Technical Debt, as long as the audience is well aware of the limitation
  • a correlation between source code Technical Debt items and total Technical Debt (still to be validated by empirical data) can be beneficial to IT executives

Consequences on Cost and Value

In agreement with the sidebar glossary from “Reducing friction in software development” (Paris Avgeriou, Philippe Kruchten, Robert L. Nord, Ipek Ozkaya, Carolyn Seaman, IEEE Software, January 2016), consequences on Cost and Value can be split between

  • Principal, with the subdivision between
    • Initial Principal, the cost savings gained by taking some initial approach or “shortcut” in development
    • Current Principal, the cost it would take now to develop a different or “better” solution
  • Interest, with the subdivision between
    • Accruing Interest, the additional costs incurred by building new software depending on an element of technical debt, a non-optimal solution. These accrue over time on top of the initial principal, leading to the current principal
    • Recurring Interest, the additional costs incurred by the project in the presence of technical debt, due to reduced productivity (or velocity), induced defects, or loss of quality (maintainability and evolvability). These are sunk costs that are not recoverable

 In agreement with “Paying Down the Interest on Your Applications – A Guide to Measuring and Managing Technical Debt” (Bill Curtis), one must also consider:

  • Liability Cost, the cost to the business resulting from operational problems caused by flaws in production code.
  • Opportunity Cost, that is, the benefits that could have been achieved had resources been committed to developing new capability rather than being assigned to retire Technical Debt. Opportunity cost represents the tradeoff that application managers and executives must weigh when deciding how much effort to devote to retiring Technical Debt.

Even if not automatically measurable, these additional external costs are essential to consider when managing the Technical Debt and deciding which Technical Debt items to repay first.

Focus on Source Code Technical Debt items

When it comes to Technical Debt items that can be located in software source code, the following considerations are key:

  • are there source code patterns (including source code structure and architecture patterns) to locate TD items?
    • if so, which ones?
    • then, is there a one-to-one relation between occurrences of these source code patterns and actual Technical Debt items? 
      • In other words, are all individual occurrences of these source code patterns TD items? 
      • Do all individual occurrences of these source code patterns have consequences on the Cost, Value, or Quality of the software significant enough to unequivocally outweigh the cost of removing them?
      • Opposing the one-to-one relation is the argument that each single occurrence will not [have to] be removed from the software and a debt that will never [have to] be repaid is not a debt.
      • In favor of the one-to-one relation are the arguments that
        • Automated Source Code * Measures were specified with a “zero tolerance” in mind
        • The decision to not remove each single occurrence is team- or expert-specific and would therefore make benchmarking difficult.
      • The proposed solution to this dilemma is discussed in the next section “Considerations about benchmarking” 
  • what is the impact of the removal / remediation of the occurrences of source code patterns? The following situations generally exist:
    • Automatable remediation
    • Remediation only (no impact on compilation)
    • Unit testing required (as defined in SWEBOK V3.0 http://www.computer.org/web/swebok/v3)
    • Integration testing required
    • System testing required
    • Another way to present these situations is to relate them to the Current Principal and Accruing Interest concepts defined above.
      • The remediation effort/cost and, if applicable, unit testing estimate the Current Principal, that is, “the cost it would take now to develop a different or ‘better’ solution”
      • Integration and system testing, if applicable, estimate some of the Accruing Interest, “the additional costs incurred by building new software depending on an element of technical debt, a non-optimal solution.”

 

Considerations about benchmarking

The CISQ context puts a strong emphasis on the need to define software metrics standards that can be used to compare software products.

This implies a level of objectivity that is not always compatible with the theoretical definition of Technical Debt. Hence the need for Automated Source Code Technical Debt items, as distinct from Project Source Code Technical Debt items, which would accept the subjectivity of the decision whether or not to include a given Source Code Technical Debt item.

Similarly, the wide variability of organization/department/team maturity regarding testing makes it difficult to come up with an agreed-upon effort/cost estimation of testing-related additional costs. Automated Source Code Technical Debt items would come with an objective benchmarking configuration, while Project Source Code Technical Debt items would take into account the organization/department/team specifics.

The Project vs. Automated distinction is likely to apply to other locations of TD items as well. The introduction of this concept in association with source code TD alone stems from the existence of the Automated Source Code Reliability, Performance Efficiency, Security, and Maintainability Measures, which support an objective, automatable measure.

[Figure: TDConceptualFramework]

Cost structure of Source Code TD Items

Using the cost framework laid out in the section above, we can look at the cost structure of Source Code Technical Debt items.

Regarding the

  • Initial Principal, we can consider the cost of removing the occurrences of the Source Code TD pattern right away:
    • localization,
    • understanding,
    • removal
    • and removal validation at the component level, through unit testing.
  • Accruing Interest, we can consider the overhead costs due to the fact that other components of the software are built upon / using the components involved in the occurrence of the Source Code TD pattern:
    • understanding overhead, as one must understand a larger set of components to do the job properly,
    • removal validation overhead, as there are more components affected by the update of the components involved in the occurrence of the Source Code TD pattern, through
      • integration testing
      • system testing
  • Recurring Interest
    • when security, reliability, performance efficiency source code patterns are involved, they would lead to induced defects
    • when maintainability source code patterns are involved, they would lead to reduced productivity or velocity
  • Opportunity and Liability cost
    • when security, reliability, performance efficiency source code patterns are involved, they can lead to data theft, software outage, software unresponsiveness (or unacceptable response times), … 
    • when maintainability, security, reliability, performance efficiency  source code patterns are involved, they would lead to missed opportunities
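The measurable part of this cost structure can be sketched as a simple model. This is an illustrative sketch only: the class name, field names, and figures are hypothetical, and the Recurring Interest, Opportunity, and Liability components are deliberately left out because they are context-dependent.

```python
from dataclasses import dataclass

@dataclass
class SourceCodeTDItemCost:
    # Current Principal: fixing the occurrence of the Source Code TD pattern.
    localization: float
    understanding: float
    removal: float
    unit_testing: float
    # Accruing Interest: overheads from components built on top of it.
    understanding_overhead: float
    integration_testing: float
    system_testing: float

    def current_principal(self) -> float:
        # Localization, understanding, removal, and component-level validation.
        return (self.localization + self.understanding
                + self.removal + self.unit_testing)

    def accruing_interest(self) -> float:
        # Overheads caused by dependent components.
        return (self.understanding_overhead
                + self.integration_testing + self.system_testing)

    def total(self) -> float:
        return self.current_principal() + self.accruing_interest()

# Hypothetical effort figures (e.g., in hours) for one TD item.
item = SourceCodeTDItemCost(0.5, 1.0, 2.0, 1.0, 0.5, 1.5, 2.0)
assert item.current_principal() == 4.5
assert item.accruing_interest() == 4.0
```

Such a model makes explicit which cost components a static measurement tool can estimate, and which ones must be supplied by project-specific knowledge.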

[Figure: ASCTDItemsCostStructure]

 

 

 

 

Technical Debt Wiki

Meeting Presentations

 

Posted Resources

 

Three Project Teams

 

Project 1 will set the structure for defining measures:

  • Define the concept of Technical Debt and its component parts
  • Establish a conceptual framework for defining CISQ and related measures
  • Publish and publicize the framework to make it a common foundation
  • Develop a sustainable product improvement model for managing/removing TD
  • Should strive for conceptual definition in February

Members: Bill Curtis (CISQ), Philippe-Emmanuel Douziech (CAST), Robert Nord (SEI), Ipek Ozkaya (SEI), LiGuo Huang (SMU), Qiao Zhang (SMU), Dan Tucker (BAH), Emily Leung (BAH)

 

Project 2 will develop specifications for one or more measures:

  • At least one measure should be based on CISQ quality characteristic measures
  • Not all proposed TD measures must become CISQ measures
  • Weighting schemes to be applied to individual violations in CISQ measures
  • Measurement frameworks can be developed for interest, liability, etc.
  • Should strive for initial measure specification by May, earlier if possible

Members: Bill Curtis (CISQ), Philippe-Emmanuel Douziech (CAST), Luc Béasse (CAST), Robert Nord (SEI), Ipek Ozkaya (SEI), LiGuo Huang (SMU), Qiao Zhang (SMU), Jennifer Attanasi (BAH), Keith Tayloe (BAH)

 

Project 3 will try to provide empirical support and possible validation for the measures:

  • Apply GQM to the Project 1 conceptual framework and include Project 2 measures
  • Define measures of TD and outcomes, as well as methods of collection
  • Approach IT organizations for data collection
  • Publicize results and benefits

Members: Bill Curtis (CISQ), Philippe-Emmanuel Douziech (CAST), David Zubrow (SEI), Emily Leung (BAH), Jennifer Attanasi (BAH)

 

 

 

 

 

 

Questions? Contact Tracie at tracie.berardi@it-cisq.org.

 

Technical Debt computation – Introduction

Objectives

Define a formula to compute the Technical Debt linked to CISQ Quality Characteristics violations so as to be able:

  1. to adjust the productivity measurement of IT system software development projects
  2. to monitor and benchmark the quality of IT system software

Approach

  • Turn quality issues into effort or cost to adjust the amount of resources invested in the development project
  • Rely on CISQ Quality Characteristics measure elements
  • Model the Technical Debt Principal only, as the Technical Debt Interest Rates are highly context-dependent

Proposition

  • Split the Technical Debt Principal into
    • the Remediation Effort
    • the Unit Testing overhead, when applicable
    • the Integration Testing overhead, when applicable
    • the Complete Validation overhead, when applicable
  • Model the Remediation Effort of a violation of a given measure element as a measure-dependent, technology-dependent Remediation Effort duration to fix the first occurrence of the violation pattern in the object in violation, plus a measure-dependent, technology-dependent additional Remediation Effort duration for each additional occurrence of the violation pattern in the same object
    • E.g.: 30′ to fix the first occurrence in a method, plus 15′ to fix each additional occurrence in the same method
  • Model the Unit Testing overhead, when the fix to a violation requires it, as a measure-dependent, technology-dependent Unit Testing duration to unit-test the object in violation, once all occurrences of the measure element violation pattern have been fixed
    • E.g.: 30′ more to unit-test the method where 1+ occurrences of the violation pattern of the measure element have been removed
  • Model the Integration Testing overhead, when the fix to a violation requires it, as a technology-dependent Integration Testing duration to integration-test the object in violation, once all occurrences of the violation pattern of one or more measure elements have been fixed, factoring in the number of distinct call paths to the object (so as to account for the difficulty of managing integration tests for objects which impact many other components in the software)
    • E.g.: 30′ x 1.5 = 45′ more to integration-test the method where 1+ occurrences of the violation pattern of 1+ measure elements have been removed and which is used in X call paths
  • Model the Complete Validation overhead, when the fix to a violation requires it, as a Complete Validation duration to validate the fixed object in violation, once all occurrences of the violation pattern of one or more measure elements have been fixed, to be run once after fixing a batch of X objects
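The proposed model can be sketched numerically as follows (durations in minutes). The default parameter values mirror the examples above but are hypothetical, measure- and technology-dependent configuration values, not standardized constants.

```python
def remediation_effort(occurrences, first=30, additional=15):
    """Effort to fix all occurrences of a violation pattern in one object:
    a duration for the first occurrence, plus a smaller duration for each
    additional occurrence in the same object."""
    if occurrences == 0:
        return 0
    return first + (occurrences - 1) * additional

def unit_testing_overhead(fix_requires_it, duration=30):
    """Overhead to unit-test the object once its occurrences are fixed."""
    return duration if fix_requires_it else 0

def integration_testing_overhead(fix_requires_it, call_paths,
                                 duration=30, factor_per_path=0.5):
    """Overhead to integration-test the object, factoring in the number of
    distinct call paths to it (hypothetical linear factor)."""
    if not fix_requires_it:
        return 0
    return duration * (1 + factor_per_path * call_paths)

# E.g.: 2 occurrences in a method that is used in 1 call path:
effort = (remediation_effort(2)                    # 30 + 15 = 45
          + unit_testing_overhead(True)            # 30
          + integration_testing_overhead(True, 1)) # 30 x 1.5 = 45
assert effort == 120
```

The Complete Validation overhead is not modeled here since it is amortized over a batch of X objects rather than attributed to a single violation.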

Next steps

  • Validate the formulas “on paper”
  • Share a prototype to compute these values on real applications to validate the formulas “in the field” (prototype is ready, mailto:p.douziech@castsoftware.com)
  • Validate a default configuration to be used for CISQ objective benchmarking

AEFP data collection tooling

Objectives

This page details the data collection tooling developed to support the different study threads about AEFP.

 

 

“how to” videos

CISQ_AEFP_DataCollectionTool_extendedTransactions
CISQ_AEFP_DataCollectionTool_evolutions
(click on links to download the videos)
 

 

Considered AEFP threads

The following threads are considered

  1. analyze the impact of extending the transactional AFP content with the entire call graph or reference graph starting from the transactional AFP entry point: volume, type, …
    1. I.e., does it extend the content of the current transactional AFP by much?
    2. I.e., does it allow linking to a transactional AFP objects that impact the behavior of the transaction yet are not directly between the transactional AFP entry point and the transactional AFP data entities?
  2. analyze the “Dark Matter” of an application, i.e., the part of the code that is not involved in transactional AFP (extended or not): volume, type, …
    1. I.e., how much of an application is outside of transactional AFP?
    2. I.e., what kind of code is outside of transactional AFP? 
      • E.g.: code that is used in a Java Web Application Servlet Mapping
  3. analyze the level of sharing of objects between transactional AFPs (extended or not): volume, type, …
    1. I.e., how much is shared between transactional AFPs?
    2. I.e., what kind of code is shared between transactional AFPs? Technical? Functional?
  4. analyze the weight of the evolution of a transactional AFP (extended or not): share of the transaction, object type, complexity, …
    1. I.e., which percentage of a transactional AFP is typically updated? using object count, using effort complexity, …

 

 

Available tooling – Toolkit description

As of 2015.2.25, the toolkit is composed of 2 executable files, compatible with CAST Storage Service implementations:

  • extendedTransactions.exe
  • evolutions.exe

Their execution syntaxes are:

  • extendedTransactions.exe 
    -url jdbc:postgresql://pdow7lap:2280/postgres
    -driver org.postgresql.Driver
    -schema cisqxfp_local
    -password CastAIP
    -process
  • evolutions.exe 
    -url jdbc:postgresql://pdow7lap:2280/postgres
    -driver org.postgresql.Driver
    -AS_schema cisqxfp_local
    -DS_schema cisqxfp_central
    -password CastAIP
    -process

Notes:

  • replace “-process” with “-reset” to ensure you use the latest implementation tables, etc.
  • replace “-process” with “-delete” to delete results and therefore remove the added volume of data
  • the execution log
    • is stored in DSS_HISTORY (default) and can be externalized into a file
    • can be enriched with JDBC debug information

 

 

Available tooling – Download

  • ftp://ftp.castsoftware.com using cisqaefp_user login (mailto:p.douziech@castsoftware.com to get the password)
    • extendedTransactions.zip
    • evolutions.zip

 Notes:

  • Zip files contain the executable files as well as a library folder
  • Library folder can be shared by all executable files

 

Available tooling – Computation data description

The toolkit executables populate the following tables:

  • in the target Analysis Service, by extendedTransactions.exe:
    • DSS_OBJECT_PATHS: populated with all objects from CDT_OBJECTS, it indicates 
      • the object ID with OBJECT_ID
      • if the object is “internal” to at least one analysis project, when MIN_PROPERTIES is 0
      • the object type ID, with OBJECT_TYPE_ID
      • how many times the object is referenced, with OBJECT_CALL_COUNT value
      • how many times the object is part of a transactional AFP (current content), with OBJECT_AFP_COUNT value
      • how many times the object is called in an extended transactional AFP, with OBJECT_AFP_CALL_COUNT value
      • how many times the object is referenced in an extended transactional AFP, with OBJECT_AFP_REF_COUNT value
    • DSS_PATH_CONTENT: populated with all transactional AFP entry points and their children when the transactional AFP is extended to all call and reference paths from these entry points, it indicates
      • the object ID with OBJECT_ID
      • the child ID with CHILD_ID
  • in the target Dashboard Service, by evolutions.exe:
    • DSS_OBJECT_PATH_EVOLUTIONS: populated with all the objects from DSS_OBJECT_PATHS of the linked Analysis Service, it indicates
      • the object ID in Dashboard Service with OBJECT_ID (-1, when present only in the Analysis Service)
      • the object ID in Analysis Service with SITE_OBJECT_ID
      • the artifact nature of the object, with IS_ARTIFACT (Yes = 1, No = -1)
      • the effort complexity of the object, with COMPLEXITY (N/A = -1, Low = 0, Medium = 1, High = 2, Extreme = 3)
      • the object type ID, with OBJECT_TYPE_ID
      • the object status between the latest snapshot and the previous one, with OBJECT_ST
      • how many times the object is referenced, with OBJECT_CALL_COUNT value
      • how many times the object is part of a transactional AFP (current content), with OBJECT_AFP_COUNT value
      • how many times the object is called in an extended transactional AFP, with OBJECT_AFP_CALL_COUNT value
      • how many times the object is referenced in an extended transactional AFP, with OBJECT_AFP_REF_COUNT value
    • DSS_PATH_CONTENT: populated with all transactional AFP entry points and their children from DSS_PATH_CONTENT of the linked Analysis Service, it indicates
      • the entry point object ID in Dashboard Service with OBJECT_ID (-1, when present only in the Analysis Service)
      • the entry point object ID in Analysis Service with SITE_OBJECT_ID
      • the child ID in Dashboard Service with CHILD_ID (-1, when present only in the Analysis Service)
      • the child ID in Analysis Service with SITE_CHILD_ID
    • DSS_PATH_EVOLUTIONS: populated with all transactional AFP entry points from DSS_PATH_CONTENT of the linked Analysis Service, it indicates
      • the entry point object ID in Dashboard Service with OBJECT_ID (-1, when present only in the Analysis Service)
      • the entry point object ID in Analysis Service with SITE_OBJECT_ID
      • the count of  [all|shared|added|updated] [objects|artifacts] in the extended transactional AFP, with [ |SHARED_|ADDED_|UPDATED_][OBJECT|ARTIFACT]_COUNT
      • the sum of the complexity value of [all|shared] [all|added|updated] artifacts in the extended transactional AFP, with [ |SHARED_]COMPLEXITY[ |_IN_ADDED|_IN_UPDATED]
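The bracketed notation used for DSS_PATH_EVOLUTIONS is a compact listing of column names. As a sketch, expanding the two patterns programmatically yields the full column sets:

```python
from itertools import product

# [ |SHARED_|ADDED_|UPDATED_][OBJECT|ARTIFACT]_COUNT
prefixes = ["", "SHARED_", "ADDED_", "UPDATED_"]
kinds = ["OBJECT", "ARTIFACT"]
count_columns = [f"{p}{k}_COUNT" for p, k in product(prefixes, kinds)]

# [ |SHARED_]COMPLEXITY[ |_IN_ADDED|_IN_UPDATED]
complexity_columns = [f"{p}COMPLEXITY{s}"
                      for p in ["", "SHARED_"]
                      for s in ["", "_IN_ADDED", "_IN_UPDATED"]]

assert len(count_columns) == 8
assert "SHARED_ARTIFACT_COUNT" in count_columns
assert len(complexity_columns) == 6
assert "SHARED_COMPLEXITY_IN_UPDATED" in complexity_columns
```

So, for example, UPDATED_OBJECT_COUNT holds the count of updated objects in the extended transactional AFP, and COMPLEXITY_IN_ADDED holds the summed complexity of added artifacts.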

Note that some other working tables are created:

  • in the target Analysis Service, by extendedTransactions.exe:
    • DSS_PATH_DETAILS
    • DSS_PATH_BRANCHES
    • DSS_PATH_LINKS

     

 

 

Available tooling – Computation follow-up

With:

  SELECT   '2 - branche processing follow-up ' "task",
         COUNT(1)                           ,
         depth                              ,
         CASE status
                  WHEN 2
                  THEN '2 - Done'
                  WHEN 1
                  THEN '1 - Processing'
                  WHEN 0
                  THEN '0 - New'
         END "status"
FROM     dss_path_branches
GROUP BY depth,
         status
ORDER BY 1         ,
         depth DESC,
         status DESC;

 On Analysis Service, you get the following data:

task                              | count | depth | status
2 – branche processing follow-up  | 22663 | 10    | 0 – New
2 – branche processing follow-up  | 93    | 1     | 2 – Done
2 – branche processing follow-up  | 373   | 1     | 0 – New

Which means that:

  • 93 branches starting at depth of 1, i.e., transaction heads, are processed
  • 373 branches starting at depth of 1, i.e., transaction heads, are still to be processed
  • 22663 branches starting at depth of 10, i.e., transaction branches, are still to be processed, once the branches at depth of 1 are all processed

 

 

Available tooling – Sample reporting

Analysis Service – DSS_OBJECT_PATHS – Sample #1

With:

SELECT   COUNT(DISTINCT op.object_id) "count",
         object_type_name                    ,
         object_min_properties "properties"  ,
         '0 - all'             "link_nature"
FROM     dss_object_paths op,
         dss_object_types ot
WHERE    ot.object_type_id = op.object_type_id
GROUP BY object_type_name,
         object_min_properties

UNION ALL

SELECT   COUNT(DISTINCT op.object_id),
         object_type_name            ,
         object_min_properties       ,
         '1 - in basic AFP'
FROM     dss_object_paths op,
         dss_object_types ot
WHERE    ot.object_type_id   = op.object_type_id
AND      op.object_afp_count > 0
GROUP BY object_type_name,
         object_min_properties

UNION ALL

SELECT   COUNT(DISTINCT op.object_id),
         object_type_name            ,
         object_min_properties       ,
         '3 - called in AFP'
FROM     dss_object_paths op,
         dss_object_types ot
WHERE    ot.object_type_id        = op.object_type_id
AND      op.object_afp_call_count > 0
GROUP BY object_type_name,
         object_min_properties

UNION ALL

SELECT   COUNT(DISTINCT op.object_id),
         object_type_name            ,
         object_min_properties       ,
         '4 - referenced in AFP'
FROM     dss_object_paths op,
         dss_object_types ot
WHERE    ot.object_type_id       = op.object_type_id
AND      op.object_afp_ref_count > 0
GROUP BY object_type_name,
         object_min_properties

UNION ALL

SELECT   COUNT(DISTINCT op.object_id),
         object_type_name            ,
         object_min_properties       ,
         '97 - not in basic AFP'
FROM     dss_object_paths op,
         dss_object_types ot
WHERE    ot.object_type_id   = op.object_type_id
AND      op.object_afp_count = 0
GROUP BY object_type_name,
         object_min_properties

UNION ALL

SELECT   COUNT(DISTINCT op.object_id),
         object_type_name            ,
         object_min_properties       ,
         '98 - not in AFP'
FROM     dss_object_paths op,
         dss_object_types ot
WHERE    ot.object_type_id        = op.object_type_id
AND      op.object_afp_call_count = 0
AND      op.object_afp_ref_count  = 0
AND      op.object_afp_count      = 0
GROUP BY object_type_name,
         object_min_properties

UNION ALL

SELECT   COUNT(DISTINCT op.object_id),
         object_type_name            ,
         object_min_properties       ,
         '99 - not referenced'
FROM     dss_object_paths op,
         dss_object_types ot
WHERE    ot.object_type_id    = op.object_type_id
AND      op.object_call_count = 0
GROUP BY object_type_name,
         object_min_properties
ORDER BY 2,
         3,
         4;

On Analysis Service, you get the following data:

2 Bean Property 0 0 – all
1 Bean Property 0 3 – called in AFP
1 Bean Property 0 4 – referenced in AFP
2 Bean Property 0 97 – not in basic AFP
1 Bean Property 0 98 – not in AFP
3 External URL 0 0 – all
3 External URL 0 97 – not in basic AFP
3 External URL 0 98 – not in AFP
3 External URL 0 99 – not referenced
19 Generic Java Class 1 0 – all
13 Generic Java Class 1 4 – referenced in AFP
19 Generic Java Class 1 97 – not in basic AFP
6 Generic Java Class 1 98 – not in AFP
5 Generic Java Class 1 99 – not referenced
18 Generic Java Interface 1 0 – all
12 Generic Java Interface 1 4 – referenced in AFP
18 Generic Java Interface 1 97 – not in basic AFP
6 Generic Java Interface 1 98 – not in AFP
2 Generic Java Interface 1 99 – not referenced
9 Generic Java Method 1 0 – all
9 Generic Java Method 1 97 – not in basic AFP
9 Generic Java Method 1 98 – not in AFP
8 Generic Java Method 1 99 – not referenced
9 Generic Java Type Parameter 1 0 – all
8 Generic Java Type Parameter 1 4 – referenced in AFP
9 Generic Java Type Parameter 1 97 – not in basic AFP
1 Generic Java Type Parameter 1 98 – not in AFP
74 HTML Event 0 0 – all
74 HTML Event 0 1 – in basic AFP
45 HTML Event 0 4 – referenced in AFP
3 Hibernate Configuration File 0 0 – all
3 Hibernate Configuration File 0 97 – not in basic AFP
3 Hibernate Configuration File 0 98 – not in AFP
3 Hibernate Configuration File 0 99 – not referenced
244 Hibernate Entity 0 0 – all
108 Hibernate Entity 0 1 – in basic AFP
76 Hibernate Entity 0 3 – called in AFP
92 Hibernate Entity 0 4 – referenced in AFP
136 Hibernate Entity 0 97 – not in basic AFP
127 Hibernate Entity 0 98 – not in AFP
82 Hibernate Entity 0 99 – not referenced
2228 Hibernate Entity Property 0 0 – all
424 Hibernate Entity Property 0 4 – referenced in AFP
2228 Hibernate Entity Property 0 97 – not in basic AFP
1804 Hibernate Entity Property 0 98 – not in AFP
1500 Hibernate Entity Property 0 99 – not referenced
244 Hibernate Mapping File 0 0 – all
244 Hibernate Mapping File 0 97 – not in basic AFP
244 Hibernate Mapping File 0 98 – not in AFP
34 Hibernate Mapping File 0 99 – not referenced
180 J2EE Scoped Bean 0 0 – all
25 J2EE Scoped Bean 0 1 – in basic AFP
94 J2EE Scoped Bean 0 3 – called in AFP
94 J2EE Scoped Bean 0 4 – referenced in AFP
155 J2EE Scoped Bean 0 97 – not in basic AFP
66 J2EE Scoped Bean 0 98 – not in AFP
3 J2EE Web Application Descriptor 0 0 – all
3 J2EE Web Application Descriptor 0 97 – not in basic AFP
3 J2EE Web Application Descriptor 0 98 – not in AFP
3 J2EE Web Application Descriptor 0 99 – not referenced
1 J2EE XML File 0 0 – all
1 J2EE XML File 0 97 – not in basic AFP
1 J2EE XML File 0 98 – not in AFP
1 J2EE XML File 0 99 – not referenced
51 J2EE XML File 1 0 – all
51 J2EE XML File 1 97 – not in basic AFP
51 J2EE XML File 1 98 – not in AFP
51 J2EE XML File 1 99 – not referenced
4 JSP Application 0 0 – all
4 JSP Application 0 97 – not in basic AFP
4 JSP Application 0 98 – not in AFP
4 JSP Application 0 99 – not referenced
186 JSP Custom Tag 0 0 – all
29 JSP Custom Tag 0 3 – called in AFP
29 JSP Custom Tag 0 4 – referenced in AFP
186 JSP Custom Tag 0 97 – not in basic AFP
157 JSP Custom Tag 0 98 – not in AFP
120 JSP Custom Tag 0 99 – not referenced
3090 JSP Custom Tag Attribute 0 0 – all
96 JSP Custom Tag Attribute 0 3 – called in AFP
96 JSP Custom Tag Attribute 0 4 – referenced in AFP
3090 JSP Custom Tag Attribute 0 97 – not in basic AFP
2994 JSP Custom Tag Attribute 0 98 – not in AFP
2902 JSP Custom Tag Attribute 0 99 – not referenced
9 JSP Custom Tag Library 0 0 – all
3 JSP Custom Tag Library 0 4 – referenced in AFP
9 JSP Custom Tag Library 0 97 – not in basic AFP
6 JSP Custom Tag Library 0 98 – not in AFP
17 Java Annotation Type 1 0 – all
11 Java Annotation Type 1 4 – referenced in AFP
17 Java Annotation Type 1 97 – not in basic AFP
6 Java Annotation Type 1 98 – not in AFP
596 Java Class 0 0 – all
59 Java Class 0 1 – in basic AFP
307 Java Class 0 4 – referenced in AFP
537 Java Class 0 97 – not in basic AFP
286 Java Class 0 98 – not in AFP
80 Java Class 0 99 – not referenced
543 Java Class 1 0 – all
3 Java Class 1 3 – called in AFP
208 Java Class 1 4 – referenced in AFP
543 Java Class 1 97 – not in basic AFP
335 Java Class 1 98 – not in AFP
56 Java Class 1 99 – not referenced
387 Java Constructor 0 0 – all
2 Java Constructor 0 1 – in basic AFP
74 Java Constructor 0 3 – called in AFP
74 Java Constructor 0 4 – referenced in AFP
385 Java Constructor 0 97 – not in basic AFP
313 Java Constructor 0 98 – not in AFP
276 Java Constructor 0 99 – not referenced
150 Java Constructor 1 0 – all
70 Java Constructor 1 3 – called in AFP
70 Java Constructor 1 4 – referenced in AFP
150 Java Constructor 1 97 – not in basic AFP
80 Java Constructor 1 98 – not in AFP
374 Java Constructor 2 0 – all
42 Java Constructor 2 3 – called in AFP
42 Java Constructor 2 4 – referenced in AFP
374 Java Constructor 2 97 – not in basic AFP
332 Java Constructor 2 98 – not in AFP
290 Java Constructor 2 99 – not referenced
3 Java Enum 0 0 – all
2 Java Enum 0 4 – referenced in AFP
3 Java Enum 0 97 – not in basic AFP
1 Java Enum 0 98 – not in AFP
8 Java Enum Item 0 0 – all
6 Java Enum Item 0 3 – called in AFP
6 Java Enum Item 0 4 – referenced in AFP
8 Java Enum Item 0 97 – not in basic AFP
2 Java Enum Item 0 98 – not in AFP
3226 Java Field 0 0 – all
121 Java Field 0 1 – in basic AFP
1442 Java Field 0 3 – called in AFP
1442 Java Field 0 4 – referenced in AFP
3105 Java Field 0 97 – not in basic AFP
1750 Java Field 0 98 – not in AFP
461 Java Field 0 99 – not referenced
98 Java Field 1 0 – all
28 Java Field 1 3 – called in AFP
28 Java Field 1 4 – referenced in AFP
98 Java Field 1 97 – not in basic AFP
70 Java Field 1 98 – not in AFP
677 Java File 0 0 – all
677 Java File 0 97 – not in basic AFP
677 Java File 0 98 – not in AFP
677 Java File 0 99 – not referenced
1141 Java File 1 0 – all
1141 Java File 1 97 – not in basic AFP
1141 Java File 1 98 – not in AFP
1141 Java File 1 99 – not referenced
29 Java Initializer 0 0 – all
29 Java Initializer 0 97 – not in basic AFP
29 Java Initializer 0 98 – not in AFP
29 Java Initializer 0 99 – not referenced
22 Java Instantiated Class 0 0 – all
3 Java Instantiated Class 0 1 – in basic AFP
4 Java Instantiated Class 0 4 – referenced in AFP
19 Java Instantiated Class 0 97 – not in basic AFP
15 Java Instantiated Class 0 98 – not in AFP
4 Java Instantiated Class 1 0 – all
1 Java Instantiated Class 1 4 – referenced in AFP
4 Java Instantiated Class 1 97 – not in basic AFP
3 Java Instantiated Class 1 98 – not in AFP
1 Java Instantiated Class 1 99 – not referenced
139 Java Instantiated Interface 0 0 – all
78 Java Instantiated Interface 0 1 – in basic AFP
89 Java Instantiated Interface 0 4 – referenced in AFP
61 Java Instantiated Interface 0 97 – not in basic AFP
41 Java Instantiated Interface 0 98 – not in AFP
15 Java Instantiated Interface 1 0 – all
10 Java Instantiated Interface 1 4 – referenced in AFP
15 Java Instantiated Interface 1 97 – not in basic AFP
5 Java Instantiated Interface 1 98 – not in AFP
27 Java Instantiated Method 0 0 – all
10 Java Instantiated Method 0 1 – in basic AFP
13 Java Instantiated Method 0 3 – called in AFP
13 Java Instantiated Method 0 4 – referenced in AFP
17 Java Instantiated Method 0 97 – not in basic AFP
10 Java Instantiated Method 0 98 – not in AFP
95 Java Interface 0 0 – all
75 Java Interface 0 4 – referenced in AFP
95 Java Interface 0 97 – not in basic AFP
20 Java Interface 0 98 – not in AFP
169 Java Interface 1 0 – all
1 Java Interface 1 3 – called in AFP
78 Java Interface 1 4 – referenced in AFP
169 Java Interface 1 97 – not in basic AFP
91 Java Interface 1 98 – not in AFP
7 Java Interface 1 99 – not referenced
6836 Java Method 0 0 – all
1151 Java Method 0 1 – in basic AFP
2378 Java Method 0 3 – called in AFP
2386 Java Method 0 4 – referenced in AFP
5685 Java Method 0 97 – not in basic AFP
4221 Java Method 0 98 – not in AFP
2559 Java Method 0 99 – not referenced
779 Java Method 1 0 – all
334 Java Method 1 3 – called in AFP
389 Java Method 1 4 – referenced in AFP
779 Java Method 1 97 – not in basic AFP
390 Java Method 1 98 – not in AFP
10 Java Method 1 99 – not referenced
6 Java Method 2 0 – all
6 Java Method 2 97 – not in basic AFP
6 Java Method 2 98 – not in AFP
6 Java Method 2 99 – not referenced
265 Java Method 3 0 – all
265 Java Method 3 1 – in basic AFP
94 Java Method 3 4 – referenced in AFP
127 Java Package 0 0 – all
127 Java Package 0 97 – not in basic AFP
127 Java Package 0 98 – not in AFP
127 Java Package 0 99 – not referenced
217 Java Package 1 0 – all
217 Java Package 1 97 – not in basic AFP
217 Java Package 1 98 – not in AFP
217 Java Package 1 99 – not referenced
4 Java Project 0 0 – all
4 Java Project 0 97 – not in basic AFP
4 Java Project 0 98 – not in AFP
4 Java Project 0 99 – not referenced
119 Java Properties File 0 0 – all
119 Java Properties File 0 97 – not in basic AFP
119 Java Properties File 0 98 – not in AFP
119 Java Properties File 0 99 – not referenced
9379 Java Property Mapping 0 0 – all
108 Java Property Mapping 0 1 – in basic AFP
569 Java Property Mapping 0 3 – called in AFP
569 Java Property Mapping 0 4 – referenced in AFP
9271 Java Property Mapping 0 97 – not in basic AFP
8796 Java Property Mapping 0 98 – not in AFP
8238 Java Property Mapping 0 99 – not referenced
5 Java Subset 0 0 – all
5 Java Subset 0 97 – not in basic AFP
5 Java Subset 0 98 – not in AFP
5 Java Subset 0 99 – not referenced
4 JavaBeans 0 0 – all
4 JavaBeans 0 97 – not in basic AFP
4 JavaBeans 0 98 – not in AFP
4 JavaBeans 0 99 – not referenced
1229 Javascript Client Side Method 0 0 – all
495 Javascript Client Side Method 0 1 – in basic AFP
74 Javascript Client Side Method 0 3 – called in AFP
187 Javascript Client Side Method 0 4 – referenced in AFP
734 Javascript Client Side Method 0 97 – not in basic AFP
733 Javascript Client Side Method 0 98 – not in AFP
651 Javascript Client Side Method 0 99 – not referenced
297 Javascript Client Side Method 1 0 – all
297 Javascript Client Side Method 1 97 – not in basic AFP
297 Javascript Client Side Method 1 98 – not in AFP
263 Javascript Client Side Method 1 99 – not referenced
59 Javascript Client Side Variable 0 0 – all
59 Javascript Client Side Variable 0 97 – not in basic AFP
59 Javascript Client Side Variable 0 98 – not in AFP
3 Javascript Client Side Variable 1 0 – all
3 Javascript Client Side Variable 1 97 – not in basic AFP
3 Javascript Client Side Variable 1 98 – not in AFP
3 Oracle Schema Subset 0 0 – all
3 Oracle Schema Subset 0 97 – not in basic AFP
3 Oracle Schema Subset 0 98 – not in AFP
3 Oracle Schema Subset 0 99 – not referenced
418 Oracle check table constraint 0 0 – all
418 Oracle check table constraint 0 97 – not in basic AFP
418 Oracle check table constraint 0 98 – not in AFP
418 Oracle check table constraint 0 99 – not referenced
15 Oracle dml trigger 0 0 – all
9 Oracle dml trigger 0 1 – in basic AFP
11 Oracle dml trigger 0 3 – called in AFP
11 Oracle dml trigger 0 4 – referenced in AFP
6 Oracle dml trigger 0 97 – not in basic AFP
3 Oracle dml trigger 0 98 – not in AFP
235 Oracle index 0 0 – all
235 Oracle index 0 97 – not in basic AFP
235 Oracle index 0 98 – not in AFP
235 Oracle index 0 99 – not referenced
89 Oracle primary key table constraint 0 0 – all
89 Oracle primary key table constraint 0 97 – not in basic AFP
89 Oracle primary key table constraint 0 98 – not in AFP
89 Oracle primary key table constraint 0 99 – not referenced
15 Oracle sequence 0 0 – all
15 Oracle sequence 0 97 – not in basic AFP
15 Oracle sequence 0 98 – not in AFP
15 Oracle sequence 0 99 – not referenced
90 Oracle table 0 0 – all
59 Oracle table 0 1 – in basic AFP
66 Oracle table 0 3 – called in AFP
66 Oracle table 0 4 – referenced in AFP
31 Oracle table 0 97 – not in basic AFP
22 Oracle table 0 98 – not in AFP
1 Oracle table 0 99 – not referenced
783 Oracle table column 0 0 – all
11 Oracle table column 0 3 – called in AFP
11 Oracle table column 0 4 – referenced in AFP
783 Oracle table column 0 97 – not in basic AFP
772 Oracle table column 0 98 – not in AFP
317 Oracle table column 0 99 – not referenced
1 PL/SQL Project 0 0 – all
1 PL/SQL Project 0 97 – not in basic AFP
1 PL/SQL Project 0 98 – not in AFP
1 PL/SQL Project 0 99 – not referenced
1 SQL instance 0 0 – all
1 SQL instance 0 97 – not in basic AFP
1 SQL instance 0 98 – not in AFP
1 SQL instance 0 99 – not referenced
1 SQL schema 0 0 – all
1 SQL schema 0 97 – not in basic AFP
1 SQL schema 0 98 – not in AFP
1 SQL schema 0 99 – not referenced
7 Servlet 0 0 – all
7 Servlet 0 97 – not in basic AFP
7 Servlet 0 98 – not in AFP
3 Servlet 0 99 – not referenced
4 Servlet Attributes Scope 0 0 – all
4 Servlet Attributes Scope 0 97 – not in basic AFP
4 Servlet Attributes Scope 0 98 – not in AFP
4 Servlet Attributes Scope 0 99 – not referenced
5 Servlet Mapping 0 0 – all
5 Servlet Mapping 0 97 – not in basic AFP
5 Servlet Mapping 0 98 – not in AFP
5 Servlet Mapping 0 99 – not referenced
238 Spring Bean 0 0 – all
57 Spring Bean 0 3 – called in AFP
57 Spring Bean 0 4 – referenced in AFP
238 Spring Bean 0 97 – not in basic AFP
181 Spring Bean 0 98 – not in AFP
165 Spring Bean 0 99 – not referenced
462 Spring Bean 1 0 – all
1 Spring Bean 1 3 – called in AFP
1 Spring Bean 1 4 – referenced in AFP
462 Spring Bean 1 97 – not in basic AFP
461 Spring Bean 1 98 – not in AFP
460 Spring Bean 1 99 – not referenced
15 Spring Beans File 0 0 – all
15 Spring Beans File 0 97 – not in basic AFP
15 Spring Beans File 0 98 – not in AFP
15 Spring Beans File 0 99 – not referenced
15 Spring Beans File 1 0 – all
15 Spring Beans File 1 97 – not in basic AFP
15 Spring Beans File 1 98 – not in AFP
7 Spring Beans File 1 99 – not referenced
261 Struts Action 0 0 – all
150 Struts Action 0 1 – in basic AFP
111 Struts Action 0 3 – called in AFP
111 Struts Action 0 4 – referenced in AFP
111 Struts Action 0 97 – not in basic AFP
104 Struts Action 0 98 – not in AFP
87 Struts Action 0 99 – not referenced
15 Struts Interceptor 0 0 – all
8 Struts Interceptor 0 4 – referenced in AFP
15 Struts Interceptor 0 97 – not in basic AFP
7 Struts Interceptor 0 98 – not in AFP
72 Struts Interceptor 1 0 – all
2 Struts Interceptor 1 3 – called in AFP
20 Struts Interceptor 1 4 – referenced in AFP
72 Struts Interceptor 1 97 – not in basic AFP
52 Struts Interceptor 1 98 – not in AFP
28 Struts Interceptor 1 99 – not referenced
17 Struts Interceptor Stack 0 0 – all
9 Struts Interceptor Stack 0 4 – referenced in AFP
17 Struts Interceptor Stack 0 97 – not in basic AFP
8 Struts Interceptor Stack 0 98 – not in AFP
22 Struts Interceptor Stack 1 0 – all
1 Struts Interceptor Stack 1 4 – referenced in AFP
22 Struts Interceptor Stack 1 97 – not in basic AFP
21 Struts Interceptor Stack 1 98 – not in AFP
18 Struts Interceptor Stack 1 99 – not referenced
26 Struts Package 0 0 – all
6 Struts Package 0 3 – called in AFP
7 Struts Package 0 4 – referenced in AFP
26 Struts Package 0 97 – not in basic AFP
19 Struts Package 0 98 – not in AFP
5 Struts Package 0 99 – not referenced
6 Struts Package 1 0 – all
1 Struts Package 1 4 – referenced in AFP
6 Struts Package 1 97 – not in basic AFP
5 Struts Package 1 98 – not in AFP
3 Struts Package 1 99 – not referenced
474 Struts Result 0 0 – all
56 Struts Result 0 1 – in basic AFP
166 Struts Result 0 3 – called in AFP
166 Struts Result 0 4 – referenced in AFP
418 Struts Result 0 97 – not in basic AFP
289 Struts Result 0 98 – not in AFP
120 Struts Result 0 99 – not referenced
32 Struts Validator 1 0 – all
5 Struts Validator 1 3 – called in AFP
5 Struts Validator 1 4 – referenced in AFP
32 Struts Validator 1 97 – not in basic AFP
27 Struts Validator 1 98 – not in AFP
22 Struts Validator 1 99 – not referenced
15 Struts2 Configuration File 0 0 – all
15 Struts2 Configuration File 0 97 – not in basic AFP
15 Struts2 Configuration File 0 98 – not in AFP
2 Struts2 Configuration File 0 99 – not referenced
8 Struts2 Configuration File 1 0 – all
8 Struts2 Configuration File 1 97 – not in basic AFP
8 Struts2 Configuration File 1 98 – not in AFP
6 Struts2 Configuration File 1 99 – not referenced
33 URL 0 0 – all
3 URL 0 4 – referenced in AFP
33 URL 0 97 – not in basic AFP
30 URL 0 98 – not in AFP
19 URL 0 99 – not referenced
450 eDirectory 0 0 – all
450 eDirectory 0 97 – not in basic AFP
450 eDirectory 0 98 – not in AFP
450 eDirectory 0 99 – not referenced
458 eFile 0 0 – all
458 eFile 0 1 – in basic AFP
21 eFile 0 3 – called in AFP
39 eFile 0 4 – referenced in AFP
326 eFile 0 99 – not referenced
24 eFile 1 0 – all
10 eFile 1 4 – referenced in AFP
24 eFile 1 97 – not in basic AFP
14 eFile 1 98 – not in AFP
7 eFile 1 99 – not referenced


Dashboard Service – DSS_PATH_EVOLUTIONS – Sample #1 

With the following query:

SELECT
sum(object_count)/count(object_id) "average number of children",
sum(artifact_count)/count(object_id) "average number of artifact children",
sum(artifact_count)/sum(object_count)::decimal "average ratio of artifact children",
sum(added_object_count)/count(object_id) "average number of added children",
sum(added_object_count)/sum(object_count)::decimal "average ratio of added children",
sum(added_artifact_count)/count(object_id) "average number of added artifact children",
sum(added_artifact_count)/sum(artifact_count)::decimal "average ratio of added artifact children",
sum(updated_object_count)/count(object_id) "average number of updated children",
sum(updated_object_count)/sum(object_count)::decimal "average ratio of updated children",
sum(updated_artifact_count)/count(object_id) "average number of updated artifact children",
sum(updated_artifact_count)/sum(artifact_count)::decimal "average ratio of updated artifact children",
sum(shared_object_count)/count(object_id) "average number of shared children",
sum(shared_object_count)/sum(object_count)::decimal "average ratio of shared children",
sum(shared_artifact_count)/count(object_id) "average number of shared artifact children",
sum(shared_artifact_count)/sum(artifact_count)::decimal "average ratio of shared artifact children",
sum(complexity)/count(object_id) "average effort complexity",
sum(complexity_in_added)/count(object_id) "average effort complexity of added children",
sum(complexity_in_updated)/count(object_id) "average effort complexity of updated children",
sum(shared_complexity)/count(object_id) "average effort complexity of shared children",
sum(shared_complexity_in_added)/count(object_id) "average effort complexity of shared added children",
sum(shared_complexity_in_updated)/count(object_id) "average effort complexity of shared updated children"
  FROM dss_path_evolutions;
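As a side note, the "average number" columns above use sum(...)/count(...), which in PostgreSQL performs integer division and truncates toward zero; only the ratio columns cast to ::decimal. A simpler, non-truncating formulation of the same figures would use avg() directly, which returns numeric (a sketch under the assumption that dss_path_evolutions has the same columns as in the query above):

```sql
-- Equivalent averages via avg(); values are not truncated to integers.
SELECT
  avg(object_count)                                 "average number of children",
  avg(artifact_count)                               "average number of artifact children",
  sum(artifact_count)::decimal / sum(object_count)  "average ratio of artifact children"
FROM dss_path_evolutions;
```

The truncating form is kept in the original query presumably because the dashboard reports whole-number averages.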

Running this query on the Dashboard Service returns the following data:

average number of children 234
average number of artifact children 73
average ratio of artifact children 31%
average number of added children 7
average ratio of added children 3%
average number of added artifact children 2
average ratio of added artifact children 3%
average number of updated children 8
average ratio of updated children 4%
average number of updated artifact children 0
average ratio of updated artifact children 0%
average number of shared children 226
average ratio of shared children 97%
average number of shared artifact children 69
average ratio of shared artifact children 95%
average effort complexity 19
average effort complexity of added children 0
average effort complexity of updated children 2
average effort complexity of shared children 18
average effort complexity of shared added children 0
average effort complexity of shared updated children 2
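As a quick sanity check, the reported ratios are consistent with the reported counts once rounded to whole percentages, for example:

```sql
-- Recompute the percentages from the reported average counts (PostgreSQL syntax).
SELECT
  round(73  / 234.0 * 100) AS artifact_children_pct,  -- 31, matches "31%"
  round(7   / 234.0 * 100) AS added_children_pct,     -- 3, matches "3%"
  round(226 / 234.0 * 100) AS shared_children_pct,    -- 97, matches "97%"
  round(69  / 73.0  * 100) AS shared_artifact_pct;    -- 95, matches "95%"
```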