Texas Cybersecurity Legislation Passed In 2017 – A Summary

Herb Krasner, University of Texas at Austin (ret.), CISQ Advisory Board member

 

Here is a summary of the cybersecurity legislation passed this year that will affect state agencies and institutions of higher education (all from the 85th regular session of the Texas Legislature). The Texas Department of Information Resources (DIR) and state agency CISOs will be the primary actors in implementing these new laws. The 2017 cybersecurity legislation (HB 8, except where noted otherwise) includes the following summarized provisions:

  • Establishment of legislative select committees for cybersecurity in the House and Senate.
  • Establishment of an information sharing and analysis center to provide a forum for state agencies to share information regarding cybersecurity threats, best practices, and remediation strategies.
  • Providing mandatory guidelines to state agencies on the continuing-education cybersecurity training that all agency IT employees must complete.
  • Creating a statewide plan (by DIR) to address cybersecurity risks and incidents in the state.
  • DIR will collect the following information from each state agency in order to produce a report due to the Legislature in November of every even-numbered year (SB 532):
    – Information on the agency’s security program
    – Inventory of the agency’s servers, mainframes, cloud services, and other technologies
    – List of vendors that operate and manage the agency’s IT infrastructure
  • The state cybersecurity coordinator shall establish and lead a cybersecurity council that includes public and private sector leaders and cybersecurity practitioners to collaborate on matters of cybersecurity.
  • Establishment of rules for security plans and assessments of Internet websites and mobile applications containing sensitive personal information.
  • Requiring the conduct of a study on digital data storage and records management practices.
  • Each agency shall prepare a biennial report assessing the extent to which all IT systems are vulnerable to unauthorized access or harm, or electronically stored information is vulnerable to alteration, damage, erasure, or inappropriate use.
  • At least once every two years, each state agency shall conduct an information security assessment, and report the results to DIR, the governor, the lieutenant governor, and the speaker of the House of Representatives.
  • Required proof that agency executives have been made aware of the risks revealed during the preparation of the agency’s information security plan.
  • Requires state agencies to identify information security issues and develop a plan to prioritize the remediation and mitigation of those issues including legacy modernization and cybersecurity workforce development and retention.
  • In the event of a breach or suspected breach of system security or an unauthorized exposure of sensitive information, a state agency must report to its executives and the state CISO within 48 hours. Information arising from an organization’s efforts to prevent, detect, investigate, or mitigate security incidents is defined as confidential. (SB 532)
  • Requiring the Secretary of State to create and define an Election Cyber Attack Study.
  • Allowing DIR to request emergency funding if a cybersecurity event creates a need (SB 1910).


Survey on Time-to-Fix Technical Debt

CISQ is working on a standard measure of Technical Debt. Technical debt is a measure of software cost, effort, and risk due to defects remaining in code at release. Like financial debt, technical debt incurs interest over time in the form of extra effort and cost to maintain the software. Technical debt also represents the risk to the business from the increased cost of ownership.

 

Completing the measure requires estimates of the time required to fix software weaknesses included in the definition of Technical Debt.
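
To make the shape of such a measure concrete, here is a minimal Python sketch of a time-to-fix based debt estimate. The weakness names and hour figures are invented assumptions for illustration only; they are not values from the CISQ specification or the survey.

```python
# Illustrative sketch: technical debt as the sum, over the weaknesses
# found in a codebase, of an assumed average time-to-fix per weakness.
# The weakness names and hour figures are hypothetical examples.

FIX_HOURS = {
    "sql_injection": 4.0,
    "empty_exception_handler": 1.5,
    "unreleased_resource": 2.0,
}

def technical_debt_hours(violation_counts: dict[str, int]) -> float:
    """Estimate remediation effort (hours) from per-weakness counts."""
    return sum(FIX_HOURS.get(weakness, 0.0) * count
               for weakness, count in violation_counts.items())

# Example: counts as a static-analysis tool might report them.
found = {"sql_injection": 3, "empty_exception_handler": 12, "unreleased_resource": 5}
print(f"Estimated technical debt: {technical_debt_hours(found):.1f} hours")
```

The multipliers in FIX_HOURS are precisely the time-to-fix estimates the survey is collecting.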

 

Please take our Technical Debt Survey

 

The survey is a PDF form that is posted to the CISQ website. To take the survey:

  • Download the PDF form
  • Fill in your responses
  • Press the “send survey” button on the last page of the survey
  • Alternatively, you can save the PDF file to your desktop and email it directly to: coordinator@it-cisq.org

 

As a “thank you” for your time, we are giving away $20 Amazon gift cards to the first 50 respondents.

 

To download the survey (PDF): http://it-cisq.org/technical-debt-remediation-survey/

 

Thank you for contributing to this initiative.

 

For any questions:

 

Tracie Berardi
Program Manager
Consortium for IT Software Quality (CISQ)
tracie.berardi@it-cisq.org
781-444-1132 x149


 

Event Summary: Cyber Resilience Summit, October 20, 2016

CYBER RESILIENCE SUMMIT: Ensure Resiliency in Federal Software Acquisition

Topic: Improving System Development & Sustainment Outcomes with Software Quality and Risk Measurement Standards

Hosted by: Consortium for IT Software Quality (CISQ) in cooperation with Object Management Group, Interoperability Clearinghouse, IT Acquisition Advisory Council

Date: 20 October 2016 from 0800 – 1230

Location: Army Navy Country Club, 1700 Army Navy Drive, Arlington, VA

Agenda and Presentations: http://it-cisq.org/cyber-resilience-summit-2016/

 

Event Background

 

The Consortium for IT Software Quality (CISQ) held its semiannual Cyber Resilience Summit at the Army Navy Country Club in Arlington, Virginia in cooperation with the IT Acquisition Advisory Council (IT-AAC) and other IT leadership organizations. “Titans of Cyber” from the U.S. Federal Government attended the Summit to share critical insights from the front lines of the cyber risk management battle. The program focused on standards and best practices for measuring risk and quality in IT-intensive programs from the standpoint of productivity, software assurance, overall quality and system/mission risk. The discussion addressed proven methods and tools of incorporating such standard metrics into the IT software development, sustainment and acquisition processes.

 

Discussion Points

 

John Weiler, IT-AAC Vice Chair, and Dr. Bill Curtis, CISQ Executive Director, opened the Summit.
Dr. Curtis gave an overview of CISQ, explaining that it was co-founded in 2009 by the Software Engineering Institute (SEI) at Carnegie Mellon University and Object Management Group (OMG) and is currently managed by OMG. The Consortium is chartered to create international standards for measuring the size and structural quality of software. Its mission is to increase the use of software product measures in software engineering and management. Dr. Curtis developed the original Capability Maturity Model (CMM) while at the SEI and now directs CISQ. Current sponsors include CAST, Synopsys, Booz Allen Hamilton, Cognizant, and others.

 

Significant CISQ contributions include:

  • A standard for automating Function Points that mirrors IFPUG counting guidelines
  • Four measures of structural quality to quantify violations of good architectural and coding practice:
    • Reliability
    • Performance Efficiency
    • Security
    • Maintainability
  • It is important to note that most measures of reliability assess system availability or downtime, which are behavioral measures. The CISQ measures assess flaws in the software that can cause operational problems. Thus, the CISQ measures provide prerelease indicators for operational or cost of ownership risks.

CISQ measures can be used to track software performance against agreed targets, as well as aggregated into management reports to track vendor performance. The continuing stream of multi-million dollar failures is creating increased demand for certifying software. Although CISQ will not provide a certification service, it will provide an assessment process to endorse technologies that can detect the critical weaknesses that comprise the CISQ Quality Characteristic measure standards.

 

The Security measure effort was led by the next speaker, Robert Martin, who oversees the Common Weakness Enumeration Repository maintained by MITRE Corporation. This repository contains over 800 known weaknesses that hackers exploit to gain unauthorized entry into systems.

 

Robert Martin, Senior Principal Engineer at MITRE, gave a presentation on: Defending Against Exploitable Weaknesses When Acquiring Software-Intensive Systems.

[Photo: Robert Martin, MITRE, presenting at the Cyber Resilience Summit]

 

Mr. Martin’s main themes were:

  • We are more dependent upon software-enabled cyber technology than ever
  • Hardware and software are highly vulnerable so the possibility of disruption is greater than ever
  • Software in end items (e.g., cars, fighter jets) is growing at an exponential rate
  • Almost everything is cyber connected and co-dependent during operations and/or other phases of life
  • Today up to 90% of an application consists of third party code

Mr. Martin’s main questions: how do we track and measure all the code characteristics flowing into software development, and how do we determine and track what is really important?

 

Mr. Martin then described how to establish assurance by using an Assurance Case Model (Safety Case Tooling) with the elements of Claim/Sub-claim, Argument, and Evidence. He pointed out that this evidence-based assurance is an emerging part of NIST SPs 800-160 (draft) and 800-53 Rev 4.

  • This technique is good for capturing complicated relationships
  • Tying the evidence to supported claims can be an ongoing part of creating and maintaining the system
  • It is useful for Mission Impact Analysis and Cyber Risk Remediation Analysis
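
As a rough illustration of the Claim/Argument/Evidence structure described above, here is a minimal Python sketch. The field names and the supported() rule are assumptions for illustration; they are not drawn from any particular safety-case tool or from the NIST publications cited.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str  # e.g., "static analysis report, build 142"

@dataclass
class Claim:
    statement: str      # e.g., "module X has no CWE Top 25 weaknesses"
    argument: str = ""  # reasoning linking the evidence to the claim
    evidence: list[Evidence] = field(default_factory=list)
    subclaims: list["Claim"] = field(default_factory=list)

    def supported(self) -> bool:
        """Supported if backed by direct evidence, or by supported sub-claims."""
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.supported() for c in self.subclaims)
```

Tying evidence to claims in a structure like this is what lets the assurance case be maintained alongside the system itself, as Mr. Martin suggested.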

Mr. Martin also identified the Common Weakness Scoring System (CWSS) and the Common Weakness Risk Analysis Framework (CWRAF) that applies to the SANS Top 25 list. He then identified the benefits of using multiple detection methods, as some are good at finding cause, while others are good at finding effect. We can use multiple detection methods to collect evidence through all phases of development from design review through red teaming – all the while considering the most important common weaknesses.
Mr. Martin then discussed program protection planning for prioritization/criticality analysis, using the assurance case to tie claims to supporting evidence. He also introduced the concept of “trustworthiness” that is a combination of factors like safety, privacy, security, reliability, and resilience. For example, a security “false-positive” may be a safety or reliability issue.

 

Finally, Mr. Martin pointed out that we can manage assurance cases with claims and the association of evidence to claims and that the evidence is articulated using structures related to common weaknesses. This accounts for all types of threats including human error, system faults, and environmental disruptions. Assurance cases can also be exchanged and integrated to aid extended system analysis.

 

The CISQ measures provide a standard way to measure security, safety, and reliability in a consistent, computable way.

 

Next, the Titans of Cyber panel was led by Dr. Marv Langston, Principal, Langston Associates.

 

[Photo: the Titans of Cyber panel]

 

Dr. Marv Langston introduced the members:

  • Ray Letteer of the USMC stated that the Marine Corps’ cyber concerns are for operational working metrics using standards. The need is for technical details, not a “trust me” plan of action and milestones.
  • Kevin Dulany of DIAP (a protégé of Dr. Letteer) stated that networks are “hard on the outside but soft in the middle.” Attacks today are data driven, operating inside the networks, but current risk management frameworks (RMFs) are based on traditional constructs of IT. Procedures require them to use these RMFs, but the mitigations do not actually apply. Embedded computing drives the need for a different approach with new mitigations. Kevin also noted that some systems tend to be categorized at a lower security level because of a lack of resources.
  • Chris Page of ONI stated that we do have superiority because of our great technology, citing that the Navy publication “A Design for Maintaining Maritime Superiority” is a message (warning) to our adversaries.
  • Martin Stanley of DHS said that their focus is on securing high-value assets. Their process assesses the security posture, applies measures for a year, and then reassesses with lessons learned. He went on to say that root causes are related to basic IT practices and the ways the organization must operate systems. His organization is producing enterprise architecture (EA) guidance, which is unusual for cyber because it attempts to address root causes.
  • J. Michael Gilmore of DoD OT&E said that our systems are not designed with cyber security as a priority. He gave the example of a network supporting an aircraft that was not considered in recovery and was also tied to a vendor network. He explained that capabilities need to be secured from both the government and contractor sides. Mike also noted that people can be a major conduit for bad cyber, especially worldwide partners. Finally, he added that the Joint Regional Security Stacks (JRSS) is essential for DoD, but people in the field are not fully trained and don’t understand its use or vulnerabilities.

Marv Langston kicked off the discussion by stating:

  • We test and deliver but don’t look back. Why don’t we do cyber tests on operational systems – on a daily basis?
  • Gilmore – We have a limited project to do and are facing resource constraints so we push back on current cyber authorities. The commercial sector does continuous red teaming but the Government is resisting – deploy and forget.
  • Letteer – Testing must be continuous. The USMC has established “White Teams” to do continuous scans, coordinating with red teams. In addition, we have cyber protection teams to help put in mitigations. This is not as widespread as we would like but we are making progress. The USAF has a similar program.
  • Stanley – Many agencies are working with DHS on continuous monitoring. The traditional Certification and Authorization (C&A) process has a place, but continuous monitoring is supplemental. Today compliance is treated as more important than continuous monitoring, and this must change.
  • Dulany – We used to have high emphasis on system security engineering but with too much reliance on contractors. Today we look at controls but that does not get us down into useable specifications or standards. The RMF is a good tool but we have problems keeping up with technologies. We cannot use tools in certain environments, so continuous RMF would help to reinforce compliance.

Langston – I’m concerned that we will wear ourselves out with all these processes but will miss the critical checks like the daily cyber check.

  • John Weiler – The software market is constantly refreshing, but we still have weapon systems running 1985-era software.
  • Gilmore – We resist processes that are not spelled out in specifications. What percent of this good stuff is actually in RFPs? So, we will get resistance to innovative metrics because they are not in the specs.
  • Letteer – I agree, we are trying to get cyber security measures in RFPs but this is hard with any specifications. We are used to doing cyber (requirements) in general but cannot do it in the specifications. Because of this, cyber security is not a mandated function.
  • Gilmore – In response to cyber security we used to hear, “There are no formal requirements for cyber security in DoD requirements.” The Joint Staff is working on cyber Key Performance Parameters (KPPs) but we are not there yet. All we have so far is a document that describes acceptable degradations after a cyber-attack. As of yet, there are no cyber security requirements blessed by the JRCC.
  • Page – Also, we see people shopping around for a threat profile to fit the security they implemented.

Questions from the audience:

  • Question – Isn’t there software assurance metrics language that could be adapted to programs?
  • Panel – Sounds good to us, but we do not tend to use the most modern tools like the Google desktop.
  • Question – Today’s cyber activities seemed to be aimed at the whole stack but not at individual levels.
  • Panel – This relates to how we cement Government – Industry partnerships. We are good at sharing high-level information but not at collaborating on the details. How do we change what we do across the environment? We can look at the kill chain concept to identify our weak points. However, we must look at needed capabilities first then look at tools.
  • Question – Relating to cyber KPPs, we are trying to work with operators on how we manage risk. We need new commercial standards and it is important to work with industry.
  • Panel – Agree, but we are disappointed that this is taking two years. In addition, we are not sure we have PMO experience to understand cyber security engineering architectures. There is also more emphasis on getting complex mobile IT networks to function in the field. This has analogies in the commercial market but not specifics. Therefore, we struggle to get them to work and to facilitate links to supporting entities who can address failures. Common sense cyber controls would cause the (mobile) system to fail – not sure what we can do about this.
  • Question – The scale of nodes is moving from millions to billions, what does the panel think of this?
  • Panel – Going to IPV6 will help drive this complexity. Of course, our mobile devices have access to our networks. We have to focus on the assets we really want to protect, not everything. We must be prepared for continuous surprise. We need to keep up with bad actors, but the solution is not necessarily to modify the RMF. This is a continual slog that we continue to do over the years.

 

BREAK

 

Keynote speaker Dr. David Bray, CIO of the FCC, presented: Charting Cyber Terra Incognita: A CIO’s Perspective and Challenges.

 

[Photo: Dr. David Bray, FCC, delivering the keynote]

 

Dr. Bray began his presentation by emphasizing the exponential growth in IT, human participants, and networked devices. His Terra Incognita (unknown land) is the combination of complex legacy infrastructure and the explosive growth of internet-connected systems, all with human actors and behaviors. This complexity and human interaction make it impossible not to have cyber issues, so we must strive for resiliency. He said that cyber threats will run like infectious diseases across borders and that the public, the private sector, the public sector, academia, and non-profits must all build bridges to deal with this. Fortune 500 CEOs cite having one cyber security engineer for each $1B of data. In addition, threats are over-classified, so it is often hard to make the case for cyber security support.

Dr. Bray mentioned “DIY” and the “internet of everything” that is outpacing cyber controls, citing examples such as industrial controls (moving to the internet with weak security, while consumers are unwilling to pay for more) and capabilities for grass-roots entrepreneurs pioneering civic and social innovation (which could be exploited by terrorists). He also described a greater reliance on IT, with machine learning as an essential complement to human activities. Exponential growth in technology is also spilling over into bio warfare with DNA-engineered bugs; everything in cyber could be in bio within 5 years. He cited the example of the FCC, which was a sitting target for cyberattacks but then went to 100% public cloud, moving from on-premise to off-premise.

Dr. Bray then described the giant leap from IPv4 to IPv6: address counts grow from 2^32 to 2^128. This is like moving from the volume of a beach ball to the volume of the sun! He talked about the importance of 21st-century “public service” vs. 20th-century governance, and ended by emphasizing the need for more “change agents” in these exponential times.
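
The scale of that leap is easy to check with a line of arithmetic; a quick illustration in Python:

```python
# The IPv4 -> IPv6 jump described above, as plain arithmetic.
ipv4 = 2 ** 32    # ~4.3 billion addresses
ipv6 = 2 ** 128   # ~3.4e38 addresses
print(f"IPv4: {ipv4:.2e}  IPv6: {ipv6:.2e}  ratio: {ipv6 / ipv4:.1e}")
# The ratio is 2**96, roughly 7.9e28 -- hence the beach-ball-to-sun analogy.
```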

 

At the conclusion of Dr. Bray’s presentation, John Weiler (IT-AAC) asked, “What do we see as the difference between (big) agile acquisition and agile development?”

 

Dr. Bray – We should not be in the code writing business. We are trying to procure IT capabilities in 6-9 months so agile acquisition should be an “a la carte” method with selectable modules.

 

Leo Garciga, Joint Improvised Threat Defeat Agency (JIDO) – In this commoditized environment, why do we still build custom stuff? Even standards are commoditized.

 

Dr. Bray – The commodity approach is good. Instead of Business Process Engineering, we should just keep it simple and draw on the board “how do you want to work”. In the FCC, we tried to automate an on-line form with an initial estimate of $17M but found we could do it for $450K using the commodity approach.

 

Question from audience – What should we do about weapon systems and cyber vulnerabilities?

 

Dr. Bray – We must balance availability and protection. Sometimes we rule out cloud-based solutions by asking ourselves “do I want this on the internet?”

 

John Weiler – What are services that can be on the internet?

 

Dr. Bray – We can move to limited public and Government internets (Taiwan and Australia do this.)

 

Question from audience – How do we retrofit TCP/IP to be more secure in flight?

 

Dr. Bray – 1. Trust but verify (red teams). 2. Focus on Mission (what do you really need?)

 

Next, Leo Garciga, J6 Chief / CIO, JIDO and Ryan Skousen, Software Engineer, Booz Allen Hamilton presented: Integration of Security and Agile/DevOps Processes.

 

[Photo: Ryan Skousen and Leo Garciga presenting]

 

The presentation began with a review of JIDO’s mission as a quick reaction capability – to bring timely solutions to war fighters. The J6’s mission for IT is to:

  • Build a Big Data analytic platform, “Catapult,” and tool suite based on real-time, tactical needs
  • Embed with users worldwide to understand data available, analytic methodologies, & capability/data gaps
  • Provide solutions required same day at times

JIDO has been doing Agile SDLC for five years. Continuous integration is already implemented with nightly security scans. Release management with traditional CM/CCB is still hard. Agile alone is not enough.

  • Quick reaction capability to emerging threats
  • Quicker than standard DoD process; seeing agility and speed
  • Length of time to approve is standard
  • Intel fusion system with focus on how to change and when we need to change

 

JIDO started its DevOps evolution in 2015. Security and compliance are built in upfront. JIDO’s goal is to completely automate deployments from code to production. We think this is a great capability.

  • Focus on managing risk and not compliance
  • Small changes
  • No manual/human review gate
  • Affordable by other agencies

 

Security/accreditation – ongoing authorization is secure agile + DevOps + continuous monitoring. JIDO has also developed an automated ongoing authorization pipeline.

  • Think through C&A before writing code
  • Adopt mission focus
  • Security accreditation (per NIST SP 800-37) can be automated to a large extent and should help to implement decisions by continuous monitoring instead of one-by-one inspection of packets – sort of an “ongoing authorization”

JIDO is still working to transition the capability. This is hard to do, but we are working to make it transferable to other agencies.

 

Major takeaways:

  • Secure design and planning throughout SDLC
  • Containers for standardized deployment packaging
  • Secured, transparent DevOps pipeline.
    • Prevents tampering; provides monitoring and traceability.
    • Escalation based on code triggers (code delta, coverage)
  • Type-accredited platform to receive and run containers
  • It is like having a trusted candy factory, packaging goodies into bulletproof briefcases, transporting them through a point-to-point hyperloop, and delivering to candy shops with turrets – do we really need to lick every lollipop?

 

Question from audience – How long will it take to fully implement JIDO Agile/DevOps process?

 

JIDO – We are now in full deployment across the classified and unclassified environments. We still have some staff education issues, but technically we are up and running and working out CM problems.

 

John Weiler – In the DevOps world, speed is sacrificing some assurance issues. How do we recognize and incorporate engineering needs for safety/security?

 

JIDO – DevOps cannot be used for new, ground-up systems. Our process incorporates assurance by having daily scrums.

 

John Weiler – Security engineering cannot be determined by engineering. We must force rigorous engineering of systems with knowledge infusion.

 

JIDO – Yes, we do that by scanning in real time.

 

Question from audience – How do you characterize the tech challenge vs. the people challenge?

 

JIDO – This is a HUGE cultural change. We initially had a lot of push back with people worried about their rules.

 

The second panel, Standards of Practice for IT Modernization and Software Assurance, was led by Dr. Bill Curtis, CISQ Executive Director.

 

[Photo: Dr. Bill Curtis with the Standards of Practice panel]

 

Don Davidson, DoD, kicked off the panel with a short presentation on DoD cyber resilience:

  • Cyber resilience is to ensure that DoD missions (and their critically enabled systems) are dependable in the face of cyber warfare by a capable cyber adversary.
  • The DoD cybersecurity campaign:
    • Cybersecurity discipline implementation plan
    • Cybersecurity scorecard
    • Culture and compliance
  • The campaign covers these cybersecurity disciplines:
    • Strong authentication for access
    • Device hardening with configuration management / SW patching
    • Reduction of attack surface
    • Monitoring and diagnostics
  • Mission appropriate cybersecurity balances risks vs. additional security (beyond cybersecurity discipline) for trusted systems
  • Approach incorporates fundamental basis of supply chain risk management and addresses compliance through policy.

 

Joe Jarzombek, Synopsys – We are starting to implement SW assurance systems to address low hanging fruit.

 

Tom Hurt, DoD – Layers of cybersecurity are like multiple Maginot Lines, applying 95% of assets to 16% of problems. Software must be integrated into system engineering.

 

Emile Monette, DHS – We face challenges interpreting cases and do not cover them all. We have many weaknesses in thousands of categories, and automation is difficult. The system security measures we discussed today are useful, but we can also focus on human expertise and leave other forms of assurance to automation.

 

Mr. Jarzombek – It is about leadership, not technical issues. KPPs get diminished for functionality. We need to be more demanding of providers, with more specific requirements, MOEs, and testing. We can specify industry standards, but we must also help providers work through issues.

 

Mr. Davidson – We need to write KPPs because there are baseline security requirements that cannot be traded away. CIOs and CISOs are always fighting – but it needs to be a healthy dialog.

 

At the Black Hat conference, we heard:

  • Major breaches will continue for two years (bad for CISOs)
  • Industry may have to provide software with warranties
  • Software as a Service (SaaS) is a good model. Self-driving cars will lead to insuring software!
  • Sourcing untrusted libraries may drive some away from COTS to in-sourcing

 

Mr. Hurt – For mission assurance, we can take successful attacks back through architects and engineers to analyze with tools, including penetration testing. Why don’t we have red hat (penetration) tests as part of O&M?

  • We could avoid vulnerabilities in development
  • It always takes more money to fix something after the fact

 

Dr. Curtis – 40% of software engineers are self-taught.

 

Panel members – We should ask if people and products we have are certified. We need (strong) leadership to avoid deploying dangerous products. This can be part of the RFI. One approach to vetting would be to have industry recommend proper controls, but other vendors may reject the recommendations.

 

Dr. Curtis – We need to know that a piece of software has some sort of certification. Education may help, but this is a complex issue and cyber courses in schools are not standardized. Institutions are now promoting cybersecurity basics in software engineering schools. We could approach this like a community “buyers’ club” – putting assurance in all Agency networks with requirements to build security into the software. This idea is emerging in industry such as the Vendor Security Alliance. These are models we could use to promote Government standards.

 

Mr. Hurt – The DoD Program Protection Plan requires use of assurance measures. We need assessments that are passed on to DT, OT, and O&M. We had a Joint Federation Information Assurance IOC in April 2016.

 

Dr. Curtis – How does cybersecurity work with agile? Agile is not incompatible with cybersecurity, and it assumes activities are engaged with customers.

 

Mr. Hurt – For each sprint, we need a good set of allocated requirements and they must cover assurance – so we blend assurance into agile.

 

Question from audience – Do we need continuing education for cybersecurity professionals?

 

Panel – Yes, it is required for CISSPs. In addition, software engineers should be networked to work on fixes to bugs. There are software development courses that cover cybersecurity, but we still lack hard and fast requirements. The Government always asks for Project Management Professionals (PMPs) but rarely for cyber credentials.

 

Who is teaching “formally verified code”? This is a great concept for merging AI with humans, but we don’t know how mature it is or how long it would take to train someone.

 

Question from audience – What are we doing to give “tactical” hands on knowledge?

 

Dr. Curtis – Industry does not want to train and generally looks for experience. We have professional students vs. untrained practitioners. There is lots of pressure to push out code.

 

Question from audience – How does Government want industry to train? What certifications?

 

Mr. Hurt – New DoD 5000.2 will have software tools. We hope policy will move into guidance and best practices on websites. (DoD has 100,000 system engineers)

 

Question from audience – There is no certification in industry for security in software coding so we have to use contract (language) to govern security requirements. The FAR allows us to make suppliers fix bad software, but who exercises this? It does not seem to stand up in court.

 

Mr. Garciga – Scanning of software helps to deal with this. We should scan before acceptance. We must also get source code and the software design description (SDD) to promote organizational maturity. See WhiteHouse.gov on open source code, which is forcing PMs to build document libraries of software with access to source code.

 

The Cyber Resilience Summit ended at 1230 with John Weiler (IT-AAC) and Dr. Curtis (CISQ) giving closing comments.

 

 

Join us at the next Cyber Resilience Summit on March 21, 2017 in Reston, Virginia.

 

Contact: Tracie Berardi, Program Manager, Consortium for IT Software Quality (CISQ)

tracie.berardi@it-cisq.org; 781-444-1132 x149

 


Adjusting Agile for Remote Environments

Bill Dickenson, Independent Consultant, Strategy On The Web

 

In most commercial environments the developers are distributed, rarely occupying the same physical site and often working very different hours. Faced with this reality, Agile struggles. Among the 12 principles of the “Agile Manifesto” is the principle that “The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.” This is clearly true and, taken as a fixed principle, would rule out Agile for remote teams.

 

Research from CISQ (Consortium for IT Software Quality) recently evaluated the effectiveness of software teams using Agile as well as Waterfall and found a surprising result. While both Agile and Waterfall produced quality software, organizations that used both methodologies produced higher quality software than organizations that used one exclusively. This opens up some interesting approaches for Agile in a distributed environment.

 

Start with the Right Projects

 

In general, Agile is best suited when the requirements are high level or unclear, so the work benefits from a more rapid, iterative approach. One approach that successful companies have used is to separate the well-defined, clearly documented changes from those that benefit from the interactions that Agile provides.

 

Smaller Work Packets

 

One of the strengths of Agile is speed to value. The smaller the project, the more likely it is to deliver the value (Capers Jones). Agile work packets should be small enough to be completed as quickly as possible and, if the organization is moving to DevOps, released to production as soon as practical. Understand the problem clearly; resist solving it until you understand what needs to be solved.

 

Form Consistent Teams

 

Creating a team requires more than physically grouping developers together. Organizational dynamics dictate that even high-performing individuals need time and practice to become a team. Create some stability by naming teams in advance and finding ways for the group to interact. A trip to a common location tends to jump-start team dynamics. The goal is improved communication, and communication becomes far more effective when the team has a level of trust. Chances are that at some point, one of the team members will talk to the business about a function worked on by a remote member. Trust makes that process smoother.

 

Create a Team Room

 

Teams need persistent communications, and shared team rooms help make that possible. Many tools allow a full spectrum of shared notes, virtual post-its, drawings, etc. that rival a live team room. At some point everyone will be remote, and these rooms need to be as robust as possible.

 

The business users should be included here as well. The “community” of people who have an interest in the results of this change should be included. One highly successful remote Agile group borrowed the rules from an online university, with required posts, answers, and instructional techniques. As a side note, many of the “problems” the team was asked to solve were solved by other members of the business community who simply had not talked to each other. Communities are a major source of business effectiveness.

 

Include some “personal” space here as well for non-work related posts. Teams have common interests outside the project and again, the more these are used, the higher the trust.

 

Defect Free Code

 

Inexperienced Agile teams tend to lump bugs and defects into the same “group” and then hide behind “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software” while ignoring the “Continuous attention to technical excellence and good design enhances agility” principle a few items down. Defects are the measure of technical excellence, and Agile teams need to understand the quality targets, adhere to them, and be audited for compliance. Few issues destroy business value and credibility more than defects in code.

 

Consider the following benchmarks for quality:

 

1) Security violations per Automated Function Point: The MITRE Common Weakness Enumeration (CWE) database contains very clear guidance on unacceptable coding practices that lead to security weaknesses. In a perfect world, delivered code should not violate any of these practices. More realistically, all code developed should have no violations of the Top 25 most dangerous and severe security weaknesses, 22 of which are measurable in the source code and constitute the CISQ Automated Source Code Security Measure.

 

2) Reliability below 0.1 violations per Automated Function Point: In any code there are data conditions that could cause the code to break in a way that allows an antagonist to gain access to the system. This can cause delivery failures in the expected functionality of the code. Reliability measures how well the code handles unexpected events and how easily system performance can be reestablished. Reliability can be measured as weaknesses in the code that can cause outages, data corruption, or unexpected behaviors. See the CISQ Automated Source Code Reliability Measure.

 

3) Performance Efficiency below 1.0 violations per Automated Function Point: Performance Efficiency measures how efficiently the application performs or uses resources such as processor or memory capacity. Performance Efficiency is measured as weaknesses in the code base that cause performance degradation or excessive processor or memory use. See the CISQ Automated Source Code Performance Efficiency Measure.

 

4) Maintainability violations below 3.0 per Automated Function Point: As code becomes more complex, the change effort to adapt to evolving requirements also increases. Organizations that focus on Maintainability have a lower cost to operate, faster response to change, and a higher return on investment for operating costs. It is important that code can be easily understood by different teams that inherit its maintenance. Maintainable, easily changed code is more modular, more structured, less complex, and less interwoven with other system components, making it easier to understand and change, conforming to “good design enhances agility”. See the CISQ Automated Source Code Maintainability Measure.

 

Feedback

 

Most Agile shops are familiar with the traditional burn-down charts, user acceptance of the functionality delivered, and time and productivity measures. All Agile teams should incorporate the quality measures above as well, tracking them across time. Teams function best when the feedback is unambiguous and frequent.
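
One way to make that feedback unambiguous is to track each CISQ measure, as violations per Automated Function Point, sprint over sprint. A minimal Python sketch follows; the sprint data are invented for illustration:

```python
# Sketch: track CISQ quality measures per sprint as violations per
# Automated Function Point (AFP). The numbers are invented examples.
history = [
    # (sprint, security, reliability, performance, maintainability)
    ("S1", 0.04, 0.22, 1.6, 4.1),
    ("S2", 0.02, 0.15, 1.2, 3.4),
    ("S3", 0.00, 0.09, 0.9, 2.8),
]

def trend(rows: list[tuple], index: int) -> list[float]:
    """Per-sprint deltas for one measure (negative = improving)."""
    values = [row[index] for row in rows]
    return [round(b - a, 2) for a, b in zip(values, values[1:])]

print("Reliability deltas:", trend(history, 2))  # [-0.07, -0.06]
```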

 

Conclusion

 

Agile is a powerful tool that can enhance the time to value in many organizations. These principles will guide an organization to finding the benefits of Agile while taking advantage of the best available resources globally. The extra effort in planning and execution pays off with better software and business value.

 

What Developers Should Expect from Operations in DevOps

Bill Dickenson, Independent Consultant, Strategy On The Web

 

Expectation Management

As DevOps becomes increasingly mainstream, it is essential that expectations are met for each group involved in the process. Part 1 of this blog focused on what operations should expect from the developers in DevOps, while this part (Part 2) focuses on what developers should expect from operations. Managing both sides is essential to a successful flow.

 

To be successful, software must operate efficiently on the target platform, handle exceptions without intervention, and be easily changed while remaining secure. It must deliver the functionality at the lowest cost possible. CISQ has evolved a set of quality characteristic measures that, when combined with automated software tools, provide a way to make sure that the code delivered, delivers. To deliver on this, Operations must provide the right tools and the right processes to succeed.

 

Specifications for Continuous Release

 

DevOps dramatically increases the speed at which application code is developed and moved into production, and the first requirement is to design for speed. Specifications should be designed to be delivered in work “packets” that are smaller than a typical waterfall design. CISQ research has shown that designing even long projects as a series of smaller fixed-scope projects in the 1-3 month range dramatically improves stability and cost control. When staggered to allow continuous releases, the smaller packet design can make DevOps easier. As releases get “bigger,” the corresponding risk management problems also get bigger. The success rate for projects also increases with the reduced time frame.

 

Tools, Tools, Tools

 

As speed increases, there is no room for manual processes which are not only unpredictable but inefficient as well. One of the goals of streamlining the process is to deliver business value rapidly and that requires a better approach. The code delivery “pipeline” must be optimized to deliver an increasingly rapid flow.

  • Software Quality: In the previous blog we discussed the CISQ recommendations for software quality. These should be part of the developer’s toolkit. Select a tool that can look at the whole portfolio, as many security violations are in the spaces between programs. While there are some worthy open source analysis tools, this is an area where getting the best tool not only reduces the risk but also makes the process smoother. While the open source tools are evolving rapidly, the business case will more than support high quality tools. The entire pipeline should start with quality objectives.
  • Source Code Control/Packet Repository: One area where DevOps implementations report issues is the software control process. Increasing the speed of development puts source code control at risk, especially in legacy environments where the release cycle was measured in months. The faster “packet” design will stress the existing toolset. The packet repository should hold the products of the entire process. Deployment tools become more important.
  • Codified and Comprehensive Risk Management: Many DevOps implementations fail when an unusually large amount of risk is introduced rapidly. Data center operations are not typically application risk aware and there is usually no codified process beyond the dangerous High-Medium-Low scale. In addition to investing in a better risk management process, the approach must contemplate both application and infrastructure.

 

Environments

 

As the pace quickens, environments need to be defined and available at a far more aggressive pace. Cloud-based services shine here, but hybrid environments also work.

 

  • Test environments: Testing will increase in volume as the more continuous flow drives repetitive testing. The process will drive considerably higher testing needs.
  • Test Data Management: Unlike quarterly and even longer cycles, it becomes almost impossible to manually manage test data. The “golden transaction” process, where the data necessary to test is preloaded into the image, becomes increasingly critical (a sketch follows this list). The test system images now need to include replicated environments that can be tested rapidly.
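
As a sketch of the “golden transaction” idea: each test starts from a preloaded, known-good record rather than hand-built data. This assumes a pytest-style setup, and the GOLDEN record is an invented example:

```python
import pytest

# Invented example of a "golden transaction": a known-good record that
# every test image is preloaded with. In practice it would be captured
# from a vetted, production-like run.
GOLDEN = {"order_id": "A-1001", "amount": 42.50, "status": "settled"}

@pytest.fixture
def golden_txn():
    # Hand each test a copy so no test can corrupt the shared baseline.
    return dict(GOLDEN)

def test_partial_refund_keeps_order_settled(golden_txn):
    golden_txn["amount"] -= 10.00
    assert golden_txn["amount"] == 32.50
    assert golden_txn["status"] == "settled"
```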

From Operations, Developers should expect specifications designed to be implemented more frequently, tools to support the process, and environments designed for application services. Both groups benefit from understanding each other’s needs.

 

What Operations Should Expect from Developers in DevOps

Bill Dickenson, Independent Consultant, Strategy On The Web

 

Expectation Management

DevOps brings both the developers and operations processes into alignment. This blog focuses on what operations should expect from the developers while my next blog will focus on what developers should expect from Operations. Managing both sides is essential to a successful flow.

 

One of the major weaknesses in application development is that, while software only delivers value when it is running, few universities or professional training organizations focus on how to make software operate smoothly. To be successful, software must operate efficiently on the target platform, handle exceptions without intervention, and be easily changed while remaining secure. Security may sound like an odd addition here, but studies continue to validate that many security violations are at the application level. Software must also deliver its functionality at the lowest cost possible. CISQ has evolved a set of quality characteristic measures that, when combined with automated software tools, provide a way to make sure that the code delivered, delivers. Operations has every reason to expect that software will be delivered with these characteristics.

 

Setting SLA Measurements for Structural Quality Characteristics

CISQ recommends that the following four OMG standard measures be engineered into the DevOps process. The CISQ measures for Security, Reliability, Performance Efficiency, and Maintainability were developed by representatives from 24 CISQ member companies, including large IT organizations, software service providers, and software technology vendors.

 

1) Security Violations per Automated Function Point

 

The MITRE Common Weakness Enumeration (CWE) database contains very clear guidance on unacceptable coding practices. Delivered code should not violate any of these practices; the top 22 are considered the most egregious. They place an unreasonable burden on the infrastructure to protect, and Operations cannot plug the leaks between modules where the security issues occur. The CISQ Security measure covers the Top 22 CWEs.

 

2) Reliability below 0.1 violations per Automated Function Point

 

In any code there are data conditions that could cause the code to break in a way that allows an antagonist to gain access to the system. These also cause delivery failures in the expected functionality of the code.  Reliability can be measured as weaknesses in the code that can cause outages, data corruption, or unexpected behaviors.  The CISQ Reliability measure is composed from 29 severe violations of good architectural and coding practice that can cause applications to behave unreliably.  

 

3) Performance Efficiency below 1.0 violations per Automated Function Point

 

Performance Efficiency measures how efficiently the application performs or uses resources such as processor or memory capacity. Performance Efficiency is measured as weaknesses in the code base that cause performance degradation or excessive processor or memory use. This has been operationalized in the CISQ Performance Efficiency measure. In today’s environment of relatively cheap hardware, such violations have become common. Unfortunately, they also degrade cloud readiness.

 

4) Maintainability violations below 3.0 per Automated Function Point

 

As code becomes more complex, the change effort to adapt to evolving requirements also increases. Organizations that focus on Maintainability have a lower cost to operate, faster response to change, and a higher return on investment for operating costs. Up to 50% of maintenance effort is spent understanding the code before modification. The CISQ Maintainability measure is composed from 20 severe violations of good architectural and coding practice that make code unnecessarily complex.

 

These four are the minimum requirements that operations should expect from developers. In the next blog we will discuss what developers should require from operations!
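
Wired into a delivery pipeline, the four thresholds become a simple release gate. A minimal Python sketch; the thresholds come from the text above, while the function and the measured numbers are illustrative assumptions (the measurements themselves would come from a static-analysis tool):

```python
# Maximum violations per Automated Function Point (AFP), per the four
# CISQ-recommended SLA measures discussed above.
THRESHOLDS = {
    "security": 0.0,  # no Top 22 CWE violations delivered
    "reliability": 0.1,
    "performance_efficiency": 1.0,
    "maintainability": 3.0,
}

def sla_breaches(measured: dict[str, float]) -> list[str]:
    """Return the measures that exceed their SLA threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if measured.get(name, 0.0) > limit]

# Example run with invented measurements from a code-quality scan.
breaches = sla_breaches({"security": 0.0, "reliability": 0.3,
                         "performance_efficiency": 0.8, "maintainability": 2.1})
if breaches:
    raise SystemExit(f"Release blocked; SLA breached for: {breaches}")
```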

 

The Relationship Between Unit and System Level Issues

Bill Dickenson, Independent Consultant, Strategy On The Web

 

Dr. Richard Soley, the Chairman and CEO of OMG, published a paper for CISQ titled How to Deliver Resilient, Secure, Efficient, and Easily Changed IT Systems in Line with CISQ Recommendations, which outlines the software quality standard for IT business applications. He classified software engineering best practices into two main categories:

  • Rules of good coding practice within a program at the Unit Level without the full Technology or System Level context in which the program operates, and
  • Rules of good architectural and design practice at the Technology or System level that take into consideration the broader architectural context within which a unit of code is integrated.

Correlations between programming defects and production defects revealed something really interesting and, to some extent, counter-intuitive. It appears that basic Unit Level errors account for 92% of the total errors in the source code. That’s a staggering number. It implies that coding at the individual program level is much weaker than expected, even with quality checks built into the IDE. However, these code-level issues eventually account for only 10% of the defects in production. There is no question that they drive up the cost of support and maintenance and decrease flexibility, but their translation into production defects is not as large as might be expected. It also calls into question the effectiveness of development-level IDEs at eliminating production defects.

 

On the other hand, bad software engineering practices at the Technology and System Levels account for only 8% of total defects, but consume over half the effort spent on fixing problems. This eventually leads to 90% of the serious reliability, security, and efficiency issues in production. This means that tracking and fixing bad programming practices at the Unit Level alone may not translate into the anticipated business impact, since many of the most devastating defects can only be detected at the Technology and System Levels.
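
Applying those percentages to a hypothetical codebase makes the asymmetry plain. A small worked example in Python; the defect total is invented for illustration:

```python
# Worked example of the 92% / 8% split described above, using an
# invented codebase with 1,000 detected source-code defects.
total_defects = 1_000
unit_level = int(total_defects * 0.92)      # 920 defects
system_level = total_defects - unit_level   #  80 defects

# Per the article: unit-level issues produce ~10% of production defects,
# while system-level issues produce ~90% of the serious production
# problems and consume over half of the fix effort.
print(f"Unit level: {unit_level} defects -> ~10% of production defects")
print(f"System level: {system_level} defects -> ~90% of serious production issues")
```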

 

When we review the information from the CRASH database, this is not wholly unexpected. Many of the more serious defects are undetected until the components interact.

CISQ Sponsors Meet in Bangalore to Improve the Sizing of Maintenance Work

Dr. Bill Curtis, Executive Director, CISQ

 

During May 25-27 the sponsors of CISQ met in Bangalore, India to develop a specification for automating a Function Point-style measure for analyzing the productivity of maintenance and enhancement activity. Current Function Point-based measures do not account for significant portions of the code in a modern application, that is, the non-functional code required for operating large multi-language, multi-layer IT applications. Thus developers or maintenance staff can perform extensive work enhancing, modifying, and deleting code that does not affect traditional Function Point counts. Consequently their productivity cannot be accurately measured. Although NESMA has proposed an adjustment for this problem, the IT community needs an automatable solution that analyzes the full application.

 

The goal for this new measure involves sizing the portion of an application affected during maintenance and enhancement activity in a way that is strongly related to the effort expended. The fundamental question related to this goal is how non-functional code should be measured when it is involved in changes. This spring several CISQ sponsors ran scripts on some of their applications to determine what portions of their code went unmeasured in traditional Function Point counting. In Bangalore they compared their results and discussed options for measuring the application code affected in maintenance activity. After two days of debate and discussion they coalesced around an approach which, after being formalized, will be submitted to the Object Management Group for consideration as a supported specification (an OMG standard).

 

Although the sponsors started from traditional methods for counting Function Points, they did not limit themselves to the constraints of these counting techniques. Thus, this specification might be more accurately thought of as Automated Implementation Points since it measures more than just the functional aspects of an application. This new measure will supplement traditional Function Point measures by providing a more complete sizing of the work performed during maintenance and enhancement. Thus this measure will enable more accurate analysis of productivity for use in benchmarking, estimating maintenance effort, and understanding the factors that cause variation in results. We will provide additional updates as the specification progresses.

 

CISQ Interviewed by SD Times – Dr. Bill Curtis (CISQ) and Dr. Richard Soley (OMG) Cited

Read About CISQ’s Mission, Standards Work, and Future Direction

 

Tracie Berardi, Program Manager, Consortium for IT Software Quality (CISQ)

 

Rob Marvin published an article in the January issue of SD Times that details the work of the Consortium for IT Software Quality (CISQ). Rob interviewed Dr. Richard Soley, CEO of the Object Management Group (OMG) and Dr. Bill Curtis, Executive Director of CISQ.  The article sheds light on the state of software quality standards in the IT marketplace.

 

I can supplement what’s covered in the article for CISQ members.

 

CISQ was co-founded by the Object Management Group (OMG) and the Software Engineering Institute (SEI) at Carnegie Mellon University in 2009.

 

Says Richard Soley of OMG, “Both Paul Nielsen (CEO, Software Engineering Institute) and I were approached to try to solve the twin problems of software builders and buyers (the need for consistent, standardized quality metrics to compare providers and measure development team quality) and SI’s (the need for consistent, standardized quality metrics to lower the cost of providing quality numbers for delivered software). It was clear that while CMMI is important to understanding the software development process, it doesn’t provide feedback on the artifacts developed. Just as major manufacturers agree on specific processes with their supply chains, but also test parts as they enter the factory, software developers and acquirers should have consistent, standard metrics for software quality. It was natural for Paul and I to pull together the best people in the business to make that happen.”

 

Richard Soley reached out to Dr. Bill Curtis to take the reins at CISQ. Bill Curtis is well-known in software quality circles as he led the creation of the Capability Maturity Model (CMM) and People CMM while at the Software Engineering Institute. Bill has published 5 books, over 150 articles, and was elected a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for his career contributions to software process improvement and measurement. He is currently SVP and Chief Scientist at CAST Software.

 

“Industry and government badly need automated, standardized metrics of software size and quality that are objective and computed directly from source code,” he says.

 

Bill Curtis organized CISQ working groups to start work on specifications. The Automated Function Point (AFP) specification was led by David Herron of the David Consulting Group and became an officially supported standard of the OMG in 2013. Currently, Software Quality Measures for Security, Reliability, Performance Efficiency, and Maintainability are undergoing standardization by the OMG.

 

The SD Times article in which Dr. Curtis and Dr. Soley are cited – CISQ aims to ensure industry-wide software quality standards – is a summary of these specifications and their adoption. Please read it.

 

A media reprint of the article has been posted to the members area of the CISQ website.  

 

You can also watch this video with Dr. Bill Curtis.

 

Later this year CISQ will start work on specs for Technical Debt and Quality-Adjusted Productivity.

 

Seeking Beta Sites for Quality-First Agile Development

By David Gelperin, CTO, ClearSpecs Enterprises

 

We are seeking sites to refine and use a hybrid Agile process containing two phases. The second phase is “pure” Agile development and focuses on user functions. The first phase (Quality-First) identifies and manages quality goals, such as reliability, understandability, or response time, that matter to your application.

 

Quality-First contains the following steps:

 

1. Identify relevant quality goals and their acceptable quality levels early (workshop).

 

Some quality goals are universal, relevant to most applications. These include reliability, response time, modularity, ease of use and learning, and all basic qualities (compliance, sufficiency, understandability, and verifiability).

 

The remaining (nonuniversal) quality goals are reviewed to identify those that matter to your application.

 

(A comprehensive quality model will be supplied to speed this step.)

 

2. Refine quality goal information and identify “quality champions” among your team.

 

3. Create master lists of development restrictions including quality constraints and design, coding, and verification tactics derived from your quality goals.

 

Each quality goal restricts verification, coding, or design of modules, data, or interfaces, or a combination of these. A set of quality goals has a matching set of restrictions.

 

Start with master lists of universal quality goal restrictions and develop the restrictions for the (nonuniversal) goals that matter to your application.

 

During the Agile phase, consult the restriction master lists at the beginning of each iteration and as a part of the acceptance review at the end.
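
As a minimal illustration of what consulting the master lists might look like in practice, here is a Python sketch. The goal names and restrictions are invented examples, not part of the Quality-First process itself:

```python
# Sketch: master lists mapping each quality goal to the restrictions it
# imposes on design, coding, or verification. All entries are invented.
MASTER_LISTS = {
    "reliability": ["every external call has a timeout and a retry policy"],
    "understandability": ["public modules carry usage examples in their docs"],
    "response_time": ["no synchronous calls on the user-facing request path"],
}

def iteration_checklist(active_goals: list[str]) -> list[str]:
    """Restrictions to review at iteration start and at acceptance review."""
    return [r for goal in active_goals for r in MASTER_LISTS.get(goal, [])]

for item in iteration_checklist(["reliability", "response_time"]):
    print("-", item)
```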

 

This process is based on the following assumptions: 

 

1. Identifying relevant qualities requires only an understanding of the type of application (e.g., flight control or e-commerce) being acquired, not details of its behavior.

2. A comprehensive, multilevel quality model should be used on every project.

3. Identification of quality goals is the responsibility of PMs, Customers, Technical leads, and SMEs.

4. Cost-effective iterative development should be aware of restrictions from quality goals early in each iteration.

5. With experience, master list development takes no more than a week. At first, it takes 2 to 3 weeks.

 

Questions about and comments on these ideas are most welcome. To be considered as a beta site, drop me an email (david@clearspecs.com).