Texas Cybersecurity Legislation Passed In 2017 – A Summary

Herb Krasner, University of Texas at Austin (ret.), CISQ Advisory Board member

 

Here is a summary of the cybersecurity legislation passed this year that will have an impact on state agencies and institutions of higher education (all from the 85th regular session of the Texas Legislature). The Texas Department of Information Resources (DIR) and state agency CISOs will be the primary actors responsible for implementing these new laws. The 2017 cybersecurity legislation (HB 8, except where noted otherwise) includes the following summarized provisions:

  • Establishment of legislative select committees for cybersecurity in the House and Senate.
  • Establishment of an information sharing and analysis center to provide a forum for state agencies to share information regarding cybersecurity threats, best practices, and remediation strategies.
  • Providing mandatory guidelines to state agencies on the continuing-education requirements for cybersecurity training that must be completed by all IT employees of the agencies.
  • Creating a statewide plan (by DIR) to address cybersecurity risks and incidents in the state.
  • DIR will collect the following information from each state agency in order to produce a report due to the Legislature in November of every even-numbered year (SB 532):
    – Information on their security program
    – Inventory of agency’s servers, mainframe, cloud services, and other technologies
    – List of vendors that operate and manage agency’s IT infrastructure
  • The state cybersecurity coordinator shall establish and lead a cybersecurity council that includes public and private sector leaders and cybersecurity practitioners to collaborate on matters of cybersecurity.
  • Establishment of rules for security plans and assessments of Internet websites and mobile applications containing sensitive personal information.
  • Requiring the conduct of a study on digital data storage and records management practices.
  • Each agency shall prepare a biennial report assessing the extent to which all IT systems are vulnerable to unauthorized access or harm, or electronically stored information is vulnerable to alteration, damage, erasure, or inappropriate use.
  • At least once every two years, each state agency shall conduct an information security assessment, and report the results to DIR, the governor, the lieutenant governor, and the speaker of the House of Representatives.
  • Required proof that agency executives have been made aware of the risks revealed during the preparation of the agency's information security plan.
  • Requires state agencies to identify information security issues and develop a plan to prioritize the remediation and mitigation of those issues including legacy modernization and cybersecurity workforce development and retention.
  • In the event of a breach or suspected breach of system security or an unauthorized exposure of sensitive information, a state agency must report within 48 hours to their executives and the state CISO. Information arising from an organization’s efforts to prevent, detect, investigate, or mitigate security incidents is defined as confidential.  (SB 532)
  • Requires creating and defining an Election Cyber Attack Study (by Sec. of State).
  • Allowing DIR to request emergency funding if a cybersecurity event creates a need (SB 1910).

 

 

 

 

Event Summary: Cyber Resilience Summit, October 20, 2016

CYBER RESILIENCE SUMMIT: Ensure Resiliency in Federal Software Acquisition

Topic: Improving System Development & Sustainment Outcomes with Software Quality and Risk Measurement Standards

Hosted by: Consortium for IT Software Quality (CISQ) in cooperation with Object Management Group, Interoperability Clearinghouse, IT Acquisition Advisory Council

Date: 20 October 2016 from 0800 – 1230

Location: Army Navy Country Club, 1700 Army Navy Drive, Arlington, VA

Agenda and Presentations: http://it-cisq.org/cyber-resilience-summit-2016/

 

Event Background

 

The Consortium for IT Software Quality (CISQ) held its semiannual Cyber Resilience Summit at the Army Navy Country Club in Arlington, Virginia in cooperation with the IT Acquisition Advisory Council (IT-AAC) and other IT leadership organizations. “Titans of Cyber” from the U.S. Federal Government attended the Summit to share critical insights from the front lines of the cyber risk management battle. The program focused on standards and best practices for measuring risk and quality in IT-intensive programs from the standpoint of productivity, software assurance, overall quality and system/mission risk. The discussion addressed proven methods and tools of incorporating such standard metrics into the IT software development, sustainment and acquisition processes.

 

Discussion Points

 

John Weiler, IT-AAC Vice Chair, and Dr. Bill Curtis, CISQ Executive Director, opened the Summit.
Dr. Curtis gave an overview of CISQ, explaining that it was co-founded in 2009 by the Software Engineering Institute (SEI) at Carnegie Mellon University and the Object Management Group (OMG) and is currently managed by OMG. The Consortium is chartered to create international standards for measuring the size and structural quality of software. Its mission is to increase the use of software product measures in software engineering and management. Dr. Curtis developed the original Capability Maturity Model (CMM) while at the SEI and now directs CISQ. Current sponsors include CAST, Synopsys, Booz Allen Hamilton, Cognizant, and others.

 

Significant CISQ contributions include:

  • A standard for automating Function Points that mirrors IFPUG counting guidelines
  • Four measures of structural quality to quantify violations of good architectural and coding practice:
    • Reliability
    • Performance Efficiency
    • Security
    • Maintainability
  • It is important to note that most measures of reliability assess system availability or downtime, which are behavioral measures. The CISQ measures assess flaws in the software that can cause operational problems. Thus, the CISQ measures provide prerelease indicators for operational or cost of ownership risks.

CISQ measures can be used to track software performance against agreed targets, as well as aggregated into management reports to track vendor performance. The continuing stream of multi-million-dollar failures is causing an increased demand for certifying software. Although CISQ will not provide a certification service, it will provide an assessment process to endorse technologies that can detect the critical weaknesses that comprise the CISQ Quality Characteristic measure standards.
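As an illustration only (the record fields, thresholds, and numbers below are invented and are not part of any CISQ specification), a minimal Python sketch of rolling per-characteristic violation counts up into a report against agreed targets:

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Violation:
        rule_id: str          # e.g. a CWE identifier or coding-rule name
        characteristic: str   # "Reliability", "Performance Efficiency", "Security", "Maintainability"
        severity: int         # 1 (minor) .. 9 (critical)

    def summarize(violations, size_kloc, targets):
        """Roll violations up into per-characteristic densities (violations per KLOC)
        and flag any characteristic that misses its agreed target."""
        counts = Counter(v.characteristic for v in violations)
        density = {c: counts.get(c, 0) / size_kloc for c in targets}
        return {c: (round(d, 2), "OK" if d <= targets[c] else "MISS") for c, d in density.items()}

    findings = [
        Violation("CWE-89", "Security", 9),
        Violation("empty-catch-block", "Reliability", 5),
    ]
    print(summarize(findings, size_kloc=120.0, targets={"Security": 0.0, "Reliability": 0.1}))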

 

The Security measure effort was led by the next speaker, Robert Martin, who oversees the Common Weakness Enumeration Repository maintained by MITRE Corporation. This repository contains over 800 known weaknesses that hackers exploit to gain unauthorized entry into systems.

 

Robert Martin, Senior Principal Engineer at MITRE, gave a presentation on: Defending Against Exploitable Weaknesses When Acquiring Software-Intensive Systems.


 

Mr. Martin’s main themes were:

  • We are more dependent upon software-enabled cyber technology than ever
  • Hardware and software are highly vulnerable so the possibility of disruption is greater than ever
  • Software in end items (e.g., cars, fighter jets) is growing at an exponential rate
  • Almost everything is cyber connected and co-dependent during operations and/or other phases of life
  • Today up to 90% of an application consists of third-party code

Mr. Martin’s main questions were: how do we track and measure all the code characteristics flowing into software development, and how do we determine and track what is really important?

 

Mr. Martin then described how to establish assurance by using an Assurance Case Model (Safety Case Tooling) with the elements of Claim/Sub-claim, Argument, and Evidence. He pointed out that this evidence-based assurance is an emerging part of NIST SPs 800-160 (draft) and 800-53 Rev 4.

  • This technique is good for capturing complicated relationships
  • Tying the evidence to supported claims can be an ongoing part of creating and maintaining the system
  • It is useful for Mission Impact Analysis and Cyber Risk Remediation Analysis
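A rough, illustrative-only sketch of that Claim/Argument/Evidence structure (the class and field names are invented for this example and are not taken from NIST or MITRE tooling):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Evidence:
        description: str   # e.g. "static analysis report for build 2041"

    @dataclass
    class Claim:
        statement: str                      # e.g. "no injection weakness is reachable from user input"
        argument: str = ""                  # why the cited evidence supports the claim
        evidence: List[Evidence] = field(default_factory=list)
        subclaims: List["Claim"] = field(default_factory=list)

        def unsupported(self) -> List["Claim"]:
            """Walk the claim tree and return claims that have neither evidence nor sub-claims."""
            if not self.evidence and not self.subclaims:
                return [self]
            gaps = []
            for sub in self.subclaims:
                gaps.extend(sub.unsupported())
            return gaps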

Mr. Martin also identified the Common Weakness Scoring System (CWSS) and the Common Weakness Risk Analysis Framework (CWRAF), which apply to the SANS Top 25 list. He then identified the benefits of using multiple detection methods, as some are good at finding cause, while others are good at finding effect. We can use multiple detection methods to collect evidence through all phases of development, from design review through red teaming – all the while considering the most important common weaknesses.
Mr. Martin then discussed program protection planning for prioritization/criticality analysis, using the assurance case to tie claims to supporting evidence. He also introduced the concept of “trustworthiness,” which is a combination of factors like safety, privacy, security, reliability, and resilience. For example, a security “false positive” may be a safety or reliability issue.

 

Finally, Mr. Martin pointed out that we can manage assurance cases with claims and the association of evidence to claims and that the evidence is articulated using structures related to common weaknesses. This accounts for all types of threats including human error, system faults, and environmental disruptions. Assurance cases can also be exchanged and integrated to aid extended system analysis.

 

CISQ provides standards for measuring security, safety and reliability in a consistent and computable way.

 

Next, the Titans of Cyber panel was led by Dr. Marv Langston, Principal, Langston Associates.

 


 

Dr. Marv Langston introduced the members:

  • Ray Letteer of the USMC stated that the Marine Corps’ cyber concerns center on operational working metrics based on standards. The need is for technical details, not a “trust me” plan of action and milestones.
  • Kevin Dulany of DIAP (a protégé of Dr. Letteer) stated that networks are “hard on the outside but soft in the middle.” Attacks today are data-driven and occur inside the networks, but current risk management frameworks (RMFs) are based on traditional IT constructs. Procedures require them to use these RMFs, but the mitigations do not actually apply. Embedded computing drives the need for a different approach with new mitigations. Kevin also noted that some levels of security tend to be categorized at a lower level because of the lack of resources.
  • Chris Page of ONI stated that we do have superiority because of our great technology, citing the Navy publication “A Design for Maintaining Maritime Superiority” as a message (warning) to our adversaries.
  • Martin Stanley of DHS said that their focus is on securing high-value assets. Their process assesses the security posture, applies measures for a year, and then reassesses with lessons learned. He went on to say that root causes are related to basic IT practices and ways the organization must operate systems. His organization is producing enterprise architecture (EA) guidance that is unusual for cyber because it attempts to address root causes.
  • J. Michael Gilmore of DoD OT&E said that our systems are not designed with cyber security as a priority. He gave an example of a supporting network to an aircraft that was not considered in recovery and was also tied to a vendor network. He explained that capabilities need to be secured from both the government and contractor sides. Mike also cited that people can be a major conduit for bad cyber – especially worldwide partners. Finally, he added that the Joint Regional Security Stacks (JRSS) is essential for DoD but that people in the field are not fully trained and don’t understand its use or vulnerabilities.

Marv Langston kicked off the discussion by stating:

  • We test and deliver but don’t look back. Why don’t we do cyber tests on operational systems – on a daily basis?
  • Gilmore – We have a limited project to do and are facing resource constraints so we push back on current cyber authorities. The commercial sector does continuous red teaming but the Government is resisting – deploy and forget.
  • Letteer – Testing must be continuous. The USMC has established “White Teams” to do continuous scans, coordinating with red teams. In addition, we have cyber protection teams to help put in mitigations. This is not as widespread as we would like but we are making progress. The USAF has a similar program.
  • Stanley – Many agencies are working with DHS on continuous monitoring. The traditional Certification and Accreditation (C&A) process has a place, but continuous monitoring is supplemental. Overall, compliance is currently valued more than continuous monitoring, and this must change.
  • Dulany – We used to have high emphasis on system security engineering but with too much reliance on contractors. Today we look at controls but that does not get us down into useable specifications or standards. The RMF is a good tool but we have problems keeping up with technologies. We cannot use tools in certain environments, so continuous RMF would help to reinforce compliance.

Langston – I’m concerned that we will wear ourselves out with all these processes but will miss the critical checks, like the daily cyber check.

  • John Weiler – The software market is constantly refreshing, but we still have weapon systems running 1985-era software.
  • Gilmore – We resist processes that are not spelled out in specifications. What percent of this good stuff is actually in RFPs? So, we will get resistance to innovative metrics because they are not in the specs.
  • Letteer – I agree; we are trying to get cyber security measures into RFPs, but it is hard to put them into specifications. We are used to addressing cyber (requirements) in general terms but cannot do it in the specifications. Because of this, cyber security is not a mandated function.
  • Gilmore – We used to get the response that “There are no formal requirements for cyber security in DoD requirements.” The Joint Staff is working on cyber Key Performance Parameters (KPPs), but we are not there yet. All we have so far is a document that describes acceptable degradations after a cyber-attack. As of yet, there are no cyber security requirements blessed by the JROC.
  • Page – Also, we see people shopping around for a threat profile to fit the security they implemented.

Questions from the audience:

  • Question – Isn’t there software assurance metrics language that could be adapted to programs?
  • Panel – Sounds good to us, but we do not tend to use the most modern tools like the Google desktop.
  • Question – Today’s cyber activities seem to be aimed at the whole stack but not at individual levels.
  • Panel – This relates to how we cement Government – Industry partnerships. We are good at sharing high-level information but not at collaborating on the details. How do we change what we do across the environment? We can look at the kill chain concept to identify our weak points. However, we must look at needed capabilities first then look at tools.
  • Question – Relating to cyber KPPs, we are trying to work with operators on how we manage risk. We need new commercial standards and it is important to work with industry.
  • Panel – Agree, but we are disappointed that this is taking two years. In addition, we are not sure we have PMO experience to understand cyber security engineering architectures. There is also more emphasis on getting complex mobile IT networks to function in the field. This has analogies in the commercial market but not specifics. Therefore, we struggle to get them to work and to facilitate links to supporting entities who can address failures. Common sense cyber controls would cause the (mobile) system to fail – not sure what we can do about this.
  • Question – The scale of nodes is moving from millions to billions, what does the panel think of this?
  • Panel – Going to IPV6 will help drive this complexity. Of course, our mobile devices have access to our networks. We have to focus on the assets we really want to protect, not everything. We must be prepared for continuous surprise. We need to keep up with bad actors, but the solution is not necessarily to modify the RMF. This is a continual slog that we continue to do over the years.

 

BREAK

 

Keynote speaker Dr. David Bray, CIO of the FCC, presented: Charting Cyber Terra Incognita: A CIO’s Perspective and Challenges.

 


 

Dr. Bray began his presentation by emphasizing the exponential growth in IT, human participants, and networked devices. His Terra Incognita (unknown land) is the combination of complex legacy infrastructure and the explosive growth of internet-connected systems, all with human actors and behaviors. This complexity and human interaction make it impossible not to have cyber issues, so we must strive for resiliency. He said that cyber threats will run like infectious diseases across borders and that the private sector, public sector, academia, and non-profits must all build bridges to deal with this. Fortune 500 CEOs cite having one cyber security engineer for each $1B of data. In addition, threats are over-classified, so it is often hard to make the case for cyber security support.

Dr. Bray mentioned “DIY” and the “internet of everything” that is outpacing cyber controls, citing examples such as industrial controls (moving to the internet with weak security, while consumers are not willing to pay for more) and capabilities for grass-roots entrepreneurs pioneering civic and social innovation (which could also be exploited by terrorists). He also described a greater reliance on IT, with machine learning as an essential complement to human activities. Exponential growth in technology is also spilling over into bio warfare with DNA-engineered bugs; everything in cyber could be in bio within 5 years. He cited the example of the FCC, which was a sitting target for cyberattacks but then went to 100% public cloud – from on-premise to off-premise.

Dr. Bray then described the giant leap from IPv4 to IPv6: the address space grows from 2^32 to 2^128 addresses, which is like moving from the volume of a beach ball to the volume of the sun! He talked about the importance of “Public Service” (21st century) vs. governance (20th century), and ended by emphasizing the need for more “change agents” in these exponential times.
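A quick back-of-the-envelope check of that comparison (the beach-ball and solar radii below are assumed, rounded values):

    import math

    ipv4 = 2 ** 32                 # IPv4 address space
    ipv6 = 2 ** 128                # IPv6 address space
    address_ratio = ipv6 / ipv4    # 2**96, roughly 7.9e28

    def sphere_volume(radius_m):
        return 4.0 / 3.0 * math.pi * radius_m ** 3

    beach_ball = sphere_volume(0.25)    # ~0.065 m^3 for a 50 cm diameter ball
    sun = sphere_volume(6.96e8)         # ~1.4e27 m^3
    volume_ratio = sun / beach_ball     # ~2e28

    # Both ratios land around 10^28, so the analogy holds to within an order of magnitude or two.
    print(f"address ratio ~ {address_ratio:.1e}, volume ratio ~ {volume_ratio:.1e}")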

 

At the conclusion of Dr. Bray’s presentation, John Weiler (IT-AAC) asked, “What do we see as the difference between (big) agile acquisition and agile development?”

 

Dr. Bray – We should not be in the code-writing business. We are trying to procure IT capabilities in 6-9 months, so agile acquisition should be an “a la carte” method with selectable modules.

 

Leo Garciga, Joint Improvised Threat Defeat Agency (JIDO) – In this commoditized environment, why do we still build custom stuff? Even standards are commoditized.

 

Dr. Bray – The commodity approach is good. Instead of Business Process Engineering, we should just keep it simple and sketch “how do you want to work” on the board. At the FCC, we tried to automate an online form with an initial estimate of $17M but found we could do it for $450K using the commodity approach.

 

Question from audience – What should we do about weapon systems and cyber vulnerabilities?

 

Dr. Bray – We must balance availability and protection. Sometimes we rule out cloud-based solutions by asking ourselves “do I want this on the internet?”

 

John Weiler – What are services that can be on the internet?

 

Dr. Bray – We can move to limited public and Government internets (Taiwan and Australia do this.)

 

Question from audience – How do we retrofit TCP/IP to be more secure in flight?

 

Dr. Bray – 1. Trust but verify (red teams). 2. Focus on Mission (what do you really need?)

 

Next, Leo Garciga, J6 Chief / CIO, JIDO and Ryan Skousen, Software Engineer, Booz Allen Hamilton presented: Integration of Security and Agile/DevOps Processes.

 


 

The presentation began with a review of JIDO’s mission as a quick reaction capability – to bring timely solutions to war fighters. The J6’s mission for IT is to:

  • Build a Big Data analytic platform, “Catapult,” and tool suite based on real-time, tactical needs
  • Embed with users worldwide to understand data available, analytic methodologies, & capability/data gaps
  • Provide solutions that are sometimes required the same day

JIDO has been doing Agile SDLC for five years. Continuous integration is already implemented with nightly security scans. Release management with traditional CM/CCB is still hard. Agile alone is not enough.

  • Quick reaction capability to emerging threats
  • Quicker than standard DoD process; seeing agility and speed
  • Length of time to approve is standard
  • Intel fusion system with focus on how to change and when we need to change

 

JIDO started its DevOps evolution in 2015. Security and compliance are built in upfront. JIDO’s goal is to completely automate deployments from code to production. We think this is a great capability.

  • Focus on managing risk and not compliance
  • Small changes
  • No manual/human review gate
  • Affordable by other agencies

 

Security/accreditation – ongoing authorization is secure agile + DevOps + continuous monitoring. JIDO has also developed an automated ongoing authorization pipeline.

  • Think through C&A before writing code
  • Adopt mission focus
  • Security accreditation (per NIST SP 800-37) can be automated to a large extent and should help to implement decisions by continuous monitoring instead of one-by-one inspection of packets – sort of an “ongoing authorization”
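JIDO’s actual pipeline is not detailed in the presentation; the sketch below only illustrates the general shape of such an automated gate, with an invented scanner interface, artifact name, and severity threshold:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Finding:
        cwe_id: str
        severity: int   # 1 (informational) .. 9 (critical)

    def run_security_scan(artifact: str) -> List[Finding]:
        """Stand-in for whatever scanners the pipeline invokes (SAST, container scan, etc.)."""
        return [Finding("CWE-79", 4)]   # canned result so the sketch runs end to end

    def authorize(artifact: str, max_severity: int = 6) -> bool:
        """Automated gate: promote the build only if no finding exceeds the agreed threshold;
        lower-severity findings are left to continuous monitoring rather than blocking."""
        blockers = [f for f in run_security_scan(artifact) if f.severity > max_severity]
        for f in blockers:
            print(f"blocking deployment of {artifact}: {f.cwe_id} severity {f.severity}")
        return not blockers

    if authorize("example-build-2041.tar"):
        print("ongoing authorization satisfied; deploying")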

JIDO is still working to transition the capability. This is hard to do, but we are working to make it transferable to other agencies.

 

Major takeaways:

  • Secure design and planning throughout SDLC
  • Containers for standardized deployment packaging
  • Secured, transparent DevOps pipeline.
    • Prevents tampering; provides monitoring and traceability.
    • Escalation based on code triggers (code delta, coverage) – see the sketch after this list
  • Type-accredited platform to receive and run containers
  • It is like having a trusted candy factory, packaging goodies into bulletproof briefcases, transporting them through a point-to-point hyperloop, and delivering to candy shops with turrets – do we really need to lick every lollipop?
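The presentation does not spell out the trigger logic; a toy version of escalating on code delta and coverage (with invented thresholds) might look like this:

    def needs_escalation(changed_lines: int, coverage: float,
                         delta_limit: int = 500, coverage_floor: float = 0.80) -> bool:
        """Flag a change for extra scrutiny (deeper scans, reviewer attention) when the
        code delta is large or test coverage of the touched components is below the floor.
        The thresholds are illustrative, not JIDO's actual values."""
        return changed_lines > delta_limit or coverage < coverage_floor

    assert needs_escalation(120, 0.91) is False    # small, well-covered change flows straight through
    assert needs_escalation(2400, 0.95) is True    # large delta is escalated even with good coverage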

 

Question from audience – How long will it take to fully implement JIDO Agile/DevOps process?

 

JIDO – We are now in full deployment across the classified and unclassified environments. We still have some staff education issues, but technically we are up and running, having worked out the CM problems.

 

John Weiler – In the DevOps world, speed can sacrifice some assurance. How do we recognize and incorporate engineering needs for safety/security?

 

JIDO – DevOps cannot be used for new ground up systems. Our process incorporates assurance by having daily scrums.

 

John Weiler – Security engineering cannot be determined by engineering. We must force rigorous engineering of systems with knowledge infusion.

 

JIDO – Yes, we do that by scanning in real time.

 

Question from audience – How do you characterize the tech challenge vs. the people challenge?

 

JIDO – This is a HUGE cultural change. We initially had a lot of pushback from people worried about their rules.

 

The second panel, Standards of Practice for IT Modernization and Software Assurance, was led by Dr. Bill Curtis, CISQ Executive Director.

 


 

Don Davidson, DoD, kicked off the panel with a short presentation on DoD cyber resilience:

  • Cyber resilience is to ensure that DoD missions (and their critically enabled systems) are dependable in the face of cyber warfare by a capable cyber adversary.
  • The DoD cybersecurity campaign:
    • Cybersecurity discipline implementation plan
    • Cybersecurity scorecard
    • Culture and compliance
  • The campaign covers these cybersecurity disciplines:
    • Strong authentication for access
    • Device hardening with configuration management / SW patching
    • Reduction of attack surface
    • Monitoring and diagnostics
  • Mission appropriate cybersecurity balances risks vs. additional security (beyond cybersecurity discipline) for trusted systems
  • Approach incorporates fundamental basis of supply chain risk management and addresses compliance through policy.

 

Joe Jarzombek, Synopsys – We are starting to implement SW assurance systems to address low hanging fruit.

 

Tom Hurt, DoD – Layers of cybersecurity are like multiple Maginot Lines applying 95% of assets on 16% of problems. Software must be integrated into system engineering.

 

Emile Monette, DHS – We have challenges to interpret cases and do not cover them all. We have many weaknesses in thousands of categories and automation is difficult. System security measures we discussed today are useful, but we can also focus on human expertise and leave other forms of assurance to automation.

 

Mr. Jarzombek – It is about leadership, not technical issues. KPPs get diminished for functionality. We need to be more demanding on providers and have requirements that are more specific, MOEs, and testing. We can specify industry standards but we must also help providers work through issues.

 

Mr. Davidson – We need to write KPPs because there are baseline security requirements that cannot be traded away. CIOs and CISOs are always fighting – but it needs to be a healthy dialog.

 

At the Black Hat conference, we heard:

  • Major breaches will continue for two years (bad for CISOs)
  • Industry may have to provide software with warranties
  • Software as a Service (SaaS) is a good model. Self-driving cars will lead to insuring software!
  • Sourcing untrusted libraries may drive some away from COTS to in-sourcing

 

Mr. Hurt – For mission assurance, we can take successful attacks back through architects and engineers to analyze with tools, including penetration testing. Why don’t we have red team (penetration) tests as part of O&M?

  • We could avoid vulnerabilities in development
  • It always takes more money to fix something after the fact

 

Dr. Curtis – 40% of software engineers are self-taught.

 

Panel members – We should ask if people and products we have are certified. We need (strong) leadership to avoid deploying dangerous products. This can be part of the RFI. One approach to vetting would be to have industry recommend proper controls, but other vendors may reject the recommendations.

 

Dr. Curtis – We need to know that a piece of software has some sort of certification. Education may help, but this is a complex issue and cyber courses in schools are not standardized. Institutions are now promoting cybersecurity basics in software engineering schools. We could approach this like a community “buyers’ club” – putting assurance in all Agency networks with requirements to build security into the software. This idea is emerging in industry such as the Vendor Security Alliance. These are models we could use to promote Government standards.

 

Mr. Hurt – The DoD Program Protection Plan requires use of assurance measures. We need assessments that are passed on to DT, OT, and O&M. We had a Joint Federation Information Assurance IOC in April 2016.

 

Dr. Curtis – How does cybersecurity work with agile? Agile is not incompatible with it and assumes teams are engaged with customers.

 

Mr. Hurt – For each sprint, we need a good set of allocated requirements and they must cover assurance – so we blend assurance into agile.

 

Question from audience – Do we need continuing education for cybersecurity professionals?

 

Panel – Yes, it is required for CISSPs. In addition, software engineers should be networked to work fixes to bugs. There are software development courses that cover cybersecurity but we still lack hard and fast requirements. The Government always asks for Project Management Professionals (PMPs) but rarely for cyber credentials.

 

Who is teaching “formally verified code”? This is a great concept for merging AI with humans, but we don’t know how mature it is or how long it would take to train someone.

 

Question from audience – What are we doing to give “tactical” hands on knowledge?

 

Dr. Curtis – Industry does not want to train and generally looks for experience. We have professional students vs. untrained practitioners. There is lots of pressure to push out code.

 

Question from audience – How does Government want industry to train? What certifications?

 

Mr. Hurt – New DoD 5000.2 will have software tools. We hope policy will move into guidance and best practices on websites. (DoD has 100,000 system engineers)

 

Question from audience – There is no certification in industry for security in software coding so we have to use contract (language) to govern security requirements. The FAR allows us to make suppliers fix bad software, but who exercises this? It does not seem to stand up in court.

 

Mr. Garciga – Scanning of software helps to deal with this. We should scan before acceptance. We must also get source code and the software design description (SDD) to promote organizational maturity. See the WhiteHouse.gov policy on open source code, which is forcing PMs to build document libraries of software with access to source code.

 

The Cyber Resilience Summit ended at 1230, with John Weiler (IT-AAC) and Dr. Curtis (CISQ) giving closing comments.

 

 

Join us at the next Cyber Resilience Summit on March 21, 2017 in Reston, Virginia.

 

Contact: Tracie Berardi, Program Manager, Consortium for IT Software Quality (CISQ)

tracie.berardi@it-cisq.org; 781-444-1132 x149

 

 

 

 

 

 

 

 

“Government Gets a ‘D’ for Cybersecurity”

Secure Coding Standards Needed for Cyber Resilience

 

On March 15, 2016 the Consortium for IT Software Quality (www.it-cisq.org), with support from the IT Acquisition Advisory Council (www.it-aac.org), hosted IT leaders from the U.S. Federal Government to discuss IT risk, secure coding standards, and areas of innovation to reduce the risk of Federal software-intensive systems. The following three themes were repeatedly emphasized by speakers and panelists and underline the need for secure coding standards in cyber resilience efforts.

 

Three alarms from the March 15 Cyber Resilience Summit tying code quality to secure coding standards

 

1) The current level of risk in Federal IT is unacceptable and processes must change.

Cyberattacks are becoming more prevalent and complex, and the nation’s IT systems, both public and private, are unprepared, explained Curtis Dukes, director of the National Security Agency’s Information Assurance Directorate. He scores the government’s national security systems at 70 to 75 percent, a ‘C’; the government as a whole gets a ‘D’; and the nation as a whole receives a failing grade, an ‘F’. The safest position is to assume your systems already have malware, remarked Dr. Phyllis Schneck, Deputy Under Secretary for Cybersecurity and Communications for the National Protection and Programs Directorate (NPPD), at the U.S. Department of Homeland Security. Both public and private IT organizations are far from the security and resilience required for dependable, trustworthy systems.

 

2) Poor quality code and architecture make IT systems inherently less secure and resilient
Several recent studies found that many of the weaknesses that make software less reliable also make it less secure, in that they can be exploited by hackers while at the same time making systems unstable. In essence, poor quality software is insecure software. Too often security is not designed into the software up front, making it much harder to secure and protect the system. One reason for this is that poor engineering practices at the architecture level are much more difficult to detect and more costly to fix.

 

3) Software must move from a “craft” to an engineering discipline

Software development is still too often viewed as an art. In order to produce secure, resilient systems, software development must mature from an individually practiced craft to become an engineering discipline. Coding practices that avoid security and resilience weaknesses can be trained and measured during development. For comparison, civil engineering has matured to where measurement plays a dominant role in every step of the process. Civil engineers use standard measures to ensure that structures are designed, built, and maintained in a manner that is safe and secure. The CISQ standards provide one means of measuring the structural quality of a system as the software is being developed, thus helping software transition to a more engineering based discipline.

 

Presentations from the Cyber Resilience Summit are posted to the CISQ website at http://it-cisq.org/cyber-resilience-summit/. The public is invited to join CISQ to stay current with code quality standards and receive invitations for outreach events.

 

CISQ is a consortium co-founded by the Software Engineering Institute (SEI) at Carnegie Mellon University and the Object Management Group® (OMG®), working in the IT industry to develop standard, automatable metrics for measuring non-functional, structural aspects of software source code, such as security, reliability, performance efficiency, and maintainability. Weaknesses in these attributes can lead to costly system outages, security breaches, data corruption, excessive maintenance costs, un-scalable systems, and other devastating problems. Now approved as international standards by the Object Management Group, the CISQ measures provide a common basis for measuring software regardless of whether it is developed in-house or under contract.

How to Identify Architecturally Complex Violations

Bill Dickenson, Independent Consultant, Strategy On The Web

 

Dr. Richard Soley, the Chairman and CEO of OMG, published a paper for CISQ titled How to Deliver Resilient, Secure, Efficient, and Easily Changed IT Systems in Line with CISQ Recommendations, which outlines the software quality standard for IT business applications. The last post explored the relationship between unit and system level issues.

 

The logical and obvious conclusion is to dramatically increase the effort focused on detecting the few really dangerous architectural software defects. Unfortunately, identifying such ‘architecturally complex violations’ is anything but easy. It requires holistic analysis at both the Technology and System Levels, as well as a comprehensive, detailed understanding of the overall structure and layering of an application. For those needing further confirmation and explanation of such problems, the most common examples for each of the four CISQ characteristics are described below.

 

#1 Reliability & Resiliency: Lack of reliability and resilience is often rooted in the “error handling.” Local, Unit Level analysis can help find missing error handling when it’s related to local issues, but when it comes to checking the consistency of error management across multiple technology stacks, which is practically always the case in sophisticated business applications, a contextual understanding at the Technology and System Levels is needed. A full analysis of the application is mandatory because developers may simply bypass data manipulation frameworks, approved access methods, or layers. As a result, multiple programs may touch the data in an uncontrolled, chaotic way. Bad coding practices at the Technology Level lead to two-thirds of the serious problems in production.
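For illustration only (a contrived Python example, not taken from the paper), this is the shape of the problem: each function looks acceptable in isolation at the Unit Level, but one module bypasses the approved access path and swallows errors, so failures surface inconsistently across the system:

    import sqlite3

    def get_customer_name(db: sqlite3.Connection, customer_id: int) -> str:
        """Approved access path: errors propagate so every caller handles them the same way."""
        row = db.execute("SELECT name FROM customers WHERE id = ?", (customer_id,)).fetchone()
        if row is None:
            raise KeyError(f"unknown customer {customer_id}")
        return row[0]

    def build_report(db: sqlite3.Connection, customer_id: int) -> str:
        """Architecturally complex violation: bypasses the approved layer and hides failures."""
        try:
            row = db.execute("SELECT name FROM customers WHERE id = ?", (customer_id,)).fetchone()
            return f"Report for {row[0]}"    # raises TypeError if the row is missing
        except Exception:
            return "Report for N/A"          # the error disappears; operations never sees the root cause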

 

#2 Performance Efficiency: Performance or efficiency problems are well known to damage end-user productivity and customer loyalty, and to consume more IT resources than they should. ‘Remote calls inside loops’ (i.e., calls to programs executed on a remote device, made from inside a loop in the calling program) are a well-known example that creates performance problems. A top-down, System Level analysis is required to search down the entire system calling graph to identify the source of the problem. Performance issues in the vast majority of cases reside at the System Level.
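A contrived illustration of the anti-pattern and its usual remediation (the price-lookup functions stand in for any remote call; none of this is from the paper itself):

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class OrderLine:
        sku: str
        qty: int

    # Violation: one remote round trip per iteration, so cost grows with order size.
    def total_slow(lines: List[OrderLine], fetch_price: Callable[[str], float]) -> float:
        return sum(fetch_price(line.sku) * line.qty for line in lines)

    # Remediation: a single batched call, then purely local work.
    def total_fast(lines: List[OrderLine], fetch_prices: Callable[[List[str]], Dict[str, float]]) -> float:
        prices = fetch_prices([line.sku for line in lines])
        return sum(prices[line.sku] * line.qty for line in lines)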

 

#3 Security & Vulnerability: Detecting backdoors or unsecure dynamic SQL queries through multiple layers requires a deep understanding of all the data manipulation layers as well as the data structure itself. Overall, security experts Greg Hoglund and Gary McGraw believe cross-layer security issues account for 50% of all security issues. The Common Weakness Enumeration database maintained by MITRE is essential for removing common defects.
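For reference, the unit-level half of that problem is the familiar contrast between dynamic and parameterized SQL (illustrative Python; detecting the cross-layer variants still requires tracing user input down through every layer):

    import sqlite3

    # Violation: SQL text is assembled from user input, so a crafted name can change the query.
    def find_user_unsafe(db: sqlite3.Connection, name: str):
        return db.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

    # Remediation: a parameterized query keeps user input out of the SQL text entirely.
    def find_user_safe(db: sqlite3.Connection, name: str):
        return db.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()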

CISQ Interviewed by SD Times – Dr. Bill Curtis (CISQ) and Dr. Richard Soley (OMG) Cited

Read About CISQ’s Mission, Standards Work, and Future Direction

 

Tracie Berardi, Program Manager, Consortium for IT Software Quality (CISQ)

 

Rob Marvin published an article in the January issue of SD Times that details the work of the Consortium for IT Software Quality (CISQ). Rob interviewed Dr. Richard Soley, CEO of the Object Management Group (OMG) and Dr. Bill Curtis, Executive Director of CISQ.  The article sheds light on the state of software quality standards in the IT marketplace.

 

I can supplement what’s covered in the article for CISQ members.

 

CISQ was co-founded by the Object Management Group (OMG) and the Software Engineering Institute (SEI) at Carnegie Mellon University in 2009.

 

Says Richard Soley of OMG, “Both Paul Nielsen (CEO, Software Engineering Institute) and I were approached to try to solve the twin problems of software builders and buyers (the need for consistent, standardized quality metrics to compare providers and measure development team quality) and SI’s (the need for consistent, standardized quality metrics to lower the cost of providing quality numbers for delivered software). It was clear that while CMMI is important to understanding the software development process, it doesn’t provide feedback on the artifacts developed. Just as major manufacturers agree on specific processes with their supply chains, but also test parts as they enter the factory, software developers and acquirers should have consistent, standard metrics for software quality. It was natural for Paul and I to pull together the best people in the business to make that happen.”

 

Richard Soley reached out to Dr. Bill Curtis to take the reins at CISQ. Bill Curtis is well-known in software quality circles as he led the creation of the Capability Maturity Model (CMM) and People CMM while at the Software Engineering Institute. Bill has published 5 books, over 150 articles, and was elected a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for his career contributions to software process improvement and measurement. He is currently SVP and Chief Scientist at CAST Software.

 

“Industry and government badly need automated, standardized metrics of software size and quality that are objective and computed directly from source code,” he says.

 

Bill Curtis organized CISQ working groups to start work on specifications. The Automated Function Point (AFP) specification was led by David Herron of the David Consulting Group and became an officially supported standard of the OMG in 2013. Currently, Software Quality Measures for Security, Reliability, Performance Efficiency, and Maintainability are undergoing standardization by the OMG.

 

The SD Times article in which Dr. Curtis and Dr. Soley are cited – CISQ aims to ensure industry wide software quality standards – is a summary of these specifications and their adoption. Please read.

 

A media reprint of the article has been posted to the members area of the CISQ website.  

 

You can also watch this video with Dr. Bill Curtis.

 

Later this year CISQ will start work on specs for Technical Debt and Quality-Adjusted Productivity.

 

CISQ Seminar Presentations Now Available: Measuring and Managing Software Risk, Security, and Technical Debt, September 17, 2014, Austin, TX

By Tracie Berardi, Program Manager, Consortium for IT Software Quality (CISQ)

 

Hello Seminar Attendees and CISQ Members,

 

Last week we met in Austin, Texas for a CISQ Seminar: Measuring and Managing Software Risk, Security, and Technical Debt. 

 

Presentations are posted to the CISQ website under “Event & Seminar Presentations.”
Login with your CISQ username/password, or request a login here

 

The seminar was kicked off by Dr. Bill Curtis, CISQ Director, and Herb Krasner, Principal Researcher, ARiSE University of Texas. Are you looking to prove the ROI of software quality? Mr. Krasner’s presentation is exploding with helpful statistics. Dr. Israel Gat (Cutter) and Dr. Murray Cantor (IBM) went on to discuss the economics of technical liability and self-insuring software. Dr. William Nichols (SEI Carnegie Mellon) revealed results from studying the practices of agile teams. Robert Martin from MITRE, Director of the Common Weakness Enumeration (CWE), and lead on the CISQ security specification, talked about the latest advancements in fighting software security weaknesses. 

 

Thank you for participating in this lively event! If you couldn’t make it to Austin, please feel free to view the presentations. Our next seminar will be in Reston, Virginia in late March 2015. 

 

CISQ aims to turn software quality into a measurable science. CISQ has developed quality measures for Security, Performance Efficiency, Reliability, and Maintainability that are going through the OMG standardization process now. You can view CISQ Quality Standard Version 2.1 on the CISQ site. We expect the measures to become official standards in early 2015.

 

CISQ Seminar – Software Quality in Federal Acquisitions

CISQ hosted its latest Seminar at the HYATT Reston Town Center in Reston, VA, USA. The topic for this installment was “Software Quality in Federal Acquisitions”, and included the following speakers:

 

  • David Herron, David Consulting Group
  • Robert Martin, Project Lead, Common Weakness Enumeration, MITRE Corp.
  • John Keane, Military Health Systems
  • Dr. Bill Curtis, Director, CISQ
  • John Weiler, CIO Interop. Clearinghouse
  • Joe Jarzombek, Director for Software & Supply Chain Assurance, DHS
  • Dr. William Nichols, Software Engineering Institute

 

Over 75 senior leaders from public and private sector organizations such as BASF, MITRE, US Department of Defense, Northrop Grumman, NSA, Fannie Mae, US Army, and NIST were in attendance listening to presentations, engaging in discussions, and networking with peers.

 

Dr. Curtis began the day by discussing the recent changes in the regulatory environment at the Federal level, especially as they relate to software risk prevention. Kevin Jackson (IT-AAC) stressed how innovation cannot be adopted if it cannot be measured.

 

Mr. Herron introduced the uses of productivity analysis and Function Points to more effectively and efficiently manage portfolios. He noted that a baseline provides a “stake in the ground” measure of performance, helps to identify opportunities for optimized development practices, and enables the organization to manage risks and establish reasonable service level measures. Mr. Herron also discussed how automation will change the game with respect to software sizing and Function Points, including increased coupling with structural quality and improved vendor management.

 

Mr. Martin led a lively session on identifying and eliminating the causes of security breaches through the development of the Common Weakness Enumeration repository. He described the best practices for using information in the repository for improving the security of software, noting that everything is based on software today and that any flaws in that software within today’s highly connected universe will magnify the issues. Different assessment methods are effective at finding different types of weaknesses, and some are good at finding the cause while others can find the effect. So it’s ok to use different methods together.

 

Mr. Keane then spoke about the tools and processes his team uses to measure and manage structural quality on DoD contracts. He noted the importance of strong vendor contract language dictating the quality and performance standards required. Performing static code analysis correctly has great benefits, and Mr. Keane stated that static analysis prior to testing is very quick and about 85% efficient. His team measures code quality using technical debt, architectural standards, and architectural dependencies.

 

Mr. Jarzombek showed how security automation, software assurance and supply chain risk management can enable enterprise resilience. He noted that there is an increased risk from supply chains due to: increasing dependence on commercial ICT for mission critical systems; increasing reliance on globally-sourced ICT hardware, software, and services; residual risk, such as counterfeit products and products tainted with malware, passed to the end-user’s enterprise; growing technological sophistication among adversaries. Mr. Jarzombek also noted that the ICT/software security risk landscape is a convergence between “defense in depth” and “defense in breadth”.

 

Dr. Nichols presented new research on the measurement of agile projects, noting that the agile community lacks hard data regarding which agile practices provide the best outcomes. He identified some trends and distributions but determined that there were no strong correlations between specific agile practices and measures. Great care must be taken when combining agile metrics because the variables often combine in ways that are not intuitively obvious and can easily become misleading when used in different contexts.

 

Dr. Curtis then concluded the event by talking about the importance of software productivity and quality measures in every kind of contract, and discussing the important work that CISQ is doing on creating specifications for automated software quality measures. He noted the need to curb technical debt in order to reduce the need for future rework, and made some recommendations for acquisition which include:

 

  • Setting structural quality objectives
  • Using a common vocabulary
  • Measuring quality at the system level
  • Evaluating contract deliverables
  • Using rewards and penalties wisely

 

A fantastic Cocktail Social followed the event, facilitating great networking between the speakers and attendees. We received many positive statements from attendees throughout the event, noting the wealth of valuable information that was disseminated through engaging presentations and Q&A.

 

Materials from the event are now posted in the Member Page under the “Event & Seminar Presentations” category. For more information regarding upcoming CISQ events, visit our Events page.

What Does Application Security Cost? – Your Job!

Today Target Stores announced that Beth Jacob, their CIO since 2008, has resigned.  Estimates vary, but the confidential data of at least 70 million of Target’s customers were compromised.  Target’s profits and sales have declined as a result, and it faces over $100 million in legal settlements.  Not surprisingly, CEO Gregg Steinhafel announced that Target will hire an interim CIO charged with dramatically upgrading its information security and compliance infrastructure. 

 

Whether it’s security breaches at Target, humiliating performance at Healthcare.gov, outages in airline ticketing systems, or 30 minutes of disastrous trading at Knight Capital, the costs of poor structural quality can be staggering.  In fact, they are now so high that CEOs are being held accountable for IT’s misses and messes.  Consequently, Ms. Jacob will not be the last CIO to lose a job over an application quality problem.

 

Don’t be surprised if the next CIO survey from one of the IT industry analysts reports that a CIO’s top concern is some combination of application security, resilience, and risk reduction.  These issues just moved from variable to fixed income.  That is, rather than having improvements in security and dependability affect a CIO’s bonus, they will instead affect a CIO’s salary continuation plan.

 

Regardless of what the org chart says, the CIO is now the head of security.  The threats online overwhelm those onsite.  The CIO’s new top priority is to guard the premises of the firm’s electronic business.  Failing to accomplish this is failing, period.  CIOs and VPs of Application Development, Maintenance, and Quality Assurance must walk into the job knowing these techniques.  On-the-job learning is too expensive to be tolerated for long.

 

By its nature, size, and complexity, software is impossible to completely protect from disruptions and breaches.  However, if you want to keep your job, it shouldn’t be the CEO calling for an overhaul of information security and compliance with industry standards.

Tough Love for Software Security

Each day brings more reports of hacked systems.  The security breaches at Target, TJ Maxx, and Heartland Payment Systems are reported to have cost well beyond $140,000,000 each.  Are we near a tipping point where people stop trusting online and electronic systems and go back to buying over-the-counter with cash and personal checks?  When does the financial services industry reach the breaking point and start charging excessive fees to cover their losses?  Before we arrive there, IT needs to apply some tough love to software security.

 

Reports following the shutdown of a crime ring last summer that had stolen 130,000,000+ credit card numbers indicated that the weakness most frequently exploited to gain entry was SQL injection.  SQL injection???  Haven’t we known about that weakness for two decades?  How can we still be creating these types of vulnerabilities?  How can we not have detected them before putting the code into production?  Don’t you validate your input?  Don’t you wash your hands before eating?

 

What do we have to do to derail this hacking express?  What will it take to develop a global profession of software engineers who understand the structural elements of secure code?  We need some tough love for those who continue to leave glaring holes in critical applications.

 

Here is a tough love recommendation.  It is admittedly a bit whacky, but you’ll get the point.  First, we rate each of the code-based weaknesses in the Common Weakness Enumeration (cwe.mitre.org) on a severity scale from ‘1 – very minor and difficult to exploit’, to ‘9 – you just rolled out a red carpet to the confidential data’.  Next, we implement technology that continually scans code during development for security vulnerabilities.  Finally, we immediately enforce the following penalties when a security-related flaw is detected during a coding session.

 

  • Severity rating 1, 2 — “Come on dude, that’s dumb” flashes on the developer’s display
  • Severity rating 3, 4 — developer placed in ‘timeout’ for 2 hours by auto-locking IDE
  • Severity rating 5, 6 — developer’s name and defect published on daily bozo list
  • Severity rating 7, 8 — mild electric shock administered through the developer’s keyboard
  • Severity rating 9 — developer banished to database administration for 1 month

 

Okay, this is a bit much, but with the cost of security flaws to business running well into 9-digits, the status quo in development is no longer tolerable.  Here are some reasonable steps to take on the road to tough love.

 

  1. All applications touching confidential information should be automatically scanned for security weaknesses during development, and immediate feedback provided to developers.
  2. Before each release into production, all application code should be scanned at the system level for security weaknesses. 
  3. All high severity security weaknesses should be removed before the code enters production.
  4. All other security weaknesses should be prioritized on a maintenance or story backlog for future remediation.
  5. All developers should be trained in developing secure code for each of their languages and platforms.
  6. Developers who continue to submit components to builds that harbor security weaknesses should receive additional training and/or mentoring.
  7. Developers who are unable to produce secure code even after additional training and/or mentoring should be assigned to other work.

 

The latter recommendations may upset some developers.  However, as the financial damage of security breaches escalates, the industry must take steps necessary to ensure that those entrusted to develop secure systems have the knowledge, skill, and discipline necessary to the task.  Organizations must accept some responsibility for preparing developers and sustaining their skills.  Academic institutions need to incorporate cyber-security as a requirement into their computer science and software engineering curricula.

 

The cyber-security community is supporting many important initiatives, and IT needs to take advantage of them.  Good places to start include the CERT website (www.cert.org) supported by the Software Engineering Institute at Carnegie Mellon University, the SANS Institute (www.sans.org), and the Common Weakness Enumeration (cwe.mitre.org) repository supported by Mitre on behalf of the US Department of Homeland Security.  Ultimately, developers must be held accountable for their capability and work results, since the risk to which they expose a business has grown unacceptably large.  Tough love for tougher security.

Software Quality beyond Application Boundaries

 

The retail security crisis continues…

 

A recent Wall Street Journal article exposed potential issues with Bitcoin’s transaction network. This left Tokyo-based Mt. Gox exchange and Gavin Andresen, Chief Scientist at the Bitcoin Foundation, pointing fingers at each other.

 

So far the retail industry has felt the pain of sophisticated hackers stealing sensitive information:

 

  • Target Corp. – The latest news suggests that the breach started with a malware-laced email phishing attack sent to employees at an HVAC firm that did business with the nationwide retailer
  • Neiman Marcus – 1.1 million debit and credit cards used at its stores may have been compromised
  • Michaels – investigating a possible security breach on its payment card network

 

According to a Business Insider article, smaller breaches on at least three other well-known U.S. retailers also took place during the U.S. holiday shopping season last year and were conducted using similar techniques as the one on Target. Those breaches have yet to come to light in the mainstream media.

 

Memory-scraping software, a danger exposed as early as five years ago, is becoming a common tool for these breaches. When a customer swipes a payment card at the checkout, the POS grabs data from the magnetic stripe and transfers it to the retailer’s payment processing provider. While the data is encrypted during the process (as required by PCI regulations), scrapers harvest the information from the POS RAM, where it briefly appears in plain text. In some cases, the encrypted data along with its keys is stolen and then decrypted outside the victim’s infrastructure. Cyber criminals have been adding features to make it more difficult for victims to detect the malicious software on their networks.

 

Thoroughly testing the quality of software has long been known to be an imperfect practice. We make up representative test cases and create fake data to ensure the release of software on time. Or, we outsource this development to organizations where our application becomes just one of many in focus. But the long tail of problems is becoming so prevalent now that it is time to leverage up-to-date technology and automation to dramatically increase the scope of our testing.

 

We also need to extend the notion of software quality beyond a particular application or process. As soon as that application or process has to share information with an outside process or system, a window is exposed for an attack. The measurement of quality must extend to the very system or business process within which it runs. For example, efforts need to be stepped up to ensure the right patches and guards are deployed frequently. Traffic channels must be monitored at high speed. Hardware issues must be corrected. And all of this must happen not just at the server level but at any and all connected endpoints. The Internet of Things, a phenomenon whereby “things” as diverse as smartphones, cars and household appliances are all online and connected to the internet, is becoming a reminder that the entry point for an attack can come from almost any device.

 

In order to ensure the quality and stability of software, we must learn to think and act like hackers. We must extend the monitoring and measurement of software quality to include the processes and systems within which software plays a role. We must harness the capabilities of available technology and automation to ensure deeper testing coverage. All of this is necessary to reduce the cost and risk associated with the types of breaches we are now starting to see. The integrity of the software application depends on it.