Texas Cybersecurity Legislation Passed In 2017 – A Summary

Herb Krasner, University of Texas at Austin (ret.), CISQ Advisory Board member

 

Here is a summary of the cybersecurity legislation passed this year that will affect state agencies and institutions of higher education (all from the 85th regular session of the Texas Legislature). The Texas Department of Information Resources (DIR) and state agency CISOs will be the primary actors in implementing these new laws. The 2017 cybersecurity legislation (HB 8, except where noted otherwise) includes the following summarized provisions:

  • Establishment of legislative select committees for cybersecurity in the House and Senate.
  • Establishment of an information sharing and analysis center to provide a forum for state agencies to share information regarding cybersecurity threats, best practices, and remediation strategies.
  • Providing mandatory guidelines to state agencies on the continuing education requirements for cybersecurity training that must be completed by all agency IT employees.
  • Creating a statewide plan (by DIR) to address cybersecurity risks and incidents in the state.
  • DIR will collect the following information from each state agency in order to produce a report due to the Legislature in November of every even-numbered year (SB 532):
    – Information on their security program
    – Inventory of the agency’s servers, mainframes, cloud services, and other technologies
    – List of vendors that operate and manage the agency’s IT infrastructure
  • The state cybersecurity coordinator shall establish and lead a cybersecurity council that includes public and private sector leaders and cybersecurity practitioners to collaborate on matters of cybersecurity.
  • Establishment of rules for security plans and assessments of Internet websites and mobile applications containing sensitive personal information.
  • Requiring the conduct of a study on digital data storage and records management practices.
  • Each agency shall prepare a biennial report assessing the extent to which all IT systems are vulnerable to unauthorized access or harm, or electronically stored information is vulnerable to alteration, damage, erasure, or inappropriate use.
  • At least once every two years, each state agency shall conduct an information security assessment, and report the results to DIR, the governor, the lieutenant governor, and the speaker of the House of Representatives.
  • Required proof that agency executives have been made aware of the risks revealed during the preparation of the agency’s information security plan.
  • Requires state agencies to identify information security issues and develop a plan to prioritize the remediation and mitigation of those issues including legacy modernization and cybersecurity workforce development and retention.
  • In the event of a breach or suspected breach of system security or an unauthorized exposure of sensitive information, a state agency must report within 48 hours to its executives and the state CISO. Information arising from an organization’s efforts to prevent, detect, investigate, or mitigate security incidents is defined as confidential. (SB 532)
  • Requiring the Secretary of State to create and define an Election Cyber Attack Study.
  • Allowing DIR to request emergency funding if a cybersecurity event creates a need (SB 1910).


“Government Gets a ‘D’ for Cybersecurity”

Secure Coding Standards Needed for Cyber Resilience

 

On March 15, 2016, the Consortium for IT Software Quality (www.it-cisq.org), with support from the IT Acquisition Advisory Council (www.it-aac.org), hosted IT leaders from the U.S. Federal Government to discuss IT risk, secure coding standards, and areas of innovation to reduce the risk of Federal software-intensive systems. The following three themes were repeatedly emphasized by speakers and panelists and underscore the need for secure coding standards in cyber resilience efforts.

 

Three alarms from the March 15 Cyber Resilience Summit tying code quality to secure coding standards

 

1) The current level of risk in Federal IT is unacceptable and processes must change.

Cyberattacks are becoming more prevalent and complex, and the nation’s IT systems, both public and private, are unprepared, explained Curtis Dukes, director of the National Security Agency’s Information Assurance Directorate. He scores the government’s national security systems at 70 to 75 percent, a ‘C’; the government as a whole gets a ‘D’; and the nation as a whole receives a failing grade, an ‘F’. The safest position is to assume your systems already have malware, remarked Dr. Phyllis Schneck, Deputy Under Secretary for Cybersecurity and Communications for the National Protection and Programs Directorate (NPPD), at the U.S. Department of Homeland Security. Both public and private IT organizations are far from the security and resilience required for dependable, trustworthy systems.

 

2) Poor quality code and architecture make IT systems inherently less secure and resilient

Several recent studies found that many of the weaknesses that make software less reliable also make it less secure: they can be exploited by hackers while at the same time making systems unstable. In essence, poor quality software is insecure software. Too often security is not designed into the software up front, making the system much harder to secure and protect. One reason for this is that poor engineering practices at the architecture level are much harder to detect and much more costly to fix.

 

3) Software must move from a “craft” to an engineering discipline

Software development is still too often viewed as an art. In order to produce secure, resilient systems, software development must mature from an individually practiced craft to become an engineering discipline. Coding practices that avoid security and resilience weaknesses can be trained and measured during development. For comparison, civil engineering has matured to where measurement plays a dominant role in every step of the process. Civil engineers use standard measures to ensure that structures are designed, built, and maintained in a manner that is safe and secure. The CISQ standards provide one means of measuring the structural quality of a system as the software is being developed, thus helping software development transition to a more engineering-based discipline.

 

Presentations from the Cyber Resilience Summit are posted to the CISQ website at http://it-cisq.org/cyber-resilience-summit/. The public is invited to join CISQ to stay current with code quality standards and receive invitations for outreach events.

 

CISQ is an IT industry consortium, co-founded by the Software Engineering Institute (SEI) at Carnegie Mellon University and the Object Management Group® (OMG®), that develops standard, automatable measures of the non-functional, structural aspects of software source code, such as security, reliability, performance efficiency, and maintainability. Weaknesses in these attributes can lead to costly system outages, security breaches, data corruption, excessive maintenance costs, un-scalable systems, and other devastating problems. Now approved as international standards by the Object Management Group, the CISQ measures provide a common basis for measuring software regardless of whether it is developed in-house or under contract.

Wall St. Journal Cyber Attack Highlights Need for Security

Last week a hacker known as “w0rm” attacked the Wall St. Journal website. W0rm is a hacker (or group of hackers) known to infiltrate news websites, post screenshots on Twitter as evidence, and solicit the sale of database information and credentials. Information stolen from the site would let someone “modify articles, add new content, insert malicious content in any page, add new users, delete users and so on,” said Andrew Komarov, chief executive of IntelCrawler, who brought the hack to the attention of the Journal.

 

See “WSJ Takes Some Computer Systems Offline After Cyber Intrusion.”

 

Security is a major issue, highlighted by the rising number of multi-million-dollar computer outages and security breaches in the news today. The breach of the Wall St. Journal website was the result of a SQL injection into a vulnerable web graphics system. The IT community has been talking about SQL injection since the 1990s (it is relatively simple to prevent), yet input validation issues still account for the significant majority of web application attacks.

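To make the point concrete, here is a minimal sketch of the difference between string-built SQL and a parameterized query. It uses Python's standard sqlite3 module and an invented articles table; it is not the Journal's actual stack, just an illustration of why the fix is straightforward.

    import sqlite3

    # Invented example schema; any table holding user-facing content would do.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
    conn.execute("INSERT INTO articles (title) VALUES ('Market Update')")

    def find_articles_unsafe(title):
        # Vulnerable: user input is concatenated into the SQL text, so a value
        # like "' OR '1'='1" rewrites the query itself.
        query = "SELECT id, title FROM articles WHERE title = '" + title + "'"
        return conn.execute(query).fetchall()

    def find_articles_safe(title):
        # Parameterized: the driver passes the value separately from the SQL
        # text, so it can never be interpreted as SQL syntax.
        return conn.execute(
            "SELECT id, title FROM articles WHERE title = ?", (title,)
        ).fetchall()

    malicious = "' OR '1'='1"
    print(find_articles_unsafe(malicious))  # returns every row in the table
    print(find_articles_safe(malicious))    # returns nothing

Prepared statements and object-relational mappers in other languages offer the same protection; the essential point is that untrusted input never becomes part of the query text.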
 

At CISQ we’ve gathered hundreds of IT organizations, system integrators, outsourced service providers, and software vendors to create global standards for software quality, including metrics to measure and manage security, reliability, performance, and maintainability.

 

At the upcoming CISQ Seminar, “Measuring and Managing Software Risk, Security, and Technical Debt” on September 17, our experts will discuss technical liability and security weaknesses.

 

We’ll have Robert Martin, an expert from The MITRE Corporation, a not-for-profit organization that operates research and development centers sponsored by the federal government, to discuss the latest developments in the national cyber-security community. Mr. Martin is co-creator of the Common Weakness Enumeration (CWE), a list of common software weaknesses that serves as a reference for developers and organizations building and purchasing applications. The CWE helps us identify and communicate vulnerabilities in code, design, or architecture. Mr. Martin also leads the CISQ working group on security.

 

You’re invited to attend CISQ events throughout the year. To get involved with software quality standards, please contact us.

CISQ Seminar – Software Quality in Federal Acquisitions

CISQ hosted its latest Seminar at the Hyatt Reston Town Center in Reston, VA, USA. The topic for this installment was “Software Quality in Federal Acquisitions,” and the program included the following speakers:

 

  • David Herron, David Consulting Group
  • Robert Martin, Project Lead, Common Weakness Enumeration, MITRE Corp.
  • John Keane, Military Health Systems
  • Dr. Bill Curtis, Director, CISQ
  • John Weiler, CIO Interop. Clearinghouse
  • Joe Jarzombek, Director for Software & Supply Chain Assurance, DHS
  • Dr. William Nichols, Software Engineering Institute

 

Over 75 senior leaders from public and private sector organizations such as BSAF, MITRE, US Department of Defense, Northrop Grumman, NSA, Fannie Mae, US Army, and NIST were in attendance listening to presentations, engaging in discussions, and networking with peers.

 

Dr. Curtis began the day by discussing the recent changes in the regulatory environment at the Federal level, especially as they relate to software risk prevention. Kevin Jackson (IT-AAC) stressed how innovation cannot be adopted if it cannot be measured.

 

Mr. Herron introduced the uses of productivity analysis and Function Points to more effectively and efficiently manage portfolios. He noted that a baseline provides a “stake in the ground” measure of performance, helps to identify opportunities for optimized development practices, and enables the organization to manage risks and establish reasonable service level measures. Mr. Herron also discussed how automation will change the game with respect to software sizing and Function Points, including increased coupling with structural quality and improved vendor management.

 

Mr. Martin led a lively session on identifying and eliminating the causes of security breaches through the development of the Common Weakness Enumeration repository. He described best practices for using information in the repository to improve the security of software, noting that everything is based on software today and that any flaws in that software, within today’s highly connected universe, will magnify the issues. Different assessment methods are effective at finding different types of weaknesses; some are good at finding the cause while others find the effect, so it makes sense to use several methods together.

 

Mr. Keane then spoke about the tools and processes his team uses to measure and manage structural quality on DoD contracts. He noted the importance of strong vendor contract language dictating the quality and performance standards required. Performing static code analysis correctly has great benefits, and Mr. Keane stated that static analysis prior to testing is very quick and about 85% efficient. His team measures code quality using technical debt, architectural standards, and architectural dependencies.

 

Mr. Jarzombek showed how security automation, software assurance, and supply chain risk management can enable enterprise resilience. He noted that risk from supply chains is increasing due to growing dependence on commercial ICT for mission-critical systems; increasing reliance on globally sourced ICT hardware, software, and services; residual risk, such as counterfeit products and products tainted with malware, passed on to the end user’s enterprise; and growing technological sophistication among adversaries. Mr. Jarzombek also noted that the ICT/software security risk landscape is a convergence between “defense in depth” and “defense in breadth”.

 

Dr. Nichols presented new research on the measurement of agile projects, noting that the agile community lacks hard data regarding which agile practices provide the best outcomes. He identified some trends and distributions but determined that there were no strong correlations between specific agile practices and measures. Great care must be taken when combining agile metrics because the variables often combine in ways that are not intuitively obvious and can easily become misleading when used in different contexts.

 

Dr. Curtis then concluded the event by discussing the importance of software productivity and quality measures in every kind of contract, and the work CISQ is doing to create specifications for automated software quality measures. He noted the need to curb technical debt in order to reduce future rework, and made several recommendations for acquisition, including:

 

  • Setting structural quality objectives
  • Using a common vocabulary
  • Measuring quality at the system level
  • Evaluating contract deliverables
  • Using rewards and penalties wisely

 

A cocktail social followed the event, providing great networking between the speakers and attendees. We received many positive comments throughout the event, with attendees noting the wealth of valuable information shared through engaging presentations and Q&A.

 

Materials from the event are now posted in the Member Page under the “Event & Seminar Presentations” category. For more information regarding upcoming CISQ events, visit our Events page.

Open Source is Not Immune to Software Quality Problems

The Heartbleed Bug reinforces the need to monitor the quality of open source software

 

OpenSSL came under fire this past week through the now infamous Heartbleed bug.

 

This open source encryption software is used by over 500,000 websites, including Google, Facebook, and Yahoo, to protect their customers’ valuable information. While generally a solid program, OpenSSL harbors a security vulnerability that allows hackers to access the memory of data servers and potentially steal the digital keys used to encrypt communications, thus gaining access to an organization’s internal documents.

 

Technically known as CVE-2014-0160, the Heartbleed bug allows hackers to access up to 64 kilobytes of memory in any one attack and can be exploited repeatedly. Faulty code within OpenSSL is responsible for the vulnerability, and because OpenSSL is an open source project, it is hard to pinpoint who is responsible, much less scrutinize all of the project’s complex code to find such a minute vulnerability.

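The underlying flaw is easy to picture: the heartbeat handler trusted the length field inside the request and echoed back that many bytes without checking the claim against the number of bytes actually received. The following is a deliberately simplified Python analogue of that pattern, with an invented "secret" standing in for whatever else happens to sit in the process's memory; it is not the OpenSSL C code itself.

    import struct

    SECRET = b"SERVER_PRIVATE_KEY_MATERIAL"   # other data resident in the process

    def handle_heartbeat(request: bytes, check_length: bool) -> bytes:
        """Echo a heartbeat payload back to the sender (simplified model)."""
        # Request layout (simplified): 2-byte claimed payload length, then payload.
        claimed_len = struct.unpack(">H", request[:2])[0]
        actual_payload = request[2:]

        # Toy model of the process buffer: the request sits next to other data.
        memory = request + SECRET

        if not check_length:
            # Flawed pattern: copy claimed_len bytes starting at the payload,
            # trusting the attacker-supplied length, so adjacent memory leaks.
            return memory[2:2 + claimed_len]

        # Fixed pattern: never echo more bytes than were actually received.
        return actual_payload[:min(claimed_len, len(actual_payload))]

    request = struct.pack(">H", 64) + b"hi"                # claims 64 bytes, sends 2
    print(handle_heartbeat(request, check_length=False))   # leaks the secret
    print(handle_heartbeat(request, check_length=True))    # echoes only b"hi"

The actual fix in OpenSSL came down to the same one-line idea: discard any heartbeat request whose claimed payload length does not fit within the record actually received.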
 

While I’m definitely not knocking open-source projects – WordPress and Mozilla Firefox are both open-source and world-class programs – there needs to be careful scrutiny over any contributed work submitted to open source projects. As Zulfikar Ramzan, CTO of Elastica, told the New York Times, “Heartbleed is not the main part of SSL. It’s just one additional feature within SSL….So it’s conceivable that nobody looked at that code as carefully because it was not part of the main line.”

 

The owners of open source projects need to examine the testing harnesses used prior to checking in code, and upgrade their quality assurance strategy to check for both functional and structural software quality problems. Testing collections of contributed modules alone is not enough – we need to also run tests against the whole integrated system to ensure that the new variables and call-outs work dependably and securely with the main body of code. For example, Salesforce.com requires that builders of custom code also build test harnesses, run those tests, and submit the results as well as the original and test code back to them for review. Only after review will Salesforce.com allow the introduction of custom code for a deployment into their Force.com platform. Open source projects should demand the same from their contributors.

 

If you are integrating open source software into your proprietary code, don’t take quality for granted. You too should be running your own functional and structural quality checks against the introduced code as well as the whole integrated system. This means that those incorporating open source code should run their own traditional functional test suite, along with system-level static and dynamic analysis. Advances in automated testing, penetration testing, and static analysis, along with the availability of temporary cloud computing resources have reduced the cost and effort of continuously running and verifying tests.

 

The Heartbleed bug was around for quite some time before finally being exposed. Don’t leave the discovery of a lurking menace up to chance; be proactive and protect yourself and your consumers. It’s always better to be safe than sorry.

 

What are your thoughts on the Heartbleed Bug? Share them with us in the “comments” section below.

You’ve Been Cloned

We no longer need biology to clone people. Electronics will do nicely. Hieu Minh Ngo, an enterprising young citizen of Vietnam, has just been arraigned in New Hampshire for posing as a private investigator from Singapore and offering an underground service that provided clients with identity information, including Social Security numbers, available from Court Ventures, an Experian subsidiary that provides access to court records, as well as from US Info Search, a firm that provides identity verification information. While it is unknown how many identities were breached, the likely count is in the millions.

 

How many databases hold shards of information about you? Start with what you have published openly on Facebook, LinkedIn, Twitter, and similar social sites. Then add the information saved by companies with which you do business, electronically or face-to-face, such as credit cards, purchases, preferences, and the like. Then add all the companies that gather data from them, collate it into records about you they sell to others regarding your financial, criminal, shopping, and charitable history. Then add all the medical data held by your doctors, hospitals, insurance companies, and record collating firms. Then add all the data retained by educational institutions you attended about your grades, disciplinary problems, test scores, diplomas, and other achievements. Then add all the data held by your employers such as salary, performance, retirement accounts, automatic deposit accounts, and tax information. Then add all the information local, state, and federal governments maintain on your driving, marital history, criminal record, real estate, tax, travel, and so many other aspects of your life. And who knows what the NSA has gotten ahold of? With such rich sources, who needs your DNA?

 

In Jurassic Park they needed DNA preserved in amber for eons to clone dinosaurs. On the internet they only need a few SQL injections and big data analytics to fully clone you. Integrate the medical data with the pictures you posted on Facebook, tie it to a 3D printer, and presto — instant you.

 

Now add the driver’s license, Social Security, and passport data held by government agencies and presto — you can pass through security. Now add financial, educational, and employment data, and presto — your alter ego thrives. Who needs the witness protection program when you can just become whoever you want? On the run or just gone bankrupt? No problem, just become someone else with a clean record.

 

The bottom line is that unless we clean up the security weaknesses in software that maintains personal information, we are headed for an IDENTITY HOLOCAUST. Anyone who has been affected by credit card fraud or identity theft has already experienced the nightmare. And it will be worse next time. In the late 1990s the world attacked the Y2K problem with a vengeance so that civilization as we know it would not end at midnight. We need the same internationally-coordinated determination to harden the software that defends our electronic identities.

 

Wow, you just texted me from Sydney, but didn’t I just see you over at………

What Does Application Security Cost? – Your Job!

Today Target Stores announced that Beth Jacob, its CIO since 2008, has resigned.  Estimates vary, but the confidential data of at least 70 million of Target’s customers were compromised.  Target’s profits and sales have declined as a result, and it faces over $100 million in legal settlements.  Not surprisingly, CEO Gregg Steinhafel announced that Target will hire an interim CIO charged with dramatically upgrading its information security and compliance infrastructure.

 

Whether it’s security breaches at Target, humiliating performance at Healthcare.gov, outages in airline ticketing systems, or 30 minutes of disastrous trading at Knight Capital, the costs of poor structural quality can be staggering.  In fact, they are now so high that CEOs are being held accountable for IT’s misses and messes.  Consequently, Ms. Jacob will not be the last CIO to lose a job over an application quality problem.

 

Don’t be surprised if the next CIO survey from one of the IT industry analysts reports that a CIO’s top concern is some combination of application security, resilience, and risk reduction.  These issues just moved from variable to fixed income.  That is, rather than having improvements in security and dependability affect a CIO’s bonus, they will instead affect a CIO’s salary continuation plan.

 

Regardless of what the org chart says, the CIO is now the head of security.  The threats online overwhelm those onsite.  The CIO’s new top priority is to guard the premises of the firm’s electronic business.  Failing to accomplish this is failing, period.  CIOs and VPs of Application Development, Maintenance, and Quality Assurance must arrive on the job already knowing these techniques.  On-the-job learning is too expensive to be tolerated for long.

 

By its nature, size, and complexity, software is impossible to completely protect from disruptions and breaches.  However, if you want to keep your job, it shouldn’t be the CEO calling for an overhaul of information security and compliance with industry standards.

Tough Love for Software Security

Each day brings more reports of hacked systems.  The security breaches at Target, TJ Maxx, and Heartland Payment Systems are reported to have cost well beyond $140,000,000 each.  Are we near a tipping point where people stop trusting online and electronic systems and go back to buying over-the-counter with cash and personal checks?  When does the financial services industry reach the breaking point and start charging excessive fees to cover their losses?  Before we arrive there, IT needs to apply some tough love to software security.

 

Reports following the shutdown of a crime ring last summer that had stolen 130,000,000+ credit card numbers indicated that the weakness most frequently exploited to gain entry was SQL injection.  SQL injection???  Haven’t we known about that weakness for two decades?  How can we still be creating these types of vulnerabilities?  How can we not have detected them before putting the code into production?  Don’t you validate your input?  Don’t you wash your hands before eating?

 

What do we have to do to derail this hacking express?  What will it take to develop a global profession of software engineers who understand the structural elements of secure code?  We need some tough love for those who continue to leave glaring holes in critical applications.

 

Here is a tough love recommendation.  It is admittedly a bit whacky, but you’ll get the point.  First, we rate each of the code-based weaknesses in the Common Weakness Enumeration (cwe.mitre.org) on a severity scale from ‘1 – very minor and difficult to exploit’, to ‘9 – you just rolled out a red carpet to the confidential data’.  Next, we implement technology that continually scans code during development for security vulnerabilities.  Finally, we immediately enforce the following penalties when a security-related flaw is detected during a coding session.

 

  • Severity rating 1, 2 — “Come on dude, that’s dumb” flashes on the developer’s display
  • Severity rating 3, 4 — developer placed in ‘timeout’ for 2 hours by auto-locking IDE
  • Severity rating 5, 6 — developer’s name and defect published on daily bozo list
  • Severity rating 7, 8 — mild electric shock administered through the developer’s keyboard
  • Severity rating 9 — developer banished to database administration for 1 month

 

Okay, this is a bit much, but with the cost of security flaws to business running well into nine digits, the status quo in development is no longer tolerable.  Here are some reasonable steps to take on the road to tough love.

 

  1. All applications touching confidential information should be automatically scanned for security weaknesses during development, and immediate feedback provided to developers.
  2. Before each release into production, all application code should be scanned at the system level for security weaknesses. 
  3. All high-severity security weaknesses should be removed before the code enters production (a minimal release-gate sketch follows this list).
  4. All other security weaknesses should be prioritized on a maintenance or story backlog for future remediation.
  5. All developers should be trained in developing secure code for each of their languages and platforms.
  6. Developers who continue to submit components to builds that harbor security weaknesses should receive additional training and/or mentoring.
  7. Developers who are unable to produce secure code even after additional training and/or mentoring should be assigned to other work.

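Step 3 in particular can be made mechanical. The sketch below assumes a hypothetical list of scanner findings, each tagged with a CWE identifier and a severity on the 1-to-9 scale suggested above; it is not tied to any particular analysis tool.

    from dataclasses import dataclass

    HIGH_SEVERITY_THRESHOLD = 7   # assumed cut-off on the 1-9 scale used above

    @dataclass
    class Finding:
        cwe_id: str      # e.g. "CWE-89", SQL injection
        severity: int    # 1 (very minor) .. 9 (red carpet to the confidential data)
        location: str    # file:line reported by the scanner

    def gate_release(findings):
        """Fail the release if any high-severity weakness remains; route
        everything else to the remediation backlog (steps 3 and 4 above)."""
        blockers = [f for f in findings if f.severity >= HIGH_SEVERITY_THRESHOLD]
        backlog = [f for f in findings if f.severity < HIGH_SEVERITY_THRESHOLD]
        for f in blockers:
            print(f"BLOCKED: {f.cwe_id} severity {f.severity} at {f.location}")
        print(f"{len(backlog)} lower-severity findings added to the backlog")
        return len(blockers) == 0

    # Invented findings for illustration only.
    findings = [
        Finding("CWE-89", 9, "checkout/orders.py:42"),   # SQL injection
        Finding("CWE-117", 3, "web/audit_log.py:88"),    # improper log neutralization
    ]
    assert gate_release(findings) is False   # the CWE-89 finding blocks the release

In practice a gate like this sits in the build pipeline and consumes the output of whatever static analysis tools the organization already runs.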
 

The latter recommendations may upset some developers.  However, as the financial damage of security breaches escalates, the industry must take the steps necessary to ensure that those entrusted to develop secure systems have the knowledge, skill, and discipline necessary to the task.  Organizations must accept some responsibility for preparing developers and sustaining their skills.  Academic institutions need to incorporate cyber-security as a requirement in their computer science and software engineering curricula.

 

The cyber-security community is supporting many important initiatives, and IT needs to take advantage of them.  Good places to start include the CERT website (www.cert.org) supported by the Software Engineering Institute at Carnegie Mellon University, the SANS Institute (www.sans.org), and the Common Weakness Enumeration (cwe.mitre.org) repository supported by MITRE on behalf of the US Department of Homeland Security.  Ultimately, developers must be held accountable for their capability and work results, since the risk to which they expose a business has grown unacceptably large.  Tough love for tougher security.

Software Quality beyond Application Boundaries

 

The retail security crisis continues…

 

A recent Wall Street Journal article exposed potential issues with Bitcoin’s transaction network. This left Tokyo-based Mt. Gox exchange and Gavin Andresen, Chief Scientist at the Bitcoin Foundation, pointing fingers at each other.

 

So far the retail industry has felt the pain of sophisticated hackers stealing sensitive information:

 

  • Target Corp. – The latest news suggests that the breach started with a malware-laced email phishing attack sent to employees at an HVAC firm that did business with the nationwide retailer
  • Neiman Marcus – 1.1 million debit and credit cards used at its stores may have been compromised
  • Michaels – investigating a possible security breach on its payment card network

 

According to a Business Insider article, smaller breaches of at least three other well-known U.S. retailers also took place during the U.S. holiday shopping season last year and were conducted using techniques similar to those used against Target. Those breaches have yet to come to light in the mainstream media.

 

Memory-scraping software, a danger exposed as early as five years ago, is becoming a common tool in these breaches. When a customer swipes a payment card at the checkout, the POS system grabs data from the magnetic stripe and transfers it to the retailer’s payment processing provider. While the data is encrypted during the process (as required by PII regulation), scrapers harvest the information from the POS system’s RAM, where it briefly appears in plain text. In some cases the encrypted data is stolen along with its keys and then decrypted outside the victim’s infrastructure. Cyber criminals have been adding features to make it more difficult for victims to detect the malicious software on their networks.

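Conceptually the scraper is doing nothing sophisticated: it walks through process memory looking for anything shaped like magnetic-stripe track data. The sketch below illustrates the idea defensively in Python, scanning a byte buffer for a simplified Track 2 pattern; the pattern details and the buffer contents are invented for the example.

    import re

    # Simplified Track 2 shape: ';' + 13-19 digit card number + '=' + expiry and
    # service-code digits + '?'. RAM scrapers hunt for this shape in POS memory.
    TRACK2_PATTERN = re.compile(rb";(\d{13,19})=(\d{4})\d*\?")

    def scan_for_track_data(memory: bytes):
        """Return (card number, expiry) pairs found in plain text in a buffer."""
        return [(m.group(1).decode(), m.group(2).decode())
                for m in TRACK2_PATTERN.finditer(memory)]

    # Invented stand-in for what a POS process buffer might briefly hold between
    # the swipe and encryption for the payment processor.
    pos_memory = b"...log...;4111111111111111=25121010000000000000?...more..."
    print(scan_for_track_data(pos_memory))   # [('4111111111111111', '2512')]

The defensive implication follows directly: minimize how long card data exists in memory in clear form, for example by encrypting at the point of swipe, and monitor for processes that read other processes' memory.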
 

Thoroughly testing the quality of software has long been known to be an imperfect practice. We make up representative test cases and create fake data to get software released on time. Or we outsource the development to organizations where our application is just one of many competing for attention. But the long tail of problems is becoming so prevalent that it is time to leverage up-to-date technology and automation to dramatically increase the scope of our testing.

 

We also need to extend the notion of software quality beyond a particular application or process. As soon as that application or process has to share information with an outside process or system, a window is exposed for an attack. The measurement of quality must extend to the very system or business process within which the software runs. For example, efforts need to be stepped up to ensure the right patches and guards are deployed frequently. Traffic channels must be monitored at high speed. Hardware issues must be corrected. And all of this must happen not just at the server level but at any and all connected endpoints. The Internet of Things, a phenomenon whereby “things” as diverse as smartphones, cars, and household appliances are all online and connected to the internet, is a reminder that the entry point for an attack can come from almost any device.

 

In order to ensure the quality and stability of software, we must learn to think and act like hackers. We must extend the monitoring and measurement of software quality to include the processes and systems within which software plays a role. We must harness the capabilities of available technology and automation to ensure deeper testing coverage. All of this is necessary to reduce the cost and risk associated with the types of breaches we are now starting to see. The integrity of the software application depends on it.