A year and a half ago I registered for the spring semester at Baruch College in New York City. The same morning I had an eye procedure in Florida. Shortly after that I bought $4000 of art from a dealer in Kansas City. By midday I had bought several thousand dollars more art in Australia. Apparently I was having a fine time at supersonic speeds. Then my credit card company’s neural nets caught up with me. Well, not me exactly.
Within an hour that fine morning I received a call, an email, and a text message telling me my credit card had been terminated and asking me to verify recent charges. Apparently I was joined on this round-the-world foray by several thousand other credit card customers. The credit card company figured the only way we could have executed this spending spree was on the Concorde, which of course had been grounded years earlier…and rarely flew to Australia anyway. Yep, somebody had been hacked.
I received a new credit card the next day, and all the fraudulent charges were reversed so I survived far better than the merchants who had parted with goods or services. And how much did it cost the credit card company to express mail thousands of credit cards to victimized customers in addition to the hours it took to clean up the financial debris? Yet my biggest question is what kind of idiot thinks he can enroll at Baruch College with a stolen credit card and not get caught?
These memories came flooding back today when US federal prosecutors working with international authorities announced they had broken a hacker ring that spent almost ten years fleecing millions of unsuspecting customers and merchants using 160 million credit card numbers stolen from the IT systems of several large companies. HOW DOES THIS KEEP HAPPENING? EVEN WORSE, HOW DO YOU NOT GET DETECTED IN THE MIDST OF USING 160 MILLION STOLEN CREDIT CARDS FOR ALMOST A DECADE?
Five hackers, ranging in age from 26 to 32, have been charged, and two are in custody. The damage to merchants and creditors is reported to be at least $300 million. Once breached, the card numbers were sold around the world: $10 for an American card, $15 for a Canadian card, and $50 for a European card. My good credit is worth only ten crummy bucks! Wow, that’s humiliating.
In addition to hacking NASDAQ (a US stock exchange), the hackers also penetrated Heartland Payment Systems and Global Payment Systems, companies that clear large volumes of credit card transactions for merchants. It looks like PCI compliance is no more a guarantee of impenetrable systems than being CMMI Level 5 is a guarantee of impeccable quality. A solid defense against hackers evidently requires more than compliant processes.
Here’s the worst of it. Apparently the hackers used a weakness known as ‘SQL injection’ to break into the systems. SQL injection! That’s one of the oldest attack patterns in the book, and we knew about it in the last century. The ‘book’ in this case is the Common Weakness Enumeration (CWE) repository maintained by Mitre Corporation with support from the US Department of Homeland Security (cwe.mitre.org). Out of the 800+ software weaknesses enumerated in the repository, the CWE team lists a Top 25 most commonly exploited by hackers. Guess which weakness is ranked #1 or #2 year after year. Right, SQL injection!
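To make the weakness concrete, here is a minimal sketch using Python’s standard-library sqlite3 module. The table, column names, and attack string are purely illustrative, not details from the case described above; the point is how string-built SQL lets input rewrite the query, while a parameterized query does not.

```python
import sqlite3

# Throwaway in-memory database with one table of cardholder data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-xxxx')")
conn.commit()

def lookup_vulnerable(name):
    # VULNERABLE: user input is spliced directly into the SQL string,
    # so crafted input becomes part of the query itself.
    query = "SELECT card FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # SAFE: a parameterized query treats the input strictly as data,
    # never as SQL syntax.
    return conn.execute(
        "SELECT card FROM users WHERE name = ?", (name,)
    ).fetchall()

# The classic attack string turns the WHERE clause into a tautology:
#   SELECT card FROM users WHERE name = '' OR '1'='1'
attack = "' OR '1'='1"
print(lookup_vulnerable(attack))  # [('4111-xxxx',)] -- every card leaks
print(lookup_safe(attack))        # [] -- no user has that literal name
```

The fix has been known for decades and costs almost nothing: never concatenate untrusted input into SQL; bind it as a parameter.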
But Mitre is not alone in assailing SQL injection. The Open Web Application Security Project (OWASP) publishes a list of the top 10 security vulnerabilities every three years. And of course, there is SQL injection right at the top.
If everyone agrees SQL injection is a huge security problem, how come we still see these weaknesses in critical business systems? How can one of the best-known security vulnerabilities still be a primary source of unauthorized penetration? As a profession, do we ever learn? How sophisticated do hackers have to be if they don’t even have to read very far down the list of weaknesses to find a way in? Why don’t developers and testers know about these weaknesses and how to detect and avoid them? If software systems are this easily penetrated, can we really call software development ‘engineering’, or even a profession?
The Consortium for IT Software Quality (CISQ) has recently defined a measure of software security based on the measurable weaknesses among the CWE Top 25. The effort was led by Bob Martin, who is in charge of the CWE repository. The specification for CISQ’s security measure is available free on the CISQ website. CISQ will hold a seminar on Wednesday, September 25 at the Hyatt Regency in New Brunswick, NJ, in which much of the afternoon session will focus on how to measure and manage software security.
Vulnerable customers need immediate preventive action on all systems that access confidential information. Here are some of the actions every IT organization with these systems needs to take ASAP.
- Ensure that every developer working on a system accessing confidential information is trained in secure architectural and coding practices as well as at least the top 25 CWE weaknesses.
- Ensure everyone involved in testing or any other form of defect detection is trained in how to detect violations of secure architectural and coding practices, and especially the top 25 CWE weaknesses.
- Implement and enforce quality assurance practices that are capable of detecting at a minimum the top 25 CWE weaknesses.
- Require all systems that access confidential information to undergo automated analysis at both the code unit and system levels for structural (non-functional) flaws that affect security.
- Enforce a process that provides sufficient time and resources to detect security weaknesses.
- Prepare an evidence-based case supporting claims that any system accessing confidential information is protected against known threats, attack patterns, and security weaknesses.
- Report security measures and other types of audit information regarding system security to upper management on a periodic basis.
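The automated analysis called for above does not have to start with expensive tooling. The sketch below is a toy lint rule of my own invention (the regexes and function name are illustrative; real static analyzers do far more) that flags source lines mixing SQL keywords with string building, the classic SQL injection smell:

```python
import re

# Lines containing a SQL verb...
SQL_VERB = re.compile(r"\b(SELECT|INSERT|UPDATE|DELETE)\b", re.IGNORECASE)
# ...combined with string formatting or concatenation are suspect.
STRING_BUILD = re.compile(r"""%s|\+\s*\w+|\.format\(|f['"]""")

def flag_suspect_sql(source_lines):
    """Return (line_number, line) pairs that mix SQL verbs with string building."""
    hits = []
    for n, line in enumerate(source_lines, 1):
        if SQL_VERB.search(line) and STRING_BUILD.search(line):
            hits.append((n, line.strip()))
    return hits

code = [
    'query = "SELECT card FROM users WHERE name = \'%s\'" % name',    # flagged
    'cur.execute("SELECT card FROM users WHERE name = ?", (name,))',  # clean
]
print(flag_suspect_sql(code))
```

A check this crude produces false positives, of course, but run on every commit it costs seconds and catches the exact pattern that, per the prosecutors, cost merchants $300 million.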
This sounds expensive—until you get the bill for a major unauthorized penetration. $300 million makes these recommendations seem inexpensive. Investing in secure software should be guided by a risk analysis that evaluates potential losses against improvement costs. There is no way to guarantee a foolproof system. But there is no sense in being the fool who proves it.