How Compliance Compromises Happen. (Or, The Most Boring Article Title in the History of All the Internet…)
John Strand //
There have been quite a few articles lately on how compliance standard X or Y is broken. Unfortunately, this often leads to blaming the nameless and faceless people behind the standards. It is easy to simply say they are dullards and not fit to be setting any agenda relating to computer security.
While this may be true, there is a bit more to it than that. Further, if we look at how these standards get watered down, there is a lot we can learn. Basically, let's talk about common mistakes when it comes to creating and adhering to compliance frameworks. As a side note, most of this is based on helping a relatively small group try to define security standards within their limited geographical location for a specific industry. The only reason I am writing this is that many of the problems they ran into are the same issues I see again and again with our customers. (See, for example: http://www.darkreading.com/analytics/hipaa-not-helping-healthcares-software-security-lagging/d/d-id/1322715)
Burdens on Industry
The first major breakdown when creating or adhering to compliance is the intense fear that the standards will be too burdensome on the industry in question. A long time ago, I was working on compliance standards for various oil companies in relation to the government. It was my first experience with this issue. There was much hemming and hawing about overburdening the oil and gas industry with unnecessary oversight that would make the whole process unprofitable. This was years before the MMS scandal blew up in 2010. (See the article by Ian Urbina for more details.) But the really odd thing to me at the time was that the very people complaining about overburdening an industry were the same people who drove "prestige class" rental vehicles to the meetings and wore clothes worth more than I made in a month. What exactly is the point of all this? The point is that the people who create compliance requirements are often under tremendous political pressure from people far more powerful than they are.
Further, even if rampant corruption and graft are not at play, as human beings we have a huge desire to make as many people as possible happy. This often standardizes mediocrity. We have seen this again and again in this industry, from the seven-layer OSI model to PCI to HIPAA. When this occurs, it often obscures what the compliance standards were trying to do in the first place. For example, many compliance standards are meant to be a series of guidelines, provided to offer direction for organizations that don't even know where to start. These guidelines quickly become the core baseline and the minimum level of effort organizations strive to meet. Anything above and beyond these recommendations is often looked at as a waste of money.
Fear of the Unknown
There is a certain herd mentality that infects all we do in security. This is because many people have little-to-no understanding of what they are doing at a basic technical level. For example, when testing and working with customers, it is very common for us to encounter auditors who downplay technical risks because they do not fit into some simple compliance model they work with. A number of years ago we were testing an energy company and discovered a large number of edge routers running telnet with no authentication for user access, where a default password would grant enable (level 15) access to the devices. Needless to say, this vulnerability would allow us to completely take over their networks. However, when we reported this vulnerability, the customer immediately discounted it because their vulnerability scanner had simply found the telnet service and blessed it with a low CVSS score. The customer was hesitant to fix the issue because some other "authority" deemed the risk lower than what we reported. There was much handwringing over going against the automated risk score. So, we demonstrated how the vulnerability could be exploited, and once they saw the true risk, they immediately addressed the issue. But they had to see the risk first. To put their hand in its side, if you will.
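To make the gap concrete, here is a hypothetical sketch of the difference between what a scanner reports and what a tester proves. The default enable password, the prompts, and the protocol exchange below are invented for illustration; they are not the actual device's behavior.

```python
import socket

# Hypothetical sketch: detection vs. proof. The default password,
# prompts, and exchange below are invented for illustration.

def grab_banner(host: str, port: int = 23, timeout: float = 5.0) -> str:
    """Roughly what a scanner does: connect, read the banner, file a
    low-severity "telnet detected" finding, and move on."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        return conn.recv(1024).decode(errors="replace")

def try_default_enable(conn) -> bool:
    """Roughly what a tester does: actually authenticate. `conn` is any
    object with send()/recv(), so the logic can be exercised without a
    live router."""
    conn.recv(1024)            # consume the login banner
    conn.send(b"\r\n")         # no user-level authentication required
    conn.send(b"enable\r\n")   # request privileged mode
    conn.send(b"cisco\r\n")    # hypothetical default enable password
    reply = conn.recv(1024).decode(errors="replace")
    return reply.strip().endswith("#")   # '#' prompt == enable (level 15)
```

The scanner's version of events ends at `grab_banner` and a low CVSS score; the tester's version ends at a privileged prompt, which is a very different conversation.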
This also brings up another point. Many of the best testers I have ever known were auditors who got sick and tired of being asked to “prove it.” That is what penetration testing provides. Proof. Not scan results. Not automated risk scores. It is about removing the shroud of the unknown and bringing clarity.
It is All Unknown
At the end of the day, many of the standards that exist today don't make sense. But why? How does this happen? In fact, pretty easily. For example, a wide range of compliance standards require passwords of eight characters or more. Why? To answer that question, we need to go back in time. We are in 2015, after all, and we have self-lacing shoes and flying cars. To take this one password-length example and truly understand it, we have to go back to 1985. Yes, the Back to the Future references are thick in this one. Anyway, that was when the "Green Book" (the DoD Password Management Guideline, CSC-STD-002-85) was released. You can get it here: http://csrc.nist.gov/publications/secpubs/rainbow/std002.txt It covers several things. First, it recommends how long we should use passwords (roughly six months or more) before changing them. It also covers appropriate password lengths for those timeframes. A nice, in-the-middle number was eight characters. This was all based on how long it would take to crack a password over a 300 baud connection, which works out to roughly 8.5 guesses per minute. Yeah… 300 baud… See, this is how a lot of compliance errors occur. People do not understand something, so they rely on the previous work of others who came before. Insanity carries forward. Because no one knows better.
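The 1985 arithmetic is worth actually doing. Using the 8.5 guesses per minute and the roughly six-month lifetime above (the lowercase-only alphabet is my simplifying assumption for illustration):

```python
# Rough sketch of the 1985 threat model: at 300 baud an attacker gets
# roughly 8.5 online guesses per minute. How many guesses fit into a
# six-month password lifetime, and how does that compare to an
# 8-character keyspace? (Lowercase-only is a simplifying assumption.)

GUESSES_PER_MINUTE = 8.5
LIFETIME_MINUTES = 6 * 30 * 24 * 60      # ~six months, in minutes

budget = GUESSES_PER_MINUTE * LIFETIME_MINUTES
print(f"guesses in one password lifetime: {budget:,.0f}")   # ~2.2 million

keyspace = 26 ** 8                       # 8 lowercase characters
print(f"8-char lowercase keyspace: {keyspace:,}")           # ~209 billion
print(f"chance of a lucky hit: about 1 in {keyspace / budget:,.0f}")
```

Even with that crude keyspace, the guesses outnumber the attacker's lifetime budget by roughly five orders of magnitude, so eight characters looked generous over a 300 baud line. Run the same arithmetic against a modern offline cracking rig doing billions of guesses per second and the conclusion evaporates, which is exactly the point: the number survived, the reasoning behind it did not.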
Things We Don’t Understand Seem Hard
In sports, there are often mystical barriers we believe are insurmountable, like the four-minute mile, which Roger Bannister broke in 1954. It seemed impossible, but once it was done, many followed. Or, a more applicable example: internal firewalls and Internet whitelisting. These two security approaches seem impossible as well. However, if we implement them in even permissive ways, we are far more secure than with a simple AV/blacklisting approach. We have worked with many companies who had no desire to move to a whitelist-and-firewall-everything approach. However, once they were compromised, their attitude changed rather quickly. And once they started down the path of greater restriction, it did not seem so hard.
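A minimal sketch of that "permissive first" idea, in the form of a default-deny egress allowlist. The domains and the log-only mode here are hypothetical examples, not any particular product's behavior:

```python
# Minimal sketch of default-deny egress allowlisting, run permissively
# at first. The domains and log-only mode are hypothetical examples.

ALLOWED_DOMAINS = {"updates.example.com", "api.partner.example.net"}

def egress_decision(host: str, enforce: bool = False) -> str:
    """Allow known-good destinations (including their subdomains).
    Everything else is denied, or merely logged while the allowlist
    is still being tuned."""
    if host in ALLOWED_DOMAINS or any(
        host.endswith("." + domain) for domain in ALLOWED_DOMAINS
    ):
        return "allow"
    return "deny" if enforce else "log-only"
```

Running in log-only mode for a few weeks shows exactly what would break before anything is actually blocked, which is usually what turns the "impossible" rollout into a manageable one.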
Eventually, it Gets Better
As much as this seems like one long, painful breakdown of the compliance breakdown, it does get better. We are starting to see organizations that are far more interested in doing things right than in merely being compliant. Believe me, I have seen it. No. Really. Stop laughing. I have seen organizations that have started to empower their security teams to do the right thing. They have proper budgets. They have security teams who are focusing on their data and not a checklist. They are companies with management support at the highest levels.
They are ever so much closer to being "secure" because they know that security is a process, not a destination.
But It Does Get Worse First
That was nice, wasn’t it? Looks like we might end this post on a happy note.
See, almost every organization that gets better, moving away from trying to be compliant and toward being secure, was compromised first. They learned their lessons. They touched the frying pan and found it was hot. They will do anything in their power to not make the same mistakes again. I have seen some companies import these lessons by hiring a C-level officer from a company that was burnt. But somewhere in their past, there was a deep, dark, psychologically altering experience that taught them that simply being compliant will not work. We have to strive for better.
No one dies at peace with the fact that they made their networks compliant.