Pink Teaming: The Dilution of Pentesting
John Strand //
There have been a few conversations at conferences and meet-ups over the past year or so about the validity of penetration testing. There are many things on the horizon of computer security that are going to be disruptive to the industry as a whole. And, in large part, these will be good for the industry. One of them is the rise of bug bounty programs. These are programs where an organization invites a relatively obscure set of testers to test its resources. I have seen the results firsthand, and they can be very good. They have their limitations and their strengths, but that is a topic for another blog.
Instead, I want to focus on something we are seeing more and more of at BHIS. A steady watering down of penetration tests. Internally, we call these Pink Teams.
A Pink Team is when the best of intentions of a Red Team get watered down. Almost all the time, the direct technical contacts at the organizations we work with have the absolute best of intentions. They want a true test of their organization’s ability to detect and react to advanced threats. But then, the test starts to get watered down. Restrictions from management and legal start to creep in. Sometimes the restrictions are very much legitimate, driven by FCC or HIPAA requirements. Any test is going to have some level of restrictions, and we expect that as part of doing business in this field. But what I would like to talk about today is when the restrictions are not in line with the goals of a test at all. Rather, they border on the ridiculous.
For example, here are a handful of things we have dealt with over the past few months:
- Customers actively watching for the testing IP addresses (which we gave at their request), then actively blocking them as soon as they see any traffic from them.
- Customers watching exploited systems in real time and actively disabling cmd.exe and PowerShell as we are using them.
- Setting a large number of email addresses as “off limits” to phishing because they belong to important people in the organizations.
- Having us stop testing as soon as we exploit a system, then taking a week to let us begin again… because they did not think we would get in and, when we did, they did not know what to do.
- Having a group of people authorize every system we intend to exploit. Then, disallowing exploitation on the most likely targets.
I am writing these out not to poke fun at the customers, but to address why this happens and how you can deal with it in your organization.
The first thing to understand is that these issues, at their core, are driven by a lack of understanding. While our direct contacts almost always understand how a test is going to work, the people who work with them may not. The single best way to handle this is by setting up lunch-and-learns where you can walk through what is going to happen during a test. We often do a webcast for systems administrators and developers to explain how they can remove the low-hanging fruit from their environment. It is very common for me to have meetings with our customers’ upper management to walk through what we are doing and why. It is also important to clearly explain that an untested path will be attacked. It is just a matter of time and training.
I talk for a living. Teaching for SANS has exposed me to thousands of students, some of them hostile. I don’t rattle very easily, and I can usually get my point across in less than an hour. I also have that “outsider” thing going for me. Many times I say the exact same things that my technical contacts have been saying to management for years. But, because I am from outside the organization, and sometimes for that reason only, management listens.
The reasons above are why you need to be consistent in your conversations and brown-bag sessions. Do not expect trust and understanding to simply appear in a one-hour meeting – trust is established over time. Hold these meetings and sessions at least monthly. After a while you will develop a base understanding of security and testing within your organization. More importantly, they will start to trust you. This is the best defense against the watering down of testing objectives.
February 8, 2017 @ 9:15 am
We’ve got people who want a heads-up when we’re doing phishing tests.
February 8, 2017 @ 2:53 pm
Do you have a set of rules/lines that you can draw regarding what blue teams are okay to look at during a pen test? Setting up alerts on IP addresses, specific account names, command line history, etc while testing is a serious problem.
February 9, 2017 @ 9:25 am
It’s hard to give a specific list, since the industry changes so quickly. We will say, however, that it seems to be a problem of understanding the purpose of red teams coming in to evaluate the situation. People feel very vulnerable when they are getting a pentest. We find this unfortunate, because we’re not here to break everything, but to help everyone understand how they can improve. We hope we can help a customer understand that this is a form of training.