Joff Thyer //
I had an interesting experience recently that reminded me to always “trust but verify.” Let me set the stage for you. As a penetration tester and IT security consultant, I have a pretty substantial network and home lab setup. In my case, this includes a fiber optic strand with an associated /28 IPv4 network block. Oh my god, why have we not adopted IPv6?! But I digress.
One of my fine Black Hills colleagues (Sally) asked me to spin up a Kali Linux image for some customer-focused scanning activity using MASSCAN. For your information, MASSCAN is on GitHub at https://github.com/robertdavidgraham/masscan.
This tool is absolutely awesome. It literally has the ability to scan the entire Internet in under six minutes at a rate of ten million packets per second.
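For reference, a deliberately rate-limited MASSCAN invocation looks something like the following sketch. The target block (an RFC 5737 documentation range), port range, and output file name here are all hypothetical stand-ins:

```shell
# Scan all 65,535 TCP ports of a /28 at a deliberately low rate
# (198.51.100.0/28 is a documentation block standing in for a real target)
masscan -p1-65535 198.51.100.0/28 --rate 400 -oL scan-results.txt
```

The --rate flag is what keeps MASSCAN polite; without it, the tool will happily transmit as fast as your hardware allows.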
My Internet Service Provider (in their wisdom) provided me with an Adtran 3140 router as a part of my service installation. The Adtran has an Ethernet handoff to my own router gateway and is connected upstream to a fiber bridge device.
Here’s the challenge… as soon as MASSCAN kicked off, the Adtran router immediately started dropping packets left and right. Most information security consultants would say it was probably a bandwidth problem. It absolutely was NOT, since the scan was configured at a mere 400 packets per second or less. I suspected this was actually a connection state tracking issue.
Why did I think that? Well, let’s look at the goal: scan 65,535 TCP ports across several hundred IP addresses in a very short amount of time. If the perimeter device was tracking connection state, and let’s say we have only 100 IP addresses being scanned in parallel, then we are looking at up to 6,553,500 TCP connection states to potentially track within seconds.
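The back-of-the-envelope arithmetic is easy enough to check:

```shell
# 65,535 TCP ports per host, 100 hosts scanned in parallel
echo $((65535 * 100))
# prints 6553500
```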
I called up the ISP and said, “Hey folks, can you give me a direct Ethernet handoff to my own router device?” To my surprise, they said “yes”! I am thinking, “Hey cool, now we have the consumer crap out of the way, let’s rock out.” So I call up Sally and literally say we fixed it and to “let it rip at 1000 pps.”
What happens next? My Linux router dies an untimely death with system CPU time on router cores maxed out at 100%, and packets start dropping left and right. Let me back up a second here. You might be thinking, “Hey Joff, your box should have just been bridging the publicly routable block, and how freaking hard is that to do really?” Well yes, that would be the case if the ISP did not require me to route the public IP space being delivered to me, but now that I had that Ethernet handoff, it was my job to be the router, and not just for RFC1918 internal networks.
The truth? Well, my Linux system was routing the traffic because the ISP actually treated me as the last router hop for my /28. Hey no big deal, I can handle that because I can set some IP forwarding rules up in my iptables configuration and it should be just fine.
Here is a scenario. Imagine your public WAN IP address is 203.0.113.2, and you are lucky enough that your ISP routes an IP address block of 198.51.100.0/28 to that address. What does that mean? Well, any packet directed at 198.51.100.0/28 will arrive at 203.0.113.2. (These are RFC 5737 documentation addresses standing in for real ones.)
So what does your configuration look like? Something like this:
eth0: 203.0.113.2/24 (or whatever mask your ISP gave you)
eth1: 198.51.100.1/28
Yes, you need to set up IP forwarding in the kernel also so that all packets arriving for 198.51.100.0/28 will be forwarded, right!? Ok, this is totally cool, but you are a security person, so you set up your firewall gateway to do other cool stuff like perhaps perform network address translation (NAT) for some internal network segments, and you have egress filtering to boot!
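For the record, enabling kernel IP forwarding on a Linux router is a one-liner (the sysctl drop-in file name below is just an example):

```shell
# Enable IPv4 forwarding for the running kernel
sysctl -w net.ipv4.ip_forward=1

# Persist the setting across reboots (file name is an example)
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-router.conf
```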
Being the total geek that you are, you have a four-interface system with the ability to not only host the public IP (DMZ) segment on eth1, but also a couple more networks internally, like:
eth2: 192.168.100.1/24
eth3: 192.168.200.1/24
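Using the example addressing above, the interface assignments might be sketched with iproute2 like so (interface names and addresses are assumptions carried over from the example):

```shell
# DMZ segment carrying the publicly routed /28
ip addr add 198.51.100.1/28 dev eth1

# Internal RFC1918 segments
ip addr add 192.168.100.1/24 dev eth2
ip addr add 192.168.200.1/24 dev eth3
```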
So we are now feeling really good about ourselves in that we have constructed a routed network, but our firewall rules still need to be laid down. In the world of Linux, this all occurs in the “FORWARD” chain of iptables. Let’s imagine for a second that we want to forward ALL traffic to our DMZ, and only allow selected traffic to flow from inside to outside. I am using “selected” pretty liberally here, as the below rules allow all TCP/UDP traffic to flow. But the real point is that we would want to track connection state using the “nf_conntrack” module, which is part of the Linux kernel’s netfilter framework underpinning iptables.
A nice appropriate ruleset would be as follows (note that this is in the “iptables-save” format):
# DMZ traffic
-A FORWARD -i eth0 -o eth1 -d 198.51.100.0/28 -j ACCEPT
-A FORWARD -i eth1 -o eth0 -s 198.51.100.0/28 -j ACCEPT
# Internal to outside traffic
-A FORWARD -s 192.168.0.0/16 -p tcp -j ACCEPT
-A FORWARD -s 192.168.0.0/16 -p udp -j ACCEPT
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
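Since the rules are expressed in “iptables-save” format, one way to load them is with iptables-restore (the file path below is an assumption; Debian-family systems conventionally use it):

```shell
# Load the saved ruleset atomically
iptables-restore < /etc/iptables/rules.v4

# Sanity-check the FORWARD chain and its packet counters
iptables -L FORWARD -n -v
```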
Intuitively, you would think that the above ruleset has you completely covered and everything will be beautiful. Yes, this is absolutely true until you fire up that “MASSCAN” from the DMZ in the 198.51.100.0/28 network segment.
Guess what, folks? Since you want to diligently track the state of connections from your internal network segment, your DMZ interfaces are not immune. But wait, you say!? I put rules in that configuration to just let it all pass without hindrance. Not so fast, Obi-Wan: connection tracking happens in the “nf_conntrack” module before your FORWARD rules are ever consulted, so an ACCEPT rule does not exempt a flow from being tracked!
Ok so “MASSCAN” fires up and suddenly your connection state table goes from perhaps a few hundred connections (hey, I have kids), to over half a million and your Linux NAT/Firewall box keels over dead.
How did I verify what was happening? I used the Linux “conntrack” command, which lists the state table itself. Basically, if the flow entry count was very large (in the thousands), then more connection tracking was occurring than I wanted.
# conntrack -L
… stuff omitted …
conntrack v1.4.3 (conntrack-tools): 310 flow entries have been shown.
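A quicker way to watch the table size, assuming the nf_conntrack module is loaded, is to read the kernel’s own counters directly:

```shell
# Number of flows currently being tracked
cat /proc/sys/net/netfilter/nf_conntrack_count

# Ceiling before new flows start getting dropped
cat /proc/sys/net/netfilter/nf_conntrack_max
```

The conntrack tool’s -C option reports the same live count without dumping the whole table.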
What is the solution? There are a couple of thoughts here. One is that we should not track the state of forwarded connections at all. Having said that, do you REALLY want to just leave ephemeral TCP/UDP ports open in the outside -> inside portion of the configuration? I don’t think so, Kemosabe.
The right answer, in my opinion, is a lovely feature of iptables that allows you to tweak the “raw” table such that some connections will never be tracked!
So perhaps you need to do something like this:
*raw
-A PREROUTING -i eth1 -s 198.51.100.0/28 ! -d 198.51.100.0/28 -j NOTRACK
In short, the rule says that if something is entering your home-based Linux router from the DMZ (publicly routable) segment of your network and the packet is destined for the greater Internet, then forget about tracking connection state! In other words, skip the conntrack machinery entirely and let the forwarding rules simply pass the packet along.
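One caveat worth noting: replies from the Internet back to the DMZ arrive on eth0, and those packets will still be tracked unless they get a matching exemption. A fuller sketch, using the same example addresses (which are assumptions), might look like:

```shell
# Outbound from the DMZ to anywhere that is not the DMZ: do not track
iptables -t raw -A PREROUTING -i eth1 -s 198.51.100.0/28 ! -d 198.51.100.0/28 -j NOTRACK

# Return traffic from the Internet back to the DMZ: do not track either
iptables -t raw -A PREROUTING -i eth0 -d 198.51.100.0/28 ! -s 198.51.100.0/28 -j NOTRACK
```

Untracked packets carry no connection state, so make sure the FORWARD rules covering the DMZ (like the interface-based ACCEPT rules earlier) do not depend on -m state matching.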
As you might imagine, I researched and implemented something very similar to this in my quest to help Sally and sure enough, it worked like a champ.
So all in the space of one day, I achieved:
- Direct Ethernet handoff from ISP
- Happier network performance
- Happier kid, and happier Dad
- Winning scanner performance!
Go forth and profit, and remember to always “Keep Calm and Hack Naked.”
You can learn more straight from Joff himself with his classes:
Available live/virtual and on-demand!