Securing Your Way to Restful Sleep with Ansible Galaxy

When securing remote access, key-based SSH and sometimes 2FA can provide an extra layer of relief. Additionally, it would be hard to imagine dropping one’s defenses and going naked, so a firewall is a standard requirement. Why not throw in fail2ban, backed by iptables, to defend against attempts to smash through open services? I seriously love this combination of services. 🙂 “Oh, you tried thrice to authenticate on our HTTPS? Fail2ban, please jail this IP for a while. Thanks, mkay.” And, last but not least, let’s dump all this stuff into Logstash, Geo-IP map it in Kibana for an awesome visualization, and finally, get some rest.

First, let’s check our Ansible version:

ansible@tw17ch01:/etc/ansible/roles/sshd$ dpkg -l | grep ansible
ii  ansible   2.1.0.0-1ppa~trusty   all   A radically simple IT automation platform

Next, let’s install, configure and deploy SSHD with a Galaxy role that includes every possible variable. We are going to add the role, make some changes, and later build a playbook to deploy our new role. Link here – https://galaxy.ansible.com/mattwillsher/sshd/

Ansible server magic:

ansible@tw17ch01:/etc/ansible/playbooks$ sudo ansible-galaxy install mattwillsher.sshd
[sudo] password for ansible:
- downloading role 'sshd', owned by mattwillsher
- downloading role from https://github.com/willshersystems/ansible-sshd/archive/v0.4.4.tar.gz
- extracting mattwillsher.sshd to /etc/ansible/roles/mattwillsher.sshd
- mattwillsher.sshd was installed successfully

ansible@tw17ch01:/etc/ansible/roles$ sudo mv mattwillsher.sshd/ sshd/
### Galaxy roles, for the most part, follow a standard structure inside their "role directory", so mv simply renames the role when the destination doesn't exist.

ansible@tw17ch01:/etc/ansible/roles/sshd$ ls
CHANGELOG  files  LICENSE  README.md  templates  Vagrantfile
defaults   handlers  meta  tasks  tests  vars

ansible@tw17ch01:/etc/ansible/roles$ sudo nano sshd/vars/Ubuntu_14.yml

*Strong recommendation: modify the default port, disable root login altogether, and add a line for PasswordAuthentication no. Ensure the file you modify is appropriate for the OS you are deploying the role to.
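To make that concrete, here is the shape of what I set, sketched from memory — the role renders sshd_config from a dictionary of options, but verify the exact variable names against the role's README for your version:

```yaml
# Sketch only -- key names assume the role's "dict of sshd_config
# options" pattern; confirm against the role docs before deploying.
sshd:
  Port: 22444                   # non-default SSH port
  PermitRootLogin: "no"         # no root login, ever
  PasswordAuthentication: "no"  # keys only
```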

Next, let’s install, configure and deploy a firewall to block all the things. Link here – https://galaxy.ansible.com/geerlingguy/firewall/

ansible@tw17ch01:/etc/ansible/roles$ sudo ansible-galaxy install geerlingguy.firewall
[sudo] password for ansible:
- downloading role 'firewall', owned by geerlingguy
- downloading role from https://github.com/geerlingguy/ansible-role-firewall/archive/1.0.9.tar.gz
- extracting geerlingguy.firewall to /etc/ansible/roles/geerlingguy.firewall
- geerlingguy.firewall was installed successfully

ansible@tw17ch01:/etc/ansible/roles$ sudo mv geerlingguy.firewall/ chains/
### Let's make a few edits.
ansible@tw17ch01:/etc/ansible/roles$ sudo nano chains/defaults/main.yml
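For reference, the kind of edit I made there looks like this — firewall_allowed_tcp_ports is the role's list of TCP ports to open, and the port values below are from my setup:

```yaml
# Only open what we actually use; everything else gets dropped.
firewall_allowed_tcp_ports:
  - "22444"  # our non-standard SSH port
  - "443"    # HTTPS
```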

The firewall mods were easier to wrap my brain around than the complexity built into the sshd role. My system only needed the SSH port listed above and HTTPS. One more mod to the chains role:

ansible@tw17ch01:/etc/ansible/roles$ sudo nano chains/templates/firewall.bash.j2

I added the following rule because I don’t care about outbound traffic from a DO droplet right now.

#  Allow all outbound traffic - 
#  you can/SHOULD modify this to only allow certain traffic!
iptables -A OUTPUT -j ACCEPT
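If you do care about outbound traffic, a tighter sketch — not what I deployed, so adjust before trusting it — would default-deny and allow only loopback, replies, and a short list of named services:

```
# Tighter alternative (sketch): allow loopback, replies, and a few
# outbound services, then drop the rest.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT                    # DNS
iptables -A OUTPUT -p udp --dport 123 -j ACCEPT                   # NTP
iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT  # apt, HTTPS
iptables -A OUTPUT -j DROP
```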

The chains role should be ready to rock.

Next, let’s install, configure and deploy fail2ban to protect services. Link here – https://galaxy.ansible.com/tersmitten/fail2ban/

ansible@tw17ch01:/etc/ansible/roles$ sudo ansible-galaxy install tersmitten.fail2ban
- downloading role 'fail2ban', owned by tersmitten
- downloading role from https://github.com/Oefenweb/ansible-fail2ban/archive/v1.5.0.tar.gz
- extracting tersmitten.fail2ban to /etc/ansible/roles/tersmitten.fail2ban
- tersmitten.fail2ban was installed successfully

ansible@tw17ch01:/etc/ansible/roles$ sudo mv tersmitten.fail2ban/ banner/
### Let's make a few edits
ansible@tw17ch01:/etc/ansible/roles$ sudo nano banner/defaults/main.yml

I had to make some changes to the sshd service configuration to reflect the use of a non-standard port. I also added service configuration for HTTPS. After looking through the rest of the directory structure, this role is also ready to go.
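For the curious, my jail overrides looked roughly like this — fail2ban_services is the variable this role uses for jail definitions, though you should verify the key names against the role's README for your version:

```yaml
# Mirrors the jail.local output shown later: non-standard SSH port,
# plus an HTTPS jail. Verify key names against the role docs.
fail2ban_services:
  - name: ssh
    enabled: true
    port: 22444
    filter: sshd
    logpath: /var/log/auth.log
    maxretry: 6
  - name: https
    enabled: true
    port: https
    filter: https
    logpath: /var/log/auth.log
    maxretry: 6
```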

Last, let’s install, configure and deploy filebeat to ship log files to a useful destination. Link here – https://galaxy.ansible.com/jpnewman/elk-filebeat/

ansible@tw17ch01:/etc/ansible/roles$ sudo ansible-galaxy install jpnewman.elk-filebeat
- downloading role 'elk-filebeat', owned by jpnewman
- downloading role from https://github.com/jpnewman/ansible-role-elk-filebeat/archive/master.tar.gz
- extracting jpnewman.elk-filebeat to /etc/ansible/roles/jpnewman.elk-filebeat
- jpnewman.elk-filebeat was installed successfully

ansible@tw17ch01:/etc/ansible/roles$ sudo mv jpnewman.elk-filebeat/ logger/

Finally, let’s set up TLS shipment of logs across the interwebs for future prospecting inside an existing ELK stack. This assumes a bunch of things have already been done:

1. Fully operational ELK stack

2. TLS/PKI infrastructure in place for logstash and the certificate available for deployment via logger role

3. Port forwarding on network firewall for logstash port

4. Optional redis cluster to handle a large volume of log processing

ansible@tw17ch01:/etc/ansible/roles$ sudo nano logger/defaults/main.yml

In here, I have modified the Elasticsearch and Logstash hosts to point to an infrastructure destination. Remember the pretty standard directory structure we discussed earlier? Yeah, copy your logstash-forward.crt file into ../roles/logger/files/certs/ and the playbook intelligence will deliver. Here we are again, good to go. Let’s roll and take a look at a few things to make sure it all works.
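The edits amount to something like this — the key names below are my shorthand, not necessarily the role's (check logger/defaults/main.yml for the real ones), and the hostname is a placeholder:

```yaml
# Illustrative shorthand only -- real variable names live in
# logger/defaults/main.yml; logs.example.org is a placeholder.
filebeat_logstash_hosts:
  - "logs.example.org:5044"
filebeat_ssl_certificate_file: "certs/logstash-forward.crt"  # from roles/logger/files/certs/
```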

ansible@tw17ch01:/etc/ansible/roles$ ls
banner  chains  logger  sshd   ### roles all exist

$> vi /etc/ansible/hosts   # add new host to your ansible hosts file
$> ansible dropper -m ping
12.34.56.78 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

ansible@tw17ch01:/etc/ansible/roles$ sudo nano ../playbooks/NewDrop.yml
### roles are awesome. invest the time.

- hosts: all
  become: yes
  roles:
    # deploy standard SSH config
    - { role: sshd }
    # install iptables
    - { role: chains }
    # add filebeat
    - { role: logger }
    # deploy and configure fail2ban
    - { role: banner }

The manual portion of this deployment happens here. SSH over to the new droplet and create a sudo user. Yes, this can be automated and we’ll write about that another day, another way.

ssh root@NewDropIP
useradd ansible -m -s /bin/bash
passwd ansible
Enter new UNIX password:
Retype new UNIX password:
su - ansible
mkdir .ssh && chmod 700 .ssh
touch .ssh/authorized_keys && chmod 600 .ssh/authorized_keys
echo "ssh-rsa AAAAB3N <KEY REDACTED> IClTJ1E1 ansible@tw17ch01" >> .ssh/authorized_keys
exit
visudo   ## add ansible ALL=(ALL:ALL) ALL

ansible@tw17ch01:/etc/ansible$ sudo nano hosts  ### add [dropper] and NewDropIP
ansible@tw17ch01:/etc/ansible$
ansible@tw17ch01:/etc/ansible$ ansible-playbook playbooks/NewDrop.yml -l dropper -u ansible -K
SUDO password:

PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
The authenticity of host '12.34.56.78 (12.34.56.78)' can't be established.
ECDSA key fingerprint is 5f:<redacted>:1f.
Are you sure you want to continue connecting (yes/no)? yes
Enter passphrase for key '/home/ansible/.ssh/id_rsa':
ok: [12.34.56.78]
TASK [sshd : Set OS dependent variables] ***************************************
ok: [12.34.56.78] => (item=/etc/ansible/roles/sshd/vars/Ubuntu_14.yml)
TASK [sshd : OS is supported] **************************************************
ok: [12.34.56.78]
TASK [sshd : Installed] ********************************************************
ok: [12.34.56.78] => (item=[u'openssh-server', u'openssh-sftp-server'])
TASK [sshd : Run directory] ****************************************************
ok: [12.34.56.78]
TASK [sshd : Configuration] ****************************************************
changed: [12.34.56.78]
TASK [sshd : Service enabled and running] **************************************
ok: [12.34.56.78]
TASK [sshd : Register that this role has run] **********************************
ok: [12.34.56.78]
TASK [chains : Ensure iptables is installed (RedHat).] *************************
skipping: [12.34.56.78]
TASK [chains : Ensure iptables is installed (Debian).] *************************
ok: [12.34.56.78]
TASK [chains : Flush iptables the first time playbook runs.] *******************
changed: [12.34.56.78]
TASK [chains : Copy firewall script into place.] *******************************
changed: [12.34.56.78]
TASK [chains : Copy firewall init script into place.] **************************
changed: [12.34.56.78]
TASK [chains : Ensure the firewall is enabled and will start on boot.] *********
changed: [12.34.56.78]
TASK [logger : Create directory to store ssl crt] ******************************
changed: [12.34.56.78]
TASK [logger : Copy SSL cert] **************************************************
changed: [12.34.56.78]
TASK [logger : Install Filebeat dependencies] **********************************
ok: [12.34.56.78]
TASK [logger : Check if Filebeat is already at the right version] **************
changed: [12.34.56.78]
TASK [logger : Download Filebeat agent] ****************************************
changed: [12.34.56.78]
TASK [logger : Install Filebeat agent] *****************************************
changed: [12.34.56.78]
TASK [logger : Create directory for Filebeat Configures] ***********************
changed: [12.34.56.78]
TASK [logger : Create directory for Filebeat Configures] ***********************
changed: [12.34.56.78]
TASK [logger : Configure Filebeat] *********************************************
changed: [12.34.56.78]
TASK [logger : Configure Filebeat prospectors] *********************************
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks
so that the environment value uses the full variable syntax ('{{prospectors}}').
This feature will be removed in a future release. Deprecation warnings can be
disabled by setting deprecation_warnings=False in ansible.cfg.  ### Need to clean up this playbook

changed: [12.34.56.78] => (item={u'paths': [{u'log_paths': [u'/var/log/syslog', u'/var/log/auth.log'], u'document_type': u'syslog'}], u'type': u'syslog', u'id': u'syslog'})
changed: [12.34.56.78] => (item={u'paths': [{u'log_paths': [u'/var/log/*.log'], u'document_type': u'log', u'exclude_files': [u'^syslog$', u'^auth.log$', u'^filebeat.log.*$', u'^topbeat.log.*$']}], u'id': u'varlog'})
TASK [logger : Start Filebeat] *************************************************
changed: [12.34.56.78]
TASK [banner : install] ********************************************************
changed: [12.34.56.78] => (item=[u'fail2ban'])
TASK [banner : update configuration file - /etc/fail2ban/fail2ban.conf] ********
changed: [12.34.56.78]
TASK [banner : update configuration file - /etc/fail2ban/jail.local] ***********
changed: [12.34.56.78]
TASK [banner : copy filters] ***************************************************
skipping: [12.34.56.78]
TASK [banner : copy actions] ***************************************************
skipping: [12.34.56.78]
TASK [banner : copy jails] *****************************************************
skipping: [12.34.56.78]
TASK [banner : start and enable service] ***************************************
ok: [12.34.56.78]
RUNNING HANDLER [sshd : reload_sshd] *******************************************
changed: [12.34.56.78]
RUNNING HANDLER [chains : restart firewall] ************************************
changed: [12.34.56.78]
RUNNING HANDLER [logger : restart filebeat] ************************************
changed: [12.34.56.78]
RUNNING HANDLER [banner : restart fail2ban] ************************************
changed: [12.34.56.78]
PLAY RECAP *********************************************************************
12.34.56.78                 : ok=32   changed=22   unreachable=0        failed=0

In sweet corn bread muffins I’ll be dipped – heck yes! We just rolled out solutions to most everything I (or most admins) worry about. Let’s SSH over and take a look around to make sure nothing got bricked and everything looks good.

SSH:

ansible@tw17ch03:~$ cat /etc/ssh/sshd_config
# ansible managed: /etc/ansible/roles/sshd/templates/sshd_config.j2 modified on 2016-04-16 12:32:30 by root on tw17ch01
Port 22444
Protocol 2
HostKey /etc/ssh/ssh_host_rsa_key
AcceptEnv LANG LC_*
ChallengeResponseAuthentication no
HostbasedAuthentication no
IgnoreRhosts yes
KeyRegenerationInterval 3600
LogLevel INFO
LoginGraceTime 120
PasswordAuthentication no
[...............]
X11Forwarding yes

iptables:
ansible@tw17ch03:~$ sudo iptables -L
[sudo] password for ansible:
Chain INPUT (policy ACCEPT)

target         prot opt source                   destination            
ACCEPT         all  --  anywhere                 anywhere                
ACCEPT         tcp  --  anywhere                 anywhere                 tcp dpt:22444
ACCEPT         tcp  --  anywhere                 anywhere                 tcp dpt:https
ACCEPT         icmp --  anywhere                 anywhere                
ACCEPT         udp  --  anywhere                 anywhere                 udp spt:ntp
ACCEPT         all  --  anywhere                 anywhere                 state RELATED,ESTABLISHED
LOG            all  --  anywhere                 anywhere                 limit: avg 15/min burst 5 LOG level debug prefix "Dropped by firewall: "
DROP           all  --  anywhere                 anywhere                

Chain FORWARD (policy ACCEPT)
target         prot opt source                   destination            
Chain OUTPUT (policy ACCEPT)
target         prot opt source                   destination            
ACCEPT         all  --  anywhere                 anywhere                
ACCEPT         udp  --  anywhere                 anywhere                 udp dpt:ntp

FileBeat:
ansible@tw17ch03:~$ cat /etc/filebeat/filebeat.yml

################### Filebeat Configuration Example #########################
############################# Filebeat ######################################
filebeat:
# List of prospectors to fetch data.

Fail2Ban:
ansible@tw17ch03:~$ cat /etc/fail2ban/jail.local
# ansible managed: /etc/ansible/roles/banner/templates/etc/fail2ban/jail.local.j2 modified on 2016-05-30 05:57:15 by root on tw17ch01
[......]

[ssh]
enabled = true
port = 22444
filter = sshd
logpath = /var/log/auth.log
maxretry = 6
findtime = 600
[https]
enabled = true
port = https
filter = https
logpath = /var/log/auth.log
maxretry = 6
findtime = 600

That’s it, that’s all. The playbook looks good and the roles all deployed as expected. The config files are updated and SSH is still allowing us remote access. Oh yeah, and here is a Kibana visualization of source geo-IP mapping for the remote connections, built from the logs FileBeat is shipping over. H/T to an unnamed colleague, cheers!