Port knocking is a great way to get rid of bots trying to brute-force SSH. The only problem is that not all users have the tools (or the skills) to perform the knock.
On the other hand, most systems have (or should have) fail2ban installed. This tool shines for its configuration flexibility. So, why not implement port knocking in a way that can be done with a simple browser?
The idea behind the approach is to make an HTTP request to a known URL, and in response port 22 opens for the IP that made the request.
First of all, we configure an HTTP server (Nginx in this case) to simply log all requests to a given URL. We don’t even need a meaningful response (a 404 is perfectly fine).
access_log /var/log/nginx/example.com.log;
As you can see, Nginx listens on port 80 and logs all incoming requests. Each of them returns 404, but that’s OK, since each of them leaves a trace in the log file.
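The full server block is not shown above; a minimal configuration along these lines would do (the server_name and log path are placeholders to adapt):

```nginx
server {
    listen 80;
    server_name example.com;

    # Every request is logged; this log line is all the filter needs.
    access_log /var/log/nginx/example.com.log;

    # Nothing is actually served -- every request gets a 404.
    location / {
        return 404;
    }
}
```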
As with the classic port knocking approach, it’s Netfilter that decides who is going to be allowed in. Let’s create a chain:
iptables -N letmein-ssh
For that chain, the default action (when nothing matches) should be RETURN, so I append this rule:
iptables -A letmein-ssh -j RETURN
After that, we redirect all incoming packets destined for port 22 to this chain.
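The redirect rule itself could look like this (a sketch, assuming you manage the INPUT chain directly; inserting at position 1 keeps it above any other rules affecting port 22):

```shell
# Send every packet arriving on TCP port 22 through the letmein-ssh chain first
iptables -I INPUT 1 -p tcp --dport 22 -j letmein-ssh
```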
This rule should sit above any other rules that may affect port 22. The explanation is quite simple: every packet that comes in on port 22 enters the letmein-ssh chain; if nothing matches, the packet simply returns to the original chain, INPUT. It’s important to note that the default policy of the INPUT chain should be DROP, so such a packet is dropped after all.
Now it’s time to configure fail2ban to pick up the traces we are interested in. In this case, a trace will look like:
# Fail2Ban filter for letmein-ssh
# I don’t need any includes for this

[Definition]

failregex = ^<HOST> -.*"GET /ssh.*$
ignoreregex =
Now comes the magic. We reverse the ban action: when the filter matches the trace, the action allows connections to port 22 from that IP.
# Fail2Ban configuration file
# No need to include anything

[Definition]
# Option: actionstart
# Notes.: command executed once at the start of Fail2Ban.
# We set up a basic RETURN rule with this one
# Values: CMD
actionstart = iptables -A letmein-ssh -j RETURN
# Option: actionstop
# Notes.: command executed once at the end of Fail2Ban
# Values: CMD
actionstop = iptables -F letmein-ssh
# Option: actioncheck
# Notes.: command executed once before each actionban command
# We don’t need to check anything
# Values: CMD
actioncheck =
# Option: actionban
# Notes.: this command actually inserts the allow rule
actionban = iptables -I letmein-ssh -s <ip> -j ACCEPT
# Option: actionunban
# Notes.: command executed when unbanning an IP. Take care that the
# command is executed with Fail2Ban user rights.
# Once again, the magic is reversed here: this command actually revokes the IP's access to SSH.
# Values: CMD
actionunban = iptables -D letmein-ssh -s <ip> -j ACCEPT
Finally, I configure a jail for what I need:

[letmein-ssh]
enabled = true
filter = letmein-ssh
action = allow-iptables-letmein
logpath = /var/log/nginx/example.com.log
bantime = 120
maxretry = 1
findtime = 60
That's all. Now, before connecting to the server via SSH, the client needs to issue an HTTP request to http://example.com/ssh.
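A typical session could then look like this (the user and host are placeholders):

```shell
# Knock: the 404 response is expected and can be ignored
curl -s -o /dev/null http://example.com/ssh
# Port 22 is now open for our IP for the configured bantime
ssh user@example.com
```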
At some point in time, I found myself needing to change a KVM virtual machine's root password. When experimenting with a VM it's easy to mess things up (like firewall configuration) so that the machine stops responding. In such cases, the only way to recover control over the VM is to connect to it via VNC. But if you didn't set up a password for the user, which is a pretty common situation with machines accessible only by SSH, you can't do anything even with VNC, because you will not be able to log in at the console prompt.
Fortunately, you may set (or change) root's password if you have read/write access to the system's disk image. KVM normally uses the qcow2 format, so in this article I will focus on how to do it with this image file format.
First of all, you should stop the virtual machine.
$ virsh shutdown my-virtual-machine
Find out where the image file is located:
$ virsh dumpxml my-virtual-machine
The command will produce output similar to this one:
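The part we care about is the &lt;disk&gt; element; a typical excerpt looks like this (device names may differ on your machine):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/my-virtual-machine.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```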
Just remember the file path; in this case it is /var/lib/libvirt/images/my-virtual-machine.qcow2.
As a next step, you will need to mount this file somewhere; let's use the /tmp/my-virtual-machine folder. You will find a full explanation of the commands here. For now, just execute the following commands (you may need to install qemu-nbd, in which case run apt-get install qemu-utils):
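A minimal sequence could be the following (run as root; /dev/nbd0 and the partition number are assumptions to adjust to your layout):

```shell
# Load the NBD module so qcow2 images can be exposed as block devices
modprobe nbd max_part=8
# Export the image as /dev/nbd0
qemu-nbd --connect=/dev/nbd0 /var/lib/libvirt/images/my-virtual-machine.qcow2
# Mount the root partition (here assumed to be the first one)
mkdir -p /tmp/my-virtual-machine
mount /dev/nbd0p1 /tmp/my-virtual-machine
```

When you are done editing, undo it in reverse order: umount /tmp/my-virtual-machine, then qemu-nbd --disconnect /dev/nbd0.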
So, now you will find the contents of your VM's disk at /tmp/my-virtual-machine. As you may know, on Linux systems passwords are stored in the /etc/shadow file. More info here. The important thing about this file is that all passwords are hashed and salted. That's why we are going to need openssl to generate the hash.
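The hash can be generated with openssl's passwd subcommand; here the salt mysalt and the password MySecretPass are placeholders:

```shell
# Produce an MD5-crypt hash ($1$...) understood by /etc/shadow
openssl passwd -1 -salt mysalt MySecretPass
```

The -1 flag selects the MD5-based scheme; newer openssl builds also accept stronger schemes (e.g. -6 for SHA-512), if yours supports them.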
You may put whatever you want (ASCII characters only, please) as the salt parameter. This command produces a string which is a valid password hash for the /etc/shadow file. After this, open /tmp/my-virtual-machine/etc/shadow and replace the string between the first and second colons on the root line with it. For example:
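For illustration only, the root line would change roughly like this (the hash is a placeholder for the string openssl printed; the fields after it vary per system):

```
# before
root:*:17000:0:99999:7:::
# after
root:$1$mysalt$GENERATED-HASH-HERE:17000:0:99999:7:::
```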
Many times when working with Docker containers I feel the need to assign an IP address known beforehand to a container. This is a huge advantage if you want to control network access to and from a container with a tool like iptables. However, the current Docker version (1.11.1) does not allow this operation out of the box, but there is an official way of achieving it. Thanks to the docker network command, a user may create a fully customizable network and connect a container to it. You may find full information at the official Docker site, here.
I will start with a clean Docker installation on a test vagrant machine (Ubuntu Trusty). After installing Docker, as usual, you can see the docker0 network interface. This is the default bridge interface. I will follow the documentation and create an isolated network using the same subnet, addresses and names.
So the first step is to create a new network:
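With the /16 subnet used below, the command would be something like this (the network name my_custom_net is a placeholder):

```shell
# Create a user-defined bridge network with a fixed subnet
docker network create --subnet=172.25.0.0/16 my_custom_net
```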
This network will allow me to use the 172.25.0.1 - 172.25.255.254 address range. If I run ifconfig now, I see that a new interface has been created; in my case, Docker calls it br-98446a2a4f1f. Just to be sure, I reboot the machine to check whether this network persists across reboots, and it does.
Now I want to start an nginx container with the address 172.25.0.2; I can do it with the following command:
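A sketch, assuming the network name my_custom_net from the creation step:

```shell
docker run -d --name my_nginx_01 --net my_custom_net --ip 172.25.0.2 nginx
```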
If I get inside the container and run ip addr command I will see that the assigned IP address is, in fact, the requested one:
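One way to check, without opening an interactive shell, is:

```shell
docker exec my_nginx_01 ip addr show eth0
# The output should contain a line with: inet 172.25.0.2/16
```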
Now, I will start another container just to check the connectivity between them:
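For example, a second nginx container on the next address (same assumed network name):

```shell
docker run -d --name my_nginx_02 --net my_custom_net --ip 172.25.0.3 nginx
```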
So, if I get inside the second container, I can ping and telnet the first one:
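Assuming the image ships ping and telnet (stock images often do not, in which case install them first), the checks would be:

```shell
docker exec -it my_nginx_02 ping -c 3 172.25.0.2
docker exec -it my_nginx_02 telnet 172.25.0.2 80
```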
Time to check that linking between containers also works. I will start my containers this way:
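A sketch, reusing the names above and adding a link from the second container to the first:

```shell
docker run -d --name my_nginx_01 --net my_custom_net --ip 172.25.0.2 nginx
docker run -d --name my_nginx_02 --net my_custom_net --ip 172.25.0.3 \
    --link my_nginx_01:my_nginx_01 nginx
```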
Then, if I connect to the my_nginx_02 container, I can ping and telnet the my_nginx_01 host.
As you can see, my_nginx_01 resolves to the IP address assigned at startup. With this configuration, you can control your security perimeter using the FORWARD chain in your iptables configuration.
These days, email is less important as a means of communication than it was 10 years ago. With Facebook, WhatsApp, Telegram, Slack and the rest, good old email feels like it has lost ground. However, there is one field where this way of communication is still very important, if not the only one. I'm talking about servers here.
For example, when an SSH server detects a break-in attempt, it tries to send an email notification (or at least it can be configured to do so), and when some crontab job fails, cron also tries to send an email. But there is a problem: sometimes installing a full-fledged MTA is counter-productive. Think of a Raspberry Pi: installing Postfix there eats the precious (and limited) resources of the machine itself. And that's before counting the effort you need to invest in administering the email server.
For myself, I found a much simpler solution called msmtp. According to their website:
msmtp is an SMTP client.
In the default mode, it transmits a mail to an SMTP server (for example at a free mail provider) which takes care of further delivery.
To use this program with your mail user agent (MUA), create a configuration file with your mail account(s) and tell your MUA to call msmtp instead of /usr/sbin/sendmail.
In other words, you may connect to any SMTP server and send messages anywhere, as if a local sendmail were available. I will be using gmail.com for this.
First of all, you will need to install it. Surprisingly for me, the version in the Ubuntu repository is pretty old. You may build the latest version yourself, but that is not covered in this article.
Then I created a new email address at Gmail just to receive notifications from servers.
Configuration is pretty straightforward.
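A minimal /etc/msmtprc for Gmail could look like this (the account address and password are placeholders; with two-factor auth you would use an app password instead):

```
# /etc/msmtprc
defaults
auth           on
tls            on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile        /var/log/msmtp.log

account        gmail
host           smtp.gmail.com
port           587
from           my-notifications@gmail.com
user           my-notifications@gmail.com
password       my-secret-password

# Use the gmail account by default
account default : gmail

# Map local users to real addresses
aliases        /etc/aliases
```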
Please note the /etc/aliases file. We will need to use it later.
I'm not sure which user is going to execute the msmtp command, so the log file should be created beforehand with proper permissions.
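Assuming the log file is /var/log/msmtp.log (adjust to whatever your msmtp logfile option points to):

```shell
touch /var/log/msmtp.log
# World-writable is the lazy option; restrict it to the sending user if you know it
chmod 666 /var/log/msmtp.log
```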
Now you can test the whole setup with the following command:
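For example (the recipient is a placeholder):

```shell
printf 'Subject: msmtp test\n\nHello from msmtp.\n' | msmtp --debug email@example.com
```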
If everything is OK you should receive an email at email@example.com. If not, try to see the debug output and figure out what went wrong.
Fixing the mail command
For the mail command to work, you will need to put the following in /etc/mail.rc. Please note that in order to have the mail utility you will need to install the mailutils package which, unfortunately, on Ubuntu also pulls in the Postfix server.
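The line in question (assuming msmtp was installed to /usr/bin):

```
set sendmail="/usr/bin/msmtp"
```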
You can test that mail works with the following command:
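For example (again, the recipient is a placeholder):

```shell
echo "Test body" | mail -s "Test subject" email@example.com
```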
When a crontab job fails, cron sends an email message, and to perform the operation it uses the sendmail executable. If you remember, in the beginning I also installed the msmtp-mta package; doing so creates a link from /usr/sbin/sendmail to /usr/bin/msmtp, so crontab does not need any additional configuration.
The problem here is that crontab always sends the message to a local user. I mean, if I log in as the admin user and create a crontab job for this user, the email will be sent to admin at localhost. This can be fixed using the following approaches.
This file defines email aliases for local users. For example:
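A sketch of /etc/aliases:

```
root: firstname.lastname@example.org, email@example.com
default: firstname.lastname@example.org
```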
Given this configuration, all emails which go to root@localhost will also be sent to firstname.lastname@example.org and email@example.com. Also, all messages to unknown (non-existent) users will be sent to firstname.lastname@example.org.
Crontab offers a more elegant way of routing these emails. At the beginning of the crontab file (you can edit it with crontab -e), declare the MAILTO variable. Like this:
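For instance (the job path is a placeholder):

```
MAILTO=email@example.com
# Output and failures of every job below go to the address above
*/10 * * * * /usr/local/bin/healthcheck.sh
```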