Setting Up Let’s Encrypt with Lighttpd and Automatic Certificate Renewal

This is just a quick and dirty post to show you how to set up Let's Encrypt with Lighttpd and configure automatic certificate renewal on Ubuntu Server 16.04 LTS (but I'm pretty sure the commands below will work on all Debian-based systems).

**** INITIAL SETUP ****

First, let’s obtain the latest version of the Let’s Encrypt client from their github repo:

sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
cd /opt/letsencrypt

Now, let's request a certificate for the first time. Change the paths & domain name in the command below as necessary and follow the on-screen prompts:

./letsencrypt-auto certonly --webroot -w /var/www/freek.ws/ -d freek.ws -d www.freek.ws

This command will obtain a single certificate for freek.ws and www.freek.ws; it will place temporary ‘challenge’ files in /var/www/freek.ws to prove to Let’s Encrypt that you’re the owner of these two domains.
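
If you want to be sure Lighttpd actually serves files from that location before requesting the certificate, a quick sanity check could look like this (the test file name below is arbitrary):

mkdir -p /var/www/freek.ws/.well-known/acme-challenge
echo ok > /var/www/freek.ws/.well-known/acme-challenge/test
curl http://freek.ws/.well-known/acme-challenge/test
rm /var/www/freek.ws/.well-known/acme-challenge/test

The curl command should print "ok"; Let's Encrypt fetches its challenge files from that same /.well-known/acme-challenge/ path.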

Lighttpd expects the private key and certificate to be combined into a single file, so we need to concatenate them before we can configure it.
Remember to replace your domain in the path (note: this is in /etc/letsencrypt and not your webroot!).

cd /etc/letsencrypt/live/freek.ws
cat privkey.pem cert.pem > ssl.pem

Now, let's set up Lighttpd to work with our SSL certificate. Edit your vhost config or lighttpd.conf and add the following, changing the paths as necessary.

nano /etc/lighttpd/lighttpd.conf
$SERVER["socket"] == ":443" {
        ssl.engine = "enable"
        ssl.pemfile = "/etc/letsencrypt/live/freek.ws/ssl.pem"
        ssl.ca-file =  "/etc/letsencrypt/live/freek.ws/fullchain.pem"
        ssl.cipher-list = "ECDHE-RSA-AES256-SHA384:AES256-SHA256:HIGH:!MD5:!aNULL:!EDH:!AESGCM"
        ssl.honor-cipher-order = "enable"
        ssl.use-sslv2 = "disable"
        ssl.use-sslv3 = "disable"
}
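
Before reloading, it doesn't hurt to let lighttpd check the file for syntax errors first:

lighttpd -t -f /etc/lighttpd/lighttpd.conf

If everything is fine, it prints "Syntax OK".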

Reload lighttpd to activate the certificate using the following command:

/etc/init.d/lighttpd reload

Congratulations! Your website should now have a valid SSL certificate in place :) You can check this by visiting your website over HTTPS and looking for the green padlock.
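
If you prefer the command line, something like this shows the issuer and expiry dates of the certificate that is actually being served (replace the domain with your own):

echo | openssl s_client -connect freek.ws:443 -servername freek.ws 2>/dev/null | openssl x509 -noout -issuer -dates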

*** FORCE HTTPS (OPTIONAL) ***

This is optional, but if you'd like to force encryption, add this redirect to your vhost config or lighttpd.conf:

nano /etc/lighttpd/lighttpd.conf
$HTTP["scheme"] == "http" {
        $HTTP["host"] =~ "freek.ws" {
                url.redirect = ( "^/(.*)" => "https://www.freek.ws/$1" )
        }
}
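
You can verify the redirect with a quick curl check; the response should be a redirect with a Location header pointing at https://www.freek.ws/:

curl -I http://freek.ws/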

*** AUTOMATIC RENEWAL ***

For automatic renewal, schedule the following shell script using cron. Placing it in /etc/cron.weekly makes it run on a weekly basis.

touch /etc/cron.weekly/letsencrypt
chmod +x /etc/cron.weekly/letsencrypt
nano /etc/cron.weekly/letsencrypt
#!/bin/sh
# Automatically renew Let's Encrypt certs
# Edit webroot-path with your www folder location
/opt/letsencrypt/letsencrypt-auto renew --webroot --webroot-path /var/www/freek.ws/
# Rebuild the cert
# Edit folder location to your domainname
cd /etc/letsencrypt/live/freek.ws/
cat privkey.pem cert.pem > ssl.pem
# Reload lighttpd
/etc/init.d/lighttpd reload
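
Tip: newer versions of the Let's Encrypt client also support a dry run, which lets you test the renewal process against the staging environment without touching your real certificates:

/opt/letsencrypt/letsencrypt-auto renew --dry-run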

That’s all folks! Your webserver should now have a valid SSL certificate in place and it’ll automatically be renewed when it’s almost due to expire!
For more information, visit this excellent documentation from Let’s Encrypt: https://certbot.eff.org/docs/using.html#webroot
Comments or suggestions? Let me know in the comments!

Blocking DNS Amplification attacks using IPtables

If you are managing a Linux server, you've probably heard about DNS amplification attacks, which abuse misconfigured (open) DNS servers. DNS amplification is a DDoS technique that floods the target with large DNS replies. It works by spoofing queries with the source IP of the target victim and asking for a large DNS record, most commonly an ANY query for the root zone or for isc.org. The request itself is usually around 60-70 bytes, while the reply can be as large as 2-3 KB; that's why it's called amplification. It will not only make your network participate in the attack, it will also consume your bandwidth. More details can be found here.

Blocking these kinds of attacks can be tricky. However, there are some basic iptables rules that, used in combination with fail2ban, block most of it. As usual, your mileage may vary.
The commands below were tested and executed on Ubuntu Server 16.04 LTS 64-bit using fail2ban 0.10.

First, add these iptables rules to your /etc/network/interfaces.tail so they're automatically loaded each time your network interface(s) restart and/or your server (re)boots.

post-up iptables -A INPUT -p udp -m udp --dport 53 -m limit --limit 5/sec -j LOG --log-prefix "fw-dns " --log-level 7
post-up iptables -A INPUT -p udp --dport 53 -m string --from 50 --algo bm --hex-string '|0000FF0001|' -m recent --set --name dnsanyquery --mask 255.255.255.0
post-up iptables -A INPUT -p udp --dport 53 -m string --from 50 --algo bm --hex-string '|0000FF0001|' -m recent --name dnsanyquery --rcheck --seconds 1 --hitcount 1 -j DROP --mask 255.255.255.0

Note: --mask depends on your network configuration. Run ifconfig to find your netmask.

The first iptables rule is for use with fail2ban (it only logs). The second and third rules work in conjunction with each other:
The second iptables rule looks at incoming UDP packets on port 53, searches the first 50 bytes of the packet for the hex string "0000FF0001" (which is equivalent to an ANY query) and records the source under the name dnsanyquery (visible under /proc/net/xt_recent/dnsanyquery on current kernels, /proc/net/ipt_recent/dnsanyquery on older ones) with a timestamp.
The third iptables rule drops the packet if the same source IP and query type (in this case "ANY") occurred more than one time in the past second.
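
To verify the rules are doing their job, you can send an ANY query from another machine and then inspect the list of tracked sources on the server (the IP below is just an example):

dig @203.0.113.10 isc.org ANY
cat /proc/net/xt_recent/dnsanyquery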

Next, create the file /etc/fail2ban/filter.d/iptables-dns.conf with the following contents:

[Definition]
failregex = fw-dns.*SRC=<HOST> DST
            ^.* security: info: client #.*: query \(cache\) './(NS|A|AAAA|MX|CNAME)/IN' denied
ignoreregex =

And add the following to /etc/fail2ban/jail.local to ban the offending IP for 1 day. (No jail.local file? Shame on you! Don't edit the jail.conf file directly; it will be overwritten during updates. Instead, make and edit a copy like so: cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local)

[iptables-dns]
# see https://freek.ws for more info
enabled = true
ignoreip = 127.0.0.1
filter = iptables-dns
action = iptables-multiport[name=iptables-dns, port="53", protocol=udp]
logpath = /var/log/kern.log
bantime = 86400
findtime = 120
maxretry = 1

Finally, restart fail2ban and your network interface(s) or server to enable the rules. After doing so, run fail2ban-client status and check whether the 'iptables-dns' jail is listed. If fail2ban refuses to start, check your regex for typos using fail2ban-regex /var/log/kern.log /etc/fail2ban/filter.d/iptables-dns.conf
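
For reference, checking the jail and manually unbanning an address looks like this (example IP):

fail2ban-client status iptables-dns
fail2ban-client set iptables-dns unbanip 203.0.113.10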

That’s all folks! Comments or suggestions? Let me know in the comments!

Source: A Realistic Approach and Mitigation Techniques for Amplifying DDOS Attack on DNS in Proceedings of 10th Global Engineering, Science and Technology Conference 2-3 January, 2015, BIAM Foundation, Dhaka, Bangladesh, ISBN: 978-1-922069-69-6 by Muhammad Yeasir Arafat, Muhammad Morshed Alam and Feroz Ahmed.

Free Public Pi-hole

Pi-hole is a network-wide ad blocker that blocks ads of all sorts at the DNS level. It blocks advertisements on any device on your network and improves overall network performance. For more information, watch the short video below.

For those of you who don't own a Raspberry Pi, I've set up an internet-facing Pi-hole server. It's running on a Virtual Private Server (VPS) in a datacenter, so no worries about latency or bandwidth issues. The exact details are as follows:

Follow these steps to get started: https://discourse.pi-hole.net/t/how-do-i-configure-my-devices-to-use-pi-hole-as-their-dns-server/245

Comments or suggestions? Let me know in the comments :)

How to migrate between Synology NAS (DSM 6.0 and later)

Recently I bought a Synology DS216j to replace my Synology DS214se. The DS214se is a good entry-level NAS for personal use, but it was struggling to keep up with my 3 HD IP cameras as well as acting as a Mail Server, mainly because of its single-core CPU. Since I didn't want to lose my data, I had to perform a migration from the DS214se to the DS216j. A quick Google search led me to this Synology knowledge base article: https://www.synology.com/en-us/knowledgebase/DSM/tutorial/General/How_to_migrate_between_Synology_NAS_DSM_5_0_and_later

The title of the knowledge base article above says that it's intended for Synology NAS running DSM 5.0 and later. At the time of writing, DSM 6.1 is the latest available DSM version, so I had a suspicion that the knowledge base article might be out of date. Because my NAS models were not identical to each other, I had to follow section 2.2 of the article linked above, Migrating between different Synology NAS models. After doing so, I can confirm that my suspicion was right: the knowledge base article is out of date, and the migration process between two Synology NAS has actually become easier!

Here's a small write-up of what has changed in migrating between Synology NAS from DSM 5.0 to DSM 6.0:


Section 2.2, Migrating between different Synology NAS models, starts with a word of caution, telling you that all packages on the target Synology NAS (i.e. your new NAS) will have to be reinstalled, which results in losing the following data (…) Mail Server and Mail Station settings & Surveillance Station settings. This was applicable to my Synology NAS, as I had these packages installed and actively used. However, after performing the migration to my new NAS as described in Section 2.2 (which basically comes down to: update your old NAS to the latest DSM, switch it off, swap the drives to the new NAS and turn it on), my new Synology said the packages had to be repaired instead of reinstalled. After clicking the repair button, all my packages came back to life on the new NAS without any data loss; all my settings and files, including those from Mail Server, Mail Station and Surveillance Station (emails as well as recordings), were still there! Needless to say, it's still good practice to back up your data before performing the migration, as described in section 1 of the knowledge base article linked above.

What did change, however, was the IP address of my NAS. I assumed that my new NAS would be using the same IP as my old NAS, as Synology instructs you to turn off your old NAS before powering up your new NAS, but that was not the case. So after the migration, use the Synology finder to find the new IP of your NAS and change it back to your old IP, which can be done in Control Panel → Network.

Lastly, I also had to re-register my DDNS hostname by logging back into my Synology account, which can be done in Control Panel → External Access.

That’s all folks!

PS. Should you have bought any additional Surveillance Station license keys in the past, don't forget to write them down and to deactivate them on your old NAS before the migration, since the license keys can only be active on one Synology product at a time. Also, as an FYI, each license key can only be migrated once.

PicoTorrent; a BitTorrent client without the bloat.

Remember the good old days when BitTorrent clients were exactly that: BitTorrent clients? Nowadays BitTorrent clients are packed with loads of unnecessary features. Take uTorrent, for example; it started as a lightweight BitTorrent client, but nowadays it's bloated with features such as streaming, as well as advertisements. Roughly the same happened to qBittorrent, so I switched to Baretorrent. Sadly, development of Baretorrent stopped in 2013 and it's getting outdated in terms of encryption protocols, hence I started looking for an alternative and behold: PicoTorrent, a truly lightweight BitTorrent client. It's basically an updated version of Baretorrent: no unnecessary features, no advertisements, IPv6 support, and it's open source!

Get it here: http://www.picotorrent.org/

TeamSpeak Server 3.0.13 not starting? Install Visual C++ Redistributable for Visual Studio 2015!

Recently I encountered issues starting my TeamSpeak server after updating it from version 3.0.11.4 to 3.0.13.6; it would immediately crash with a blank error log.
Apparently TeamSpeak Server 3.0.13 onwards requires the 32-bit Visual C++ Redistributable for Visual Studio 2015 to be installed. Yes, the 32-bit variant, even if your OS is 64-bit. So, should you encounter immediate server crashes after updating your TeamSpeak server to version 3.0.13, try downloading and installing the 32-bit variant of the Visual C++ Redistributable for Visual Studio 2015 from here: https://www.microsoft.com/en-us/download/details.aspx?id=48145

PS. In case you missed the release notes of TeamSpeak Server 3.0.12, as of that version, the server binaries file names do NOT contain platform suffixes any more. They’re all called “ts3server” now, so don’t forget to delete the old/obsolete binary including the platform suffix from your TeamSpeak Server installation folder (else it will crash as well…)

How to schedule a PHP script in Plesk for Windows using cronjob/crontab

Nowadays it’s dead easy to schedule a PHP script as a cronjob/crontab in Plesk Onyx for Windows. However, in the previous versions, Plesk did not supply a sample syntax for scheduled tasks. Most examples found on the interwebs assume that you’re running Plesk on Linux, but if you are like me and run Plesk on Windows, that syntax is just plain wrong.

This small ‘note to self post’ shows how to correctly schedule a PHP script in Plesk for Windows for those of you who are still running an older version of Plesk :)

Step 1. Open Plesk and search for Scheduled Tasks

Step 2. Create a new cronjob/crontab (see the example command after these steps). Adjust the parameters to your liking. In this example, I've scheduled the .php script to run every 5 minutes, every day of the week.

Step 3. You're done! If desired, you can also run the task on demand by clicking Run Now.
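
For reference, the command of such a scheduled task on Plesk for Windows boils down to the full path of the PHP executable plus the full path of your script. The paths below are just an example based on a default Plesk installation; adjust the PHP version and vhost path to your own setup:

"C:\Program Files (x86)\Plesk\Additional\PleskPHP56\php.exe" -f "C:\Inetpub\vhosts\example.com\httpdocs\myscript.php"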

HostsMan: Pi-hole without the pi (DNS-based adblocker)

If you're tired of AdBlock Plus slowing down your browser and you don't have a spare Raspberry Pi lying around to run Pi-hole, HostsMan is a great alternative that runs on Windows. One way to keep malware and advertisements out is to block the servers that serve this content. This can be done by adding the hostnames of these servers to the hosts file and pointing them at 'localhost'. Updating the hosts file by hand is time-consuming and error-prone, and this is where HostsMan comes into play. This free program can retrieve up-to-date lists of websites known to serve advertisements and malware and combine them with your existing hosts file. Furthermore, it checks the hosts file for incorrect, duplicate or malicious entries. It also features a built-in editor and can be used to flush the DNS cache.
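
In case you've never looked inside it, blocking via the hosts file (located at C:\Windows\System32\drivers\etc\hosts on Windows) boils down to entries like these; the hostnames below are just examples:

# point known ad/malware hosts at the local machine so they never resolve to the real server
127.0.0.1    ads.example.com
0.0.0.0      tracker.example.net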

Download link: http://www.abelhadigital.com/hostsman

Unlike AdBlock Plus, however, HostsMan doesn’t make it obvious which hosts file sources you subscribe to. Enabling all of them sounds like a good idea, but doing so hoses some functionality such as social sharing bookmarklets. For information on which sources to use in HostsMan, visit: https://jdrch.wordpress.com/2014/11/03/which-sources-to-use-in-hostsman/

Also, for a comparison of AdBlock Plus and HostsMan, visit: https://jdrch.wordpress.com/2014/11/05/the-best-way-to-block-ads-adblock-plus-vs-a-custom-hosts-file-hostsman/

Microsoft Azure Tips & Tricks

Recently I’ve started to play around with Azure, Microsoft’s cloud. In this blogpost, I’ll be sharing some of my initial findings, which might prevent you from making the same rookie mistakes as I did ;)

First of all, compared to other cloud providers such as Vultr and DigitalOcean, Azure is quite expensive. However, should you have an MSDN account, you might be eligible for free Azure credit of up to $150 per month! For more information, visit https://azure.microsoft.com/en-us/pricing/member-offers/msdn-benefits-details

The Azure control panel might be overwhelming at first, but it’s pretty intuitive once you start using it. Basically the workflow is from left to right, i.e. if you click on an option, a new screen opens to the right.

Image credit: http://pumpingco.de/your-own-game-server-on-azure-create-a-dedicated-counter-strike-server/

So without further ado, here’s my list of top tips & tricks for Azure rookies:

  • VMs are automatically allocated a dynamic WAN (internet-facing) IP address in order to communicate with the internet. This means the IP address changes when you stop and start the VM, for example when using auto-start/auto-stop. You can specify a DNS domain name label for a public IP resource, which creates a mapping for domainnamelabel.location.cloudapp.azure.com to the public IP address in the Azure-managed DNS servers. For instance, if you create a public IP resource with contoso as a domainnamelabel in the West US Azure location, the fully-qualified domain name (FQDN) contoso.westus.cloudapp.azure.com will resolve to the public IP address of the resource. This is particularly helpful when using a dynamic IP address.
    Should you not want this, you can assign a static IP to the VM (a CLI sketch of this is included after this list). However, you cannot specify the actual IP address assigned to the public IP resource. Instead, it gets allocated from a pool of available IP addresses in the Azure location the resource is created in. For more information, visit: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-ip-addresses-overview-arm

    It's possible to delete the VM and preserve the static IP address, for example to assign it to another/new VM. In order to do so, do not delete the entire resource group that is associated with the IP address. Instead, shut down the VM, then delete the NIC associated with the static IP you want to preserve, and then delete all other components in the resource group except for the IP itself. Please do note that IP addresses cannot change resource groups, so in order to re-use the IP address, you need to create the new VM (or load balancer) in the same resource group as the IP address.
  • VMs in a DevTest lab are deployed without a Network Security Group (essentially Azure's firewall); they only come with the default Windows Firewall running on the machine itself. Should you want an NSG/firewall within Azure as well, you can deploy one yourself in the same resource group and assign it to the VNet of the VM you created in the DevTest lab. Please do note that this means opening ports twice: once within the VM itself in the Windows Firewall, and once in the NSG/firewall in the Azure portal.


    Image credit: http://pumpingco.de/your-own-game-server-on-azure-create-a-dedicated-counter-strike-server/

    This is also the case with normal VMs (i.e. VMs that are not created in a DevTest lab) as they come with a NSG/firewall by default.
  • Azure offers two types of storage disks: SSD (Premium storage) and conventional HDD (Standard storage). SSD disks come in 3 sizes: 128 GB, 256 GB and 512 GB. HDD disks have a flexible, user-chosen size of up to 1 TB. You need to create a software RAID if you want a larger volume/disk.
    Premium Storage supports DS-series, DSv2-series, GS-series, and Fs-series VMs. You can use both Standard and Premium storage disks with Premium Storage-supported VMs, but you cannot use Premium Storage disks with VM series that are not Premium Storage compatible (for example the A-series or Av2-series).
    If you're using Standard storage, you'll only be charged for the data you're actually storing. Let's say you created a 30 GB VHD but store only 1 GB of data in it; you will only be charged for 1 GB, since empty pages are not charged.
    HOWEVER, if you're using SSD (Premium storage) you pay for the FULL SSD disk regardless of how much data you store, and empty pages are charged… so it becomes expensive pretty quickly. For more information, visit https://azure.microsoft.com/nl-nl/blog/azure-premium-storage-now-generally-available-2/ and https://docs.microsoft.com/en-us/azure/storage/storage-premium-storage
  • Azure Storage provides the capability to take snapshots of your VMs. In order to take snapshots of your VM, you need a Recovery Services vault. A Recovery Services vault is an entity that stores all the backups and recovery points you create over time. The Recovery Services vault also contains the backup policy applied to the protected files and folders.
    Previously, when you created a backup vault you had the option to select locally redundant storage (LRS) or geo-redundant storage (GRS). This has now changed and you do not have the option during the vault creation.

    Now, by default, your vault will be created in a GRS, which costs (way) more than LRS (almost double!). If you want to switch to LRS, you must do so after the vault has been created but PRIOR to registering any items to the vault. You cannot switch the storage type of your Vault after you registered any items to it! To do this, simply access the Configure tab of your backup vault and change the replication from the default of GRS to LRS. Don’t forget to click Save…

    For more information, visit: https://docs.microsoft.com/en-us/azure/backup/backup-configure-vault

  • When you create a VM in Azure, you are automatically provided with temporary storage. This temporary storage is "D:" on a Windows VM and "/dev/sdb1" on a Linux VM, and it is used to store the system paging file. It must not be used for data you are not willing to lose, as there is no way to recover anything from the temporary drive. The temporary storage lives on the physical machine that is hosting your VM. Your VM can move to a different host at any point in time for various reasons (hardware failure etc.). When this happens, your VM is recreated on the new host using the OS disk from your storage account; any data saved on the previous temporary drive is not migrated, and you are assigned a fresh temporary drive on the new host. Because the temporary drive is present on the physical machine hosting your VM, it can offer higher IOPS and lower latency than persistent storage such as a data disk. The temporary storage comes at no extra cost, for storage space as well as for transactions. For more information, visit: https://blogs.msdn.microsoft.com/mast/2013/12/06/understanding-the-temporary-drive-on-windows-azure-virtual-machines/
    If your application needs to use the D: drive to store data, you can use a different drive letter for the temporary disk. However, you cannot delete the temporary disk (it'll always be there); you can only assign it a different drive letter. For instructions on how to do so, visit https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-classic-change-drive-letter
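
If you prefer scripting over clicking through the portal, the public IP/DNS label setup from the first bullet can also be done with the Azure CLI. This is just a sketch; the resource group, names and location below are made-up examples:

az network public-ip create --resource-group MyResourceGroup --name MyPublicIP --location westus --allocation-method Static --dns-name contoso

This creates a static public IP whose FQDN (contoso.westus.cloudapp.azure.com) keeps resolving correctly, which you can then attach to a NIC or load balancer in the same resource group.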

That’s it for now!

How to: MikroTik Hairpin NAT with dynamic WAN IP for dummies

About half a year ago, I bought a Mikrotik RouterBoard RB962UiGS-5HacT2HnT hAP AC (Phew! What a mouthful!). It's a great router, or should I say routerboard, which has more features than I could ever wish for… maybe even too many!

Nevertheless, out of the box, I was unable to visit my local NAS/Webserver using the WAN IP from my local network. After a quick Google DuckDuckGo search, I discovered that this requires a so-called ‘Hairpin NAT’. Basically, it reroutes all traffic sent to your WAN IP from your local network, back to (a specific IP address in) your local network. Graphically, it looks like this:

[Diagram: traffic from the LAN to the WAN IP being routed back into the LAN]
See the arrow which, with a little bit of imagination, looks like a hairpin? Hence the name…anyway, let’s continue!

However, the issue with most Hairpin NAT configurations you find online is that they require a static WAN IP, which I don't have. Additionally, most tutorials use the terminal, whereas I prefer the graphical interface of Winbox. Therefore, I figured out how to set up a Hairpin NAT in combination with a dynamic WAN IP using Winbox myself, and since 'sharing is caring', here's how to do it:

  1. Connect to your Mikrotik using Winbox
  2. Select IP –> Firewall from the menu
    [Screenshot: IP → Firewall in Winbox]
  3. Make sure that the default ‘defconf: masquerade’ rule is on top, which looks as follows:
    [Screenshot: the default 'defconf: masquerade' NAT rule at the top of the list]
  4. Add a new rule and name it 'Hairpin NAT'. It looks as follows (replace 192.168.88.0/24 with your own local network IP range):
    [Screenshot: the 'Hairpin NAT' rule]
  5. Add another rule. This rule will contain the IP and port you are trying to reroute. For example, let's say I want to connect to my local NAS running on IP 192.168.88.50 and port 1337 using my WAN IP; my rule would look like this:
    [Screenshot: the dst-nat rule for 192.168.88.50, port 1337]
    Don’t forget the exclamation mark in front of the Destination Address!
    Protip: You can add more ports in the same rule. Just split ranges with dashes like this: 1330-1337 and multiple ports with commas like so: 80,443,1330-1337
  6. Done! You should now be able to access your NAS/Webserver using your WAN IP from your local network. Feel free to add more rules to your liking, but remember, the order is important: first the 'defconf: masquerade' rule, then the 'Hairpin NAT' rule and then all other rules :) A rough terminal-equivalent of these rules is included after the last screenshot.
    [Screenshot: the resulting NAT rule order]
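
For reference, the rules from steps 4 and 5 roughly translate to the following terminal commands. This is a sketch of the idea rather than an exact export of my configuration; the addresses and ports are the example values used above:

/ip firewall nat
add chain=srcnat action=masquerade src-address=192.168.88.0/24 dst-address=192.168.88.0/24 comment="Hairpin NAT"
add chain=dstnat action=dst-nat dst-address=!192.168.88.0/24 dst-address-type=local protocol=tcp dst-port=1337 to-addresses=192.168.88.50 to-ports=1337 comment="NAS port 1337"

The dst-address-type=local part is what keeps this working with a dynamic WAN IP: it matches whatever address the router currently holds, so you never have to hard-code the WAN IP in the rule.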