This is just a short message to let you all know that I am aware of the (stability) issues with the free public Pi-hole service I am offering.
Some of these issues were caused by outages at my hosting provider.
However, the current issue is caused by changes I made to Pi-hole core files a couple of months ago to support SSL before it was officially implemented by the Pi-hole development team. Because many core files have since been adjusted by the official Pi-hole development team, a complete re-installation of Pi-hole is the easiest way forward. I am planning on doing so this Monday, the 29th of January. Additionally, I’ll try to set up a secondary node/resolver as well for redundancy.
My apologies for any inconvenience caused and thanks in advance for your understanding.
On the Windows Servers I use for development, I like to keep things simple. That means security should be in place, but at the same time should be workable and flexible enough for me to install and download things, without getting nagged by obnoxious over-active security mechanisms. In order to do so, I execute the following steps on every Windows development server I install.
Install RDP Defender
If your Windows server is publicly reachable from the internet, then there is a 100% chance that hackers, network scanners and brute-force robots are trying to guess your Administrator login and password as we speak.
Using password dictionaries, they will automatically try to log in to your server hundreds to thousands of times every minute. Not only is this bad for your server’s security, it also wastes a lot of resources, such as CPU and bandwidth.
RDP Defender blocks these attacks by monitoring failed login attempts and automatically blacklisting the offending IP addresses after several failures. You can of course configure it to suit your needs, but it pretty much takes care of itself. It takes just 30 seconds to download and install: https://www.terminalserviceplus.com/rdp-defender.php
Increase RDP Security
Start → Run → gpedit.msc. Go to Computer Configuration → Administrative Templates → Windows Components → Remote Desktop Services → Remote Desktop Session Host → Security.
Set client connection encryption level – Set this to High Level so your Remote Desktop sessions are secured with 128-bit encryption.
Require secure RPC communication – Set this to Enabled.
Require use of specific security layer for remote (RDP) connections – Set this to SSL (TLS 1.0).
Require user authentication for remote connections by using Network Level Authentication – Set this to Enabled.
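If you want to script this hardening instead of clicking through gpedit on every server, the four policy settings above map (to the best of my knowledge — verify the resulting values in gpedit on a test machine first) onto registry values under the Terminal Services policy key. A sketch as a .reg fragment:

```reg
Windows Registry Editor Version 5.00

; Hypothetical registry equivalents of the four GPO settings above --
; double-check them on a test machine before rolling out.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services]
; Client connection encryption level: 3 = High Level (128-bit)
"MinEncryptionLevel"=dword:00000003
; Require secure RPC communication
"fEncryptRPCTraffic"=dword:00000001
; Security layer for RDP connections: 2 = SSL (TLS 1.0)
"SecurityLayer"=dword:00000002
; Require Network Level Authentication
"UserAuthentication"=dword:00000001
```

Importing this with regedit (or reg import) should have the same effect as the gpedit steps above, but treat it as a starting point rather than gospel.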
I’m not a fan of having stuff enabled that I don’t use or need, so even though Remote Management probably isn’t a security risk, I’m going to disable it anyway.
Go to Server Manager → Local Server → Remote Management and click ‘Enabled’. In the window that opens, untick ‘Enable remote management of this server from other computers’ and hit Apply.
Do not start Server Manager automatically at logon
Go to Server Manager → Manage → Server Manager Properties and check ‘Do not start Server Manager automatically at logon’.
Disable Password Expiration
Like I said, this is a development server. There’s no need for me to have top notch security, as I’ll probably spin up a new machine in a couple of months again and delete this one.
Start → Run → gpedit.msc. Go to Computer Configuration → Windows Settings → Security Settings → Password Policy.
Change ‘Maximum password age’ to 0. Hit apply and ‘Password will not expire’ should now be shown.
Schedule automatic update restarts
Windows Server 2012 and 2016 use ‘active hours’ to determine whether or not it’s safe to reboot the machine for updates. Moreover, the ‘active hours’ time frame cannot be greater than 12 consecutive hours. To be honest, I don’t know who came up with this brilliant idea, since a server is usually designed to be on 24/7. Therefore, I prefer to choose when Windows reboots for updates by scheduling a specific time, instead of playing Russian roulette over whether the thing is going to reboot while I’m running jobs/tests.
Start → Run → gpedit.msc. Go to Computer Configuration → Policies → Administrative Templates → Windows Components → Windows Update.
Open ‘Configure Automatic Updates’, tick ‘Enabled’, choose option 4 (‘Auto download and schedule the install’) and tick ‘Install during automatic maintenance’.
Note: when ticking ‘Install during automatic maintenance’, the schedule you define in gpedit (i.e. ‘Every day’ and the scheduled install time of 03:00 as in the screenshot above) has no effect! The automatic maintenance option overrides this schedule. Automatic maintenance is performed daily, but you are free to change at which time it takes place via Control Panel → System and Security → Security and Maintenance → Automatic Maintenance.
Disable Internet Explorer Enhanced Security Configuration
On a development server, downloading new tools and utilities is common practice. Instead of whitelisting every single domain (and that’s a lot of domains nowadays), I simply turn off Internet Explorer Enhanced Security Configuration. Yes, I know this is a potential security risk, especially on production servers, but like I said, this is a development server. Besides, use your common sense when pointing and clicking at stuff on the interwebz and you should come a long way.
Go to Server Manager → Local Server → IE Enhanced Security Configuration and tick ‘Off’.
Windows Server 2012, and especially Windows Server 2016, are quite intrusive when it comes to privacy. I don’t like the automatic sharing of ‘diagnostic and usage data’ (whatever that may be), so I switch off these options as far as possible (hoping they actually do something instead of being bogus buttons/placeholders).
Go to Server Manager → Local Server → Feedback & Diagnostics and click ‘Settings’. In the window that opens, choose ‘Never’ and ‘Basic’:
Do the same for Windows Defender, by switching off ‘Cloud Protection’ and ‘Automatic Sample Submission’:
Show extensions & hidden files, folders and drives
It’s always handy to know whether you’re opening invoice.pdf.exe or an actual invoice.pdf, isn’t it?
Open a random folder and go to File → Change folder and search options.
Tick ‘Show hidden files, folders and drives’ and untick ‘Hide extensions for known file types’. Hit Apply and OK.
Change Power Plan to High Performance
I hate waiting for my disks to spin up, and since this is a server, I always choose the High Performance power plan to get maximum performance.
Go to Control Panel → Hardware → Power Options and tick ‘High performance’.
Last but not least, install 7-Zip & Notepad++
These two tools belong in every developer’s toolkit, so install them while you’re at it!
That’s all for now. Comments or questions? Let me know down below. Cheers!
As many of my friends know, and perhaps so do you from glancing at my sidebar, I’m into cryptocurrencies. Back in 2013 I started mining Litecoin. Nowadays I’m mining Ethereum and I’m interested in ICOs. This post marks the start of a new series of blog posts where I dive into different cryptocurrency topics. Today, I’ll start off with an interesting ICO named ‘Nucleus Vision’.
The Nucleus Vision cryptocurrency was originally founded in 2014 at one of the most prestigious educational establishments in the world, Harvard University. It describes itself as an end-to-end technology solution that captures and provides data which was previously unavailable to retailers and other businesses. You could argue that at some points there is a gap between the worlds of offline and online retail, and the creators of Nucleus Vision want to bridge it by creating a platform that doesn’t depend on the likes of RFID, WiFi, Bluetooth, or facial recognition. This leap forward in sensor technology could change the way data is collected and analysed, not just in the retail space, but in other areas too.
The creators of Nucleus Vision have outlined exactly where they plan on taking this technology in the future. This shows a very clear understanding of what they feel they are capable of, and gives them time to make necessary improvements to the infrastructure if the need arises.
As this is the first phase of their launch, we will look at what is going into this sector. Nucleus has built the world’s first IoT (Internet of Things) based contactless identification system using blockchain technology. With this technology, a retail outlet will be able to tailor a customer’s shopping experience into something more personal. The platform uses blockchain-backed sensors that can track a customer’s visit to a particular store and collect data on everything from which aisles they ventured down to the path they took through the store itself. With all this valuable data, a retailer can then create a more immersive shopping experience and predict customer behaviour, which results in a more positive experience for any customer that enters the store.
Through the use of the renowned blockchain technology, Nucleus Vision hopes to enhance, in real time, the experiences of the 2.6 trillion customers that visit the 91 million physical stores. The worry some people might have when they visit these stores is that their privacy is being violated and that the corporations are the sole beneficiaries. However, the light at the end of the tunnel is that Nucleus aims to shift the power of data monetization to the customers themselves, giving them the opportunity to take full control of their data, as well as the power to monetize it. The nCash token is going to be the currency for the data exchange between customer and company and will act as remuneration, rewarding customers for willingly sharing their personal data. Such an open and free exchange of personal data is the pillar on which a decentralised system like blockchain exists.
In order for Nucleus Vision to fully immerse itself into a retail store and be used to 100% of its capacity, there are a couple of things that need to be built into the system for it to be fully operational. These include:
ION Sensor: Proprietary Sensor Technology. This piece of technology can be used to uniquely identify and sense changes in temperature, motion, pressure, acceleration and sound within the confines of the sensor. This allows the retailers to capture an avenue of new data that was previously unobtainable, and can be used to produce a greater shopping experience for a customer.
Orbit Blockchain: The platform known as Orbit is the foundation on which everything that is known about a customer entering a store is shared securely between customer and retailer. This includes the nCash token.
Neuron Layer: This intelligence platform utilises deep learning, blockchain and IoT so that the state-of-the-art analytics engine can connect customers and retailers at the right time.
nCash: Token-Based Payments. This is the decentralised cryptocurrency, which will be used for a number of transactions between a retailer and a customer who decides to opt in to sharing their personal details with a store.
The potential for Nucleus Vision to spread across different sectors is only limited by their imagination, and from the sounds of it, there’s no stopping them. With promises to expand into security and provide a solution to an already lacking market, Nucleus Vision will provide a way for information to be utilised, again by blockchain technology, in real time. I don’t think I’ve come across a company that has thought this far into the future with their proprietary technology, so this is certainly one to watch during their ICO.
Additional links:
Website: https://nucleus.vision/
Announcement: https://bitcointalk.org/index.php?topic=2461585.0
In this guide, I’m going to show you how to secure your Traccar installation with SSL, so that it can be reached over https instead of http. Traccar is a free and open source modern GPS tracking system. Since Traccar has no native support for encrypted connections, we’ll do so by setting up a reverse proxy using IIS (which is the method recommended by the developer). We’ll be using Let’s Encrypt to generate a free valid certificate for your Traccar installation.
A working Traccar instance, reachable over http (by default http://localhost:8082), installed on Windows Server 2012 R2 or Windows Server 2016.
A Fully Qualified Domain Name (FQDN), for example ‘yourdomain.com’, with an A record pointing to the IP of your Traccar server:
(Of course, in the screenshot above, change the variables to match your environment, i.e. replace ‘22.214.171.124’ with the IP of your Traccar server and ‘traccar.yourdomain.com’ with your own (sub)domain. Please note that it can take up to 24 hours, but usually no more than 1-2 hours, for your DNS update to ‘propagate’, i.e. sync with the rest of the world.)
First, install the URL Rewrite add-on module. From Windows Server 2012 R2 and up, you can use the Microsoft Web Platform Installer (WebPI) to download and install the URL Rewrite Module. Just search for ‘URL Rewrite’ in the search options and click ‘Add’.
After installing, do the same for the Application Request Routing 3.0 add-on module:
Next, open IIS and add a new website:
In the window that opens, fill in the following details:
Change the variables to meet your environment.
Close IIS for now and download and install ‘Certify the web’, a free (up to 5 websites) SSL Certificate Manager for Windows (powered by Let’s Encrypt). Certify will automatically renew your certificates before they expire, so it pretty much takes care of itself.
After installing, open Certify. Before we can request a new certificate, we first need to set up a new contact; this is mandatory. So, first, go to ‘Settings’ and set a ‘New Contact’:
Next, click on ‘New Certificate’:
Select the website you created in IIS, in my case named ‘Traccar’:
The rest of the information should now autofill, based on the details you entered in IIS.
Next, go to the Advanced tab and click ‘Test’ to verify that everything is set up correctly.
If all goes well, you should get this popup:
Click OK and click ‘Save’.
Next, click ‘Request Certificate’ to request your free valid SSL certificate from Let’s Encrypt for your Traccar installation:
If all goes well, you should get ‘Success’
Next, close Certify and open IIS again. Go to the website you created (in my example Traccar) and click on URL Rewrite
Click on ‘Add Rule(s)’ in the top right corner:
In the window that opens, click on ‘Reverse Proxy’ and click ‘Ok’
In the window that opens, enter ‘localhost:8082’ in the Inbound Rules text field and tick ‘Enable SSL Offloading’. Then tick ‘Rewrite the domain names of the links in HTTP responses’, select ‘localhost:8082’ as the source, choose your Traccar domain (i.e. ‘traccar.yourdomain.com’) from the dropdown menu, and click OK.
Next, go to your website in IIS again and click on Compression:
Outbound rewriting can only be applied on un-compressed responses. If the response is already compressed then URL Rewrite Module will report an error if any of the outbound rules is evaluated against that response. Therefore, we need to disable Compression in order to get Traccar to play nicely with IIS. Uncheck both options and click Apply:
That’s it! We’re done! Your Traccar installation should now be reachable over HTTPS and have a valid SSL certificate:
If the website is not opening (times out), check if port 443 inbound is open in your firewall:
Since your website is now reachable over https, you can change the Challenge Type to tls-sni-01 in Certify:
This way, you can remove the port 80 binding in IIS if you want, to force all traffic to your Traccar installation over https:
Have fun! Any questions or comments, let me know down below.
Recently I was asked last-minute (as always) to come up with a solution to have a photo slideshow loop all day on a TV during an event. The supplied TV supported playback of various video files, as well as images, from a USB device, but while it could loop video files, it could not play photo slideshows on repeat. Sure, I could’ve hooked up a laptop to the TV and have the slideshow loop on the PC, but since this was at a fair, I didn’t want to risk my laptop being stolen. Therefore, I came up with the idea to save the slideshow as a video file.

This turned out to be easier said than done. I knew that Microsoft PowerPoint has the option to export presentations as video files, but the output file used a codec the TV didn’t support. Since this was the only method I could come up with that met my requirements (i.e. no external equipment), I decided to convert the video to a different format, hoping the TV would play the file. To do so, I had to find a free video converter. Usually I use a free online video converter, so I don’t need to download any software. But since this slideshow contained 100+ high resolution photos, it would take too long to upload the video file and figure out by trial and error which format/codec would run on the TV. Therefore I went on a quest to find a decent free video converter without any restrictions in terms of time or size limits. After many fake and ad-infected downloads, I finally found Freemake Video Converter, which is available for free at http://www.freemake.com/free_video_converter/
According to its own homepage, Freemake Video Converter converts between 500+ video formats, without any trials or limitations. It has been around since July 2010 and currently has over 93 million users worldwide! Surely 93 million users can’t be wrong, right? (ha-ha)
Freemake Video Converter does indeed live up to its promise. It’s free and has no limitations in terms of formats or time restrictions. Using Freemake Video Converter, I was able to convert the exported PowerPoint to the correct video format for the TV to recognize. Apparently it’s either MP4 or the built-in ‘Samsung’ preset; both do the job. However, Freemake Video Converter also has the option to create its own photo slideshow and allows you to directly convert it to a video format of your choice. It even lets you add an audio track! Sadly the other photo slideshow options are quite limited: there is just one transition effect, ‘the panorama effect’ (also known as pan and zoom), and you can change the interval between photos if you like.
However, to unlock its full potential, for example to remove the Freemake logo from the video, you need to pay a small yearly or one-time fee, depending on which feature you want to unlock; each feature has to be unlocked using its own appropriate ‘pack’. For example, there is one ‘pack’ to enable conversion for internet videos such as YouTube, also known as YouTube ripper/downloader, and another ‘pack’ to add subtitles to your video file. If you like, you can also unlock all packs at once by purchasing the ‘Mega pack’, which contains all five packs for one price.
All in all, Freemake Video Converter is a great free tool. It’s fast, powerful and easy to use. Additionally, it’s quite feature rich, although some features are locked behind a pay-wall. Should you ever need to convert a video last-minute and you don’t know what the right format is, I recommend taking a look at Freemake Video Converter.
I pretty much tried all of the recommendations to increase the FPS in PlayerUnknown’s BattleGrounds (PUBG) that are out there, and only kept the ones that actually helped:
– In-game settings: AA and View Distance on Ultra, everything else Very Low. Depending on the amount of Video RAM (VRAM) you have available, you can bump the Textures level to Medium or High.
– Go to C:\Program Files (x86)\Steam\steamapps\common\PUBG\TslGame\Binaries\Win64\, right-click TslGame.exe -> Properties -> Compatibility, and tick “Disable fullscreen optimizations” and “Override high DPI scaling behaviour” (performed by: Application).
– Add these to the end of the Engine.ini file, located in C:\Users\USERNAME\AppData\Local\TslGame\Saved\Config\WindowsNoEditor:
This is just a quick and dirty post to show you how to set up Let’s Encrypt with Lighttpd and configure automatic certificate renewal on Ubuntu Server 16.04 LTS (but I’m pretty sure the commands below will work on all Debian-based systems).
**** INITIAL SETUP ****
First, let’s obtain the latest version of the Let’s Encrypt client from their github repo:
sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
Now, let’s request a certificate for the first time. Change the paths & domainname in the command below as necessary. Follow the on-screen prompts:
This command will obtain a single certificate for freek.ws and www.freek.ws; it will place temporary ‘challenge’ files in /var/www/freek.ws to prove to Let’s Encrypt that you’re the owner of these two domains.
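Judging from the renewal command further down, the initial webroot request most likely looked something like this (a sketch — adjust the paths and domain names to your own setup and follow the on-screen prompts):

```shell
# Sketch of the initial certonly request via the webroot plugin
# (domains and webroot path are the examples used throughout this post)
sudo /opt/letsencrypt/letsencrypt-auto certonly --webroot \
  --webroot-path /var/www/freek.ws -d freek.ws -d www.freek.ws
```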
Lighttpd expects certs to be combined, so we need to concatenate them before we can configure it. Remember to replace your domain in the path (note: these files live in /etc/letsencrypt, not your webroot!).
cat privkey.pem cert.pem > ssl.pem
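If you’d like to rehearse the combine step before touching your real certificates, you can do a dry run with a throwaway self-signed pair (assuming openssl is installed; the filenames mirror the ones Let’s Encrypt produces):

```shell
# Generate a throwaway key + self-signed cert (same filenames Let's Encrypt uses)
openssl req -x509 -newkey rsa:2048 -keyout privkey.pem -out cert.pem \
  -days 1 -nodes -subj "/CN=example.test" 2>/dev/null

# Combine them the way lighttpd expects: private key first, then certificate
cat privkey.pem cert.pem > ssl.pem

# The combined file should contain both PEM blocks (prints 2)
grep -c -- "-----BEGIN" ssl.pem
```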
Now, let’s setup Lighttpd to work with our SSL certificate. Edit your vhost config or lighttpd.conf and add the following, changing the paths as necessary.
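The exact lines depend on your setup, but a minimal SSL block for lighttpd 1.4.x typically looks something like this (paths assume the freek.ws example used above — a sketch, not a drop-in config):

```
$SERVER["socket"] == ":443" {
    ssl.engine  = "enable"
    # The combined key + certificate we created above
    ssl.pemfile = "/etc/letsencrypt/live/freek.ws/ssl.pem"
    # Intermediate chain certificate from Let's Encrypt
    ssl.ca-file = "/etc/letsencrypt/live/freek.ws/chain.pem"
}
```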
# Automatically Renew Letsencrypt Certs
# Edit webroot-path with your www folder location
/opt/letsencrypt/letsencrypt-auto renew --webroot --webroot-path /var/www/freek.ws/
# Rebuild the cert
# Edit folder location to your domainname
cd /etc/letsencrypt/live/freek.ws/
cat privkey.pem cert.pem > ssl.pem
# Reload lighttpd
sudo systemctl reload lighttpd
That’s all folks! Your webserver should now have a valid SSL certificate in place and it’ll automatically be renewed when it’s almost due to expire! For more information, visit this excellent documentation from Let’s Encrypt: https://certbot.eff.org/docs/using.html#webroot Comments or suggestions? Let me know in the comments!
If you are managing a Linux server, you’ve probably heard about DNS amplification attacks, which make use of misconfigured DNS servers. DNS amplification is a DDoS technique in which an attacker uses open DNS resolvers to flood the target with large replies. This is accomplished by spoofing the query with the source IP of the target victim and asking for a large DNS record, such as an ANY reply for the ROOT record or for isc.org, which is most commonly seen. The request itself is usually around 60-70 bytes, while the reply can be as much as 2-3 KB; that’s why it’s called amplification. A misconfigured server will not only make your network participate in the attack, it will also consume your bandwidth.
Blocking these kinds of attacks can be tricky. However, there are some basic iptables rules that, in combination with fail2ban, block most of it. As usual, your mileage may vary. The commands below were tested and executed on Ubuntu Server 16.04 LTS 64-bit using fail2ban 0.10.
First, add these iptables rules to your /etc/network/interfaces.tail so they’re automatically loaded each time your network interface(s) restart and/or your server (re)boots.
Note: –mask depends on your network configuration. Type ifconfig to find your mask.
The first iptables rule is for use with fail2ban. The second and third rules are used in conjunction with each other: the second rule looks at incoming UDP packets on port 53, searches the first 50 bytes of the packet for the hex string “0000FF0001” (which is equivalent to an ANY query), and records the source IP under dnsanyquery (in /proc/net/ipt_recent/dnsanyquery) with a timestamp. The third rule drops the packet if the source IP and query type (in this case ANY) match and occurred more than once in the past second.
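Based on the description above, the three rules would look roughly like this (a reconstruction, not a verbatim copy of my setup — in particular the LOG prefix is my own placeholder, and any --mask/--rsource details must match your network configuration):

```shell
# 1) Log ANY queries so fail2ban can pick them up from kern.log
#    ("IPTABLES-DNS: " is an example prefix -- use your own, but keep it
#    consistent with the fail2ban filter)
iptables -A INPUT -p udp --dport 53 -m string --to 50 --algo bm \
  --hex-string "|0000FF0001|" -j LOG --log-prefix "IPTABLES-DNS: "

# 2) Record the source IP of each ANY query under the name 'dnsanyquery'
iptables -A INPUT -p udp --dport 53 -m string --to 50 --algo bm \
  --hex-string "|0000FF0001|" -m recent --set --name dnsanyquery

# 3) Drop the packet if the same source sent more than one ANY query
#    in the past second
iptables -A INPUT -p udp --dport 53 -m string --to 50 --algo bm \
  --hex-string "|0000FF0001|" -m recent --rcheck --seconds 1 \
  --hitcount 2 --name dnsanyquery -j DROP
```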
Next, create the file /etc/fail2ban/filter.d/iptables-dns.conf with the following contents:
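The filter needs a failregex that matches the kernel log lines produced by the logging rule. Assuming a log prefix of “IPTABLES-DNS: ” (an assumption — it must match whatever prefix your first iptables rule logs with), a minimal sketch would be:

```ini
# /etc/fail2ban/filter.d/iptables-dns.conf (sketch)
[Definition]
# Match kernel log entries produced by the iptables LOG rule;
# "IPTABLES-DNS:" is a placeholder prefix -- use your own.
failregex = IPTABLES-DNS:.* SRC=<HOST>
ignoreregex =
```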
And add the following to /etc/fail2ban/jail.local to ban the IP for 1 day. (No jail.local file? Shame on you! Don’t edit the jail.conf file directly; it will be overwritten during updates. Instead, make and edit a copy like so: cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local)
[iptables-dns]
# see https://freek.ws for more info
enabled = true
ignoreip = 127.0.0.1
filter = iptables-dns
action = iptables-multiport[name=iptables-dns, port="53", protocol=udp]
logpath = /var/log/kern.log
bantime = 86400
findtime = 120
maxretry = 1
Finally, restart fail2ban and your network interface(s) or server to enable the rules. After doing so, check with fail2ban-client status whether the ‘iptables-dns’ jail is listed. If fail2ban refuses to start, check your regex for typos using fail2ban-regex /var/log/kern.log /etc/fail2ban/filter.d/iptables-dns.conf
That’s all folks! Comments or suggestions? Let me know in the comments!
Source: A Realistic Approach and Mitigation Techniques for Amplifying DDOS Attack on DNS in Proceedings of 10th Global Engineering, Science and Technology Conference 2-3 January, 2015, BIAM Foundation, Dhaka, Bangladesh, ISBN: 978-1-922069-69-6 by Muhammad Yeasir Arafat, Muhammad Morshed Alam and Feroz Ahmed.
Pi-hole is a network-wide ad blocker that blocks ads of all sorts at the DNS level. It blocks advertisements on any device and improves overall network performance. For more information, watch the short video below.
For those of you who don’t own a Raspberry Pi, I’ve setup an internet facing Pi-hole server. It’s running from a Virtual Private Server (VPS) in a datacenter, so no worries about latency or bandwidth issues. The exact details are as follows:
The title of the knowledge base article above says that it’s intended for Synology NAS running DSM 5.0 and later. At the time of writing, DSM 6.1 is the latest available DSM version, so I had a suspicion that the knowledge base article might be out of date. Because my NAS models were not identical to each other, I had to follow section 2.2 of the article linked above, ‘Migrating between different Synology NAS models’. After doing so, I can confirm that my suspicion was right: the knowledge base article is out of date, and the migration process between two Synology NAS has gotten easier!
Here’s a small writeup about what has changed when migrating between Synology NAS from DSM 5.0 to DSM 6.0:
Section 2.2, ‘Migrating between different Synology NAS models’, starts with a word of caution, telling you that all packages on the target Synology NAS (i.e. your new NAS) will have to be reinstalled, which results in losing the following data (…) Mail Server and Mail Station settings & Surveillance Station settings. This was applicable to my Synology NAS, as I had these packages installed and actively used. However, after performing the migration to my new NAS as described in section 2.2 (which basically comes down to: update your old NAS to the latest DSM, switch it off, swap the drives to the new NAS and turn it on), my new Synology said the packages had to be repaired instead of reinstalled. After clicking the repair button, all my packages came back to life on the new NAS, without any data loss; all my settings and files, including those from Mail Server, Mail Station and Surveillance Station (emails as well as recordings), were still there! Needless to say, it’s still good practice to back up your data before performing the migration, as described in section 1 of the knowledge base article linked above.
However, what did change was the IP address of my NAS. I assumed that my new NAS would be using the same IP as my old NAS, since Synology instructs you to turn off your old NAS before powering up your new one, but that was not the case. So after the migration, use the Synology finder to find the new IP of your NAS and change it back to your old IP, which can be done in Control Panel → Network.
Lastly, I had to re-register my DDNS hostname by logging into my Synology account again, which can be done in Control Panel → External Access.
That’s all folks!
PS. Should you have bought any additional Surveillance Station license keys in the past, don’t forget to write them down and deactivate them on your old NAS before the migration, since license keys can only be active on one Synology product at a time. Also, as an FYI, each license key can only be migrated once.