Utilising OpenVPN and a Firewall to create an Intranet with private services

This post will explain how I’ve effectively created an intranet using my Digital Ocean Droplet, OpenVPN, and UFW. I’ve assumed that you’re technically capable and already have a good understanding of routing, firewalls, sockets, services, etc.

The first thing you’ll of course need is a server – be it a VM, Raspberry Pi, or VPS. If you do go the VPS route I recommend checking out Digital Ocean.

OpenVPN

Once you’ve got the server you’ll need to set up and configure OpenVPN. I won’t explain how to do that here as it’s a relatively long process, but the best guide is probably here on the Arch Wiki; once you understand it, it’s easy to make configuration changes in the future. In the OpenVPN settings you’ll need to enable ‘client-to-client’ communication, specify ‘tun’ as the device type (as we want a routed IP tunnel), and at various places in the config specify the IP range you want to use. I used 10.8.0.0/16 as that is a private range and not externally routable (any gateway/border router should drop the packets). I also found that TCP was a better choice than UDP for the VPN: my phone still reconnects occasionally when packets arrive out of order, but when browsing the web, YouTube and GIFs are far more reliable. You can switch OpenVPN to TCP by changing the proto udp directive to proto tcp.
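Pulled together, the relevant server directives look something like this (a minimal sketch; the file location and surrounding options depend on your distribution):

dev tun                        # routed IP tunnel
proto tcp                      # TCP rather than UDP
server 10.8.0.0 255.255.0.0    # hand clients addresses from 10.8.0.0/16
client-to-client               # let VPN clients talk to each other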

Remember you’ll also need to setup IPv4 forwarding using NAT (explained in the guide linked above). Of all the steps, this is probably the most confusing even though once you understand what’s going on, it’s easy. Be patient.
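As a rough sketch of what the guide walks you through (eth0 as the public interface is an assumption – substitute your own):

sysctl -w net.ipv4.ip_forward=1    # enable IPv4 forwarding (persist it in /etc/sysctl.conf)
iptables -t nat -A POSTROUTING -s 10.8.0.0/16 -o eth0 -j MASQUERADE    # NAT the VPN range out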

UFW

Once you’ve got OpenVPN working and set up the way you like, the next thing to do is begin to tighten things down. Install Uncomplicated Firewall (UFW) and lock down everything except SSH and OpenVPN, or you’ll lock yourself out (you’re probably using ports 22 and 1194, both TCP). You can do this by setting UFW’s default policy to ‘deny’ – this means you have to explicitly open ports. The best quick-reference guide I’ve found is here on the Ubuntu Community Page for UFW; for more advanced tweaking, take a look at Arch’s Wiki page for UFW.
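That boils down to something like this (using the ports assumed above):

ufw default deny incoming
ufw allow 22/tcp      # SSH
ufw allow 1194/tcp    # OpenVPN over TCP
ufw enable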

All we need to do for now is have UFW online and running in the background, ready to restrict access to services later on…

Creating the ‘Intranet’ and configuring services

Now comes the part where we lock down applications so that they are only accessible from our new intranet. The intranet consists of OpenVPN clients, which conveniently will only ever be on the 10.8.0.0/16 range. This is the key to creating an intranet: we can configure services to allow authenticated OpenVPN clients but ignore public users visiting our web server. I’ve done this in two layers for added protection. I use the firewall to stop randomers reaching the ports of my private services by restricting access to the 10.8.0.0/16 range. The other layer is achieved by configuring the service itself to listen only on the 10.8.0.0/16 range. The reason for this dual-layer protection is that if I misconfigure either the firewall or the service, the firewall protects the service, or the service protects itself.

For example, let’s imagine that I am running an NTP server on its regular port 123/udp, but only want the OpenVPN clients to be able to use it. Remembering that the OpenVPN server should be 10.8.0.1, I configure UFW:

ufw allow from 10.8.0.0/16 to 10.8.0.1 port 123 proto udp

Then I add the other layer of protection in NTPd’s config file –

# VPN interfaces
interface ignore wildcard
interface listen 10.8.0.1
interface listen 127.0.0.1

Different services let you restrict which ranges they will talk to in different ways, but there is almost always a way of doing it. For example, sshd_config has this:

ListenAddress 10.8.0.1

Job Done – add more services!

Now you’ve got the framework in place for an intranet, you can add additional services. For example, I run a DNS server so that I can view which websites my phone and tablet are contacting in the background when I’m not using them. There were a few tracker sites, so I decided to use the DNS server to route sites like doubleclick.net to 127.0.0.1 and black-hole the traffic. It is important to note that if you want to use your own DNS server, you must change the DNS push in OpenVPN to look similar to: push "dhcp-option DNS 10.8.0.1".
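Assuming dnsmasq as the DNS server (any resolver with similar overrides will do), the black-holing looks something like this:

# /etc/dnsmasq.conf
address=/doubleclick.net/127.0.0.1    # answer 127.0.0.1 for the tracker domain
listen-address=10.8.0.1,127.0.0.1     # only serve the VPN and localhost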

The methods of locking the service down are the same as above – create a firewall rule, and instruct the service to only accept connections from a specific IP range. I run a Deluge web manager on a specific port and restrict access to that. You could run server-monitoring web pages like Zabbix, email access services, whatever you want.
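For the Deluge web UI, for instance (8112 is its usual default port; adjust if yours differs):

ufw allow from 10.8.0.0/16 to 10.8.0.1 port 8112 proto tcp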

My experience of Dalvik vs. ART on Android

Back in July I decided to swap my phone and tablet’s runtime from Dalvik to the (apparently) newer and faster ART runtime. The difference between the two is that Dalvik performs Just-In-Time compilation (JIT) of the applications you are running, whilst ART performs compilation during installation, saving the need for it later.

I had read that ART would use more space (storing precompiled executables) and that the performance boost from using them would be noticeable and save battery. Instead, applications were choppier. I noticed this even more strongly when I reverted back to Dalvik.

I appreciate that ART is still in beta and a preview for developers, but I’m still disappointed it doesn’t work even remotely like people would hope. Yet.

How to get an A+ Rating on SSLLabs

To gain an A+ rating over at SSL Labs, your website’s SSL must be configured with the following principles:

  • A large key size: 4096 bits
  • HTTP Strict Transport Security
  • A VirtualHost configuration for the website that meets minimum requirements (see bottom).

You don’t need a trusted certificate to get an A+. The SSLlabs tool will grade you as T due to trust hierarchy issues, but underneath it does say “If trust issues are ignored: A+”, as you can see below.

[SSL Labs result: “If trust issues are ignored: A+”]
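If you want a self-signed certificate at that key size, something like this will do (filenames are placeholders):

openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout yourwebsite.key -out yourwebsite.crt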
The Apache directives below must be included in your VirtualHost’s configuration. Credit for this template goes to the Arch Linux AUR GitLab maintainer. Please note that SSLCipherSuite’s long configuration string must all be on one line.
SSLEngine on
SSLProtocol all -SSLv2
SSLHonorCipherOrder on
SSLCipherSuite "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS"
Header add Strict-Transport-Security "max-age=15768000;includeSubdomains"
SSLCompression Off
SSLCertificateFile /path/to/yourwebsite.crt
SSLCertificateKeyFile /path/to/yourwebsite.key
ServerName your.servername.com
ServerSignature Off

A quick way to disable IPv6 in Ubuntu, which can be responsible for slow website connections

By default Ubuntu has IPv6 enabled on network interfaces. This means that when you try to visit a website, an IPv6 (AAAA) DNS query is made alongside the IPv4 one. If your ISP doesn’t support IPv6 – which is likely – your client hangs waiting for the IPv6 response even though the IPv4 one has come back immediately. Eventually the IPv6 request times out, and if you reload the page it loads instantly, because the cached IPv4 address is used. The solution is to disable IPv6.

The Quad-A IPv6 DNS query is sent out immediately after the regular IPv4 A query.

Run the following in a terminal to fix this.
echo '##IPV6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1' | sudo tee -a /etc/sysctl.conf

This adds a few lines to /etc/sysctl.conf that disable IPv6 at boot time.

Reboot, or run sudo sysctl -p to apply the change immediately.
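To verify, this should print 1 once the change is live:

cat /proc/sys/net/ipv6/conf/all/disable_ipv6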

Checking Ink Levels on Epson Printers in Arch and Ubuntu

Relevant to 2013 onwards, where udev is in use and you want to check ink levels on older printers.

The standard Gutenprint libraries and drivers used by CUPS can talk to Epson printers well enough that general printing works: test pages, changes of print quality, etc. However, there is no decent means of reporting current ink levels.

There are two programs that allow you to do this – ‘ink’ and ‘escputil’.

These utilities were built when HAL was the dominant hardware-path-allocation tool. Now udev is used. However, udev paths can seem convoluted, confusing, and unnecessarily long. Some guides say that in order to interface with older devices you need to go down /sys/devices/usb/0:0:0:1/1:0:0/-onwards-/

So when using commands like ‘ink’, you don’t know where to point it…

You need to point them at /dev/usb/lp0, lp1, lp2, etc. Why people don’t know to look here, I’ve no idea, but there it is.

The exact commands needed for ink and escputil can be found on their websites – I think ink’s is:

sudo ink -d /dev/usb/lp0
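From memory, escputil’s equivalent is along these lines (check its man page):

sudo escputil -i -r /dev/usb/lp0    # -i reports ink levels, -r names the raw device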

Application Menu error in Ubuntu's LuckyBackup package causes various problems

The luckybackup program is a good rsync GUI for conducting backups if you are not comfortable or familiar with using rsync in the terminal. However, I noticed that its Ubuntu package sets up a shortcut that runs it with the wrong permissions.

When installed using…
sudo apt-get install luckybackup

The package comes with a shortcut at Applications > System > luckyBackup (Super User). This means that when you run luckybackup this way, all of the application’s configuration folders under ~/.luckybackup end up owned by root, forcing you to always use super-user (sudo) privileges when running the application. I also found that if you have network shares mounted using gvfs (Samba), the little shortcuts to these network-mounted locations do not appear in your file manager’s sidebar. Even if you go the long-winded way and visit their true path under /run/user/1000/gvfs, the gvfs symbolic link is greyed out and unavailable.

The reason for this is that the network locations were mounted under your regular user (UID 1000), but you are now running as a different user, which has no idea what to do with a gvfs folder bound to your regular user. This is especially irritating if you are trying to use luckybackup across Windows shares.

To get around this, run the luckybackup command from the terminal or Run bar (Alt-F2) so that it operates under your normal user and can therefore see and use your network mounts. These network mounts will be visible in the sidebar just as other ‘currently mounted’ media usually are – convenient. When you click these shortcuts in luckyBackup, the full path is resolved and placed wherever you wanted it in luckyBackup (the Source/Destination folder, probably).

Or, just alter the luckybackup shortcut in your Applications Menu and remove any commands that cause privilege escalation (sudo, su-to-root, etc.), as sketched below.
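The launcher lives under /usr/share/applications; the file name and original Exec line below are assumptions, but the fix is simply to drop the privilege-escalation wrapper:

# /usr/share/applications/luckybackup-su.desktop  (file name is an assumption)
# Before (example): Exec=su-to-root -X -c luckybackup
# After:
Exec=luckybackup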

It’s not really a bug in luckyBackup itself, just an error by the Ubuntu package maintainer, but it should be fixed:

Bug Report Filed – Click Here