Categories
Linux Networking Software

Running WireGuard in a Docker container (amd64)

This is the 2nd post about WireGuard.

So I am running two WireGuard servers — one on a Raspberry Pi 4, and one in an amd64 virtual machine. This post will be about getting WireGuard working on amd64 in a Docker container.

As this container rarely gets rebuilt, I am running unattended-upgrades inside the container to make sure security updates are applied.

I am also running Bind9 as a caching DNS server inside the container. Ideally this would run in its own dedicated container, but that makes everything more complicated and isn't worth it for what I am trying to do.


The public repo that acts as a proof of concept can be found here.

start.sh — this file builds and starts (or restarts) the container. It will also create the files as needed, set the forwarding DNS server, etc.
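To give an idea, a minimal start.sh could boil down to something like the following sketch (the image name, port and volume path are my assumptions, not necessarily what the repo uses):

#!/bin/bash
# Sketch: rebuild the image and (re)create the container
docker build -t wireguard .
docker rm -f wireguard 2>/dev/null
docker run -d --name wireguard \
  --cap-add NET_ADMIN \
  -p 51820:51820/udp \
  -v /srv/wireguard:/etc/wireguard \
  --restart unless-stopped \
  wireguard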

Dockerfile — the example starts a basic container based on debian-slim, sets up the port forwarding, installs the tools we need, and copies over the configs.
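A stripped-down sketch of such a Dockerfile, assuming Debian buster and leaving the WireGuard install to run.sh as explained below:

FROM debian:buster-slim
# Sketch: bind9 for caching DNS, unattended-upgrades for security updates,
# iproute2/iptables for the network plumbing. WireGuard itself is installed
# later from run.sh (see below).
RUN apt-get update && apt-get install -y --no-install-recommends \
        bind9 unattended-upgrades iproute2 iptables procps \
    && rm -rf /var/lib/apt/lists/*
COPY named.conf.options /etc/bind/named.conf.options
COPY run.sh /run.sh
EXPOSE 51820/udp
CMD ["/bin/bash", "/run.sh"]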

run.sh — this file is executed after the container has been built. We need to install WireGuard from this file, or it will fail because the volume isn't mounted yet and the right params aren't available.
It also starts the named (bind9) server.
I manually run ip address add dev wg0 10.200.200.1/24 because using Address in wg0.conf caused issues. I haven't recently tested whether that's still the case.
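As a rough sketch of that flow (the interface name and subnet are from this post; the rest is an assumption, and wg setconf only works here because wg0.conf contains no Address line):

#!/bin/bash
# Sketch of run.sh: runs inside the container on start
apt-get update && apt-get install -y wireguard   # installed here, not in the Dockerfile
ip link add dev wg0 type wireguard
ip address add dev wg0 10.200.200.1/24           # set the address manually
wg setconf wg0 /etc/wireguard/wg0.conf
ip link set up dev wg0
named                                            # start bind9
sleep infinity                                   # keep the container running

For what it's worth, Address is a wg-quick extension rather than part of the config format that wg setconf understands, which would explain the issues mentioned above.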

named.conf.options — pretty standard bind9 config file; I want to be in control of my forwarding server because I am using NextDNS and want to apply a different config.
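For reference, a minimal named.conf.options along those lines could look like this; the forwarder IPs are placeholders, NextDNS gives you your own endpoints:

options {
    directory "/var/cache/bind";
    recursion yes;
    forward only;
    forwarders {
        45.90.28.0;    // placeholder, use the addresses from your NextDNS setup
        45.90.30.0;
    };
    dnssec-validation auto;
    listen-on { any; };
};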

And of course your wg0.conf.
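A bare-bones server-side wg0.conf might look like this (keys and peer made up, and note the absence of an Address line, per the remark above):

[Interface]
PrivateKey = <server-private-key>
ListenPort = 51820

[Peer]
# laptop
PublicKey = <laptop-public-key>
AllowedIPs = 10.200.200.2/32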

Running docker exec wireguard wg should give details about your connected hosts.

Categories
Linux Networking Software

WireGuard

This is the first post of several. Next posts will focus on running WireGuard inside a Docker container on amd64 Linux and a Raspberry Pi.

I’ve been running WireGuard for a few months now and I’ve been loving it.

I first started using it about a year ago when in China — OpenVPN was once again being actively blocked and it was driving me nuts. Overnight I set up a DigitalOcean server in Singapore and ran WireGuard on it — both my phone and laptop were able to bypass the GFW and (at that time) surf the internet freely once more. As WireGuard gains popularity, I am sure the GFW will start detecting it — it's a quiet but not a stealthy protocol.

Since then I’ve dug quite a bit deeper into WireGuard and am really looking forward to what it’s going to bring.

WireGuard differentiates itself by being an extremely simple VPN (which can make getting started and debugging a bit more challenging) — instead, it aims to work seamlessly with existing tools. One of the main features still missing, for example, is having the server hand out IPs dynamically (like OpenVPN does); every peer's address is assigned statically.
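In practice that means addresses are configured by hand on both ends. Roughly, and with made-up values:

# client side (wg-quick)
[Interface]
Address = 10.200.200.2/24
PrivateKey = <client-private-key>

# server side
[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.200.200.2/32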

WireGuard network
Simplified diagram of my network. Using static routing, my clients can access the WireGuard network even without running WireGuard directly. (Some of) my containers are also able to access the network, which allows me to run Resilio Sync over WireGuard. It's one big subnet forming one big LAN.
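The static routing part is just a route on the LAN router (or on individual clients) pointing the WireGuard subnet at the machine running WireGuard; addresses here are illustrative:

ip route add 10.200.200.0/24 via 192.168.1.10   # 192.168.1.10 = the RPi running WireGuard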

It’s also pretty cool because any node can both be a server and a client at the same time. In my setup I am running two servers: one running at home in Singapore on a RPi4 (1Gbit fiber connection) and one on a virtual machine in Amsterdam (1Gbit as well). The RPis at my parents are connected to the server in Amsterdam, my iPad and phones are connected to the server in Singapore. If I am in Europe I might switch over and let my iDevices connect to the AMS server instead.

WireGuard and traffic shaping
Bandwidth stats from Resilio Sync, transferring several big files. We can clearly see a speed increase (from 2-5mb/s to 11mb/s) when routing the exact same traffic over WireGuard. Traffic shaping at its best.

The example above clearly shows speed gains by cloaking the traffic in UDP packets. The shared folder has only two nodes (sender and receiver) and shows several big files being transferred from Amsterdam to Singapore. Resilio Sync uses the Bittorrent protocol, something ISPs generally hate and tend to slow down as much as they can — thanks Starhub.

WireGuard also lets the client decide what to route through the server: only the VPN LAN traffic, a whole subnet, or 0.0.0.0/0. For my iPhone, for example, I route all traffic through the VPN to keep hotel/airport/… WiFi from mining/logging/scanning my data. For my laptop I have two configs: one that only connects to the LAN, and another that routes all my traffic through the VPN when I want to avoid exposure or circumvent censorship.
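The difference between those two laptop configs is basically a single line, AllowedIPs, in the [Peer] section (values illustrative):

# LAN-only config
AllowedIPs = 10.200.200.0/24

# route-everything config
AllowedIPs = 0.0.0.0/0, ::/0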

Note that I am not running WireGuard to remain anonymous and I’ll definitely leak some information — just trying to minimise and remain in control of what I leak. This is not a Tor replacement.

Categories
Linux Networking

Smokeping

Back in the day — when I was 16 or so — Smokeping was all the rage. Every colo provider in Amsterdam, every NOC, they all had their own Smokeping.

Playing around with Docker I saw some Smokeping image and that made me want to set it up again.

I’m running Smokeping on my server in Amsterdam (Leaseweb colo): smokeping.rootspirit.com.

At home, in Singapore, I am also running it on a Raspberry Pi 4: smokeping-sg.superuser.one.

Note that it's actually the same config used on both; since the RPi and the server are on the same WireGuard network, that works out nicely.

This is my docker run command to start it up:

docker run --name=smokeping --hostname=smokeping \
  -e PUID=1000 -e PGID=1000 -e TZ=`cat /etc/timezone` \
  -d -p 8000:8000 \
  -v /srv/smokeping/config:/config \
  -v /srv/smokeping/data:/data \
  --restart unless-stopped \
  --network 0x04 \
  linuxserver/smokeping

Be sure to create the needed paths. I am running it in my specific Docker network 0x04; change (or remove) --network 0x04 to something that works for you.
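The prep work is just creating those paths and, if you want it, the custom network (0x04 is simply my own naming):

mkdir -p /srv/smokeping/config /srv/smokeping/data
docker network create 0x04   # optional, only if you keep --network 0x04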

Categories
Apple Linux Networking Software VM

Box — Docker shell server

A couple of months ago I had the great idea to set up a shell server in Docker. Simply because my docker skillz were quite rusty and a shell server was something I actually genuinely needed.

Shell servers… so 2005. I remember in the good old IRC days people asking for (free) shell servers to run their eggdrop and stuff. OMG am I getting old? Anyhow…

I ssh quite often. I manage quite a few servers (~15?) and routers that require me to log in and do some random stuff. I also work on a laptop quite often, and that means closing the lid and moving around.

First of all, mosh is amazing: it keeps your shell session alive even on crappy (airport/hotel) internet and while moving between networks — that solves half the problem. If you are not using it, start using it now!

Second, during my datacenter technician days at Google we used to have a “jump server” — a shell server that allowed us to bridge the corporate network and ssh into prod machines. Doubt that's still used nowadays, but the idea stuck. I wanted something similar to ssh into from wherever I was, and then easily connect onwards to my servers. And since the network the shell server runs on is stable, I only need mosh for the hop to the shell server; after that, the connection very rarely dies.
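The day-to-day flow is then simply (hostnames made up):

mosh box.example.com    # survives roaming and flaky WiFi
ssh some-server         # from the box, over a stable network, onwards to wherever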

And I guess, third, I recently purchased an iPad Pro and I really need my local “dev” environment with the git repo I edit quite frequently. But iPadOS isn't really your average computer and doesn't even have a proper terminal. This is my experiment to make iPadOS work as a main computer when on the move.

Enter box — Docker shell server

I’ve copied over the files I use to this example repo, and added some comments. Mind you that this repo acts as a proof of concept and isn’t kept up to date, as I have my own private repo — but this should give you a good idea on how to set up your own shell server with Docker.

start.sh — this is a simple script that I execute when I first run or need to update the container. I execute the same file on two different servers: Liana, my Raspberry Pi at home and Ocean, my server in Amsterdam.

zsh.sh — this installs what I care about for zsh. This could be part of the Dockerfile but for some reason I separated it. ¯\_(ツ)_/¯

git.sh — this clones my Git repos so I can edit and commit stuff from the shell server.

run.sh — this file is launched by the Dockerfile at the end and executes what matters: the ssh daemon. It also adds a WireGuard route and executes the scripts above.
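As a sketch, that boils down to something like this; the subnet and gateway are placeholders (and the route needs the container to run with NET_ADMIN):

#!/bin/bash
# Sketch of run.sh: final step of the Dockerfile
/zsh.sh && /git.sh                            # the helper scripts above
ip route add 10.200.200.0/24 via 172.17.0.1   # reach the WireGuard net via the Docker host
exec /usr/sbin/sshd -D                        # sshd in the foreground keeps the container alive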

Dockerfile — this installs everything I need and configures the whole thing. I’ve added tons of comments that should get you going.
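A condensed idea of what that Dockerfile installs (the package list is my guess based on these posts, not the exact file):

FROM debian:buster-slim
# Sketch: the basics for a shell box (ssh, mosh, zsh, git, tmux)
RUN apt-get update && apt-get install -y --no-install-recommends \
        openssh-server mosh zsh git tmux curl iproute2 \
    && rm -rf /var/lib/apt/lists/* \
    && mkdir -p /run/sshd
COPY zsh.sh git.sh run.sh /
EXPOSE 22 60000-61000/udp
CMD ["/bin/bash", "/run.sh"]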

I am also cloning misc and homefiles as submodules in files/ — but you should change this to something that works for you. See the Dockerfile for more info.
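Swapping in your own repos is just a matter of (URLs are placeholders):

git submodule add https://github.com/<you>/misc files/misc
git submodule add https://github.com/<you>/homefiles files/homefiles
git submodule update --init --recursive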

Categories
Google Linux Networking

NextDNS + EdgeRouter + Redirecting DNS requests

Realised I haven’t updated this in a long while (life happened).

A couple of weeks ago I started to play with NextDNS — and I really recommend it to anyone that's somewhat privacy minded and cares about what's happening on their network.

I’ve set up several configs (home, parents, FlatTurtle TurtleBox (the NUCs controlling the screens)) and servers. Once it's out of beta and better supported on UniFi and Ubiquiti hardware, I might deploy it to our public WiFi networks too (well, most access points don't look like that — but you get the point).

Looking at the logs and seeing what goes through your network was an eye-opener. You can play around and block (or whitelist) certain domains.

I found out that my Devialet, for example, makes an insane number of requests to cache.radioline.fr. That domain has a 30s TTL. It shows that the majority of my DNS requests are actually automated pings and not in any way human traffic.

Anyhow — I’ve since installed the NextDNS CLI straight on my EdgeRouter Lite, which now acts as a caching DNS server and forwards queries over DoH.

I’ve turned off dnsmasq (/etc/default/dnsmasq => DNSMASQ_OPTS="-p0") and have NextDNS listen to :53 directly.
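Concretely that's one line plus a restart; treat this as a sketch, since EdgeOS may manage dnsmasq differently across versions:

echo 'DNSMASQ_OPTS="-p0"' | sudo tee -a /etc/default/dnsmasq
sudo /etc/init.d/dnsmasq restart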

Note that every EdgeOS update seems to wipe out the NextDNS installation, and requires a fresh install… Pain in the ass and doesn’t seem like that’s fixable.

This is my ERL NextDNS config (/etc/nextdns.conf):

hardened-privacy false
bogus-priv true
log-queries false
cache-size 10MB
cache-max-age 0s
report-client-info true
timeout 5s
listen :53
use-hosts true
setup-router false
auto-activate true
config 34xyz8
detect-captive-portals false
max-ttl 0s

Every flag is explained on their GitHub page, and they are very responsive via issues or through their chat on my.nextdns.io.

All right — the next thing I noticed is that my Google Home devices were not sending any DNS requests through my router, which means the devices use hard-coded DNS servers.

I have a separate vlan (eth1.90) for Google Home (it includes my Android TV, OSMC, Nest Home Hub and all other GHome and Chromecast devices). For this vlan I set up an mDNS reflector to be able to cast and ping/ssh from my “main” network/vlan to the GHome vlan.

Using this guide I redirected all external DNS traffic to the ERL so I can monitor what’s happening. The important part was the following:

show service nat rule 4053
 destination {
     port 53
 }
 inbound-interface eth1.90
 inside-address {
     address 10.3.34.1
     port 53
 }
 protocol tcp_udp
 type destination

This lets you “catch” all UDP and TCP connections to :53 and redirect them to the ERL DNS server (10.3.34.1). The GHome devices were acting a bit weird after committing the change, but a reboot of the devices fixed it.

Note that you need to set this up per vlan. If you want to catch DNS requests for your Guest or IoT vlan, you’ll need to do the same.
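For reference, these are the set commands that produce a rule like the one above; repeat with a new rule number and the other vlan's interface, e.g. a made-up eth1.20 for Guest (and adjust inside-address if that vlan should hit a different router IP):

configure
set service nat rule 4054 type destination
set service nat rule 4054 inbound-interface eth1.20
set service nat rule 4054 protocol tcp_udp
set service nat rule 4054 destination port 53
set service nat rule 4054 inside-address address 10.3.34.1
set service nat rule 4054 inside-address port 53
commit
save
exit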