
Allow ping from USG

Because I keep forgetting and it takes me far too much time to go through one of my million sites where I set this up and find the right config…

To allow a USG (Unifi Security Gateway) to reply to external (WAN) ping requests, do the following:

  • Head to the Unifi dashboard -> Settings -> Firewall & Security
  • Create a new rule
  • Type: Internet Local
  • Description: Allow Ping (Echo Request)
  • Rule Applied: Before Predefined Rules
  • Action: Accept
  • IPv4 Protocol: ICMP
  • IPv4 ICMP Type Name: Echo Request
  • Apply Changes -> wait ~2 minutes
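If you prefer the CLI, the same rule can be sketched in EdgeOS set-command form. This is a sketch, not what the controller generates: the ruleset name WAN_LOCAL and rule number 20 are assumptions (check your existing firewall names first), and on a controller-managed USG hand-made rules get overwritten on the next provision, hence the dashboard route above.

```shell
# sketch: allow WAN-side echo requests on an EdgeOS-based gateway
# (ruleset name WAN_LOCAL and rule number 20 are assumptions)
configure
set firewall name WAN_LOCAL rule 20 action accept
set firewall name WAN_LOCAL rule 20 description "Allow Ping (Echo Request)"
set firewall name WAN_LOCAL rule 20 protocol icmp
set firewall name WAN_LOCAL rule 20 icmp type-name echo-request
commit ; save ; exit
```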

That’s it…

All this for Smokeping.


NextDNS, EdgeOS and device names

Noticed that NextDNS was reporting old hostnames in its logs: old device names (devices whose hostname had changed), devices that were definitely no longer on the network, and IPs matched to the wrong hostnames.

The culprit is how EdgeOS deals with its hosts file: it keeps all the old entries and simply appends a new line at the end of the file.

NextDNS takes the first valid entry in that file, which is always going to be an older record.
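Why that matters is easy to see with a toy first-match lookup (hypothetical hostnames and IP, and a stand-in for the real parser, which behaves the same way for this purpose):

```shell
# simulate an EdgeOS hosts file where a device was renamed from old-laptop
# to new-laptop; EdgeOS appends rather than replaces, so both lines exist
printf '192.168.1.50 old-laptop\n192.168.1.50 new-laptop\n' > /tmp/hosts.demo

# first match wins, so lookups keep returning the stale name
awk '$1 == "192.168.1.50" { print $2; exit }' /tmp/hosts.demo
# prints: old-laptop
```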

So the simplest solution I found was to turn off hostfile-update every so often, which clears the hosts file.

So ssh into the device, run configure, and then run these commands:

set service dhcp-server hostfile-update disable
set service dhcp-server hostfile-update enable
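If you get tired of doing this by hand, the toggle can be scripted using EdgeOS's vbash script template and run from cron. This is a sketch under assumptions: the script-template path is the standard Vyatta one, but I haven't battle-tested this exact script.

```shell
#!/bin/vbash
# sketch: flush the DHCP hosts file by toggling hostfile-update off and on
source /opt/vyatta/etc/functions/script-template
configure
set service dhcp-server hostfile-update disable
commit
set service dhcp-server hostfile-update enable
commit
exit
```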

Update 22 Jun ’23:

Be sure to restart NextDNS, or it won’t actually publish the up-to-date client hostnames.

sudo /config/nextdns/nextdns restart

NextDNS + EdgeRouter + Redirecting DNS requests

Realised I haven’t updated this in a long while (life happened).

Couple of weeks ago I started to play with NextDNS — and I really recommend it to anyone who is somewhat privacy-minded and cares about what's happening on their network.

I’ve set up several configs (home, parents, FlatTurtle TurtleBoxes (the NUCs controlling the screens)) and servers. Once it’s out of beta and better supported on UniFi and Ubiquiti hardware I might deploy it to our public WiFi networks too.

Looking at the logs was an eye-opener: you really see what goes through your network. You can play around and block (or whitelist) certain domains.

I figured out, for example, that my Devialet makes an insane number of requests to one domain with a 30s TTL. It shows that the majority of my DNS requests are actually automated pings and not in any way human traffic.

Anyhow — I’ve since installed the NextDNS CLI straight on my EdgeRouter Lite acting as a caching DNS server and forwarding using DoH.

I’ve turned off dnsmasq (/etc/default/dnsmasq => DNSMASQ_OPTS="-p0") and have NextDNS listen to :53 directly.

Note that every EdgeOS update seems to wipe out the NextDNS installation and requires a fresh install… Pain in the ass, and it doesn’t seem like that’s fixable.
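At least the reinstall is quick with the NextDNS CLI install script (this is the one-liner from their docs; it fetches and runs a script from their site, so read it first if that bothers you):

```shell
# reinstall the NextDNS CLI after a firmware upgrade has wiped it
sh -c "$(curl -sL https://nextdns.io/install)"
```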

This is my ERL NextDNS config (/etc/nextdns.conf):

hardened-privacy false
bogus-priv true
log-queries false
cache-size 10MB
cache-max-age 0s
report-client-info true
timeout 5s
listen :53
use-hosts true
setup-router false
auto-activate true
config 34xyz8
detect-captive-portals false
max-ttl 0s

Every flag is explained on their GitHub page, and they are very responsive via issues or through their chat.

All right — next thing I noticed is that my Google Home devices were not sending any DNS requests, which means the devices use hardcoded DNS servers.

I have a separate vlan (eth1.90) for Google Home (it includes my Android TV, OSMC, Nest Home Hub and all other GHome and Chromecast devices). For this vlan I set up an mDNS reflector to be able to cast and ping/ssh from my “main” network/vlan to the GHome vlan.

Using this guide I redirected all external DNS traffic to the ERL so I can monitor what’s happening. The important part was the following:

yeri@sg-erl# show service nat rule 4053
 destination {
     port 53
 }
 inbound-interface eth1.90
 inside-address {
     port 53
 }
 protocol tcp_udp
 type destination
This “catches” all UDP and TCP connections to :53 and redirects them to the ERL’s DNS server. The GHome devices were acting a bit weird after committing the change, but a reboot of the devices fixed it.

Note that you need to set this up per vlan. If you want to catch DNS requests for your Guest or IoT vlan, you’ll need to do the same.
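For reference, the show output above corresponds to set commands along these lines. The inside address was redacted above, so `<router-lan-ip>` below is a placeholder for the ERL’s own address on that vlan:

```shell
# sketch: destination NAT to hijack all port-53 traffic on one vlan
configure
set service nat rule 4053 description "Redirect DNS to ERL"
set service nat rule 4053 type destination
set service nat rule 4053 inbound-interface eth1.90
set service nat rule 4053 protocol tcp_udp
set service nat rule 4053 destination port 53
set service nat rule 4053 inside-address address <router-lan-ip>
set service nat rule 4053 inside-address port 53
commit ; save ; exit
```

Duplicate the rule with a different rule number and inbound-interface for each additional vlan you want to catch.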


EdgeRouter IPsec tunnel to FritzBox

So, I have an EdgeRouter Lite in Singapore (Starhub) and a FritzBox in Belgium (EDPnet).

This is mostly stuff that I have found from several articles, mostly from here.

ERL: eth0 is WAN, eth1 ( and eth2 (unused, not VPN’ed) are LAN

This is the FritzBox config (go to VPN and then Import a config), fritzvpn.cfg:

vpncfg {
        connections {
                enabled = yes;
                conn_type = conntype_lan;
                name = "VPN Yeri";
                always_renew = yes;
                reject_not_encrypted = no;
                dont_filter_netbios = yes;
                localip =;
                local_virtualip =;
                remoteip =;
                remote_virtualip =;
                remotehostname = "";
                localid {
                        fqdn = "";
                }
                remoteid {
                        fqdn = "";
                }
                mode = phase1_mode_idp;
                phase1ss = "all/all/all";
                keytype = connkeytype_pre_shared;
                key = "SOMEPASSWORD";
                cert_do_server_auth = no;
                use_nat_t = yes;
                use_xauth = no;
                use_cfgmode = no;
                phase2localid {
                        ipnet {
                                ipaddr =;
                                mask =;
                        }
                }
                phase2remoteid {
                        ipnet {
                                ipaddr =;
                                mask =;
                        }
                }
                phase2ss = "esp-all-all/ah-none/comp-all/pfs";
                accesslist = "permit ip any";
        }
        ike_forward_rules = "udp", 
}

Be sure to modify the password, local (Fritz) and remote (ERL) LAN and edit the local and remote fqdn.

This is the ERL config (via SSH; you’ll need to set this):

yeri@sg-erl# show vpn ipsec
 auto-update 60
 auto-firewall-nat-exclude enable
 esp-group FOO0 {
     proposal 1 {
         encryption aes256
         hash sha1
     }
 }
 ike-group FOO0 {
     dead-peer-detection {
         action restart
         interval 60
         timeout 60
     }
     lifetime 3600
     proposal 1 {
         dh-group 2
         encryption aes256
         hash sha1
     }
 }
 ipsec-interfaces {
     interface eth0
 }
 nat-networks {
     allowed-network {
     }
 }
 nat-traversal enable
 site-to-site {
     peer {
         authentication {
             mode pre-shared-secret
             pre-shared-secret SOMEPASSWORD
         }
         connection-type initiate
         description "VPN to"
         ike-group FOO0
         tunnel 1 {
             esp-group FOO0
             local {
             }
             remote {
             }
         }
     }
 }
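As a sketch, that config maps to set commands along these lines. The peer address and the tunnel subnets were redacted above, so `<peer>`, `<local-net>` and `<remote-net>` are placeholders:

```shell
# sketch of the corresponding EdgeOS set commands; <peer>, <local-net>
# and <remote-net> stand in for values redacted above
configure
set vpn ipsec esp-group FOO0 proposal 1 encryption aes256
set vpn ipsec esp-group FOO0 proposal 1 hash sha1
set vpn ipsec ike-group FOO0 proposal 1 dh-group 2
set vpn ipsec ike-group FOO0 proposal 1 encryption aes256
set vpn ipsec ike-group FOO0 proposal 1 hash sha1
set vpn ipsec ipsec-interfaces interface eth0
set vpn ipsec nat-traversal enable
set vpn ipsec site-to-site peer <peer> authentication mode pre-shared-secret
set vpn ipsec site-to-site peer <peer> authentication pre-shared-secret SOMEPASSWORD
set vpn ipsec site-to-site peer <peer> ike-group FOO0
set vpn ipsec site-to-site peer <peer> tunnel 1 esp-group FOO0
set vpn ipsec site-to-site peer <peer> tunnel 1 local prefix <local-net>
set vpn ipsec site-to-site peer <peer> tunnel 1 remote prefix <remote-net>
commit ; save ; exit
```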


yeri@sg:~$ show vpn ipsec status
IPSec Process Running PID: 20140

1 Active IPsec Tunnels

IPsec Interfaces :
        eth0    (no IP on interface statically configured as local-address for any VPN peer)
yeri@sg:~$ show vpn ipsec sa
  #9, ESTABLISHED, IKEv1, 85a2d010ada73113:ca439c40ac3bca06
  local  '' @ 116.87.x.y
  remote '' @ 109.236.x.y
  established 1592s ago, reauth in 1333s
  #1, INSTALLED, TUNNEL, ESP:AES_CBC-256/HMAC_SHA1_96/MODP_1024
    installed 1592s ago, rekeying in 1200s, expires in 2009s
    in  c0bb652e, 1038032 bytes, 10726 packets,     0s ago
    out 8d5df3f5,  532685 bytes,  6062 packets,     0s ago

I haven’t really figured out what no IP on interface statically configured as local-address for any VPN peer means yet though.

Next up: VLANs


FlatTurtle in elevators: making of

First tests were at Glaverbel (a circle- or “O”-shaped building) in Watermael-Boitsfort with 12 lifts, about a year ago. The internet wiring makes a whole circle from the internet connection in the technical room (near the entrance hall). In this 1960s design the lift machine rooms had one shared/common room where we installed switches (to avoid pulling too much cable and to overcome cable-length issues). High-quality shielded cable was used to avoid signal loss over the distances involved.


We first opted for wired internet to the cabin, with the TurtleBox on top of the cabin and HDMI down to the display inside. The idea was to tuck the TurtleBox inside the roof or under some protection in case something fell on it, and against moisture and dust; this was quickly abandoned due to space & time constraints.


The TurtleBox in this case was again an Intel NUC (Celeron for the first two “tests”, Atom afterwards due to fanless design).


After the initial tests, wired internet turned out not to be feasible beyond our first two cabins:

  • pricing of cable (~€450)
  • Kone provided the wrong cable (some weird colour codes, not the regular STP/UTP, and the coating was too thick for STP plugs)
  • Test lift one tore the cable (it probably got stuck somewhere between the cabin and the wall)
  • In test lift two, during a controlled shutdown of the lift (due to other repairs by Kone Refurbishments), Kone Emergencies was called by the customer to restart the shut-down lift (the customer not being aware of the reason for the shutdown). Kone Services didn’t recognise the new wiring as native or normal, and decided to cut the cable.

=> So wiring is more (expensive) hassle than anything else.

We realized we didn’t want to go through this mess 10 more times.


I can also tell you lifts are way less ‘clean’ than I would have expected.

The idea my technician had (I can greatly recommend him; he does an amazing, detailed & clean job) was to try WiFi. I was skeptical (10-ish floors, lots of metal and other crap inside the shaft)… But it would definitely be cheaper and easier to maintain.


NUCs were mounted on top of the cabin for a clear line of sight. After testing, however, this was deemed unnecessary and they were lowered/mounted to the side for additional protection.


In the end, I have to say that despite all the metal and concrete, the signal went WAY further than I’d imagined (we could cover two entirely separate shafts with one AP; only the -1 and -2 floors had trouble getting a stable signal). The signal is strong enough for working WiFi in the (metal) lift cabin, and people working not too far from the (metal) lift doors on the floors can still use WiFi as well (albeit not with the best signal).


The WiFi (Ubiquiti UniFi, again) uses Power-over-Ethernet and is remotely managed using Auki, making it very easy to manage and install.


The 12 lifts now have FlatTurtle displays in them, using WiFi as internet connection… And it’s working like a charm!


Oh, and on the plus side, the Kone technicians (all of them) were a charm to work with, doing a great job!!


More at FlatTurtle’s blog.