I now own three Raspberry Pis.
Using two of them (and my Guruplug as WiFi AP) I connected my new apartment to my old house (my parents') over VPN.
This way I can access the printers/scanners and NAS at home.
The two rPis are used as routers (each with a MacBook Air USB-to-Ethernet adapter as a second Ethernet port, eth1). Basic howtos for doing this are easily found using Google (a good starting point).
I made my own installation of Raspbian (as the downloadable image contains too much crap), details here (actually not that easy to find when Googling for bootstrap raspbian and the like).
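For the general idea of such a bootstrap: Raspbian can be debootstrapped from its own mirror. This is only a rough sketch, not the author's exact recipe; the suite (wheezy), target path and second-stage method are my assumptions.

```shell
# First stage, run as root on any Debian box: fetch and unpack a minimal
# Raspbian (armhf). Needs the raspbian-archive-keyring package installed
# (or --no-check-gpg, which is less safe).
debootstrap --arch=armhf --foreign wheezy /mnt/raspbian \
    http://archive.raspbian.org/raspbian

# Second stage, run from within the target -- either on the Pi itself,
# or on the build host via qemu-arm-static copied into /mnt/raspbian/usr/bin:
# chroot /mnt/raspbian /debootstrap/debootstrap --second-stage
```

After that you still need a kernel, firmware and fstab on the SD card, which is exactly the part the linked howto covers.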
I’ve connected three different LANs over an OpenVPN connection:
- LAN1 (home): 192.168.1.0 (Gateway: 192.168.1.1, VPN ip: 10.9.8.254)
- LAN2 (apartment, ethernet): 10.60.111.0 (Gateway: 10.60.111.1, VPN ip: 10.9.8.250)
- LAN3 (apartment, wifi): 10.10.10.0 (Gateway: 10.10.10.1, VPN ip: 10.9.8.246)
OpenVPN range: 10.9.8.0. The subnet mask is 255.255.255.0 in all cases.
LAN3 is connected via LAN2 to the internet. So the default gateway of router 10.10.10.1 is 10.60.111.1.
The gateways/routers are all Debian-based Linux systems. I'm using EDPnet as ISP, and thus need to use those Sagem/Belgacom-approved routers (BBox-2 hardware). These Sagems are set to bridged mode and don't do the PPP stuff themselves; pppoeconf on Debian takes care of most of that. As EDPnet provides IPv6, I can ping6 from those routers.
The idea is to be able to connect to/ping every LAN from any client connected to any of the LANs (without running OpenVPN on the clients; it only runs on the gateways).
For example: my PC with ip 10.10.10.15 wants to connect to the NAS with ip 192.168.1.100.
This can easily be achieved by setting a client-config-dir in openvpn.conf (or whatever your config file is named):
client-config-dir /etc/openvpn/tiete
And don’t forget to add route pushes:
push "route 192.168.1.0 255.255.255.0"
push "route 10.60.111.0 255.255.255.0"
push "route 10.10.10.0 255.255.255.0"
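Put together, the relevant server-side bits look roughly like this. A sketch only: the `server` directive, `client-to-client`, and the kernel-side `route` lines are my assumptions based on the subnets above (the traceroutes later in the post, where gateways reach each other without a server hop, suggest client-to-client is in effect).

```conf
# openvpn.conf, server side -- illustrative sketch
server 10.9.8.0 255.255.255.0          # hand out addresses in the VPN range
client-config-dir /etc/openvpn/tiete   # per-client overrides live here
client-to-client                       # let the gateway clients reach each other

# routes pushed to every connecting client:
push "route 192.168.1.0 255.255.255.0"
push "route 10.60.111.0 255.255.255.0"
push "route 10.10.10.0 255.255.255.0"

# kernel routes into the tunnel for the subnets iroute'd below:
route 192.168.1.0 255.255.255.0
route 10.60.111.0 255.255.255.0
route 10.10.10.0 255.255.255.0
```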
But here comes the annoying part. As I'm pushing the route to 10.60.111.0 via VPN, while 10.60.111.1 is supposed to be my Guruplug's default gateway as well (ISP > eth0:RaspberryPi:eth1 > eth0:Guruplug, remember?), this was causing quite some routing fuck-ups.
The easiest way to solve this was to turn off VPN on the Guruplug altogether, and route 10.10.10.0 over the Raspberry Pi instead, by adding this line to /etc/network/interfaces:
up route add -net 10.10.10.0 netmask 255.255.255.0 gw 10.60.111.2 dev eth1
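In context, that line sits in the eth1 stanza on the LAN2 Raspberry Pi (judging from the traceroutes, where sheeva's eth0 is 10.60.111.2); everything in this stanza except the `up route` line is my reconstruction, not the author's literal file:

```conf
# /etc/network/interfaces on the LAN2 router ("Industry") -- sketch
auto eth1
iface eth1 inet static
    address 10.60.111.1
    netmask 255.255.255.0
    # reach the WiFi LAN behind the Guruplug directly over LAN2:
    up route add -net 10.10.10.0 netmask 255.255.255.0 gw 10.60.111.2 dev eth1
```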
Then I'll change the client-specific configs on the VPN server. Create a file in whatever directory you picked as client-config-dir, and name it after the client's actual VPN name (the common name used when creating its key).
As I have three routers, I created three files (sheeva for my guruplug, Pi for my first rPI and Industry for my 2nd. Yep… Fancy names).
I also want to give a static VPN IP address to each gateway, so I use the option:
ifconfig-push 10.9.8.<valid-ip> 10.9.8.<valid-ip - 1>
(With OpenVPN's default net30 topology, not every address is valid: the two addresses have to form one of the /30 endpoint pairs, which is why the examples below use .254/.253, .250/.249 and .246/.245.)
And I'll also add the iroute option, which tells the server which subnet lives behind each client (pushing routes to the clients isn't enough by itself).
This is what it looks like for the router on the 192.168.1.0 network (“Pi”):
ifconfig-push 10.9.8.254 10.9.8.253
iroute 192.168.1.0 255.255.255.0
For “Sheeva”, the WiFi AP on 10.10.10.0:
ifconfig-push 10.9.8.246 10.9.8.245
And for 10.60.111.0 plus 10.10.10.0 routed over 10.60.111.0 (“Industry”):
ifconfig-push 10.9.8.250 10.9.8.249
iroute 10.60.111.0 255.255.255.0
iroute 10.10.10.0 255.255.255.0
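Writing those three files can be scripted in one go. A sketch, using a scratch directory instead of /etc/openvpn/tiete so it runs unprivileged; the file names must match the common names of the keys:

```shell
#!/bin/sh
# Write the per-client config files into the client-config-dir.
# On the VPN server this would be /etc/openvpn/tiete.
CCD=./tiete
mkdir -p "$CCD"

# "Pi": gateway of the 192.168.1.0 LAN
cat > "$CCD/Pi" <<'EOF'
ifconfig-push 10.9.8.254 10.9.8.253
iroute 192.168.1.0 255.255.255.0
EOF

# "sheeva": the WiFi AP -- static VPN IP only, no iroute
cat > "$CCD/sheeva" <<'EOF'
ifconfig-push 10.9.8.246 10.9.8.245
EOF

# "Industry": gateway of 10.60.111.0, also fronting 10.10.10.0
cat > "$CCD/Industry" <<'EOF'
ifconfig-push 10.9.8.250 10.9.8.249
iroute 10.60.111.0 255.255.255.0
iroute 10.10.10.0 255.255.255.0
EOF
```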
And don’t forget to set up masquerading over tun0 (or tun+) with iptables.
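On each gateway that boils down to IP forwarding plus a NAT rule; a sketch to run as root (the LAN interface name eth0 and the permissive FORWARD rules are assumptions, your chains may already be stricter):

```shell
# Allow the kernel to route between interfaces
sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic leaving via any tun interface (tun+ matches tun0, tun1, ...)
iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE

# Permit forwarded traffic between the LAN side and the tunnel
iptables -A FORWARD -i eth0 -o tun+ -j ACCEPT
iptables -A FORWARD -i tun+ -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```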
Now… Oddly enough, this didn’t require that much configuration, cursing and stress… And, well, it kind of just works.
From my Mac to my NAS:
nazgul ~ $ traceroute 192.168.1.100
traceroute to 192.168.1.100 (192.168.1.100), 64 hops max, 52 byte packets
 1  sheeva (10.10.10.1)  1.936 ms  1.159 ms  0.800 ms
 2  10.60.111.1 (10.60.111.1)  1.456 ms  1.776 ms  1.539 ms
 3  10.9.8.254 (10.9.8.254)  55.745 ms  55.046 ms  54.734 ms
 4  192.168.1.100 (192.168.1.100)  62.302 ms  55.327 ms  54.795 ms
From Pi (gateway 192.168.1.1) to nazgul, my Mac:
pi ~ # traceroute 10.10.10.15
traceroute to 10.10.10.15 (10.10.10.15), 30 hops max, 60 byte packets
 1  10.9.8.250 (10.9.8.250)  65.892 ms  74.177 ms  73.957 ms
 2  10.60.111.2 (10.60.111.2)  73.441 ms  72.902 ms  72.342 ms
 3  10.10.10.15 (10.10.10.15)  71.780 ms  71.187 ms  70.760 ms
From Heartbeat (10.9.8.102), my Munin stats server to the printer:
heartbeat ~/bin # traceroute 192.168.1.90
traceroute to 192.168.1.90 (192.168.1.90), 30 hops max, 60 byte packets
 1  pi (10.9.8.254)  39.835 ms  40.794 ms  41.567 ms
 2  192.168.1.90 (192.168.1.90)  41.541 ms  42.452 ms  43.307 ms
From Heartbeat to Sheeva’s eth0 IP:
heartbeat ~/bin # traceroute 10.60.111.2
traceroute to 10.60.111.2 (10.60.111.2), 30 hops max, 60 byte packets
 1  industry (10.9.8.250)  32.716 ms  32.615 ms  34.359 ms
 2  sheeva (10.60.111.2)  34.405 ms  34.349 ms  35.014 ms
From Heartbeat to an Android device (not sure why the latency spike):
heartbeat ~/bin # traceroute 10.10.10.72
traceroute to 10.10.10.72 (10.10.10.72), 30 hops max, 60 byte packets
 1  industry (10.9.8.250)  31.337 ms  32.269 ms  32.218 ms
 2  sheeva (10.60.111.2)  33.006 ms  33.052 ms  32.996 ms
 3  10.10.10.72 (10.10.10.72)  471.564 ms  472.169 ms  473.082 ms
Next up (once I have spare time): try to sync local DNS and fix local ipv6.
I’ll put most of the configs on Github at some point.