This paste assumes a Vultr Ubuntu cloud-compute instance with default Vultr-issued IPv6 and a home MikroTik RouterOS v7 in roughly the standard defconf state (LAN bridge, IPv6 firewall defconf rules). It gives the home LAN a globally routable IPv6 /64 over a WireGuard tunnel without NAT66 and without DHCPv6-PD.
If your VPS is on a different cloud, your home router is not MikroTik, or the LAN is not behind a single bridge, read the full guide and substitute. The validation report shows the actual end-to-end captures.
Internet
│
┌────────┴────────┐
│ Vultr Ubuntu │
│ <VPS_IP> │
│ enp1s0: │
│ <ORIG /64> on-link (untouched) ← keeps VPS itself online
│ <RES /64> NDP-proxied via ndppd ← owned by the home LAN
│ wg0: fd99:9999:9999::1/64 ← WG transit (ULA)
└────────┬────────┘
│ WireGuard, UDP/51820
│
┌────────┴────────┐
│ MikroTik v7 │
│ wg-cloud: │
│ fd99:9999:9999::2/64
│ bridge (LAN): │
│ <RES /64>::1/64 (RA SLAAC to clients)
└────────┬────────┘
│
LAN clients (SLAAC from <RES /64>)
Item                 Value
VPS plan             Vultr Cloud Compute, $5/mo
Reserved IPv6 /64    $3/mo
Total monthly cost   $8/mo
Tunnel               WireGuard, IPv6-only over the tunnel
LAN config           SLAAC, no DHCPv6, no NAT66
Tested               Ubuntu 26.04 + ndppd 0.2.4, RouterOS 7.21.2 on RB5009
Substitute these placeholders before running:
Placeholder                    Where to get it
<VPS_IP>                       Vultr panel → instance public IPv4
<RES_PREFIX> (a /64)           Reserved IPv6 you allocate in Vultr, e.g. 2001:db8:1:1
<VPS_PRIVKEY> / <VPS_PUBKEY>   wg genkey and wg pubkey on the VPS
<MT_PRIVKEY> / <MT_PUBKEY>     wg genkey and wg pubkey on a workstation
<NIC>                          VPS NIC name (enp1s0 on modern Vultr Ubuntu)
<LAN_BRIDGE>                   MikroTik LAN bridge name (bridge in defconf)
1. Reserve the IPv6 /64 in Vultr
Panel: Products → Network → Reserved IPs → Add Reserved IP, choose IPv6, same region as the VPS, label it wg-lan-prefix, then Attach to Server. No reboot. Verify from the VPS:
wg show # latest handshake should be < 3 min
ping6 -c 2 fd99:9999:9999::2 # WG transit reachable
ping6 -c 2 <RES_PREFIX>::1 # MT LAN gw reachable via wg0
If LAN clients get SLAAC addresses but connections time out after ~30 s of idle, ndppd is not answering — see full guide §6 (the auto vs static rule note) and journalctl -u ndppd.
Cost recap
Line item                                                      Cost
Vultr Cloud Compute (smallest plan with IPv6 in your region)   $5.00 / mo
Reserved IPv6 /64                                              $3.00 / mo
Total                                                          $8.00 / mo
Add bandwidth overages only if the LAN's IPv6 egress exceeds your VPS plan's transfer cap (1 TB on the $5 plan as of 2026-05).
MikroTik RouterOS v7 — IPv6-over-WireGuard via Vultr Relay
Last updated: 2026-05-11
This is a production-minded recipe for giving a home LAN globally routable IPv6 when the local ISP has no native IPv6, by routing a Vultr-issued IPv6 /64 over a WireGuard tunnel to a MikroTik RouterOS v7 router. It covers:
a single Vultr Ubuntu VPS as the relay endpoint;
a Vultr Reserved IPv6 /64 dedicated to the LAN side, leaving the instance's original on-link /64 alone;
WireGuard site-to-site, IPv6-only over the tunnel;
ndppd answering NDP for the reserved /64 toward Vultr's gateway;
SLAAC on the home LAN, no NAT66, no DHCPv6-PD;
MikroTik defconf IPv6 firewall with one targeted addition.
It is written as a guide, not just a paste. Review every interface name, prefix, key, and route comment before applying.
This design is for: a home or lab user who wants real, globally routable IPv6 with SLAAC, lives behind an IPv4-only ISP (or behind CGNAT), and is willing to pay ~$8/mo for a stable cloud relay.
This design is not: a CDN, a high-throughput proxy, or a substitute for native IPv6 from your ISP. Your IPv6 path is now whatever your ISP-to-VPS IPv4 latency is plus tunnel overhead. In our build, that's ~33 ms ISP-to-Singapore.
If you take one thing away: Vultr does not delegate routed IPv6 prefixes — every Vultr IPv6 (primary or reserved) is on-link from their gateway's perspective. That means anything you want forwarded behind the VPS must be answered for via NDP on the VPS's NIC (e.g. with ndppd). Reserving a second /64 just for the LAN is the cheapest way to keep that NDP-proxied prefix structurally separate from the VPS's own /64 — which is what makes SLAAC on the LAN work cleanly without fighting the kernel's connected-route table. Everything else in this document is plumbing around that idea.
Common gotchas
Vultr only delegates on-link /64s. Routed-prefix designs (DHCPv6-PD, a bare route /64 dev wg0) won't work in isolation; the upstream gateway never sends packets unless you NDP-proxy. See §3, §5.
Sharing the original /64 with the LAN fights the kernel. A connected /64 dev <NIC> route on the VPS makes packets for LAN clients dead-end on the local NIC even with proxy_ndp on. See §5.2.
ndppd static mode warns/refuses for prefixes shorter than /120. Ubuntu 26.04 ships an ndppd whose static rule logs "Low prefix length" and may not load; auto works.
Assumptions (other Ubuntu LTS releases should also work, as should other Vultr plans that include IPv6):
Item               Value                                                  Notes
VPS NIC            enp1s0                                                 Yes — modern Vultr Ubuntu uses this; older images use ens3
VPS public IPv4    <VPS_IP>                                               From the panel
VPS original /64   acquired via SLAAC from Vultr's gateway, leave alone   No
Reserved /64       <RES_PREFIX>::/64, e.g. 2001:db8:1:1::/64              Yes — what you reserved
WG transit         fd99:9999:9999::/64 ULA, VPS=::1, MT=::2               Yes — pick any ULA
WG port            UDP 51820                                              Yes
WG MTU             1420                                                   Lower if you see PMTUD failures
Home router        MikroTik RouterOS v7 with defconf v6 firewall          Yes — adapt firewall rules accordingly
LAN bridge         bridge                                                 Yes — match yours
The recipe does not include a complete IPv4 firewall; defconf v4 on MikroTik plus the existing UFW on the VPS are assumed.
2. Read this before pasting
Take these snapshots first:
/export file=before-ipv6-wg
/system backup save name=before-ipv6-wg
# on the VPS
sshd -T > /root/sshd-T.before
ufw status verbose > /root/ufw.before
ip -6 addr > /root/v6addr.before
ip -6 route > /root/v6route.before
Apply VPS changes first, verify the tunnel comes up from MikroTik, then add the MikroTik firewall accept rule. Doing it in the other order silently drops the WG keepalives.
3. Why Vultr forces NDP proxy
Vultr's standard cloud instances are issued a single /64 on-link via SLAAC from the upstream gateway:
2001:db8:1:0::/64 dev enp1s0 proto ra metric 100
default via fe80::1 dev enp1s0 proto ra
The gateway treats every address inside that /64 as reachable on the local L2 segment. Vultr does not:
Delegate a routed /64 or /48 to your instance;
Honor DHCPv6-PD requests from your instance;
Send traffic for "child" prefixes unless your VPS's MAC has answered NDP for the destination.
Two consequences for an IPv6-over-WG relay:
Anything you want forwarded behind the VPS must be NDP-proxied on the upstream-facing NIC.
There is no "routed prefix" pricing tier you can pay for — the closest thing is a Reserved IPv6 /64 ($3/mo) which is also delivered on-link, just attached to the same NIC. What it gives you is a second, structurally separate /64 that you can dedicate to the LAN side without disturbing the VPS's own /64.
This is the cheapest way to keep the routing model clean. The variants in §12 cover what you have to give up if you skip the $3.
4. Topology and addressing
Internet
│
┌────────────┴────────────┐
│ Vultr Ubuntu <VPS_IP> │
│ enp1s0 (one MAC): │
│ <ORIG_PREFIX>::xxxx ← SLAAC, VPS's own v6 (untouched)
│ <RES_PREFIX>::/64 ← NDP-proxied, owned by LAN
│ wg0: │
│ fd99:9999:9999::1/64 │
└────────────┬────────────┘
│ WG, UDP/51820
│
┌────────────┴────────────┐
│ MikroTik v7 │
│ wg-cloud: │
│ fd99:9999:9999::2/64 │
│ bridge: │
│ <RES_PREFIX>::1/64 ← LAN gateway, RA SLAAC
└────────────┬────────────┘
│
LAN clients (SLAAC)
Element                Value
VPS original /64       <ORIG_PREFIX>::/64 — kept on enp1s0, used by the VPS itself
Reserved /64 (LAN)     <RES_PREFIX>::/64 — routed via wg0, NDP-proxied on enp1s0
WG transit /64 (ULA)   fd99:9999:9999::/64, VPS=::1, MT=::2
WG port                UDP 51820
WG MTU                 1420 (1500 − 80 bytes, the worst-case WireGuard-over-IPv6 overhead)
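The MTU figure follows from plain header arithmetic. A quick sanity check (the header sizes are WireGuard's fixed data-packet overhead, not values taken from this build):

```shell
# WireGuard data-packet overhead: 8 (UDP) + 16 (WG header) + 16 (Poly1305 tag) = 40,
# plus the outer IP header: 20 bytes for IPv4, 40 for IPv6.
wg_overhead=$((8 + 16 + 16))
echo $((1500 - 20 - wg_overhead))   # 1440: largest inner MTU over an IPv4 outer path
echo $((1500 - 40 - wg_overhead))   # 1420: largest inner MTU over an IPv6 outer path
```

1420 is therefore safe regardless of whether the outer path is IPv4 or IPv6.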
5. Why the routing model works
5.1 What ndppd actually does
ndppd listens for IPv6 Neighbor Solicitations on the upstream-facing NIC. When Vultr's gateway asks "who has <RES_PREFIX>::abcd?", ndppd replies with the VPS's MAC address. The gateway then forwards traffic for that destination to the VPS, and the kernel routes it via the more-specific FIB entry (<RES_PREFIX>::/64 dev wg0) into the tunnel.
Two ndppd modes worth knowing:
static — answer for any address in the rule unconditionally. Simple, but Ubuntu 26.04's ndppd (0.2.4 binary) refuses to load this for prefixes shorter than /120 (it logs "Low prefix length" and exits).
auto — consult the local FIB before answering. Since <RES_PREFIX>::/64 dev wg0 exists, every address in the /64 is "reachable via wg0", so ndppd answers. Same effect as static, no warning. Use auto.
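A minimal /etc/ndppd.conf in auto mode, assuming the upstream NIC is enp1s0 and using the placeholder prefix from the table above (a sketch of the shape, not the verbatim build file):

```
proxy enp1s0 {
    rule <RES_PREFIX>::/64 {
        auto
    }
}
```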
5.2 Why we don't share the original /64 with the LAN
Tempting alternative (skip the $3): announce the VPS's original <ORIG_PREFIX>::/64 on the LAN too, and let ndppd answer for both ends.
It does not work cleanly. The Linux kernel has the original /64 as a connected route on enp1s0 (proto ra from Vultr's RA). When a return packet for a LAN client at <ORIG_PREFIX>::abcd arrives, the FIB matches the connected route, the kernel does ND on enp1s0, gets nothing (the LAN client lives behind wg0), and drops the packet. proxy_ndp answers external solicitations — it does not change internal route lookups.
The workarounds (per-address static neighbor proxies, removing the connected /64 to replace with a host route, carving a non-/64 sub-prefix that breaks SLAAC) are all fragile. The reserved /64 sidesteps all of it: it is not a connected route on enp1s0 because we never bind any address from it to the NIC. The kernel only knows it as <RES_PREFIX>::/64 dev wg0, so traffic flows naturally.
accept_ra=2 is the gotcha. With forwarding=1, Linux stops honoring RAs at the default accept_ra=1, so the VPS would slowly lose its own IPv6 default route as the RA-learned one expires. accept_ra=2 keeps RA processing on for that specific NIC even with forwarding enabled.
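A sysctl drop-in consistent with this reasoning — the filename matches the one removed in the cleanup section; substitute your NIC for enp1s0 (the proxy_ndp line is included because the troubleshooting table expects it to read 1):

```
# /etc/sysctl.d/99-wg-relay.conf
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.enp1s0.accept_ra = 2   # keep accepting Vultr's RAs despite forwarding=1
net.ipv6.conf.enp1s0.proxy_ndp = 1   # NDP proxying on the upstream NIC
```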
6. VPS configuration
6.1 Reserve the IPv6 /64
Panel: Products → Network → Reserved IPs → Add Reserved IP, choose IPv6, same region as the VPS, label it (wg-lan-prefix is fine), then Attach to Server. No reboot. Verify from the VPS:
wg-quick automatically installs routes for AllowedIPs. Do not add a manual PostUp = ip -6 route add <RES_PREFIX>::/64 dev wg0 — it duplicates the route and wg-quick aborts with "RTNETLINK answers: File exists".
PersistentKeepalive = 25 is set on the VPS even though MT dials out, because we want both sides to keep the conntrack/UDP state warm regardless of which side initiated.
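For reference, a wg0.conf shaped to match those constraints — no PostUp route, keepalive on, AllowedIPs carrying the routes. Keys and the reserved prefix are placeholders; the AllowedIPs pair mirrors what wg show reported in the validated build:

```
# /etc/wireguard/wg0.conf  (VPS side)
[Interface]
Address = fd99:9999:9999::1/64
ListenPort = 51820
PrivateKey = <VPS_PRIVKEY>
# No PostUp route: wg-quick installs routes for AllowedIPs itself.

[Peer]
# MikroTik
PublicKey = <MT_PUBKEY>
AllowedIPs = fd99:9999:9999::/64, <RES_PREFIX>::/64
PersistentKeepalive = 25
```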
The default Ubuntu 26.04 image arrives with UFW already deny-by-default for INPUT and FORWARD, allowing only SSH. Three additions cover this design:
ufw allow 51820/udp comment "WireGuard"
ufw route allow in on wg0
ufw route allow out on wg0
ufw reload
ufw route ... rules go into the FORWARD chain without requiring you to flip DEFAULT_FORWARD_POLICY to ACCEPT globally. Existing UFW ICMPv6 allows in ufw6-before-input and ufw6-before-forward cover NDP and PMTUD already.
allowed-address=::/0 on the peer means the MT will route any IPv6 destination through this peer — which is the desired behavior for a relay design.
7.2 Firewall — defconf-compatible addition
The MikroTik defconf IPv6 firewall already does the right thing for almost everything:
accepts established/related (return traffic for LAN-initiated connections);
accepts ICMPv6 in input and forward (NDP, PMTUD, ping6 reachability);
drops unsolicited inbound from non-LAN interfaces (FORWARD in-interface-list=!LAN drop).
It will, however, drop input from the WG peer (the keepalives, anything you ping6 the MT for from the VPS), because wg-cloud is not in the LAN list. One rule fixes that:
/ipv6/firewall/filter
add chain=input action=accept in-interface=wg-cloud \
comment="accept input from cloud peer (WG)" \
place-before=[find where chain=input and comment="defconf: drop everything else not coming from LAN"]
Do not put wg-cloud in the LAN interface list. That would expose all LAN-internal services to the VPS (and, transitively, to anyone who compromises the VPS).
If you specifically want LAN clients reachable from the public internet for unsolicited inbound (self-hosting, P2P with full open ports), also add:
add chain=forward action=accept in-interface=wg-cloud out-interface-list=LAN \
comment="accept new inbound from internet to LAN" \
place-before=[find where chain=forward and comment="defconf: drop everything else not coming from LAN"]
For passing test-ipv6.com and similar reachability tests, this rule is not required — the test is outbound-initiated.
7.3 RA / SLAAC
RouterOS v7 defconf includes an implicit /ipv6/nd entry that emits RAs on every interface that has a global IPv6 address. Adding <RES_PREFIX>::1/64 advertise=yes on the bridge is enough to start announcements. If a previous configuration left a disabled or modified /ipv6/nd entry on your bridge, either remove it (so the implicit default takes over) or add an explicit one:
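A plausible explicit entry (parameter names are RouterOS v7's; verify against your version, and treat the interval as the stock default rather than a tuned value):

```
/ipv6/nd
add interface=bridge advertise-dns=yes advertise-mac-address=yes ra-interval=3m20s-10m
```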
The defconf rule "defconf: accept DHCPv6-Client prefix delegation" with src-address=fe80::/10 should stay — it is link-local-scoped and harmless, and you may use DHCPv6 for something else later.
8. Hardening
8.1 SSH
The Ubuntu image arrives with PasswordAuthentication yes. On a public VPS that exposes 22/tcp, that is the only meaningful exposure once WireGuard is up.
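The drop-in applied in the validated build (reproduced from the validation report, §5.5):

```
# /etc/ssh/sshd_config.d/00-hardening.conf
PasswordAuthentication no
PermitRootLogin prohibit-password
MaxAuthTries 3
```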
The filename must sort before 50-cloud-init.conf — sshd uses first-match semantics, and cloud-init's drop-in sets PasswordAuthentication yes. A 99- prefix loses; a 00- prefix wins, and survives cloud-init regenerations on reboot.
If your laptop's ~/.ssh/config triggers "Too many authentication failures" against the server (which now enforces MaxAuthTries 3), pin the identity:
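One way to do that — the host alias and key path here are hypothetical; only IdentitiesOnly yes is the actual fix recorded in the validation report:

```
# ~/.ssh/config
Host vps-relay
    HostName <VPS_IP>
    User root
    IdentityFile ~/.ssh/id_ed25519
    IdentitiesOnly yes   # offer only this key, so MaxAuthTries is never exhausted
```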
What we deliberately did not do:
Port change for SSH — bots scan all ports; security theater.
fail2ban — once password auth is off, there is nothing to brute-force at the application layer.
Source-IP allowlist for SSH — fragile if your home IP is dynamic; if you do want this, gate it via UFW (ufw delete allow 22 ; ufw allow from <home_ip> to any port 22), not sshd.
2FA / hardware keys — overkill for a single-tenant relay; revisit if you start running anything else on the VPS.
AppArmor / SELinux tuning, auditd, kernel hardening — Ubuntu defaults are fine for this role.
unattended-upgrades is already installed and enabled by the Vultr image — leave it alone.
9. Validation checklist
# VPS
wg show # latest handshake < 3 min, transfer non-zero
ip -6 route | grep '<RES_PREFIX>'    # one entry: <RES_PREFIX>::/64 dev wg0
journalctl -u ndppd -n 20 --no-pager # no "Failed to load"
ping6 -c 2 fd99:9999:9999::2 # WG transit reachable
ping6 -c 2 <RES_PREFIX>::1 # MT LAN gateway reachable via wg0
# LAN client
ip -6 addr | grep <RES_PREFIX>       # should have an address inside the reserved /64
ping6 -c 3 2606:4700:4700::1111
curl -6 https://test-ipv6.com/json/
A 10/10 test-ipv6.com score with the source listed as the reserved /64 is the canonical pass.
10. Operational notes and troubleshooting
Symptom: LAN clients have SLAAC v6 but timeouts.
  Likely cause: ndppd not answering.
  First check: journalctl -u ndppd; look for "Failed to load" or static warnings.
Symptom: sporadic timeouts after long idle.
  Likely cause: NDP cache eviction on the Vultr gateway; ndppd's auto solicit not landing.
  First check: tcpdump -i <NIC> 'icmp6 && (ip6[40] == 135 || ip6[40] == 136)' and watch whether solicitations for LAN-client addresses get answered.
Symptom: wg show shows no handshake.
  Likely cause: UFW / MT firewall, wrong port, wrong public key.
  First check: UFW 51820/udp allow; MT firewall input accept on wg-cloud.
Symptom: latest handshake present but transfer 0 / 0.
  Likely cause: AllowedIPs mismatch.
  First check: the MT side AllowedIPs must include ::/0 (or at least the reserved /64); the VPS side must include the reserved /64.
Symptom: large pages slow / TCP stalls.
  Likely cause: PMTUD blackhole.
  First check: drop WG MTU to 1380 on both sides; verify ICMPv6 type 2 (packet too big) is allowed in defconf forward (it is).
Symptom: pings of <RES_PREFIX>::1 from the VPS work, but pings from LAN clients fail.
  Likely cause: NDP proxy missing on <NIC>, or proxy_ndp=0.
  First check: sysctl net.ipv6.conf.<NIC>.proxy_ndp should be 1; ndppd should be active.
Symptom: VPS loses its own IPv6 default route after a few hours.
  Likely cause: accept_ra=0 after enabling forwarding.
  First check: sysctl net.ipv6.conf.<NIC>.accept_ra should be 2.
Symptom: MikroTik refuses to add <RES_PREFIX>::1/64 on bridge.
  Likely cause: address already on another interface, or a duplicate from a previous attempt.
  First check: /ipv6/address/print and remove duplicates.
11. Cost
Line item                                                      Cost
Vultr Cloud Compute (smallest plan with IPv6 in your region)   $5.00 / mo
Reserved IPv6 /64                                              $3.00 / mo
Total                                                          $8.00 / mo
Bandwidth: the $5 Vultr plan includes 1 TB outbound transfer. All LAN-to-internet IPv6 traffic counts against this. If you stream a lot or run torrents on IPv6, watch the meter; egress overage is $0.01/GB on most regions as of 2026-05.
There is no separate per-GB charge for the tunnel itself; bytes count once, on the way out of the VPS.
12. Variants
12.1 Skip the reserved /64 (save $3/mo)
Use only the original on-link /64 by NDP-proxying the whole /64 and replacing the kernel's connected route with a host route for the VPS's own address plus a route for the rest via wg0. Works, but:
Fights Vultr's RA renewals; an RA refresh re-installs the connected /64 and breaks the LAN until you intervene;
Address-collision risk between VPS's SLAAC address and any LAN client's SLAAC address (vanishingly small with EUI-64, but real with privacy extensions);
Hard to recover from a routing-table glitch without console.
Not recommended. The $3/mo buys you "the design just works".
12.2 BGP a true routed /48 (save the NDP proxy entirely)
Get an ASN and a /48 (ARIN/RIPE/APNIC), enable Vultr's BGP feature, run FRR/BIRD on the VPS announcing the /48 to Vultr. The VPS becomes a real edge router for the /48. Pros: cleanest possible model, no NDP proxy at all, can scale to multiple home sites with different /64s within the /48. Cons: ~$50/yr for the prefix + ~$5–25 for an ASN + non-trivial setup time. Worth it if you plan to relay multiple sites or want resilience to Vultr deciding to deprecate Reserved IPv6.
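A shape-of-it FRR sketch. Every value here is a placeholder — take the real neighbor address and ASNs from Vultr's BGP documentation and your RIR allocation before using anything like this:

```
! /etc/frr/frr.conf (fragment)
router bgp <YOUR_ASN>
 neighbor <VULTR_BGP_NEIGHBOR> remote-as <VULTR_ASN>
 address-family ipv6 unicast
  network <YOUR_PREFIX>::/48
  neighbor <VULTR_BGP_NEIGHBOR> activate
 exit-address-family
```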
12.3 IPv4 over the same tunnel
Out of scope for this guide — the brief was IPv6-only over WG. To add it: assign IPv4 link addresses to wg0 on both sides (e.g. 10.99.99.1/30 and 10.99.99.2/30), expand AllowedIPs to include the LAN's IPv4 /24, add a SNAT/MASQUERADE on the VPS, and either bring up an IPv4 default route on MT pointing at the VPS or use mangle to selectively route subsets. Note: this swaps your home ISP's IPv4 path for the VPS's, with all the latency that implies.
12.4 Multiple home sites
Each home site gets its own reserved /64, its own WG peer, and its own AllowedIPs entry on the VPS side. ndppd's auto rule pattern can be repeated per /64. The VPS becomes a small IPv6 hub.
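The per-site repetition in ndppd.conf can be sketched as one proxy block with one rule per reserved /64 (both prefixes below are hypothetical placeholders):

```
proxy enp1s0 {
    rule <RES_PREFIX_SITE_A>::/64 {
        auto
    }
    rule <RES_PREFIX_SITE_B>::/64 {
        auto
    }
}
```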
This document records the actual build, the gotchas encountered, and the end-to-end validation captures for the IPv6-over-WG relay. It is the validation companion to the guide. The methodology is written so a second team can reproduce it without rediscovering the same gotchas — those are also catalogued in §6.
1. Goal
Validate, against a real Vultr Cloud Compute instance and a real home MikroTik, that the recipe in the guide produces:
a working WireGuard tunnel between the VPS and the MikroTik;
a Vultr-issued IPv6 /64 routed through the tunnel and reachable from the public internet;
working SLAAC on the home LAN with addresses inside that /64;
bidirectional IPv6 (LAN client to internet, internet to LAN client) without NAT66 or DHCPv6-PD;
correct NDP-proxy behavior on the VPS NIC (Vultr's gateway sees the /64 as on-link);
a 10/10 score on test-ipv6.com from a LAN client.
2. Topology under test
public IPv6 internet (Cloudflare 2606:4700:4700::1111)
│
│ ~1 ms (Vultr SGP edge)
│
┌─────────────────┴─────────────────┐
│ Vultr Cloud Compute (Ubuntu 26.04)│
│ Region: SGP │
│ enp1s0 / MAC 52:54:00:12:34:56 │
│ 2001:db8:1:0::/64 (orig) │ ← VPS itself, untouched
│ 2001:db8:1:1::/64 (res ) │ ← NDP-proxied to wg0
│ wg0: │
│ fd99:9999:9999::1/64 │
│ Public IPv4: 203.0.113.10 │
└─────────────────┬─────────────────┘
│ ~33 ms (Vultr SGP ↔ home ISP)
│ WireGuard, UDP/51820
│
┌─────────────────┴─────────────────┐
│ MikroTik RB5009UPr+S+, RouterOS │
│ 7.21.2 │
│ wg-cloud: │
│ fd99:9999:9999::2/64 │
│ bridge (LAN): │
│ 2001:db8:1:1::1/64 RA │
│ Public IPv4: 198.51.100.42 (CGNAT-adjacent, dynamic) │
└─────────────────┬─────────────────┘
│ Ethernet
│
┌─────────┴─────────┐
│ LAN clients │
│ SLAAC EUI-64 │
│ e.g. 2001:db8:1:1:1234:5678:9abc:def0 │
└───────────────────┘
3. Default IPv6 allocation
Default allocation, observed from the metadata service and from the kernel:
$ curl -s http://169.254.169.254/v1/interfaces/0/ipv6/network ; echo
2001:db8:1:0::
$ curl -s http://169.254.169.254/v1/interfaces/0/ipv6/prefix ; echo
64
$ curl -s http://169.254.169.254/v1/interfaces/0/ipv6/additional/ ; echo
(empty)
$ ip -6 route
2001:db8:1:0::/64 dev enp1s0 proto ra metric 100 pref medium
2001:db8:1:0::/64 dev enp1s0 proto kernel metric 256 pref medium
fe80::/64 dev enp1s0 proto kernel metric 256 pref medium
default via fe80::1 dev enp1s0 proto ra metric 100
Two observations critical to the design:
The /64 arrives on-link via SLAAC + RA (proto ra in the route table). There is no separately routed prefix.
The kernel installs the /64 as both a proto ra route (from the RA) and a proto kernel connected route (from the SLAAC address). Anything sharing this /64 fights the connected route — see §5.2 of the guide.
4. Reserving the second /64
Action: Vultr panel → Products → Network → Reserved IPs → Add Reserved IP, type IPv6, region SGP, label wg-lan-prefix, then Attach to Server. No reboot.
Observation immediately after attach:
$ curl -s http://169.254.169.254/v1/interfaces/0/ipv6/additional/0/network ; echo
2001:db8:1:1::
$ curl -s http://169.254.169.254/v1/interfaces/0/ipv6/additional/0/prefix ; echo
64
$ ip -6 addr show enp1s0 | grep inet6
inet6 2001:db8:1:0:5054:ff:fe12:3456/64 scope global mngtmpaddr noprefixroute
inet6 fe80::5054:ff:fe12:3456/64 scope link
Notable: Vultr did not auto-bind the reserved /64 to enp1s0. This is the desired state — see guide §6.1.
5. Configuration applied
Verbatim files from the build (private keys redacted):
51820/udp ALLOW IN Anywhere # WireGuard
51820/udp (v6) ALLOW IN Anywhere (v6) # WireGuard
Anywhere ALLOW FWD Anywhere on wg0
Anywhere on wg0 ALLOW FWD Anywhere
Anywhere (v6) ALLOW FWD Anywhere (v6) on wg0
Anywhere (v6) on wg0 ALLOW FWD Anywhere (v6)
5.5 /etc/ssh/sshd_config.d/00-hardening.conf
PasswordAuthentication no
PermitRootLogin prohibit-password
MaxAuthTries 3
Filename 00-hardening.conf chosen so it sorts before 50-cloud-init.conf (sshd uses first-match).
5.6 MikroTik delta (RouterOS 7.21.2)
/interface/wireguard
add name=wg-cloud listen-port=13231 mtu=1420 private-key="<REDACTED>"
/interface/wireguard/peers
add interface=wg-cloud name=vultr \
public-key="RXSlcVkVSWuwTC57VIxluWvZXb2402Bf2sYo66TOWyY=" \
endpoint-address=203.0.113.10 endpoint-port=51820 \
allowed-address=::/0 \
persistent-keepalive=25s
/ipv6/address
add address=fd99:9999:9999::2/64 advertise=no interface=wg-cloud
add address=2001:db8:1:1::1/64 advertise=yes interface=bridge
/ipv6/route
add dst-address=::/0 gateway=fd99:9999:9999::1
/ipv6/firewall/filter
add chain=input action=accept in-interface=wg-cloud \
comment="accept input from cloud peer (WG)" \
place-before=[find where chain=input and comment="defconf: drop everything else not coming from LAN"]
6. Gotchas encountered
6.1 wg-quick aborted on duplicate route
First wg0.conf included a PostUp = ip -6 route add 2001:db8:1:1::/64 dev wg0. wg-quick already adds routes for AllowedIPs, so the explicit PostUp produced:
[#] ip -6 route add 2001:db8:1:1::/64 dev wg0
[#] ip -6 route add 2001:db8:1:1::/64 dev wg0
RTNETLINK answers: File exists
[#] ip link delete dev wg0
wg-quick@wg0.service: Main process exited, status=2/INVALIDARGUMENT
Fix: remove the PostUp / PostDown. Let wg-quick install routes for AllowedIPs.
6.2 ndppd static rule rejected
First ndppd.conf used static for the /64. The 0.2.4 binary logs:
(notice) ndppd (NDP Proxy Daemon) version 0.2.4
(notice) Using configuration file '/etc/ndppd.conf'
(error) Failed to load configuration file '/etc/ndppd.conf'
Running ndppd in foreground reveals the underlying warning:
(warning) Low prefix length (64 <= 120) when using 'static' method
Fix: replace static with auto. Same effect (since the /64 routes via wg0), no warning, config loads. Confirmed:
20:42:13.868799 IP6 fe80::1 > ff02::1:ff58:2da0:
ICMP6, neighbor solicitation, who has 2001:db8:1:1:aaaa:bbbb:cccc:dddd
20:42:13.869286 IP6 fe80::5054:ff:fe12:3456 > fe80::1:
ICMP6, neighbor advertisement, tgt is 2001:db8:1:1:aaaa:bbbb:cccc:dddd
(Vultr GW asks for the LAN client's address; VPS replies with its own MAC. Round-trip < 0.5 ms.)
After MaxAuthTries 3 was applied, SSH from a laptop with multiple keys in the agent failed:
Received disconnect from 203.0.113.10 port 22:2: Too many authentication failures
Fix on client side: pin the identity with IdentitiesOnly yes in ~/.ssh/config so the right key is offered first.
6.5 First ping was a false positive
When ndppd was briefly in foreground for diagnostics, Vultr's gateway populated its NDP cache for the test address. The first MikroTik-sourced ping that succeeded after that briefly looked like the system was working — it was actually riding the cache for ~30 s. Real validation requires ndppd to be running as a service, not a foreground diagnostic, and a fresh test after several minutes of idle.
7. Methodology
Tests, in order:
WG handshake — wg show on VPS shows a recent handshake and non-zero transfer.
WG transit reachable — VPS pings MT's wg0 address (fd99:9999:9999::2).
LAN gateway reachable via wg0 — VPS pings MT's bridge address (2001:db8:1:1::1).
MT-sourced ping6 to public — from MT, /ping 2606:4700:4700::1111 src-address=2001:db8:1:1::1 exercises the outbound path and the return-path NDP proxy in one shot.
NDP proxy capture — tcpdump on enp1s0 for ICMPv6 type 135/136 confirms ndppd answers Vultr's solicitations for live LAN-client SLAAC addresses.
End-to-end from a real LAN client — ping6 and curl -6 from a host that obtained a SLAAC address from RA on the bridge.
test-ipv6.com — public reachability + IPv6-default-by-browser score.
Tests 4–7 were run after waiting > 60 s post ndppd reload to ensure the Vultr-GW NDP cache had expired and the proxy answers were genuinely fresh.
8. Results
8.1 WireGuard handshake
$ wg show
interface: wg0
public key: RXSlcVkVSWuwTC57VIxluWvZXb2402Bf2sYo66TOWyY=
listening port: 51820
peer: wkXldaIVJnvWHxcEWTR7Ne+8MznRkPgU89na2x2Ga2E=
endpoint: 198.51.100.42:8075
allowed ips: fd99:9999:9999::/64, 2001:db8:1:1::/64
latest handshake: 22 seconds ago
transfer: 557.78 KiB received, 1.33 KiB sent
persistent keepalive: every 25 seconds
8075 is the ephemeral source port the MT chose for the outbound dial; the VPS listens on 51820. Asymmetric (outbound) initiation working as designed.
8.2 WG transit and MT LAN gateway reachable
$ ping6 -c 3 fd99:9999:9999::2
64 bytes from fd99:9999:9999::2: icmp_seq=1 ttl=64 time=33.0 ms
64 bytes from fd99:9999:9999::2: icmp_seq=2 ttl=64 time=32.9 ms
64 bytes from fd99:9999:9999::2: icmp_seq=3 ttl=64 time=32.7 ms
3 packets transmitted, 3 received, 0% packet loss
$ ping6 -c 3 2001:db8:1:1::1
64 bytes from 2001:db8:1:1::1: icmp_seq=1 ttl=64 time=32.6 ms
64 bytes from 2001:db8:1:1::1: icmp_seq=2 ttl=64 time=32.7 ms
64 bytes from 2001:db8:1:1::1: icmp_seq=3 ttl=64 time=32.8 ms
3 packets transmitted, 3 received, 0% packet loss
$ ping6 -c 2 2606:4700:4700::1111 # VPS itself, for sanity
64 bytes from 2606:4700:4700::1111: icmp_seq=1 ttl=58 time=1.24 ms
64 bytes from 2606:4700:4700::1111: icmp_seq=2 ttl=58 time=1.44 ms
Tunnel adds ~33 ms (VPS-to-MT WAN RTT). Cloudflare from the VPS is ~1.3 ms (Vultr SGP edge).
8.3 MT-sourced ping6 to public
hlim=57 on the replies, against a typical initial hop limit of 64, means the ICMP reply traversed roughly 7 hops on the return path — consistent with Cloudflare being one hop from Vultr SGP and the rest of the count being the WG tunnel and the local LAN.
8.4 ndppd answering real LAN clients
15-second tcpdump on enp1s0, while no induced traffic was generated:
$ tcpdump -i enp1s0 -nn -l 'icmp6 && (ip6[40] == 135 || ip6[40] == 136)'
20:42:03.516376 fe80::1 > fe80::5054:ff:fe12:3456
NS who has fe80::5054:ff:fe12:3456
20:42:03.516517 fe80::5054:ff:fe12:3456 > fe80::1
NA tgt is fe80::5054:ff:fe12:3456
20:42:13.868799 fe80::1 > ff02::1:ff58:2da0
NS who has 2001:db8:1:1:aaaa:bbbb:cccc:dddd
20:42:13.869286 fe80::5054:ff:fe12:3456 > fe80::1
NA tgt is 2001:db8:1:1:aaaa:bbbb:cccc:dddd
Reply latency Vultr-GW → VPS-NIC → ndppd → reply: ~487 µs for the proxied target. Multiple distinct LAN-client SLAAC addresses were observed during the capture, confirming that more than one device on the home LAN had auto-configured from the reserved /64.
8.5 LAN client reachability
From a macOS LAN client that received SLAAC address 2001:db8:1:1:1234:5678:9abc:def0:
$ ping6 2606:4700:4700::1111
PING6(56=40+8+8 bytes) 2001:db8:1:1:1234:5678:9abc:def0 --> 2606:4700:4700::1111
16 bytes from 2606:4700:4700::1111, icmp_seq=0 hlim=56 time=36.389 ms
16 bytes from 2606:4700:4700::1111, icmp_seq=1 hlim=56 time=37.589 ms
16 bytes from 2606:4700:4700::1111, icmp_seq=2 hlim=56 time=37.401 ms
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 36.389/37.126/37.589/0.527 ms
hlim=56 (vs hlim=57 from the MikroTik in §8.3) — one extra hop, as expected (the LAN client → MT → wg0 → ... vs MT → wg0 → ...).
8.6 test-ipv6.com
From the same LAN client:
Check                               Result   RTT
Reach IPv4-only websites            ✓        119 ms
Reach IPv6-only websites            ✓        157 ms
Access modern dual-stack websites   ✓        157 ms
Transfer large packets over IPv6    ✓        192 ms
Handle large data transfers well    ✓        201 ms
Browser uses IPv6 by default        ✓        262 ms
Resolve IPv6-only domain names      ✓        219 ms
IPv6 score: 10/10. IPv4 score: 10/10. Source IPv6 listed by the test as 2001:db8:1:1:1234:5678:9abc:def0, AS owner Vultr (Singapore). Source IPv4 listed as the home ISP's residential public IP.
The "transfer large packets" pass at MTU 1420 confirms that PMTUD is functional; if it were blackholed, that test would fail or stall.
9. Caveats
Single region of test: Vultr SGP only. Other regions are likely identical from a network-model standpoint, but enp1s0 could be ens3 on older Vultr images (pre-predictable-naming). Adjust the sysctl path and ndppd proxy <NIC> accordingly.
Single home WAN: tested over an IPv4-only residential connection. Behind CGNAT or behind aggressive ISP-side UDP filtering, the WG dial-out may need a different port.
ndppd 0.2.4 binary on Ubuntu 26.04: the auto workaround for the static rejection is what we tested. If a future Ubuntu ships ndppd 0.2.5+ with the warning downgraded, static would also work.
Reserved IPv6 detachment: Vultr does not document a per-region SLA for Reserved IPv6 attachment. If the reserved /64 is ever detached (rebuild, region migration, panel mishap), the LAN loses public IPv6 until reattached and ndppd restarted — the existing config does not need to be edited.
MTU 1420 is conservative: with WG over IPv4, the absolute minimum overhead is 60 bytes, so 1440 is theoretically usable. 1420 leaves headroom in case the outer path is ever carried over IPv6, which costs 80 bytes.
IPv4 path unchanged: the LAN's IPv4 still goes out the home ISP. Anything that depends on IPv4 source IP from a specific country still does so. Only IPv6 traffic is "from Singapore".
10. Cleanup
If you want to revert the build:
# VPS
systemctl disable --now wg-quick@wg0 ndppd
rm -f /etc/wireguard/wg0.conf /etc/wireguard/server.* /etc/wireguard/mikrotik.* /etc/ndppd.conf
rm -f /etc/sysctl.d/99-wg-relay.conf /etc/ssh/sshd_config.d/00-hardening.conf
sysctl --system >/dev/null
ufw delete allow 51820/udp
ufw route delete allow in on wg0
ufw route delete allow out on wg0
ufw reload
In the Vultr panel, detach the reserved /64 from the instance, then delete the reserved IP if you want to stop the $3/mo charge.
11. Assessment
The recipe in the guide applies cleanly with two corrections that are now folded back into the guide: (a) drop the explicit PostUp route in wg0.conf (wg-quick already does it via AllowedIPs), and (b) use auto not static in /etc/ndppd.conf.
A 10/10 test-ipv6.com score with a Vultr-sourced IPv6 from a behind-the-tunnel LAN client is achievable in well under an hour of work end-to-end.
The single biggest design decision is the Reserved IPv6 /64. At $3/mo, it converts a hostile routing problem (sharing one on-link /64) into a clean one (a second on-link /64 with a non-conflicting routing table). Worth the $3.
Hardening footprint is minimal: only sshd actually needs touching. UFW is already restrictive in the Vultr image, and unattended-upgrades is on by default.