IPv6 forwarding from Internet to WireGuard peers?

To use public IPv6 over WireGuard, I assigned each peer a public IPv6 address from the prefix my VPS provider routes to my server.
I want the peers to be reachable from the Internet via those addresses, but everything seems to work except the Internet → peer direction.

  1. Peers can ping each other within the WireGuard subnet.
  2. The server can ping the peers.
  3. Peers can browse the Internet using their WireGuard IPv6 addresses.
  4. A host with public IPv6 (a peer disconnected from WG) cannot ping a WG peer.
  5. traceroute for the above stops at the server.
  6. Nothing changed after `nft flush ruleset`, so I guess the packets aren’t being blocked by the firewall.

P.S. I’m using NetworkManager’s WireGuard integration on GNOME; the configs are not loaded with wg-quick.

Configs:

Server (peer 1) WireGuard
[Interface]
Address = 10.0.0.1/24, ipv6:public::a/64
PrivateKey = 
ListenPort = 51820

PostUp = sysctl net.ipv4.conf.eth0.forwarding=1
PostUp = sysctl net.ipv4.conf.wg-server.forwarding=1
PostUp = sysctl net.ipv6.conf.all.forwarding=1

PreDown = sysctl net.ipv4.conf.eth0.forwarding=0
PreDown = sysctl net.ipv4.conf.wg-server.forwarding=0
PreDown = sysctl net.ipv6.conf.all.forwarding=0

[Peer]
PublicKey = 
AllowedIPs = 10.0.0.2/32, ipv6:public::b/128

[Peer]
PublicKey = 
AllowedIPs = 10.0.0.3/32, ipv6:public::c/128
Server `firewalld`
# WireGuard listen port
firewall-cmd --permanent --zone=FedoraServer --add-service=wireguard

# New zone to use in policy to filter forwarding
# Basically don't want intra-forwarding on public interface 
firewall-cmd --permanent --new-zone=VPN
firewall-cmd --permanent --zone=VPN --add-interface=wg-server
firewall-cmd --permanent --zone=VPN --add-service=dns

# WG to Internet
firewall-cmd --permanent --new-policy vpnforward
firewall-cmd --permanent --policy vpnforward --add-ingress-zone VPN
firewall-cmd --permanent --policy vpnforward --add-egress-zone FedoraServer
firewall-cmd --permanent --policy vpnforward --add-rich-rule 'rule family="ipv4" source address="10.0.0.0/24" accept'
firewall-cmd --permanent --policy vpnforward --add-rich-rule 'rule family="ipv6" source address="ipv6:public::/64" accept'

# IPv4 masquerade
firewall-cmd --permanent --policy vpnforward --add-masquerade

# WG intra-zone forwarding for between peers
firewall-cmd --permanent --zone=VPN --add-forward

# Internet to WG (allow connect)
firewall-cmd --permanent --new-policy vpnbackward
firewall-cmd --permanent --policy vpnbackward --add-ingress-zone FedoraServer
firewall-cmd --permanent --policy vpnbackward --add-egress-zone VPN
firewall-cmd --permanent --policy vpnbackward --add-rich-rule 'rule family="ipv6" destination address="ipv6:public::/64" accept'
Peer 2
#2
[Interface]
PrivateKey = 
Address = 10.0.0.2/24, ipv6:public::b/64
DNS = 10.0.0.1

#1 server
[Peer]
PublicKey = 
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = ipv4.public:51820
Peer 3
#3
[Interface]
Address = 10.0.0.3/24, ipv6:public::c/64
PrivateKey = 
ListenPort = 51820
DNS = ::1

#1
[Peer]
PublicKey = 
AllowedIPs = 10.0.0.1/24, ipv6:public::a/128, 10.0.0.2/24, ipv6:public::b/128
Endpoint = [server:slaac:ipv6]:51820

Server:

firewall-cmd --policy vpnbackward --add-rich-rule 'rule family="ipv6" masquerade'

Masquerading traffic from the Internet as the server makes it work, even though there is no sensible reason why it should.
I also saw a high rx_frame_errors count in /sys/class/net/wg/statistics/.
Better advice is welcome; I don’t like seeing masquerade in IPv6…

tcpdump on both the server and the peer shows that the server’s WG interface receives the ping packets but doesn’t forward them; the peer’s WG interface stays quiet. However, pings from the server are received.

You should post the runtime configuration, including routing, firewall, and VPN configs from both the server and the client.
I have a similar dual-stack WireGuard setup on a Fedora 38 VPS working just fine.

Here's a dump of server runtime configurations

# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f2:3c:93:d4:89:66 brd ff:ff:ff:ff:ff:ff
    inet 111.222.333.444/24 brd 111.222.333.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 aaaa:bbbb::cccc:dddd:eeee:ffff/64 scope global dynamic noprefixroute 
       valid_lft 5376sec preferred_lft 1776sec
    inet6 fe80::f03c:93ff:fed4:8966/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: wg-server: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet 10.0.0.1/24 scope global wg-server
       valid_lft forever preferred_lft forever
    inet6 1000:2000:3000:4000::a/64 scope global 
       valid_lft forever preferred_lft forever

# ip -4 route show table all
default via 111.222.333.1 dev eth0 proto static metric 100 
10.0.0.0/24 dev wg-server proto kernel scope link src 10.0.0.1 
111.222.333.0/24 dev eth0 proto kernel scope link src 111.222.333.444 metric 100 
local 10.0.0.1 dev wg-server table local proto kernel scope host src 10.0.0.1 
broadcast 10.0.0.255 dev wg-server table local proto kernel scope link src 10.0.0.1 
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1 
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1 
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1 
local 111.222.333.444 dev eth0 table local proto kernel scope host src 111.222.333.444 
broadcast 111.222.333.255 dev eth0 table local proto kernel scope link src 111.222.333.444 

# ip -4 rule show
0:	from all lookup local
32766:	from all lookup main
32767:	from all lookup default

# ip -6 route show table all
::1 dev lo proto kernel metric 256 pref medium
1000:2000:3000:4000::/64 dev wg-server proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 1024 pref medium
default via fe80::1 dev eth0 proto ra metric 100 pref medium
local ::1 dev lo table local proto kernel metric 0 pref medium
anycast aaaa:bbbb:: dev eth0 table local proto kernel metric 0 pref medium
local aaaa:bbbb::cccc:dddd:eeee:ffff dev eth0 table local proto kernel metric 0 pref medium
anycast 1000:2000:3000:4000:: dev wg-server table local proto kernel metric 0 pref medium
local 1000:2000:3000:4000::a dev wg-server table local proto kernel metric 0 pref medium
anycast fe80:: dev eth0 table local proto kernel metric 0 pref medium
local fe80::f03c:93ff:fed4:8966 dev eth0 table local proto kernel metric 0 pref medium
multicast ff00::/8 dev eth0 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev wg-server table local proto kernel metric 256 pref medium

# ip -6 rule show
0:	from all lookup local
32766:	from all lookup main

# wg
interface: wg-server
  public key: (key redacted)
  private key: (hidden)
  listening port: 51820

peer: (key redacted)
  endpoint: (ip redacted):51820
  allowed ips: 10.0.0.2/32, 1000:2000:3000:4000::b/128
  latest handshake: 4 hours, 40 minutes, 24 seconds ago
  transfer: 222.76 KiB received, 105.16 KiB sent

peer: (key redacted)
  endpoint: (ip redacted):51820
  allowed ips: 10.0.0.3/32, 1000:2000:3000:4000::c/128
  latest handshake: 5 hours, 25 minutes, 40 seconds ago
  transfer: 202.68 KiB received, 2.00 MiB sent

# ip netconf
inet lo forwarding on rp_filter loose mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet eth0 forwarding on rp_filter loose mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet wg-server forwarding on rp_filter loose mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet all forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet default forwarding on rp_filter loose mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 lo forwarding on mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 eth0 forwarding on mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 wg-server forwarding on mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 all forwarding on mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 default forwarding on mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 

# iptables-save -c

# nft list ruleset
table inet firewalld {
	chain mangle_PREROUTING {
		type filter hook prerouting priority mangle + 10; policy accept;
		jump mangle_PREROUTING_ZONES
	}

	chain mangle_PREROUTING_POLICIES_pre {
		jump mangle_PRE_policy_allow-host-ipv6
	}

	chain mangle_PREROUTING_ZONES {
		iifname "eth0" goto mangle_PRE_FedoraServer
		iifname "wg-server" goto mangle_PRE_VPN
		goto mangle_PRE_FedoraServer
	}

	chain mangle_PREROUTING_POLICIES_post {
	}

	chain nat_PREROUTING {
		type nat hook prerouting priority dstnat + 10; policy accept;
		jump nat_PREROUTING_ZONES
	}

	chain nat_PREROUTING_POLICIES_pre {
		jump nat_PRE_policy_allow-host-ipv6
	}

	chain nat_PREROUTING_ZONES {
		iifname "eth0" goto nat_PRE_FedoraServer
		iifname "wg-server" goto nat_PRE_VPN
		goto nat_PRE_FedoraServer
	}

	chain nat_PREROUTING_POLICIES_post {
	}

	chain nat_POSTROUTING {
		type nat hook postrouting priority srcnat + 10; policy accept;
		jump nat_POSTROUTING_ZONES
	}

	chain nat_POSTROUTING_POLICIES_pre {
		iifname "eth0" oifname "wg-server" jump nat_POST_policy_vpnbackward
		iifname "wg-server" oifname "eth0" jump nat_POST_policy_vpnforward
	}

	chain nat_POSTROUTING_ZONES {
		oifname "eth0" goto nat_POST_FedoraServer
		oifname "wg-server" goto nat_POST_VPN
		goto nat_POST_FedoraServer
	}

	chain nat_POSTROUTING_POLICIES_post {
	}

	chain nat_OUTPUT {
		type nat hook output priority -90; policy accept;
		jump nat_OUTPUT_POLICIES_pre
		jump nat_OUTPUT_POLICIES_post
	}

	chain nat_OUTPUT_POLICIES_pre {
	}

	chain nat_OUTPUT_POLICIES_post {
	}

	chain filter_PREROUTING {
		type filter hook prerouting priority filter + 10; policy accept;
		icmpv6 type { nd-router-advert, nd-neighbor-solicit } accept
		meta nfproto ipv6 fib saddr . mark . iif oif missing drop
	}

	chain filter_INPUT {
		type filter hook input priority filter + 10; policy accept;
		ct state { established, related } accept
		ct status dnat accept
		iifname "lo" accept
		ct state invalid drop
		jump filter_INPUT_ZONES
		reject with icmpx admin-prohibited
	}

	chain filter_FORWARD {
		type filter hook forward priority filter + 10; policy accept;
		ct state { established, related } accept
		ct status dnat accept
		iifname "lo" accept
		ct state invalid drop
		ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 addr-unreachable
		jump filter_FORWARD_ZONES
		reject with icmpx admin-prohibited
	}

	chain filter_OUTPUT {
		type filter hook output priority filter + 10; policy accept;
		ct state { established, related } accept
		oifname "lo" accept
		ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 addr-unreachable
		jump filter_OUTPUT_POLICIES_pre
		jump filter_OUTPUT_POLICIES_post
	}

	chain filter_INPUT_POLICIES_pre {
		jump filter_IN_policy_allow-host-ipv6
	}

	chain filter_INPUT_ZONES {
		iifname "eth0" goto filter_IN_FedoraServer
		iifname "wg-server" goto filter_IN_VPN
		goto filter_IN_FedoraServer
	}

	chain filter_INPUT_POLICIES_post {
	}

	chain filter_FORWARD_POLICIES_pre {
		iifname "eth0" oifname "wg-server" jump filter_FWD_policy_vpnbackward
		iifname "wg-server" oifname "eth0" jump filter_FWD_policy_vpnforward
	}

	chain filter_FORWARD_ZONES {
		iifname "eth0" goto filter_FWD_FedoraServer
		iifname "wg-server" goto filter_FWD_VPN
		goto filter_FWD_FedoraServer
	}

	chain filter_FORWARD_POLICIES_post {
	}

	chain filter_OUTPUT_POLICIES_pre {
	}

	chain filter_OUTPUT_POLICIES_post {
	}

	chain filter_IN_VPN {
		jump filter_INPUT_POLICIES_pre
		jump filter_IN_VPN_pre
		jump filter_IN_VPN_log
		jump filter_IN_VPN_deny
		jump filter_IN_VPN_allow
		jump filter_IN_VPN_post
		jump filter_INPUT_POLICIES_post
		meta l4proto { icmp, ipv6-icmp } accept
		reject with icmpx admin-prohibited
	}

	chain filter_IN_VPN_pre {
	}

	chain filter_IN_VPN_log {
	}

	chain filter_IN_VPN_deny {
	}

	chain filter_IN_VPN_allow {
		tcp dport 53 accept
		udp dport 53 accept
	}

	chain filter_IN_VPN_post {
	}

	chain nat_POST_VPN {
		jump nat_POSTROUTING_POLICIES_pre
		jump nat_POST_VPN_pre
		jump nat_POST_VPN_log
		jump nat_POST_VPN_deny
		jump nat_POST_VPN_allow
		jump nat_POST_VPN_post
		jump nat_POSTROUTING_POLICIES_post
	}

	chain nat_POST_VPN_pre {
	}

	chain nat_POST_VPN_log {
	}

	chain nat_POST_VPN_deny {
	}

	chain nat_POST_VPN_allow {
	}

	chain nat_POST_VPN_post {
	}

	chain filter_FWD_VPN {
		jump filter_FORWARD_POLICIES_pre
		jump filter_FWD_VPN_pre
		jump filter_FWD_VPN_log
		jump filter_FWD_VPN_deny
		jump filter_FWD_VPN_allow
		jump filter_FWD_VPN_post
		jump filter_FORWARD_POLICIES_post
		reject with icmpx admin-prohibited
	}

	chain filter_FWD_VPN_pre {
	}

	chain filter_FWD_VPN_log {
	}

	chain filter_FWD_VPN_deny {
	}

	chain filter_FWD_VPN_allow {
		oifname "wg-server" accept
	}

	chain filter_FWD_VPN_post {
	}

	chain nat_PRE_VPN {
		jump nat_PREROUTING_POLICIES_pre
		jump nat_PRE_VPN_pre
		jump nat_PRE_VPN_log
		jump nat_PRE_VPN_deny
		jump nat_PRE_VPN_allow
		jump nat_PRE_VPN_post
		jump nat_PREROUTING_POLICIES_post
	}

	chain nat_PRE_VPN_pre {
	}

	chain nat_PRE_VPN_log {
	}

	chain nat_PRE_VPN_deny {
	}

	chain nat_PRE_VPN_allow {
	}

	chain nat_PRE_VPN_post {
	}

	chain mangle_PRE_VPN {
		jump mangle_PREROUTING_POLICIES_pre
		jump mangle_PRE_VPN_pre
		jump mangle_PRE_VPN_log
		jump mangle_PRE_VPN_deny
		jump mangle_PRE_VPN_allow
		jump mangle_PRE_VPN_post
		jump mangle_PREROUTING_POLICIES_post
	}

	chain mangle_PRE_VPN_pre {
	}

	chain mangle_PRE_VPN_log {
	}

	chain mangle_PRE_VPN_deny {
	}

	chain mangle_PRE_VPN_allow {
	}

	chain mangle_PRE_VPN_post {
	}

	chain filter_IN_FedoraServer {
		jump filter_INPUT_POLICIES_pre
		jump filter_IN_FedoraServer_pre
		jump filter_IN_FedoraServer_log
		jump filter_IN_FedoraServer_deny
		jump filter_IN_FedoraServer_allow
		jump filter_IN_FedoraServer_post
		jump filter_INPUT_POLICIES_post
		meta l4proto { icmp, ipv6-icmp } accept
		reject with icmpx admin-prohibited
	}

	chain filter_IN_FedoraServer_pre {
	}

	chain filter_IN_FedoraServer_log {
	}

	chain filter_IN_FedoraServer_deny {
	}

	chain filter_IN_FedoraServer_allow {
		tcp dport 22 accept
		ip6 daddr fe80::/64 udp dport 546 accept
		tcp dport 9090 accept
		udp dport 51820 accept
	}

	chain filter_IN_FedoraServer_post {
	}

	chain nat_POST_FedoraServer {
		jump nat_POSTROUTING_POLICIES_pre
		jump nat_POST_FedoraServer_pre
		jump nat_POST_FedoraServer_log
		jump nat_POST_FedoraServer_deny
		jump nat_POST_FedoraServer_allow
		jump nat_POST_FedoraServer_post
		jump nat_POSTROUTING_POLICIES_post
	}

	chain nat_POST_FedoraServer_pre {
	}

	chain nat_POST_FedoraServer_log {
	}

	chain nat_POST_FedoraServer_deny {
	}

	chain nat_POST_FedoraServer_allow {
	}

	chain nat_POST_FedoraServer_post {
	}

	chain filter_FWD_FedoraServer {
		jump filter_FORWARD_POLICIES_pre
		jump filter_FWD_FedoraServer_pre
		jump filter_FWD_FedoraServer_log
		jump filter_FWD_FedoraServer_deny
		jump filter_FWD_FedoraServer_allow
		jump filter_FWD_FedoraServer_post
		jump filter_FORWARD_POLICIES_post
		reject with icmpx admin-prohibited
	}

	chain filter_FWD_FedoraServer_pre {
	}

	chain filter_FWD_FedoraServer_log {
	}

	chain filter_FWD_FedoraServer_deny {
	}

	chain filter_FWD_FedoraServer_allow {
	}

	chain filter_FWD_FedoraServer_post {
	}

	chain nat_PRE_FedoraServer {
		jump nat_PREROUTING_POLICIES_pre
		jump nat_PRE_FedoraServer_pre
		jump nat_PRE_FedoraServer_log
		jump nat_PRE_FedoraServer_deny
		jump nat_PRE_FedoraServer_allow
		jump nat_PRE_FedoraServer_post
		jump nat_PREROUTING_POLICIES_post
	}

	chain nat_PRE_FedoraServer_pre {
	}

	chain nat_PRE_FedoraServer_log {
	}

	chain nat_PRE_FedoraServer_deny {
	}

	chain nat_PRE_FedoraServer_allow {
	}

	chain nat_PRE_FedoraServer_post {
	}

	chain mangle_PRE_FedoraServer {
		jump mangle_PREROUTING_POLICIES_pre
		jump mangle_PRE_FedoraServer_pre
		jump mangle_PRE_FedoraServer_log
		jump mangle_PRE_FedoraServer_deny
		jump mangle_PRE_FedoraServer_allow
		jump mangle_PRE_FedoraServer_post
		jump mangle_PREROUTING_POLICIES_post
	}

	chain mangle_PRE_FedoraServer_pre {
	}

	chain mangle_PRE_FedoraServer_log {
	}

	chain mangle_PRE_FedoraServer_deny {
	}

	chain mangle_PRE_FedoraServer_allow {
	}

	chain mangle_PRE_FedoraServer_post {
	}

	chain filter_IN_policy_allow-host-ipv6 {
		jump filter_IN_policy_allow-host-ipv6_pre
		jump filter_IN_policy_allow-host-ipv6_log
		jump filter_IN_policy_allow-host-ipv6_deny
		jump filter_IN_policy_allow-host-ipv6_allow
		jump filter_IN_policy_allow-host-ipv6_post
	}

	chain filter_IN_policy_allow-host-ipv6_pre {
	}

	chain filter_IN_policy_allow-host-ipv6_log {
	}

	chain filter_IN_policy_allow-host-ipv6_deny {
	}

	chain filter_IN_policy_allow-host-ipv6_allow {
		icmpv6 type nd-neighbor-advert accept
		icmpv6 type nd-neighbor-solicit accept
		icmpv6 type nd-router-advert accept
		icmpv6 type nd-redirect accept
	}

	chain filter_IN_policy_allow-host-ipv6_post {
	}

	chain nat_PRE_policy_allow-host-ipv6 {
		jump nat_PRE_policy_allow-host-ipv6_pre
		jump nat_PRE_policy_allow-host-ipv6_log
		jump nat_PRE_policy_allow-host-ipv6_deny
		jump nat_PRE_policy_allow-host-ipv6_allow
		jump nat_PRE_policy_allow-host-ipv6_post
	}

	chain nat_PRE_policy_allow-host-ipv6_pre {
	}

	chain nat_PRE_policy_allow-host-ipv6_log {
	}

	chain nat_PRE_policy_allow-host-ipv6_deny {
	}

	chain nat_PRE_policy_allow-host-ipv6_allow {
	}

	chain nat_PRE_policy_allow-host-ipv6_post {
	}

	chain mangle_PRE_policy_allow-host-ipv6 {
		jump mangle_PRE_policy_allow-host-ipv6_pre
		jump mangle_PRE_policy_allow-host-ipv6_log
		jump mangle_PRE_policy_allow-host-ipv6_deny
		jump mangle_PRE_policy_allow-host-ipv6_allow
		jump mangle_PRE_policy_allow-host-ipv6_post
	}

	chain mangle_PRE_policy_allow-host-ipv6_pre {
	}

	chain mangle_PRE_policy_allow-host-ipv6_log {
	}

	chain mangle_PRE_policy_allow-host-ipv6_deny {
	}

	chain mangle_PRE_policy_allow-host-ipv6_allow {
	}

	chain mangle_PRE_policy_allow-host-ipv6_post {
	}

	chain filter_FWD_policy_vpnbackward {
		jump filter_FWD_policy_vpnbackward_pre
		jump filter_FWD_policy_vpnbackward_log
		jump filter_FWD_policy_vpnbackward_deny
		jump filter_FWD_policy_vpnbackward_allow
		jump filter_FWD_policy_vpnbackward_post
	}

	chain filter_FWD_policy_vpnbackward_pre {
	}

	chain filter_FWD_policy_vpnbackward_log {
	}

	chain filter_FWD_policy_vpnbackward_deny {
	}

	chain filter_FWD_policy_vpnbackward_allow {
		ip6 daddr 1000:2000:3000:4000::/64 accept
	}

	chain filter_FWD_policy_vpnbackward_post {
	}

	chain nat_POST_policy_vpnbackward {
		jump nat_POST_policy_vpnbackward_pre
		jump nat_POST_policy_vpnbackward_log
		jump nat_POST_policy_vpnbackward_deny
		jump nat_POST_policy_vpnbackward_allow
		jump nat_POST_policy_vpnbackward_post
	}

	chain nat_POST_policy_vpnbackward_pre {
	}

	chain nat_POST_policy_vpnbackward_log {
	}

	chain nat_POST_policy_vpnbackward_deny {
	}

	chain nat_POST_policy_vpnbackward_allow {
	}

	chain nat_POST_policy_vpnbackward_post {
	}

	chain filter_FWD_policy_vpnforward {
		jump filter_FWD_policy_vpnforward_pre
		jump filter_FWD_policy_vpnforward_log
		jump filter_FWD_policy_vpnforward_deny
		jump filter_FWD_policy_vpnforward_allow
		jump filter_FWD_policy_vpnforward_post
	}

	chain filter_FWD_policy_vpnforward_pre {
	}

	chain filter_FWD_policy_vpnforward_log {
	}

	chain filter_FWD_policy_vpnforward_deny {
	}

	chain filter_FWD_policy_vpnforward_allow {
		ip saddr 10.0.0.0/24 accept
		ip6 saddr 1000:2000:3000:4000::/64 accept
	}

	chain filter_FWD_policy_vpnforward_post {
	}

	chain nat_POST_policy_vpnforward {
		jump nat_POST_policy_vpnforward_pre
		jump nat_POST_policy_vpnforward_log
		jump nat_POST_policy_vpnforward_deny
		jump nat_POST_policy_vpnforward_allow
		jump nat_POST_policy_vpnforward_post
	}

	chain nat_POST_policy_vpnforward_pre {
	}

	chain nat_POST_policy_vpnforward_log {
	}

	chain nat_POST_policy_vpnforward_deny {
	}

	chain nat_POST_policy_vpnforward_allow {
		meta nfproto ipv4 oifname != "lo" masquerade
	}

	chain nat_POST_policy_vpnforward_post {
	}
}

The server’s public IPs and keys are sanitized. Public IPv4 replaced with 111.222.333.444, public IPv6 aaaa:bbbb::cccc:dddd:eeee:ffff, routed prefix 1000:2000:3000:4000::.
I’m not sure what to look for here. IPv6 forwarding without NAT just needs an accept rule and a route, right?

chain filter_FWD_policy_vpnbackward_allow {
		ip6 daddr 1000:2000:3000:4000::/64 accept
	}
1000:2000:3000:4000::/64 dev wg-server proto kernel metric 256 pref medium

I can’t seem to find the problem, though I’m unfamiliar with ip route, and the nft rules generated by firewalld are a lot…


I think the problem has to do with the fact that you have only one /64 prefix. As soon as you have different interfaces, you have to perform prefix delegation to get a second /64 prefix; your provider should allow this and give you a /48 or /56 prefix. The problem is that when a connection comes in from the Internet, the server does a neighbour solicitation, but the WireGuard interfaces do not respond, so the server does not know in which direction the packet has to go.
The only things that can help are, indeed, IPv6 NAT, or hard-wiring routes to the individual client IPv6 addresses on wg0. See also the explanation for the “ndppd” program.
If you do not use SLAAC autoconfiguration, you can also split the /64 into e.g. /68s, routing one to the LAN and a second to wg0. This should work too, but you have to configure the IPv6 addresses manually on both the LAN and wg0.
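The two workarounds above can be sketched with plain ip commands. This is only a sketch, assuming the sanitized prefix 1000:2000:3000:4000::/64, the interface names eth0/wg-server from the dumps in this thread, and the peer addresses ::b and ::c:

```shell
# Option 1: NDP proxying, for the case where the prefix is on-link on
# eth0 and the upstream router neighbour-solicits each address.
sysctl net.ipv6.conf.eth0.proxy_ndp=1
ip -6 neigh add proxy 1000:2000:3000:4000::b dev eth0
ip -6 neigh add proxy 1000:2000:3000:4000::c dev eth0

# Option 2: hard-wire a host route per client toward the WG interface.
ip -6 route add 1000:2000:3000:4000::b/128 dev wg-server
ip -6 route add 1000:2000:3000:4000::c/128 dev wg-server
```

Both need root, and option 1 only matters when the provider does per-address neighbour discovery instead of routing the whole prefix to the server.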


I guess I should clarify that the server’s IPv6 (SLAAC) address is not inside the routed prefix range.
I have not explicitly assigned the prefix to the WG interface, but it does receive the packets:

tcpdump on server (nothing on peer)
# tcpdump -i wg-server
20:20:11.293111 IP6 2604:a880:800:c1::13b:b001 > 1000:2000:3000:4000::b: ICMP6, echo request, id 31603, seq 1, length 64
20:20:11.698205 IP6 2604:a880:800:c1::13b:b001 > 1000:2000:3000:4000::b: ICMP6, echo request, id 31603, seq 2, length 64
20:20:12.114602 IP6 2604:a880:800:c1::13b:b001 > 1000:2000:3000:4000::b: ICMP6, echo request, id 31603, seq 3, length 64

What happens if the source IP is not in the cryptokey routing table, as would be the case for a connection coming from the Internet?

quote from WireGuard whitepaper

Once the packet payload is decrypted, the interface has a plaintext packet. If this is not an IP packet, it is dropped. Otherwise, WireGuard checks to see if the source IP address of the plaintext inner-packet routes correspondingly in the cryptokey routing table. For example, if the source IP of the decrypted plaintext packet is 192.168.31.28, the packet correspondingly routes. But if the source IP is 10.192.122.3, the packet does not route correspondingly for this peer, and is dropped.

Is there a way to check whether it’s the peer that dropped the packet? tcpdump shows no packets; could they have been dropped before tcpdump can see them?
Hmm… wait, wouldn’t that mean replies from the Internet are always dropped (when using WG to route all traffic)? That isn’t the case.

I was tripped up by 3 things.

  1. Indeed, AllowedIPs needs to be 0.0.0.0/0, ::/0 for packets from the Internet not to be dropped. The packets are possibly dropped before tcpdump can see them.
  2. Possibly a quirk in the network I’m in, where pings to 1000:2000:3000:4000::a are not dropped but pings to ::b and ::c are. It turns out peer 2 is reachable by online ping tests, just not by me.
  3. A problem in either WireGuard or NetworkManager where 0.0.0.0/0, ::/0 combined with an IPv6 endpoint silently doesn’t work (no echo even from the server). An IPv4 endpoint is fine.

Bonus: unexpected results made me believe it worked when it didn’t. The browser probably connected to my site over IPv4 while IPv6 wasn’t working, and also continued to serve a cached page. Ping continued to work after disconnecting from WG (non-reproducible).


Is everything now acting as desired? Do I understand it right that you got a prefix routed to you, routed this prefix to the WireGuard interface, and assigned ::a, ::b, and ::c to the WireGuard server plus the two peers? And the parent interfaces do not have IPv6, or have another prefix?
A good command for debugging is “ip -6 route get <address>”, to learn how a packet is routed and what its source address will be, and whether the destination can handle that source address. Another reason packets disappear silently is reverse-path filtering: a reply to a packet is not allowed to take a different route than the arriving packet took.
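ip -6 route get can also simulate the forwarded direction by being given a source address and an ingress interface. A quick sketch using the sanitized addresses from this thread (2001:db8::1 stands in for an arbitrary Internet host):

```shell
# How would the server route a packet for peer 2's public address that
# arrived on eth0 from some Internet host? (Addresses are the sanitized
# placeholders from this thread; 2001:db8::1 is a documentation address.)
ip -6 route get 1000:2000:3000:4000::b from 2001:db8::1 iif eth0
```

If this errors out or picks the wrong device, the forwarding path is broken before the firewall is even consulted.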

Yup, everything mostly works (I’ll try figuring out problem #3 when I have more time). I didn’t manually route the prefix, but the route table shows it (perhaps added by wg-quick, which I use on the server). The interface does have another IPv6 address.

Thanks I’ll remember to use it.

Yup, I read about this in the context of the firewall. I think my firewall rules could be bypassed if the Internet sent packets to the WG interface with a spoofed source address, and reverse-path filtering helps prevent that.

I have tried to set up something similar, and it works.
The server had a v4/v6 Internet connection plus a 6over4 tunnel offering a second /64 prefix.
To direct VPN traffic over the tunnel, I used policy routing, consisting of:
“from VPN::/64 lookup main suppress_prefixlength 0” to evaluate WireGuard’s host routes in the main table;
“from VPN::/64 lookup x”, with x a table whose low-metric default route points into the tunnel.
The test was done on a laptop with a second delegated /64 prefix to simulate the external Internet.
The server plus two peers are reachable from the “internet” via the tunnel provider.
Important:

  • Check whether NetworkManager sets the desired routes; check “set peer routes”.

  • On the server, a wg0-to-wg0 forwarding policy may be necessary for peer-to-peer traffic.
    firewall-cmd --permanent --zone VPN --add-forward seems to work too.
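For reference, the policy routing described above would look roughly like the following as commands. “VPN::/64” is the sanitized VPN prefix, and the table id 100 and device tun6 are made-up names for illustration:

```shell
# Evaluate main-table host routes first, but ignore main's default route
# for VPN-sourced traffic...
ip -6 rule add from VPN::/64 lookup main suppress_prefixlength 0
# ...then fall through to a dedicated table whose default route points
# at the 6over4 tunnel (table id and device name are illustrative).
ip -6 rule add from VPN::/64 lookup 100
ip -6 route add default dev tun6 table 100
```

suppress_prefixlength 0 means “accept this lookup’s result only if the matched route is more specific than a default route”, which is what lets the WireGuard host routes win while the tunnel catches everything else.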


Your handshakes are 4+ hours old, which looks a bit suspicious.
Peers are recommended to use PersistentKeepalive=25 in order to avoid issues reaching them.
In addition, clients typically use dynamic ports, so their endpoint port is not 51820 unless it is explicitly configured statically and they are not behind CGNAT.
Your IPv6 routing table on the server is missing its own prefix route for some reason.
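A minimal client-side [Peer] snippet with the keepalive, in the same redacted style as the configs above:

```ini
[Peer]
PublicKey = 
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = ipv4.public:51820
# Send a packet every 25 s so NAT/conntrack mappings stay open and the
# server can reach this peer even when it is otherwise idle.
PersistentKeepalive = 25
```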

Check this:

  • ipleak.net from the client to confirm it is using the server endpoint for IPv4 and IPv6.
  • IPv4 and IPv6 pings from the server to the clients over the tunnel.
  • Runtime forwarding on the server:
sysctl net 2> /dev/null | grep -e forward
  • Runtime firewalld config on the server:
sudo firewall-cmd --get-active-zones \
| sed -n -e "/^\w/p" | while read -r FW_ZONE
do sudo firewall-cmd --info-zone="${FW_ZONE}"
done
sudo firewall-cmd --get-active-policies \
| sed -n -e "/^\w/p" | while read -r FW_POLICY
do sudo firewall-cmd --info-policy="${FW_POLICY}"
done

ipleak.net reports WG IPs.

IPv4 and IPv6 pings from the server to the clients over the tunnel.
# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=13.9 ms
^C
--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 13.886/13.886/13.886/0.000 ms
# ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=17.4 ms
^C
--- 10.0.0.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 17.386/17.386/17.386/0.000 ms
# ping 1000:2000:3000:4000::b
PING 1000:2000:3000:4000::b(1000:2000:3000:4000::b) 56 data bytes
64 bytes from 1000:2000:3000:4000::b: icmp_seq=1 ttl=64 time=14.9 ms
^C
--- 1000:2000:3000:4000::b ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 14.907/14.907/14.907/0.000 ms
# ping 1000:2000:3000:4000::c
PING 1000:2000:3000:4000::c(1000:2000:3000:4000::c) 56 data bytes
64 bytes from 1000:2000:3000:4000::c: icmp_seq=1 ttl=64 time=16.4 ms
^C
--- 1000:2000:3000:4000::c ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 16.437/16.437/16.437/0.000 ms
sysctl net 2> /dev/null | grep -e forward
# sysctl net 2> /dev/null | grep -e forward
net.ipv4.conf.all.bc_forwarding = 0
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.all.mc_forwarding = 0
net.ipv4.conf.default.bc_forwarding = 0
net.ipv4.conf.default.forwarding = 1
net.ipv4.conf.default.mc_forwarding = 0
net.ipv4.conf.eth0.bc_forwarding = 0
net.ipv4.conf.eth0.forwarding = 1
net.ipv4.conf.eth0.mc_forwarding = 0
net.ipv4.conf.lo.bc_forwarding = 0
net.ipv4.conf.lo.forwarding = 1
net.ipv4.conf.lo.mc_forwarding = 0
net.ipv4.conf.wg-server.bc_forwarding = 0
net.ipv4.conf.wg-server.forwarding = 1
net.ipv4.conf.wg-server.mc_forwarding = 0
net.ipv4.ip_forward = 1
net.ipv4.ip_forward_update_priority = 1
net.ipv4.ip_forward_use_pmtu = 0
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.mc_forwarding = 0
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.default.mc_forwarding = 0
net.ipv6.conf.eth0.forwarding = 1
net.ipv6.conf.eth0.mc_forwarding = 0
net.ipv6.conf.lo.forwarding = 1
net.ipv6.conf.lo.mc_forwarding = 0
net.ipv6.conf.wg-server.forwarding = 1
net.ipv6.conf.wg-server.mc_forwarding = 0
firewalld
FedoraServer (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources: 
  services: cockpit dhcpv6-client ssh wireguard
  ports: 
  protocols: 
  forward: no
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
VPN (active)
  target: default
  icmp-block-inversion: no
  interfaces: wg-server
  sources: 
  services: dns ssh
  ports: 
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
allow-host-ipv6 (active)
  priority: -15000
  target: CONTINUE
  ingress-zones: ANY
  egress-zones: HOST
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
	rule family="ipv6" icmp-type name="neighbour-advertisement" accept
	rule family="ipv6" icmp-type name="neighbour-solicitation" accept
	rule family="ipv6" icmp-type name="router-advertisement" accept
	rule family="ipv6" icmp-type name="redirect" accept
vpnbackward (active)
  priority: -1
  target: CONTINUE
  ingress-zones: FedoraServer
  egress-zones: VPN
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
	rule family="ipv6" destination address="1000:2000:3000:4000::/64" accept
vpnforward (active)
  priority: -1
  target: CONTINUE
  ingress-zones: VPN
  egress-zones: FedoraServer
  services: 
  ports: 
  protocols: 
  masquerade: yes
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
	rule family="ipv4" source address="10.0.0.0/24" accept
	rule family="ipv6" source address="1000:2000:3000:4000::/64" accept
wg
# wg
interface: wg-server
  public key: 
  private key: (hidden)
  listening port: 51820

peer: 
  endpoint: :51820
  allowed ips: 10.0.0.3/32, 1000:2000:3000:4000::c/128
  latest handshake: 48 seconds ago
  transfer: 26.71 GiB received, 69.76 GiB sent

peer: 
  endpoint: :51820
  allowed ips: 10.0.0.2/32, 1000:2000:3000:4000::b/128
  latest handshake: 1 minute, 40 seconds ago
  transfer: 725.05 KiB received, 2.93 MiB sent

Welp, good news: it’s not working again.
Pinging 10.0.0.2 from 10.0.0.3 gets nothing; tcpdump on the server shows the echo requests. Peer 2’s WG IPv6 can still be pinged from outside, though.
Current config:

peer2
#2 local server
[Interface]
PrivateKey = 
Address = 10.0.0.2/24, 1000:2000:3000:4000::b/64
ListenPort = 51820

#1 server
[Peer]
PublicKey = 
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = :51820

#3 personal computer
[Peer]
PublicKey = 
AllowedIPs = 10.0.0.3/32, 1000:2000:3000:4000::c/128
peer3
#3 personal computer
[Interface]
PrivateKey = 
Address = 10.0.0.3/24, 1000:2000:3000:4000::c/64
#DNS = 10.0.0.1

#1 server
[Peer]
PublicKey = 
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = :51820

#2 local server
[Peer]
PublicKey = 
AllowedIPs = 10.0.0.2/32, 1000:2000:3000:4000::b/128
Endpoint = :51820

I tried removing peer2 from peer3’s config, but it still doesn’t work.
I literally just logged in to 10.0.0.2 before dumping the configs.

EDIT:
After removing each other from both peer2’s and peer3’s configs, it works now. I guess 0.0.0.0/0 doesn’t go along with other AllowedIPs entries.
Though, if what happened before wasn’t a fluke, there might be a bug in cryptokey routing. But I’m nowhere near qualified to say that.
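One way to see which peer actually owns an overlapping range is to dump the cryptokey routing table on the node that has both peers configured; when two peers claim the same address, WireGuard silently binds it to whichever peer was configured last.

```shell
# One line per peer: its public key followed by the AllowedIPs ranges
# currently bound to it in the cryptokey routing table.
wg show wg-server allowed-ips
```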


Sorry, I wasn’t using WG when I dumped the config; I attached a new wg output in the previous reply.

51820 is the default port when importing a WG config into NetworkManager, if the port is not specified.

default via fe80::1 dev eth0 proto ra metric 100 pref medium

IIRC IPv6 default routes go via a link-local gateway address by default. My local machine shows the same.

I hope all the weirdness has been figured out at this point. I’m so ready to get over this :melting_face:

About AllowedIPs = everything not working with an IPv6 endpoint: the route table doesn’t show anything different, and ip route get says the endpoint address goes to the WG interface in both the IPv4 and IPv6 cases. I can only guess WG does something special for the IPv4 endpoint.

  1. AllowedIPs=0.0.0.0/0, ::/0 and IPv4 endpoint, working.
  2. Change to IPv6 endpoint, not working.
  3. Remove ::/0, working.

Is port 51820/udp open on the clients? If it’s closed, the connection can still be initiated by some traffic from the client, but conntrack will close it after some time.
A PersistentKeepalive on the client would be an alternative.


I think so; 1025–65535/udp is open by default in the FedoraWorkstation firewalld zone.

It should work.
AllowedIPs = 0.0.0.0/0, fc00::/64 adds this to the route table:
fc00::/64 dev wg0 metric 1024 pref medium

AllowedIPs = 0.0.0.0/0,::/0
adds nothing to the routing table, but:

ip -6 rule show
0:	from all lookup local
32764:	from all lookup main suppress_prefixlength 0
32765:	not from all fwmark 0xca6c lookup 51820
32766:	from all lookup main

ip route show table 51820
default dev wg0 scope link 

which means:

  • take the normal main route table, except its default route;

  • if there is no 0xca6c mark, look up table 51820, which says to send everything to wg0 by default;

  • if there is a mark, continue with the main table to deliver the encrypted (outer) packets.
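Side note on the magic numbers above: the fwmark 0xca6c and the table id 51820 are the same value; wg-quick derives both from the listen port. A quick check:

```shell
# 0xca6c in decimal is exactly the WireGuard listen port, 51820;
# wg-quick reuses the port number for both the fwmark and the table id.
printf '%d\n' 0xca6c
```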

ip -6 route get 2000::1
2000::1 from :: dev wg0 table 51820 src VPNV6 metric 1024 pref medium

ip -6 route get 2000::1 mark 51820
2000::1 from :: via fe80::b2ac:d2ff:fe57:410e dev bridge0 proto ra src LANV6 metric 20425 pref medium

1 Like