I am on Fedora 34 and for some reason my IPv6 keeps cutting in and out. Sometimes it works and sometimes it doesn't. I am not a networking guy, so I would appreciate any guidance before I continue. The only thing I noticed that might be relevant is that /etc/resolv.conf doesn't list an IPv6 DNS server, so maybe it is systemd-resolved?
[andythurman@rockhopper ~]$ cat /etc/resolv.conf
# This is /run/systemd/resolve/stub-resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
#
# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0 trust-ad
search .
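Looking closer at the comments in that file: the stub resolv.conf only ever points at resolved's local stub on 127.0.0.53, so a missing IPv6 entry there may be normal, and the actual upstream servers are whatever `resolvectl dns` shows. A quick check for whether any IPv6 upstream is configured at all (the sample string below is illustrative, mirroring `resolvectl dns`-style output; on a live system capture it with `dns_out=$(resolvectl dns)`):

```shell
# Hypothetical sample of `resolvectl dns`-style output (IPv4-only case);
# on a live box: dns_out=$(resolvectl dns)
dns_out='Global:
Link 2 (enp3s0): 192.168.1.254'

# An IPv6 address contains at least two colon-separated hex groups.
if printf '%s\n' "$dns_out" | grep -Eq '([0-9A-Fa-f]*:){2}'; then
  verdict='IPv6 DNS server configured'
else
  verdict='no IPv6 DNS server configured'
fi
echo "$verdict"
```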
It is making things like Geary, git, and Lutris extremely unpredictable, so I would appreciate any solution. For now, I'll probably just swap back to F33 until I can figure something out.
resolvectl flush-caches seems to have fixed it for now, but I would not be surprised if it comes back. I'll keep you all updated, and I still appreciate any support.
Is 2001:1890:f6:4b1::2 the endpoint of a Hurricane Electric 6to4 tunnel, and 2001:506… the subnet assigned by Hurricane? Another traceroute starts with the 2001:506 addresses and jumps to ipv6.att.net. Could something unstable in the routing be producing traffic with an unexpected source address?
Recently I had a conflict with systemd-resolved caching things longer than expected, but in your case resolvectl shows both an IPv4 and an IPv6 nameserver. And even without an IPv6 nameserver, 192.168.1.254 might be able to resolve IPv6 (AAAA) records too…
I would start with inspecting the routes; I have no idea why clearing the cache helps.
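For the route inspection, this is a sketch of the commands I'd run (standard iproute2; the interface name enp3s0 is taken from your output above, and each command is guarded so the loop keeps going on a machine where IPv6 isn't up):

```shell
# Dump IPv6 addresses, routes, and neighbor cache for the interface
# from the thread; guarded so missing commands/output don't abort the loop.
for cmd in 'ip -6 addr show dev enp3s0' 'ip -6 route show' 'ip -6 neigh show'; do
  echo "== $cmd =="
  $cmd 2>/dev/null || echo '(no output / command unavailable)'
done
checked=yes
```

A default route that flaps between appearing and disappearing in `ip -6 route show` would line up with the "cutting in and out" symptom.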
I am so confused. I am finally at a point where clearing the cache does nothing, and before going for the good ol' reboot I checked my logs and found this:
Grace period over, resuming full feature set (UDP+EDNS0) for DNS server 2600:1700:e20:bfb0::1.
and
Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 2600:1700:e20:bfb0::1.
while running the test at https://ipv6-test.com/ (and failing it). Is this unrelated, or could it be part of the issue? I'm going to try @vgaetera's idea right now.
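Those two journal lines are systemd-resolved's feature probing: when EDNS0 queries to a server start failing or timing out, it drops back to plain UDP, and after a grace period it retries the full feature set. Repeated downgrades against the router's resolver usually point at a lossy upstream DNS path rather than resolved itself. One way to count the flaps (the sample lines below are quoted from the journal messages above; on a live system the equivalent would be `journalctl -b -u systemd-resolved | grep -c 'degraded feature set'`):

```shell
# Sample journal lines quoted from this thread; count downgrade events.
log='Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 2600:1700:e20:bfb0::1.
Grace period over, resuming full feature set (UDP+EDNS0) for DNS server 2600:1700:e20:bfb0::1.'
downgrades=$(printf '%s\n' "$log" | grep -c 'degraded feature set')
echo "downgrade events: $downgrades"
```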
Seems to have fixed it for now! Hopefully it stays fixed. What's weird is that I had already tried 1.1.1.1 (although through GNOME Settings, which may behave differently) and that didn't work before… I'm not going to mark this as solved just yet, since this problem has been extremely unpredictable and I've thought I'd solved it before, but hopefully things work out.
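For anyone following along, this is roughly the nmcli equivalent of pinning resolvers instead of going through GNOME Settings (a sketch, not exactly what I clicked: the connection name enp3s0 is from my setup above, and the Cloudflare addresses are just examples):

```shell
# Pin explicit IPv4 and IPv6 resolvers on the wired profile and
# ignore the DNS servers handed out by DHCP/router advertisements.
nmcli connection modify enp3s0 ipv4.dns "1.1.1.1" ipv4.ignore-auto-dns yes
nmcli connection modify enp3s0 ipv6.dns "2606:4700:4700::1111" ipv6.ignore-auto-dns yes
# Re-activate the profile so the change takes effect.
nmcli connection up enp3s0
```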
It was a fluke. Things are still very broken after just a few minutes:
NetworkManager...
STATE CONNECTIVITY WIFI-HW WIFI WWAN-HW WWAN
connected full enabled enabled enabled enabled
NAME UUID TYPE DEVICE
enp3s0 2df4c834-7005-3f61-ad19-ddc7a2668f87 ethernet enp3s0
Resolver...
Global:
Link 2 (enp3s0): 192.168.1.254
Global:
Link 2 (enp3s0): attlocal.net
fedoraproject.org: 67.219.144.68 -- link: enp3s0
38.145.60.21 -- link: enp3s0
38.145.60.20 -- link: enp3s0
209.132.190.2 -- link: enp3s0
140.211.169.206 -- link: enp3s0
8.43.85.73 -- link: enp3s0
140.211.169.196 -- link: enp3s0
152.19.134.142 -- link: enp3s0
152.19.134.198 -- link: enp3s0
8.43.85.67 -- link: enp3s0
2605:bc80:3010:600:dead:beef:cafe:feda -- link: enp3s0
2620:52:3:1:dead:beef:cafe:fed6 -- link: enp3s0
2604:1580:fe00:0:dead:beef:cafe:fed1 -- link: enp3s0
2610:28:3090:3001:dead:beef:cafe:fed3 -- link: enp3s0
2620:52:3:1:dead:beef:cafe:fed7 -- link: enp3s0
2605:bc80:3010:600:dead:beef:cafe:fed9 -- link: enp3s0
-- Information acquired via protocol DNS in 587us.
-- Data is authenticated: no; Data was acquired via local or encrypted transport: no
-- Data from: cache
Routing...
1?: [LOCALHOST] pmtu 1500
1: _gateway (192.168.1.254) 6.193ms
1: _gateway (192.168.1.254) 3.810ms
2: 104-186-20-1.lightspeed.chrlnc.sbcglobal.net (104.186.20.1) 10.077ms
3: 99.144.25.246 (99.144.25.246) 4.739ms
4: 12.240.217.118 (12.240.217.118) 13.684ms asymm 6
5: ggr2.attga.ip.att.net (12.122.140.93) 15.106ms
6: be7018.ccr41.atl04.atlas.cogentco.com (154.54.11.85) 10.318ms
7: be2789.ccr41.atl01.atlas.cogentco.com (154.54.24.249) 9.334ms
8: be2112.ccr41.dca01.atlas.cogentco.com (154.54.7.157) 25.423ms asymm 10
9: be3083.ccr41.iad02.atlas.cogentco.com (154.54.30.54) 25.218ms asymm 10
10: 38.32.106.90 (38.32.106.90) 98.735ms asymm 11
11: gateway (209.132.185.254) 99.024ms asymm 13
12: no reply
13: no reply
14: no reply
15: no reply
16: no reply
17: no reply
18: no reply
19: no reply
20: no reply
21: no reply
22: no reply
23: no reply
24: no reply
25: no reply
26: no reply
27: no reply
28: no reply
29: no reply
30: no reply
Too many hops: pmtu 1500
Resume: pmtu 1500
1?: [LOCALHOST] 0.006ms pmtu 1500
1: no reply
2: no reply
3: no reply
4: no reply
5: no reply
6: no reply
7: no reply
8: no reply
9: no reply
10: no reply
11: no reply
12: no reply
13: no reply
14: no reply
15: no reply
16: no reply
17: no reply
18: no reply
19: no reply
20: no reply
21: no reply
22: no reply
23: no reply
24: no reply
25: no reply
26: no reply
27: no reply
28: no reply
29: no reply
30: no reply
Too many hops: pmtu 1500
Resume: pmtu 1500
Connectivity/quality...
PING (38.145.60.20) 56(84) bytes of data.
--- ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9013ms
rtt min/avg/max/mdev = 95.303/107.322/118.179/8.168 ms
PING fedoraproject.org(proxy06.fedoraproject.org (2605:bc80:3010:600:dead:beef:cafe:fed9)) 56 data bytes
--- fedoraproject.org ping statistics ---
10 packets transmitted, 0 received, 100% packet loss, time 9225ms
Start: 2021-04-28T13:44:33-0400
HOST: rockhopper Loss% Snt Last Avg Best Wrst StDev
It seems… worse.
I am so very confused. I realized I have managed to forgo the possibly obvious solution of resetting my router, so I am going to do that and see where it gets me.
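Before (or after) the router reset, one way to catch the failure in the act is to put systemd-resolved into debug logging. The drop-in below uses the standard systemd override mechanism (SYSTEMD_LOG_LEVEL is honored by systemd services); treat it as a sketch and remove the drop-in once done, as debug logging is noisy:

```shell
# Drop-in override: run systemd-resolved with debug-level logging.
sudo mkdir -p /etc/systemd/system/systemd-resolved.service.d
sudo tee /etc/systemd/system/systemd-resolved.service.d/10-debug.conf <<'EOF'
[Service]
Environment=SYSTEMD_LOG_LEVEL=debug
EOF
sudo systemctl daemon-reload
sudo systemctl restart systemd-resolved
# Watch it live while reproducing the failure:
journalctl -u systemd-resolved -f
```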