accelerated client includes outdated(?) addrs #10737
Comments
So the logic is different between the normal DHT client and the accelerated DHT client, and the accelerated DHT client is expected to find a superset of addresses, some of which may be stale? Is this behaviour new? There was a recent code change in the Accelerated DHT Client that may be related.
This does not seem to be new; probing the same peer with … shows similar results. Could it be that the problem here is not a bug in Kubo/kad-dht, but one of the peers that the accelerated client hits? If a third-party peer that does not use upstream code does not forget addresses after 15 minutes, but keeps them longer, would that explain this behavior?
When the accelerated DHT client is enabled, the DHT crawler will enumerate all the peers and dump their routing tables. It will write every address it receives about any other peer to the host peerstore, without dialing the addresses first. Hence if multiple nodes run the accelerated DHT client, they will keep infecting each other with outdated addresses, and these addresses will never disappear from the DHT routing tables.
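For intuition, here is a minimal Go sketch of that pattern. The helper name and the TTL value are hypothetical; this is not the actual go-libp2p-kad-dht crawler code, only an illustration of storing crawled addresses without validating them first.

```go
// Illustrative sketch only, not the real go-libp2p-kad-dht crawler:
// addresses learned from a crawl are written to the peerstore without ever
// being dialed, so stale or unreachable addresses survive and keep getting
// re-shared with the next node that crawls us.
package example

import (
	"time"

	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/peerstore"
)

// recordCrawledAddrs is a hypothetical helper; the 30-minute TTL is made up.
// The key point: there is no reachability check before the write, so
// outdated addresses enter the peerstore exactly like valid ones.
func recordCrawledAddrs(ps peerstore.Peerstore, found []peer.AddrInfo) {
	for _, ai := range found {
		ps.AddAddrs(ai.ID, ai.Addrs, 30*time.Minute)
	}
}
```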
Do you expect this change might alleviate some of the pain of running the accelerated DHT crawler itself? I've found that to be the largest pain point when trying to host IPFS content on a non-dedicated internet connection.
I am not sure what you mean there. The fix will greatly reduce the number of unreachable addresses that are stored in the go-libp2p host peerstore.
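For context, here is a minimal sketch of one plausible way to keep unreachable addresses out of the peerstore. The helper is made up and this is not necessarily how the actual fix is implemented; it only illustrates dialing a peer before trusting its crawled addresses.

```go
// Hypothetical mitigation sketch, not necessarily how the actual
// go-libp2p-kad-dht fix is implemented: only trust crawled addresses for
// peers the host can actually reach.
package example

import (
	"context"
	"time"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/peerstore"
)

// recordIfReachable is a made-up helper: it dials the peer before keeping its
// crawled addresses. On a successful connection, go-libp2p's identify
// exchange records the peer's verified listen addresses in the peerstore.
func recordIfReachable(ctx context.Context, h host.Host, ai peer.AddrInfo) error {
	dialCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()
	if err := h.Connect(dialCtx, ai); err != nil {
		return err // unreachable: don't persist the crawled addresses
	}
	// Extend the TTL of the now-verified addresses.
	h.Peerstore().UpdateAddrs(ai.ID, peerstore.RecentlyConnectedAddrTTL, peerstore.AddressTTL)
	return nil
}
```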
I just released go-libp2p-kad-dht v0.30.0 including the fix. The kad-dht dep is bumped in #10736, since the latest kad-dht version depends on the latest go-datastore version. I tested there and the behaviour is now as expected: the same addresses are returned with the Accelerated DHT Client enabled and disabled.

$ ipfs config --json Routing.AcceleratedDHTClient true
$ ipfs id 12D3KooWH7PZhUa4UL2Tz2xumHNAugZBBAzEKyF1qmQENYQrYHDr
{
"ID": "12D3KooWH7PZhUa4UL2Tz2xumHNAugZBBAzEKyF1qmQENYQrYHDr",
"PublicKey": "CAESIGxeQquw1AK8U1WO0yZ/arxyexnOltWE8dac6TgMskKx",
"Addresses": [
"/ip4/15.204.207.119/tcp/4001/p2p/12D3KooWH7PZhUa4UL2Tz2xumHNAugZBBAzEKyF1qmQENYQrYHDr",
"/ip4/15.204.207.119/udp/4001/quic-v1/p2p/12D3KooWH7PZhUa4UL2Tz2xumHNAugZBBAzEKyF1qmQENYQrYHDr",
"/ip4/15.204.207.119/udp/4001/quic-v1/webtransport/certhash/uEiCo75ucNddWP2Zv5aSIPOhBWCm3Vss6_UtZGMvf8YONMw/certhash/uEiDNVNf5mOm4WVE37I_6jsGbi50t4FcJ0e46RZYdfQZEmg/p2p/12D3KooWH7PZhUa4UL2Tz2xumHNAugZBBAzEKyF1qmQENYQrYHDr",
"/ip4/15.204.207.119/udp/4001/webrtc-direct/certhash/uEiCTm6d8CKgoD8ZFBuA2kXypupkh3RUQiZj_JCcjrM6ZbQ/p2p/12D3KooWH7PZhUa4UL2Tz2xumHNAugZBBAzEKyF1qmQENYQrYHDr",
"/ip6/2604:2dc0:101:100::138b/tcp/4001/p2p/12D3KooWH7PZhUa4UL2Tz2xumHNAugZBBAzEKyF1qmQENYQrYHDr",
"/ip6/2604:2dc0:101:100::138b/udp/4001/quic-v1/p2p/12D3KooWH7PZhUa4UL2Tz2xumHNAugZBBAzEKyF1qmQENYQrYHDr",
"/ip6/2604:2dc0:101:100::138b/udp/4001/quic-v1/webtransport/certhash/uEiCo75ucNddWP2Zv5aSIPOhBWCm3Vss6_UtZGMvf8YONMw/certhash/uEiDNVNf5mOm4WVE37I_6jsGbi50t4FcJ0e46RZYdfQZEmg/p2p/12D3KooWH7PZhUa4UL2Tz2xumHNAugZBBAzEKyF1qmQENYQrYHDr",
"/ip6/2604:2dc0:101:100::138b/udp/4001/webrtc-direct/certhash/uEiCTm6d8CKgoD8ZFBuA2kXypupkh3RUQiZj_JCcjrM6ZbQ/p2p/12D3KooWH7PZhUa4UL2Tz2xumHNAugZBBAzEKyF1qmQENYQrYHDr"
],
"AgentVersion": "kubo/0.33.2/ad1868a",
"Protocols": [
"/ipfs/bitswap",
"/ipfs/bitswap/1.0.0",
"/ipfs/bitswap/1.1.0",
"/ipfs/bitswap/1.2.0",
"/ipfs/id/1.0.0",
"/ipfs/id/push/1.0.0",
"/ipfs/kad/1.0.0",
"/ipfs/lan/kad/1.0.0",
"/ipfs/ping/1.0.0",
"/libp2p/autonat/1.0.0",
"/libp2p/autonat/2/dial-back",
"/libp2p/autonat/2/dial-request",
"/libp2p/circuit/relay/0.2.0/hop",
"/libp2p/circuit/relay/0.2.0/stop",
"/libp2p/dcutr",
"/x/"
]
}
I'm sorry, I'm kind of in over my head here. What I am hoping is that the stale addresses in the peerstore are part of why the hourly DHT crawl is so detrimental to network connections (if the DHT is trying to reach all of those stale addresses during the crawl).
The crawler will still open a connection to all reachable DHT servers (currently around 10k), and hopefully there shouldn't be too many additional dials due to stale addresses.
Checklist
Installation method
docker image
Version
Config
`ipfs config --json Routing.AcceleratedDHTClient true`
Description
Without the accelerated client, we get the normal address list:
With `ipfs config --json Routing.AcceleratedDHTClient true`, identify returns a lot more addrs: