How to keep Kademlia peers connected #5495
-
Hi, I'm new to libp2p. I want to create a P2P network using a DHT. My question: after two nodes are connected, they get disconnected after a while because of a timeout. With multiple nodes in the DHT network, one node may need to publish information to the others at an arbitrary point in time, not necessarily within the timeout period. How do I keep these nodes connected? Do I need to handle it myself, for example by pinging each node regularly, or does libp2p already implement something for this? I'm also not sure whether my code is written correctly. Here's my code:

use std::time::Duration;
use std::env;
use anyhow::Result;
use futures::StreamExt;
use libp2p::swarm::SwarmEvent;
use libp2p::Multiaddr;
use libp2p::{kad::{self, store::MemoryStore}, noise, tcp, yamux};
#[tokio::main]
async fn main() -> Result<()> {
    let mut swarm = libp2p::SwarmBuilder::with_new_identity()
        .with_tokio()
        .with_tcp(
            tcp::Config::default(),
            noise::Config::new,
            yamux::Config::default,
        )?
        .with_behaviour(|key| {
            kad::Behaviour::new(
                key.public().to_peer_id(),
                MemoryStore::new(key.public().to_peer_id()),
            )
        })?
        .with_swarm_config(|c| c.with_idle_connection_timeout(Duration::from_secs(60)))
        .build();

    let args: Vec<String> = env::args().collect();
    if args.len() > 1 {
        let peer_addr = args[1].as_str();
        let remote: Multiaddr = peer_addr.parse()?;
        swarm.dial(remote)?;
    }

    swarm.behaviour_mut().set_mode(Some(kad::Mode::Server));
    swarm.listen_on("/ip4/0.0.0.0/tcp/0".parse()?)?;
    println!("local_peer_id: {}", swarm.local_peer_id());

    loop {
        let event = swarm.select_next_some().await;
        match event {
            SwarmEvent::NewListenAddr { address, .. } => {
                println!("Listening in {address:?}");
            }
            SwarmEvent::Behaviour(kad::Event::RoutingUpdated { peer, is_new_peer, addresses, .. }) => {
                println!("Routing updated: {} {} {:?}", peer, is_new_peer, addresses);
            }
            SwarmEvent::Behaviour(kad::Event::PendingRoutablePeer { peer, address }) => {
                println!("Pending routable peer: {} {:?}", peer, address);
            }
            SwarmEvent::ConnectionEstablished { peer_id, endpoint, .. } => {
                swarm.behaviour_mut().add_address(&peer_id, endpoint.get_remote_address().to_owned());
                println!("add address: {peer_id:?} {endpoint:?} to DHT");
            }
            event => {
                println!("{event:?}");
            }
        }
    }
}

Open a terminal and execute cargo run. The result:

local_peer_id: 12D3KooWGc9cLPnhYc1DXfR1LSCQWFp8G1Cm9DkgXLNsaEf3f1jJ
Listening in "/ip4/192.168.1.19/tcp/41357"
Listening in "/ip4/192.168.221.1/tcp/41357"
Listening in "/ip4/10.0.1.1/tcp/41357"
Listening in "/ip4/127.0.0.1/tcp/41357"
IncomingConnection { connection_id: ConnectionId(1), local_addr: "/ip4/192.168.1.19/tcp/41357", send_back_addr: "/ip4/192.168.1.19/tcp/42082" }
add address: PeerId("12D3KooWNUvmAsUrixVnyc3jBD254ivynVTDw7CdxA6VMRiUm9o8") Listener { local_addr: "/ip4/192.168.1.19/tcp/41357", send_back_addr: "/ip4/192.168.1.19/tcp/42082" } to DHT
Routing updated: 12D3KooWNUvmAsUrixVnyc3jBD254ivynVTDw7CdxA6VMRiUm9o8 true ["/ip4/192.168.1.19/tcp/42082/p2p/12D3KooWNUvmAsUrixVnyc3jBD254ivynVTDw7CdxA6VMRiUm9o8"]
ConnectionClosed { peer_id: PeerId("12D3KooWNUvmAsUrixVnyc3jBD254ivynVTDw7CdxA6VMRiUm9o8"), connection_id: ConnectionId(1), endpoint: Listener { local_addr: "/ip4/192.168.1.19/tcp/41357", send_back_addr: "/ip4/192.168.1.19/tcp/42082" }, num_established: 0, cause: Some(KeepAliveTimeout) } Open another terminal cargo run -- /ip4/192.168.1.19/tcp/41357 The result:local_peer_id: 12D3KooWNUvmAsUrixVnyc3jBD254ivynVTDw7CdxA6VMRiUm9o8
Listening in "/ip4/192.168.1.19/tcp/42081"
add address: PeerId("12D3KooWGc9cLPnhYc1DXfR1LSCQWFp8G1Cm9DkgXLNsaEf3f1jJ") Dialer { address: "/ip4/192.168.1.19/tcp/41357", role_override: Dialer } to DHT
Routing updated: 12D3KooWGc9cLPnhYc1DXfR1LSCQWFp8G1Cm9DkgXLNsaEf3f1jJ true ["/ip4/192.168.1.19/tcp/41357/p2p/12D3KooWGc9cLPnhYc1DXfR1LSCQWFp8G1Cm9DkgXLNsaEf3f1jJ"]
Listening in "/ip4/192.168.221.1/tcp/42081"
Listening in "/ip4/10.0.1.1/tcp/42081"
Listening in "/ip4/127.0.0.1/tcp/42081"
ConnectionClosed { peer_id: PeerId("12D3KooWGc9cLPnhYc1DXfR1LSCQWFp8G1Cm9DkgXLNsaEf3f1jJ"), connection_id: ConnectionId(1), endpoint: Dialer { address: "/ip4/192.168.1.19/tcp/41357", role_override: Dialer }, num_established: 0, cause: Some(KeepAliveTimeout) } |
Replies: 1 comment
-
When multiple nodes participate in a DHT, each node's routing table stores the addresses of some of the other peers, following the Kademlia spec, so that it can open connections to those peers when needed. If you need to keep certain connections open, or to store the addresses of more peers than the routing table keeps, you have to handle that yourself, because there is no peerstore implementation in rust-libp2p for now.
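
One knob you can turn with the API your code already uses is the swarm's idle connection timeout, which is what produces the KeepAliveTimeout in your logs. Below is a minimal sketch, assuming the same libp2p SwarmBuilder API as the code in the question; the build_swarm helper name and the one-hour value are arbitrary choices for illustration, not a recommendation:

use std::time::Duration;
use anyhow::Result;
use libp2p::{kad::{self, store::MemoryStore}, noise, tcp, yamux, Swarm};

// Same builder chain as in the question, with a much longer idle
// connection timeout so that connections carrying no traffic are not
// closed with `KeepAliveTimeout` after 60 seconds.
fn build_swarm() -> Result<Swarm<kad::Behaviour<MemoryStore>>> {
    let swarm = libp2p::SwarmBuilder::with_new_identity()
        .with_tokio()
        .with_tcp(
            tcp::Config::default(),
            noise::Config::new,
            yamux::Config::default,
        )?
        .with_behaviour(|key| {
            kad::Behaviour::new(
                key.public().to_peer_id(),
                MemoryStore::new(key.public().to_peer_id()),
            )
        })?
        // One hour instead of 60 seconds; pick whatever fits your use case.
        .with_swarm_config(|c| c.with_idle_connection_timeout(Duration::from_secs(60 * 60)))
        .build();
    Ok(swarm)
}

Note that both peers need the longer timeout for it to matter, and it only delays the close; the remote side or the network can still drop the connection.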
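
For the "handle it yourself" part, here is a minimal sketch of keeping your own address book and re-dialling peers whose connections close. It reuses only calls already present in the question's code (add_address, dial, get_remote_address) and assumes a recent libp2p where SwarmEvent takes the behaviour's event type as its only type parameter; handle_event and known_addrs are hypothetical names:

use std::collections::HashMap;
use libp2p::{kad::{self, store::MemoryStore}, swarm::SwarmEvent, Multiaddr, PeerId, Swarm};

// `known_addrs` is a stand-in for the peerstore that rust-libp2p does not
// provide out of the box: it remembers the address of every peer we have
// connected to, so we can dial it again later.
fn handle_event(
    swarm: &mut Swarm<kad::Behaviour<MemoryStore>>,
    known_addrs: &mut HashMap<PeerId, Multiaddr>,
    event: SwarmEvent<kad::Event>,
) {
    match event {
        SwarmEvent::ConnectionEstablished { peer_id, endpoint, .. } => {
            let addr = endpoint.get_remote_address().to_owned();
            // Feed the routing table, as the question's code already does,
            // and additionally keep our own copy of the address.
            swarm.behaviour_mut().add_address(&peer_id, addr.clone());
            known_addrs.insert(peer_id, addr);
        }
        SwarmEvent::ConnectionClosed { peer_id, cause, .. } => {
            println!("connection to {peer_id} closed: {cause:?}");
            // Dial the remembered address again; a real implementation would
            // back off and limit retries instead of reconnecting blindly.
            if let Some(addr) = known_addrs.get(&peer_id) {
                let _ = swarm.dial(addr.clone());
            }
        }
        other => println!("{other:?}"),
    }
}

You would call this from the select_next_some() loop in place of the match in the question. Whether you actually need permanent connections depends on your use case: for plain DHT queries, Kademlia dials peers from its routing table on demand, so reconnecting lazily is often enough.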