class C downlink issues #236
Comments
I think the real issue is that the UDP protocol might be used in a different way from how it is intended to be used. Each gateway must periodically send a `PULL_DATA` frame. This is how the server learns (and refreshes) the ip:port on which the gateway can be reached for downlinks. If Helium is not sending these keepalive frames periodically, the gateway address expires from the registry.
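For context, a minimal Go sketch of the `PULL_DATA` keepalive described above, following the Semtech UDP protocol frame layout (version byte, 2-byte random token, identifier `0x02`, 8-byte gateway EUI). The bridge address, EUI, and interval below are placeholders, not values from this deployment:

```go
// Sketch of a gateway periodically sending PULL_DATA keepalives so the
// server learns/refreshes the ip:port to use for downlinks.
package main

import (
	"log"
	"math/rand"
	"net"
	"time"
)

func main() {
	conn, err := net.Dial("udp", "example-bridge:1700") // placeholder bridge address
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	eui := []byte{0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08} // example gateway EUI

	for {
		frame := make([]byte, 12)
		frame[0] = 0x02                 // protocol version 2
		frame[1] = byte(rand.Intn(256)) // random token (2 bytes)
		frame[2] = byte(rand.Intn(256))
		frame[3] = 0x02 // PULL_DATA identifier
		copy(frame[4:], eui)

		if _, err := conn.Write(frame); err != nil {
			log.Fatal(err)
		}
		time.Sleep(10 * time.Second) // typical keepalive interval
	}
}
```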
Hmm, yeah, the Helium packet routers basically impersonate each gateway. I think it uses unique port numbers for each gateway so it can identify which gateway the packets are destined for. In this specific case (Helium) we know that the ip:port of the gateways are (mostly) stable, as they don't refer to each gateway but to the Helium packet routers, which have public IPs and are not behind NAT/firewall etc. We could still have some timeout issues with load balancing on our side, but that is something we can (hopefully) tweak. Would you accept a PR to make this setting configurable? That would allow us to easily work around this without having to recompile the gateway-bridge. NB: yes, there are other options than Semtech UDP to connect ChirpStack LNSs to Helium.
Happy to accept a PR to make this option configurable (with the current value as default), so that this can be adjusted. With regards to a proper solution, I agree that option 2 is probably the best. E.g. we could create a …
👍 💯 agree. You can expect a PR from me/a colleague of mine to add a configuration option for the cleanup duration. As for the proper solution: that would simplify deployments of Helium LNSs with ChirpStack a lot; I will talk to Helium about this and see how we can expedite this.
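For illustration, a sketch of what making the cleanup duration configurable could look like on the Go side; the struct, field, and key names here are assumptions, not the actual chirpstack-gateway-bridge configuration:

```go
// Hypothetical sketch of exposing the hardcoded cleanup duration as a
// configuration option in the semtechudp backend. The names below are
// illustrative only.
package config

import "time"

type BackendSemtechUDP struct {
	UDPBind         string        `mapstructure:"udp_bind"`
	CleanupDuration time.Duration `mapstructure:"cleanup_duration"` // default 1m; Helium-style routers may need hours
}
```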
What happened?
Sometimes downlinks for class C devices are not sent immediately; they are delivered minutes later, after we receive another uplink.
What did you expect?
Since the idea of class C devices is that downlinks can be sent at any time, I would expect not to have to wait until we receive an uplink.
Details
We use ChirpStack to connect to the Helium network.
This means gateways don't maintain an active connection to ChirpStack; instead, we only get a packet when an uplink is sent from a device.
This leads to issues, as currently chirpstack-gateway-bridge clears the gateway address information after 1 minute of inactivity (https://github.com/chirpstack/chirpstack-gateway-bridge/blob/master/internal/backend/semtechudp/registry.go#L21).
So when we want to send a downlink to a device, we need to do that within a 1-2 minute window after the gateway's last uplink; otherwise the address of the gateway is cleared and the downlink cannot be sent. This is not an issue for class A/B devices, as they are required to receive their downlinks within that timeframe, but for class C devices this leads to problems.
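To illustrate the mechanism, here is a simplified Go sketch of an expiring gateway-address registry (not the actual registry.go code; only the 1-minute default is taken from the linked source):

```go
// Simplified sketch: a registry mapping gateway IDs to the ip:port learned
// from the gateway's last frame. Once a gateway has been silent longer than
// cleanupDuration, lookups fail and downlinks can no longer be delivered.
package main

import (
	"fmt"
	"sync"
	"time"
)

type gateway struct {
	addr     string // ip:port learned from the last frame
	lastSeen time.Time
}

type registry struct {
	sync.Mutex
	gateways        map[string]gateway
	cleanupDuration time.Duration
}

func (r *registry) set(id, addr string) {
	r.Lock()
	defer r.Unlock()
	r.gateways[id] = gateway{addr: addr, lastSeen: time.Now()}
}

func (r *registry) get(id string) (string, error) {
	r.Lock()
	defer r.Unlock()
	gw, ok := r.gateways[id]
	if !ok || time.Since(gw.lastSeen) > r.cleanupDuration {
		return "", fmt.Errorf("gateway %s does not exist in registry", id)
	}
	return gw.addr, nil
}

func main() {
	r := &registry{gateways: map[string]gateway{}, cleanupDuration: time.Minute}
	r.set("0102030405060708", "54.1.2.3:1700")
	addr, err := r.get("0102030405060708")
	fmt.Println(addr, err) // works within the window; fails after a minute of silence
}
```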
For now we have deployed a local fix that increases the `gatewayCleanupDuration` to 24h to mitigate this issue. But this is not a permanent fix.
Ideally, the gateway address information also needs to be persisted and shared among all instances of the gateway-bridge for this region, as otherwise a reboot or load balancing could lead to the same issues.
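As a sketch of what such sharing could look like (assuming a Redis-backed store via go-redis; the key prefix and TTL handling are illustrative, not an existing feature):

```go
// Hypothetical sketch: persisting gateway addresses in a shared store so
// multiple gateway-bridge instances and restarts see the same data.
package sharedregistry

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

type Registry struct {
	rdb *redis.Client
	ttl time.Duration // expiry enforced by Redis instead of an in-memory cleanup loop
}

func (r *Registry) Set(ctx context.Context, gatewayID, addr string) error {
	return r.rdb.Set(ctx, "gwaddr:"+gatewayID, addr, r.ttl).Err()
}

func (r *Registry) Get(ctx context.Context, gatewayID string) (string, error) {
	return r.rdb.Get(ctx, "gwaddr:"+gatewayID).Result()
}
```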
Could you share your log output?
```
chirpstack-deployment-6bdb77dc77-kvxwf chirpstack-deployment 2024-05-24T12:50:48.311247817Z 2024-05-24T12:50:48.311062Z INFO gRPC{uri=/api.DeviceService/Enqueue}: chirpstack::storage::device_queue: Device queue-item enqueued id=603965cc-a91b-418f-9517-11d95850c3b8 dev_eui=8c1f64a870000025
chirpstack-deployment-6bdb77dc77-kvxwf chirpstack-deployment 2024-05-24T12:50:48.655241160Z 2024-05-24T12:50:48.655018Z INFO schedule{dev_eui=8c1f64a870000025 downlink_id=2394125123}: chirpstack::storage::device_queue: Device queue-item updated id=603965cc-a91b-418f-9517-11d95850c3b8 dev_eui=8c1f64a870000025
chirpstack-deployment-6bdb77dc77-kvxwf chirpstack-deployment 2024-05-24T12:50:53.679587870Z 2024-05-24T12:50:53.679184Z INFO schedule{dev_eui=8c1f64a870000025 downlink_id=4261746332}: chirpstack::storage::device_queue: Device queue-item updated id=603965cc-a91b-418f-9517-11d95850c3b8 dev_eui=8c1f64a870000025
.. continues every 5 seconds
chirpstack-deployment-6bdb77dc77-kvxwf chirpstack-deployment 2024-05-24T12:56:47.143187955Z 2024-05-24T12:56:47.142963Z INFO up{deduplication_id=07671fe3-efc1-4182-86d1-9593a60fefe0}:data_up{dev_eui="8c1f64a870000025"}:data_down{downlink_id=4007633762}: chirpstack::storage::device_queue: Device queue-item updated id=603965cc-a91b-418f-9517-11d95850c3b8 dev_eui=8c1f64a870000025
chirpstack-deployment-6bdb77dc77-kvxwf chirpstack-deployment 2024-05-24T12:56:47.213894563Z 2024-05-24T12:56:47.213751Z INFO tx_ack{downlink_id=4007633762}: chirpstack::storage::device_queue: Device queue-item deleted id=603965cc-a91b-418f-9517-11d95850c3b8
```
Your Environment
PS: Happy to help put together a PR, but let's first figure out how to approach this.