Replies: 14 comments
-
We changed from sending over a large company network to a direct wired connection between our machines, and we were then able to send messages up to 16 MB.
-
Hi, I got a similar issue. The IDL is:

```idl
struct HelloWorld
{
    unsigned long index;
    sequence<char> message;
};
```

The publish code is:

```cpp
bool HelloWorldPublisher::publish(
        bool waitForListener)
{
    std::vector<char> msg;
    for (int i = 0; i < 500 * 1024; i++)
    {
        msg.push_back('A');
    }
    if (listener_.firstConnected_ || !waitForListener || listener_.matched_ > 0)
    {
        hello_.index(hello_.index() + 1);
        hello_.message(msg);
        writer_->write(&hello_);
        return true;
    }
    return false;
}
```

When the message is large (bigger than about 50 KB), the subscriber cannot receive any data. If I change the sequence to `char message[500*1024]` or `sequence<char, 500*1024>`, it works fine. So I suspect the problem only appears when using an unbounded sequence in the IDL.
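Concretely, the bounded workaround described above changes only the sequence declaration in the IDL:

```idl
struct HelloWorld
{
    unsigned long index;
    sequence<char, 500*1024> message;  // bounded sequence: received fine, unlike the unbounded one
};
```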
-
Does anyone know what comes into play here? Reliable mode doesn't seem to mitigate this issue: message delivery quality has a steep dropoff, and retrying the sends only seems to make things worse. On what kinds of networks have you seen reliable mode be effective? Is it only on long time scales over 99.99% reliable networks?
-
I was able to replicate this exactly. The subscriber doesn't receive the data unless you specify the sequence size in the IDL. I wouldn't expect that to be normal behavior, but maybe it is? EDIT: after further experimentation, the limit for this specific example (with an unbounded char sequence) seems to be a length of 100: a message with 101 chars in the char vector is not received by the subscriber.
-
-
There are two different problems being reported on this ticket. On the one hand, there is the one reported by the original poster (@calvertdw), which relates to sending large data messages over lossy networks. In this case it is necessary to understand that several layers are involved.
In order to receive a sample, you must receive every one of the UDP datagrams it was split into. @calvertdw, you may try to increase the MTU if your network hardware allows it, or limiting the
The second issue reported here is a very common one (#2903, #2740, #2330...). By default, Fast DDS is currently configured with
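For the second, commonly reported configuration issue, one mitigation discussed in the linked tickets is registering a UDPv4 transport with larger socket buffers. A sketch of an XML profile, assuming a Fast DDS 2.x profiles file (the transport id, profile name, and 1 MiB buffer values are illustrative assumptions, not recommendations):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<profiles xmlns="http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles">
    <transport_descriptors>
        <transport_descriptor>
            <transport_id>udp_large_buffers</transport_id>
            <type>UDPv4</type>
            <!-- Illustrative 1 MiB buffers. On Linux the OS-level limits
                 (net.core.rmem_max / net.core.wmem_max) must also allow this. -->
            <sendBufferSize>1048576</sendBufferSize>
            <receiveBufferSize>1048576</receiveBufferSize>
        </transport_descriptor>
    </transport_descriptors>
    <participant profile_name="large_data_participant">
        <rtps>
            <userTransports>
                <transport_id>udp_large_buffers</transport_id>
            </userTransports>
            <useBuiltinTransports>false</useBuiltinTransports>
        </rtps>
    </participant>
</profiles>
```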
-
To further elaborate on this: a large UDP datagram is fragmented into many IP frames, and losing any single fragment discards the whole datagram. So if your network experiences IP frame drops at a rate of 1 every 40, virtually no UDP datagram can be reconstructed upon reception, and consequently virtually no UDP datagrams are ever handed over to Fast DDS. There is nothing Fast DDS can do in this situation, since it is a reliability problem in the lower layers, which can be caused by a myriad of reasons. However, as @JLBuenoLopez-eProsima points out, setting the
-
Thank you guys, that's extremely helpful.
Both of these are critical things to know when using Fast DDS. Could we add them to the documentation on large data rates?
Is there some database or discussion of these possible reasons somewhere? It doesn't have to be in the Fast DDS documentation; anywhere on the internet would do. It'd be nice to have at least some kind of list of common causes to reference.
From https://en.wikipedia.org/wiki/Maximum_transmission_unit:
It seems as though changing the MTU doesn't address the issue of reliability, so I think we would opt for merely reducing
-
The original question was never fixed. Even after doing this:
All of this can't help with sending and subscribing to large data like 20000 * 20000 uint8_t. The data is lost in the network or somewhere...
-
@qpc001 That is correct. We solved our problem only by reducing our message sizes. For us, that meant switching from sending point clouds to sending JPEG- or PNG-compressed depth images. I proposed some solutions above that should be addressed.
-
According to our CONTRIBUTING.md guidelines, I am closing this issue for now. Please feel free to reopen it if necessary.
-
I found some relevant advice now in the documentation. If we run into this again, we'll try these steps and reopen if it doesn't work. Thanks!
-
This issue should really stay open. We still don't have a working solution and can't reliably send messages over ~262 kB.
-
Hi @calvertdw, I can reopen it and move it to the Support discussion forum. As already explained, this is not a proper issue or bug in the library, at least for the moment. Our CI checks that messages larger than 262 kB are sent, so this can be caused by several factors other than the Fast DDS library itself: for instance, the network architecture, misconfiguration of QoS, or mistaken expectations. You might consider changing to a different transport like TCP and/or modifying the discovery mechanism, as DDS relies on multicast and this is troublesome over Wi-Fi connections; the Discovery Server mechanism might be the way to go. Finally, eProsima offers architecture studies for users that are struggling to make Fast DDS work with their specific use case. You might consider contacting eProsima's commercial support team for more information.
-
Hello there, we are having trouble with large message sizes. We've tried increasing the socket buffer sizes, but it doesn't seem to have any effect.
Expected behavior
Large messages get sent over the network; for instance, 4K video messages of 1 MB, or colored point cloud data from a RealSense L515 at 4 MB.
Current behavior
Messages larger than ~262 kB may send for a second or two, but then stop.
Steps to reproduce
Modify the HelloWorldPublisher and HelloWorldSubscriber, adding a RawCharMessage.idl.
Double the message data size every 1 second.
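The RawCharMessage.idl referenced in the steps above isn't shown in the report; a minimal sketch of what it might contain (the struct contents and the field name `data` are assumptions) is:

```idl
struct RawCharMessage
{
    sequence<char> data;  // unbounded, so the sample size can be doubled at runtime
};
```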
Fast DDS version/commit
master
Platform/Architecture
Other. Please specify in Additional context section.
Transport layer
UDPv4
Additional context
Arch Linux and Fedora, up-to-date.