Chunk Size for Async #736
Replies: 2 comments
-
Hi @dmillerksu, thanks for opening this discussion. Theoretically 32'000 or multiples of 32'000 should work well as a chunk size. So if you need to write 640'000 cells in total, a chunk size of 32'000 would be reasonable. If you use a chunk size of less than 32'000 cells, TM1py will execute unbound TI processes with less than the maximum number of possible statements. Theoretically, that should be less efficient. If we use a chunk size of 64'000, TM1py breaks the 640'000 updates into 10 chunks of 64'000 cells each.
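The chunking arithmetic above can be sketched in plain Python. Note this is an illustrative helper, not part of TM1py's API; the `split_cells` name and the dict-of-cells shape are assumptions for the example:

```python
from itertools import islice

def split_cells(cells, chunk_size):
    """Split a {coordinates: value} dict into chunks of at most chunk_size cells."""
    it = iter(cells.items())
    while True:
        chunk = dict(islice(it, chunk_size))
        if not chunk:
            break
        yield chunk

# 640'000 cell updates with a 64'000-cell chunk size -> 10 chunks
cells = {("Element " + str(i),): i for i in range(640_000)}
chunks = list(split_cells(cells, 64_000))
print(len(chunks))      # 10
print(len(chunks[0]))   # 64000
```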
-
Based on the discussions I have had recently, I think TM1py should provide a default value for the chunk size. By the way, if you do investigate further or manage to quantify something, it would be great if you could share your findings. I would like to provide a sensible default value.
-
Hello all,
Is there an ideal method or best practice for determining the best chunk size when writing data with the async functions? I heard from someone at a conference that 30k was an ideal chunk size for TM1. I'm guessing it all depends on the cube's size and the complexity of its rules.
I've been using that 30k chunk size for everything so far, and the individual async threads usually complete in about 0 to 0.2 seconds. Is it better to spread the chunks evenly across the cores to avoid any threads waiting (i.e., total records / cores)? Has anyone produced a script for testing out ranges of chunk sizes?
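For what it's worth, the "total records / cores" idea can be sketched like this. The function name and the worker count are illustrative assumptions, not TM1py parameters:

```python
import math

def even_chunk_size(total_cells, max_workers):
    """One chunk per worker, rounded up so no cells are left over."""
    return math.ceil(total_cells / max_workers)

# e.g. 640,000 cells spread across 8 workers -> 80,000 cells per chunk
print(even_chunk_size(640_000, 8))  # 80000
```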
Thanks,
Danny