Replies: 2 comments
-
Hi, this isn't particularly surprising. npTDMS has to do a lot more work than just reading the raw file data, and Python is not fast. The file structure can also affect read time a lot: files with many small segments will be much slower to read than files with a smaller number of large segments. There's some relevant discussion in #249.

I don't think there are any simple ways to significantly improve performance. Some work could possibly be parallelised, but the TDMS format is inherently sequential, as the way segments are interpreted depends on the segments that came before.

I started experimenting with writing TDMS parsing code in Rust (https://github.com/adamreeve/rstdms), which is significantly faster and could be used as a backend for npTDMS, but this would be a large amount of work that I don't really have time for.
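If you haven't already, it may also be worth reading in file order with the streaming API rather than issuing many offset-based reads. A minimal sketch, assuming npTDMS's `TdmsFile.open`/`data_chunks` streaming interface, with placeholder file, group and channel names:

```python
from nptdms import TdmsFile

# Open in streaming mode so data is only read when requested
with TdmsFile.open("big_file.tdms") as tdms_file:  # placeholder path
    # data_chunks() yields data in the order it is laid out in the file,
    # so each segment is read once, sequentially
    for chunk in tdms_file.data_chunks():
        channel_chunk = chunk["group"]["channel"]  # placeholder names
        data = channel_chunk[:]  # numpy array for this chunk
        # ... process this block of samples ...
```

Note the chunk sizes here follow the file's own segment layout, so a file written with many small segments will still produce many small chunks.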
-
This PR might help a bit: #342
-
Hello,

I'm using npTDMS to read very large files created by LabVIEW (>100 GB). I read them with read_data in order to process each file block by block. However, read_data is very slow compared to a simple Python open+read.

I benchmarked reading a 308 MB file in blocks of 10,000 samples:

npTDMS: 50 minutes
Plain read: 255 milliseconds!
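For reference, the comparison was roughly of this shape (file, group and channel names are placeholders, and this assumes `read_data(offset, length)` on a file opened with `TdmsFile.open`):

```python
import time
from nptdms import TdmsFile

BLOCK = 10_000  # samples per block

# Time npTDMS reading the channel block by block
start = time.perf_counter()
with TdmsFile.open("data.tdms") as tdms_file:  # placeholder path
    channel = tdms_file["group"]["channel"]  # placeholder names
    for offset in range(0, len(channel), BLOCK):
        block = channel.read_data(offset, BLOCK)
print(f"npTDMS: {time.perf_counter() - start:.3f} s")

# Time a plain binary read of the same file for comparison
start = time.perf_counter()
with open("data.tdms", "rb") as f:
    while f.read(BLOCK * 8):  # assumes 8 bytes (float64) per sample
        pass
print(f"plain read: {time.perf_counter() - start:.3f} s")
```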