I noticed that `AudioBuffer` stores multi-channel sample data in a planar fashion, i.e. each channel is stored as a contiguous slice. In practice this leads to processing overhead that could be avoided if the channels were stored interleaved: most, if not all, file formats store their samples interleaved, and the same holds for I/O APIs. As an example, when playing back an audio file there is both a de-interleaving and a re-interleaving operation involved.
I would be curious to hear the rationale for the current design, because honestly I can't find a good one myself. Would you consider interleaved sample storage in the future? I realize it would be a breaking change with a possibly large impact on the architecture, but overall it seems like the better design to me.
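To make that round trip concrete, here is a minimal sketch in plain Rust, assuming `f32` samples. The `interleave` and `deinterleave` helpers are hypothetical, written only to illustrate the layout conversion; they are not Symphonia's API.

```rust
/// Interleave planar channel data into a single frame-ordered buffer.
/// All channel slices must have the same length.
fn interleave(planar: &[Vec<f32>], out: &mut Vec<f32>) {
    let frames = planar.first().map_or(0, |ch| ch.len());
    assert!(planar.iter().all(|ch| ch.len() == frames));
    out.clear();
    out.reserve(frames * planar.len());
    for frame in 0..frames {
        for ch in planar {
            out.push(ch[frame]);
        }
    }
}

/// Split an interleaved buffer back into one contiguous Vec per channel.
fn deinterleave(interleaved: &[f32], channels: usize) -> Vec<Vec<f32>> {
    let mut planar = vec![Vec::new(); channels];
    for frame in interleaved.chunks_exact(channels) {
        for (ch, &sample) in planar.iter_mut().zip(frame) {
            ch.push(sample);
        }
    }
    planar
}

fn main() {
    // Stereo test signal: left channel ramps up, right ramps down.
    let planar = vec![vec![0.0_f32, 0.1, 0.2], vec![0.9, 0.8, 0.7]];
    let mut interleaved = Vec::new();
    interleave(&planar, &mut interleaved);
    // Interleaved layout is frame-major: L0 R0 L1 R1 L2 R2.
    assert_eq!(interleaved, [0.0, 0.9, 0.1, 0.8, 0.2, 0.7]);
    // De-interleaving recovers the original planar channels.
    assert_eq!(deinterleave(&interleaved, 2), planar);
}
```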
Replies: 1 comment

OS APIs generally use the interleaved format because those are streaming interfaces and chunking data would introduce a baseline latency. However, it is not true that most formats store samples interleaved; it is actually the opposite. Therefore, there will be an interleaving step somewhere in your audio pipeline. I recognize that Symphonia's current interface for doing this with […] Please have a look and let me know what you think. On that note, many audio DSPs (e.g., the […]
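The reply breaks off before naming specific DSPs, but a common argument for planar storage is that per-channel processing then runs over contiguous memory. A minimal sketch, again with a hypothetical helper (`apply_gains`) rather than anything from Symphonia:

```rust
/// Apply a per-channel gain to planar audio. With planar storage each
/// channel is one contiguous slice, so the inner loop is a simple
/// linear pass over memory.
fn apply_gains(planar: &mut [Vec<f32>], gains: &[f32]) {
    for (ch, &gain) in planar.iter_mut().zip(gains) {
        for sample in ch.iter_mut() {
            *sample *= gain;
        }
    }
}

fn main() {
    let mut planar = vec![vec![0.5_f32; 4], vec![0.25; 4]];
    // Double the left channel, quadruple the right, independently.
    apply_gains(&mut planar, &[2.0, 4.0]);
    assert_eq!(planar[0], [1.0, 1.0, 1.0, 1.0]);
    assert_eq!(planar[1], [1.0, 1.0, 1.0, 1.0]);
}
```

With interleaved storage the same loop would have to step through the buffer with a stride equal to the channel count, which is less cache-friendly and harder for the compiler to auto-vectorize.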