Get raw header/indices from file? #121
Comments
Hi! Thanks! :-) Is this the same request as this one? #116
Yep, seems similar, so I'll close this one and comment there 👍
Actually, I had a look at that question, but this one is a bit simpler. There, the OP wants to add the header back to a new file, which can be complicated when header segments are scattered all over the place. In my case I merely want to know the position where the last header segment was seen: if I read 256 KB and find the last bit of header at 200 KB, then that's the answer. I don't try to put the header back into a file; I just need it stored in the backend separately from the file. I think this is much simpler than that issue, since we only need to keep track of the index of the rightmost header position seen so far while parsing.
Ah, okay, that does seem much simpler. It would also make for interesting statistics, to know how large the metadata blocks usually are in practice. It would complicate the code, though; I will have to think a bit more about this.
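The "track the rightmost position while parsing" idea can be sketched in isolation. The following is a minimal illustration for JPEG specifically, not the library's code: it walks the marker segments, remembers the end offset of the last metadata segment (APPn or COM), and stops at the start-of-scan marker where compressed image data begins.

```python
import struct

def last_header_offset(data: bytes) -> int:
    """Return the offset just past the last metadata (APPn/COM) segment
    seen while scanning JPEG marker segments, stopping at image data."""
    if data[:2] != b"\xff\xd8":  # a JPEG must open with the SOI marker
        raise ValueError("not a JPEG")
    pos, last = 2, 2
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:
            break  # lost marker sync; give up in this sketch
        marker = data[pos + 1]
        if marker == 0xDA:  # SOS: compressed image data begins here
            break
        length = struct.unpack(">H", data[pos + 2:pos + 4])[0]
        segment_end = pos + 2 + length  # the length field counts itself
        if 0xE0 <= marker <= 0xEF or marker == 0xFE:  # APPn or COM
            last = segment_end
        pos = segment_end
    return last
```

With this, only `data[:last_header_offset(data)]` would need to be shipped to the backend, rather than a fixed 256 KB chunk.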
Thanks for your library. It's very well made and the API is a beauty to use.
I just wonder if there is a way to inspect at what point the header was found. For example, something like below:
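(The original snippet did not survive in this capture. A hypothetical sketch of the kind of API being asked for might look like the following; `parse`, `ParseResult`, and `header_end` are invented names for illustration, not the library's actual API.)

```python
from dataclasses import dataclass

@dataclass
class ParseResult:
    tags: dict
    header_end: int  # offset just past the last header segment seen

def parse(data: bytes) -> ParseResult:
    # Real parsing would happen here; a fixed offset stands in for the
    # rightmost header position found while scanning the input.
    return ParseResult(tags={}, header_end=512)

buf = bytes(256 * 1024)               # the 256 KB chunk read today
result = parse(buf)
raw_header = buf[:result.header_end]  # only these bytes need uploading
```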
The reason I need this is that I have to send the raw header to the backend (in order to support various media files, not only JPG). At the moment I have to send 256 KB of data regardless of whether the header sits only in the first 512 bytes, for example. This becomes a real issue when dealing with hundreds of files.
Are there plans to support this? Or is it something that already exists?