
Fixes #225 zfs-check: efficient handling of sparse files #234

Open
wants to merge 2 commits into master
Conversation

@kyle0r commented Nov 22, 2023

Per #225, this PR offers a solution; critique and changes are very welcome.

The PR introduces two main changes:

  1. If the xxhash module is available, xxh3_64 becomes the default --hash algorithm; otherwise it falls back to sha1.
  2. Implementation of a new argument, zfs-check --hash, which enables invocation with a specific hash algorithm.

This PR proposes using xxHash, an extremely fast non-cryptographic hash algorithm, as the default hash algorithm for zfs-check. This algorithm is able to hash sparse chunks efficiently. The previous revision's hardcoded sha1 algorithm was inefficient at handling sparse chunks, as documented in #225.

You can get a quick idea of the performance of various hash algorithms here: https://github.com/Cyan4973/xxHash#benchmarks. It's clear that sha1 is near the bottom of the leaderboard.

💡 It might be desirable to add xxhash to requirements.txt, BUT the caveat is that the operating system needs the shared libxxhash library installed. I've omitted this for now, deferring to the project leaders to make a decision on it.
To manually install xxhash in a Python env: python3 -m pip install --upgrade xxhash.

Note that the code in the PR gracefully handles the absence of the xxhash module and implements fallback logic.

BlockHasher.py

  • hash_class is sourced from CLI args instead of being hardcoded.
  • hash_factory() lays the groundwork to support arbitrary hash libs.
  • Detection and use of the xxhash lib.

ZfsCheck.py

  • Implements the new CLI arg --hash. The choices for the arg are generated based on what is detected in the Python env.
  • The input to --hash is validated against the arg choices.
  • Implements the helper method determine_algorithms_available(), which tries to pick a performant default with a fallback to sha1 (see the sketch below).
  • Detection and use of the xxhash lib.
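
For illustration, a condensed sketch of the detection, default selection and factory behaviour described above; the function names follow the PR description, but the exact implementation in the diff may differ:

```python
import hashlib

try:
    import xxhash  # optional dependency, see the requirements.txt note above
except ImportError:
    xxhash = None


def determine_algorithms_available():
    """Return the supported --hash choices, with the preferred default first."""
    choices = ["xxh3_64"] if xxhash is not None else []
    choices.append("sha1")
    return choices


def hash_factory(algorithm):
    """Create a fresh hash object for the requested algorithm."""
    if algorithm == "xxh3_64" and xxhash is not None:
        return xxhash.xxh3_64()
    return hashlib.new(algorithm)


# argparse wiring (sketch): the detected choices drive both validation and the default
# parser.add_argument("--hash", choices=determine_algorithms_available(),
#                     default=determine_algorithms_available()[0])
```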

@psy0rz (Owner) commented Nov 22, 2023

Very nice! However, I think it's a bit tricky to make the default algorithm dynamic: if one node has xxhash and the other node hasn't, it will look to the user as if all the checksums mismatch.

So maybe keep the default a safe sha1? We can add information to the performance tips page about installing xxhash and selecting it?

@kyle0r (Author) commented Nov 23, 2023

> Very nice! However, I think it's a bit tricky to make the default algorithm dynamic: if one node has xxhash and the other node hasn't, it will look to the user as if all the checksums mismatch.
>
> So maybe keep the default a safe sha1? We can add information to the performance tips page about installing xxhash and selecting it?

That is fair. I had not considered it because I'm used to being cognizant of the technical details of the checksum, etc.

Let me have a think. I agree it makes sense for the out-of-the-box / default behaviour to be very predictable; best user experience, etc.

Perhaps the stderr output should contain the checksum details and a hint pointing to the performance tips?

@kyle0r (Author) commented Nov 23, 2023

Additional thoughts: perhaps the stdout should contain a predictable header defining the specification of the operation (how zfs-check was invoked): the options/arguments used, block size, hash, etc. This header would form part of the comparison. In the end, two invocations with different arguments would likely produce different results.

This approach is also nice for the use case where someone stores the checksums long term, so they can be aware of the spec that was used to generate the checksums. Perhaps it could even offer a generated CLI invocation which could be copy-pasted for repeatability?

Hypothetically, in the --check use case, this header, if present, could be parsed by zfs-check to produce a like-for-like output?

@psy0rz (Owner) commented Nov 25, 2023

That sounds good: first line as a comment with the "hashing" config? (Things like the hash, chunk size and skip option?)

@kyle0r (Author) commented Nov 25, 2023

> That sounds good: first line as a comment with the "hashing" config? (Things like the hash, chunk size and skip option?)

Yes, along those lines. I was thinking we could make it an object like JSON so it's easy to parse. I think using the argparse module we could output the exact invocation, and this could also be fed back into zfs-check --check for a like-for-like invocation, and/or be copy-pasted by the user. In the JSON object, additional metadata could be added, like invocation timestamp, version, stuff like that?

It looks like the BlockHasher.compare() method reads the --check file handle line by line, so it should be fairly straightforward to check whether the first line contains the checksum spec header; a rough sketch follows.
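
For illustration only, one way such a first-line spec header could be emitted and later detected; the prefix, field names and JSON shape here are hypothetical and not something agreed in this thread:

```python
import json

# Hypothetical spec header written as the first line of zfs-check output.
spec = {"version": 1, "hash": "xxh3_64", "chunk_size": 1048576, "skip": 0}
print("# zfs-check-spec " + json.dumps(spec))


# On the --check side, the first line could be sniffed before normal parsing,
# e.g. from within BlockHasher.compare() or by the caller that opens the file.
def read_spec_header(first_line):
    prefix = "# zfs-check-spec "
    if first_line.startswith(prefix):
        return json.loads(first_line[len(prefix):])
    return None
```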

You've probably seen that a new OpenZFS silent corruption bug has recently surfaced (silent for ~18 months?!). I'm currently analysing the available info to see how I might be impacted and how to "check" for it 😉. See openzfs/zfs#15526 and https://redd.it/1826lgs. I pinged you a chat on Reddit if you want to chat about this PR or the corruption.

@psy0rz (Owner) commented Dec 4, 2023

Oh no, another silent corruption bug in ZFS? :(

I think the header should be read earlier: BlockHasher is a low-level function, and by then it's too late to change any settings? But I have to look into it.

@psy0rz (Owner) commented Dec 5, 2023

Coincidentally, the data corruption bug also seems to be related to sparse file handling. They just released a fix: https://github.com/openzfs/zfs/releases/tag/zfs-2.1.14
