SBFT24 Competition #1941
base: master
Conversation
Co-authored-by: Philipp Görz <phi-go@users.noreply.github.com>
(cherry picked from commit 1a31072)
I think we can.
BTW, there were some …
So it seems that in this code:

```python
def set_up_corpus_directories(self):
    """Set up corpora for fuzzing. Set up the input corpus for use by the
    fuzzer and set up the output corpus for the first sync so the initial
    seeds can be measured."""
    fuzz_target_name = environment.get('FUZZ_TARGET')
    target_binary = fuzzer_utils.get_fuzz_target_binary(
        FUZZ_TARGET_DIR, fuzz_target_name)
    input_corpus = environment.get('SEED_CORPUS_DIR')
    os.makedirs(input_corpus, exist_ok=True)
    if not environment.get('CUSTOM_SEED_CORPUS_DIR'):
        _unpack_clusterfuzz_seed_corpus(target_binary, input_corpus)
    else:
        _copy_custom_seed_corpus(input_corpus)
```

The variable `target_binary` comes from that call, and in that function the target binary path is only returned if the file exists. So this is probably a build error, not specifically a corpus issue. This is under the assumption that …
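For context, here is a minimal sketch of the existence check being discussed, assuming the helper simply returns None when the binary is missing; the real fuzzer_utils.get_fuzz_target_binary in FuzzBench may differ in detail:

```python
import os


def get_fuzz_target_binary(fuzz_target_dir, fuzz_target_name):
    """Hypothetical sketch: return the fuzz target path, or None if missing."""
    target_path = os.path.join(fuzz_target_dir, fuzz_target_name)
    if os.path.isfile(target_path):
        return target_path
    # If the build failed, no binary exists, so the caller gets None and the
    # later corpus-unpacking step fails even though the corpus itself is fine.
    return None
```

Under that assumption, a failed build would surface here as a corpus setup failure rather than as a build error.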
I modified part of the Makefile generation to support the mutation testing docker builds, so maybe I broke something there. @alan32liu could you take a look at the following changes to the files? I thought those should be fine, so another pair of eyes would be good:
Oh, I see now. On the cloud it seems the mutation analysis build process is used for the fuzzer builds, which is definitely wrong... Though, I don't yet understand why that happens.
I did not notice anything either: they replicate … However, I noticed that … In fact, ignoring the build errors, the … Not too sure about the build errors, though. The log is not very useful:
I suppose one thing we can do is to add extensive logs related to these build errors (which seem to have more victims and deserve a higher priority).
Luckily the build logs do show something, see my other comment. They are not available for a local build, though; I'll try to patch that in, since maybe I messed something up in the Dockerfile dependencies. I can test that locally for now. Also, thank you for taking a look.
Ok, I can confirm that mutation testing is not used locally to build bloaty_fuzz_target-libfuzzer and, more importantly, it builds without a problem. So it should be something in the GCB-specific code, which I do not really understand that well. Also, the build logs are truncated so we do not see the remaining info: https://www.googleapis.com/download/storage/v1/b/fuzzbench-data/o/sbft-standard-cov-01-16%2Fbuild-logs%2Fbenchmark-bloaty_fuzz_target-fuzzer-libfuzzer.txt?generation=1705424198204020&alt=media. However, before I dig deeper, we can still complete the evaluation without fixing this. For now we planned to do the mutation analysis part locally on our blades; we do not support that many benchmarks, so this is fine. The coverage for all benchmarks we could do on a branch without our changes and only the fuzzer PRs.
The missing information seems to just be truncated from the build log. I changed the code a bit to store everything for the gcb_build execute call, which should hopefully reveal the missing info. Let's try the simple test experiment, if you feel comfortable with that.
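As an illustration of that kind of change (not the actual FuzzBench code), one way to keep the complete output of a build command is to capture it and write it to a file; the function name, command, and log path below are placeholders:

```python
import subprocess


def run_build_and_store_log(command, log_path):
    """Illustrative sketch: run a build command and persist its full output.

    This is not FuzzBench's actual gcb_build call; the point is only that
    the complete stdout/stderr is kept, so nothing is lost to truncation.
    """
    result = subprocess.run(command, capture_output=True, text=True, check=False)
    with open(log_path, 'w') as log_file:
        log_file.write(result.stdout)
        log_file.write(result.stderr)
    return result.returncode
```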
How about:
Because it includes good success/failure comparisons on both fuzzers and benchmarks.
Let me know if you'd like to add more.
Thank you for looking into this so thoroughly. This sounds like a plan. If you want to reduce compute more, even one-hour runs and a few trials should give us enough to debug, though I don't know how to do this with flags.

/gcbrun run_experiment.py -a --mutation-analysis --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name sbft-dev-01-18 --fuzzers libafl libfuzzer tunefuzz pastis --benchmarks freetype2_ftfuzzer jsoncpp_jsoncpp_fuzzer bloaty_fuzz_target lcms_cms_transform_fuzzer
Hello, I'm the author of BandFuzz. I've noticed that stb_stbi_read_fuzzer shows NaN in the 01-16 report, but it appears in the mutation tests. Additionally, we also have NaN values for harfbuzz_hb-shape-fuzzer and sqlite3_ossfuzz in the sbft-standard-cov-01-18 report. I am able to successfully build and run these three targets on my local setup. I have reviewed the build logs above but haven't found any valid reason for this discrepancy.
Oh, the 01-16 experiment was in this PR; sadly there still seem to be some issues with our mutation testing integration using GCB, and we didn't have time to look into that yet, as discussed above. The other PR is the master branch plus the competitors' fuzzers, so there shouldn't be an issue there.
Hi @phi-go, thanks for your patience in answering our questions and your efforts in hosting this competition! I have questions regarding the final metric. Will we still use two sets of benchmarks this year (bug/cov, both public and new private programs like last year) and measure coverage and unique bugs? Or will the mutation analysis results also be taken into consideration for those programs that are compatible? Do we use ranking or a normalized score as the final metric?
@kdsjZh thank you for participating, which makes it all much more worthwhile! We want to use mutation analysis as the main measurement result; after all, it was the main reason for us to do the competition in the first place, though this is not final yet. You will get the FuzzBench coverage on the default benchmarks and the new mutation analysis results on a limited set of benchmarks. Sadly we didn't have time to do a more extensive evaluation. Regarding using ranking or a normalized score, we have not yet made a final decision. I expect we would use something along the lines of the geomean of killed mutants across the benchmarks. If you or others want to provide input on what we should choose, please feel free to do so.
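For illustration only, a geomean over per-benchmark killed-mutant fractions could look like the following sketch; the scoring formula is not final and the numbers are made up:

```python
import math


def geomean(values):
    """Geometric mean; returns 0.0 if any value is zero."""
    if any(v == 0 for v in values):
        return 0.0
    return math.exp(sum(math.log(v) for v in values) / len(values))


# Hypothetical example: one fuzzer's killed-mutant fraction per benchmark.
killed_fractions = [0.42, 0.37, 0.55]
print(geomean(killed_fractions))  # ~0.44
```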
@alan32liu do you think we could run the experiment mentioned here: #1941 (comment)? This is not urgent; I would just like to fix the issue with running mutation testing on the cloud version of FuzzBench.
Thanks for your reply!
If I understand correctly, "mutants killed" is used to assess fuzzers' bug-finding capability. In that case, we'll only collect mutation analysis and coverage results on the default 23 coverage programs, right? Do we have an extended private coverage benchmark like last year to avoid overfitting?
To be clear, for mutation testing we plan to provide data for 9 subjects. Sadly, we didn't have time to get more to work. We do not plan for extended private benchmarks. We would have liked a more thorough evaluation, including private subjects, as you rightly suggest; it just was not possible for us time-wise. However, note that mutation testing is more resistant to overfitting than a coverage- or bug-based measurement (@vrthra might want to chime in here).
As @phi-go mentions, we did not have enough time for extended benchmarks.
/gcbrun run_experiment.py -a --mutation-analysis --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name sbft-dev-01-20 --fuzzers libafl libfuzzer tunefuzz pastis --benchmarks freetype2_ftfuzzer jsoncpp_jsoncpp_fuzzer bloaty_fuzz_target lcms_cms_transform_fuzzer
Experiment …
This PR combines all fuzzers submitted to SBFT24 and the mutation measurer to allow experiments for the competition.