
Is there a license on the benchmarks? #1

Open
pavpanchekha opened this issue May 25, 2023 · 4 comments
@pavpanchekha

When @shoaibkamil gave a talk at the FPBench Community Meeting, he and I briefly discussed merging the benchmarks here into FPBench. Naturally, some are already in FPBench, but there are many extra ones here. However, I have a few questions / blockers on this:

  1. I don't see a license anywhere on the benchmarks. Is it possible to license this repository under something permissive? FPBench uses the MIT license, so that's ideal. If you can't license all of the code, perhaps you could license just the methods.hpp file?
  2. Is there anything you can say about where the benchmarks originally came from? Shoaib mentioned that some were extracted from some internal tools. Is that correct? Does that refer to the exprN expressions? What's the level of detail you could give about these tools? (Is it at least safe to say they are doing "graphics"?) Are the extra_functionN benchmarks randomly generated by the generator in this repo? We could add this to the metadata.
  3. Is anything published about these benchmarks? Just the website? If there is we typically include a citation in the metadata.
  4. Any thoughts about which benchmarks are worth including in FPBench? My guess is: include the exprN benchmarks, don't include the ones that are just one function, maybe include the extra_functionN benchmarks. I'm willing to include randomly generated benchmarks if there's some sense they were filtered & selected, or important for reproducibility, or something like that.
@pavpanchekha (Author)

And a minor question—should I file bugs if I find them? For example, expr2 doesn't seem to test whether g + h is zero.
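The missing check described above can be sketched as follows. This is a hypothetical reconstruction: expr2's actual body is not shown in this thread, and only the `g + h` denominator is taken from the comment; the expression shape, variable ranges, and helper names are invented for illustration.

```python
# Hedged sketch of the kind of precondition check being asked about.
# expr2_guarded is a stand-in: only the (g + h) denominator comes from
# the discussion; the rest of the expression is hypothetical.
import random

def expr2_guarded(a, g, h):
    # Hypothetical expression with a (g + h) denominator.
    if g + h == 0:
        raise ZeroDivisionError("precondition violated: g + h must be nonzero")
    return a / (g + h)

def sample_valid_inputs(n, lo=-1.0, hi=1.0):
    # Reject samples that violate the precondition instead of
    # silently dividing by zero.
    out = []
    while len(out) < n:
        a, g, h = (random.uniform(lo, hi) for _ in range(3))
        if g + h != 0:
            out.append((a, g, h))
    return out

for a, g, h in sample_valid_inputs(3):
    expr2_guarded(a, g, h)  # safe by construction
```

Pre-generated input data (as the maintainer notes below) sidesteps this at runtime, but an explicit guard documents the benchmark's domain for anyone re-sampling inputs.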

@txstc55 (Collaborator) commented May 25, 2023

  1. I can just add an MIT license to the repo.
  2. The exprN expressions are randomly generated by a Python script, mainly just to test the library's capability. The script is pretty naive but does try to generate constraints.
  3. There is a paper submitted to PPAM 2022: https://cims.nyu.edu/gcl/papers/2022-Intervals.pdf
  4. Any composite expression that breaks Boost is worth including. However, that may mainly be because Boost's implementation of trig functions for intervals is quite naive. There is also composite expression 1, which makes filib++'s pred_succ mode produce inconsistent results on Linux and macOS; that might be interesting. Other than that, the results are quite consistent, nothing unusual.

Since this is an interval project and we pre-generated the data for consistency, we didn't have the issue of g + h = 0. However, if you think this is an issue, just send a pull request and I will merge it.
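The "naive interval trig" issue mentioned in point 4 can be illustrated with a toy sketch. This is not Boost's actual code, and it omits the outward rounding a real interval library needs to be rigorous in floating point; it only shows why an interval sine that looks at endpoints alone underestimates the true range.

```python
# Illustrative sketch (not Boost.Interval's implementation): an
# endpoint-only interval sine vs. one that accounts for interior
# extrema. No outward rounding, so this is not a rigorous enclosure.
import math

def naive_interval_sin(lo, hi):
    # WRONG in general: assumes sin is monotone on [lo, hi].
    a, b = math.sin(lo), math.sin(hi)
    return (min(a, b), max(a, b))

def enclosing_interval_sin(lo, hi):
    # Also check interior extrema of sin, located at pi/2 + k*pi
    # (even k gives maxima, odd k gives minima).
    candidates = [math.sin(lo), math.sin(hi)]
    k = math.ceil((lo - math.pi / 2) / math.pi)
    x = math.pi / 2 + k * math.pi
    while x <= hi:
        candidates.append(math.sin(x))
        x += math.pi
    return (min(candidates), max(candidates))

lo, hi = 0.0, 3.0  # this interval contains the maximum at pi/2
print(naive_interval_sin(lo, hi))      # upper bound misses sin(pi/2) = 1
print(enclosing_interval_sin(lo, hi))  # upper bound reaches 1.0
```

Composite expressions amplify this kind of overestimation or underestimation, which is consistent with composite expressions being the ones that stress the libraries.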

@pavpanchekha (Author)

If we can get this repository MIT licensed, I'll work on adding it to FPBench. Are the expressions in the paper the ones you think should be merged into FPBench? Or would you rather merge all of them?

@txstc55 (Collaborator) commented Jun 1, 2023

I already gave this repo an MIT license. There's no need to merge all of them. As mentioned, composite expression 1, which made filib++ produce inconsistent results on different platforms, might be interesting to add to the benchmark suite.
