
Provide benchmarks for each available strategy given a dataset of libraries with vulnerabilities #29

Open · 3 tasks
antoine-coulon opened this issue Apr 23, 2022 · 0 comments
Labels: benchmark (Addition of an exhaustive benchmark), documentation (Improvements or additions to documentation)


@antoine-coulon (Member)

antoine-coulon commented Apr 23, 2022

The main idea of @nodesecure/vuln is to expose a set of strategies to detect vulnerabilities within a given project.

In my opinion, it would be great to run benchmarks for each strategy against a dataset of open-source libraries covering both well-known and rare vulnerabilities.
This would let consumers know the tradeoffs of each @nodesecure/vuln strategy given their project environment and constraints (e.g. the npm strategy requires the specific package-lock.json lockfile to be present).

Now that the objective should be clear enough, we must determine three things:

  • the amount of data we must collect, and then use, to provide a representative dataset for each strategy
  • the criteria by which we determine that a strategy has effectively caught a given vulnerability in a library (i.e. do we take into account the vulnerability severity ("medium", "high", etc.), or do we simply count every vulnerability a strategy catches?)
  • the output and format of each benchmark
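To make the second point concrete, here is a minimal sketch of how a per-strategy score could be computed against one dataset entry, using the simplest criterion (a vulnerability counts as caught regardless of severity). All names here are illustrative assumptions, not the @nodesecure/vuln API:

```javascript
// Hypothetical scoring helper for one (strategy, library) benchmark run.
// `reportedIds` is what the strategy detected; `knownIds` is the ground
// truth for that library in the dataset (e.g. CVE or GHSA identifiers).
function scoreStrategy(reportedIds, knownIds) {
  const known = new Set(knownIds);
  // De-duplicate reports and keep only those matching a known vulnerability.
  const caught = new Set(reportedIds.filter((id) => known.has(id)));

  return {
    caught: caught.size,
    missed: known.size - caught.size,
    // A library with no known vulnerabilities is trivially a perfect score.
    detectionRate: known.size === 0 ? 1 : caught.size / known.size
  };
}
```

A severity-aware variant would instead weight each caught vulnerability by its severity before aggregating, which is exactly the tradeoff the second bullet asks us to decide on.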

@fraxken suggested that we could create a /benchmark root directory.

@antoine-coulon antoine-coulon added documentation Improvements or additions to documentation benchmark Addition of an exhaustive benchmark labels Apr 23, 2022
@antoine-coulon antoine-coulon self-assigned this Apr 26, 2022