From f579fa2a82e1473d0a3b4b573b1089bf9c743ad4 Mon Sep 17 00:00:00 2001
From: Madhav Mathur
Date: Mon, 25 Dec 2023 16:42:09 -0500
Subject: [PATCH 1/2] Update README.md

---
 README.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index db4ca51..928d141 100644
--- a/README.md
+++ b/README.md
@@ -97,7 +97,7 @@
 ## Installation
 
 ### Install via `pip`
-We provide a Python package *promptbench* for users who want to start evaluation quickly. Simply run
+We provide a Python package *promptbench* for users who want to start evaluation quickly. Simply run:
 ```sh
 pip install promptbench
 ```
@@ -129,12 +129,12 @@ then use pip to install required packages:
 pip install -r requirements.txt
 ```
 
-Note that this only installed basic python packages. For Prompt Attacks, it requires to install textattacks.
+Note that this only installed basic python packages. For Prompt Attacks, it is required to install TextAttack.
 
 ## Usage
 
-promptbench is easy to use and extend. Going through the bellowing examples will help you familiar with promptbench for quick use, evaluate an existing datasets and LLMs, or creating your own datasets and models.
+promptbench is easy to use and extend. Going through the below examples will help you get familiar with promptbench for quick use, evaluate existing datasets and LLMs, or create your own datasets and models.
 
 Please see [Installation](#installation) to install promptbench first.
@@ -212,7 +212,7 @@ Please refer to our [benchmark website](https://llm-eval.github.io/) for benchma
 
 ## Acknowledgements
 
-- [textattacks](https://github.com/textattacks)
+- [TextAttack](https://github.com/QData/TextAttack)
 - [README Template](https://github.com/othneildrew/Best-README-Template)
 - We thank the volunteers: Hanyuan Zhang, Lingrui Li, Yating Zhou for conducting the semantic preserving experiment in Prompt Attack benchmark.
@@ -232,7 +232,7 @@ Please refer to our [benchmark website](https://llm-eval.github.io/) for benchma
 
 ## Citing promptbench and other research papers
 
-Please cite us if you fine this project helpful for your project/paper:
+Please cite us if you find this project helpful for your project/paper:
 
 ```
 @article{zhu2023promptbench2,

From e68eae2b3f77e16f7d6ebb164dd366222bf18039 Mon Sep 17 00:00:00 2001
From: Madhav Mathur
Date: Mon, 25 Dec 2023 17:12:54 -0500
Subject: [PATCH 2/2] Fix more typos in README

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 928d141..bebd9b0 100644
--- a/README.md
+++ b/README.md
@@ -102,7 +102,7 @@ We provide a Python package *promptbench* for users who want to start evaluation
 pip install promptbench
 ```
 
-Note that the pip installation could be behind the recent updates. So, if you want to use the latest features or develop based on our code, you should intall via Github.
+Note that the pip installation could be behind the recent updates. So, if you want to use the latest features or develop based on our code, you should install via GitHub.
 
 ### Install via GitHub
 
@@ -129,12 +129,12 @@ then use pip to install required packages:
 pip install -r requirements.txt
 ```
 
-Note that this only installed basic python packages. For Prompt Attacks, it is required to install TextAttack.
+Note that this only installed basic python packages. For Prompt Attacks, you will also need to install [TextAttack](https://github.com/QData/TextAttack).
 
 ## Usage
 
-promptbench is easy to use and extend. Going through the below examples will help you get familiar with promptbench for quick use, evaluate existing datasets and LLMs, or create your own datasets and models.
+promptbench is easy to use and extend. Going through the examples below will help you get familiar with promptbench for quick use, evaluate existing datasets and LLMs, or create your own datasets and models.
 
 Please see [Installation](#installation) to install promptbench first.
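
The patched README notes that Prompt Attacks additionally require TextAttack but does not show the command. A minimal setup sketch, assuming both packages are installed from PyPI under their published names (`promptbench`, `textattack`):

```shell
# Install promptbench itself, then TextAttack for the Prompt Attacks feature.
# Package names assume the PyPI releases referenced by the patched README.
pip install promptbench
pip install textattack
```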