
Commit

update readme
dannylee1020 committed Dec 25, 2024
1 parent 80518ee commit b6b8976
Showing 1 changed file with 10 additions and 4 deletions.
14 changes: 10 additions & 4 deletions README.md
@@ -24,14 +24,23 @@ OpenPO simplifies building synthetic dataset with AI feedback and state-of-art e

- 💾 **Flexible Storage:** Out-of-the-box storage providers for HuggingFace and S3 (a rough sketch of the kind of upload these wrap follows below).
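
The storage providers themselves are not part of this diff, so the snippet below is only a rough illustration of the kind of upload the HuggingFace provider abstracts, written against the plain `datasets` library; the repo id and record fields are invented for the example.

```python
# Rough illustration only: OpenPO's storage provider classes are not shown in
# this diff, so this uses the `datasets` library directly to show the kind of
# upload the HuggingFace provider would handle. The repo id and record fields
# are invented, and it assumes you are already authenticated to the Hub
# (e.g. via `huggingface-cli login`).
from datasets import Dataset

preference_records = {
    "prompt": ["Explain preference optimization."],
    "chosen": ["A clear, grounded explanation..."],
    "rejected": ["A vague, unhelpful answer..."],
}

Dataset.from_dict(preference_records).push_to_hub("your-username/openpo-preference-data")
```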


## Installation
### Install from PyPI (recommended)
OpenPO uses pip for installation. Run the following commands in the terminal to install OpenPO and any optional extras you need:

```bash
pip install openpo

# to use vllm
pip install openpo[vllm]

# for running evaluation models
pip install openpo[eval]
```



### Install from source
Clone the repository first, then run the following command:
```bash
@@ -119,11 +128,8 @@ response = client.completion.generate(
```
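
The collapsed hunk above ends inside a `client.completion.generate(` call; the diff does not show its arguments, so the sketch below is only a guess at what such a call could look like, with the import path and every parameter name assumed rather than taken from the docs.

```python
# Hypothetical sketch -- the diff only shows that `client.completion.generate(`
# exists. The import path, constructor, and parameter names below are
# assumptions, not the documented signature; check the OpenPO docs.
from openpo import OpenPO  # import path assumed

client = OpenPO()  # provider API keys assumed to be picked up from the environment

response = client.completion.generate(
    models=["openai/gpt-4o", "anthropic/claude-3-5-sonnet"],  # model id format assumed
    messages=[{"role": "user", "content": "Explain preference optimization."}],
)
print(response)
```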

### Evaluation
OpenPO offers various ways to synthesize your dataset. To run evaluation, first install extra dependencies by running
OpenPO offers various ways to synthesize your dataset.

```bash
pip install openpo[eval]
```

#### LLM-as-a-Judge
To use a single judge to evaluate your response data, use `evaluate.eval`:
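
A minimal sketch of what an `evaluate.eval` call might look like; only the method name comes from this README, so the client import and every parameter below are assumptions.

```python
# Minimal sketch of single-judge evaluation. Only `evaluate.eval` is named in
# the README; the import path and all parameters are assumptions -- check the
# OpenPO docs for the real signature.
from openpo import OpenPO  # import path assumed

client = OpenPO()  # judge-provider credentials assumed to come from the environment

result = client.evaluate.eval(
    models=["openai/gpt-4o"],  # the judge model (identifier format assumed)
    questions=["What is the capital of France?"],
    responses=[["Paris is the capital of France.", "I believe it is Lyon."]],
)
print(result)  # hypothetical: preference/ranking output from the judge
```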