Update README.md
Eve-ning authored Feb 26, 2023
# :arrow_forward: [**Try Out Opal on Streamlit**](https://opal-ai.streamlit.app/)

# :comet: opal
opal is an accuracy-prediction model.

It uses Neural Collaborative Filtering to learn associations between users and maps, then uses those
associations to predict new, never-before-seen scores.
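As a rough illustration of the idea (a minimal sketch only, not opal's actual NeuMF architecture), collaborative filtering learns one embedding vector per user and per map, then scores a pair by combining the two embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_maps, dim = 4, 5, 8

# One learned embedding vector per user and per map.
# (Here they are random stand-ins; in training they'd be fitted to real scores.)
user_emb = rng.normal(size=(n_users, dim))
map_emb = rng.normal(size=(n_maps, dim))

def predict_acc(user: int, map_: int) -> float:
    """Score a (user, map) pair from the interaction of their embeddings.

    A sigmoid squashes the raw interaction into an accuracy-like (0, 1) range.
    """
    raw = user_emb[user] @ map_emb[map_]
    return float(1 / (1 + np.exp(-raw)))

acc = predict_acc(0, 3)
assert 0.0 < acc < 1.0
```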
**Performance Error Graph**
![Performance Graph](models/V2_2023_01/error.png)

## :hourglass_flowing_sand: Project Status

Currently, opal is in early access, which means it will still have many rough edges!

However, we're working on minimizing these issues o wo)b

## :arrow_double_down: Dataset Used

I used the top 10K mania users' data from https://data.ppy.sh.
After preprocessing, we use

This model can thus help predict >300M unplayed scores!
We deem a player in separate years as a different user. This reflects
the player's improvement over time.
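In other words, each (user, year) pair becomes its own virtual user. A minimal sketch of that preprocessing step (an assumption about the idea, not opal's actual pipeline):

```python
# Each score row is tagged with a "virtual user" ID of the form "<USER_ID>/<YEAR>",
# so the same player in different years is modelled as two distinct users.
scores = [
    {"user_id": 2193881, "year": 2017, "map_id": 767046, "acc": 0.97},
    {"user_id": 2193881, "year": 2018, "map_id": 767046, "acc": 0.99},
]
virtual = [{**s, "uid": f"{s['user_id']}/{s['year']}"} for s in scores]

# Same player, two virtual users: the model can now learn year-on-year improvement.
assert virtual[0]["uid"] != virtual[1]["uid"]
```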

## :high_brightness: Usage

If you want to use this locally, you need Python and the packages listed in [requirements.txt](requirements.txt)

Below is a recipe on how to use it.
```py
# MODEL_DIR is a pathlib.Path pointing at the directory that holds the checkpoints;
# NeuMF is opal's Neural Collaborative Filtering model class.
path_to_model = MODEL_DIR / "V2_2023_01/checkpoints/epoch=5-step=43584.ckpt"
net = NeuMF.load_from_checkpoint(path_to_model.as_posix())

# This must be run to put the model in evaluation mode (disables training-only behavior).
net.eval()

# Single prediction
pred = net.predict('<USER_ID>/<YEAR>', '<MAP_ID>/<SPEED>')

# Batched predictions
preds = net.predict(['<USER_ID>/<YEAR>', '<USER_ID>/<YEAR>', ...],
                    ['<MAP_ID>/<SPEED>', '<MAP_ID>/<SPEED>', ...])
# <SPEED> must be -1, 0, or 1, where -1 is Half Time, 0 is normal time, and 1 is Double Time.

# E.g. predict Evening in Year 2017, on the map Triumph & Regret [Regret] at Double Time
pred = net.predict('2193881/2017', '767046/1')

# Note that you can, and should, predict in list format, as it's significantly faster.
# The map and user IDs can differ across entries!
preds = net.predict(['2193881/2017', '2193881/2018'], ['767046/1', '767046/0'])
```
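The `'<USER_ID>/<YEAR>'` and `'<MAP_ID>/<SPEED>'` keys are plain strings. If you build many of them, small helpers like these (hypothetical, not part of opal's API) can keep the format consistent:

```python
def user_key(user_id: int, year: int) -> str:
    """Build opal's '<USER_ID>/<YEAR>' user key."""
    return f"{user_id}/{year}"

def map_key(map_id: int, speed: int) -> str:
    """Build opal's '<MAP_ID>/<SPEED>' map key. speed: -1 = HT, 0 = NT, 1 = DT."""
    if speed not in (-1, 0, 1):
        raise ValueError("speed must be -1, 0, or 1")
    return f"{map_id}/{speed}"

print(user_key(2193881, 2017))  # → 2193881/2017
print(map_key(767046, 1))       # → 767046/1
```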

## :brain: AlphaOsu!
Currently, opal doesn't provide recommendations. However, you can try out [AlphaOsu!](https://alphaosu.keytoix.vip/).
- [AlphaOsu! GitHub](https://github.com/AlphaOSU)
- [Support AlphaOsu!](https://alphaosu.keytoix.vip/support)

## Annex

### Why not Score Metric?
Score is not straightforward to calculate and may be difficult to debug. Furthermore, score is no longer
of interest when calculating performance points.
