Log models #504
Merged
Changes from all commits (17 commits, all by ngrayluna):
- c091b11 first draft
- 7b6acab Added it to JA
- cef52b4 Added dropdown examples with more info
- 8998e4a Added noa s better wording
- 6baee2a More word smithing
- cc6d935 Added link to colab
- 82e905d lint checker
- 0ac7f52 More stuff
- 0c8379e WIP
- 7f4d176 Added info about name
- 995e6b8 Changed parameter names, more edits
- 50788d9 Vsomething
- 8e5aca7 Updated notebook example
- 8571aa2 Merge branch 'main' into model_reg_new_apis
- 7962523 V something +1
- fb48697 Version something + 1 + 1
- c41a91d last edits
@@ -0,0 +1,215 @@
---
displayed_sidebar: default
---

# Log models

The following guide describes how to log models to a W&B run and interact with them.
:::tip
The following APIs are useful for tracking models as part of your experiment tracking workflow. Use the APIs listed on this page to quickly log models to a run, in addition to metrics, tables, media, and other objects.

W&B suggests that you use [W&B Artifacts](../../artifacts/intro.md) if you want to:
- Create and keep track of different versions of serialized data besides models, such as datasets, prompts, and more.
- Explore [lineage graphs](../../artifacts/explore-and-traverse-an-artifact-graph.md) of a model or any other objects tracked in W&B.
- Interact with the model artifacts these methods create, such as [updating properties](../../artifacts/update-an-artifact.md) (metadata, aliases, and descriptions).

For more information on W&B Artifacts and advanced versioning use cases, see the [Artifacts](../../artifacts/intro.md) documentation.
:::
:::info
See this [Colab notebook](https://colab.research.google.com/github/wandb/examples/blob/ken-add-new-model-reg-api/colabs/wandb-model-registry/New_Model_Logging_in_W&B.ipynb) for an end-to-end example of how to use the APIs described on this page.
:::
## Log a model to a W&B run
Use the [`log_model`](../../../ref/python/run.md#log_model) method to log a model artifact that contains content within a directory you specify. The [`log_model`](../../../ref/python/run.md#log_model) method also marks the resulting model artifact as an output of the W&B run.
If you mark a model as the input or output of a W&B run, you can track the model's dependencies and associations, and view the lineage of the model within the W&B App UI. See the [Explore and traverse artifact graphs](../../artifacts/explore-and-traverse-an-artifact-graph.md) page within the [Artifacts](../../artifacts/intro.md) chapter for more information.
Provide the path where your model file(s) are saved to the `path` parameter. The path can be a local file, directory, or [reference URI](../../artifacts/track-external-files.md#amazon-s3--gcs--azure-blob-storage-references) to an external bucket such as `s3://bucket/path`.
Ensure you replace values enclosed in `<>` with your own.
```python
import wandb

# Initialize a W&B run
run = wandb.init(project="<your-project>", entity="<your-entity>")

# Log the model
run.log_model(path="<path-to-model>", name="<name>")
```
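As noted above, the `path` argument can also be a reference URI to an external bucket. The following is a minimal sketch of that pattern; the bucket and object path are hypothetical placeholders.

```python
import wandb

run = wandb.init(project="<your-project>", entity="<your-entity>")

# Log a model stored in an external bucket by reference instead of uploading it.
# The s3://my-bucket/models/resnet50 URI is a hypothetical placeholder.
run.log_model(path="s3://my-bucket/models/resnet50", name="resnet50-reference")

run.finish()
```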
Optionally provide a name for the model artifact with the `name` parameter. If `name` is not specified, W&B uses the basename of the input path prepended with the run ID as the name.
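For illustration, a minimal sketch that omits `name`; as described above, W&B then derives the name from the run ID and the path's basename, and you can confirm the assigned name in the run's artifacts view in the W&B App.

```python
# `name` omitted: W&B prepends the run ID to the path's basename to form the
# artifact name. Note the assigned name so you can retrieve the model later.
run.log_model(path="./model.h5")
```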
:::tip
Keep track of the name that you, or W&B, assigns to the model. You will need the name of the model to retrieve the model path with the [`use_model`](https://docs.wandb.ai/ref/python/run#use_model) method.
:::
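A minimal sketch of one way to do this, using a hypothetical model name of your choosing and the `use_model` method described later on this page:

```python
model_artifact_name = "MNIST"  # hypothetical name; pick one you can reference later

# Log the model under an explicit name...
run.log_model(path="<path-to-model>", name=model_artifact_name)

# ...and reuse the same variable later to retrieve the downloaded model path.
downloaded_model_path = run.use_model(name=model_artifact_name)
```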
See [`log_model`](../../../ref/python/run.md#log_model) in the API Reference guide for more information on possible parameters.
<details>

<summary>Example: Log a model to a run</summary>
```python
import os
import wandb
from tensorflow import keras
from tensorflow.keras import layers

config = {"optimizer": "adam", "loss": "categorical_crossentropy"}

# Initialize a W&B run
run = wandb.init(entity="charlie", project="mnist-experiments", config=config)

# Hyperparameters
loss = run.config["loss"]
optimizer = run.config["optimizer"]
metrics = ["accuracy"]
num_classes = 10
input_shape = (28, 28, 1)

# Training algorithm
model = keras.Sequential(
    [
        layers.Input(shape=input_shape),
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)

# Configure the model for training
model.compile(loss=loss, optimizer=optimizer, metrics=metrics)

# Save model
model_filename = "model.h5"
local_filepath = "./"
full_path = os.path.join(local_filepath, model_filename)
model.save(filepath=full_path)

# Log the model to the W&B run
run.log_model(path=full_path, name="MNIST")
run.finish()
```
When the user calls `log_model`, a model artifact named `MNIST` is created and the file `model.h5` is added to it. Your terminal or notebook prints information about the run the model was logged to.
```
View run different-surf-5 at: https://wandb.ai/charlie/mnist-experiments/runs/wlby6fuw
Synced 5 W&B file(s), 0 media file(s), 1 artifact file(s) and 0 other file(s)
Find logs at: ./wandb/run-20231206_103511-wlby6fuw/logs
```

</details>
## Download and use a logged model
Use the [`use_model`](../../../ref/python/run.md#use_model) function to access and download model files previously logged to a W&B run.
Provide the name of the model artifact where the model file(s) you want to retrieve are stored. The name you provide must match the name of an existing logged model artifact.
If you did not define a `name` when you originally logged the file(s) with `log_model`, the default name assigned is the basename of the input path prepended with the run ID.
Ensure you replace the values enclosed in `<>` with your own:
```python
import wandb

# Initialize a run
run = wandb.init(project="<your-project>", entity="<your-entity>")

# Access and download model. Returns path to downloaded artifact
downloaded_model_path = run.use_model(name="<your-model-name>")
```
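For instance, to retrieve the model logged as `MNIST` in the earlier `log_model` example, you could pass that name, optionally qualified with an alias such as `latest`; this sketch assumes the `MNIST` artifact exists in the current project.

```python
# Retrieve the latest version of the model artifact named "MNIST" logged earlier.
downloaded_model_path = run.use_model(name="MNIST:latest")
```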
The `use_model` function returns the path of the downloaded model file(s). Keep track of this path if you want to link this model later. In the preceding code snippet, the returned path is stored in a variable called `downloaded_model_path`.
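As a hedged illustration of what you can do with the returned path: if the logged artifact contains a single Keras `model.h5` file, as in the earlier MNIST example, you could load it back into memory. The filename and the directory check below are assumptions; adjust them to match what you actually logged.

```python
import os

from tensorflow import keras

# use_model returns a path to the downloaded content. Depending on what was
# logged, this may be the model file itself or a directory that contains it.
model_path = downloaded_model_path
if os.path.isdir(model_path):
    model_path = os.path.join(model_path, "model.h5")  # hypothetical filename

model = keras.models.load_model(model_path)
```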
<details>

<summary>Example: Download and use a logged model</summary>
For example, the following code snippet shows how to access and download a previously logged model. The user defines a `model_name` variable that contains the full name of the model artifact, then calls the `use_model` API to access and download the model. The path that the API returns is stored in the `downloaded_model_path` variable.
```python
import wandb

entity = "luka"
project = "NLP_Experiments"
alias = "latest"
model_artifact_name = "fine-tuned-model"
model_name = f"{entity}/{project}/{model_artifact_name}:{alias}"

# Initialize a run
run = wandb.init(project=project, entity=entity)

# Access and download model. Returns path to downloaded artifact
downloaded_model_path = run.use_model(name=model_name)
```

</details>
See [`use_model`](../../../ref/python/run.md#use_model) in the API Reference guide for more information on possible parameters and return type.
## Log and link a model to the W&B Model Registry
Use the [`link_model`](../../../ref/python/run.md#link_model) method to log model file(s) to a W&B run and link it to the [W&B Model Registry](../../model_registry/intro.md). If no registered model exists, W&B creates a new one for you with the name you provide for the `registered_model_name` parameter.
:::tip
You can think of linking a model as similar to 'bookmarking' or 'publishing' a model to a centralized team repository of models that other members of your team can view and consume.

Note that when you link a model, that model is not duplicated in the [Model Registry](../../model_registry/intro.md). The model is also not moved out of the project and into the registry. A linked model is a pointer to the original model in your project.

Use the [Model Registry](../../model_registry/intro.md) to organize your best models by task, manage model lifecycle, facilitate easy tracking and auditing throughout the ML lifecycle, and [automate](../../model_registry/automation.md) downstream actions with webhooks or jobs.
:::
A *Registered Model* is a collection or folder of linked model versions in the [W&B Model Registry](../../model_registry/intro.md). Registered models typically represent candidate models for a single modeling use case or task.
The following code snippet shows how to link a model with the [`link_model`](../../../ref/python/run.md#link_model) API. Ensure you replace the values enclosed in `<>` with your own:
```python
import wandb

run = wandb.init(entity="<your-entity>", project="<your-project>")

run.link_model(path="<path-to-model>", registered_model_name="<registered-model-name>")

run.finish()
```
See [`link_model`](../../../ref/python/run.md#link_model) in the API Reference guide for more information on optional parameters.
If the `registered_model_name` you provide matches the name of a registered model that already exists within the Model Registry, the model is linked to that registered model. If no such registered model exists, a new one is created and the model is the first one linked.
For example, suppose you have an existing registered model named "Fine-Tuned-Review-Autocompletion" in your Model Registry (see example [here](https://wandb.ai/reviewco/registry/model?selectionPath=reviewco%2Fmodel-registry%2FFinetuned-Review-Autocompletion&view=all-models)). And suppose that a few model versions are already linked to it: v0, v1, v2. If you call `link_model` with `registered_model_name="Fine-Tuned-Review-Autocompletion"`, the new model is linked to this existing registered model as v3. If no registered model with this name exists, a new one is created and the new model is linked as v0.
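Once a model is linked, downstream consumers can fetch it through the registered model rather than through the original run. The following sketch assumes a registered model can be referenced with a path of the form `<entity>/model-registry/<registered-model-name>:<alias>`, and the entity, project, and alias below are hypothetical; see the [Model Registry](../../model_registry/intro.md) documentation for the exact path format.

```python
import wandb

# Hypothetical downstream run that consumes the linked model.
run = wandb.init(project="downstream-evaluation", entity="reviewco")

# Assumed path format: <entity>/model-registry/<registered-model-name>:<alias>
registry_path = "reviewco/model-registry/Fine-Tuned-Review-Autocompletion:latest"
downloaded_model_path = run.use_model(name=registry_path)

run.finish()
```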
<details>

<summary>Example: Log and link a model to the W&B Model Registry</summary>
For example, the following code snippet logs model files and links the model to a registered model named `"Fine-Tuned-Review-Autocompletion"`.
To do this, a user calls the `link_model` API. When they call the API, they provide a local filepath that points to the content of the model (`path`) and a name for the registered model to link to (`registered_model_name`).
```python
import wandb

path = "/local/dir/model.pt"
registered_model_name = "Fine-Tuned-Review-Autocompletion"

run = wandb.init(project="llm-evaluation", entity="noa")

run.link_model(path=path, registered_model_name=registered_model_name)

run.finish()
```
:::info
Reminder: A registered model houses a collection of bookmarked model versions.
:::

</details>
i18n/ja/docusaurus-plugin-content-docs/current/guides/track/log/log-models.md (89 additions, 0 deletions)
@@ -0,0 +1,89 @@
---
displayed_sidebar: default
---

# Log models
Use W&B to log models that are created from W&B runs.

The following guide describes how to log and interact with models logged to a W&B run.
:::tip
The following APIs are useful for early model exploration and experiment tracking. Use the APIs listed on this page to quickly log models to a run, in addition to metrics, tables, media, and other objects.

The APIs listed on this page are not intended for [INSERT].

W&B suggests that you use W&B Artifacts if you want to:
- Create and keep track of different versions of serialized data such as models, datasets, prompts, and more.
- Create lineage graphs of a model or any other objects tracked in W&B.
- Interact with the model artifacts these methods create, such as updating properties (metadata, aliases, and descriptions).
:::
## Log a model to a W&B run
Declare a model artifact as an output of a run with the [`log_model`](../../../ref/python/run.md#logmodel) method. To do so, provide a name for your model artifact and the path where your model is saved for the `model_name` and `path` parameters, respectively.
```python
import wandb

project = "<your-project-name>"
entity = "<your-entity>"
path = "/local/dir/70154.h5"
model_artifact_name = "model.h5"

# Initialize a W&B run
run = wandb.init(project=project, entity=entity)

# Log the model
run.log_model(model_name=model_artifact_name, path=path)
run.finish()
```
In the preceding code snippet, the model originally had a file name of `70154.h5` and was locally stored in the user's `/local/dir/` directory. When the user logged the model with `log_model`, they gave the model artifact a name of `model.h5`.
## Download and use a logged model
Use the [`use_model`](../../../ref/python/run.md#usemodel) function to access and download model files previously logged to a W&B run.
Provide the name of your model artifact to the `model_name` field in `use_model`.
:::tip
W&B suggests that you prepend the entity and the name of the project your model was saved to, as shown in the following code snippet.
:::
The following code snippet shows how to download a logged model. The code snippet uses the same variables declared in the [Log a model to a W&B run](#log-a-model-to-a-wb-run) section.
```python
import wandb

alias = "v0"
model_name = f"{entity}/{project}/{model_artifact_name}:{alias}"

# Initialize a run
run = wandb.init(project=project, entity=entity)

# Access and download model. Returns path to downloaded artifact
downloaded_model_path = run.use_model(model_name=model_name)
```
The `use_model` function returns the path of the downloaded artifact file(s). Keep track of this path, as you will need it to link a model. In the preceding code snippet, we stored the file path in a variable called `downloaded_model_path`.
## Log and link a model to the W&B Model Registry
Use the `link_model` method to log model file(s) as a model [artifact](../../artifacts/intro.md) to a W&B run and link it to the [W&B Model Registry](../../model_registry/intro.md).
When you link a model artifact to the registry, this creates a new version of that registered model. The new version is a pointer to the artifact version that exists in that project.
The following code snippet shows how to link a model with the `link_model` API. It is a continuation of the code snippet in [Download and use a logged model](#download-and-use-a-logged-model) and uses the `downloaded_model_path` variable defined in that section to provide the path of the model.
```python
run.link_model(
    path=downloaded_model_path,
    registered_model_name="Industrial ViT",
    linked_model_name=f"model_vit-{wandb.run.id}",
    aliases=["staging", "QA"],
)

run.finish()
```