Commit fc279ce

Commit message: feedback
1 parent 3dd12aa commit fc279ce

7 files changed: 8 additions & 8 deletions

docs/source/_toctree.yml

Lines changed: 2 additions & 2 deletions

@@ -79,6 +79,8 @@
   title: LoKr
 - local: package_reference/lora
   title: LoRA
+- local: package_reference/adapter_utils
+  title: LyCORIS
 - local: package_reference/multitask_prompt_tuning
   title: Multitask Prompt Tuning
 - local: package_reference/p_tuning
@@ -87,8 +89,6 @@
   title: Prefix tuning
 - local: package_reference/prompt_tuning
   title: Prompt tuning
-- local: package_reference/adapter_utils
-  title: Utilities
 title: Adapters
 title: API reference

docs/source/package_reference/adapter_utils.md

Lines changed: 1 addition & 1 deletion

@@ -14,7 +14,7 @@ rendered properly in your Markdown viewer.

 -->

-# Adapter utilities
+# LyCORIS

 [LyCORIS](https://hf.co/papers/2309.14859) (Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion) are LoRA-like matrix decomposition adapters that modify the cross-attention layer of the UNet. The [LoHa](loha) and [LoKr](lokr) methods inherit from the `Lycoris` classes here.
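A minimal sketch of what the renamed LyCORIS page documents, i.e. applying one of the LyCORIS-style adapters (LoHa) with PEFT. The model id, target module names, and hyperparameters below are illustrative assumptions, not taken from this commit:

```python
from transformers import AutoModelForCausalLM
from peft import LoHaConfig, get_peft_model

# Any model with named linear projections works; this small causal LM is just an example.
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

config = LoHaConfig(
    r=8,                                  # rank of the Hadamard-product decomposition
    alpha=16,                             # scaling factor applied to the adapter output
    target_modules=["q_proj", "v_proj"],  # assumed attention projections to adapt
    module_dropout=0.1,
)

model = get_peft_model(base_model, config)
model.print_trainable_parameters()        # only the LoHa weights are trainable
```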

docs/source/package_reference/config.md

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@ rendered properly in your Markdown viewer.

 # Configuration

-The configuration classes stores the configuration of a [`PeftModel`], PEFT adapter models, and the configurations of [`PrefixTuning`], [`PromptTuning`], and [`PromptEncoder`]. They contain methods for saving and loading model configurations from the Hub, specifying the PEFT method to use, type of task to perform, and model configurations like number of layers and number of attention heads.
+[`PeftConfigMixin`] is the base configuration class for storing the adapter configuration of a [`PeftModel`], and [`PromptLearningConfig`] is the base configuration class for soft prompt methods (p-tuning, prefix tuning, and prompt tuning). These base classes contain methods for saving and loading model configurations from the Hub, specifying the PEFT method to use, type of task to perform, and model configurations like number of layers and number of attention heads.

 ## PeftConfigMixin
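As a quick illustration of the save/load behaviour that [`PeftConfigMixin`] provides to every config class (a sketch, not part of this commit; the local path is a placeholder):

```python
from peft import LoraConfig, PeftConfig, TaskType

config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.1)

config.save_pretrained("./my-adapter-config")                # writes adapter_config.json
reloaded = PeftConfig.from_pretrained("./my-adapter-config")
print(reloaded.peft_type, reloaded.task_type)                # PeftType.LORA TaskType.CAUSAL_LM
```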

docs/source/package_reference/llama_adapter.md

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ rendered properly in your Markdown viewer.

 # Llama-Adapter

-[Llama-Adapter](https://hf.co/papers/2303.16199) is a PEFT method specifically designed for turning Llama into an instruction-following model. The Llama model is frozen and only a set of adaptation prompts prefixed to the input instruction tokens are learned. Since randomly initialized modules inserted into the model can cause the model to lose some of it's existing knowledge, Llama-Adapter uses zero-initialized attention with zero gating to progressively add the instructional prompts with the model.
+[Llama-Adapter](https://hf.co/papers/2303.16199) is a PEFT method specifically designed for turning Llama into an instruction-following model. The Llama model is frozen and only a set of adaptation prompts prefixed to the input instruction tokens are learned. Since randomly initialized modules inserted into the model can cause the model to lose some of its existing knowledge, Llama-Adapter uses zero-initialized attention with zero gating to progressively add the instructional prompts to the model.

 The abstract from the paper is:
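A hedged sketch of how the method described above is configured in PEFT (via `AdaptionPromptConfig`); the model id and the prompt/layer counts are illustrative assumptions, not from this commit:

```python
from transformers import AutoModelForCausalLM
from peft import AdaptionPromptConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = AdaptionPromptConfig(
    adapter_len=10,        # number of learned adaptation prompt tokens per adapted layer
    adapter_layers=30,     # how many of the topmost layers receive the prompts
    task_type="CAUSAL_LM",
)

# The base weights stay frozen; only the adaptation prompts and zero-init gates are trained.
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```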

docs/source/package_reference/peft_model.md

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@ rendered properly in your Markdown viewer.

 # Models

-[`PeftModel`] is the base model class for specifying the base Transformer model and configuration to apply a PEFT method to. The base `PeftModel` contains methods for loading and saving models from the Hub, and supports the [`PromptEncoder`] for prompt learning.
+[`PeftModel`] is the base model class for specifying the base Transformer model and configuration to apply a PEFT method to. The base `PeftModel` contains methods for loading and saving models from the Hub.

 ## PeftModel
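A minimal sketch (not part of this commit) of the load/save methods the paragraph refers to; the base model and adapter repo ids are placeholders:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = PeftModel.from_pretrained(base, "ybelkada/opt-350m-lora")  # assumed adapter repo id

model.save_pretrained("./opt-350m-lora")  # writes the adapter weights + adapter_config.json
# model.push_to_hub("your-username/opt-350m-lora")  # optionally upload the adapter to the Hub
```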

docs/source/package_reference/peft_types.md

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ rendered properly in your Markdown viewer.

 # PEFT types

-`PeftType` includes the supported adapters in PEFT, and `TaskType` includes PEFT-supported tasks.
+[`PeftType`] includes the supported adapters in PEFT, and [`TaskType`] includes PEFT-supported tasks.

 ## PeftType
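Both are plain enums, so they can be listed or compared directly; a small sketch (not part of this commit):

```python
from peft import LoraConfig, PeftType, TaskType

print(list(PeftType))  # supported adapter types, e.g. PeftType.LORA, PeftType.PREFIX_TUNING, ...
print(list(TaskType))  # supported tasks, e.g. TaskType.CAUSAL_LM, TaskType.SEQ_CLS, ...

config = LoraConfig(task_type=TaskType.SEQ_CLS)
assert config.peft_type == PeftType.LORA  # each config class records its PeftType
```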

docs/source/package_reference/tuners.md

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ rendered properly in your Markdown viewer.

 # Tuners

-A tuner (or adapter) is a module that can be plugged into a `torch.nn.Module`. [`BaseTuner`] base class for other tuners and provides shared methods and attributes for preparing an adapter configuration and replacing a target module with the adapter module. [`BaseTunerLayer`] is a tuner mixin offering methods and attributes for managing adapters such as merging and unmerging, and activating and disabling adapters.
+A tuner (or adapter) is a module that can be plugged into a `torch.nn.Module`. [`BaseTuner`] base class for other tuners and provides shared methods and attributes for preparing an adapter configuration and replacing a target module with the adapter module. [`BaseTunerLayer`] is a base class for adapter layers. It offers methods and attributes for managing adapters such as activating and disabling adapters.

 ## BaseTuner
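A sketch (not from this commit) of the behaviour these base classes enable at the model level, namely temporarily deactivating the injected adapter layers; the model and adapter repo ids are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = PeftModel.from_pretrained(base, "ybelkada/opt-350m-lora")  # assumed adapter repo id
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

inputs = tokenizer("Hello, adapters!", return_tensors="pt")

with model.disable_adapter():             # adapter layers are switched off inside this block
    base_logits = model(**inputs).logits  # output of the frozen base model

adapted_logits = model(**inputs).logits   # adapter layers active again
```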
