diff --git a/docs/assets/extractor_widget/extractor.png b/docs/assets/extractor_widget/extractor.png
new file mode 100644
index 0000000..2b326a8
Binary files /dev/null and b/docs/assets/extractor_widget/extractor.png differ
diff --git a/docs/assets/fig1.png b/docs/assets/fig1.png
deleted file mode 100644
index e1a554b..0000000
Binary files a/docs/assets/fig1.png and /dev/null differ
diff --git a/docs/assets/plugins_menu.png b/docs/assets/plugins_menu.png
new file mode 100644
index 0000000..68fafd3
Binary files /dev/null and b/docs/assets/plugins_menu.png differ
diff --git a/docs/assets/segmentation_widget/seg_1.png b/docs/assets/segmentation_widget/seg_1.png
new file mode 100644
index 0000000..70da4b5
Binary files /dev/null and b/docs/assets/segmentation_widget/seg_1.png differ
diff --git a/docs/assets/segmentation_widget/seg_2.png b/docs/assets/segmentation_widget/seg_2.png
new file mode 100644
index 0000000..0a57b29
Binary files /dev/null and b/docs/assets/segmentation_widget/seg_2.png differ
diff --git a/docs/assets/segmentation_widget/seg_3.png b/docs/assets/segmentation_widget/seg_3.png
new file mode 100644
index 0000000..b5e0798
Binary files /dev/null and b/docs/assets/segmentation_widget/seg_3.png differ
diff --git a/docs/assets/segmentation_widget/seg_4.png b/docs/assets/segmentation_widget/seg_4.png
new file mode 100644
index 0000000..404aff8
Binary files /dev/null and b/docs/assets/segmentation_widget/seg_4.png differ
diff --git a/docs/assets/segmentation_widget/seg_5.png b/docs/assets/segmentation_widget/seg_5.png
new file mode 100644
index 0000000..ecc98fa
Binary files /dev/null and b/docs/assets/segmentation_widget/seg_5.png differ
diff --git a/docs/assets/segmentation_widget/seg_6.png b/docs/assets/segmentation_widget/seg_6.png
new file mode 100644
index 0000000..ed63906
Binary files /dev/null and b/docs/assets/segmentation_widget/seg_6.png differ
diff --git a/docs/assets/segmentation_widget/seg_7.png b/docs/assets/segmentation_widget/seg_7.png
new file mode 100644
index 0000000..fc838b1
Binary files /dev/null and b/docs/assets/segmentation_widget/seg_7.png differ
diff --git a/docs/feature_extractor.md b/docs/feature_extractor.md
new file mode 100644
index 0000000..7ad28ed
--- /dev/null
+++ b/docs/feature_extractor.md
@@ -0,0 +1,29 @@
+After selecting your image stack, you need to extract the *features*. These image features will later be used as inputs for training a Random Forest model
+and for predicting annotation masks.
+
+!!! info
+    In deep learning, the output of an encoder model is called *embeddings* or *features*.
+
+You can bring up the *Feature Extractor widget* from the napari **Plugins** menu:
+
+![plugins menu](assets/plugins_menu.png){width="360"}
+
+## Widget Tools Description
+![Feature Extractor](assets/extractor_widget/extractor.png){width="360"}
+
+1. **Image Layer**: Selects your current image stack.
+2. **Encode Model**: Sets which model you want to use for feature extraction.
+    The **FF** plugin, by default, comes with the `MobileSAM`, `SAM (huge)`, `DINOv2`, `SAM2 (large)`, and `SAM2 (base)` models. It is also possible to introduce a new model by adding a [*model adapter*](link here) class.
+3. **Features Storage File**: Where you want to save the extracted features as an `HDF5` file.
+4. **Extract Features** button: Runs the feature extraction process.
+5. **Stop** button: Stops the extraction process.
+
+The extraction process might take some time, depending on the number of image slices and the image resolution. This is because **FF** splits each image into overlapping patches and then passes those patches to the encoder model to get the features. Why do we do this? Because we need a feature vector for each pixel, not just one for the whole image.
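+
+The patching idea can be sketched roughly as follows (a minimal NumPy illustration, **not** FF's actual implementation; the patch size and stride here are made-up values):
+
+```python
+import numpy as np
+
+def overlapping_patches(image, patch=64, stride=32):
+    """Yield overlapping square patches that cover the whole image."""
+    h, w = image.shape[:2]
+    for y in range(0, max(h - patch, 0) + 1, stride):
+        for x in range(0, max(w - patch, 0) + 1, stride):
+            yield image[y:y + patch, x:x + patch]
+
+# a 128x128 image with these settings yields a 3x3 grid of patches
+patches = list(overlapping_patches(np.zeros((128, 128))))
+```
+
+Because the stride is smaller than the patch size, neighboring patches overlap, so each pixel appears in several patches and can receive its own feature vector.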
+
+## Model Selection
+In our experiments, the `SAM2 (large)` model usually works best. However, for less complicated images, the lighter and faster `MobileSAM` or `DINOv2` models may also produce a good segmentation.
+
+!!! note
+    When you use a model for the first time, its weights are downloaded from the model's repository, so you might hit a short delay on first use.
+
+Once you have your image features extracted, you can use the [**Segmentation**](./segmentation.md) widget to generate your image masks.
diff --git a/docs/howto.md b/docs/howto.md
index 57aaf86..00ead7b 100644
--- a/docs/howto.md
+++ b/docs/howto.md
@@ -11,7 +11,7 @@ There are two ways to utilize this plugin over a large stack:
As for the first step, we recommend making a small sub-stack to train a Random Forest (RF) model using our plugin. This sub-stack can have about 20 slices selected across the whole stack (not just the beginning or last few slices). This way, when you extract and save the sub-stack's features, the storage file won't occupy too much space on the hard drive.
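+
+For example, the sub-stack slices can be sampled evenly across the whole stack (a hypothetical NumPy sketch, assuming `stack` is a `slices × height × width` array; the shapes are made-up values):
+
+```python
+import numpy as np
+
+stack = np.zeros((500, 1024, 1024), dtype=np.uint8)  # slices x height x width
+# pick 20 evenly spaced slice indices across the whole stack
+indices = np.linspace(0, stack.shape[0] - 1, num=20, dtype=int)
+sub_stack = stack[indices]  # shape: (20, 1024, 1024)
+```
+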
!!! tip
- If the image resolution is high, it's better to down-scale the images into a resolution of below 1200 pixels for the largest dimension.
+ If the image resolution is high, it's better to down-scale images into a resolution of below 1200 pixels for the largest dimension.
After the training, you can save the RF model, and later apply it on the entire stack.
diff --git a/docs/index.md b/docs/index.md
index b01e44b..88fa651 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -14,8 +14,8 @@ FF plugin includes two widgets: **Feature Extractor** and **Segmentation** widge
Feature Extractor Widget
///
-![Feature Extractor](assets/segmentation_1.png){width="300" align=left}
-![Feature Extractor](assets/segmentation_2.png){width="300" align=right}
+![Segmentation](assets/segmentation_1.png){width="300" align=left}
+![Segmentation](assets/segmentation_2.png){width="300" align=right}
/// caption
Segmentation Widget
///
diff --git a/docs/segmentation.md b/docs/segmentation.md
new file mode 100644
index 0000000..cd23f49
--- /dev/null
+++ b/docs/segmentation.md
@@ -0,0 +1,21 @@
+Hurray! Now you have your features extracted and ready for the main action! 😊
+The Segmentation widget is a long widget with several panels, but don't worry, we'll go through all of them, from top to bottom!
+
+## Inputs and Labeling Statistics
+![Inputs](assets/segmentation_widget/seg_1.png){width="360" align=right}
+### Inputs
+1. **Input Layer**: Sets which napari layer is your input image layer.
+2. **Feature Storage**: Select your previously extracted features `HDF5` file here.
+    ***Note***: Make sure to select the storage file that was extracted for this particular input image.
+3. **Ground Truth Layer**: Selects your *Labels* layer.
+4. **Add Layer** button: Adds a new ground-truth (GT) *Labels* layer to napari.
+
+### Labeling Statistics
+5. **Analyze** button: Reports the number of classes and labels you have added so far.
+
+!!! note
+    - You can have as many *Labels* layers as you want, but **only the selected** one will be used for training the RF model.
+    - You can also drag & drop your previously saved labels into napari and select that layer.
+
+## Train Model
+![Train Model](assets/segmentation_widget/seg_2.png){width="360" align=left}
diff --git a/docs/stylesheets/extra.css b/docs/stylesheets/extra.css
index 3a28825..7f7db90 100644
--- a/docs/stylesheets/extra.css
+++ b/docs/stylesheets/extra.css
@@ -1,5 +1,13 @@
+.md-typeset {
+ font-weight: 500;
+}
+
+.md-typeset p, .md-typeset ol, .md-typeset ul {
+ text-align: justify;
+}
+
.md-typeset code {
- font-size: 91%;
+ font-size: 94%;
font-weight: 600;
}
diff --git a/mkdocs.yml b/mkdocs.yml
index aebd72c..c0d7b6f 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -42,6 +42,8 @@ nav:
- Welcome: index.md
- Installation: install.md
- How to use the plugin: howto.md
+ - Feature Extractor Widget: feature_extractor.md
+ - Segmentation Widget: segmentation.md
extra:
version:
@@ -67,5 +69,6 @@ markdown_extensions:
- pymdownx.details
- pymdownx.blocks.caption
- def_list
+ - sane_lists
- pymdownx.tasklist:
custom_checkbox: true
\ No newline at end of file