diff --git a/docs/user/batchTracking.md b/docs/user/batchTracking.md
index 724bdb9..c3e5788 100644
--- a/docs/user/batchTracking.md
+++ b/docs/user/batchTracking.md
@@ -28,66 +28,78 @@ The Batch Tracking panel is only accessible in Expert Mode (settings -> Expert M
* 13: Clear stack
* 14: Remove from stack
-The Batch Tracking panel is an advanced tool to track a large number of movies automatically. Several behaviors can be combined to load image sequences in a batch with specific background images or parameter files.
+The Batch Tracking panel is an advanced tool used to automatically track a large number of movies. It allows you to combine several behaviors to load image sequences in a batch, along with specific background images or parameter files.
## Basic usage
-The user can open several image sequences by clicking on the **Open folder** (1) button and selecting one or several folders. FastTrack can automatically load a background and/or a parameters file if a **Tracking_Result** folder is provided with the image sequence; check the **Autoload** (10) tick to activate this behavior.
-After opening, image sequences are added to the **Processing stack** (4). If a background image and/or a set of parameters are automatically loaded, the path will be displayed in the second and third columns. If not, the user can select them with the (5) and (6) buttons after importation.
+The user can open several image sequences by clicking on the **Open folder** (1) button and selecting one or several folders. FastTrack can automatically load a background and/or a parameters file if a **Tracking_Result** folder is provided within the image sequence; check the **Autoload** (10) tick to activate this behavior.
+
+After opening, image sequences are added to the **Processing stack** (4). If a background image and/or a set of parameters are automatically loaded, their paths will be displayed in the second and third columns, respectively. If not, the user can select them using the (5) and (6) buttons after importing.
+
**By default**, if no background image and parameter file are selected, FastTrack will use the parameters provided in the Parameters table (9) **before** the image sequence importation.
-The user can delete an image sequence by selecting the corresponding line in the **Processing stack** (4) and clicking on the **Remove** (14) button. The user can clear all the **Processing stack** (14) by clicking the **Clear** (13) button.
-To process the stack, click the **Start Tracking** (12) button.
+
+The user can delete an image sequence by selecting the corresponding line in the **Processing stack** (4) and clicking on the **Remove** (14) button. To clear the entire **Processing stack**, the user can click the **Clear** (13) button.
+
+To process the stack, click the **Start Tracking** (12) button, and FastTrack will perform the tracking analysis on all the image sequences in the stack.
## More advanced options
### Adding a suffix
-The user can append a suffix to the imported folders *folder_path/ + suffix/*
-For example, it can be usefull with a folder tree like this one:
+The user can append a suffix to the imported folders by using the *folder_path/ + suffix/* notation.
+
+For example, it can be useful with a folder tree like this one:
- /myExperiment/Run1/images
- /myExperiment/Run2/images
- /myExperiment/Run3/images
-The user can easily select in one time the folders:
+The user can easily select all the folders at once:
- /myExperiment/Run1
- /myExperiment/Run2
- /myExperiment/Run3
-And then add the suffix *images/* to select the desired folders without having to do it manually three times.
+And then add the suffix *images/* to select the desired folders without having to do it manually three times.
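The suffix mechanism can be sketched in Python (a hypothetical illustration using `pathlib`; FastTrack performs this internally):

```python
# Sketch of the suffix mechanism: the suffix is appended to each
# selected folder path. The paths mirror the example tree above.
from pathlib import PurePosixPath

selected = [PurePosixPath(f"/myExperiment/Run{i}") for i in (1, 2, 3)]
suffix = "images"

# Each imported folder becomes folder_path/ + suffix/
imported = [str(run / suffix) for run in selected]
```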
### Unique background image
-The user can select a unique background image. Open an image with the **Unique background** (2) button, and **all the sequences in the stack** and sequences that will be imported will be using this background image. The user can use the **Clear** (12) to reset the default behavior.
+The user can select a unique background image by opening an image with the **Unique background** (2) button. Once selected, **all the sequences in the stack** and any sequences imported afterward will use this background image.
+
+To reset the default behavior, the user can use the **Clear** (12) button; each sequence then reverts to its own background image (or none).
### One parameter file
-To apply the same parameters file to all the imported sequences:
+
+To apply the same parameters file to all the imported sequences, you have the following options:
Manual selection:
-* Untick the **Autoload** (10).
-* Select a set of parameters in the **Parameters table** (9).
-* The sequences that will be imported will use this set of parameters.
+1. Untick the **Autoload** (10) checkbox.
+2. Select a set of parameters in the **Parameters table** (9).
+3. The sequences that will be imported will use this set of parameters.
With a file:
-* Tick the **Autoload** (10)
-* Load the sequence with the right parameters file.
-* Untick the **Autoload** (10).
-* The sequences that will be imported will use this set of parameters.
+1. Tick the **Autoload** (10) checkbox.
+2. Load the sequence with the right parameters file.
+3. Untick the **Autoload** (10) checkbox.
+4. The sequences that will be imported will use this set of parameters.
-With a file:
+Alternative method with a file:
-* Untick the **Autoload** (10).
-* Load a sequence.
-* Select the parameter file with the (6) button.
-* The sequences that will be imported will use this set of parameters.
+1. Untick the **Autoload** (10) checkbox.
+2. Load a sequence.
+3. Select the parameter file using the (6) button.
+4. The sequences that will be imported will use this set of parameters.
## Behavior reminder
- (10) unticked, (2) not selected: FastTrack will use the parameters provided in the Parameters table (9) **before** the image sequence is added to the stack. It can be overwritten after importation with the (5) and (6) buttons.
-- (10) ticked, (2) not selected: FastTrack will use the background and the parameters file in the Tracking_Result folder. If these files are missing, FastTrack will use the parameters provided in the Parameters table (9) **before** the image sequence is added to the stack.
-- (10) ticked, (2) selected: the background selected in (2) will overwrite the automatically detected background.
-- (3) selected: the image sequence path will be appended with the suffix, and default behavior will be applied with this path.
-- (2) selected: select a unique background will overwrite all the existing background in the stack.
+
+- (10) ticked, (2) not selected: FastTrack will use the background and the parameters file found in the Tracking_Result folder of the image sequence. If these files are missing, FastTrack will fall back to the parameters provided in the Parameters table (9) **before** the image sequence is added to the stack.
+
+- (10) ticked, (2) selected: the background selected in (2) will overwrite the automatically detected background for all the sequences in the stack.
+
+- (3) selected: If you select a suffix in (3), it will be appended to the image sequence path, and the default behavior will be applied using this modified path.
+
+- (2) selected: selecting a unique background will overwrite all the existing backgrounds in the stack.
diff --git a/docs/user/dataOutput.md b/docs/user/dataOutput.md
index 605c385..2155514 100644
--- a/docs/user/dataOutput.md
+++ b/docs/user/dataOutput.md
@@ -6,23 +6,25 @@ sidebar_label: Tracking Result
After a tracking analysis (or an analysis preview), FastTrack saves several files inside the **Tracking_Result** (or inside the **Tracking_Result_VideoFileName** for a video file) folder:
-* *tracking.db*: the tracking result as a SQLite database
-* *tracking.txt*: the tracking result
-* *annotation.txt*: the annotation
-* *background.pgm*: the background image
-* *cfg.toml*: the parameters used for the tracking
-
-The tracking result file is simply a text file with 23 columns separated by a '\t' character. This file can easily be loaded to subsequent analysis see [this Python](https://www.fasttrack.sh/blog/2021/08/09/FastAnalysis-tuto) and [this Julia](https://www.fasttrack.sh/blog/2020/11/25/Data-analysis-julia).
-
-* **xHead, yHead, tHead**: the position (x, y) and the absolute angle of the object's head.
-* **xTail, yTail, tTail**: the position (x, y) and the absolute angle of the object's tail.
-* **xBody, yBody, tBody**: the position (x, y) and the absolute angle of the object.
-* **curvature, areaBody, perimeterBody**: curvature of the object, area and perimeter of the object (in pixels).
-* **headMajorAxisLength, headMinorAxisLength, headExcentricity**: parameters of the head's ellipse (headMinorAxisLength and headExcentricity are semi-axis length).
-* **bodyMajorAxisLength, bodyMinorAxisLength, bodyExcentricity**: parameters of the body's ellipse (bodyMinorAxisLength and bodyExcentricity are semi-axis length).
-* **tailMajorAxisLength, tailMinorAxisLength, tailExcentricity**: parameters of the tail's ellipse (bodyMinorAxisLength and bodyExcentricity are semi-axis length).
-* **imageNumber**: index of the frame.
-* **id**: object unique identification number.
+- *tracking.db*: the tracking result as a SQLite database
+- *tracking.txt*: the tracking result
+- *annotation.txt*: the annotation
+- *background.pgm*: the background image
+- *cfg.toml*: the parameters used for the tracking
+
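As a hedged sketch, *tracking.db* can be queried with Python's built-in `sqlite3` module. The table and column names below are assumptions for illustration; inspect your own *tracking.db* (e.g. with `.tables` in the `sqlite3` shell) to get the actual schema:

```python
import sqlite3

# Stand-in for sqlite3.connect("Tracking_Result/tracking.db").
con = sqlite3.connect(":memory:")
# Hypothetical schema mimicking the documented columns.
con.execute("CREATE TABLE tracking (imageNumber INTEGER, id INTEGER, xBody REAL, yBody REAL)")
con.executemany(
    "INSERT INTO tracking VALUES (?, ?, ?, ?)",
    [(0, 0, 508.3, 330.9), (0, 1, 458.1, 328.3), (1, 0, 507.6, 330.7)],
)

# All positions of object 0, ordered by frame.
rows = con.execute(
    "SELECT imageNumber, xBody, yBody FROM tracking WHERE id = 0 ORDER BY imageNumber"
).fetchall()
```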
+The tracking result file is simply a text file with 23 columns separated by a '\t' character. This file can easily be loaded for subsequent analysis using Python (as shown in [this Python tutorial](https://www.fasttrack.sh/blog/2021/08/09/FastAnalysis-tuto)) and Julia (as shown in [this Julia tutorial](https://www.fasttrack.sh/blog/2020/11/25/Data-analysis-julia)).
+
+The 23 columns represent the following data for each tracked object:
+
+- **xHead, yHead, tHead**: the position (x, y) and the absolute angle of the object's head.
+- **xTail, yTail, tTail**: the position (x, y) and the absolute angle of the object's tail.
+- **xBody, yBody, tBody**: the position (x, y) and the absolute angle of the object's body.
+- **curvature, areaBody, perimeterBody**: curvature of the object, area, and perimeter of the object (in pixels).
+- **headMajorAxisLength, headMinorAxisLength, headExcentricity**: parameters of the head's ellipse (headMinorAxisLength and headExcentricity are semi-axis lengths).
+- **bodyMajorAxisLength, bodyMinorAxisLength, bodyExcentricity**: parameters of the body's ellipse (bodyMinorAxisLength and bodyExcentricity are semi-axis lengths).
+- **tailMajorAxisLength, tailMinorAxisLength, tailExcentricity**: parameters of the tail's ellipse (tailMinorAxisLength and tailExcentricity are semi-axis lengths).
+- **imageNumber**: index of the frame.
+- **id**: object's unique identification number.
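As a minimal sketch (assuming pandas is installed), the tab-separated result can be loaded and split per object; the two inline rows below stand in for a real *tracking.txt*, with only 5 of the 23 columns shown:

```python
import io
import pandas as pd

# Sample rows standing in for a real tracking.txt.
sample = (
    "xBody\tyBody\ttBody\timageNumber\tid\n"
    "508.345\t330.876\t5.94395\t0\t0\n"
    "458.058\t328.346\t0.238877\t0\t1\n"
)
# For a real analysis: pd.read_csv("Tracking_Result/tracking.txt", sep="\t")
df = pd.read_csv(io.StringIO(sample), sep="\t")

# One trajectory per object id.
by_object = {obj_id: group for obj_id, group in df.groupby("id")}
```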
*(Example output omitted: a 2,475 rows × 23 columns table of Float64 values — xHead, yHead, tHead, xTail, yTail, tTail, xBody, yBody, tBody, … — as loaded in the tutorials above.)*
@@ -33,7 +35,7 @@ Positions are in pixels, in the frame of reference of the original image, zero i
-**Note:** If several tracking analyses are performed on the same image sequence, the previous folder is not erased. It will be renamed as **Tracking_result_DateOfTheNewAnalysis**.
+**Note:** If several tracking analyses are performed on the same image sequence, the previous folder will not be erased. Instead, it will be renamed as **Tracking_result_DateOfTheNewAnalysis**.
## Data analysis
diff --git a/docs/user/installation.md b/docs/user/installation.md
index 3cb2dc0..896100f 100644
--- a/docs/user/installation.md
+++ b/docs/user/installation.md
@@ -8,7 +8,7 @@ sidebar_label: Installation
---
**NOTE**
-During the installation on Windows and Mac systems, security alerts are displayed because the FastTrack executable does not possess an EV code signing certificate. These alerts can be ignored. FastTrack executable can be verified easily (and freely) by comparing the MD5 checksum. See the [installation video](https://www.youtube.com/watch?v=EvfCIS7BmSM) for more details.
+During the installation on Windows and Mac systems, security alerts may be displayed due to the absence of an EV code signing certificate for the FastTrack executable. These alerts can be safely ignored. To verify the FastTrack executable, you can easily and freely compare the MD5 checksum. For more detailed instructions, you can refer to the [installation video](https://www.youtube.com/watch?v=EvfCIS7BmSM).
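As a sketch, the checksum can be computed with Python's standard `hashlib` (or with `md5sum` on Linux, `certutil -hashfile <file> MD5` on Windows) and compared against the value published on the download page:

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Return the hex MD5 digest of a file, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare md5sum of the downloaded installer (filename hypothetical)
# with the published checksum:
# md5sum("FastTrackInstaller.exe")
```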
---
diff --git a/docs/user/interactiveTracking.md b/docs/user/interactiveTracking.md
index dbd0e41..23d8e7c 100644
--- a/docs/user/interactiveTracking.md
+++ b/docs/user/interactiveTracking.md
@@ -4,22 +4,23 @@ title: Interactive Tracking
sidebar_label: Interactive Tracking
---
-The Interactive panel provides a way to perform a tracking analysis and review it in an interactive environment.
-Several steps have to be performed in the right order (some are mandatory, some are optional) to perform a successful tracking analysis.
+The Interactive panel provides an interactive environment for performing a tracking analysis and reviewing the results. A successful analysis requires a series of steps in the correct order; some steps are mandatory, others optional. The workflow diagram below illustrates the sequence of these steps.
![Workflow](assets/interactive_workflow.svg)
## Opening a file
-The first step of a tracking analysis is to open a video file. FastTrack supports video files and image sequences. Click on the file or an image of a sequence to automatically load the movie.
+The first step of a tracking analysis is to open a video file. FastTrack supports both video files and image sequences. Click on the video file, or on any image of a sequence, to automatically load the movie.
![File opening](assets/interactive_open.gif)
## Computing the background
-The background can be computed or imported. To compute the background, select a method and an image number. Images are selected in the image sequence at regular intervals, and three methods of computation by z-projection are available:
+The background can be computed or imported. To compute the background, select a method and an image number. Images are selected from the image sequence at regular intervals, and three methods of computation by z-projection are available:
-* Min: each pixel of the background image is the pixel with the minimal value across the selected images from the image sequence. Useful when the objects are light on a dark background.
-* Max: each pixel of the background image is the pixel with the maximal value across the image sequence's selected images. Useful when the objects are dark on a light background.
-* Average: each pixel of the background image is the average of the pixels across the image sequence's selected images.
+1. Min: Each pixel of the background image is the pixel with the minimal value across the selected images from the image sequence. This method is useful when the objects are light on a dark background.
+
+2. Max: Each pixel of the background image is the pixel with the maximal value across the selected images from the image sequence. This method is useful when the objects are dark on a light background.
+
+3. Average: Each pixel of the background image is the average of the pixels across the selected images from the image sequence.
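The three z-projections can be sketched in a few lines of NumPy (an illustration on a tiny 3-frame stack, not FastTrack's implementation):

```python
import numpy as np

# A stack of N grayscale frames sampled at regular intervals.
stack = np.stack([
    np.array([[10, 200], [30, 40]], dtype=np.uint8),
    np.array([[12, 190], [35, 45]], dtype=np.uint8),
    np.array([[ 8, 210], [25, 50]], dtype=np.uint8),
])

bg_min = stack.min(axis=0)   # Min: light objects on a dark background
bg_max = stack.max(axis=0)   # Max: dark objects on a light background
bg_avg = stack.mean(axis=0)  # Average projection
```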
The images can be registered before the z-projection. Three methods of registration are available.
![Background computing](assets/interactive_back.gif)
@@ -31,53 +32,55 @@ To select a region of interest, draw a rectangle on the display with the mouse
## Computing the binary image
-To compute the binary image from the background image and the image sequence, select the threshold value, and see the result on the display. The background type is automatically selected after the background computation. However, it can be modified: select Dark Background if the objects are light on a dark background, and Light background if the objects are dark on a light background.
+To generate the binary image from the background image and the image sequence, follow these steps: first, select the threshold value, and then observe the result on the display. The background type is automatically determined during the background computation process. However, it can be adjusted manually if needed: choose 'Dark Background' if the objects are light on a dark background, or select 'Light Background' if the objects are dark on a light background.
![Binarizing](assets/interactive_thresh.gif)
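The binarization step can be sketched as background subtraction followed by thresholding (a simplified NumPy illustration, not FastTrack's exact implementation):

```python
import numpy as np

def binarize(frame, background, threshold, dark_background=True):
    """dark_background=True corresponds to the 'Dark Background' option:
    light objects on a dark background."""
    diff = frame.astype(np.int16) - background.astype(np.int16)
    return diff > threshold if dark_background else -diff > threshold

background = np.full((2, 2), 50, dtype=np.uint8)
frame = np.array([[50, 200], [55, 45]], dtype=np.uint8)  # one bright object pixel
binary = binarize(frame, background, threshold=100)
```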
## Applying morphological operations (optional)
-It is possible to apply a morphological operation on the binary image. Select a morphological operation, kernel size, and geometry. See the result on the display. For more information about the different operations, see https://docs.opencv.org/trunk/d9/d61/tutorial_py_morphological_ops.html.
+It is possible to apply a morphological operation to the binary image. Select a morphological operation, choose an appropriate kernel size and geometry, and then observe the result on the display. For more detailed information about the various operations, refer to the following link: https://docs.opencv.org/trunk/d9/d61/tutorial_py_morphological_ops.html.
![Applying morphological operations](assets/interactive_morph.gif)
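FastTrack uses OpenCV's morphological operations; as a dependency-free sketch of what one of them does, a binary erosion with a square kernel can be written in plain NumPy:

```python
import numpy as np

def erode(binary, k=3):
    """Binary erosion with a k x k square kernel: a pixel survives only
    if every pixel in its k x k neighbourhood is set (image edges are
    treated as background via zero padding)."""
    pad = k // 2
    padded = np.pad(binary, pad, constant_values=False)
    out = np.ones_like(binary, dtype=bool)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + binary.shape[0], dx:dx + binary.shape[1]]
    return out

blob = np.zeros((5, 5), dtype=bool)
blob[1:4, 1:4] = True   # a 3x3 blob
eroded = erode(blob)    # only the centre pixel survives
```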
## Tuning the detection parameters
-Objects are detected by their size. Select the maximum and minimum size of the detected objects. The detected objects will be colored in green in the display, and the rejected object will be displayed in red.
+Objects are detected based on their size. Choose the maximum and minimum size for the detected objects. The identified objects will be highlighted in green on the display, while the rejected objects will be shown in red.
![](assets/interactive_detec.gif)
## Tuning the tracking parameters
-Several parameters can be modified to ensure a good tracking analysis. See [this page](http://www.fasttrack.sh/docs/trackingParameters/) for more details:
+Several parameters can be modified to ensure a good tracking analysis. For more details, see [this page](http://www.fasttrack.sh/docs/trackingParameters/).
### Hard parameters
Hard parameters have to be set manually by the user:
-* Maximal distance: if an object traveled more than this distance between two consecutive images, it would be considered as a new object.
-* Maximal time: number of images an object is allowed to disappear. If an object reappears after this time, it will be considered as a new object. If the number of objects is constant throughout the movie, set the Maximal Time equal to the movie's number of frames.
-* Spot to track: part of the object features used to do the tracking. Select the part that reflects the better the direction of the object. Legacy parameter, head corresponds to the smaller mid-part of the object, tail ellipse the wider mid-part of the object, and body is the full object.
+- Maximal distance: If an object traveled more than this distance between two consecutive images, it would be considered a new object.
+- Maximal time: The number of images an object is allowed to disappear. If an object reappears after this time, it will be considered a new object. If the number of objects is constant throughout the movie, set the Maximal Time equal to the movie's number of frames.
+- Spot to track: This represents part of the object features used for tracking. Select the part that best reflects the direction of the object. Legacy parameters include 'head,' which corresponds to the smaller mid-part of the object, 'tail ellipse,' which corresponds to the wider mid-part of the object, and 'body,' which represents the full object.
### Soft parameters
-The soft parameters can be leveled automatically by clicking on the Level button. This will automatically compute the soft parameters as each contribution weighs one quarter of the total cost (see more at [this page](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008697#sec003) section "automatic tracking parameters"). It has to be manually fine-tuned by the user to find the optimal soft parameters with the system's knowledge. For example, for a system where the objects' direction is not relevant, the user will select the Normalization angle equal to 0.
+The soft parameters can be automatically adjusted by clicking on the 'Level' button. This action computes the soft parameters, with each contribution weighing one quarter of the total cost (see more at [this page](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008697#sec003) in the section 'automatic tracking parameters'). However, manual fine-tuning by the user is necessary to find the optimal soft parameters based on the system's knowledge. For example, if the objects' direction is not relevant in the system, the user can select the 'Normalization angle' equal to 0.
-* Normalization distance (legacy Maximal length/ Typical length): typical distance traveled between two consecutive images.
-* Normalization angle (legacy Maximal angle/Typical angle): typical reorientation possible between two consecutive images.
-* Normalization area: typical difference in the area.
-* Normalization perimeter: typical difference in the perimeter.
+The soft parameters include:
+
+- Normalization distance (legacy Maximal length/ Typical length): Represents the typical distance traveled between two consecutive images.
+- Normalization angle (legacy Maximal angle/Typical angle): Represents the typical reorientation possible between two consecutive images.
+- Normalization area: Represents the typical difference in area.
+- Normalization perimeter: Represents the typical difference in perimeter.
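The role of these normalizations can be illustrated with a simplified assignment cost, where each change is divided by its soft parameter so that a typical change contributes about one quarter of the total (an illustration only; see the linked paper for the exact formulation):

```python
# Simplified sketch of a normalized assignment cost between an object
# on frame t and a candidate on frame t+1. Not FastTrack's exact code.
def assignment_cost(d_dist, d_angle, d_area, d_perim,
                    n_dist, n_angle, n_area, n_perim):
    return (d_dist / n_dist + d_angle / n_angle
            + d_area / n_area + d_perim / n_perim)

# A candidate that changed by a "typical" amount in every feature
# contributes 1 per term, i.e. a total cost of 4.
cost = assignment_cost(5.0, 0.2, 10.0, 8.0, 5.0, 0.2, 10.0, 8.0)
```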
## Registration
-The image registration is the process to correct small displacements and rotations of the camera during the movie recording. FastTrack provides several methods for registering the movie:
+Image registration is the process of correcting small displacements and rotations of the camera during movie recording. FastTrack provides several methods for registering the movie:
-* By phase correlation
+* Phase correlation
* ECC image alignment
-* Features based
+* Features-based
-Image registration is very computationally intensive and can drastically decrease the speed of the program.
+Please note that image registration is computationally intensive and can significantly decrease the speed of the program, and that the Replay tool will not use the registered images.
## Previewing the tracking
-The tracking can be previewed on a sub-sequence of images. It can be useful to tune parameters if the tracking is slow.
+The tracking can be previewed on a sub-sequence of images. This feature can be useful for tuning parameters if the tracking is slow.
## Display options
@@ -86,10 +89,9 @@ Several display options are available and unlocked at each step of the analysis.
* Original: original image sequence
* Background subtracted: image sequence minus the background image.
* Binary: binary image sequence with detection overlays.
-* Tracking: tracking data overlay.
+* Tracking: tracking data overlay.
## Layout options
-Several layouts and themes are available in the layout menu in the top bar. The user can also build his layout by dragging the option docks in the window.
-
+Several layouts and themes are available in the layout menu in the top bar. Additionally, the user can build their own layout by dragging the option docks within the window.
[See a video demonstration](https://www.youtube.com/watch?v=grxAAX0J6CQ&feature=youtu.be)
diff --git a/docs/user/intro.md b/docs/user/intro.md
index 27cedb2..bf87585 100644
--- a/docs/user/intro.md
+++ b/docs/user/intro.md
@@ -5,18 +5,20 @@ sidebar_label: Getting Started
---
-Welcome to the FastTrack user manual. This manual will present the tracking software and how to use it. Please contact Benjamin Gallois by email at **benjamin.gallois@fasttrack.sh** if you need more information or to signal a bug. If you encounter any problem, please check at [FastTrack issues](https://github.com/FastTrackOrg/FastTrack/issues) to see if the error is already signaled and being addressed. For comments or suggestions, please open a [discussion](https://github.com/FastTrackOrg/FastTrack/discussions).
+Welcome to the FastTrack user manual. This manual introduces the tracking software and explains how to use it. If you need more information or want to report a bug, please contact Benjamin Gallois by email at **benjamin.gallois@fasttrack.sh**. If you encounter a problem, check the [FastTrack issues](https://github.com/FastTrackOrg/FastTrack/issues) page to see if the error has already been reported and is being addressed. For comments or suggestions, please open a [discussion](https://github.com/FastTrackOrg/FastTrack/discussions).
-FastTrack is a cross-platform application designed to track multiple objects in video recording. Stable versions of the software are available for Linux, Mac, and Windows. The source code can be downloaded at https://github.com/FastTrackOrg/FastTrack.
+FastTrack is a versatile cross-platform application specifically designed for tracking multiple objects in video recordings. The software offers stable versions for Linux, Mac, and Windows operating systems. For those interested, the source code is available for download at https://github.com/FastTrackOrg/FastTrack.
-Two main features are implemented in the software:
+The software boasts two primary features:
-- An automatic tracking algorithm that can detect and track objects, conserving the objects' identities across the video recording.
-- An ergonomic tool allows the user to check, correct and annotate the tracking.
+1. An automatic tracking algorithm capable of detecting and tracking objects while preserving their identities throughout the video recording.
+2. An ergonomic tool that enables users to review, correct, and annotate the tracking results.
-The FastTrack user interface is implemented with Qt and the image analysis with the OpenCV library. This allows a performant and responsive software amenable to processing large video recordings. FastTrack uses SQLite database to store the data internally and tracking results are exported in a plain text file (as well as accessible through the database).
-FastTrack was first a [Ph.D. thesis](https://hal.archives-ouvertes.fr/tel-03243224/document) side project started by [Benjamin Gallois](https://github.com/bgallois) in his spare time that has then taken dedicated time in his Ph.D. project. The software's core is still maintained in his spare time; therefore, new features implementation, bug fixes, and help can take some time.
+The FastTrack user interface is implemented with Qt, and image analysis is performed using the OpenCV library. This combination enables the software to deliver excellent performance and responsiveness, making it well-suited for processing large video recordings. To store data internally, FastTrack utilizes an SQLite database, while tracking results are exported in plain text format and are also accessible through the database.
+
+FastTrack originated as a side project during [Benjamin Gallois](https://github.com/bgallois)' [Ph.D. thesis](https://hal.archives-ouvertes.fr/tel-03243224/document) and later became a dedicated part of his Ph.D. project. The software's core is still maintained in his spare time, so the implementation of new features, bug fixes, and support may take some time.
**Not sure if you want to use FastTrack? Check these five most common questions:**
@@ -34,3 +36,6 @@ FastTrack is a [free](https://www.gnu.org/philosophy/free-sw.en.html) software u
**Do I need programming skills?**
No.
+
+**What if I need more flexibility?**
+PyFastTrack is a Python library that integrates the tracking technology of FastTrack. With PyFastTrack, you can combine FastTrack with your own trained YOLO detector, allowing you to detect and track various objects with ease.
diff --git a/docs/user/timeline.md b/docs/user/timeline.md
index 3445d54..f676c49 100644
--- a/docs/user/timeline.md
+++ b/docs/user/timeline.md
@@ -4,16 +4,21 @@ title: Timeline
sidebar_label: Timeline
---
-FastTrack provides a tool, called the timeline, to navigate inside a video or image sequence easily.
-Hover the mouse cursor above the timeline to move across the video. Right-click to place the cursor at a given position and that will save this position when the cursor exit the timeline.
-Double left-click to place a marker, right-click on this marker to delete it.
-Keyboard shortcuts are available to move the cursor frame by frame:
+FastTrack provides a convenient tool called the timeline, which allows users to navigate easily within a video or image sequence.
+
+To use the timeline, hover the mouse cursor over it to move across the video. Right-clicking will place the cursor at a specific position, which will be saved when the cursor exits the timeline.
+
+To place a marker, double left-click on the desired location. To remove a marker, right-click on it.
+
+There are also keyboard shortcuts available to move the cursor frame by frame:
* D: move to the next frame.
* Q: move to the previous frame (AZERTY layout).
* A: move to the previous frame (QWERTY layout).
* Space: start/stop autoplay.
+
+These features make it easy and efficient to interact with the video or image sequence in FastTrack.
diff --git a/docs/user/trackingInspector.md b/docs/user/trackingInspector.md
index 9c8919c..cdc7855 100644
--- a/docs/user/trackingInspector.md
+++ b/docs/user/trackingInspector.md
@@ -39,80 +39,88 @@ The Tracking Inspector panel is accessible in Expert Mode (settings -> Expert Mo
* 23: Display
* 24: Overlay
-
-**The Tracking Inspector** is a tool to display the result of a tracking analysis and to correct the tracking manually if necessary. For example, the user can delete an object to remove an artifact or change the object ID to correct a tracking error. To make the user's life easier, FastTrack provides an ergonomic interface with built-in keyboard shortcuts. FastTrack alleviates the tedious work of review and correction, and the user can achieve 100% tracking accuracy rapidly and efficiently.
+**The Tracking Inspector** is a tool designed to display the results of a tracking analysis and to enable manual corrections when necessary. For instance, users can delete an object to remove artifacts or change the object ID to correct tracking errors. FastTrack provides an ergonomic interface with built-in keyboard shortcuts, making the user's experience more seamless. By alleviating the tedious work of review and correction, FastTrack allows users to achieve 100% tracking accuracy rapidly and efficiently.
## Loading a tracking analysis
-To load a tracking analysis previously tracked in FastTrack, first, click on the **Open** button (1) and select a movie or an image of an image sequence. It will load the latest tracking analysis available.
-If the movie was tracked several times, the last tracking analysis is stored in the **Tracking_Result** folder and the previous tracking analysis in the **Tracking_Result_Date** folders.
-Old tracking analysis can be loaded using the **Open Tracking_result directory** button (2) (only activated if a movie is already loaded).
-Click on the **Reload)** button (3) to reload the tracking data if necessary.
-The software can only load a tracking analysis if the folder architecture is preserved, .ie the folder with the image sequence or the movie has to have a sub-folder named **Tracking_Result** containing at least the *tracking.txt* file.
+To load a previously tracked analysis in FastTrack, follow these steps:
+
+1. Click on the **Open** button (1) and select either a movie or an image from an image sequence. This will load the latest available tracking analysis.
+   If the movie was tracked multiple times, the last tracking analysis is stored in the **Tracking_Result** folder, while the previous tracking analyses are stored in **Tracking_Result_Date** folders.
+2. To load an older tracking analysis, use the **Open Tracking_result directory** button (2) (this button is only activated if a movie is already loaded).
+3. If necessary, click on the **Reload** button (3) to refresh the tracking data.
+
+Note: The software can only load a tracking analysis if the folder architecture is preserved. In other words, the folder containing the image sequence or movie must have a sub-folder named **Tracking_Result**, which must contain at least the *tracking.txt* file.
## Display options
Several tracking overlay options are available on the tracking overlay panel (24):
-* Ellipse: display the head, tail, and or body ellipses on the tracked objects.
-* Arrows: display an arrow on the head, tail, and or body of the tracked object indicating the orientation.
-* Numbers: display the ids of the tracked objects.
-* Traces: display the previous 50 positions of the tracked objects.
-* Size: the size of the tracking overlay.
-* Frame rate: display and saving frame rate.
+* **Ellipse**: Display the head, tail, and/or body ellipses on the tracked objects.
+* **Arrows**: Display an arrow on the head, tail, and/or body of the tracked object indicating the orientation.
+* **Numbers**: Display the IDs of the tracked objects.
+* **Traces**: Display the previous 50 positions of the tracked objects.
+* **Size**: Adjust the size of the tracking overlay.
+* **Frame rate**: The frame rate used for display and for saving.
-Several useful information on the selected object can be found in the information table (18). The user can go to the image where the object has appeared for the first time by clicking directly on the table's corresponding cell.
+Additionally, several useful pieces of information about the selected object can be found in the information table (18). The user can directly click on the corresponding cell in the table to navigate to the image where the object first appeared.
## Inspecting the tracking
-The tracking can be inspected by moving the display cursor (19), seeing the image number (20), and automatically playing the movie (21) at a selected frame rate (22).
-Automatically detected occlusions (overlapped objects) can be reviewed by clicking on the **Previous** (12) and **Next** (13) occlusion buttons (this function is experimental and can miss some occlusions).
+The tracking can be inspected by moving the display cursor (19), observing the image number (20), and automatically playing the movie (21) at a selected frame rate (22).
+
+Automatically detected occlusions (overlapped objects) can be reviewed by clicking on the **Previous** (12) and **Next** (13) occlusion buttons. Please note that this function is experimental and may miss some occlusions.
## Annotating the tracking
-The user can annotate any image of the tracking. Write the annotation in the annotate text entry (17). The user can search across annotations with the find bar (14) and the buttons (15)(16). All the annotations are saved in the *annotation.txt* file in the **Tracking_Result** folder.
+The user can annotate any image of the tracking by writing the annotation in the annotate text entry (17). Annotations can be searched across using the find bar (14) and the buttons (15)(16). All annotations are saved in the *annotation.txt* file in the **Tracking_Result** folder.
## Correcting the tracking
### Swapping the data of two objects
-The user can correct an error by swapping two object's ID from the current image to the end of the sequence as follow:
+The user can correct an error by swapping the IDs of two objects from the current image to the end of the sequence, as follows:
-* Left-click on the first object, the object ID and color are displayed on the first selection box (6).
-* Left-click on the second object, the object ID and color are displayed on the second selection box (8)
-* Right-click or click on the **Swap Button** (7) to exchange the ID of the two selected objects from the current image to the last image of the sequence.
+1. Left-click on the first object; the object ID and color will be displayed in the first selection box (6).
+2. Left-click on the second object; the object ID and color will be displayed in the second selection box (8).
+3. Right-click or click on the **Swap Button** (7) to exchange the IDs of the two selected objects from the current image to the last image of the sequence.
### Delete the data of an object
-To delete one object of several frames:
+To delete one object from several frames:
-* Double left click on the object, the object ID and color are displayed on the second selection box (8).
-* Select the number of frames on which to delete the object in the box (11). Shortcut C is available to focus on the selection box.
-* Click on the **Delete** button (10) to delete the object from the current frame to the current frame plus the selected number.
+1. Double left-click on the object; the object ID and color will be displayed in the second selection box (8).
+2. Select the number of frames over which you want to delete the object in the box (11). Shortcut C is available to focus on the selection box.
+3. Click on the **Delete** button (10) to remove the object from the current frame to the current frame plus the selected number.
-To delete one object on the current frame:
+To delete one object from the current frame:
-* Double left-click on the object, the object ID and color are displayed on the second selection box (8).
-* Click on the **Delete One** button (9) to delete the object on the current frame.
+1. Double left-click on the object; the object ID and color will be displayed in the second selection box (8).
+2. Click on the **Delete One** button (9) to remove the object from the current frame.
### Keyboard shortcuts summary
-A set of keyboard shortcuts are available to speed up the tracking correction.
+A set of keyboard shortcuts is available to speed up the tracking correction:
- Q/A: go to the previous image.
- D: go to the next image.
- F: delete the selected object on the current image.
- C: enter the number of images where an object has to be deleted.
-- G: delete an object from the current image to the current plus the selected number.
+- G: delete an object from the current image to the current image plus the selected number.
## Saving
-All the changes made in the inspector are automatically saved in the original *tracking.txt* file and SQLite database.
+All the changes made in the inspector are automatically saved to the original *tracking.txt* file and the SQLite database.
## Exporting a movie
-To export a movie of a tracking analysis, select the desired display overlay and click on the **Save** button (3). Select a folder and a name to save the file. Only .avi format is supported.
+To export a movie of a tracking analysis, follow these steps:
+
+1. Select the desired display overlay.
+2. Click on the **Save** button (3).
+3. Choose a folder and specify a name for the file.
+
+Note: Only the .avi format is supported for the exported movie.
-Note: Movie with many objects by frame can be challenging to load and review in the tracking Inspector.
+Note: Movies with many objects per frame can be challenging to load and review in the tracking Inspector.
[See a video demonstration](https://youtu.be/5lhx-r_DHLY)
diff --git a/docs/user/trackingParameters.md b/docs/user/trackingParameters.md
index 098cd30..1ea9c9f 100644
--- a/docs/user/trackingParameters.md
+++ b/docs/user/trackingParameters.md
@@ -10,46 +10,46 @@ This section details how to select the relevant tracking features to be included
FastTrack uses the so-called Hungarian method to solve the assignment problem of each object between two frames. This method is based on minimizing the global cost of the association pairs of objects.
### Cost function
-The cost is calculated from a cost function that can be constructed from several parameters, in the following, i is indexing the image n, and j the image n + 1:
-* The distance $d{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$
-* The angle $a_{ij} = min(\theta_i - \theta_j)$
-* The area $ar_{ij} = abs(area_i - area_j)$
-* The perimeter $p_{ij} = abs(perimeter_i - perimeter_j)$
+The cost is calculated from a cost function that can be constructed from several parameters. In the following, "i" indexes objects in image "n" and "j" indexes objects in image "n + 1":
+* The distance $d_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$
+* The angle $a_{ij} = \min(\lvert\theta_i - \theta_j\rvert, 2\pi - \lvert\theta_i - \theta_j\rvert)$
+* The area $ar_{ij} = \lvert area_i - area_j \rvert$
+* The perimeter $p_{ij} = \lvert perimeter_i - perimeter_j \rvert$
-The relative weight of these contributions to the cost function are set by 4 normalization parameters:
+The relative weight of these contributions to the cost function is set by 4 normalization parameters:
$$
-c_{ij} = \frac{d{ij}}{D} + \frac{a{ij}}{A}+ \frac{ar{ij}}{AR} + \frac{p{ij}}{P}
+c_{ij} = \frac{d_{ij}}{D} + \frac{a_{ij}}{A}+ \frac{ar_{ij}}{AR} + \frac{p_{ij}}{P}
$$
-These parameters can be set to 0 to cancel one or several tracking feature from the cost computation. All these features are not always relevant and has to be chosen carrefully for the best tracking accuracy. For example, for tracking circles of radius r, and squares of the same area moving at 10px/image, it is best to set
+These parameters can be set to 0 to cancel one or several tracking features from the cost computation. All these features are not always relevant and have to be chosen carefully for the best tracking accuracy. For example, for tracking circles of radius "r" and squares of the same area moving at 10 px/image, it is best to set:
$$
-(D=10, A=0, AR=0, P=2r(\pi-2\sqrt{\pi}))
+(D = 10, A = 0, AR = 0, P = 2r(\pi-2\sqrt{\pi}))
$$
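As a sanity check on the circle-and-square example above, the perimeter normalization can be derived directly: a circle of radius $r$ has perimeter $2\pi r$, and a square with the same area $\pi r^2$ has side $\sqrt{\pi}\,r$ and hence perimeter $4\sqrt{\pi}\,r$, so the typical perimeter difference between the two shapes is

$$
p_{ij} = \lvert 2\pi r - 4\sqrt{\pi}\,r \rvert = 2r\,\lvert \pi - 2\sqrt{\pi} \rvert \approx 0.81\,r
$$

which is the magnitude of the $P$ value given above.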
-For tracking fish of the same size, travelling at 35px/image, doing small reorientation of 20°, it is best to set
+For tracking fish of the same size, traveling at 35 px/image and making small reorientations of about 20°, it is best to set:
$$
-(D=35, A=20, AR=0, P=0))
+(D = 35, A = 20, AR = 0, P = 0)
$$
-For tracking fish of different size, travelling at 35px/image, doing small reorientation of 20°, with size difference of 100px it is best to set
+For tracking fish of different sizes, traveling at 35 px/image, making small reorientations of about 20°, with a size difference of 100 px, it is best to set:
$$
-(D=35, A=20, AR=100, P=0))
+(D = 35, A = 20, AR = 100, P = 0)
$$
-The best way to set the parameter is to first set the normalization parameters to the mean of the variable, .ie the typical change between two consecutive images:
-* $D = mean(d_{ij})$ where i and j are the same object.
-* $A = mean(a_{ij})$ where i and j are the same object.
-* $AR = mean(ar_{ij})$ where i and j are the same object.
-* $P = mean(p_{ij})$ where i and j are the same object.
-In this case, each tracking feature has the same contribution to the cost. To tune the cost function by weighting more (resp. less) a tracking feature, decrease (resp. increase) the normalization parameter of this feature, or increase (resp. decrease) all the normalization parameters of the others.
+The best way to set the parameters is to first set the normalization parameters to the mean of the variable, i.e., the typical change between two consecutive images:
+* $D = \text{mean}(d_{ij})$ where "i" and "j" are the same object.
+* $A = \text{mean}(a_{ij})$ where "i" and "j" are the same object.
+* $AR = \text{mean}(ar_{ij})$ where "i" and "j" are the same object.
+* $P = \text{mean}(p_{ij})$ where "i" and "j" are the same object.
+In this case, each tracking feature contributes equally to the cost. To give a tracking feature more (resp. less) weight, decrease (resp. increase) its normalization parameter, or increase (resp. decrease) the normalization parameters of all the others.
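FastTrack itself is implemented in C++; as a rough illustration of the cost function and assignment step described above, here is a minimal Python sketch. The toy feature values and variable names are invented for the example, and a brute-force search over permutations stands in for the Hungarian algorithm (which scales far better on real data):

```python
import math
from itertools import permutations

# Toy features for objects in image n and image n+1:
# (x, y, angle_rad, area, perimeter)
prev = [(0.0, 0.0, 0.10, 50.0, 30.0),
        (40.0, 5.0, 1.20, 52.0, 31.0)]
curr = [(38.0, 6.0, 1.10, 51.0, 30.5),
        (2.0, 1.0, 0.20, 50.0, 30.2)]

# Normalization parameters (0 disables a feature) and maximal distance L.
# A is expressed in radians here (~20 degrees).
D, A, AR, P, L = 35.0, 0.35, 100.0, 0.0, 100.0

def term(diff, norm):
    # A zero normalization parameter cancels the feature's contribution.
    return diff / norm if norm > 0 else 0.0

def cost(a, b):
    d = math.hypot(a[0] - b[0], a[1] - b[1])
    if d > L:                       # maximal-distance cutoff: forbid the pair
        return math.inf
    dth = abs(a[2] - b[2]) % (2 * math.pi)
    ang = min(dth, 2 * math.pi - dth)   # minimal angular difference
    return (term(d, D) + term(ang, A)
            + term(abs(a[3] - b[3]), AR) + term(abs(a[4] - b[4]), P))

# Brute-force stand-in for the Hungarian algorithm: try every assignment
# of previous objects to current objects and keep the cheapest one.
best = min(permutations(range(len(curr))),
           key=lambda p: sum(cost(prev[i], curr[j]) for i, j in enumerate(p)))
# best[i] is the index in image n+1 assigned to object i of image n
```

With these toy values, the nearby object is matched despite the small area difference, because the distance term dominates the cost.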
### Memory and distance
-A parameter of memory named maximal time can be set to account for disappearing objects. If the maximal time is set to m, one object can only disappear during m image. If it reappears after, it will be considered as a new object.
+A memory parameter named "maximal time" can be set to account for disappearing objects. If the maximal time is set to "m", an object can disappear for at most m images; if it reappears later, it is considered a new object.
-To speed-up the tracking, the maximal distance (L) parameter sets an infinite cost for all the pairs of objects to such as $d_{ij} > L$. In practice, L is corresponding to the maximal distance an object can disappear.
+To speed up the tracking, the maximal distance "L" parameter assigns an infinite cost to every pair of objects with $d_{ij} > L$. In practice, L corresponds to the maximal distance an object can travel between two detections.
### Spot
-The spot to track will determine if the distance and the angular difference will be calculated from the head, the tail, or the body of the object. Area and perimeter are always computed from the body. Head is defined as the bigger half of the object, separated alongside the object's minor axis.
+The spot to track determines whether the distance and the angular difference are calculated from the head, the tail, or the body of the object. The area and perimeter are always computed from the body. The head is defined as the larger half of the object, split along the object's minor axis.
## Conclusion
-Setting the tracking parameters can be tedious. It can be best achieved by trials and errors (see the Preview option in the Interactive panel). to summarize:
+Setting the tracking parameters can be tedious. It is best achieved by trial and error (see the Preview option in the Interactive panel). To summarize:
1. Choose the right tracking features.
-2. Set the normalization parameters equal to the tracking feature's std, ie, the typical value change.
+2. Set the normalization parameters equal to the tracking feature's standard deviation, i.e., the typical value change.
3. Tune the normalization parameters to increase or decrease the relative weight between each contribution.