From 7d98fc5f9c1e34e483557dec229f2667fe1dae40 Mon Sep 17 00:00:00 2001 From: bgallois Date: Fri, 28 Jul 2023 10:35:32 +0200 Subject: [PATCH] [docs] global: update documentation --- docs/user/batchTracking.md | 66 ++++++++++++++++------------ docs/user/dataOutput.md | 38 ++++++++-------- docs/user/installation.md | 2 +- docs/user/interactiveTracking.md | 56 ++++++++++++------------ docs/user/intro.md | 19 +++++--- docs/user/timeline.md | 13 ++++-- docs/user/trackingInspector.md | 74 ++++++++++++++++++-------------- 7 files changed, 151 insertions(+), 117 deletions(-) diff --git a/docs/user/batchTracking.md b/docs/user/batchTracking.md index 724bdb9..c3e5788 100644 --- a/docs/user/batchTracking.md +++ b/docs/user/batchTracking.md @@ -28,66 +28,78 @@ The Batch Tracking panel is only accessible in Expert Mode (settings -> Expert M * 13: Clear stack * 14: Remove from stack -The Batch Tracking panel is an advanced tool to track a large number of movies automatically. Several behaviors can be combined to load image sequences in a batch with specific background images or parameter files. +The Batch Tracking panel is an advanced tool used to automatically track a large number of movies. It allows you to combine several behaviors to load image sequences in a batch, along with specific background images or parameter files. This powerful feature streamlines the tracking process for multiple movies at once. ## Basic usage -The user can open several image sequences by clicking on the **Open folder** (1) button and selecting one or several folders. FastTrack can automatically load a background and/or a parameters file if a **Tracking_Result** folder is provided with the image sequence; check the **Autoload** (10) tick to activate this behavior. -After opening, image sequences are added to the **Processing stack** (4). If a background image and/or a set of parameters are automatically loaded, the path will be displayed in the second and third columns. If not, the user can select them with the (5) and (6) buttons after importation. +The user can open several image sequences by clicking on the **Open folder** (1) button and selecting one or several folders. FastTrack can automatically load a background and/or a parameters file if a **Tracking_Result** folder is provided within the image sequence; check the **Autoload** (10) tick to activate this behavior. + +After opening, image sequences are added to the **Processing stack** (4). If a background image and/or a set of parameters are automatically loaded, their paths will be displayed in the second and third columns, respectively. If not, the user can select them using the (5) and (6) buttons after importing. + **By default**, if no background image and parameter file are selected, FastTrack will use the parameters provided in the Parameters table (9) **before** the image sequence importation. -The user can delete an image sequence by selecting the corresponding line in the **Processing stack** (4) and clicking on the **Remove** (14) button. The user can clear all the **Processing stack** (14) by clicking the **Clear** (13) button. -To process the stack, click the **Start Tracking** (12) button. + +The user can delete an image sequence by selecting the corresponding line in the **Processing stack** (4) and clicking on the **Remove** (14) button. To clear the entire **Processing stack**, the user can click the **Clear** (13) button. 
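Before starting the stack, it can help to check which sequence folders already contain the files that **Autoload** (10) looks for, i.e. a **Tracking_Result** sub-folder holding *background.pgm* and/or *cfg.toml* (see the Tracking Result page). A minimal sketch in Python, assuming the */myExperiment/RunN/images* layout used in the suffix example below; the script is only an illustration and is not part of FastTrack:

```python
from pathlib import Path

# Hypothetical experiment root; adjust to your own folder tree.
ROOT = Path("/myExperiment")

for run in sorted(ROOT.glob("Run*/images")):
    result_dir = run / "Tracking_Result"
    has_background = (result_dir / "background.pgm").is_file()
    has_parameters = (result_dir / "cfg.toml").is_file()
    print(f"{run}: background={'yes' if has_background else 'no'}, "
          f"parameters={'yes' if has_parameters else 'no'}")
```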
+ +To process the stack, click the **Start Tracking** (12) button, and FastTrack will perform the tracking analysis on all the image sequences in the stack. ## More advanced options ### Adding a suffix -The user can append a suffix to the imported folders *folder_path/ + suffix/* -For example, it can be usefull with a folder tree like this one: +The user can append a suffix to the imported folders by using the *folder_path/ + suffix/* notation. + +For example, it can be useful with a folder tree like this one: - /myExperiment/Run1/images - /myExperiment/Run2/images - /myExperiment/Run3/images -The user can easily select in one time the folders: +The user can easily select all the folders at once: - /myExperiment/Run1 - /myExperiment/Run2 - /myExperiment/Run3 -And then add the suffix *images/* to select the desired folders without having to do it manually three times. +And then add the suffix *images/* to select the desired folders without having to do it manually three times. This feature allows for a more efficient and convenient selection of folders. ### Unique background image -The user can select a unique background image. Open an image with the **Unique background** (2) button, and **all the sequences in the stack** and sequences that will be imported will be using this background image. The user can use the **Clear** (12) to reset the default behavior. +The user can select a unique background image by opening an image with the **Unique background** (2) button. Once a background image is selected, **all the sequences in the stack** and any sequences that will be imported afterward will use this selected background image. This means that the chosen background image will be applied to all the sequences during the tracking process. + +To reset to the default behavior and remove the selected background image, the user can use the **Clear** (12) button. This allows the user to revert to the original state where each sequence might have its own background image or none at all. ### One parameter file -To apply the same parameters file to all the imported sequences: + +To apply the same parameters file to all the imported sequences, you have the following options: Manual selection: -* Untick the **Autoload** (10). -* Select a set of parameters in the **Parameters table** (9). -* The sequences that will be imported will use this set of parameters. +1. Untick the **Autoload** (10) checkbox. +2. Select a set of parameters in the **Parameters table** (9). +3. The sequences that will be imported will use this set of parameters. With a file: -* Tick the **Autoload** (10) -* Load the sequence with the right parameters file. -* Untick the **Autoload** (10). -* The sequences that will be imported will use this set of parameters. +1. Tick the **Autoload** (10) checkbox. +2. Load the sequence with the right parameters file. +3. Untick the **Autoload** (10) checkbox. +4. The sequences that will be imported will use this set of parameters. -With a file: +Alternative method with a file: -* Untick the **Autoload** (10). -* Load a sequence. -* Select the parameter file with the (6) button. -* The sequences that will be imported will use this set of parameters. +1. Untick the **Autoload** (10) checkbox. +2. Load a sequence. +3. Select the parameter file using the (6) button. +4. The sequences that will be imported will use this set of parameters. 
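A complementary, scripted way to reuse one parameter set, under the assumption that **Autoload** (10) reads *cfg.toml* from each sequence's **Tracking_Result** folder as described above, is to copy a reference *cfg.toml* into every sequence folder before importing. A hedged sketch; the paths are hypothetical and the script is not part of FastTrack:

```python
import shutil
from pathlib import Path

ROOT = Path("/myExperiment")                              # hypothetical experiment root
REFERENCE_CFG = Path("/myExperiment/reference/cfg.toml")  # hypothetical reference parameter file

for run in sorted(ROOT.glob("Run*/images")):
    result_dir = run / "Tracking_Result"
    result_dir.mkdir(exist_ok=True)                       # sequences never tracked before have no such folder yet
    shutil.copyfile(REFERENCE_CFG, result_dir / "cfg.toml")
    print(f"parameters copied to {result_dir / 'cfg.toml'}")
```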
## Behavior reminder - (10) unticked, (2) not selected: FastTrack will use the parameters provided in the Parameters table (9) **before** the image sequence is added to the stack. It can be overwritten after importation with the (5) and (6) buttons. -- (10) ticked, (2) not selected: FastTrack will use the background and the parameters file in the Tracking_Result folder. If these files are missing, FastTrack will use the parameters provided in the Parameters table (9) **before** the image sequence is added to the stack. -- (10) ticked, (2) selected: the background selected in (2) will overwrite the automatically detected background. -- (3) selected: the image sequence path will be appended with the suffix, and default behavior will be applied with this path. -- (2) selected: select a unique background will overwrite all the existing background in the stack. + +- (10) ticked, (2) not selected: FastTrack will attempt to use the background and the parameters file found in the Tracking_Result folder of the image sequence. If these files are missing, FastTrack will fallback to using the parameters provided in the Parameters table (9) **before** the image sequence is added to the stack. + +- (10) ticked, (2) selected: When you select a unique background in (2), it will overwrite the automatically detected background for all the sequences in the stack. The selected background will be used for tracking. + +- (3) selected: If you select a suffix in (3), it will be appended to the image sequence path, and the default behavior will be applied using this modified path. + +- (2) selected: When you select a unique background in (2), it will overwrite all the existing backgrounds in the stack. The selected background will be applied to all sequences during tracking. diff --git a/docs/user/dataOutput.md b/docs/user/dataOutput.md index 605c385..2155514 100644 --- a/docs/user/dataOutput.md +++ b/docs/user/dataOutput.md @@ -6,23 +6,25 @@ sidebar_label: Tracking Result After a tracking analysis (or an analysis preview), FastTrack saves several files inside the **Tracking_Result** (or inside the **Tracking_Result_VideoFileName** for a video file) folder: -* *tracking.db*: the tracking result as a SQLite database -* *tracking.txt*: the tracking result -* *annotation.txt*: the annotation -* *background.pgm*: the background image -* *cfg.toml*: the parameters used for the tracking - -The tracking result file is simply a text file with 23 columns separated by a '\t' character. This file can easily be loaded to subsequent analysis see [this Python](https://www.fasttrack.sh/blog/2021/08/09/FastAnalysis-tuto) and [this Julia](https://www.fasttrack.sh/blog/2020/11/25/Data-analysis-julia). - -* **xHead, yHead, tHead**: the position (x, y) and the absolute angle of the object's head. -* **xTail, yTail, tTail**: the position (x, y) and the absolute angle of the object's tail. -* **xBody, yBody, tBody**: the position (x, y) and the absolute angle of the object. -* **curvature, areaBody, perimeterBody**: curvature of the object, area and perimeter of the object (in pixels). -* **headMajorAxisLength, headMinorAxisLength, headExcentricity**: parameters of the head's ellipse (headMinorAxisLength and headExcentricity are semi-axis length). -* **bodyMajorAxisLength, bodyMinorAxisLength, bodyExcentricity**: parameters of the body's ellipse (bodyMinorAxisLength and bodyExcentricity are semi-axis length). 
-* **tailMajorAxisLength, tailMinorAxisLength, tailExcentricity**: parameters of the tail's ellipse (bodyMinorAxisLength and bodyExcentricity are semi-axis length). -* **imageNumber**: index of the frame. -* **id**: object unique identification number. +- *tracking.db*: the tracking result as a SQLite database +- *tracking.txt*: the tracking result +- *annotation.txt*: the annotation +- *background.pgm*: the background image +- *cfg.toml*: the parameters used for the tracking + +The tracking result file is simply a text file with 23 columns separated by a '\t' character. This file can easily be loaded for subsequent analysis using Python (as shown in [this Python tutorial](https://www.fasttrack.sh/blog/2021/08/09/FastAnalysis-tuto)) and Julia (as shown in [this Julia tutorial](https://www.fasttrack.sh/blog/2020/11/25/Data-analysis-julia)). + +The 23 columns represent the following data for each tracked object: + +- **xHead, yHead, tHead**: the position (x, y) and the absolute angle of the object's head. +- **xTail, yTail, tTail**: the position (x, y) and the absolute angle of the object's tail. +- **xBody, yBody, tBody**: the position (x, y) and the absolute angle of the object's body. +- **curvature, areaBody, perimeterBody**: curvature of the object, area, and perimeter of the object (in pixels). +- **headMajorAxisLength, headMinorAxisLength, headExcentricity**: parameters of the head's ellipse (headMinorAxisLength and headExcentricity are semi-axis lengths). +- **bodyMajorAxisLength, bodyMinorAxisLength, bodyExcentricity**: parameters of the body's ellipse (bodyMinorAxisLength and bodyExcentricity are semi-axis lengths). +- **tailMajorAxisLength, tailMinorAxisLength, tailExcentricity**: parameters of the tail's ellipse (tailMinorAxisLength and tailExcentricity are semi-axis lengths). +- **imageNumber**: index of the frame. +- **id**: object's unique identification number.

[Example preview of the *tracking.txt* data loaded as a table: 2,475 rows × 23 columns (printing of 14 columns omitted), showing the Float64 columns xHead, yHead, tHead, xTail, yTail, tTail, xBody, yBody and tBody for the first frames.]
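For example, a minimal way to load the result table in Python, as a sketch assuming pandas is installed, that *tracking.txt* keeps the column names above as its header row, and that the path is adapted to your own analysis folder:

```python
import pandas as pd

# tracking.txt is tab-separated; the 23 column names listed above form its header row
tracks = pd.read_csv("images/Tracking_Result/tracking.txt", sep="\t")

# Trajectory of a single object: body centre and orientation, ordered by frame
obj = tracks[tracks["id"] == 0].sort_values("imageNumber")
print(obj[["imageNumber", "xBody", "yBody", "tBody"]].head())
```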
@@ -33,7 +35,7 @@ Positions are in pixels, in the frame of reference of the original image, zero i v y -**Note:** If several tracking analyses are performed on the same image sequence, the previous folder is not erased. It will be renamed as **Tracking_result_DateOfTheNewAnalysis**. +**Note:** If several tracking analyses are performed on the same image sequence, the previous folder will not be erased. Instead, it will be renamed as **Tracking_result_DateOfTheNewAnalysis**. ## Data analysis diff --git a/docs/user/installation.md b/docs/user/installation.md index 3cb2dc0..896100f 100644 --- a/docs/user/installation.md +++ b/docs/user/installation.md @@ -8,7 +8,7 @@ sidebar_label: Installation --- **NOTE** -During the installation on Windows and Mac systems, security alerts are displayed because the FastTrack executable does not possess an EV code signing certificate. These alerts can be ignored. FastTrack executable can be verified easily (and freely) by comparing the MD5 checksum. See the [installation video](https://www.youtube.com/watch?v=EvfCIS7BmSM) for more details. +During the installation on Windows and Mac systems, security alerts may be displayed due to the absence of an EV code signing certificate for the FastTrack executable. These alerts can be safely ignored. To verify the FastTrack executable, you can easily and freely compare the MD5 checksum. For more detailed instructions, you can refer to the [installation video](https://www.youtube.com/watch?v=EvfCIS7BmSM). --- diff --git a/docs/user/interactiveTracking.md b/docs/user/interactiveTracking.md index dbd0e41..23d8e7c 100644 --- a/docs/user/interactiveTracking.md +++ b/docs/user/interactiveTracking.md @@ -4,22 +4,23 @@ title: Interactive Tracking sidebar_label: Interactive Tracking --- -The Interactive panel provides a way to perform a tracking analysis and review it in an interactive environment. -Several steps have to be performed in the right order (some are mandatory, some are optional) to perform a successful tracking analysis. +The Interactive panel serves as a platform for conducting tracking analysis and reviewing the results within an interactive environment. To achieve a successful tracking analysis, it is essential to follow a series of steps in the correct order. While some steps are mandatory, others are optional, allowing flexibility in the process. The provided workflow diagram (see below) illustrates the sequential flow of these steps, assisting users in efficiently navigating through the analysis. ![Workflow](assets/interactive_workflow.svg) ## Opening a file -The first step of a tracking analysis is to open a video file. FastTrack supports video files and image sequences. Click on the file or an image of a sequence to automatically load the movie. +The first step of a tracking analysis is to open a video file. FastTrack supports both video files and image sequences. Click on the file or on an image of the sequence to automatically load the movie. ![File opening](assets/interactive_open.gif) ## Computing the background -The background can be computed or imported. To compute the background, select a method and an image number. Images are selected in the image sequence at regular intervals, and three methods of computation by z-projection are available: +The background can be computed or imported. To compute the background, select a method and an image number.
Images are selected from the image sequence at regular intervals, and three methods of computation by z-projection are available: -* Min: each pixel of the background image is the pixel with the minimal value across the selected images from the image sequence. Useful when the objects are light on a dark background. -* Max: each pixel of the background image is the pixel with the maximal value across the image sequence's selected images. Useful when the objects are dark on a light background. -* Average: each pixel of the background image is the average of the pixels across the image sequence's selected images. +1. Min: Each pixel of the background image is the pixel with the minimal value across the selected images from the image sequence. This method is useful when the objects are light on a dark background. + +2. Max: Each pixel of the background image is the pixel with the maximal value across the selected images from the image sequence. This method is useful when the objects are dark on a light background. + +3. Average: Each pixel of the background image is the average of the pixels across the selected images from the image sequence. The images can be registered before the z-projection. Three methods of registration are available. ![Background computing](assets/interactive_back.gif) @@ -31,53 +32,55 @@ To select a region of interest, draw a rectangle on the display with the mouse ## Computing the binary image -To compute the binary image from the background image and the image sequence, select the threshold value, and see the result on the display. The background type is automatically selected after the background computation. However, it can be modified: select Dark Background if the objects are light on a dark background, and Light background if the objects are dark on a light background. +To generate the binary image from the background image and the image sequence, follow these steps: first, select the threshold value, and then observe the result on the display. The background type is automatically determined during the background computation process. However, it can be adjusted manually if needed: choose 'Dark Background' if the objects are light on a dark background, or select 'Light Background' if the objects are dark on a light background. ![Binarizing](assets/interactive_thresh.gif) ## Applying morphological operations (optional) -It is possible to apply a morphological operation on the binary image. Select a morphological operation, kernel size, and geometry. See the result on the display. For more information about the different operations, see https://docs.opencv.org/trunk/d9/d61/tutorial_py_morphological_ops.html. +It is possible to apply a morphological operation to the binary image. Select a morphological operation, choose an appropriate kernel size and geometry, and then observe the result on the display. For more detailed information about the various operations, refer to the following link: https://docs.opencv.org/trunk/d9/d61/tutorial_py_morphological_ops.html. ![Applying morphological operations](assets/interactive_morph.gif) ## Tuning the detection parameters -Objects are detected by their size. Select the maximum and minimum size of the detected objects. The detected objects will be colored in green in the display, and the rejected object will be displayed in red. +Objects are detected based on their size. Choose the maximum and minimum size for the detected objects. 
The identified objects will be highlighted in green on the display, while the rejected objects will be shown in red. ![](assets/interactive_detec.gif) ## Tuning the tracking parameters -Several parameters can be modified to ensure a good tracking analysis. See [this page](http://www.fasttrack.sh/docs/trackingParameters/) for more details: +Several parameters can be modified to ensure a good tracking analysis. For more details, see [this page](http://www.fasttrack.sh/docs/trackingParameters/). ### Hard parameters Hard parameters have to be set manually by the user: -* Maximal distance: if an object traveled more than this distance between two consecutive images, it would be considered as a new object. -* Maximal time: number of images an object is allowed to disappear. If an object reappears after this time, it will be considered as a new object. If the number of objects is constant throughout the movie, set the Maximal Time equal to the movie's number of frames. -* Spot to track: part of the object features used to do the tracking. Select the part that reflects the better the direction of the object. Legacy parameter, head corresponds to the smaller mid-part of the object, tail ellipse the wider mid-part of the object, and body is the full object. +- Maximal distance: If an object traveled more than this distance between two consecutive images, it would be considered a new object. +- Maximal time: The number of images an object is allowed to disappear. If an object reappears after this time, it will be considered a new object. If the number of objects is constant throughout the movie, set the Maximal Time equal to the movie's number of frames. +- Spot to track: This represents part of the object features used for tracking. Select the part that best reflects the direction of the object. Legacy parameters include 'head,' which corresponds to the smaller mid-part of the object, 'tail ellipse,' which corresponds to the wider mid-part of the object, and 'body,' which represents the full object. ### Soft parameters -The soft parameters can be leveled automatically by clicking on the Level button. This will automatically compute the soft parameters as each contribution weighs one quarter of the total cost (see more at [this page](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008697#sec003) section "automatic tracking parameters"). It has to be manually fine-tuned by the user to find the optimal soft parameters with the system's knowledge. For example, for a system where the objects' direction is not relevant, the user will select the Normalization angle equal to 0. +The soft parameters can be automatically adjusted by clicking on the 'Level' button. This action computes the soft parameters, with each contribution weighing one quarter of the total cost (see more at [this page](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008697#sec003) in the section 'automatic tracking parameters'). However, manual fine-tuning by the user is necessary to find the optimal soft parameters based on the system's knowledge. For example, if the objects' direction is not relevant in the system, the user can select the 'Normalization angle' equal to 0. -* Normalization distance (legacy Maximal length/ Typical length): typical distance traveled between two consecutive images. -* Normalization angle (legacy Maximal angle/Typical angle): typical reorientation possible between two consecutive images. -* Normalization area: typical difference in the area. 
-* Normalization perimeter: typical difference in the perimeter. +The soft parameters include: + +- Normalization distance (legacy Maximal length/Typical length): Represents the typical distance traveled between two consecutive images. +- Normalization angle (legacy Maximal angle/Typical angle): Represents the typical reorientation possible between two consecutive images. +- Normalization area: Represents the typical difference in area. +- Normalization perimeter: Represents the typical difference in perimeter. ## Registration -The image registration is the process to correct small displacements and rotations of the camera during the movie recording. FastTrack provides several methods for registering the movie: +Image registration is the process of correcting small displacements and rotations of the camera during movie recording. FastTrack provides several methods for registering the movie: -* By phase correlation +* Phase correlation * ECC image alignment -* Features based +* Features-based -Image registration is very computationally intensive and can drastically decrease the speed of the program. +Please note that image registration is computationally intensive and can significantly decrease the speed of the program, and that the Replay tool will not use the registered images. ## Previewing the tracking -The tracking can be previewed on a sub-sequence of images. It can be useful to tune parameters if the tracking is slow. +The tracking can be previewed on a sub-sequence of images. This feature can be useful for tuning parameters if the tracking is slow. ## Display options @@ -86,10 +89,9 @@ Several display options are available and unlocked at each step of the analysis. * Original: original image sequence * Background subtracted: image sequence minus the background image. * Binary: binary image sequence with detection overlays. -* Tracking: tracking data overlay. +* Tracking: tracking data overlay. ## Layout options -Several layouts and themes are available in the layout menu in the top bar. The user can also build his layout by dragging the option docks in the window. - +Several layouts and themes are available in the layout menu in the top bar. Additionally, the user can build their own layout by dragging the option docks within the window. [See a video demonstration](https://www.youtube.com/watch?v=grxAAX0J6CQ&feature=youtu.be) diff --git a/docs/user/intro.md b/docs/user/intro.md index 27cedb2..bf87585 100644 --- a/docs/user/intro.md +++ b/docs/user/intro.md @@ -5,18 +5,20 @@ sidebar_label: Getting Started --- -Welcome to the FastTrack user manual. This manual will present the tracking software and how to use it. Please contact Benjamin Gallois by email at **benjamin.gallois@fasttrack.sh** if you need more information or to signal a bug. If you encounter any problem, please check at [FastTrack issues](https://github.com/FastTrackOrg/FastTrack/issues) to see if the error is already signaled and being addressed. For comments or suggestions, please open a [discussion](https://github.com/FastTrackOrg/FastTrack/discussions). +Welcome to the FastTrack user manual. This manual will introduce you to the tracking software and provide instructions on how to use it. If you need further information or want to report a bug, please contact Benjamin Gallois via email at **benjamin.gallois@fasttrack.sh**.
In case you encounter any issues, we recommend checking the [FastTrack issues](https://github.com/FastTrackOrg/FastTrack/issues) page to see if the problem has already been reported and is currently being addressed. For any comments or suggestions, please feel free to participate in the [discussion](https://github.com/FastTrackOrg/FastTrack/discussions). Your feedback is valuable to us! -FastTrack is a cross-platform application designed to track multiple objects in video recording. Stable versions of the software are available for Linux, Mac, and Windows. The source code can be downloaded at https://github.com/FastTrackOrg/FastTrack. +FastTrack is a versatile cross-platform application specifically designed for tracking multiple objects in video recordings. The software offers stable versions for Linux, Mac, and Windows operating systems. For those interested, the source code is available for download at https://github.com/FastTrackOrg/FastTrack. -Two main features are implemented in the software: +The software boasts two primary features: -- An automatic tracking algorithm that can detect and track objects, conserving the objects' identities across the video recording. -- An ergonomic tool allows the user to check, correct and annotate the tracking. +1. An automatic tracking algorithm capable of detecting and tracking objects while preserving their identities throughout the video recording. +2. An ergonomic tool that enables users to review, correct, and annotate the tracking results. -The FastTrack user interface is implemented with Qt and the image analysis with the OpenCV library. This allows a performant and responsive software amenable to processing large video recordings. FastTrack uses SQLite database to store the data internally and tracking results are exported in a plain text file (as well as accessible through the database). +With these features, FastTrack provides a comprehensive solution for efficient and accurate object tracking in videos. -FastTrack was first a [Ph.D. thesis](https://hal.archives-ouvertes.fr/tel-03243224/document) side project started by [Benjamin Gallois](https://github.com/bgallois) in his spare time that has then taken dedicated time in his Ph.D. project. The software's core is still maintained in his spare time; therefore, new features implementation, bug fixes, and help can take some time. +The FastTrack user interface is implemented with Qt, and image analysis is performed using the OpenCV library. This combination enables the software to deliver excellent performance and responsiveness, making it well-suited for processing large video recordings. To store data internally, FastTrack utilizes an SQLite database, while tracking results are exported in plain text format and are also accessible through the database. + +FastTrack originated as a side project during [Benjamin Gallois](https://github.com/bgallois)' Ph.D. thesis, which you can find [here](https://hal.archives-ouvertes.fr/tel-03243224/document) . Over time, it evolved into a dedicated part of his Ph.D. project. Despite being a spare-time endeavor, Benjamin continues to maintain the software's core. As a result, the implementation of new features, bug fixes, and support may take some time to be addressed. We appreciate your understanding and patience in this regard. **Not sure if you want to use FastTrack? Check these five most common questions:** @@ -34,3 +36,6 @@ FastTrack is a [free](https://www.gnu.org/philosophy/free-sw.en.html) software u **Do I need programming skills?** No. 
+ +**I need more flexibility?** +PyFastTrack is a Python library that integrates the tracking technology of FastTrack. With PyFastTrack, you can combine FastTrack with your own trained YOLO detector, allowing you to detect and track various objects with ease. diff --git a/docs/user/timeline.md b/docs/user/timeline.md index 3445d54..f676c49 100644 --- a/docs/user/timeline.md +++ b/docs/user/timeline.md @@ -4,16 +4,21 @@ title: Timeline sidebar_label: Timeline --- -FastTrack provides a tool, called the timeline, to navigate inside a video or image sequence easily. -Hover the mouse cursor above the timeline to move across the video. Right-click to place the cursor at a given position and that will save this position when the cursor exit the timeline. -Double left-click to place a marker, right-click on this marker to delete it. -Keyboard shortcuts are available to move the cursor frame by frame: +FastTrack provides a convenient tool called the timeline, which allows users to navigate easily within a video or image sequence. + +To use the timeline, hover the mouse cursor over it to move across the video. Right-clicking will place the cursor at a specific position, which will be saved when the cursor exits the timeline. + +To place a marker, double left-click on the desired location. To remove a marker, right-click on it. + +There are also keyboard shortcuts available to move the cursor frame by frame: * D: move to the next frame. * Q: move to the previous frame (AZERTY layout). * A: move to the previous frame (QWERTY layout). * Space: start/stop autoplay. +These features make it easy and efficient to interact with the video or image sequence in FastTrack. + diff --git a/docs/user/trackingInspector.md b/docs/user/trackingInspector.md index 9c8919c..cdc7855 100644 --- a/docs/user/trackingInspector.md +++ b/docs/user/trackingInspector.md @@ -39,80 +39,88 @@ The Tracking Inspector panel is accessible in Expert Mode (settings -> Expert Mo * 23: Display * 24: Overlay - -**The Tracking Inspector** is a tool to display the result of a tracking analysis and to correct the tracking manually if necessary. For example, the user can delete an object to remove an artifact or change the object ID to correct a tracking error. To make the user's life easier, FastTrack provides an ergonomic interface with built-in keyboard shortcuts. FastTrack alleviates the tedious work of review and correction, and the user can achieve 100% tracking accuracy rapidly and efficiently. +**The Tracking Inspector** is a tool designed to display the results of a tracking analysis and to enable manual corrections when necessary. For instance, users can delete an object to remove artifacts or change the object ID to correct tracking errors. FastTrack provides an ergonomic interface with built-in keyboard shortcuts, making the user's experience more seamless. By alleviating the tedious work of review and correction, FastTrack allows users to achieve 100% tracking accuracy rapidly and efficiently. ## Loading a tracking analysis -To load a tracking analysis previously tracked in FastTrack, first, click on the **Open** button (1) and select a movie or an image of an image sequence. It will load the latest tracking analysis available. -If the movie was tracked several times, the last tracking analysis is stored in the **Tracking_Result** folder and the previous tracking analysis in the **Tracking_Result_Date** folders. 
-Old tracking analysis can be loaded using the **Open Tracking_result directory** button (2) (only activated if a movie is already loaded). -Click on the **Reload)** button (3) to reload the tracking data if necessary. -The software can only load a tracking analysis if the folder architecture is preserved, .ie the folder with the image sequence or the movie has to have a sub-folder named **Tracking_Result** containing at least the *tracking.txt* file. +To load a previously tracked analysis in FastTrack, follow these steps: + +1. Click on the **Open** button (1) and select either a movie or an image sequence. This will load the latest available tracking analysis. +If the movie was tracked multiple times, the last tracking analysis is stored in the **Tracking_Result** folder, while the previous tracking analyses are stored in **Tracking_Result_Date** folders. +2. To load an older tracking analysis, use the **Open Tracking_result directory** button (2) (this button is only activated if a movie is already loaded). +3. If necessary, click on the **Reload** button (3) to refresh the tracking data. + +Note: The software can only load a tracking analysis if the folder architecture is preserved. In other words, the folder containing the image sequence or movie must have a sub-folder named **Tracking_Result**, which must contain at least the *tracking.txt* file. ## Display options Several tracking overlay options are available on the tracking overlay panel (24): -* Ellipse: display the head, tail, and or body ellipses on the tracked objects. -* Arrows: display an arrow on the head, tail, and or body of the tracked object indicating the orientation. -* Numbers: display the ids of the tracked objects. -* Traces: display the previous 50 positions of the tracked objects. -* Size: the size of the tracking overlay. -* Frame rate: display and saving frame rate. +* **Ellipse**: Display the head, tail, and/or body ellipses on the tracked objects. +* **Arrows**: Display an arrow on the head, tail, and/or body of the tracked object indicating the orientation. +* **Numbers**: Display the IDs of the tracked objects. +* **Traces**: Display the previous 50 positions of the tracked objects. +* **Size**: Adjust the size of the tracking overlay. +* **Frame rate**: Display and save frame rate. -Several useful information on the selected object can be found in the information table (18). The user can go to the image where the object has appeared for the first time by clicking directly on the table's corresponding cell. +Additionally, several useful pieces of information about the selected object can be found in the information table (18). The user can directly click on the corresponding cell in the table to navigate to the image where the object first appeared. ## Inspecting the tracking -The tracking can be inspected by moving the display cursor (19), seeing the image number (20), and automatically playing the movie (21) at a selected frame rate (22). -Automatically detected occlusions (overlapped objects) can be reviewed by clicking on the **Previous** (12) and **Next** (13) occlusion buttons (this function is experimental and can miss some occlusions). +The tracking can be inspected by moving the display cursor (19), observing the image number (20), and automatically playing the movie (21) at a selected frame rate (22). + +Automatically detected occlusions (overlapped objects) can be reviewed by clicking on the **Previous** (12) and **Next** (13) occlusion buttons. 
Please note that this function is experimental and may miss some occlusions. ## Annotating the tracking -The user can annotate any image of the tracking. Write the annotation in the annotate text entry (17). The user can search across annotations with the find bar (14) and the buttons (15)(16). All the annotations are saved in the *annotation.txt* file in the **Tracking_Result** folder. +The user can annotate any image of the tracking by writing the annotation in the annotate text entry (17). Annotations can be searched using the find bar (14) and the buttons (15)(16). All annotations are saved in the *annotation.txt* file in the **Tracking_Result** folder. ## Correcting the tracking ### Swapping the data of two objects -The user can correct an error by swapping two object's ID from the current image to the end of the sequence as follow: +The user can correct an error by swapping the IDs of two objects from the current image to the end of the sequence, as follows: -* Left-click on the first object, the object ID and color are displayed on the first selection box (6). -* Left-click on the second object, the object ID and color are displayed on the second selection box (8) -* Right-click or click on the **Swap Button** (7) to exchange the ID of the two selected objects from the current image to the last image of the sequence. +1. Left-click on the first object; the object ID and color will be displayed in the first selection box (6). +2. Left-click on the second object; the object ID and color will be displayed in the second selection box (8). +3. Right-click or click on the **Swap Button** (7) to exchange the IDs of the two selected objects from the current image to the last image of the sequence. ### Delete the data of an object -To delete one object of several frames: +To delete one object from several frames: -* Double left click on the object, the object ID and color are displayed on the second selection box (8). -* Select the number of frames on which to delete the object in the box (11). Shortcut C is available to focus on the selection box. -* Click on the **Delete** button (10) to delete the object from the current frame to the current frame plus the selected number. +1. Double left-click on the object; the object ID and color will be displayed in the second selection box (8). +2. Select the number of frames over which you want to delete the object in the box (11). Shortcut C is available to focus on the selection box. +3. Click on the **Delete** button (10) to remove the object from the current frame to the current frame plus the selected number. -To delete one object on the current frame: +To delete one object from the current frame: -* Double left-click on the object, the object ID and color are displayed on the second selection box (8). -* Click on the **Delete One** button (9) to delete the object on the current frame. +1. Double left-click on the object; the object ID and color will be displayed in the second selection box (8). +2. Click on the **Delete One** button (9) to remove the object from the current frame. ### Keyboard shortcuts summary -A set of keyboard shortcuts are available to speed up the tracking correction. +A set of keyboard shortcuts is available to speed up the tracking correction: - Q/A: go to the previous image. - D: go to the next image. - F: delete the selected object on the current image. - C: enter the number of images where an object has to be deleted.
-- G: delete an object from the current image to the current plus the selected number. +- G: delete an object from the current image to the current image plus the selected number. ## Saving -All the changes made in the inspector are automatically saved in the original *tracking.txt* file and SQLite database. +All the changes made in the inspector are automatically saved to the original *tracking.txt* file and the SQLite database. ## Exporting a movie -To export a movie of a tracking analysis, select the desired display overlay and click on the **Save** button (3). Select a folder and a name to save the file. Only .avi format is supported. +To export a movie of a tracking analysis, follow these steps: + +1. Select the desired display overlay in the tracking analysis. +2. Click on the **Save** button (3). +3. Choose a folder and specify a name for the file. +4. Please note that only .avi format is supported for the exported movie. -Note: Movie with many objects by frame can be challenging to load and review in the tracking Inspector. +Note: Movies with many objects per frame can be challenging to load and review in the tracking Inspector. [See a video demonstration](https://youtu.be/5lhx-r_DHLY)
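Since the corrections made in the Tracking Inspector are written back to both *tracking.txt* and the SQLite database (see the Saving section above), they can also be read back programmatically. A minimal sketch using Python's built-in sqlite3 module; the database schema is not documented on this page, so the snippet lists the tables it finds instead of assuming their names, and the path is only an example:

```python
import sqlite3

con = sqlite3.connect("images/Tracking_Result/tracking.db")

# Discover the tables rather than assuming a schema
tables = [row[0] for row in
          con.execute("SELECT name FROM sqlite_master WHERE type='table'")]
print("tables:", tables)

# Peek at the first rows of each table holding the corrected data
for name in tables:
    print(name, con.execute(f"SELECT * FROM {name} LIMIT 3").fetchall())

con.close()
```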