From 16c29d20542b6479411f660670146f82917510d9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Tim=20B=C3=BCchner?= Date: Wed, 10 Apr 2024 16:37:54 +0200 Subject: [PATCH 01/15] Include reviewer feedback --- .gitignore | 3 ++- paper/paper.md | 18 +++++++++--------- 2 files changed, 11 insertions(+), 10 deletions(-) diff --git a/.gitignore b/.gitignore index ab753a2..592b2fb 100644 --- a/.gitignore +++ b/.gitignore @@ -77,6 +77,7 @@ venv* paper/paper.jats paper/paper.pdf paper/media +paper/notes.md !tests/**/*.mp4 -!tests/**/*.csv \ No newline at end of file +!tests/**/*.csv diff --git a/paper/paper.md b/paper/paper.md index 902f0cd..587afa5 100644 --- a/paper/paper.md +++ b/paper/paper.md @@ -57,13 +57,13 @@ The programming interface enables efficient processing of temporal resolution vi The GUI offers non-programmers an intuitive way to use the analysis functions, visualize the results, and export the data for further analysis. `JeFaPaTo` is designed to be extendable by additional analysis functions and facial features and is under joint development by computer vision and medical experts to ensure high usability and relevance for the target group. -`JeFaPoTo` leverages the `mediapipe` library [@lugaresiMediaPipeFrameworkBuilding2019;@kartynnikRealtimeFacialSurface2019a] to extract facial landmarks and blend shape features from video data at 60 FPS (on modern hardware). +`JeFaPaTo` leverages the `mediapipe` library [@lugaresiMediaPipeFrameworkBuilding2019;@kartynnikRealtimeFacialSurface2019a] to extract facial landmarks and blend shape features from video data at 60 FPS (on modern hardware). With the landmarks, we compute the `EAR` (Eye-Aspect-Ratio) [@soukupovaRealTimeEyeBlink2016] for both eyes over the videos. Additionally, `JeFaPaTo` detects blinks, matches left and right eye, and computes medically relevant statistics, a visual summary for the provided video, shown in \autoref{fig:summary}, and exports the data in various formats for further independent analysis. The visual summary lets medical experts quickly get an overview of the blinking behavior. As shown in \autoref{fig:summary}, the blinks per minute are shown as a histogram over time in the upper axis, and the delay between blinks is shown in the right axis. The main plot comprises the scatter plot of the `EAR` score for the left and right eye, and the dots indicate the detected blinks, with the rolling mean and standard deviation shown as a line plot. -This summary enables a quick individualized analysis for each video, thus also patients, and can be included in medical reports. +This summary creates a compact overview by summarizing the blinking behavior throughout the video, enabling a quick individualized analysis for each video. ![The plot presents a visual summary of blinking patterns captured over 20 minutes, recorded at 240 frames per second (FPS). It illustrates the temporal variation in paired blinks, quantifies the blink rate as blinks per minute, and characterizes the distribution of the time discrepancy between left and right eye closures.\label{fig:summary}](img/summary.png) @@ -78,7 +78,7 @@ Hence, the correct localization of facial landmarks is of high importance and th Once a user provides a video in the GUI, the tool performs an automatic face detection, and the user can adapt the bounding box if necessary. Due to the usage of `mediapipe` [@lugaresiMediaPipeFrameworkBuilding2019;@kartynnikRealtimeFacialSurface2019a], the tool can extract 468 facial landmarks and 52 blend shape features. 
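For illustration, the following minimal sketch shows how such a per-frame landmark extraction can look with the legacy `mediapipe` FaceMesh solution. It is a hedged sketch under our own assumptions, not the actual `JeFaPaTo` implementation: the function name is ours, the real pipeline uses the queued I/O described below, and the 52 blend shape features require the newer `mediapipe` tasks API, which is omitted here.

```python
# Sketch only: per-frame extraction of the 468 facial landmarks with the
# legacy mediapipe FaceMesh solution (not the actual JeFaPaTo code).
import cv2
import mediapipe as mp

def extract_landmarks(video_path: str):
    """Yield (frame_index, landmarks), where landmarks is a list of 468
    (x, y) tuples in normalized image coordinates."""
    capture = cv2.VideoCapture(video_path)
    # refine_landmarks=False keeps the 468-point mesh mentioned above;
    # True would append 10 additional iris landmarks.
    with mp.solutions.face_mesh.FaceMesh(
        static_image_mode=False,  # track the face across frames
        max_num_faces=1,
        refine_landmarks=False,
    ) as face_mesh:
        frame_index = 0
        while True:
            ok, frame_bgr = capture.read()
            if not ok:
                break
            # mediapipe expects RGB input, while OpenCV decodes to BGR
            result = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
            if result.multi_face_landmarks:
                face = result.multi_face_landmarks[0]
                yield frame_index, [(lm.x, lm.y) for lm in face.landmark]
            frame_index += 1
    capture.release()
```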
To describe the state of the eye, we use the Eye-Aspect-Ratio (EAR) [@soukupovaRealTimeEyeBlink2016], a standard measure for blinking behavior computed based on the 2D coordinates of the landmarks.
-The ratio ranges between 0 and 1, where 0 indicates a fully closed eye and higher values indicate an open eye, whereas most people have an EAR score between 0.2 and 0.4.
+The ratio ranges between 0 and 1, where 0 indicates a fully closed eye and higher values indicate an open eye, while we observed that most people have an EAR score between 0.2 and 0.4.
This measure describes the ratio between the vertical and horizontal distance between the landmarks, resulting in a detailed motion approximation of the upper and lower eyelids.
Please note that all connotations for the left and right eye are based on the subject's viewing perspective.

@@ -91,7 +91,7 @@ However, the first experiments indicated that the 2D approach is sufficient to a
`JeFaPaTo` optimizes I/O reads by utilizing several queues for loading and processing the video, ensuring adequate RAM usage.
The processing pipeline extracts the landmarks and facial features, such as the `EAR` score for each frame, and includes a validity check ensuring that the eyes are visible.
-On completion, all values are stored in a CSV file for either external tools or for further processing `JeFaPaTo` to obtain insights into the blinking behavior of a person, shown in \autoref{fig:summary}.
+On completion, all values are stored in a CSV file for either external tools or for further processing by `JeFaPaTo` to obtain insights into the blinking behavior of a person, shown in \autoref{fig:summary}.
The blinking detection and extraction employ the `scipy.signal.find_peaks` algorithm [@virtanenSciPyFundamentalAlgorithms2020], and the time series can be smoothed if necessary.
We automatically match the left and right eye blinks based on the time of apex closure.
Additionally, we use the prominence of the blink to distinguish between `complete` and `partial` blinks based on a user-provided threshold (for each eye) or an automatic threshold computed using Otsu's method [@otsu].

@@ -106,11 +106,11 @@ In \autoref{fig:ui}, we show the blinking analysis graphical user interface comp
We give a short overview of the functionality of each area to provide a better understanding of the tool's capabilities.
The A-Area is the visualization of the selected EAR time series for the left (drawn as a blue line) and right eye (drawn as a red line) over time.
Additionally, after successful blinking detection and extraction, the detected `complete` blinks (pupil not visible) are shown as dots, and `partial` blinks (pupil visible) as triangles.
-If the user selects a blink in the table in the B-Area, the graph automatically highlights and zooms into the according area to allow a detailed analysis.
+If the user selects a blink in the table in the B-Area, the graph automatically highlights and zooms into the corresponding area to allow a detailed analysis.

-The B-Area contains the main table for the blinking extraction results, and the user can select the according blink to visualize the according period in the EAR plot.
+The B-Area contains the main table for the blinking extraction results, and the user can select a blink to visualize the corresponding time range in the EAR plot.
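To make the two computational steps above concrete — the EAR measure of @soukupovaRealTimeEyeBlink2016 and the peak-based blink detection via `scipy.signal.find_peaks` — the following minimal sketch shows one possible standalone version. The function names are illustrative rather than `JeFaPaTo`'s actual API, and the smoothing and peak parameters follow the 240 FPS recommendations from the parameter table added later in this series; the real implementation additionally performs Otsu thresholding, left/right matching, and validity checks.

```python
# Minimal sketch of the EAR computation and peak-based blink detection
# described above; function names and parameter values are illustrative.
import numpy as np
from scipy.signal import find_peaks, savgol_filter

def ear_score(p1, p2, p3, p4, p5, p6):
    """Eye-Aspect-Ratio after Soukupova & Cech (2016) for six 2D eye
    landmarks: p1/p4 are the eye corners, p2/p3 lie on the upper and
    p5/p6 on the lower eyelid."""
    p1, p2, p3, p4, p5, p6 = (np.asarray(p) for p in (p1, p2, p3, p4, p5, p6))
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def detect_blinks(ear: np.ndarray):
    """Detect blink apexes as prominent valleys of a smoothed EAR series."""
    # Savitzky-Golay smoothing: window of 7 frames, polynomial degree 3
    smoothed = savgol_filter(ear, window_length=7, polyorder=3)
    apexes, properties = find_peaks(
        -smoothed,        # blinks are valleys in the EAR, so invert the signal
        distance=50,      # minimum distance between blink apexes (frames, 240 FPS)
        prominence=0.1,   # minimum prominence in EAR score
        width=(20, 100),  # minimum/maximum internal blink width (frames, 240 FPS)
    )
    return apexes, properties
```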
The table contains the main properties of the blink: the EAR score at the blink apex, the prominence of the blink, the internal width in frames, the blink height, and the automatically detected blinking state (`none`, `partial`, `complete`).
-If the user provides the original video, the user can drag and drop the video into the GUI into the D-Area, and the video will jump to the according frame to manually correct the blinking state.
+If the user provides the original video, they can drag and drop it into the D-Area of the GUI, and the video will jump to the corresponding frame to manually correct the blinking state.
The content of the table is used to compute the blinking statistics and the visual summary.
These statistics are also shown in the B-Area in different tabs, and the user can export the data as a CSV or Excel file for further analysis.

@@ -126,7 +126,7 @@ While this feature is optional, it helps manually correct the blinking state whe

We provided a set of relevant statistics for medical analysis of blinking behavior, which are valuable to healthcare experts.
The `JeFaPaTo` software is being developed in partnership with medical professionals to guarantee the included statistics are relevant.
-Future updates may incorporate new statistics based on medical expert feedback.
+Future updates may incorporate new statistics based on expert medical feedback.
A sample score file is available in the `examples/` directory within the repository, enabling users to evaluate the functionality of `JeFaPaTo` without recording a video.

| Statistic | Description | Unit/Range |
@@ -209,7 +209,7 @@ Given the potential of high temporal resolution video data to yield novel insigh
An issue frequently associated with facial palsy is synkinesis, characterized by involuntary facial muscle movements concurrent with voluntary movements of other facial muscles, such as the eye closing involuntarily when the patient smiles.
Hence, a joint analysis of the blinking pattern and mouth movement could help better understand the underlying processes.
The EAR is sensitive to head rotation.
-Careful setting up the experiment can reduce the influence of head rotation, but it is not always possible.
+Care must be taken when recording the video to reduce the influence of head rotation, but it is not always possible.
To support the analysis of facial palsy patients, we plan to implement a 3D head pose estimation to correct the future EAR score for head rotation.

# Acknowledgements

From 066864893929185a0328603343894a80e218f0e0 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Tim=20B=C3=BCchner?=
Date: Wed, 10 Apr 2024 16:43:40 +0200
Subject: [PATCH 02/15] Add parameter settings table

- include our recommended parameters for 30 and 240 fps

---
 paper/paper.md | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/paper/paper.md b/paper/paper.md
index 587afa5..e2201f9 100644
--- a/paper/paper.md
+++ b/paper/paper.md
@@ -202,6 +202,25 @@ We list the main libraries used in `JeFaPaTo` and their version used for the dev
| `rich` | `~=12.0` | Logging | Colored logging|
| `plyer` | `~=2.1` | Notifications | Notification for the user for completed processing|

+## Extraction Parameter Recommendations
+
+The following parameters are recommended for the blinking detection based on the current implementation of `JeFaPaTo`.
+We list the settings for `30 FPS` and `240 FPS` videos; the time-based parameters are measured in frames.
+These settings can be adjusted in the GUI to adapt to the specific video data and the blinking behavior of the subject, if necessary.
+
+| Parameter | 30 FPS | 240 FPS |
+| --- | --- | --- |
+| Minimum Distance | 10 Frames | 50 Frames |
+| Minimum Prominence | 0.1 EAR Score | 0.1 EAR Score |
+| Minimum Internal Width | 4 Frames | 20 Frames |
+| Maximum Internal Width | 20 Frames | 100 Frames |
+| Maximum Matching Distance | 15 Frames | 30 Frames |
+| Partial Threshold Left | 0.18 EAR Score | 0.18 EAR Score |
+| Partial Threshold Right | 0.18 EAR Score | 0.18 EAR Score |
+| Smoothing Window Size | 7 | 7 |
+| Smoothing Polynomial Degree | 3 | 3 |
+
+
 # Ongoing Development

`JeFaPaTo` finished the first stable release and will continue to be developed to support the analysis of facial features and expressions.

From 4a5fb3c1d647b5fa05eb7ea231d0ebb987ba4811 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Tim=20B=C3=BCchner?=
Date: Fri, 12 Apr 2024 11:33:15 +0200
Subject: [PATCH 03/15] Improve Statement of need

- now include more statement of the field
- make the difference more clear as custom tool
- highlight the target user group a bit more

---
 paper/paper.md | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/paper/paper.md b/paper/paper.md
index e2201f9..111a7a4 100644
--- a/paper/paper.md
+++ b/paper/paper.md
@@ -50,9 +50,14 @@ Such detailed analysis could help medical experts better understand the blinking
 # Statement of need

 To analyze the blinking behavior in detail, medical experts often use high-speed cameras to record the blinking process.
-Therefore, experiments record videos with 240 FPS or higher, which results in large amounts of data and requires optimized algorithms for consumer hardware.
+Existing tools modeling the eye state based on the Eye-Aspect-Ratio (EAR), such as [@soukupovaRealTimeEyeBlink2016], only classify the eye state as open or closed, requiring a labeled dataset for training a suitable classifier.
+This approach neglects relevant information such as the blink intensity, duration, or partial blinks, which are crucial for a detailed analysis in a medical context.
+Moreover, this simple classification approach does not factor in high temporal resolution video data, which is essential for a thorough analysis of the blinking process as most blinks are shorter than 100 ms.
+We developed `JeFaPaTo` to go beyond the simple eye state classification and offer a method to extract complete blinking intervals for detailed analysis.
+We aim to provide a custom tool that is easy to use for medical experts, abstracting the complexity of the underlying computer vision algorithms and high-temporal processing and enabling them to analyze blinking behavior without requiring programming skills.
+
 `JeFaPaTo` is a Python-based [@python] program to support medical and psychological experts in analyzing blinking and facial features for high temporal resolution video data.
-The tool splits into two main parts: An extendable programming interface and a graphical user interface (GUI) entirely written in Python.
+The tool is split into two main parts: An extendable programming interface and a graphical user interface (GUI) entirely written in Python.
 The programming interface enables efficient processing of temporal resolution video data, automatically extracts selected facial features, and provides a set of analysis functions specialized for blinking analysis.
The GUI offers non-programmers an intuitive way to use the analysis functions, visualize the results, and export the data for further analysis. `JeFaPaTo` is designed to be extendable by additional analysis functions and facial features and is under joint development by computer vision and medical experts to ensure high usability and relevance for the target group. From 41dd814f27ec92f023cab6cbc2673a8a3a2cbfb7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Tim=20B=C3=BCchner?= Date: Tue, 16 Apr 2024 08:39:20 +0200 Subject: [PATCH 04/15] Update summary - describe non-facial norm more - add more references for non-experts reader for possible follow up --- paper/paper.bib | 105 +++++++++++++++++++++++++++++++++++++++++++++++- paper/paper.md | 7 +++- 2 files changed, 109 insertions(+), 3 deletions(-) diff --git a/paper/paper.bib b/paper/paper.bib index 5635953..473fe04 100644 --- a/paper/paper.bib +++ b/paper/paper.bib @@ -103,4 +103,107 @@ @ARTICLE{otsu number={1}, pages={62-66}, doi={10.1109/TSMC.1979.4310076} -} \ No newline at end of file +} + +@article{kwonHighspeedCameraCharacterization2013, + title = {High-Speed Camera Characterization of Voluntary Eye Blinking Kinematics}, + author = {Kwon, Kyung-Ah and Shipley, Rebecca J. and Edirisinghe, Mohan and Ezra, Daniel G. and Rose, Geoff and Best, Serena M. and Cameron, Ruth E.}, + year = {2013}, + month = aug, + journal = {Journal of the Royal Society, Interface}, + volume = {10}, + number = {85}, + pages = {20130227}, + issn = {1742-5662}, + doi = {10.1098/rsif.2013.0227}, + langid = {english}, + pmcid = {PMC4043155}, + pmid = {23760297}, +} + +@article{vanderwerfBlinkRecoveryPatients2007, + title = {Blink {{Recovery}} in {{Patients}} with {{Bell}}'s {{Palsy}}: {{A Neurophysiological}} and {{Behavioral Longitudinal Study}}}, + shorttitle = {Blink {{Recovery}} in {{Patients}} with {{Bell}}'s {{Palsy}}}, + author = {VanderWerf, Frans and Reits, Dik and Smit, Albertine Ellen and Metselaar, Mick}, + year = {2007}, + month = jan, + journal = {Investigative Ophthalmology \& Visual Science}, + volume = {48}, + number = {1}, + pages = {203--213}, + issn = {1552-5783}, + doi = {10.1167/iovs.06-0499}, + urldate = {2024-04-16}, +} + +@article{nuuttilaDiagnosticAccuracyGlabellar2021, + title = {Diagnostic Accuracy of Glabellar Tap Sign for {{Parkinson}}'s Disease}, + author = {Nuuttila, Simo and Eklund, Mikael and Joutsa, Juho and Jaakkola, Elina and M{\"a}kinen, Elina and Honkanen, Emma A. and Lindholm, Kari and Noponen, Tommi and Ihalainen, Toni and Murtom{\"a}ki, Kirsi and Nojonen, Tanja and Levo, Reeta and Mertsalmi, Tuomas and Scheperjans, Filip and Kaasinen, Valtteri}, + year = {2021}, + journal = {Journal of Neural Transmission}, + volume = {128}, + number = {11}, + pages = {1655--1661}, + issn = {0300-9564}, + doi = {10.1007/s00702-021-02391-3}, + urldate = {2024-04-16}, +} + +@article{vanderwerfEyelidMovementsBehavioral2003, + title = {Eyelid Movements: Behavioral Studies of Blinking in Humans under Different Stimulus Conditions}, + shorttitle = {Eyelid Movements}, + author = {VanderWerf, Frans and Brassinga, Peter and Reits, Dik and Aramideh, Majid and {Ongerboer de Visser}, Bram}, + year = {2003}, + month = may, + journal = {Journal of Neurophysiology}, + volume = {89}, + number = {5}, + pages = {2784--2796}, + issn = {0022-3077}, + langid = {english}, +} + +@article{cruzSpontaneousEyeblinkActivity2011, + title = {Spontaneous Eyeblink Activity}, + author = {Cruz, Antonio A. V. and Garcia, Denny M. and Pinto, Carolina T. 
and Cechetti, Sheila P.}, + year = {2011}, + month = jan, + journal = {The Ocular Surface}, + volume = {9}, + number = {1}, + pages = {29--41}, + issn = {1542-0124}, + langid = {english}, + pmid = {21338567}, +} + +@article{volkInitialSeverityMotor2017, + title = {Initial Severity of Motor and Non-Motor Disabilities in Patients with Facial Palsy: An Assessment Using Patient-Reported Outcome Measures}, + shorttitle = {Initial Severity of Motor and Non-Motor Disabilities in Patients with Facial Palsy}, + author = {Volk, Gerd Fabian and Granitzka, Thordis and Kreysa, Helene and Klingner, Carsten M. and {Guntinas-Lichius}, Orlando}, + year = {2017}, + month = jan, + journal = {European archives of oto-rhino-laryngology: official journal of the European Federation of Oto-Rhino-Laryngological Societies (EUFOS): affiliated with the German Society for Oto-Rhino-Laryngology - Head and Neck Surgery}, + volume = {274}, + number = {1}, + pages = {45--52}, + issn = {1434-4726}, + doi = {10.1007/s00405-016-4018-1}, + abstract = {Patients with facial palsy (FP) not only suffer from their facial movement disorder, but also from social and psychological disabilities. These can be assessed by patient-reported outcome measures (PROMs) like the quality-of-life Short-Form 36 Item Questionnaire (SF36) or FP-specific instruments like the Facial Clinimetric Evaluation Scale (FaCE) or the Facial Disability Index (FDI). Not much is known about factors influencing PROMs in patients with FP. We identified predictors for baseline SF36, FaCE, and FDI scoring in 256 patients with unilateral peripheral FP using univariate correlation and multivariate linear regression analyses. Mean age was 52~{\textpm}~18~years. 153 patients (60~\%) were female. 90 patients (31~\%) and 176 patients (69~\%) were first seen {$<$}90 or {$>$}90~days after onset, respectively, i.e., with acute or chronic FP. House-Brackmann grading was 3.9~{\textpm}~1.4. FaCE subscores varied from 41~{\textpm}~28 to 71~{\textpm}~26, FDI scores from 65~{\textpm}~20 to 70~{\textpm}~22, and SF36 domains from 52~{\textpm}~20 to 80~{\textpm}~24. Older age, female gender, higher House-Brackmann grading, and initial assessment {$>$}90~days after onset were independent predictors for lower FaCE subscores and partly for lower FDI subscores (all p~{$<~$}0.05). Older age and female gender were best predictors for lower results in SF36 domains. Comorbidity was associated with lower SF General health perception and lower SF36 Emotional role (all p~{$<~$}0.05). Specific PROMs reveal that older and female patients and patients with chronic FP suffer particularly from motor and non-motor disabilities related to FP. 
Comorbidity unrelated to the FP could additionally impact the quality of life of patients with FP.}, + langid = {english}, + pmid = {27040558}, + keywords = {Bell's palsy,Disability Evaluation,Disabled Persons,Facial nerve,Facial nerve reconstruction,Facial Paralysis,Humans,Patient Reported Outcome Measures,Patient-oriented methods,Quality of life,Quality of Life,Surveys and Questionnaires} +} + +@article{louReviewAutomatedFacial2020, + title = {A {{Review}} on {{Automated Facial Nerve Function Assessment From Visual Face Capture}}}, + author = {Lou, Jianwen and Yu, Hui and Wang, Fei-Yue}, + year = {2020}, + month = feb, + journal = {IEEE Transactions on Neural Systems and Rehabilitation Engineering}, + volume = {28}, + number = {2}, + pages = {488--497}, + issn = {1558-0210}, + doi = {10.1109/TNSRE.2019.2961244}, +} diff --git a/paper/paper.md b/paper/paper.md index 111a7a4..3ef93ff 100644 --- a/paper/paper.md +++ b/paper/paper.md @@ -34,8 +34,11 @@ bibliography: paper.bib Analyzing facial features and expressions is a complex task in computer vision. The human face is intricate, with significant shape, texture, and appearance variations. -In medical contexts, facial structures that differ from the norm, such as those affected by paralysis, are particularly important to study and require precise analysis. -One area of interest is the subtle movements involved in blinking, a process that is not yet fully understood and needs high-resolution, time-specific analysis for detailed understanding. +In medical contexts, facial structures and movements that differ from the norm are particularly important to study and require precise analysis to understand the underlying conditions. +Given that solely the facial muscles, innervated by the facial nerve, are responsible for facial expressions, facial palsy can lead to severe impairments in facial movements [@volkInitialSeverityMotor2017;@louReviewAutomatedFacial2020]. + +One affected area of interest is the subtle movements involved in blinking [@vanderwerfBlinkRecoveryPatients2007;@nuuttilaDiagnosticAccuracyGlabellar2021;@vanderwerfEyelidMovementsBehavioral2003]. +It is an intricate spontaneous process that is not yet fully understood and needs high-resolution, time-specific analysis for detailed understanding [@kwonHighspeedCameraCharacterization2013;@cruzSpontaneousEyeblinkActivity2011]. However, a significant challenge is that many advanced computer vision techniques demand programming skills, making them less accessible to medical professionals who may not have these skills. The Jena Facial Palsy Toolbox (JeFaPaTo) has been developed to bridge this gap. It utilizes cutting-edge computer vision algorithms and offers a user-friendly interface for those without programming expertise. From 0c71627dc4008706a0c3573485684857955e481a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Tim=20B=C3=BCchner?= Date: Tue, 16 Apr 2024 08:43:47 +0200 Subject: [PATCH 05/15] Update summary - make usage of computer vision a bit clearer --- paper/paper.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/paper/paper.md b/paper/paper.md index 3ef93ff..8ce70ae 100644 --- a/paper/paper.md +++ b/paper/paper.md @@ -39,7 +39,7 @@ Given that solely the facial muscles, innervated by the facial nerve, are respon One affected area of interest is the subtle movements involved in blinking [@vanderwerfBlinkRecoveryPatients2007;@nuuttilaDiagnosticAccuracyGlabellar2021;@vanderwerfEyelidMovementsBehavioral2003]. 
It is an intricate spontaneous process that is not yet fully understood and needs high-resolution, time-specific analysis for detailed understanding [@kwonHighspeedCameraCharacterization2013;@cruzSpontaneousEyeblinkActivity2011]. -However, a significant challenge is that many advanced computer vision techniques demand programming skills, making them less accessible to medical professionals who may not have these skills. +However, a significant challenge is that many computer vision techniques demand programming skills for automated extraction and analysis, making them less accessible to medical professionals who may not have these skills. The Jena Facial Palsy Toolbox (JeFaPaTo) has been developed to bridge this gap. It utilizes cutting-edge computer vision algorithms and offers a user-friendly interface for those without programming expertise. This toolbox is designed to make advanced facial analysis more accessible to medical experts, simplifying integration into their workflow. From 62329e171e67a328f91fe864a5e0cbd6375d2d00 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Tim=20B=C3=BCchner?= Date: Tue, 16 Apr 2024 08:48:34 +0200 Subject: [PATCH 06/15] Update summary - smooth the transition between paragraphs - include references for the reader --- paper/paper.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/paper/paper.md b/paper/paper.md index 8ce70ae..5fdd6af 100644 --- a/paper/paper.md +++ b/paper/paper.md @@ -44,7 +44,7 @@ The Jena Facial Palsy Toolbox (JeFaPaTo) has been developed to bridge this gap. It utilizes cutting-edge computer vision algorithms and offers a user-friendly interface for those without programming expertise. This toolbox is designed to make advanced facial analysis more accessible to medical experts, simplifying integration into their workflow. -The state of the eye closure is of high interest to medical experts, e.g., in the context of facial palsy or Parkinson's disease. +This simple-to-use tool could enable medical professionals to quickly establish the blinking behavior of patients, providing valuable insights into their condition, especially in the context of facial palsy or Parkinson's disease [@nuuttilaDiagnosticAccuracyGlabellar2021;@vanderwerfBlinkRecoveryPatients2007]. Due to facial nerve damage, the eye-closing process might be impaired and could lead to many undesirable side effects. Hence, more than a simple distinction between open and closed eyes is required for a detailed analysis. Factors such as duration, synchronicity, velocity, complete closure, the time between blinks, and frequency over time are highly relevant. From 80dbe330c3e6f0c3e5a9154a988b6bbc9911193d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Tim=20B=C3=BCchner?= Date: Tue, 16 Apr 2024 09:13:28 +0200 Subject: [PATCH 07/15] Update statement of need - include more existing approaches - one for highspeed (but only every 5ms) - two medical ones - create new subsection for statement of need as some kind of overview --- paper/paper.md | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/paper/paper.md b/paper/paper.md index 5fdd6af..918aa73 100644 --- a/paper/paper.md +++ b/paper/paper.md @@ -42,7 +42,7 @@ It is an intricate spontaneous process that is not yet fully understood and need However, a significant challenge is that many computer vision techniques demand programming skills for automated extraction and analysis, making them less accessible to medical professionals who may not have these skills. 
The Jena Facial Palsy Toolbox (JeFaPaTo) has been developed to bridge this gap. It utilizes cutting-edge computer vision algorithms and offers a user-friendly interface for those without programming expertise. -This toolbox is designed to make advanced facial analysis more accessible to medical experts, simplifying integration into their workflow. +This toolbox makes advanced facial analysis more accessible to medical experts, simplifying integration into their workflow. This simple-to-use tool could enable medical professionals to quickly establish the blinking behavior of patients, providing valuable insights into their condition, especially in the context of facial palsy or Parkinson's disease [@nuuttilaDiagnosticAccuracyGlabellar2021;@vanderwerfBlinkRecoveryPatients2007]. Due to facial nerve damage, the eye-closing process might be impaired and could lead to many undesirable side effects. @@ -58,6 +58,13 @@ This approach neglects relevant information such as the blink intensity, duratio Moreover, this simple classification approach does not factor in high temporal resolution video data, which is essential for a thorough analysis of the blinking process as most blinks are shorter than 100 ms. We developed `JeFaPaTo` to go beyond the simple eye state classification and offer a method to extract complete blinking intervals for detailed analysis. We aim to provide a custom tool that is easy for medical experts, abstracting the complexity of the underlying computer vision algorithms and high-temporal processing and enabling them to analyze blinking behavior without requiring programming skills. +An existing approach [@kwonHighspeedCameraCharacterization2013] for high temporal videos uses only every frame 5 ms and requires manual measuring of the upper and lower eyelid margins. +Other methods require additional sensors such as electromyography (EMG) or magnetic search coils to measure the eyelid movement [@vanderwerfBlinkRecoveryPatients2007;@vanderwerfEyelidMovementsBehavioral2003]. +Such sensors necessitate additional human resources and are unsuitable for routine clinical analysis. +`JeFaPaTo` is a novel approach that combines the advantages of high temporal resolution video data [@kwonHighspeedCameraCharacterization2013] and computer vision algorithms [@soukupovaRealTimeEyeBlink2016] +to analyze the blinking behavior. + +## Overview of JeFaPaTo `JeFaPaTo` is a Python-based [@python] program to support medical and psychological experts in analyzing blinking and facial features for high temporal resolution video data. The tool is split into two main parts: An extendable programming interface and a graphical user interface (GUI) entirely written in Python. From 641d247a791e47a37c06c8a42b199eeeeb621a66 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Tim=20B=C3=BCchner?= Date: Tue, 16 Apr 2024 09:13:50 +0200 Subject: [PATCH 08/15] add missing `high` --- paper/paper.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/paper/paper.md b/paper/paper.md index 918aa73..eeb3477 100644 --- a/paper/paper.md +++ b/paper/paper.md @@ -68,7 +68,7 @@ to analyze the blinking behavior. `JeFaPaTo` is a Python-based [@python] program to support medical and psychological experts in analyzing blinking and facial features for high temporal resolution video data. The tool is split into two main parts: An extendable programming interface and a graphical user interface (GUI) entirely written in Python. 
-The programming interface enables efficient processing of temporal resolution video data, automatically extracts selected facial features, and provides a set of analysis functions specialized for blinking analysis. +The programming interface enables efficient processing of high temporal resolution video data, automatically extracts selected facial features, and provides a set of analysis functions specialized for blinking analysis. The GUI offers non-programmers an intuitive way to use the analysis functions, visualize the results, and export the data for further analysis. `JeFaPaTo` is designed to be extendable by additional analysis functions and facial features and is under joint development by computer vision and medical experts to ensure high usability and relevance for the target group. From 2f136dd02b00668ff84c8733d6e009227cb8da5e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Tim=20B=C3=BCchner?= Date: Tue, 16 Apr 2024 09:16:00 +0200 Subject: [PATCH 09/15] split rather long sentence --- paper/paper.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/paper/paper.md b/paper/paper.md index eeb3477..a5c8f7c 100644 --- a/paper/paper.md +++ b/paper/paper.md @@ -74,7 +74,8 @@ The GUI offers non-programmers an intuitive way to use the analysis functions, v `JeFaPaTo` leverages the `mediapipe` library [@lugaresiMediaPipeFrameworkBuilding2019;@kartynnikRealtimeFacialSurface2019a] to extract facial landmarks and blend shape features from video data at 60 FPS (on modern hardware). With the landmarks, we compute the `EAR` (Eye-Aspect-Ratio) [@soukupovaRealTimeEyeBlink2016] for both eyes over the videos. -Additionally, `JeFaPaTo` detects blinks, matches left and right eye, and computes medically relevant statistics, a visual summary for the provided video, shown in \autoref{fig:summary}, and exports the data in various formats for further independent analysis. +Additionally, `JeFaPaTo` detects blinks, matches the left and right eye, and computes medically relevant statistics. +Furthermore, a visual summary for the video is provided in the GUI, shown in \autoref{fig:summary}, and the data can be exported in various formats for further independent analysis. The visual summary lets medical experts quickly get an overview of the blinking behavior. As shown in \autoref{fig:summary}, the blinks per minute are shown as a histogram over time in the upper axis, and the delay between blinks is shown in the right axis. The main plot comprises the scatter plot of the `EAR` score for the left and right eye, and the dots indicate the detected blinks, with the rolling mean and standard deviation shown as a line plot. From c7b50b2e3e5c0dc3b3b9bec18745d57f90842e8e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Tim=20B=C3=BCchner?= Date: Tue, 16 Apr 2024 10:01:31 +0200 Subject: [PATCH 10/15] Fix section header -> medical -> medically --- paper/paper.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/paper/paper.md b/paper/paper.md index a5c8f7c..ad1c3d1 100644 --- a/paper/paper.md +++ b/paper/paper.md @@ -138,7 +138,7 @@ Upon data extraction, corrections to the blinking state can be made directly wit The D-Area displays the current video frame, given that the user supplies the original video. While this feature is optional, it helps manually correct the blinking state when required. 
-## Extracted Medical Relevant Statistics
+## Extracted Medically Relevant Statistics

 We provided a set of relevant statistics for medical analysis of blinking behavior, which are valuable to healthcare experts.
 The `JeFaPaTo` software is being developed in partnership with medical professionals to guarantee the included statistics are relevant.

From 47a85012dac6323618ce61859db7dd85bcfb7ef5 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Tim=20B=C3=BCchner?=
Date: Tue, 16 Apr 2024 12:11:26 +0200
Subject: [PATCH 11/15] Make api and gui distinction more clear

- explain what api extendability means
- make api usage more clear
- make gui usage more clear

---
 paper/paper.bib | 34 ++++++++++++++++++++++++++++++++++
 paper/paper.md  | 10 ++++++----
 2 files changed, 40 insertions(+), 4 deletions(-)

diff --git a/paper/paper.bib b/paper/paper.bib
index 473fe04..a1341f1 100644
--- a/paper/paper.bib
+++ b/paper/paper.bib
@@ -207,3 +207,37 @@ @article{louReviewAutomatedFacial2020
  issn = {1558-0210},
  doi = {10.1109/TNSRE.2019.2961244},
}

@article{hochreiterMachineLearningBasedDetectingEyelid2023,
  title = {Machine-{{Learning-Based Detecting}} of {{Eyelid Closure}} and {{Smiling Using Surface Electromyography}} of {{Auricular Muscles}} in {{Patients}} with {{Postparalytic Facial Synkinesis}}: {{A Feasibility Study}}},
  shorttitle = {Machine-{{Learning-Based Detecting}} of {{Eyelid Closure}} and {{Smiling Using Surface Electromyography}} of {{Auricular Muscles}} in {{Patients}} with {{Postparalytic Facial Synkinesis}}},
  author = {Hochreiter, Jakob and Hoche, Eric and Janik, Luisa and Volk, Gerd Fabian and Leistritz, Lutz and Anders, Christoph and {Guntinas-Lichius}, Orlando},
  year = {2023},
  month = jan,
  journal = {Diagnostics},
  volume = {13},
  number = {3},
  pages = {554},
  publisher = {Multidisciplinary Digital Publishing Institute},
  issn = {2075-4418},
  doi = {10.3390/diagnostics13030554},
  urldate = {2023-03-15},
  langid = {english},
}

@article{chenSmartphoneBasedArtificialIntelligenceAssisted2021,
  title = {Smartphone-{{Based Artificial Intelligence-Assisted Prediction}} for {{Eyelid Measurements}}: {{Algorithm Development}} and {{Observational Validation Study}}},
  shorttitle = {Smartphone-{{Based Artificial Intelligence-Assisted Prediction}} for {{Eyelid Measurements}}},
  author = {Chen, Hung-Chang and Tzeng, Shin-Shi and Hsiao, Yen-Chang and Chen, Ruei-Feng and Hung, Erh-Chien and Lee, Oscar K.},
  year = {2021},
  month = oct,
  journal = {JMIR mHealth and uHealth},
  volume = {9},
  number = {10},
  pages = {e32444},
  issn = {2291-5222},
  doi = {10.2196/32444},
  langid = {english},
  pmcid = {PMC8538024},
  pmid = {34538776},
}

diff --git a/paper/paper.md b/paper/paper.md
index ad1c3d1..8166d6f 100644
--- a/paper/paper.md
+++ b/paper/paper.md
@@ -67,10 +67,12 @@ ## Overview of JeFaPaTo
 `JeFaPaTo` is a Python-based [@python] program to support medical and psychological experts in analyzing blinking and facial features for high temporal resolution video data.
-The tool is split into two main parts: An extendable programming interface and a graphical user interface (GUI) entirely written in Python.
-The programming interface enables efficient processing of high temporal resolution video data, automatically extracts selected facial features, and provides a set of analysis functions specialized for blinking analysis.
-The GUI offers non-programmers an intuitive way to use the analysis functions, visualize the results, and export the data for further analysis. -`JeFaPaTo` is designed to be extendable by additional analysis functions and facial features and is under joint development by computer vision and medical experts to ensure high usability and relevance for the target group. +We follow a two-way approach to encourage programmers and non-programmers to use the tool. +On the one hand, we provide a programming interface for efficiently processing high-temporal resolution video data, automatic facial feature extraction, and specialized blinking analysis functions. +This interface is extendable, allowing the easy addition of new or existing facial feature-based processing functions (e.g., mouth movement analysis [@hochreiterMachineLearningBasedDetectingEyelid2023] or MRD1/MRD2 [@chenSmartphoneBasedArtificialIntelligenceAssisted2021]). +On the other hand, we offer a graphical user interface (GUI) entirely written in Python to enable non-programmers to use the full analysis functions, visualize the results, and export the data for further analysis. +All functionalities of the programming interface are accessible through the GUI with additional input validations, making it easy for medical experts to use. +`JeFaPaTo` is designed to be extendable and transparent and is under joint development by computer vision and medical experts to ensure high usability and relevance for the target group. `JeFaPaTo` leverages the `mediapipe` library [@lugaresiMediaPipeFrameworkBuilding2019;@kartynnikRealtimeFacialSurface2019a] to extract facial landmarks and blend shape features from video data at 60 FPS (on modern hardware). With the landmarks, we compute the `EAR` (Eye-Aspect-Ratio) [@soukupovaRealTimeEyeBlink2016] for both eyes over the videos. From 73847d7ce94b4d48984073c02928c91d67cedfbb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Tim=20B=C3=BCchner?= Date: Tue, 16 Apr 2024 12:13:53 +0200 Subject: [PATCH 12/15] Fix some citations styles --- paper/paper.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/paper/paper.md b/paper/paper.md index 8166d6f..ac84a1a 100644 --- a/paper/paper.md +++ b/paper/paper.md @@ -53,12 +53,12 @@ Such detailed analysis could help medical experts better understand the blinking # Statement of need To analyze the blinking behavior in detail, medical experts often use high-speed cameras to record the blinking process. -Existing tools modeling the eye state based on the Eye-Aspect-Ratio (EAR), such as [@soukupovaRealTimeEyeBlink2016], only classify the eye state as open or closed, requiring a labeled dataset for training a suitable classifier. +Existing tools modeling the eye state based on the Eye-Aspect-Ratio (EAR), such as @soukupovaRealTimeEyeBlink2016, only classify the eye state as open or closed, requiring a labeled dataset for training a suitable classifier. This approach neglects relevant information such as the blink intensity, duration, or partial blinks, which are crucial for a detailed analysis in a medical context. Moreover, this simple classification approach does not factor in high temporal resolution video data, which is essential for a thorough analysis of the blinking process as most blinks are shorter than 100 ms. We developed `JeFaPaTo` to go beyond the simple eye state classification and offer a method to extract complete blinking intervals for detailed analysis. 
We aim to provide a custom tool that is easy to use for medical experts, abstracting the complexity of the underlying computer vision algorithms and high-temporal processing and enabling them to analyze blinking behavior without requiring programming skills.
-An existing approach [@kwonHighspeedCameraCharacterization2013] for high temporal videos uses only every frame 5 ms and requires manual measuring of the upper and lower eyelid margins.
+An existing approach by @kwonHighspeedCameraCharacterization2013 for high temporal videos uses only one frame every 5 ms and requires manual measuring of the upper and lower eyelid margins.
 Other methods require additional sensors such as electromyography (EMG) or magnetic search coils to measure the eyelid movement [@vanderwerfBlinkRecoveryPatients2007;@vanderwerfEyelidMovementsBehavioral2003].
 Such sensors necessitate additional human resources and are unsuitable for routine clinical analysis.
 `JeFaPaTo` is a novel approach that combines the advantages of high temporal resolution video data [@kwonHighspeedCameraCharacterization2013] and computer vision algorithms [@soukupovaRealTimeEyeBlink2016]
 to analyze the blinking behavior.

From d5a2b5be28d241875a3dcbbd01d12fb3949e66d3 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Tim=20B=C3=BCchner?=
Date: Tue, 16 Apr 2024 12:16:08 +0200
Subject: [PATCH 13/15] Update Lugares et al. paper

- replace arxiv with cvpr workshop paper

---
 paper/paper.bib | 18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/paper/paper.bib b/paper/paper.bib
index a1341f1..9477175 100644
--- a/paper/paper.bib
+++ b/paper/paper.bib
@@ -8,20 +8,14 @@ @article{soukupovaRealTimeEyeBlink2016
  url = {https://api.semanticscholar.org/CorpusID:35923299},
}

-@misc{lugaresiMediaPipeFrameworkBuilding2019,
-  title = {{{MediaPipe}}: {{A Framework}} for {{Building Perception Pipelines}}},
-  shorttitle = {{{MediaPipe}}},
-  author = {Lugaresi, Camillo and Tang, Jiuqiang and Nash, Hadon and McClanahan, Chris and Uboweja, Esha and Hays, Michael and Zhang, Fan and Chang, Chuo-Ling and Yong, Ming Guang and Lee, Juhyun and Chang, Wan-Teh and Hua, Wei and Georg, Manfred and Grundmann, Matthias},
-  year = {2019},
-  month = jun,
-  number = {arXiv:1906.08172},
-  eprint = {1906.08172},
-  primaryclass = {cs},
-  publisher = {{arXiv}},
-  doi = {10.48550/arXiv.1906.08172},
-  archiveprefix = {arxiv},
+@inproceedings{lugaresiMediaPipeFrameworkBuilding2019,
+  title = {{{MediaPipe}}: {{A}} Framework for Perceiving and Processing Reality},
+  booktitle = {Third Workshop on Computer Vision for {{AR}}/{{VR}} at {{IEEE}} Computer Vision and Pattern Recognition ({{CVPR}}) 2019},
+  author = {Lugaresi, Camillo and Tang, Jiuqiang and Nash, Hadon and McClanahan, Chris and Uboweja, Esha and Hays, Michael and Zhang, Fan and Chang, Chuo-Ling and Yong, Ming and Lee, Juhyun and Chang, Wan-Teh and Hua, Wei and Georg, Manfred and Grundmann, Matthias},
+  year = {2019}
+}


@article{kartynnikRealtimeFacialSurface2019a,
  title = {Real-Time {{Facial Surface Geometry}} from {{Monocular Video}} on {{Mobile GPUs}}},
  author = {Kartynnik, Yury and Ablavatski, Artsiom and Grishchenko, Ivan and Grundmann, Matthias},

From eb2520ba2be6bad922e9ae4f578ba769d9a407fe2 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Tim=20B=C3=BCchner?=
Date: Fri, 19 Apr 2024 08:34:13 +0200
Subject: [PATCH 14/15] Delete INSTALL.md

Moved into the wiki

---
 INSTALL.md | 101 -----------------------------------------------------
 1 file changed, 101 deletions(-)
 delete mode 100644 INSTALL.md

diff --git a/INSTALL.md b/INSTALL.md
deleted file mode 100644
index af5cd7b..0000000 --- a/INSTALL.md +++ /dev/null @@ -1,101 +0,0 @@ -# Installation - -This section describes how to install JeFaPaTo from source. If you want to use the precompiled binaries, please refer to the [README.md](../README.md). - -The general installation process is the same for all platforms as we highly recommend the usage of `conda` to manage the dependencies. Even though `conda` is not required and `venv` would be sufficient enough, it is highly recommended to use it. -If you decide to use `venv`, please refer to the [venv documentation](https://docs.python.org/3/library/venv.html) for further instructions and manually install the dependencies listed in the [requirements.txt](requirements.txt) file and the [requirements-dev.txt](requirements-dev.txt). -If you have issues with the `venv` installation, you can find some inspiration in the [build_mac-intel_v13.sh](build_mac-intel_v13.sh) script as we used it to build the binaries for macOS to avoid conflicts with the `conda` python. - -## Prerequisites - -JeFaPaTo utilizes only packages available through the `python package index (PyPI)`. We did our best to enforce correct versioning of the dependencies, but we cannot guarantee that the installation will work with newer versions of the packages. Also for sending notifications via `plyer` some systems need further dependencies but require we hope to got them all. Please refer to the [plyer documentation](https://plyer.readthedocs.io/en/latest/) for further instructions if you encounter any issues, and please let us know via the [issue tracker](https://github.com/cvjena/JeFaPaTo/issues/new). - -If your dbus is not configured correctly, you might have to install the following packages: - -```bash -sudo apt install build-essential libdbus-glib-1-dev libgirepository1.0-dev -``` - -We currently use `Python 3.10` to develop JeFaPaTo, and older versions are not recommended as make high usage of the new typing features. We recommend using `Python 3.10` or newer, but we cannot guarantee that it will work with older versions. - -## Local installation - -The installation, without using our precompiled binaries, is the same for all platforms. -We assume that you have `conda` installed and configured. If not, please refer to the [conda documentation](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) for further instructions. - -You can either run the `dev_init.sh` script or follow the instructions below. - -```bash -bash dev_init.sh -``` - -or manually: - -```bash -# Clone the repository -git clone https://github.com/cvjena/JeFaPaTo.git -cd JeFaPaTo -conda create -n jefapato python=3.10 pip -y -conda activate jefapato -# not 100% necessary but recommended if you want to build the binaries -conda install libpython-static -y - -# Install the dependencies, including the development dependencies -python -m pip install -r requirements-dev.txt -python -m pip install -e . -``` - -## Usage - -After the installation, you can run JeFaPaTo with the following command: - -```bash -# assuming you activated the conda environment :^) -# else conda activate jefapato -python main.py -``` - -The GUI should open, intermediate information should be visible in the terminal, and you can start using JeFaPaTo. - -## Build the binaries - -This section describes how to build the binaries for the different platforms. 
We recommend using the precompiled binaries, but if you want to build them yourself, you can follow the instructions below, but please note that we do not guarantee that the binaries will work on your system. - -The binaries are built with `pyinstaller`, and each platform has its own script to build the binaries. The scripts are fully automated but not yet hooked into the CI/CD pipeline. Hence, we currently build the binaries manually and upload them to the [release page](). -If you want to build the binaries yourself, you can use the following scripts: - -### Windows 11 (x64) - -This script has to be executed in a `Windows 11` machine. - -```bash -.\build_windows-11.bat -``` - -### macOS v13+ (x64) - -This script has to be executed in a `macOS v13+` machine, and the `universal version` is only supported on `Apple Silicon`. -The `Intel` version is only supported on `Intel` machines and `Apple Silicon` machines with `Rosetta 2` installed. - -```bash -# for Apple Silicon and Intel -./build_mac-universal2_v13.sh -# for Intel only -./build_mac-intel_v13.sh -``` - -### macOS v10 - -This version is only supported on the branch `intel_macosx10_mediapipe_v0.9.0` and is only kept alive to support older macOS versions. We highly recommend using the latest version of macOS. - -```bash -./build_mac-intel_v10.sh -``` - -### Linux (x64) - -This script has to be executed in a `Linux` machine, in our case `Ubuntu 22.04`. - -```bash -./build_linux.sh -``` From 6a68d86b67c4bb13084e31c32f9e728d7740c9dd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Tim=20B=C3=BCchner?= Date: Fri, 19 Apr 2024 08:51:49 +0200 Subject: [PATCH 15/15] Update README.md Link to wiki --- README.md | 29 ++++++----------------------- 1 file changed, 6 insertions(+), 23 deletions(-) diff --git a/README.md b/README.md index dec8f78..54aacf5 100644 --- a/README.md +++ b/README.md @@ -20,9 +20,9 @@ Additionally, our software can be extended to include new methods and algorithms - **Support For High Temporal Videos**: The human blink is fast, and JeFaPaTo is designed to handle it. JeFaPaTo can process videos with any FPS but with an extraction optimized for 240 FPS. - **Anywhere**: JeFaPaTo is a cross-platform tool that allows you to use it on Windows, Linux, and MacOS. -## Get Started +## Getting Started -Ready to dive into the world of precise facial feature extraction and analysis? Give JeFaPaTo a try and experience the power of this tool for yourself! Download the latest version of JeFaPaTo for your operating system from the [releases page](https://github.com/cvjena/JeFaPaTo/releases/tag/v1.0.0) or the following links: +Ready to dive into the world of precise facial feature extraction and analysis? Give JeFaPaTo a try and experience the power of this tool for yourself! Download the latest version of JeFaPaTo for your operating system from the [releases page](https://github.com/cvjena/JeFaPaTo/releases) or the following links: - [Windows 11](https://github.com/cvjena/JeFaPaTo/releases/latest/download/JeFaPaTo_windows.exe) - [Linux/Ubuntu 22.04](https://github.com/cvjena/JeFaPaTo/releases/latest//download/JeFaPaTo_linux) @@ -30,28 +30,11 @@ Ready to dive into the world of precise facial feature extraction and analysis? - [MacOS Intel v13+](https://github.com/cvjena/JeFaPaTo/releases/latest/download/JeFaPaTo_intel.dmg) - [MacOS Intel v10.15+](https://github.com/cvjena/JeFaPaTo/releases/latest/download/JeFaPaTo_intel_v10.dmg) -If you want to install JeFaPaTo from source, please follow the instructions in the [installation guide](INSTALL.md). 
+## Tutorials -## How to use JeFaPaTo - -### Facial Features - -1. Start JeFaPaTo -2. Select the video file or drag and drop it into the indicated area -3. The face should be found automatically; if not, adjust the bounding box -4. Select the facial features you want to analyze in the sidebar -5. Press the play button to start the analysis - -### Blinking Detection - -1. Start JeFaPaTo -2. Select the feature "Blinking Detection" in the top bar -3. Drag and drop the `.csv` file containing the EAR-Score values into the indicated area - - you can also drag and drop the video file into the indicated area to jump to the corresponding frame -4. Press the `Extract Blinks` buttons to extract the blinks (in a future version, the settings are not needed anymore) -5. In the table, you now have the option to label the blinks -6. Press `Summarize` to get a summary of the blink behavior -7. Press `Export` to export the data in the appropriate format +If you want to know more about how to use `JeFaPaTo`, please refer to the [Wiki Pages](https://github.com/cvjena/JeFaPaTo/wiki). +There, you can find a custom installation guide and two tutorials, one for the facial feature extraction and another one for the eye blink extraction. +Additionally, we list specific background information on the usage of the tool. ## Citing JeFaPaTo