From 1b0c245a8c323ca3f0430d45ab3f4b08c7bbc70f Mon Sep 17 00:00:00 2001 From: EtienneCmb Date: Fri, 20 Sep 2024 14:42:42 +0200 Subject: [PATCH] Small fix --- paper/paper.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/paper/paper.md b/paper/paper.md index 6da56cac..c02da6b5 100644 --- a/paper/paper.md +++ b/paper/paper.md @@ -58,7 +58,7 @@ bibliography: paper.bib Recent research studying higher-order interactions with information theoretic measures provides new angles and valuable insights in different fields, such as neuroscience [@gatica:2021; @herzog:2022; @combrisson:2024; @luppi:2022; @baudot:2019], music [@rosas:2019], economics [@scagliarini:2023] and psychology [@marinazzo:2022]. Information theory allows investigating higher-order interactions using a rich set of metrics that provide interpretable values of the statistical interdependency among multivariate data [@williams:2010; @mediano:2021; @barrett:2015; @rosas:2019; @scagliarini:2023; @williams:2010]. -Despite the relevance of studying higher-order interactions across various fields, there is currently no toolkit that compiles the latest approaches and offers user-friendly functions for calculating higher-order information metrics. Computing higher-order information presents two main challenges. First, these metrics rely on entropy and mutual information, whose estimation must be adapted to different types of data [@madukaife:2024; czyz:2024]. Second, the computational complexity increases exponentially as the number of variables and interaction orders grows. For example, a dataset with 100 variables, has approximately 1.6e5 possible triplets, 4e6 quadruplets, and 7e7 quintuplets. Therefore, an efficient implementation, scalable on modern hardware is required. 
+Despite the relevance of studying higher-order interactions across various fields, there is currently no toolkit that compiles the latest approaches and offers user-friendly functions for calculating higher-order information metrics. Computing higher-order information presents two main challenges. First, these metrics rely on entropy and mutual information, whose estimation must be adapted to different types of data [@madukaife:2024; @czyz:2024]. Second, the computational complexity increases exponentially as the number of variables and interaction orders grows. For example, a dataset with 100 variables has approximately 1.6e5 possible triplets, 3.9e6 quadruplets, and 7.5e7 quintuplets. Therefore, an efficient implementation, scalable on modern hardware, is required. Several toolboxes implement a few HOI metrics, such as [`infotopo`](https://github.com/pierrebaudot/INFOTOPO) [@baudot:2019], [`infotheory`](http://mcandadai.com/infotheory/) [@candadai:2019] in C++, and [`DIT`](https://github.com/dit/dit) [@james:2018], [`IDTxl`](https://github.com/pwollstadt/IDTxl) [@wollstadt:2018] and [`pyphi`](https://github.com/wmayner/pyphi) [@mayner:2018] in Python. However, `HOI` is the only pure Python toolbox specialized in the study of higher-order interactions, offering functions to estimate, at an optimal computational cost, a wide range of metrics such as the O-information [@rosas:2019], the topological information [@baudot:2019] and the redundancy-synergy index [@timme:2018]. Moreover, `HOI` can handle Gaussian, non-Gaussian, and discrete data using different state-of-the-art estimators [@madukaife:2024; @czyz:2024]. `HOI` also distinguishes itself from other toolboxes by leveraging [`Jax`](https://jax.readthedocs.io/), a library optimized for fast and efficient linear algebra operations on CPUs, GPUs, and TPUs.
Taken together, `HOI` combines efficient implementations of current methods and is adaptable enough to host future metrics, facilitating comparisons between different approaches and promoting collaboration across various disciplines.
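The combinatorial growth quoted in the hunk above (the number of k-way interactions among 100 variables) can be checked with a short standalone sketch; it is not part of the patch, only a verification of the stated figures:

```python
from math import comb  # exact binomial coefficient C(n, k)

# Count the k-variable subsets of a 100-variable dataset,
# matching the triplet/quadruplet/quintuplet figures in the text.
n = 100
for k, label in [(3, "triplets"), (4, "quadruplets"), (5, "quintuplets")]:
    print(f"{label}: {comb(n, k):,}")
# triplets: 161,700       (~1.6e5)
# quadruplets: 3,921,225  (~3.9e6)
# quintuplets: 75,287,520 (~7.5e7)
```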