
Commit

Small fix
EtienneCmb committed Sep 20, 2024
1 parent 04e1757 commit 1b0c245
Showing 1 changed file with 1 addition and 1 deletion.
paper/paper.md: 2 changes (1 addition & 1 deletion)
@@ -58,7 +58,7 @@ bibliography: paper.bib
 
 Recent research studying higher-order interactions with information theoretic measures provides new angles and valuable insights in different fields, such as neuroscience [@gatica:2021; @herzog:2022; @combrisson:2024; @luppi:2022; @baudot:2019], music [@rosas:2019], economics [@scagliarini:2023] and psychology [@marinazzo:2022]. Information theory allows investigating higher-order interactions using a rich set of metrics that provide interpretable values of the statistical interdependency among multivariate data [@williams:2010; @mediano:2021; @barrett:2015; @rosas:2019; @scagliarini:2023; @williams:2010].
 
-Despite the relevance of studying higher-order interactions across various fields, there is currently no toolkit that compiles the latest approaches and offers user-friendly functions for calculating higher-order information metrics. Computing higher-order information presents two main challenges. First, these metrics rely on entropy and mutual information, whose estimation must be adapted to different types of data [@madukaife:2024; czyz:2024]. Second, the computational complexity increases exponentially as the number of variables and interaction orders grows. For example, a dataset with 100 variables, has approximately 1.6e5 possible triplets, 4e6 quadruplets, and 7e7 quintuplets. Therefore, an efficient implementation, scalable on modern hardware is required.
+Despite the relevance of studying higher-order interactions across various fields, there is currently no toolkit that compiles the latest approaches and offers user-friendly functions for calculating higher-order information metrics. Computing higher-order information presents two main challenges. First, these metrics rely on entropy and mutual information, whose estimation must be adapted to different types of data [@madukaife:2024; @czyz:2024]. Second, the computational complexity increases exponentially as the number of variables and interaction orders grows. For example, a dataset with 100 variables, has approximately 1.6e5 possible triplets, 4e6 quadruplets, and 7e7 quintuplets. Therefore, an efficient implementation, scalable on modern hardware is required.
 
 Several toolboxes have implemented a few HOI metrics like [`infotopo`](https://github.com/pierrebaudot/INFOTOPO) [@baudot:2019], [`infotheory`](http://mcandadai.com/infotheory/) [@candadai:2019] in C++, [`DIT`](https://github.com/dit/dit) [@james:2018], [`IDTxl`](https://github.com/pwollstadt/IDTxl) [@wollstadt:2018] and [`pyphi`](https://github.com/wmayner/pyphi) [@mayner:2018], in Python. However, `HOI` is the only pure Python toolbox specialized in the study of higher-order interactions offering functions to estimate with an optimal computational cost a wide range of metrics as the O-information [@rosas:2019], the topological information [@baudot:2019] and the redundancy-synergy index [@timme:2018]. Moreover, `HOI` allows to handle Gaussian, non-Gaussian, and discrete data using different state-of-the-art estimators [@madukaife:2024; @czyz:2024]. `HOI` also distinguishes itself from other toolboxes by leveraging [`Jax`](https://jax.readthedocs.io/), a library optimized for fast and efficient linear algebra operations on both CPU, GPU and TPU. Taken together, `HOI` combines efficient implementations of current methods and is adaptable enough to host future metrics, facilitating comparisons between different approaches and promoting collaboration across various disciplines.
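As a side note (not part of the commit), the orders of magnitude quoted in the changed paragraph follow directly from the binomial coefficient: with 100 variables there are C(100, k) possible k-plets. A short sanity check in Python:

```python
from math import comb

# Number of k-subsets of 100 variables, for triplets, quadruplets and quintuplets
for k in (3, 4, 5):
    print(f"k={k}: {comb(100, k)}")
# k=3: 161700    (~1.6e5)
# k=4: 3921225   (~3.9e6)
# k=5: 75287520  (~7.5e7)
```

The exact counts (1.6e5, 3.9e6, 7.5e7) match the approximate figures given in the paper text.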

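To illustrate the kind of metric the paper discusses, here is a minimal sketch of the O-information [@rosas:2019] for Gaussian data, using the closed-form Gaussian entropy from the covariance matrix. This is an illustrative standalone snippet, not the `HOI` API; the function names are hypothetical.

```python
import numpy as np

def gaussian_entropy(cov):
    # Differential entropy of a multivariate Gaussian:
    # H = 0.5 * log((2*pi*e)^d * det(cov))
    d = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)

def o_information_gaussian(cov):
    # O-information (Rosas et al., 2019):
    # Omega(X) = (n - 2) * H(X) + sum_j [H(X_j) - H(X_{-j})]
    # Positive values indicate redundancy-dominated interactions,
    # negative values synergy-dominated ones.
    n = cov.shape[0]
    omega = (n - 2) * gaussian_entropy(cov)
    for j in range(n):
        rest = [i for i in range(n) if i != j]
        omega += gaussian_entropy(cov[np.ix_([j], [j])])
        omega -= gaussian_entropy(cov[np.ix_(rest, rest)])
    return omega
```

For independent variables (identity covariance) the O-information is exactly zero, while strongly equicorrelated variables (e.g. a shared latent source) yield a positive, redundancy-dominated value.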
