From 376dbeaaaace6dc5f6acdb28c788b87b751b524f Mon Sep 17 00:00:00 2001
From: Gabriel Gutierrez
Date: Thu, 18 Jul 2024 16:18:07 -0300
Subject: [PATCH 01/13] Adding more guidelines to the contributing code

---
 CONTRIBUTING.md | 39 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 36 insertions(+), 3 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 266b3cb..e62deb1 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -26,7 +26,7 @@ Any other discussion should be done in the [Discussions](https://github.com/disc
 
 ## Getting started
 
-For changes bigger than one or two line fix:
+For changes or fixes:
 
 1. Create a new fork for your changes
 2. Make the changes needed in this fork
@@ -36,8 +36,6 @@ For changes bigger than one or two line fix:
 3. Your code is well documented
 4. Make your PR
 
-Small contributions such as fixing spelling errors, where the content is small enough don't need to be made from another fork.
-
 As a rule of thumb, changes are obvious fixes if they do not introduce any new functionality or creative thinking. As long as the change does not affect functionality, some likely examples include the following:
 
 * Spelling / grammar fixes
@@ -48,6 +46,41 @@ As a rule of thumb, changes are obvious fixes if they do not introduce any new f
 * Changes to ‘metadata’ files like Gemfile, .gitignore, build scripts, etc.
 * Moving source files from one directory or package to another
 
+## Making Code Contributions
+
+Every code contribution should be made through a pull request. This applies to all changes, including bug fixes and new features. This allows the maintainers to review the code and discuss it with you before merging it. It also allows the community to discuss the changes and learn from them.
+
+Your code should follow these guidelines:
+
+* **Documentation**: Make sure to document your code. This includes docstrings for functions and classes, as well as comments in the code when necessary. For the documentation, we use the numpydoc style. Also make sure to update the `README` file or other metadata files if necessary.
+* **Tests**: Make sure to write tests for your code. We use `pytest` for testing. You can run the tests with `python -m pytest` in the root directory of the project.
+* **Commit messages**: Make sure to write clear and concise commit messages. Include the issue number if you are fixing a bug.
+* **Dependencies**: Make sure to include any new dependencies in the `requirements.txt` and `pyproject.toml` file. If you are adding a new dependency, make sure to include a brief description of why it is needed.
+* **Code formatting**: Make sure to run a code formatter on your code before submitting the PR. We use `black` for this.
+
+You should also try to avoid rewriting functionality, or adding dependencies that are already present on one of our dependencies. This would make the codebase more bloated and harder to maintain.
+
+If you are contributing code that you did not write, you must ensure that the code is licensed under an [MIT License](https://opensource.org/licenses/MIT). If the code is not licensed under an MIT License, you must get permission from the original author to license the code under the MIT License. Also make sure to credit the original author in a comment in the code.
+
+### Module Specific Guidelines
+
+#### `models` module
+
+Our models are based on the `lightning.LightningModule` class. This class is a PyTorch Lightning module that simplifies the training process. You should follow the PyTorch Lightning guidelines for writing your models. You can find more information [here](https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html).
+
+As a rule of thumb, all front-facing model classes should inherit from the `LightningModule` class. Only the internal building blocks that these classes are composed of may be plain `torch.nn.Module` classes.
+
+In the same way, all front-facing model classes should have default parameters for the `__init__` method. These classes should also be able to receive a `config` parameter that will be used to configure the model. The `config` parameter should be a dictionary with the parameters needed to configure the model.
+
+The `models` module is divided into `nets` and `ssl`:
+
+* The `nets` module contains model architectures that can be trained in a supervised way.
+* The `ssl` module contains logic and implementations for self-supervised learning techniques.
+
+In a general way, you should be able to use a `nets` model with an `ssl` implementation to train a model in a self-supervised way.
+
+We strongly recommend that, when possible, you divide your model into a backbone and a head. This division allows for more flexibility when using the model in different tasks and with different SSL techniques.
+
 ## How to report a bug
 
 ### Security Vulnerabilities

From ffee8c9ce5b22ff7a490d3fcf7de4777ad1dee71 Mon Sep 17 00:00:00 2001
From: Gabriel Gutierrez
Date: Thu, 18 Jul 2024 16:53:03 -0300
Subject: [PATCH 02/13] chore: Update contributor covenant links to use inline markdown links, add info to README

---
 CODE_OF_CONDUCT.md |  6 +++---
 README.md          | 30 ++++++++++++++++++++++++++----
 2 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
index 14db571..eeace6b 100644
--- a/CODE_OF_CONDUCT.md
+++ b/CODE_OF_CONDUCT.md
@@ -116,7 +116,7 @@ the community.
 
 This Code of Conduct is adapted from the [Contributor Covenant][homepage],
 version 2.0, available at
-https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
+<https://www.contributor-covenant.org/version/2/0/code_of_conduct.html>.
 
 Community Impact Guidelines were inspired by [Mozilla's code of conduct
 enforcement ladder](https://github.com/mozilla/diversity).
@@ -124,5 +124,5 @@
[homepage]: https://www.contributor-covenant.org
 
 For answers to common questions about this code of conduct, see the FAQ at
-https://www.contributor-covenant.org/faq. Translations are available at
-https://www.contributor-covenant.org/translations.
+<https://www.contributor-covenant.org/faq>. Translations are available at
+<https://www.contributor-covenant.org/translations>.

diff --git a/README.md b/README.md
index 23b9fb4..fab94ce 100644
--- a/README.md
+++ b/README.md
@@ -2,29 +2,51 @@
 
 [![Continuous Test](https://github.com/discovery-unicamp/Minerva/actions/workflows/continuous-testing.yml/badge.svg)](https://github.com/discovery-unicamp/Minerva/actions/workflows/python-app.yml)
 
-Minerva is a framework for training machine learning models for researchers.
+Welcome to Minerva, a comprehensive framework designed to enhance the experience of researchers training machine learning models. Minerva allows you to effortlessly create, train, and evaluate models using a diverse set of tools and architectures.
+
+Featuring a robust command-line interface (CLI), Minerva streamlines the process of training and evaluating models. Additionally, it offers a versioning and configuration system for experiments, ensuring reproducibility and facilitating comparison of results within the community.
 
 ## Description
 
 This project aims to provide a robust and flexible framework for researchers working on machine learning projects. It includes various utilities and modules for data transformation, model creation, and analysis metrics.
 
+### Features
+
+Minerva offers a wide range of features to help you with your machine learning projects:
+
+- **Model Creation**: Minerva offers a variety of models and architectures to choose from, including pre-trained models and custom models.
+- **Training and Evaluation**: Minerva provides tools to train and evaluate your models, including loss functions, optimizers, and evaluation metrics.
+- **Data Transformation**: Minerva provides tools to preprocess and transform your data, including data loaders, data augmentation, and data normalization.
+- **Command-Line Interface (CLI)**: Minerva offers a CLI to streamline the process of training and evaluating models.
+- **Modular Design**: Minerva is designed to be modular and extensible, allowing you to easily add new features and functionalities.
+- **Reproducibility**: Minerva ensures reproducibility by providing tools for versioning, configuration, and logging of experiments.
+- **Experiment Management**: Minerva allows you to manage your experiments, including versioning, configuration, and logging.
+- **SSL Support**: Minerva supports SSL (Self-Supervised Learning) for training models with limited labeled data.
+
+### Near Future Features
+
+- **Hyperparameter Optimization**: Minerva will offer tools for hyperparameter optimization powered by Ray Tune.
+- **PyPI Package**: Minerva will be available as a PyPI package for easy installation.
+
 ## Installation
 
-### Intall Locally
+### Install Locally
+
 To install Minerva, you can use pip:
 
 ```sh
 pip install .
 ```
+
 ### Get container from Docker Hub
 
-```
+```sh
 docker pull gabrielbg0/minerva:latest
 ```
 
 ## Usage
 
-You can eather use Minerva's modules directly or use the command line interface (CLI) to train and evaluate models.
+You can either use Minerva's modules directly or use the command line interface (CLI) to train and evaluate models.
 
 ### CLI

From cdae3758f0e049cd5e9ba3c0dfbd9860ec4dbc15 Mon Sep 17 00:00:00 2001
From: Gabriel Gutierrez
Date: Thu, 18 Jul 2024 22:06:00 -0300
Subject: [PATCH 03/13] corrections

---
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index fab94ce..9e60379 100644
--- a/README.md
+++ b/README.md
@@ -14,7 +14,7 @@ This project aims to provide a robust and flexible framework for researchers wor
 
 Minerva offers a wide range of features to help you with your machine learning projects:
 
-- **Model Creation**: Minerva offers a variety of models and architectures to choose from, including pre-trained models and custom models.
+- **Model Creation**: Minerva offers a variety of models and architectures to choose from. - **Training and Evaluation**: Minerva provides tools to train and evaluate your models, including loss functions, optimizers, and evaluation metrics. - **Data Transformation**: Minerva provides tools to preprocess and transform your data, including data loaders, data augmentation, and data normalization. - **Command-Line Interface (CLI)**: Minerva offers a CLI to streamline the process of training and evaluating models. @@ -27,6 +27,7 @@ Minerva offers a wide range of features to help you with your machine learning p - **Hyperparameter Optimization**: Minerva will offer tools for hyperparameter optimization powered by Ray Tune. - **PyPI Package**: Minerva will be available as a PyPI package for easy installation. +- **Pre-trained Models**: Minerva will offer pre-trained models for common tasks and datasets. ## Installation From 3bba6b9c3b84adb15adbbb10e069f5599aaf9ce0 Mon Sep 17 00:00:00 2001 From: Gabriel Gutierrez Date: Thu, 18 Jul 2024 22:08:48 -0300 Subject: [PATCH 04/13] corrections --- CONTRIBUTING.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index e62deb1..06be1a9 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -58,7 +58,7 @@ You code should follow the following guidelines: * **Dependencies**: Make sure to include any new dependencies in the `requirements.txt` and `pyproject.toml` file. If you are adding a new dependency, make sure to include a brief description of why it is needed. * **Code formatting**: Make sure to run a code formatter on your code before submitting the PR. We use `black` for this. -You should also try to avoid rewriting functionality, or adding dependencies that are already present on one of our dependencies. This would make the codebase more bloated and harder to maintain. 
+You should also avoid rewriting functionality, or adding dependencies for functionality that is already present in one of our existing dependencies. This would make the codebase more bloated and harder to maintain.
 
 If you are contributing code that you did not write, you must ensure that the code is licensed under an [MIT License](https://opensource.org/licenses/MIT). If the code is not licensed under an MIT License, you must get permission from the original author to license the code under the MIT License. Also make sure to credit the original author in a comment in the code.

From 891fd9f66dd955c9f759c2551590094959e0824c Mon Sep 17 00:00:00 2001
From: Gabriel Gutierrez
Date: Sat, 20 Jul 2024 02:13:19 -0300
Subject: [PATCH 05/13] Finishing module specific guidelines

---
 CONTRIBUTING.md | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 06be1a9..9c53def 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -81,6 +81,40 @@ In a general way, you should be able to use a `nets` model with an `ssl` impleme
 
 We strongly recommend that, when possible, you divide your model into a backbone and a head. This division allows for more flexibility when using the model in different tasks and with different SSL techniques.
 
+Moreover, both `nets` and `ssl` are divided by the model's area of use (e.g. image, time series, etc.). This division allows for a more organized codebase and easier maintenance. If you are adding a new model, make sure to add it to the correct area. If the model does not fit in any of the existing areas, you can create a new one (make sure to justify it in your PR). To determine a model's area, follow the area used in its original proposal, or the area where the model is most commonly used.
+
+#### `data` module
+
+The `data` module is responsible for handling the data used in the models. This module is divided into `datasets`, `readers` and `data_modules`.
+
+`readers` are the lowest level in our data pipeline. It is responsible to read the data in it's format and return it in a format that can be used by the `datasets`. Every reader should know both the method for reading the data from the file it self and the file structure of the data if applicable.
+
+`datasets` are the middle level in our data pipeline. It is composed by one or more readers and is responsible to transform the data read by the readers if necessary. Datasets usually are composed by a slice, of partition of the data (e.g. train, validation, test). The datasets and its partitions will be created and managed by a data module.
+
+`data_modules` are the front facing classes of the data pipeline. It is responsible for receiving all the parameters needed to create all datasets and readers. Data modules should inherit from the `lightning.LightningDataModule` class. This class is a PyTorch Lightning class that simplifies the data loading process. You should follow the PyTorch Lightning guidelines for writing your data modules. You can find more information [here](https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html).
+
+#### `losses` module
+
+The `losses` module houses loss functions that can be used by the modules. As stated before, you should avoid rewriting functionality that is already present in one of our dependencies. If you are adding a new loss function, make sure to include a brief description of why it is needed. Every loss function should inherit from the `torch.nn.modules.loss._Loss` class.
+
+#### `transforms` module
+
+The `transforms` module houses transformations that can be used by a dataset. Every transformation should inherit from our `_Transform` class. If you are adding a new transformation, make sure to include a brief description of why it is needed.
+
+#### `analysis` module
+
+The `analysis` module has both metrics and visualizations that can be used to analyze the models.
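To illustrate the contract such metrics follow, here is a dependency-free sketch of the accumulate-then-compute pattern that `torchmetrics.Metric` subclasses implement through `update` and `compute` (a real metric would also inherit from `torchmetrics.Metric` and register its accumulators with `add_state`; the class below is illustrative only):

```python
class RunningAccuracy:
    """Sketch of the torchmetrics-style metric contract (illustrative only:
    a real Minerva metric would inherit from torchmetrics.Metric and register
    its accumulators with add_state so they sync across devices)."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, targets):
        # Accumulate sufficient statistics batch by batch.
        self.correct += sum(int(p == t) for p, t in zip(preds, targets))
        self.total += len(targets)

    def compute(self):
        # Reduce the accumulated state into the final metric value.
        return self.correct / self.total if self.total else 0.0
```

Training loops then call `update` once per batch and `compute` once at the end of the epoch.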
+If you are adding a new metric or visualization, make sure to include a brief description of why it is needed. Again, you should avoid rewriting functionality that is already present in one of our dependencies. All metrics should inherit from the `torchmetrics.Metric` class.
+
+#### `pipelines` module
+
+Pipelines are the core of Minerva. They are responsible for training and evaluating the models. Pipelines should be able to receive a config file that will be used to configure them. The config file should be a YAML file with the parameters needed to configure the pipeline. All pipelines should inherit from our `Pipeline` class.
+
+Pipelines are meant to be reusable and are usually complex. If you are adding a new pipeline, make sure to include a description of why it is needed, how it works, and why you can't accomplish the same thing with existing ones.
+
+#### `utils` module
+
+The `utils` module is a temporary module that houses functions that don't fit in any of the other modules. `utils` will cease to exist in the future, so if you are adding a new function to it, make sure to justify it in your PR.
+
 ## How to report a bug
 
 ### Security Vulnerabilities

From bb4de484361df9d63fa9c9e2a1ba3004c0777a06 Mon Sep 17 00:00:00 2001
From: Gabriel Gutierrez
Date: Sat, 20 Jul 2024 16:45:49 -0300
Subject: [PATCH 06/13] corrections in contributing guidelines

---
 CONTRIBUTING.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 9c53def..4bba0e6 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -85,11 +85,11 @@ Moreover, both `nets` and `ssl` are divided by the model's area of use (e.g. image
 
 #### `data` module
 
-The `data` module is responsible for handling the data used in the models. This module is divided into `datasets`, `readers` and `data_modules`.
+The `data` module is responsible for handling the data used by the models. This module is divided into `datasets`, `readers` and `data_modules`.
-`readers` are the lowest level in our data pipeline. It is responsible to read the data in it's format and return it in a format that can be used by the `datasets`. Every reader should know both the method for reading the data from the file it self and the file structure of the data if applicable.
+`readers` are the lowest level in our data pipeline. A reader is responsible for reading the data requested by a dataset, in its native format, and returning it in a format the dataset can use. Every reader should know both the method for reading the data from the file itself and the file structure of the data, if applicable. Every reader should inherit from the `_Reader` class.
 
-`datasets` are the middle level in our data pipeline. It is composed by one or more readers and is responsible to transform the data read by the readers if necessary. Datasets usually are composed by a slice, of partition of the data (e.g. train, validation, test). The datasets and its partitions will be created and managed by a data module.
+`datasets` are the middle level in our data pipeline. A dataset is composed of one or more readers and is responsible for transforming the data they read, if necessary. Datasets usually represent a slice, or partition, of the data (e.g. train, validation, test). Datasets and their partitions are created and managed by a data module. Every dataset should inherit from the `torch.utils.data.Dataset` class.
 
 `data_modules` are the front facing classes of the data pipeline. It is responsible for receiving all the parameters needed to create all datasets and readers. Data modules should inherit from the `lightning.LightningDataModule` class. This class is a PyTorch Lightning class that simplifies the data loading process. You should follow the PyTorch Lightning guidelines for writing your data modules. You can find more information [here](https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html).
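To make the layering concrete, here is a minimal, stdlib-only sketch of how a reader and a dataset might compose (the class names are illustrative; a real Minerva reader would inherit from `_Reader` and a real dataset from `torch.utils.data.Dataset`):

```python
import csv
import tempfile
from pathlib import Path


class CSVReader:
    """Toy reader: knows the file format and returns indexable raw samples.
    (Illustrative only -- a real Minerva reader inherits from `_Reader`.)"""

    def __init__(self, path):
        with open(path, newline="") as f:
            self.rows = [[float(v) for v in row] for row in csv.reader(f)]

    def __getitem__(self, index):
        return self.rows[index]

    def __len__(self):
        return len(self.rows)


class PartitionDataset:
    """Toy dataset: composes a reader, restricts it to one partition
    (e.g. train/val/test) and applies an optional transform.
    (A real dataset inherits from `torch.utils.data.Dataset`.)"""

    def __init__(self, reader, indices, transform=None):
        self.reader = reader
        self.indices = indices
        self.transform = transform

    def __getitem__(self, index):
        sample = self.reader[self.indices[index]]
        return self.transform(sample) if self.transform else sample

    def __len__(self):
        return len(self.indices)


# Tiny demonstration with a throwaway CSV file:
tmp = Path(tempfile.mkdtemp()) / "toy.csv"
tmp.write_text("1,2\n3,4\n5,6\n")
reader = CSVReader(tmp)
train = PartitionDataset(reader, [0, 1], transform=lambda s: [v * 2 for v in s])
val = PartitionDataset(reader, [2])
```

A data module would then be the layer that receives all the configuration, builds the reader once, and exposes the train/validation/test partitions as dataloaders.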
From 0342775c5dd99dde4ca9a1a786324500ff8ade86 Mon Sep 17 00:00:00 2001 From: Gabriel Gutierrez Date: Sat, 20 Jul 2024 19:02:55 -0300 Subject: [PATCH 07/13] changes to models table --- minerva/models/README.md | 39 +++++++++++++++++++-------------------- 1 file changed, 19 insertions(+), 20 deletions(-) diff --git a/minerva/models/README.md b/minerva/models/README.md index fd21816..723caff 100644 --- a/minerva/models/README.md +++ b/minerva/models/README.md @@ -1,26 +1,25 @@ # Models -| **Model** | **Authors** | **Task** | **Type** | **Input Shape** | **Python Class** | **Observations** | -|------------------------------------------------------------------------------------ |---------------------------------- |---------------- |----------------------- |:---------------: |:-----------------------------------------------------------: |----------------------------------------------------------------------------------------------------------------------------- | -| [DeepConvLSTM](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv + LSTM | (C, S, T) | minerva.models.nets.deep_conv_lstm.DeepConvLSTM | | -| [Simple 1D Convolutional Network](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 1D Conv | (S, T) | minerva.models.nets.convnet.Simple1DConvNetwork | 1D Variant of "Baseline CNN", used by Ordóñez and Roggen, with dropout layers included. | -| [Simple 2D Convolutional Network](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv | (C, S, T) | minerva.models.nets.convnet.Simple2DConvNetwork | 2D Variant of "Baseline CNN", used by Ordóñez and Roggen, with dropout layers included. | -| [CNN_HaEtAl_1D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 1D Conv | (S, T) | minerva.models.nets.cnn_ha_etal.CNN_HaEtAl_1D | 1D proposed variant. 
| -| [CNN_HaEtAl_2D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 2D Conv | (C, S, T) | minerva.models.nets.cnn_ha_etal.CNN_HaEtAl_2D | 2D proposed variant. | -| [CNN PF](https://ieeexplore.ieee.org/document/7727224) | Ha and Choi | Classification | 2D Conv | (C, S, T) | minerva.models.nets.cnn_pf.CNN_PF_2D | Partial weight sharing in first convolutional layer and full weight sharing in second convolutional layer. | -| [CNN PPF](https://ieeexplore.ieee.org/document/7727224) | Ha and Choi | Classification | 2D Conv | (C, S, T) | minerva.models.nets.cnn_pf.CNN_PFF_2D | Partial and full weight sharing in first convolutional layer and full weight sharing in second convolutional layer. | -| [IMU Transformer](https://ieeexplore.ieee.org/document/9393889) | Shavit and Klein | Classification | 1D Conv + Transformer | (S, T) | minerva.models.nets.imu_transformer.IMUTransformerEncoder | | -| [IMU CNN](https://ieeexplore.ieee.org/document/9393889) | Shavit and Klein | Classification | 1D Conv | (S, T) | minerva.models.nets.imu_transformer.IMUCNN | Baseline CNN for IMUTransnformer work. | -| [InceptionTime](https://doi.org/10.1007/s10618-020-00710-y) | Fawaz et al. | Classification | 1D Conv | (S, T) | minerva.models.nets.inception_time.InceptionTime | | -| [1D-ResNet](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | minerva.models.nets.resnet_1d.ResNet1D_8 | Baseline resnet from paper. Uses ELU and 8 residual blocks | -| [1D-ResNet-SE-8](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | minerva.models.nets.resnet_1d.ResNetSE1D_8 | ResNet with Squeeze and Excitation. Uses ELU and 8 residual blocks | -| [1D-ResNet-SE-5](https://ieeexplore.ieee.org/document/9771436) | Mekruksavanich et al. | Classification | 1D Conv | (S, T) | minerva.models.nets.resnet_1d.ResNetSE1D_5 | ResNet with Squeeze and Excitation. 
Uses ReLU and 8 residual blocks | -| [MCNN](https://ieeexplore.ieee.org/document/8975649) | Sikder et al. | Classification | 2D Conv | (2, C, S,T) | minerva.models.nets.multi_channel_cnn.MultiChannelCNN_HAR | First dimension is FFT data and second is Welch Power Density periodgram data. Must adapt dataset to return data like this. | - +| **Model** | **Authors** | **Task** | **Type** | **Input Shape** | **Python Class** | **Observations** | +| -------------------------------------------------------------------------- | -------------------------------- | -------------- | --------------------- | :-------------: | :-------------------: | --------------------------------------------------------------------------------------------------------------------------- | +| [DeepConvLSTM](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv + LSTM | (C, S, T) | DeepConvLSTM | | +| [Simple 1D Convolutional Network](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 1D Conv | (S, T) | Simple1DConvNetwork | 1D Variant of "Baseline CNN", used by Ordóñez and Roggen, with dropout layers included. | +| [Simple 2D Convolutional Network](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv | (C, S, T) | Simple2DConvNetwork | 2D Variant of "Baseline CNN", used by Ordóñez and Roggen, with dropout layers included. | +| [CNN_HaEtAl_1D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 1D Conv | (S, T) | CNN_HaEtAl_1D | 1D proposed variant. | +| [CNN_HaEtAl_2D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 2D Conv | (C, S, T) | CNN_HaEtAl_2D | 2D proposed variant. | +| [CNN PF](https://ieeexplore.ieee.org/document/7727224) | Ha and Choi | Classification | 2D Conv | (C, S, T) | CNN_PF_2D | Partial weight sharing in first convolutional layer and full weight sharing in second convolutional layer. 
| +| [CNN PPF](https://ieeexplore.ieee.org/document/7727224) | Ha and Choi | Classification | 2D Conv | (C, S, T) | CNN_PFF_2D | Partial and full weight sharing in first convolutional layer and full weight sharing in second convolutional layer. | +| [IMU Transformer](https://ieeexplore.ieee.org/document/9393889) | Shavit and Klein | Classification | 1D Conv + Transformer | (S, T) | IMUTransformerEncoder | | +| [IMU CNN](https://ieeexplore.ieee.org/document/9393889) | Shavit and Klein | Classification | 1D Conv | (S, T) | IMUCNN | Baseline CNN for IMUTransnformer work. | +| [InceptionTime](https://doi.org/10.1007/s10618-020-00710-y) | Fawaz et al. | Classification | 1D Conv | (S, T) | InceptionTime | | +| [1D-ResNet](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | ResNet1D_8 | Baseline resnet from paper. Uses ELU and 8 residual blocks | +| [1D-ResNet-SE-8](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | ResNetSE1D_8 | ResNet with Squeeze and Excitation. Uses ELU and 8 residual blocks | +| [1D-ResNet-SE-5](https://ieeexplore.ieee.org/document/9771436) | Mekruksavanich et al. | Classification | 1D Conv | (S, T) | ResNetSE1D_5 | ResNet with Squeeze and Excitation. Uses ReLU and 8 residual blocks | +| [MCNN](https://ieeexplore.ieee.org/document/8975649) | Sikder et al. | Classification | 2D Conv | (2, C, S,T) | MultiChannelCNN_HAR | First dimension is FFT data and second is Welch Power Density periodgram data. Must adapt dataset to return data like this. | # SSL Models -| **Model** | **Authors** | **Task** | **Type** | **Input Shape** | **Python Class** | **Observations** | -|-----------------------------------------|---------------|----------|----------|:---------------:|:--------------------------------------------:|-------------------| -| [LFR](https://arxiv.org/abs/2310.07756) | Yi Sui et al. 
| Any | Any | Any | minerva.models.nets.LearnFromRandomnessModel | | +| **Model** | **Authors** | **Task** | **Type** | **Input Shape** | **Python Class** | **Observations** | +| --------------------------------------- | ------------- | -------- | -------- | :-------------: | :----------------------: | ---------------- | +| [LFR](https://arxiv.org/abs/2310.07756) | Yi Sui et al. | Any | Any | Any | LearnFromRandomnessModel | | From e9b4968eb757f7a740dc9423ad56858a488e774e Mon Sep 17 00:00:00 2001 From: Gabriel Gutierrez Date: Sat, 20 Jul 2024 19:08:50 -0300 Subject: [PATCH 08/13] Fixing space line break --- minerva/models/README.md | 32 ++++++++++++++++---------------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/minerva/models/README.md b/minerva/models/README.md index 723caff..99e5139 100644 --- a/minerva/models/README.md +++ b/minerva/models/README.md @@ -1,22 +1,22 @@ # Models -| **Model** | **Authors** | **Task** | **Type** | **Input Shape** | **Python Class** | **Observations** | -| -------------------------------------------------------------------------- | -------------------------------- | -------------- | --------------------- | :-------------: | :-------------------: | --------------------------------------------------------------------------------------------------------------------------- | -| [DeepConvLSTM](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv + LSTM | (C, S, T) | DeepConvLSTM | | -| [Simple 1D Convolutional Network](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 1D Conv | (S, T) | Simple1DConvNetwork | 1D Variant of "Baseline CNN", used by Ordóñez and Roggen, with dropout layers included. | -| [Simple 2D Convolutional Network](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv | (C, S, T) | Simple2DConvNetwork | 2D Variant of "Baseline CNN", used by Ordóñez and Roggen, with dropout layers included. 
| -| [CNN_HaEtAl_1D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 1D Conv | (S, T) | CNN_HaEtAl_1D | 1D proposed variant. | -| [CNN_HaEtAl_2D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 2D Conv | (C, S, T) | CNN_HaEtAl_2D | 2D proposed variant. | -| [CNN PF](https://ieeexplore.ieee.org/document/7727224) | Ha and Choi | Classification | 2D Conv | (C, S, T) | CNN_PF_2D | Partial weight sharing in first convolutional layer and full weight sharing in second convolutional layer. | -| [CNN PPF](https://ieeexplore.ieee.org/document/7727224) | Ha and Choi | Classification | 2D Conv | (C, S, T) | CNN_PFF_2D | Partial and full weight sharing in first convolutional layer and full weight sharing in second convolutional layer. | -| [IMU Transformer](https://ieeexplore.ieee.org/document/9393889) | Shavit and Klein | Classification | 1D Conv + Transformer | (S, T) | IMUTransformerEncoder | | -| [IMU CNN](https://ieeexplore.ieee.org/document/9393889) | Shavit and Klein | Classification | 1D Conv | (S, T) | IMUCNN | Baseline CNN for IMUTransnformer work. | -| [InceptionTime](https://doi.org/10.1007/s10618-020-00710-y) | Fawaz et al. | Classification | 1D Conv | (S, T) | InceptionTime | | -| [1D-ResNet](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | ResNet1D_8 | Baseline resnet from paper. Uses ELU and 8 residual blocks | -| [1D-ResNet-SE-8](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | ResNetSE1D_8 | ResNet with Squeeze and Excitation. Uses ELU and 8 residual blocks | -| [1D-ResNet-SE-5](https://ieeexplore.ieee.org/document/9771436) | Mekruksavanich et al. | Classification | 1D Conv | (S, T) | ResNetSE1D_5 | ResNet with Squeeze and Excitation. Uses ReLU and 8 residual blocks | -| [MCNN](https://ieeexplore.ieee.org/document/8975649) | Sikder et al. 
| Classification | 2D Conv | (2, C, S,T) | MultiChannelCNN_HAR | First dimension is FFT data and second is Welch Power Density periodgram data. Must adapt dataset to return data like this. | +| **Model** | **Authors** | **Task** | **Type** | **Input Shape** | **Python Class** | **Observations** | +| -------------------------------------------------------------------------- | -------------------------------- | -------------- | --------------------- | :-------------------------: | :-------------------: | --------------------------------------------------------------------------------------------------------------------------- | +| [DeepConvLSTM](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv + LSTM | (C, S, T) | DeepConvLSTM | | +| [Simple 1D Convolutional Network](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 1D Conv | (S, T) | Simple1DConvNetwork | 1D Variant of "Baseline CNN", used by Ordóñez and Roggen, with dropout layers included. | +| [Simple 2D Convolutional Network](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv | (C, S, T) | Simple2DConvNetwork | 2D Variant of "Baseline CNN", used by Ordóñez and Roggen, with dropout layers included. | +| [CNN_HaEtAl_1D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 1D Conv | (S, T) | CNN_HaEtAl_1D | 1D proposed variant. | +| [CNN_HaEtAl_2D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 2D Conv | (C, S, T) | CNN_HaEtAl_2D | 2D proposed variant. | +| [CNN PF](https://ieeexplore.ieee.org/document/7727224) | Ha and Choi | Classification | 2D Conv | (C, S, T) | CNN_PF_2D | Partial weight sharing in first convolutional layer and full weight sharing in second convolutional layer. 
| +| [CNN PPF](https://ieeexplore.ieee.org/document/7727224) | Ha and Choi | Classification | 2D Conv | (C, S, T) | CNN_PFF_2D | Partial and full weight sharing in first convolutional layer and full weight sharing in second convolutional layer. | +| [IMU Transformer](https://ieeexplore.ieee.org/document/9393889) | Shavit and Klein | Classification | 1D Conv + Transformer | (S, T) | IMUTransformerEncoder | | +| [IMU CNN](https://ieeexplore.ieee.org/document/9393889) | Shavit and Klein | Classification | 1D Conv | (S, T) | IMUCNN | Baseline CNN for IMUTransnformer work. | +| [InceptionTime](https://doi.org/10.1007/s10618-020-00710-y) | Fawaz et al. | Classification | 1D Conv | (S, T) | InceptionTime | | +| [1D-ResNet](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | ResNet1D_8 | Baseline resnet from paper. Uses ELU and 8 residual blocks | +| [1D-ResNet-SE-8](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | ResNetSE1D_8 | ResNet with Squeeze and Excitation. Uses ELU and 8 residual blocks | +| [1D-ResNet-SE-5](https://ieeexplore.ieee.org/document/9771436) | Mekruksavanich et al. | Classification | 1D Conv | (S, T) | ResNetSE1D_5 | ResNet with Squeeze and Excitation. Uses ReLU and 8 residual blocks | +| [MCNN](https://ieeexplore.ieee.org/document/8975649) | Sikder et al. | Classification | 2D Conv | (2, C, S, T) | MultiChannelCNN_HAR | First dimension is FFT data and second is Welch Power Density periodgram data. Must adapt dataset to return data like this. 
| # SSL Models From 6dee87fb8571830c29ca41ac5bd714580cc4c92f Mon Sep 17 00:00:00 2001 From: Gabriel Gutierrez Date: Sat, 20 Jul 2024 19:39:10 -0300 Subject: [PATCH 09/13] formatting changes --- minerva/models/README.md | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/minerva/models/README.md b/minerva/models/README.md index 99e5139..0f3c554 100644 --- a/minerva/models/README.md +++ b/minerva/models/README.md @@ -1,22 +1,22 @@ # Models -| **Model** | **Authors** | **Task** | **Type** | **Input Shape** | **Python Class** | **Observations** | -| -------------------------------------------------------------------------- | -------------------------------- | -------------- | --------------------- | :-------------------------: | :-------------------: | --------------------------------------------------------------------------------------------------------------------------- | -| [DeepConvLSTM](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv + LSTM | (C, S, T) | DeepConvLSTM | | -| [Simple 1D Convolutional Network](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 1D Conv | (S, T) | Simple1DConvNetwork | 1D Variant of "Baseline CNN", used by Ordóñez and Roggen, with dropout layers included. | -| [Simple 2D Convolutional Network](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv | (C, S, T) | Simple2DConvNetwork | 2D Variant of "Baseline CNN", used by Ordóñez and Roggen, with dropout layers included. | -| [CNN_HaEtAl_1D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 1D Conv | (S, T) | CNN_HaEtAl_1D | 1D proposed variant. | -| [CNN_HaEtAl_2D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 2D Conv | (C, S, T) | CNN_HaEtAl_2D | 2D proposed variant. 
| -| [CNN PF](https://ieeexplore.ieee.org/document/7727224) | Ha and Choi | Classification | 2D Conv | (C, S, T) | CNN_PF_2D | Partial weight sharing in first convolutional layer and full weight sharing in second convolutional layer. | -| [CNN PPF](https://ieeexplore.ieee.org/document/7727224) | Ha and Choi | Classification | 2D Conv | (C, S, T) | CNN_PFF_2D | Partial and full weight sharing in first convolutional layer and full weight sharing in second convolutional layer. | +| **Model** | **Authors** | **Task** | **Type** | **Input Shape** | **Python Class** | **Observations** | +| -------------------------------------------------------------------------- | -------------------------------- | :------------: | :-------------------: | :-------------------------: | :-------------------: | --------------------------------------------------------------------------------------------------------------------------- | +| [DeepConvLSTM](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv + LSTM | (C, S, T) | DeepConvLSTM | -- | +| [Simple 1D Convolutional Network](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 1D Conv | (S, T) | Simple1DConvNetwork | 1D Variant of "Baseline CNN", used by Ordóñez and Roggen, with dropout layers included. | +| [Simple 2D Convolutional Network](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv | (C, S, T) | Simple2DConvNetwork | 2D Variant of "Baseline CNN", used by Ordóñez and Roggen, with dropout layers included. | +| [CNN_HaEtAl_1D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 1D Conv | (S, T) | CNN_HaEtAl_1D | 1D proposed variant. | +| [CNN_HaEtAl_2D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 2D Conv | (C, S, T) | CNN_HaEtAl_2D | 2D proposed variant. 
| +| [CNN PF](https://ieeexplore.ieee.org/document/7727224) | Ha and Choi | Classification | 2D Conv | (C, S, T) | CNN_PF_2D | Partial weight sharing in first convolutional layer and full weight sharing in second convolutional layer. | +| [CNN PPF](https://ieeexplore.ieee.org/document/7727224) | Ha and Choi | Classification | 2D Conv | (C, S, T) | CNN_PFF_2D | Partial and full weight sharing in first convolutional layer and full weight sharing in second convolutional layer. | | [IMU Transformer](https://ieeexplore.ieee.org/document/9393889) | Shavit and Klein | Classification | 1D Conv + Transformer | (S, T) | IMUTransformerEncoder | | -| [IMU CNN](https://ieeexplore.ieee.org/document/9393889) | Shavit and Klein | Classification | 1D Conv | (S, T) | IMUCNN | Baseline CNN for IMUTransnformer work. | -| [InceptionTime](https://doi.org/10.1007/s10618-020-00710-y) | Fawaz et al. | Classification | 1D Conv | (S, T) | InceptionTime | | -| [1D-ResNet](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | ResNet1D_8 | Baseline resnet from paper. Uses ELU and 8 residual blocks | -| [1D-ResNet-SE-8](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | ResNetSE1D_8 | ResNet with Squeeze and Excitation. Uses ELU and 8 residual blocks | -| [1D-ResNet-SE-5](https://ieeexplore.ieee.org/document/9771436) | Mekruksavanich et al. | Classification | 1D Conv | (S, T) | ResNetSE1D_5 | ResNet with Squeeze and Excitation. Uses ReLU and 8 residual blocks | -| [MCNN](https://ieeexplore.ieee.org/document/8975649) | Sikder et al. | Classification | 2D Conv | (2, C, S, T) | MultiChannelCNN_HAR | First dimension is FFT data and second is Welch Power Density periodgram data. Must adapt dataset to return data like this. 
| +| [IMU CNN](https://ieeexplore.ieee.org/document/9393889) | Shavit and Klein | Classification | 1D Conv | (S, T) | IMUCNN | Baseline CNN for IMUTransnformer work. | +| [InceptionTime](https://doi.org/10.1007/s10618-020-00710-y) | Fawaz et al. | Classification | 1D Conv | (S, T) | InceptionTime | | +| [1D-ResNet](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | ResNet1D_8 | Baseline resnet from paper. Uses ELU and 8 residual blocks | +| [1D-ResNet-SE-8](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | ResNetSE1D_8 | ResNet with Squeeze and Excitation. Uses ELU and 8 residual blocks | +| [1D-ResNet-SE-5](https://ieeexplore.ieee.org/document/9771436) | Mekruksavanich et al. | Classification | 1D Conv | (S, T) | ResNetSE1D_5 | ResNet with Squeeze and Excitation. Uses ReLU and 8 residual blocks | +| [MCNN](https://ieeexplore.ieee.org/document/8975649) | Sikder et al. | Classification | 2D Conv | (2, C, S, T) | MultiChannelCNN_HAR | First dimension is FFT data and second is Welch Power Density periodgram data. Must adapt dataset to return data like this. 
| # SSL Models From 6e1aa19659306513302bec0ed82111ea00a8864b Mon Sep 17 00:00:00 2001 From: Gabriel Gutierrez Date: Sat, 20 Jul 2024 19:50:18 -0300 Subject: [PATCH 10/13] changing name and et al formatting --- minerva/models/README.md | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/minerva/models/README.md b/minerva/models/README.md index 0f3c554..3be8bf7 100644 --- a/minerva/models/README.md +++ b/minerva/models/README.md @@ -3,23 +3,23 @@ | **Model** | **Authors** | **Task** | **Type** | **Input Shape** | **Python Class** | **Observations** | | -------------------------------------------------------------------------- | -------------------------------- | :------------: | :-------------------: | :-------------------------: | :-------------------: | --------------------------------------------------------------------------------------------------------------------------- | -| [DeepConvLSTM](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv + LSTM | (C, S, T) | DeepConvLSTM | -- | +| [Deep Conv LSTM](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv + LSTM | (C, S, T) | DeepConvLSTM | -- | | [Simple 1D Convolutional Network](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 1D Conv | (S, T) | Simple1DConvNetwork | 1D Variant of "Baseline CNN", used by Ordóñez and Roggen, with dropout layers included. | | [Simple 2D Convolutional Network](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv | (C, S, T) | Simple2DConvNetwork | 2D Variant of "Baseline CNN", used by Ordóñez and Roggen, with dropout layers included. | -| [CNN_HaEtAl_1D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 1D Conv | (S, T) | CNN_HaEtAl_1D | 1D proposed variant. 
| -| [CNN_HaEtAl_2D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 2D Conv | (C, S, T) | CNN_HaEtAl_2D | 2D proposed variant. | +| [CNN Ha *et al.* 1D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 1D Conv | (S, T) | CNN_HaEtAl_1D | 1D proposed variant. | +| [CNN Ha *et al.* 2D](https://ieeexplore.ieee.org/document/7379657) | Ha, Yun and Choi | Classification | 2D Conv | (C, S, T) | CNN_HaEtAl_2D | 2D proposed variant. | | [CNN PF](https://ieeexplore.ieee.org/document/7727224) | Ha and Choi | Classification | 2D Conv | (C, S, T) | CNN_PF_2D | Partial weight sharing in first convolutional layer and full weight sharing in second convolutional layer. | | [CNN PPF](https://ieeexplore.ieee.org/document/7727224) | Ha and Choi | Classification | 2D Conv | (C, S, T) | CNN_PFF_2D | Partial and full weight sharing in first convolutional layer and full weight sharing in second convolutional layer. | -| [IMU Transformer](https://ieeexplore.ieee.org/document/9393889) | Shavit and Klein | Classification | 1D Conv + Transformer | (S, T) | IMUTransformerEncoder | | +| [IMU Transformer](https://ieeexplore.ieee.org/document/9393889) | Shavit and Klein | Classification | 1D Conv + Transformer | (S, T) | IMUTransformerEncoder | -- | | [IMU CNN](https://ieeexplore.ieee.org/document/9393889) | Shavit and Klein | Classification | 1D Conv | (S, T) | IMUCNN | Baseline CNN for IMUTransnformer work. | -| [InceptionTime](https://doi.org/10.1007/s10618-020-00710-y) | Fawaz et al. | Classification | 1D Conv | (S, T) | InceptionTime | | -| [1D-ResNet](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | ResNet1D_8 | Baseline resnet from paper. Uses ELU and 8 residual blocks | -| [1D-ResNet-SE-8](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | ResNetSE1D_8 | ResNet with Squeeze and Excitation. 
Uses ELU and 8 residual blocks | -| [1D-ResNet-SE-5](https://ieeexplore.ieee.org/document/9771436) | Mekruksavanich et al. | Classification | 1D Conv | (S, T) | ResNetSE1D_5 | ResNet with Squeeze and Excitation. Uses ReLU and 8 residual blocks | -| [MCNN](https://ieeexplore.ieee.org/document/8975649) | Sikder et al. | Classification | 2D Conv | (2, C, S, T) | MultiChannelCNN_HAR | First dimension is FFT data and second is Welch Power Density periodgram data. Must adapt dataset to return data like this. | +| [Inception Time](https://doi.org/10.1007/s10618-020-00710-y) | Fawaz *et al.* | Classification | 1D Conv | (S, T) | InceptionTime | -- | +| [1D ResNet](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | ResNet1D_8 | Baseline resnet from paper. Uses ELU and 8 residual blocks | +| [1D ResNet SE 8](https://www.mdpi.com/1424-8220/22/8/3094) | Mekruksavanich and Jitpattanakul | Classification | 1D Conv | (S, T) | ResNetSE1D_8 | ResNet with Squeeze and Excitation. Uses ELU and 8 residual blocks | +| [1D ResNet SE 5](https://ieeexplore.ieee.org/document/9771436) | Mekruksavanich *et al.* | Classification | 1D Conv | (S, T) | ResNetSE1D_5 | ResNet with Squeeze and Excitation. Uses ReLU and 8 residual blocks | +| [MCNN](https://ieeexplore.ieee.org/document/8975649) | Sikder *et al.* | Classification | 2D Conv | (2, C, S, T) | MultiChannelCNN_HAR | First dimension is FFT data and second is Welch Power Density periodgram data. Must adapt dataset to return data like this. | # SSL Models -| **Model** | **Authors** | **Task** | **Type** | **Input Shape** | **Python Class** | **Observations** | -| --------------------------------------- | ------------- | -------- | -------- | :-------------: | :----------------------: | ---------------- | -| [LFR](https://arxiv.org/abs/2310.07756) | Yi Sui et al. 
| Any | Any | Any | LearnFromRandomnessModel | |

From 5fcac4caa32e081e7bb34b9d33db60969932b85c Mon Sep 17 00:00:00 2001
From: Gabriel Gutierrez
Date: Sat, 20 Jul 2024 20:47:26 -0300
Subject: [PATCH 11/13] Adding description to nets readme

---
 minerva/models/README.md | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/minerva/models/README.md b/minerva/models/README.md
index 3be8bf7..34f212b 100644
--- a/minerva/models/README.md
+++ b/minerva/models/README.md
@@ -1,6 +1,16 @@
 # Models
 
+The models module is a collection of models that can be used for various tasks. It has two main submodules: `nets` and `ssl`. The `nets` submodule contains neural network architectures that can be trained in a supervised manner. The `ssl` submodule contains implementations of self-supervised learning techniques, which usually require an architecture from the `nets` submodule to run.
+
+Both the `nets` and `ssl` submodules are further divided into areas of use (e.g. `image` and `time_series`). This division makes it easier to find the right model for the right task. It is not uncommon for a model to be used in more than one area of use; in that case, it is placed under the main area of use of its original paper.
+
+If an architecture or self-supervised learning technique is agnostic to the type of data, it is placed in the root of the submodule. 
+ +## Nets + +These are the models implemented in the `nets` submodule: + | **Model** | **Authors** | **Task** | **Type** | **Input Shape** | **Python Class** | **Observations** | | -------------------------------------------------------------------------- | -------------------------------- | :------------: | :-------------------: | :-------------------------: | :-------------------: | --------------------------------------------------------------------------------------------------------------------------- | | [Deep Conv LSTM](https://www.mdpi.com/1424-8220/16/1/115) | Ordóñez and Roggen | Classification | 2D Conv + LSTM | (C, S, T) | DeepConvLSTM | -- | @@ -18,7 +28,9 @@ | [1D ResNet SE 5](https://ieeexplore.ieee.org/document/9771436) | Mekruksavanich *et al.* | Classification | 1D Conv | (S, T) | ResNetSE1D_5 | ResNet with Squeeze and Excitation. Uses ReLU and 8 residual blocks | | [MCNN](https://ieeexplore.ieee.org/document/8975649) | Sikder *et al.* | Classification | 2D Conv | (2, C, S, T) | MultiChannelCNN_HAR | First dimension is FFT data and second is Welch Power Density periodgram data. Must adapt dataset to return data like this. | -# SSL Models +## SSL Models + +These are the models implemented in the `ssl` submodule: | **Model** | **Authors** | **Task** | **Type** | **Input Shape** | **Python Class** | **Observations** | | --------------------------------------- | --------------- | -------- | -------- | :-------------: | :----------------------: | ---------------- | From fdacd71da56601a0e903fc5ddc4a8c4d7a429d2a Mon Sep 17 00:00:00 2001 From: Gabriel Gutierrez Date: Sat, 20 Jul 2024 20:49:07 -0300 Subject: [PATCH 12/13] corrections --- minerva/pipelines/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/minerva/pipelines/README.md b/minerva/pipelines/README.md index a68ae32..b2146bb 100644 --- a/minerva/pipelines/README.md +++ b/minerva/pipelines/README.md @@ -26,7 +26,7 @@ Welcome to the Pipelines section! 
Here, we'll explore the core functionalities a ## 5. Clonability -- **Cloning Pipelines**: Pipelines are cloneable, enabling the creation of independent instances from existing ones. The `clone` method initializes a deep copy, providing a clean slate for each clone. +- **Cloning Pipelines**: Pipelines are clonable, enabling the creation of independent instances from existing ones. The `clone` method initializes a deep copy, providing a clean slate for each clone. ## 6. Parallel and Distributed Environments @@ -73,4 +73,4 @@ Our modular approach to configuration files provides flexibility and organizatio Pipelines are powerful tools for automating tasks efficiently. By following best practices and leveraging versatile pipelines like `SimpleLightningPipeline`, you can streamline your workflow and achieve reproducible results with ease. Happy pipelining! -Feel free to explore more examples and documentation for detailed insights into pipeline usage and customization. \ No newline at end of file +Feel free to explore more examples and documentation for detailed insights into pipeline usage and customization. 
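The pipelines patch above only fixes a spelling ("clonable"), but the cloning contract it touches — "the `clone` method initializes a deep copy, providing a clean slate for each clone" — is worth illustrating. A minimal sketch of that behavior follows; `Pipeline` here is a hypothetical stand-in written for this example, not Minerva's actual class, and resetting per-run state is our reading of what "a clean slate" means:

```python
import copy

class Pipeline:
    """Hypothetical pipeline illustrating clone-via-deep-copy semantics."""

    def __init__(self, steps):
        self.steps = list(steps)  # ordered callables applied in sequence
        self.results = []         # per-run state accumulated by run()

    def run(self, data):
        for step in self.steps:
            data = step(data)
        self.results.append(data)
        return data

    def clone(self):
        # Deep copy so the clone shares no mutable state with the original,
        # then reset the per-run state to give the clone a clean slate.
        twin = copy.deepcopy(self)
        twin.results = []
        return twin
```

Because the clone shares no mutable state with its source, running it never touches the original's `results` — which is what makes independent clones safe to hand to parallel or distributed workers.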
From 5ad6b60f0861251527f8efc31a3eddf2f9234fca Mon Sep 17 00:00:00 2001 From: Gabriel Gutierrez Date: Tue, 23 Jul 2024 15:42:20 -0300 Subject: [PATCH 13/13] readme changes --- minerva/data/README.md | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) diff --git a/minerva/data/README.md b/minerva/data/README.md index 5f19123..c65dbaf 100644 --- a/minerva/data/README.md +++ b/minerva/data/README.md @@ -1,10 +1,12 @@ -# Readers +# Data -| **Reader** | **Data Unit** | **Order** | **Class** | **Observations** | -|-------------------- |----------------------------------------------------------------------------------- |--------------------- |-------------------------------------------------------------- |------------------------------------------------------------------------------------------------------------------------------------ | -| PNGReader | Each unit of data is a image file (PNG) inside the root folder | Lexigraphical order | minerva.data.readers.png_reader.PNGReader | File extensions: .png | -| TIFFReader | Each unit of data is a image file (TIFF) inside the root folder | Lexigraphical order | minerva.data.readers.tiff_reader.TiffReader | File extensions: .tif and .tiff | -| TabularReader | Each unit of data is the i-th row in a dataframe, with columns filtered | Dataframe rows | minerva.data.readers.tabular_reader.TabularReader | Support pandas dataframe | -| CSVReader | Each unit of data is the i-th row in a CSV file, with columns filtered | CSV Rowd | minerva.data.readers.csv_reader.CSVReader | If dataframe is already open, use TabularReader instead. 
This class will open and load the CSV file and pass it to a TabularReader |
-| PatchedArrayReader | Each unit of data is a submatrix of specified shape inside an n-dimensional array | Dimension order | minerva.data.readers.patched_array_reader.PatchedArrayReader | Supports any data with ndarray protocol (tensor, xarray, zarr) |
-| PatchedZarrReader | Each unit of data is a submatrix of specified shape inside an Zarr Array | Dimension order | minerva.data.readers.zarr_reader.ZarrArrayReader | Open zarr file in lazy mode and pass it to PatchedArrayReader |
\ No newline at end of file
+# Data
+
+## Readers
+
+| **Reader** | **Data Unit** | **Order** | **Class** | **Observations** |
+| :----------------- | ---------------------------------------------------------------------------------- | :-------------------: | :----------------: | ----------------------------------------------------------------------------------------------------------------------------------- |
+| PNGReader | Each unit of data is an image file (PNG) inside the root folder | Lexicographical order | PNGReader | File extensions: .png |
+| TIFFReader | Each unit of data is an image file (TIFF) inside the root folder | Lexicographical order | TiffReader | File extensions: .tif and .tiff |
+| TabularReader | Each unit of data is the i-th row in a dataframe, with columns filtered | Dataframe rows | TabularReader | Supports pandas dataframes |
+| CSVReader | Each unit of data is the i-th row in a CSV file, with columns filtered | CSV rows | CSVReader | If the dataframe is already open, use TabularReader instead. 
This class will open and load the CSV file and pass it to a TabularReader |
+| PatchedArrayReader | Each unit of data is a submatrix of specified shape inside an n-dimensional array | Dimension order | PatchedArrayReader | Supports any data with ndarray protocol (tensor, xarray, zarr) |
+| PatchedZarrReader | Each unit of data is a submatrix of specified shape inside a Zarr array | Dimension order | ZarrArrayReader | Opens the Zarr file in lazy mode and passes it to PatchedArrayReader |
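The `PatchedArrayReader` row in the table above says each unit of data is a submatrix of a specified shape, iterated in dimension order. The indexing arithmetic behind such a reader can be sketched as follows — `patch_slices` is an illustrative helper written for this example, not the actual Minerva class, and it assumes non-overlapping patches whose shape evenly divides the array:

```python
from itertools import product

def patch_slices(data_shape, patch_shape):
    """Yield one tuple of slices per patch, covering consecutive
    non-overlapping submatrices in dimension (row-major) order."""
    starts = [range(0, dim - size + 1, size)
              for dim, size in zip(data_shape, patch_shape)]
    for origin in product(*starts):
        yield tuple(slice(start, start + size)
                    for start, size in zip(origin, patch_shape))
```

Each yielded tuple can index any object following the ndarray protocol (NumPy, xarray, Zarr), which matches the table's observation; a (4, 4) array with (2, 2) patches yields four slice tuples, one per quadrant.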