
Releases: giotto-ai/giotto-deep

v0.0.4

11 Oct 12:54
b09b735

What's Changed

New Contributors

Full Changelog: v0.0.3...v0.0.4

v0.0.3

04 Oct 17:30
f83811f

What's Changed

New Contributors

Full Changelog: v0.0.2...v0.0.3

giotto-deep release v0.0.2

19 Jul 09:42
fa9e702
Pre-release

What's New

This release introduces a new setup for distributing computations on Kubernetes:

  • Using RQ to parallelise jobs by @matteocao in #94
  • A new visualisation tool for persistence diagram (PD) attributions: in the Visualiser the method is called plot_attributions_persistence_diagrams (see the sketch after this list)
  • New notebooks: a full example of how to use Persformer on the Orbit5K dataset (as published in the paper) and a notebook that uses Persformer inside a classical giotto-tda pipeline

Full Changelog: v0.0.1...v0.0.2
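
A minimal sketch of how the new attribution plot might be called. Only the method name plot_attributions_persistence_diagrams comes from this release; the import path, the Visualiser(trainer) constructor and the batch argument are assumptions.

    # Sketch only: the import path, the Visualiser(trainer) constructor and the
    # `batch` argument are assumptions; the method name is taken from these notes.
    from gdeep.visualisation import Visualiser  # assumed import path

    # `trainer` stands for an already-trained giotto-deep trainer wrapping a
    # Persformer model on persistence-diagram inputs; `batch` for one input batch.
    vs = Visualiser(trainer)

    # Plot the attribution score of each point of the input persistence diagrams.
    vs.plot_attributions_persistence_diagrams(batch)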

Breaking changes

The Betti surface function is now called plot_betti_surface_layers rather than betti_plot_layers. There is also a Betti curves counterpart, plot_betti_curves_layers, which plots the Betti curves associated with each PD (hence with each layer).
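
A hedged before/after sketch of the rename. Only the function names come from this release; whether they are methods of a Visualiser instance (written here as vs) and what arguments they take are assumptions.

    # Pre-v0.0.2 name, no longer available:
    # vs.betti_plot_layers(...)

    # v0.0.2 names (the receiver `vs` and the empty argument lists are assumptions):
    vs.plot_betti_surface_layers()  # Betti surface for each layer
    vs.plot_betti_curves_layers()   # Betti curves associated with each PD, hence each layer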

Bug fixes

Fixed a bug related to the use of the SAMOptimizer in HPO and a bug related to converting gtda PDs to OneHotEncodedPersistenceDiagram.

Acknowledgement

@matteocao, @nberkouk and @raphaelreinauer contributed to this minor release.

giotto-deep release v0.0.1

17 Jun 18:35
Pre-release

Major Features and Improvements

Introduction

This is the first open-source release of the new library giotto-deep. The library is a doorway to bringing together topological data analysis and deep learning. giotto-deep also works with many deep learning technologies that are not topology-related, and its simple API allows researchers to focus on building new models, layers, losses, ... while the dull and repetitive work is done automatically.

Main dependencies

  • The library is built on top of PyTorch and uses most of its features.
  • The hyper-parameter optimisation capabilities are based on Optuna; the integration will soon allow the user to distribute the computations over a Kubernetes cluster (see the Optuna sketch after this list).
  • The interpretability tools are based on captum.
  • TensorBoard is heavily used for plotting.
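
To illustrate the mechanism underlying the HPO layer, here is a minimal plain-Optuna search. This is not giotto-deep's own wrapper API, and the objective below is a stand-in for a real training/validation run.

    import optuna


    def objective(trial: optuna.Trial) -> float:
        # Sample hyper-parameters; giotto-deep drives this sampling from the
        # search space declared by the user, here it is written out by hand.
        lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
        n_layers = trial.suggest_int("n_layers", 1, 4)
        # Placeholder score standing in for a real validation metric.
        return (lr - 1e-3) ** 2 + 0.1 * n_layers


    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=20)
    print(study.best_params)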

Major innovation

The main innovations proposed in this version are:

  • The Persformer algorithm (see the preprint)
  • A persistence diagram data type compatible with PyTorch and GPUs
  • A persistence gradient implementation using giotto-ph
  • Full integration with TensorBoard for plotting
  • Fully fledged hyper-parameter search capabilities, including the possibility to search over model architectures and to automatically benchmark the models over multiple datasets
  • Integration of over twenty interpretability tools (saliency maps, GuidedGradCAM, occlusion, Integrated Gradients, ...), all based on captum (see the sketch below)
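
Since the interpretability layer is built on captum, the backend can be illustrated directly. The sketch below is plain captum on a toy PyTorch model, not giotto-deep's wrapper; the model and inputs are made up for the example.

    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    # Toy classifier standing in for any PyTorch model trained with giotto-deep.
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
    model.eval()

    inputs = torch.randn(4, 8, requires_grad=True)

    # Attribute the score of class 1 to each input feature.
    ig = IntegratedGradients(model)
    attributions, delta = ig.attribute(inputs, target=1, return_convergence_delta=True)
    print(attributions.shape)  # torch.Size([4, 8])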

Ideal audience and user persona

We have built this library primarily to support applied mathematicians who know a great deal of cool, little-known algorithms and would like to quickly combine their ideas with deep learning. The high-level API is very simple and requires minimal effort to run HPOs and trainings.

Machine learning engineers and data scientists will also find giotto-deep useful for their analyses, as they can quickly build and train models on a variety of use cases. In addition, giotto-deep provides simple APIs for building new data types and their preprocessing; a comprehensive example is the persistence diagram data type.

Bug Fixes

None.

Backwards-Incompatible Changes

None.

Thanks to our Contributors

This release contains contributions from:

Matteo Caorsi @matteocao
Raphael Reinauer @raphaelreinauer
Nicolas Berkouk @nberkouk
Sydney Hauke @sydneyhauke
Abdul Jabbar

We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.