Releases: basf/mamba-tabular

Release v1.0.0

04 Dec 16:05
0a829f6

v1.0.0: Major Update for Mambular

This release marks a significant upgrade to Mambular, introducing new models, enhanced functionality, and improved efficiency. Below is an overview of the key changes:


🚀 New Models

  • TabM: A cutting-edge tabular deep learning model optimized for performance and flexibility.
  • NODE: Incorporates Neural Oblivious Decision Ensembles for more robust handling of tabular data.
  • NDTF: Neural Decision Tree Forest, combining decision tree logic with deep learning capabilities.
  • TabulaRNN: A recurrent neural network tailored for tabular tasks, with configurable GRU or LSTM cells for sequence modeling (see the sketch below).
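
The new models ship with the same sklearn-style wrappers as the rest of the library. A minimal sketch, assuming the `mambular.models` import path; `TabulaRNNRegressor` itself appears in the v0.2.2 notes below, but the keyword for switching between GRU and LSTM cells is an assumption:

```python
# Hedged sketch: model_type is an assumed name for the GRU/LSTM switch;
# inspect the wrapper's get_params() for the actual option.
from mambular.models import TabulaRNNRegressor

model = TabulaRNNRegressor(model_type="GRU")  # or "LSTM"
```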

🎛️ Hyperparameter Optimization

  • Built-in Bayesian optimization for advanced tuning.
  • Full compatibility with scikit-learn's tuning utilities (e.g., GridSearchCV), enabling seamless integration for all models; see the sketch below.
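
A minimal sketch of the scikit-learn integration, assuming the `mambular.models` import path; the grid keys `lr` and `d_model` are assumed hyperparameter names for illustration (inspect `get_params()` for the real tunable names):

```python
# Hedged sketch: tuning a Mambular model with scikit-learn's GridSearchCV.
# The grid keys (lr, d_model) are assumed names, not a confirmed API.
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from mambular.models import MambularRegressor

X, y = make_regression(n_samples=200, n_features=10, random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(10)])

search = GridSearchCV(
    MambularRegressor(),
    param_grid={"lr": [1e-3, 1e-4], "d_model": [32, 64]},
    cv=3,
)
search.fit(X, y)
print(search.best_params_)
```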

⚡ Efficiency Improvements

  • Leverages the new mamba-ssm package for a more efficient implementation of the Mamba framework, ensuring faster runtime and reduced memory usage (see the sketch below).
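
The `mamba-ssm` package provides fused CUDA kernels for the selective scan. A common optional-dependency pattern looks like the sketch below; whether Mambular selects its backend exactly this way is an assumption:

```python
# Illustrative optional-import pattern; Mambular's actual backend
# selection mechanism is an assumption here.
try:
    from mamba_ssm import Mamba  # fused CUDA kernels from mamba-ssm
    HAS_MAMBA_SSM = True
except ImportError:
    Mamba = None
    HAS_MAMBA_SSM = False  # fall back to a pure-PyTorch implementation

print("mamba-ssm available:", HAS_MAMBA_SSM)
```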

🛠️ Enhanced Preprocessing

  • Expanded preprocessing options for greater control over feature transformations (see the sketch below).
  • Improved feature information handling to better accommodate various dataset types and structures.
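
A hedged sketch of what configuring these options can look like; the `numerical_preprocessing` keyword and the `"quantile"` option are assumptions for illustration, so check the Preprocessor documentation for the names your installed version exposes:

```python
# Hedged sketch: numerical_preprocessing and "quantile" are assumed names.
from mambular.models import MambularRegressor

model = MambularRegressor(numerical_preprocessing="quantile")
```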

🧬 Improved Embedding Layers

  • New embedding layers, including PLR (periodic-linear-ReLU) embeddings for numerical features; see the sketch below.
  • Customizable activation functions for enhanced flexibility in embedding generation.
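
A minimal PyTorch sketch of the PLR idea for a single numerical feature, to illustrate the technique rather than Mambular's exact implementation: learned frequencies feed periodic sin/cos features into a linear layer and an activation.

```python
# Sketch of a PLR-style (periodic-linear-ReLU) embedding for one feature.
# Illustrative only; not Mambular's internal implementation.
import torch
import torch.nn as nn

class PLREmbedding(nn.Module):
    def __init__(self, n_frequencies: int = 8, d_embedding: int = 16):
        super().__init__()
        # learned frequencies for the periodic part
        self.frequencies = nn.Parameter(torch.randn(n_frequencies))
        # linear layer mapping [sin, cos] features to the embedding size
        self.linear = nn.Linear(2 * n_frequencies, d_embedding)
        self.activation = nn.ReLU()  # customizable, per the note above

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch,) scalar feature -> (batch, d_embedding)
        z = 2 * torch.pi * self.frequencies * x.unsqueeze(-1)
        periodic = torch.cat([torch.sin(z), torch.cos(z)], dim=-1)
        return self.activation(self.linear(periodic))

emb = PLREmbedding()
print(emb(torch.randn(4)).shape)  # torch.Size([4, 16])
```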

This release sets the foundation for continued innovation in tabular deep learning. Feedback and contributions are welcome!

Release v0.2.4

21 Oct 15:03
e65e88b

What's Changed

  • sklearn base modules: Changed conditional checks to use `if X_val is not None` instead of `if X_val` in the `build_model` and `fit` methods, by @AnFreTh in #142 (see the note below on why truthiness checks fail for arrays).
  • `mambular/data_utils/datamodule.py`: Ensured that keys are converted to strings when constructing `cat_key`, `binned_key`, and `num_key` in the `setup` and `preprocess_test_data` methods, by @AnFreTh in #142.
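
For context on the first change: the truth value of a multi-element NumPy array is ambiguous, so a bare truthiness check raises instead of detecting missing validation data. A minimal reproduction:

```python
# Why the change matters: `if X_val:` raises for multi-element arrays,
# while `if X_val is not None:` tests exactly what was intended.
import numpy as np

X_val = np.array([[1.0, 2.0], [3.0, 4.0]])

try:
    if X_val:  # ValueError: truth value of an array ... is ambiguous
        pass
except ValueError as e:
    print(e)

if X_val is not None:  # the fixed check: explicit, works for any array
    print("validation data provided")
```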

Full Changelog: v0.2.3...v0.2.4

Release v0.2.3

19 Sep 08:40
48d22da
  • Added quantile regression support.
  • Fixed a parameter count bug in SklearnBaseLSS.

v0.2.2

13 Aug 16:25
3068e8f
  • Fixed an error in SklearnBaseClassifier.
  • Fixed an error in TabulaRNNRegressor.

Release v0.2.1

13 Aug 03:01
dad9a12

What's Changed

  • Included new models (MambaTab, TabulaRNN).
  • Added utility functionality to model building.
  • Improved embedding layers.
  • Added A/B layer normalization and weight decay to Mamba.
  • Added a score method to the sklearn base classes.

Full Changelog: v0.1.7...v0.2.1

Hotfix Release v0.1.7

11 Jul 08:43
57ca8fc

What's Changed

  • NumPy version pinned by @mkumar73 in #71.
  • Switched to NumPy <=1.26.4 instead of 2.0.

New Release v0.1.6

01 Jul 13:26
5157dd5

Release Highlights:

  • Addition of New Models: We've expanded our model suite to include the following architectures:

    • FT-Transformer: Leverages transformer encoders for improved performance on tabular data.
    • MLP (Multi-Layer Perceptron): A classical deep learning model for handling a wide range of tabular data tasks.
    • ResNet: Adapted from the classical ResNet architecture and proven to be a good baseline for tabular tasks.
    • TabTransformer: Utilizes transformer-based models for categorical features.
  • Bidirectional and Feature Interaction Capabilities: Mambular now supports bidirectional processing and enhanced feature interaction mechanisms, enabling richer data representations and improved model accuracy.

  • Architectural Restructuring: The internal architecture has been restructured to facilitate the easy integration of new models. This modular approach simplifies the process of extending Mambular with custom models.

  • New Preprocessing Methods: We have introduced new preprocessing techniques to better prepare your data for modeling (illustrated with their scikit-learn equivalents after this list):

    • Quantile Preprocessing: Transforms numerical features to follow a uniform or normal distribution, improving robustness to outliers.
    • Polynomial Features: Generates polynomial and interaction features to capture more complex relationships within the data.
    • Spline Transformation: Applies piecewise polynomial functions to numerical features, effectively capturing nonlinear relationships.
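
These three methods correspond to well-known transforms; the sketch below uses their scikit-learn equivalents purely to illustrate the effect (Mambular applies its own implementations internally):

```python
# Scikit-learn equivalents of the three new preprocessing methods,
# shown for illustration of what each transform produces.
import numpy as np
from sklearn.preprocessing import (
    QuantileTransformer,
    PolynomialFeatures,
    SplineTransformer,
)

X = np.random.rand(100, 2)

# map features to a normal distribution (robust to outliers)
X_quantile = QuantileTransformer(output_distribution="normal", n_quantiles=50).fit_transform(X)
# polynomial and interaction terms up to degree 2
X_poly = PolynomialFeatures(degree=2).fit_transform(X)
# piecewise cubic splines capturing nonlinear relationships
X_spline = SplineTransformer(degree=3, n_knots=5).fit_transform(X)

print(X_quantile.shape, X_poly.shape, X_spline.shape)
```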

Beta release: v0.1.4

04 Jun 11:28
c74cff8

Mambular: Tabular Deep Learning with Mamba Architectures

Introduction

Mambular is a Python package that brings the power of Mamba architectures to tabular data, offering a suite of deep learning models for regression, classification, and distributional regression tasks.

Features

  • Comprehensive Model Suite: Includes modules for regression (MambularRegressor), classification (MambularClassifier), and distributional regression (MambularLSS), catering to a wide range of tabular data tasks.

  • State-of-the-Art Architectures: Leverages the Mamba architecture, known for its effectiveness in handling sequential and time-series data within a state-space modeling framework, adapted here for tabular data.
  • Seamless Integration: Designed to work effortlessly with scikit-learn, allowing for easy inclusion in existing machine learning pipelines, cross-validation, and hyperparameter tuning workflows.
  • Extensive Preprocessing: Comes with a powerful preprocessing module that supports a broad array of data transformation techniques, ensuring that your data is optimally prepared for model training.
  • Sklearn-like API: The familiar scikit-learn fit, predict, and predict_proba methods mean a minimal learning curve for anyone already accustomed to scikit-learn; see the sketch after this list.
  • PyTorch Lightning Under the Hood: Built on top of PyTorch Lightning, Mambular models benefit from streamlined training processes, easy customization, and advanced features like distributed training and 16-bit precision.
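
A minimal usage sketch of that API, assuming the `mambular.models` import path; the synthetic DataFrame is for illustration only:

```python
# Hedged sketch of the sklearn-style workflow with MambularClassifier.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from mambular.models import MambularClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(10)])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MambularClassifier()
clf.fit(X_train, y_train)          # PyTorch Lightning training loop under the hood
proba = clf.predict_proba(X_test)  # class probabilities, sklearn-style
print(proba.shape)
```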