HippoTrainer

Gradient-Based Hyperparameter Optimization for PyTorch 🦛

Built on PyTorch, inspired by Optuna.


HippoTrainer is a PyTorch-compatible library for gradient-based hyperparameter optimization, implementing cutting-edge algorithms that leverage automatic differentiation to efficiently tune hyperparameters.
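Concretely, gradient-based hyperparameter optimization treats tuning as a bilevel problem: hyperparameters minimize a validation loss evaluated at parameters that themselves minimize a training loss. The formulation below is the standard one these algorithms target (stated here for context, not taken from the repository):

$$
\lambda^\star = \arg\min_{\lambda} \; \mathcal{L}_{\text{val}}\big(w^\star(\lambda), \lambda\big)
\quad \text{s.t.} \quad
w^\star(\lambda) = \arg\min_{w} \; \mathcal{L}_{\text{train}}(w, \lambda),
$$

with hypergradient

$$
\frac{d\mathcal{L}_{\text{val}}}{d\lambda}
= \frac{\partial \mathcal{L}_{\text{val}}}{\partial \lambda}
+ \left(\frac{d w^\star}{d \lambda}\right)^{\top} \frac{\partial \mathcal{L}_{\text{val}}}{\partial w}.
$$

Unrolled methods (T1-T2, DrMAD) obtain $dw^\star/d\lambda$ by differentiating through the optimizer's update steps, while implicit methods (HOAG) solve for it via the implicit function theorem using a conjugate-gradient linear solve.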

🚀 Features

  • Algorithm Zoo: T1-T2, Billion Hyperparameters, HOAG, DrMAD
  • PyTorch Native: Direct integration with torch.nn.Module
  • Memory Efficient: Checkpointing & implicit differentiation
  • Scalable: From laptop to cluster with PyTorch backend

📜 Algorithms

  • T1-T2 (Paper): Unrolled optimization with explicit gradient computation (see the sketch after this list)
  • Billion Hyperparams (Paper): Large-scale optimization with PyTorch fusion
  • HOAG (Paper): Implicit differentiation via conjugate gradient (Daniil Dorin)
  • DrMAD (Paper): Memory-efficient piecewise-linear backprop
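
To make the unrolled (T1-T2-style) idea concrete, here is a minimal plain-PyTorch sketch, independent of the HippoTrainer API: it tunes a single L2-regularization hyperparameter by differentiating the validation loss through one SGD step. The toy model, data, learning rates, and the log-parameterized hyperparameter are illustrative assumptions, not code from this repository.

```python
import torch
import torch.nn as nn

# Toy data and a tiny model (illustrative placeholders only).
torch.manual_seed(0)
X_train, y_train = torch.randn(64, 10), torch.randn(64, 1)
X_val, y_val = torch.randn(64, 10), torch.randn(64, 1)
model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()

# Continuous hyperparameter: log of the L2 penalty strength (assumed example).
log_l2 = torch.tensor(-4.0, requires_grad=True)
hyper_opt = torch.optim.Adam([log_l2], lr=1e-2)
inner_lr = 0.1

for step in range(100):
    params = list(model.parameters())

    # Regularized training loss; the L2 term keeps a graph w.r.t. log_l2.
    train_loss = loss_fn(model(X_train), y_train) \
        + log_l2.exp() * sum(p.pow(2).sum() for p in params)
    grads = torch.autograd.grad(train_loss, params, create_graph=True)

    # One unrolled SGD step: w' = w - lr * grad(w). create_graph=True above
    # lets the validation loss below backpropagate into log_l2.
    w_new = [p - inner_lr * g for p, g in zip(params, grads)]

    # Validation loss at the updated parameters (functional forward pass).
    weight, bias = w_new
    val_loss = loss_fn(X_val @ weight.t() + bias, y_val)

    # Hypergradient step on the hyperparameter.
    hyper_opt.zero_grad()
    log_l2.grad = torch.autograd.grad(val_loss, log_l2)[0]
    hyper_opt.step()

    # Commit the parameter update, detached from the hypergradient graph.
    with torch.no_grad():
        for p, p_new in zip(params, w_new):
            p.copy_(p_new)
```

The algorithms listed above automate and generalize this bookkeeping (longer unrolls, checkpointing, or implicit differentiation) so it does not have to be hand-rolled as in this sketch.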

🤝 Contributors

  • Daniil Dorin (Basic code writing, Final demo, Algorithms)
  • Igor Ignashin (Project wrapping, Documentation writing, Algorithms)
  • Nikita Kiselev (Project planning, Blog post, Algorithms)
  • Andrey Veprikov (Tests writing, Documentation writing, Algorithms)
  • We welcome contributions!

📄 License

HippoTrainer is MIT licensed. See LICENSE for details.
