*Hung-yi Lee Deep Learning Tutorial* (recommended by Prof. Hung-yi Lee 👍, the "Apple Book" 🍎). PDF download: https://github.com/datawhalechina/leedl-tutorial/releases
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
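AIMET's own API is more involved; as a rough illustration of the post-training quantization idea it implements, here is a minimal sketch using PyTorch's built-in `torch.ao.quantization.quantize_dynamic` (the model and shapes are placeholders, not AIMET code):

```python
import torch
import torch.nn as nn

# Placeholder model; in practice this would be a trained network.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Post-training dynamic quantization: weights of the listed module
# types are stored as int8; activations are quantized on the fly.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])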
Official PyTorch implementation of "A Comprehensive Overhaul of Feature Distillation" (ICCV 2019)
Model Compression Toolkit (MCT) is an open-source project for optimizing neural network models for deployment on efficient, constrained hardware. It provides researchers, developers, and engineers with advanced quantization and compression tools for deploying state-of-the-art neural networks.
Neural Network Quantization & Low-Bit Fixed Point Training For Hardware-Friendly Algorithm Design
Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019)
MUSCO: MUlti-Stage COmpression of neural networks
Using ideas from product quantization for state-of-the-art neural network compression.
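That repository's exact method is not reproduced here; the following is a toy sketch of the core product-quantization idea, learning a small k-means codebook per sub-vector of a weight matrix (all sizes and names are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def product_quantize(W, n_subvectors=4, n_centroids=16):
    """Toy product quantization: split each row of W into equal-size
    sub-vectors, learn a small codebook per sub-space with k-means,
    and store only codebook indices plus the codebooks."""
    rows, cols = W.shape
    assert cols % n_subvectors == 0
    d = cols // n_subvectors
    codebooks, codes = [], []
    for s in range(n_subvectors):
        sub = W[:, s * d:(s + 1) * d]                  # (rows, d) sub-vectors
        km = KMeans(n_clusters=n_centroids, n_init=10).fit(sub)
        codebooks.append(km.cluster_centers_)          # (n_centroids, d)
        codes.append(km.labels_)                       # (rows,)
    return codebooks, codes

def reconstruct(codebooks, codes):
    """Decode the quantized matrix back to dense form."""
    return np.hstack([cb[idx] for cb, idx in zip(codebooks, codes)])

W = np.random.randn(256, 64).astype(np.float32)
codebooks, codes = product_quantize(W)
W_hat = reconstruct(codebooks, codes)
print(np.mean((W - W_hat) ** 2))  # reconstruction error
```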
Group Fisher Pruning for Practical Network Compression (ICML 2021)
Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression (CVPR 2020)
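As a simple point of reference for the pruning entries above, here is a minimal baseline sketch of magnitude pruning using PyTorch's `torch.nn.utils.prune` (a generic L1 criterion, not the Group Fisher or Hinge methods):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(64, 128, kernel_size=3)

# Zero out the 50% of weights with the smallest L1 magnitude.
prune.l1_unstructured(conv, name="weight", amount=0.5)

# Make the pruning permanent (folds the mask into the weight tensor).
prune.remove(conv, "weight")
print(float((conv.weight == 0).float().mean()))  # ~0.5
```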
Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (AAAI 2019)
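The adversarial-sample variant above builds on the standard distillation objective; for reference, a minimal sketch of the classic softened-softmax distillation loss (Hinton et al., 2015), not that paper's specific method:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Standard knowledge-distillation loss: a weighted sum of the KL
    divergence between temperature-softened teacher/student
    distributions and the usual cross-entropy on hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                      # rescale gradients by T^2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```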
This is the official implementation of "DHP: Differentiable Meta Pruning via HyperNetworks".
Code for "Variational Depth Search in ResNets" (https://arxiv.org/abs/2002.02797)
PyTorch implementation of "Learning Filter Basis for Convolutional Neural Network Compression" (ICCV 2019)
AIMET GitHub Pages documentation
💍 Efficient tensor decomposition-based filter pruning
This repository contains applications of deep learning models such as DNNs, CNNs (1D and 2D), RNNs (LSTM and GRU), and variational autoencoders, written from scratch in TensorFlow.
🧠 Singular values-driven automated filter pruning
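A toy sketch of the intuition behind singular-value-driven criteria like the one above (the `energy_rank` function and threshold are illustrative assumptions, not that repository's API):

```python
import torch
import torch.nn as nn

def energy_rank(conv, energy=0.95):
    """Flatten the conv weights to (C_out, k), compute singular values,
    and count how many are needed to retain the given fraction of
    spectral energy -- a rough proxy for how many filters the layer
    effectively uses."""
    W = conv.weight.detach().reshape(conv.out_channels, -1)
    s = torch.linalg.svdvals(W)
    cum = torch.cumsum(s**2, dim=0) / torch.sum(s**2)
    return int((cum < energy).sum().item()) + 1

conv = nn.Conv2d(64, 128, kernel_size=3)
print(energy_rank(conv), "of", conv.out_channels,
      "filters carry 95% of the spectral energy")
```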
Overparameterization and overfitting are common concerns when designing and training deep neural networks. Network pruning is an effective strategy for reducing or limiting network complexity, but it often suffers from time- and compute-intensive procedures for identifying the most important connections and the best-performing hyperparameters. We s…