# AnalogVNN

Badges: arXiv · APL Machine Learning · Open In Colab · PyPI version · Documentation Status · Python · License: MPL 2.0

**Documentation:** https://analogvnn.readthedocs.io/

**Installation:**

```bash
# Current stable release for CPU and GPU
pip install analogvnn

# For additional optional features
pip install analogvnn[full]
```

**Usage:** see the Open In Colab notebook.

## Abstract

*Figure: 3-layered linear photonic analog neural network.*

AnalogVNN is a simulation framework built on PyTorch that simulates the effects of optoelectronic noise, limited precision, and signal normalization present in photonic neural network accelerators. We use this framework to train and optimize linear and convolutional neural networks with up to nine layers and ~1.7 million parameters, gaining insights into how normalization, activation functions, reduced precision, and noise influence accuracy in analog photonic neural networks. Because it follows the same layer-structure design as PyTorch, the AnalogVNN framework lets users convert most digital neural network models to their analog counterparts with just a few lines of code, taking full advantage of the open-source optimization, deep-learning, and GPU-acceleration libraries available through PyTorch.
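To illustrate the kinds of analog effects the framework simulates (this sketch does not use the AnalogVNN API; `reduce_precision` and `add_gaussian_noise` are hypothetical helper names for demonstration only), reduced precision and optoelectronic noise on a signal in [-1, 1] can be modeled roughly as:

```python
import random

def reduce_precision(x, bits=4):
    """Quantize a signal in [-1, 1] onto 2**bits - 1 evenly spaced levels,
    mimicking the limited precision of an analog accelerator."""
    levels = 2 ** bits - 1
    return round(x * levels) / levels

def add_gaussian_noise(x, std=0.02):
    """Add zero-mean Gaussian noise, mimicking optoelectronic noise."""
    return x + random.gauss(0.0, std)

# A digital activation of 0.8 passed through both analog effects:
signal = add_gaussian_noise(reduce_precision(0.8, bits=4))
```

In AnalogVNN itself, effects like these are expressed as PyTorch-style layers inserted into the model, which is what makes converting a digital network to its analog counterpart a matter of a few lines of code.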

**AnalogVNN Paper:** https://doi.org/10.1063/5.0134156

## Citing AnalogVNN

We would appreciate it if you cited the following paper in any publication for which you used AnalogVNN:

```bibtex
@article{shah2023analogvnn,
  title={AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks},
  author={Shah, Vivswan and Youngblood, Nathan},
  journal={APL Machine Learning},
  volume={1},
  number={2},
  year={2023},
  publisher={AIP Publishing}
}
```

Or in textual form:

Vivswan Shah and Nathan Youngblood. "AnalogVNN: A fully modular framework for modeling
and optimizing photonic neural networks." APL Machine Learning 1.2 (2023).
DOI: 10.1063/5.0134156