This is a TensorFlow implementation of the paper Homogeneous Learning: Self-Attention Decentralized Deep Learning, IEEE Access 2022.
Homogeneous Learning (HL) is a decentralized neural network framework based on the Global Workspace Theory for fast learning of novel tasks by leveraging the knowledge of many expert models. Different from the attention mechanism, we leverage reinforcement learning (RL) to generate the meta agent's policy from observations of its inner state and the surrounding environment's states, such that the system can quickly adapt to a given task. This is a preliminary study of how the human brain can learn new things very fast based on many models of the world.
This is a quick guide to getting started with the source code.
You will need Python 3 and TensorFlow 2 to run the system.
To upgrade pip to the latest version, use:
python -m pip install --upgrade pip
To install the other module and library dependencies, use:
pip install -r requirements.txt
There are two components in HL: the decentralized learning system in "environment.py" and the DQN-based RL agent in "node.py". More detailed information can be found in Section 3.3 of the Homogeneous Learning paper.
"environment.py" includes the decentralized learning algorithm, which allows the system to evolve based on the decisions made by the RL agent.
"node.py" includes the reinforcement learning algorithm for learning an optimized communication policy based on observations of model parameters and the associated rewards. A minimal sketch of how the two components fit together is shown below.
The HL system can be run from the terminal by simply typing:
python main.py
Note that "main.py" will include a total of 120 episodes' learning of how to train a local foundation model to achieve a desired goal within the minimum steps, and at the same time with less communication cost, where each episode includes a whole training procedure of the decentralized learning algorithm.
If you want to make changes to the source, such as the total number of episodes or the training goal, refer to Sections 4.1, 4.2.1, and A.2 in the paper for more information on how these components work with each other.
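For example, such a change might only touch a couple of configuration values. The variable names below are hypothetical; check "main.py" for the actual ones:

EPISODES = 200        # hypothetical name: total RL episodes (the default described above is 120)
GOAL_ACCURACY = 0.90  # hypothetical name: target test accuracy defining the training goal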
If this repository is helpful for your research or you want to refer to the results reported in this work, you can cite it using the following BibTeX entry:
@article{sun2022homolearn,
  author  = {Yuwei Sun and Hideya Ochiai},
  title   = {Homogeneous Learning: Self-Attention Decentralized Deep Learning},
  journal = {IEEE Access},
  year    = {2022}
}
- The Consciousness Prior, Yoshua Bengio, arXiv preprint.
- Coordination among neural modules through a shared global workspace, Goyal et al., ICLR'22.
- GFlowNet Foundations, Bengio et al., arXiv preprint.
- Decentralized Deep Learning for Multi-Access Edge Computing: A Survey on Communication Efficiency and Trustworthiness, Yuwei Sun et al., IEEE Transactions on Artificial Intelligence.