This repository contains a trading bot that leverages deep Q-learning to optimize trading strategies and maximize profits. The bot is designed to interact with financial markets, continuously learning from market data to improve its decision-making capabilities over time. The goal is to achieve a higher return on investment by adapting to market conditions using reinforcement learning.
- Deep Q-Learning Algorithm: The bot employs a deep Q-learning model to predict the best actions to take based on current market conditions (a rough sketch of the update follows this list).
- Continuous Learning: The model is trained and updated continuously to adapt to changing market dynamics.
- Profit Optimization: The bot's primary objective is to maximize profits by selecting optimal trading actions.
- Customizable Parameters: Various hyperparameters and configurations can be adjusted to tailor the bot's behaviour to specific trading scenarios.
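For orientation, the heart of a deep Q-learning trader is a network that maps a market-state vector to one Q-value per action, trained toward the bootstrapped target r + γ · max Q_target(s′, ·). The following is a minimal sketch of that update, assuming PyTorch; the `QNetwork` class, the state dimension, and the three actions (buy/hold/sell) are illustrative assumptions, not this repository's actual code:

```python
import torch
import torch.nn as nn

# Hypothetical Q-network: maps a market-state vector to one Q-value per action.
class QNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int = 3):  # e.g. buy / hold / sell
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One Q-learning step on a batch of (state, action, reward, next_state, done)."""
    states, actions, rewards, next_states, dones = batch
    # Q(s, a) for the actions actually taken.
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bootstrapped target: r + gamma * max_a' Q_target(s', a'), zeroed at episode end.
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * next_q * (1.0 - dones)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A separate, periodically synced target network (as above) is the standard DQN trick for keeping the regression target stable during training.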
- Clone the repository:

  ```bash
  git clone https://github.com/NeuroVortex/simple-deep-q-learning-trading-bot
  cd simple-deep-q-learning-trading-bot
  ```
- Install required packages: Make sure you have Python 3.11+ installed, then install the necessary dependencies with pip:

  ```bash
  pip install -r requirements.txt
  ```
- Use offline data: The bot requires offline candlestick data for the specific tickers of interest, stored in a data repository rather than fetched live.
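The repository does not pin down a loader format here; assuming one CSV file per ticker with standard OHLCV columns, reading the offline candles might look like this (the `data/` directory, file naming, and column names are assumptions):

```python
import pandas as pd

# Hypothetical loader: assumes one CSV per ticker under data/, with
# timestamp, open, high, low, close, volume columns.
def load_candles(ticker: str, data_dir: str = "data") -> pd.DataFrame:
    df = pd.read_csv(f"{data_dir}/{ticker}.csv", parse_dates=["timestamp"])
    return df.sort_values("timestamp").reset_index(drop=True)

candles = load_candles("BTCUSDT")
print(candles[["timestamp", "close"]].head())
```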
- Training the Model: Run the training script to train the model on historical market data:

  ```bash
  python train_app.py
  ```
- Running the Bot: Once the model is trained, start the bot to begin trading:

  ```bash
  python evaluate_app.py
  ```
- Evaluating Performance: After running the bot, evaluate its performance by reviewing the profit/loss logs generated in the `logs/` directory.
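The exact log schema depends on the bot's logger. Assuming a hypothetical `logs/trades.csv` with a per-trade `pnl` column, a quick performance summary could be computed like this:

```python
import pandas as pd

# Hypothetical evaluation: assumes logs/trades.csv with a 'pnl' column
# holding each trade's profit or loss.
trades = pd.read_csv("logs/trades.csv")

cumulative = trades["pnl"].cumsum()
print(f"Total P&L:    {trades['pnl'].sum():.2f}")
print(f"Win rate:     {(trades['pnl'] > 0).mean():.1%}")
print(f"Max drawdown: {(cumulative.cummax() - cumulative).max():.2f}")
```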
- Learning Rate: Adjust the learning rate in the `config.py` file to control how quickly the model adapts to new data.
- Exploration vs. Exploitation: Modify the epsilon-greedy strategy parameters to balance the exploration of new strategies against the exploitation of known profitable ones.
- Batch Size: Set the batch size for training the model. Larger batches can lead to more stable learning but require more memory.
- Replay Memory: Configure the replay memory size, which stores past experiences to be replayed during training for more robust learning.
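Taken together, these knobs typically interact as in the minimal sketch below; the parameter names follow common DQN conventions and are not necessarily the exact names used in `config.py`:

```python
import random
from collections import deque

# Illustrative hyperparameters, mirroring the options described above.
LEARNING_RATE = 1e-3     # optimizer step size
EPSILON_START = 1.0      # begin fully exploratory...
EPSILON_MIN = 0.05       # ...and decay toward mostly exploiting
EPSILON_DECAY = 0.995
BATCH_SIZE = 64          # larger = more stable learning, more memory
REPLAY_MEMORY_SIZE = 50_000

# Bounded buffer of past experiences; old ones are evicted automatically.
replay_memory = deque(maxlen=REPLAY_MEMORY_SIZE)

def select_action(q_values, epsilon):
    """Epsilon-greedy: random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```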
Here's a basic example of how the bot might be configured and run:
```python
from trading_bot import TradingBot

# Initialize the bot
bot = TradingBot(api_key='YOUR_API_KEY', secret_key='YOUR_SECRET_KEY')

# Train the model
bot.train(epochs=1000)

# Run the bot for live trading
bot.run(trade_duration='1d', risk_tolerance=0.05)
```

Contributions are welcome! If you have ideas for new features or improvements, please feel free to submit a pull request or open an issue.
This project is licensed under the MIT License - see the LICENSE file for details.
For any inquiries or questions, please contact Fatemeh Salboukh at mkarbasioun@gmail.com.