Usage – Quick Start Guide

Nikkel Mollenhauer edited this page Jul 14, 2022 · 15 revisions

Installation

For setup information, please refer to our README, which contains installation instructions as well as first steps to take in the simulation framework.

Getting to know the simulation framework

Our simulation framework, which we refer to as the recommerce-framework, is made up of a number of different components. These are all laid out and introduced in the Framework introduction section of this wiki.

To start off, we suggest taking a look at the Marketplace section to get acquainted with the core of our simulation framework.

Afterwards, take a look at the Vendors section for information about the different types of vendors that interact with the market.

Below is old, might (re)move

Running the simulation

Right now you have two options when running our market simulation. You can:

Train a reinforcement learning agent

  • Choose a market scenario from sim_market.py (concrete class). If you feel unsure about what market scenario to choose, do not hesitate to take a look at our market type documentation.

  • Choose a fitting agent type from agent.py (concrete class)

  • Set the economy variable in training_scenario.py (Line 7) to an instance of your chosen market type. For example, if you chose "Circular Economy Rebuy Price - Duopoly Scenario", set

economy = sim.CircularEconomyRebuyPriceOneCompetitor()

  • Set RL_agent to an agent instance that fits the chosen market scenario. Don't worry about the arguments within the brackets, as these are the same for all agents. In our example it would be

RL_agent = agent.QLearningCERebuyAgent(n_observation=economy.observation_space.shape[0], n_actions=n_actions, optim=torch.optim.Adam)

  • Run training_scenario.py. It might take a while until the trained models are created (models are only saved once epsilon <= 0.1). The models will be stored as .dat files in a directory called trainedModels in the project folder.
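The wait before the first model appears follows from the exploration schedule: models are only written once the exploration rate epsilon has decayed to 0.1 or below. A minimal, self-contained sketch of a linear decay schedule (the constants and the exact decay rule here are assumptions for illustration, not the values hard-coded in training_scenario.py) shows when the first .dat file can appear:

```python
# Hypothetical linear epsilon-decay schedule -- the actual rule and
# constants used by training_scenario.py may differ.
EPSILON_START = 1.0   # fully random exploration at the beginning
EPSILON_FINAL = 0.1   # models are only saved from this point on
DECAY_EPISODES = 500  # episodes over which epsilon decays

def epsilon(episode: int) -> float:
    """Exploration rate for a given episode, floored at EPSILON_FINAL."""
    decayed = EPSILON_START - (EPSILON_START - EPSILON_FINAL) * episode / DECAY_EPISODES
    return max(EPSILON_FINAL, decayed)

# The first model file can only appear once epsilon has dropped to <= 0.1:
first_saving_episode = next(e for e in range(10_000) if epsilon(e) <= EPSILON_FINAL)
print(first_saving_episode)  # 500 under these assumed constants
```

Under these assumed constants, no model would be saved during the first 500 episodes, which is why a freshly started training run produces no files in trainedModels for a while.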

Observe an agent's performance in the environment

If you want to use Tensorboard, use exampleprinter.py to get real-time statistics. Running agent_monitoring.py instead gives you other views, such as histograms and graphs.

Using agent_monitoring.py

agent_monitoring.py is designed to observe the performance of an agent in a marketplace over several episodes. It then creates different diagrams, which are saved in the monitoring folder in the project directory, with a subfolder for each run. The default settings are a QLearningCEAgent in a CircularEconomy monopoly, observed over 500 episodes, with plots being generated every 50 episodes. If you want to change these settings, please use the setup_monitoring function in main.

Possible parameters:

  • Use agents to pass an array of agents. An agent is represented by a tuple containing, first, the class name of the agent (do not pass instances) and, second, a list of arguments that should be used to create the agent. For QLearning agents, this list should start with the path to the trained model file, which should be located in the monitoring folder. Note that there are already some pre-trained models for you to choose from if you do not want to train your own agents. Optionally, the list can contain a second argument, which will be interpreted as the name of the agent. For rule-based agents, please consult the class definitions to see which arguments can or should be passed. When passing multiple agents, please remember that they all need to be of the same economy type. Default is one QLearningCEAgent.

The following example will create two agents which play the same scenario independently: one QLearningCEAgent named my_favorite_agent and one FixedPriceCEAgent, which will always set the prices 4 and 6.

monitor.setup_monitoring(agents=[(agent.QLearningCEAgent, ['CircularEconomy_QLearningCEAgent.dat', 'my_favorite_agent']), (agent.FixedPriceCEAgent, [(4,6)])])

  • Use marketplace to define a marketplace that fits your agents. Please provide class names only. If the agents and the marketplace do not fit together, the program will tell you. Default is CircularEconomyMonopolyScenario.

  • Use enable_live_draw to set whether histograms are displayed during the monitoring session or only saved. Default is True.

  • Use episodes to change the number of episodes the monitoring session runs for. Default is 500.

  • Use plot_interval to determine after how many episodes diagrams will be drawn. Default is 50.

  • Use subfolder_name to give the subfolder for the current monitoring session a custom prefix. Default is plots_CurrentDateTime.
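Putting the parameters above together, a fully customised monitoring session could look like the following sketch. It reuses the monitor, agent, and model-file names from the example above; the exact signature is defined in agent_monitoring.py, so treat this as an illustration rather than a verified call:

```python
# Sketch of a customised setup_monitoring call combining all parameters.
# Class names and the model file follow the examples on this wiki page.
monitor.setup_monitoring(
    agents=[(agent.QLearningCEAgent,
             ['CircularEconomy_QLearningCEAgent.dat', 'my_favorite_agent'])],
    marketplace=sim.CircularEconomyMonopolyScenario,  # pass the class, not an instance
    enable_live_draw=False,  # only save histograms instead of displaying them live
    episodes=1000,           # instead of the default 500
    plot_interval=100,       # draw diagrams every 100 episodes
    subfolder_name='my_test_run',
)
```

Any parameter you leave out keeps its default, so you only need to pass the settings you actually want to change.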

Using exampleprinter.py

exampleprinter.py is designed to observe the performance of an agent over one episode. It uses Tensorboard to track the results and puts the resulting files in the runs folder in the project directory. You can start a Tensorboard instance with the following command:

tensorboard serve --logdir "<path_to_log_folder>"

Tensorboard is now running, usually at localhost:6006. After running the exampleprinter, you can open a browser and look at the different diagrams to see what happens within an episode. You can change the environment and the agent used at the bottom of the file.