Popular Libraries

Stable Baselines

Introduction

This page explains how to build, train, deploy, and store Stable Baselines3 models.

Import Libraries

Import the gym and stable_baselines3 libraries.

from AlgorithmImports import *
import gym
from stable_baselines3 import DQN

Create Subscriptions

In the initialize method, subscribe to some data so you can train the stable_baselines3 model and make predictions.

# Subscribe to security data and store symbol for referencing in the algorithm.
self._symbol = self.add_equity("SPY", Resolution.DAILY).symbol

Build Models

In this example, create a gym environment to initialize the training environment, agent, and reward. Then, create a reinforcement learning model with a single-asset deep Q-network (DQN) learning algorithm using the following observations and rewards:

Data Category | Description
Observations  | The previous 5 days of open, high, low, close, and volume (OHLCV) data of the SPY
Rewards       | Maximum portfolio return

Follow these steps to create a method to build the model:

  1. Create a custom gym environment class. In this example, create a custom environment with the previous 5 OHLCV log-return data points as the observation and the highest portfolio value as the reward.

    # Example of a custom environment with the previous 5 OHLCV log-return data as observation and the highest portfolio value as reward.
    class TradingEnv(gym.Env):
        FLAT = 0
        LONG = 1
        SHORT = 2
    
        def __init__(self, ohlcv, ret):
            super(TradingEnv, self).__init__()
            
            self.ohlcv = ohlcv
            self.ret = ret
            self.trading_cost = 0.01
            self.reward = 1
            
            # The number of steps the training has taken; starts at 5 since we use the previous 5 data points for the observation.
            self.current_step = 5
            # The last action
            self.last_action = 0
    
            # Define action and observation space
            # Example when using discrete actions, we have 3: LONG, SHORT and FLAT.
            n_actions = 3
            self.action_space = gym.spaces.Discrete(n_actions)
            # The observation is the previous 5 data points of OHLCV data, shape (5 previous data points, 5 OHLCV values)
            self.observation_space = gym.spaces.Box(low=-2, high=2, shape=(5, 5), dtype=np.float32)
    
        def reset(self):
            # Reset the number of steps the training has taken
            self.current_step = 5
            # Reset the last action
            self.last_action = 0
            # Must return an np.array of the observation shape
            return self.ohlcv[self.current_step-5:self.current_step].astype(np.float32)
    
        def step(self, action):
            if action == self.LONG:
                self.reward *= 1 + self.ret[self.current_step] - (self.trading_cost if self.last_action != action else 0)
            elif action == self.SHORT:
                self.reward *= 1 + -1 * self.ret[self.current_step] - (self.trading_cost if self.last_action != action else 0)
            elif action == self.FLAT:
                self.reward *= 1 - (self.trading_cost if self.last_action != action else 0)
            else:
                raise ValueError("Received invalid action={} which is not part of the action space".format(action))
                
            self.last_action = action
            self.current_step += 1
    
            # Have we iterated over all the data points?
            done = (self.current_step == self.ret.shape[0]-1)
    
            # Return the next observation window, the cumulative reward, the done flag, and an empty info dict
            return self.ohlcv[self.current_step-5:self.current_step].astype(np.float32), self.reward, done, {}
  2. Get the processed training data.

    # Fetch observations and rewards to set up the training environment and train the model.
    obs, rewards = self.get_observations_and_rewards()

  3. Initialize the environment with the observations and rewards.

    # Initialize the trading environment with observations and rewards to provide the necessary data for simulating trading actions and feedback during model training.
    self.env = TradingEnv(obs, rewards)

  4. Call the DQN constructor with the learning policy and the gym environment.

    # Initialize the Deep Q-Network model with the environment to train and evaluate the policy using Q-learning for action-value estimation.
    self.model = DQN("MlpPolicy", self.env)
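
To sanity-check the environment before wiring it into the algorithm, you can exercise it with synthetic data. The following is a minimal standalone sketch, assuming the TradingEnv class above; the random OHLCV log-return arrays are hypothetical stand-ins for real data.

# Minimal sanity check of TradingEnv with hypothetical synthetic data (not part of the algorithm).
import numpy as np

ohlcv = np.random.normal(0, 0.01, size=(100, 5))  # 100 days of 5-column OHLCV log-returns
ret = np.random.normal(0, 0.01, size=100)         # 100 daily returns

env = TradingEnv(ohlcv, ret)
obs = env.reset()
assert obs.shape == (5, 5)  # Matches the observation_space defined above

done = False
while not done:
    action = env.action_space.sample()  # Random policy, just to step through the data
    obs, reward, done, info = env.step(action)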

Train Models

You can train the model at the beginning of your algorithm and periodically re-train it as the algorithm executes.

Warm Up Training Data

You need historical data to initially train the model at the start of your algorithm. To get the initial training data, in the initialize method, make a history request.

# Initialize training data with a rolling window of size 252*2 days.
training_length = 252*2
self.training_data = RollingWindow[TradeBar](training_length)
history = self.history[TradeBar](self._symbol, training_length, Resolution.DAILY)
for trade_bar in history:
    self.training_data.add(trade_bar)

Define a Training Method

To train the model, define a method that fits the model with the training data.

# Prepare feature and label data for training by processing rolling window data to create time-series sequences for model training.
def get_observations_and_rewards(self, n_step=5):
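    # The RollingWindow stores the most recent bar first, so reverse it into chronological order.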
    training_df = self.pandas_converter.get_data_frame[TradeBar](list(self.training_data)[::-1])
    daily_pct_change = training_df['close'].pct_change().dropna()

    obs = []
    rewards = []
    for i in range(len(daily_pct_change)-n_step):
        obs.append(training_df.iloc[i:i+n_step].values)
        rewards.append(float(daily_pct_change.iloc[i+n_step]))
    obs = np.array(obs)
    rewards = np.array(rewards)

    return obs, rewards

def my_training_method(self):
    obs, rewards = self.get_observations_and_rewards()
    self.env = TradingEnv(obs, rewards)
    self.model = DQN("MlpPolicy", self.env)
    self.model.learn(total_timesteps=500)

Set Training Schedule

To train the model at the beginning of your algorithm, in the initialize method, call the train method.

# Train the model initially to provide a baseline for prediction and decision-making.
self.train(self.my_training_method)

To periodically re-train the model as your algorithm executes, in the initialize method, call the train method as a Scheduled Event.

# Train the model every Sunday at 8:00 AM
self.train(self.date_rules.every(DayOfWeek.SUNDAY), self.time_rules.at(8, 0), self.my_training_method)

Update Training Data

To update the training data as the algorithm executes, in the on_data method, add the current TradeBar to the RollingWindow that holds the training data.

# Add the latest bar to training data to ensure the model is trained with the most recent market data.
def on_data(self, slice: Slice) -> None:
    if self._symbol in slice.bars:
        self.training_data.add(slice.bars[self._symbol])

Predict Labels

To predict the labels of new data, in the on_data method, get the most recent set of features and then call the predict method.

# Generate feature set and predict with the latest data for current market decisions.
features, _ = self.get_observations_and_rewards()
action, _ = self.model.predict(features[-1], deterministic=True)
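# Step the environment forward so its internal state stays in sync with the latest action.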
_, _, _, _ = self.env.step(action)

You can use the label prediction to place orders.

# Use label prediction to place orders based on forecasted market direction.
if action == 0:
    self.liquidate(self._symbol)
elif action == 1:
    self.set_holdings(self._symbol, 1)
elif action == 2:
    self.set_holdings(self._symbol, -1)
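
Equivalently, you can compare the action against the environment's named constants instead of raw integers; a small readability sketch using the TradingEnv class defined above:

# The same mapping, using the TradingEnv action constants instead of magic numbers.
if action == TradingEnv.FLAT:
    self.liquidate(self._symbol)
elif action == TradingEnv.LONG:
    self.set_holdings(self._symbol, 1)
elif action == TradingEnv.SHORT:
    self.set_holdings(self._symbol, -1)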

Save Models

Follow these steps to save stable_baselines models into the Object Store:

  1. Set the key name of the model to be stored in the Object Store.

    # Set the key to store the model in the Object Store for reuse across sessions.
    model_key = "model"

  2. Call the get_file_path method with the key.

    # Get the file path to correctly save and access the model in the Object Store.
    file_name = self.object_store.get_file_path(model_key)

    This method returns the file path where the model will be stored.

  3. Call the save method with the file path.

    # Serialize the model into a file to save its state for other runs.
    self.model.save(file_name)

Load Models

You can load and trade with pre-trained stable_baselines3 models that you saved in the Object Store. To load a model from the Object Store, in the initialize method, get the file path to the saved model and then call the load method of the model class.

# Load the model from the Object Store to use its saved state and update it with new data if needed.
def initialize(self) -> None:
    if self.object_store.contains_key(model_key):
        file_name = self.object_store.get_file_path(model_key)
        self.model = DQN.load(file_name)

The contains_key method returns a boolean that indicates whether the model_key is in the Object Store. If the Object Store doesn't contain the model_key, save the model using the model_key before you proceed.
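
Putting the pieces together, a common pattern is to load the saved model if it exists and otherwise train and save a new one. The following is a minimal sketch, assuming the model_key, my_training_method, and training data setup shown above:

# A load-or-train sketch for the initialize method, assuming model_key and my_training_method from above.
def initialize(self) -> None:
    model_key = "model"
    if self.object_store.contains_key(model_key):
        # Reuse the model saved in a previous run.
        file_name = self.object_store.get_file_path(model_key)
        self.model = DQN.load(file_name)
    else:
        # No saved model yet: train from scratch and save it for future runs.
        self.train(self.my_training_method)
        self.model.save(self.object_store.get_file_path(model_key))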
