Popular Libraries
Stable Baselines
Create Subscriptions
In the initialize method, subscribe to some data so you can train the stable_baselines model and make predictions.
# Add a security and save a reference to its Symbol.
self._symbol = self.add_equity("SPY", Resolution.DAILY).symbol
Build Models
In this example, create a gym environment to initialize the training environment, agent, and reward. Then, create a reinforcement learning model with a single-asset deep Q-network (DQN) learning algorithm using the following observations and rewards:

Data Category | Description |
---|---|
Observations | The previous 5 days of open, high, low, close, and volume (OHLCV) data of SPY |
Rewards | Maximum portfolio return |
Follow these steps to create a method to build the model:
- Create a custom gym environment class.
- Get the processed training data.
- Initialize the environment with the observations and rewards.
- Call the DQN constructor with the learning policy and the gym environment.
In this example, create a custom environment with the previous 5 days of OHLCV log-return data as the observation and the highest portfolio value as the reward.
# Define a custom environment with the previous 5 bars as the observation and
# portfolio growth as the reward.
class TradingEnv(gym.Env):
    FLAT = 0
    LONG = 1
    SHORT = 2

    def __init__(self, ohlcv, ret):
        super(TradingEnv, self).__init__()

        self.ohlcv = ohlcv
        self.ret = ret
        self.trading_cost = 0.01
        self.reward = 1
        # The number of steps the training has taken. It starts at 5 since we're using the
        # previous 5 bars for the observation.
        self.current_step = 5
        # The last action.
        self.last_action = 0

        # Define the action and observation spaces.
        # This example uses 3 discrete actions: FLAT, LONG, and SHORT.
        n_actions = 3
        self.action_space = gym.spaces.Discrete(n_actions)
        # The observation is the previous 5 data points, each with 5 values (OHLCV).
        self.observation_space = gym.spaces.Box(low=-2, high=2, shape=(5, 5), dtype=np.float64)

    def reset(self):
        # Reset the number of steps the training has taken.
        self.current_step = 5
        # Reset the last action.
        self.last_action = 0
        # This method must return an np.array of the observation.
        return self.ohlcv[self.current_step-5:self.current_step].astype(np.float32)

    def step(self, action):
        # Compound the reward by the period return, less the cost of changing position.
        if action == self.LONG:
            self.reward *= 1 + self.ret[self.current_step] - (self.trading_cost if self.last_action != action else 0)
        elif action == self.SHORT:
            self.reward *= 1 - self.ret[self.current_step] - (self.trading_cost if self.last_action != action else 0)
        elif action == self.FLAT:
            self.reward *= 1 - (self.trading_cost if self.last_action != action else 0)
        else:
            raise ValueError("Received invalid action={} which is not part of the action space".format(action))
        self.last_action = action
        self.current_step += 1

        # Have we iterated over all the data points?
        done = (self.current_step == self.ret.shape[0]-1)

        # Return the observation, reward, done flag, and info dictionary.
        return self.ohlcv[self.current_step-5:self.current_step].astype(np.float32), self.reward, done, {}
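Before you wire the environment into the algorithm, you can sanity-check it with a short random rollout. The following is a minimal sketch, not part of the original example; it assumes obs and rewards are the arrays returned by the get_observations_and_rewards method defined later in this page.

# Hypothetical sanity check: step through the environment with random actions.
env = TradingEnv(obs, rewards)
observation = env.reset()
done = False
while not done:
    # Sample a random action (FLAT, LONG, or SHORT) from the action space.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
print(f"Final cumulative reward: {reward}")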
# Fetch observations and rewards to set up the training environment and train the model.
obs, rewards = self.get_observations_and_rewards()
# Initialize the trading environment with the observations and rewards to provide the
# necessary data for simulating trading actions and feedback during training.
self.env = TradingEnv(obs, rewards)
# Initialize the Deep Q-Network model with the environment to train and evaluate the policy
# using Q-learning for action-value estimation.
self.model = DQN("MlpPolicy", self.env)
Train Models
You can train the model at the beginning of your algorithm and periodically re-train it as the algorithm executes.
Warm Up Training Data
You need historical data to initially train the model at the start of your algorithm. To get the initial training data, in the initialize method, make a history request.
# Fill a RollingWindow with 2 years of daily TradeBars.
training_length = 252*2
self.training_data = RollingWindow[TradeBar](training_length)
history = self.history[TradeBar](self._symbol, training_length, Resolution.DAILY)
for trade_bar in history:
    self.training_data.add(trade_bar)
Define a Training Method
To train the model, define a method that fits the model with the training data.
# Prepare feature and label data for training by processing the RollingWindow data into a time series.
def get_observations_and_rewards(self, n_step=5):
    training_df = self.pandas_converter.get_data_frame[TradeBar](list(self.training_data)[::-1])
    daily_pct_change = training_df['close'].pct_change().dropna()

    obs = []
    rewards = []
    for i in range(len(daily_pct_change)-n_step):
        obs.append(training_df.iloc[i:i+n_step].values)
        rewards.append(float(daily_pct_change.iloc[i+n_step]))
    obs = np.array(obs)
    rewards = np.array(rewards)

    return obs, rewards

def my_training_method(self):
    obs, rewards = self.get_observations_and_rewards()
    self.env = TradingEnv(obs, rewards)
    self.model = DQN("MlpPolicy", self.env)
    self.model.learn(total_timesteps=500)
Set Training Schedule
To train the model at the beginning of your algorithm, in the initialize method, call the train method.
# Train the model initially to provide a baseline for prediction and decision-making.
self.train(self.my_training_method)
To periodically re-train the model as your algorithm executes, in the initialize method, call the train method as a Scheduled Event.
# Train the model every Sunday at 8:00 AM.
self.train(self.date_rules.every(DayOfWeek.SUNDAY), self.time_rules.at(8, 0), self.my_training_method)
Update Training Data
To update the training data as the algorithm executes, in the on_data method, add the current TradeBar to the RollingWindow that holds the training data.
# Add the latest bar to the training data to ensure the model is trained with the most recent market data.
def on_data(self, slice: Slice) -> None:
    if self._symbol in slice.bars:
        self.training_data.add(slice.bars[self._symbol])
Predict Labels
To predict the labels of new data, in the on_data method, get the most recent set of features and then call the predict method.
# Get the current feature set and generate an action.
features, _ = self.get_observations_and_rewards()
action, _ = self.model.predict(features[-1], deterministic=True)
# Step the environment forward so its internal state tracks the latest action.
_, _, _, _ = self.env.step(action)
You can use the label prediction to place orders.
# Place orders based on the action: 0 = FLAT, 1 = LONG, 2 = SHORT.
if action == 0:
    self.liquidate(self._symbol)
elif action == 1:
    self.set_holdings(self._symbol, 1)
elif action == 2:
    self.set_holdings(self._symbol, -1)
Save Models
Follow these steps to save stable_baselines
models into the Object Store:
- Set the key name of the model to be stored in the Object Store.
- Call the get_file_path method with the key.
- Call the save method with the file path.
# Set the key to store the model in the Object Store so you can use it later.
model_key = "model"
# Get the file path to correctly save and access the model in the Object Store.
file_name = self.object_store.get_file_path(model_key)
This method returns the file path where the model will be stored.
# Serialize the model and save it to the file.
self.model.save(file_name)
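A common place to call save is at the end of the algorithm so the trained state persists for later deployments. The following is a minimal sketch under that assumption; it reuses the "model" key defined above.

# Save the trained model when the algorithm finishes so its state persists in the Object Store.
def on_end_of_algorithm(self) -> None:
    file_name = self.object_store.get_file_path("model")
    self.model.save(file_name)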
Load Models
You can load and trade with pre-trained stable_baselines models that you saved in the Object Store. To load a stable_baselines model from the Object Store, in the initialize method, get the file path to the saved model and then call the load method of the model class.
# Load the model from the Object Store to use its saved state and update it with new data if needed.
def initialize(self) -> None:
    if self.object_store.contains_key(model_key):
        file_name = self.object_store.get_file_path(model_key)
        self.model = DQN.load(file_name)
The contains_key method returns a boolean that represents if the model_key is in the Object Store. If the Object Store doesn't contain the model_key, save the model using the model_key before you proceed.
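For example, one way to handle both cases in the initialize method is to load the saved model when the key exists and otherwise train and save a new one. This is a sketch that only combines the calls shown above; it assumes the subscription, training data, and my_training_method from the earlier sections are already set up.

# Load an existing model if one is saved; otherwise, train a new model and save it.
model_key = "model"
if self.object_store.contains_key(model_key):
    file_name = self.object_store.get_file_path(model_key)
    self.model = DQN.load(file_name)
else:
    self.train(self.my_training_method)
    self.model.save(self.object_store.get_file_path(model_key))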