This is the original name of the algorithm that I created as a result of a successful collaboration
on the Quantopian forum thread "New Strategy - In & Out" in October 2020.
Unfortunately, the collaboration did not continue on the QuantConnect forum.
In any case, I am very uncomfortable with the strange names Peter Gunther uses in his algorithms, such as "Distilled Bear", as well as with his variable names and decision-making logic.
Unlike Peter Gunther's algorithms, this one uses three pairs as a source, two parameters, and the consensus of all three pairs for the exit signal.
I did not optimize the parameters, so you may be able to get better results.
I want to thank Jared Broad and his team for giving me the opportunity to recover one of
my favorite algorithms.
Happy New Year to all
Thunder Chicken
All,
Thank you very much for your insights. I am going to stay away from giving my opinion on this strategy, but wanted to thank Menno Dreischor, Vladimir, and everyone else for the spirited discussion. Great stuff.
Thank you!
Frank Schikarski
Hi there, I really enjoy following this thread - a big thanks to all contributors!
Building on the version "Intersection of ROC comparison using OUT_DAY approach by Vladimir v1.1" (diversified static lists), please find below a little tweak that lets volatility drive both the lookback period for the pair comparison of returns and the wait_days. With a VOLA_FCTR of 0.6, PSR is now at 99.324%.
The key question is: should we start working on a new formula for the PSR calculation soon ;) ? Have fun!
def daily_check(self):
    vola = self.history[[self.MKT]].pct_change().std() * np.sqrt(252) * VOLA_FCTR  # <-- tweak
    wait_days = int(vola * BASE_RET)
    period = int((1.0 - vola) * BASE_RET)
Guy Fleury
@Frank, good work. Replacing MSFT with TQQQ would raise CAGR to 50%. This accentuates volatility and increases the average win to 5.54%.
Guy Fleury
Replacing TLT with TMF (a 3x leveraged ETF) raised the CAGR to 59.6% with an average win of 6.62%. Note that doing so also increased the max drawdown to 35.6%. This should be expected, since you are injecting volatility into the whole trading process.
Guy Fleury
It should be noted that you would reach the same result if you reduced np.sqrt(252) to np.sqrt(91) and removed VOLA_FCTR.
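The equivalence is easy to check (using the VOLA_FCTR of 0.6 from Frank's post):
import numpy as np

print(np.sqrt(252) * 0.6)  # ~9.52
print(np.sqrt(91))         # ~9.54, so the scaled annualization factor is essentially sqrt(91)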
Also, trading earlier in the morning would add almost 3 million to the final result, raising the CAGR to 60.3% over the trading interval. The rationale: you are participating for slightly longer time intervals in those trades, and it would appear that it all adds up in your favor in a generally rising market. It did raise the average win to 6.66%.
Vladimir
Hi Mikko M,
I really miss the discussions in your great thread on the Quantopian forum 5 years ago.
Glad to meet you again on the QC forum.
Thanks for your contribution to this thread.
I've been working in the same direction since Oct 2020 but believe my QC version is not ready yet.
So I played around with your version that dynamically selects top momentum stocks.
First, I activated OnEndOfDay(self) with an added Target Leverage plot and ran the algorithm as is.
You may see that the Lean engine, with the original trading logic, cannot keep the target leverage.
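For reference, a minimal sketch of the kind of OnEndOfDay leverage plot meant here (self.LEV is an assumed target-leverage constant; this is not the exact code):
def OnEndOfDay(self):
    # actual account leverage vs. the target we are trying to hold
    account_leverage = self.Portfolio.TotalHoldingsValue / self.Portfolio.TotalPortfolioValue
    self.Plot("Leverage", "Account Leverage", account_leverage)
    self.Plot("Leverage", "Target Leverage", self.LEV)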
So I returned to my first Quantopian logic:
for sec, weight in self.wt.items():
    if weight == 0 and self.Portfolio[sec].IsLong:
        self.Liquidate(sec)

for sec, weight in self.wt.items():
    if weight != 0:
        self.SetHoldings(sec, weight)
Yes, fees increased and the CAGR decreased accordingly, but in this case I am sure I will not receive unexpected margin calls.
What do you think?
Frank Schikarski
Mikko M,
when I was playing with the dynamic selection of stocks, I ran into an error in the "delete non-tradable stocks" section and replaced one line of code. I was also playing with the number of stocks in coarse and fine. Can you please check if this is in line with your intention? Thanks a lot!
def trade(self):
    # Delete non-tradable stocks (iterate over a copy so removing items is safe)
    for sym in list(self.STOCKS):
        if self.Securities[sym].IsTradable == False:
            # del self.Securities[sym]  # <- error
            self.STOCKS.remove(sym)
Frank Schikarski
Vladimir
you mentioned leverage. In my live algo, I have a small piece of logic that reduces leverage smoothly depending on drawdown and the current portfolio trend. I work with a leverage range of 0.8 .. 2.0 so I can keep it overnight. In the backtest, it is able to limit drawdown, e.g. to a value of 22%. The current portfolio trend is needed so as not to slow down the recovery from a maximum drawdown, i.e. to set leverage back to a higher value in a portfolio uptrend even while drawdown is still bad. A rough sketch follows below.
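Roughly, the idea looks something like this (a simplified sketch, not my exact live code; MIN_LEV, MAX_LEV and self.portfolio_uptrend are placeholder names):
MIN_LEV, MAX_LEV = 0.8, 2.0

def target_leverage(self):
    equity = self.Portfolio.TotalPortfolioValue
    # track the running equity high to measure drawdown
    self.equity_high = max(getattr(self, "equity_high", equity), equity)
    drawdown = 1.0 - equity / self.equity_high
    # fade smoothly from MAX_LEV at the highs toward MIN_LEV near a 22% drawdown
    lev = MAX_LEV - (MAX_LEV - MIN_LEV) * min(drawdown / 0.22, 1.0)
    if getattr(self, "portfolio_uptrend", False):  # e.g. equity above its moving average
        lev = min(lev + 0.4, MAX_LEV)              # re-risk sooner while recovering
    return lev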
Also, you mentioned transaction fees. In backtesting, I have played with rounding weights to filter out the small, unimportant trades, especially if I rebalance very frequently. There is a trade-off between reducing transaction fees & slippage and the implications for algo performance, maybe around buckets of 2% size. In the live version, I am working with Alpaca, so there is no need to cut down on transaction fees.
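The bucketing itself is trivial; roughly like this (a sketch with a hypothetical round_weights helper, applied before the SetHoldings loop):
BUCKET = 0.02  # ~2% buckets

def round_weights(wt):
    # snap each weight to the nearest bucket so tiny rebalancing trades are skipped
    return {sec: round(w / BUCKET) * BUCKET for sec, w in wt.items()}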
Maybe helpful? I don't want to spam your thread so please indicate if you are interested in the code ;)
Vladimir
v1.2
Another iteration of the dynamic selector parameters for choosing top momentum stocks.
I would appreciate it if someone could make it a little faster.
Guy Fleury
@Vladimir, looking at the above strategy, it stops trading in 2014 after doing 10,001 trades. Is there a reason for this? Or is it a QC bug or limitation?
Jared Broad
Hey @Guy - That is a limitation of the free account. It's quite easy to use 10-20 MB for a larger backtest result, so we require paid plans to support the hosting and serving of those backtests.
Frank Schikarski
Vladimir, I highly appreciate that you are picking up the dynamic stock selection, as the results seem more likely to repeat in the future (should US tech stocks continue to rise). On performance, this is what I have come up with, without a change in the result.
(a) I got rid of one if clause; even though this looks like it could liquidate stocks more than once, I could not see an issue in the backtest, and probably not in live either, thanks to the Lean engine:
for sec, weight in self.wt.items():
    # if weight == 0 and self.Portfolio[sec].IsLong:
    if weight == 0:
        self.Liquidate(sec)
    # elif weight != 0:
    else:
        self.SetHoldings(sec, weight)
(b) I traded off clarity against microseconds by avoiding two lines of calculation:
def daily_check(self):
    vola = self.history[[self.MKT]].pct_change().std() * np.sqrt(252)
    # wait_days = int(vola * BASE_RET)
    # period = int((1.0 - vola) * BASE_RET)
    # r = self.history.pct_change(period).iloc[-1]
    r = self.history.pct_change(int((1.0 - vola) * BASE_RET)).iloc[-1]
    exit = ((r[self.SLV] < r[self.GLD]) and
            (r[self.XLI] < r[self.XLU]) and
            (r[self.DBB] < r[self.UUP]))
    if exit:
        self.bull = 0
        self.outday = self.count
    # if self.count >= self.outday + wait_days:
    if self.count >= self.outday + int(vola * BASE_RET):
        self.bull = 1
    self.count += 1
What did not work: (c) finding a way to schedule the coarse/fine selection monthly - there is a feature request in this GitHub Issue, but it is so far unsolved; and (d) scheduling plotting only on weekends, as this changed the result slightly. Also, (e) I did not find a way to get around the daily consolidator without losing edge. However, setting all resolutions to minute did not seem to impact speed.
Personally, in times of heavy backtesting I allow myself three (paid) backtest nodes, which I reduce again later on. I think it's well worth the trade-off between speed and money spent. Also, I carefully pick the start and end dates, typically with a switch similar to this, allowing for better comparability with an end date in the past:
if timing == 0:    # 12y full
    self.SetStartDate(2008, 1, 1)   # Set Start Date
    self.SetEndDate(2020, 12, 31)   # Set End Date
elif timing == 1:  # 9y excluding 2020
    self.SetStartDate(2011, 1, 1)   # Set Start Date
    self.SetEndDate(2019, 12, 31)   # Set End Date
elif timing == 2:  # 6y representative
    self.SetStartDate(2014, 1, 1)   # Set Start Date
    self.SetEndDate(2019, 12, 31)   # Set End Date
elif timing == 3:  # 11 months new normal
    self.SetStartDate(2019, 7, 1)   # Set Start Date
    self.SetEndDate(2020, 5, 30)    # Set End Date
elif timing == 4:  # recent days
    self.SetStartDate(2021, 1, 1)   # Set Start Date
    self.SetEndDate(2021, 12, 31)   # Set End Date
Hope this helps,
F
Vladimir
v1.3
with a dynamic selector for fundamentals and momentum (Leandro Maia setup),
efficient and fast (completed in 174.97 seconds).
Leandro Maia
Vladimir,
thank you for the medal. The results of this dynamic selector are indeed very attractive. I started to test it live; however, the code still has some sensitivities that I can't explain, and you might want to have a look. For example:
1. Deleting self.data[symbol] for the symbols that are excluded from the universe, as suggested by Derek Melchin in a previous version, reduces the results.
2. Registering the indicator inside the SymbolData class instead of doing it inside the OnSecuritiesChanged function improves the results.
3. Having the indicator warm-up call inside the OnSecuritiesChanged function, instead of warming up the indicator inside the SymbolData class as suggested by Derek Melchin, reduces the results.
I wouldn't expect any of these changes to alter the results, but they do. For context, the two variants look roughly like the sketch below.
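This is only a simplified illustration with assumed names (RateOfChange period, self.data dictionary), not the actual algorithm:
class SymbolData:
    def __init__(self, algorithm, symbol, period=63):
        self.Symbol = symbol
        self.Roc = RateOfChange(period)
        # Variant A: register and warm up the indicator here, inside SymbolData
        algorithm.RegisterIndicator(symbol, self.Roc, Resolution.Daily)
        history = algorithm.History(symbol, period, Resolution.Daily)
        if not history.empty:
            for time, row in history.loc[str(symbol)].iterrows():
                self.Roc.Update(time, row["close"])

def OnSecuritiesChanged(self, changes):
    for security in changes.AddedSecurities:
        if security.Symbol not in self.data:
            self.data[security.Symbol] = SymbolData(self, security.Symbol)
            # Variant B would instead make the RegisterIndicator / warm-up calls here
    for security in changes.RemovedSecurities:
        self.data.pop(security.Symbol, None)  # the deletion mentioned in point 1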
Frank Schikarski
Leandro Maia
can you check whether your sensitivities disappear with the following change? The logic is purely in the fine filter function.
I have set N_COARSE = 1000 to allow more variety. As I am building on Vladimir's v1.3 (but in a hack style), I have also set N_MOM = 1000 to switch off your SymbolData class and RoC calculation.
In case you give it a try, please share!
def FineFilter(self, fundamental):
    if self.UpdateFineFilter == 0:
        return Universe.Unchanged

    universe_valid = [x for x in fundamental if
                      float(x.EarningReports.BasicAverageShares.ThreeMonths) * x.Price > 1e9
                      and x.SecurityReference.IsPrimaryShare
                      and x.SecurityReference.SecurityType == "ST00000001"
                      and x.SecurityReference.IsDepositaryReceipt == 0
                      and x.CompanyReference.IsLimitedPartnership == 0
                      and x.OperationRatios.ROIC
                      and x.OperationRatios.CapExGrowth
                      and x.OperationRatios.FCFGrowth
                      and x.ValuationRatios.BookValueYield
                      and x.ValuationRatios.EVToEBITDA
                      and x.ValuationRatios.PricetoEBITDA
                      and x.ValuationRatios.PERatio]

    returns, volatility, sharpe_ratio = self.get_momentum(universe_valid)

    sortedByfactor0 = sorted(universe_valid, key=lambda x: returns[x.Symbol], reverse=False)                         # high return or sharpe or low volatility
    sortedByfactor1 = sorted(universe_valid, key=lambda x: x.OperationRatios.ROIC.OneYear, reverse=False)            # high ROIC
    sortedByfactor2 = sorted(universe_valid, key=lambda x: x.OperationRatios.CapExGrowth.ThreeYears, reverse=False)  # high growth
    sortedByfactor3 = sorted(universe_valid, key=lambda x: x.OperationRatios.FCFGrowth.ThreeYears, reverse=False)    # high growth
    sortedByfactor4 = sorted(universe_valid, key=lambda x: x.ValuationRatios.BookValueYield, reverse=False)          # high Book Value Yield
    sortedByfactor5 = sorted(universe_valid, key=lambda x: x.ValuationRatios.EVToEBITDA, reverse=True)               # low enterprise value to EBITDA
    sortedByfactor6 = sorted(universe_valid, key=lambda x: x.ValuationRatios.PricetoEBITDA, reverse=True)            # low share price to EBITDA
    sortedByfactor7 = sorted(universe_valid, key=lambda x: x.ValuationRatios.PERatio, reverse=True)                  # low share price to its per-share earnings

    stock_dict = {}
    for i, elem in enumerate(sortedByfactor0):
        rank0 = i
        rank1 = sortedByfactor1.index(elem)
        rank2 = sortedByfactor2.index(elem)
        rank3 = sortedByfactor3.index(elem)
        rank4 = sortedByfactor4.index(elem)
        rank5 = sortedByfactor5.index(elem)
        rank6 = sortedByfactor6.index(elem)
        rank7 = sortedByfactor7.index(elem)
        score = sum([rank0*1.0, rank1*1.0, rank2*0.0, rank3*0.3,
                     rank4*0.0, rank5*0.0, rank6*0.0, rank7*0.0])
        stock_dict[elem] = score

    self.sorted_stock_dict = sorted(stock_dict.items(), key=lambda x: x[1], reverse=True)
    sorted_symbol = [x[0] for x in self.sorted_stock_dict]
    top = [x for x in sorted_symbol[:self.N_FACTOR]]
    self.symbols = [i.Symbol for i in top]

    self.UpdateFineFilter = 0
    self.RebalanceCount = self.count
    return self.symbols

def get_momentum(self, universe):
    symbols = [i.Symbol for i in universe]
    hist_df = self.History(symbols, 63, Resolution.Daily)
    returns = {}
    volatility = {}
    sharpe = {}
    for s in symbols:
        ret = np.log(hist_df.loc[str(s)]['close'] / hist_df.loc[str(s)]['close'].shift(1))
        returns[s] = ret.mean() * 252
        volatility[s] = ret.std() * np.sqrt(252)
        sharpe[s] = (returns[s] - 0.03) / volatility[s]
    return returns, volatility, sharpe
Guy Fleury
@Vladimir, made only two modifications to your strategy at this time (v. 1.3).
The first was to increase the initial capital to 1 million. The reason is simple: it is an easy test to see whether the strategy is scalable upward, and it is. But I knew that before doing the test, so it is a trivial observation. It is part of my usual procedures when analyzing a new strategy. If a strategy does not scale up, I lose interest, fast. This increased the initial average bet size from 1k to 10k. A portfolio needs to grow and handle more over time; if it cannot, why play that game?
The second modification was simply replacing TLT with TMF (3x leveraged). Here the reason is also simple: the strategy switches to bonds at the first sign of trouble. The average loss per losing trade is only -1.52%, meaning that the strategy is very sensitive to losses and thereby relatively risk-averse. It is not waiting for a 5% to 15%+ decline before taking action. It will not suffer much in a downtrend and will recuperate faster. The strategy maintained its 69% win rate, which is impressive. We will still have a max drawdown; evidently, we cannot escape that.
Bonds are not that critical and do not vary by much over the short-term intervals they are being held, especially a bond ETF. It is more like going to the sidelines in periods of market turmoil. Therefore, introducing some volatility will not be that detrimental. We should expect volatility to increase a bit as should the max drawdown. The compensation is a higher overall return.
I consider the changes to be administrative decisions that represent choices anyone can make. Nonetheless, those two decisions might be worth some 300 million more in profits.
So, how do you play this strategy? You find 9 friends, each puts up 100k, and instead of expecting 9M, each gets around 38M. A scenario worth a little effort over the same trading interval, using one trading script.
Vladimir
Leandro Maia,
I tried to apply some leverage to the strategy.
I applied:
LEV = 1.5
self.SetBrokerageModel(BrokerageName.InteractiveBrokersBrokerage, AccountType.Margin)
Here is what the leverage looks like:
- For your execution style
- For my execution style
Do you have any idea why the code does not hold target leverage?
Vladimir
Guy Fleury,
I have an idea how to improve your latest code.
In Quantopian, you could use:
   BONDS = symbols('TMF') if data.can_trade(symbol('TMF')) else symbols('TLT')
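A rough Lean analogue of that line (just a sketch, assuming TMF and TLT were added with AddEquity and their symbols stored on self) would be:
# fall back to TLT whenever TMF is not tradable (e.g. before its listing date)
self.BONDS = self.TMF if self.Securities[self.TMF].IsTradable else self.TLT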
Leandro Maia
Frank,
thank you for the suggestion. The issue I see is the frequent history calls, applied to an unfiltered universe. I feel it would make the backtest much slower.
Leandro Maia
Vladimir,
I think I found a good explanation for the leverage behaviour in Ernest Chan's Algorithmic Trading, page 170:
"No matter how the optimal leverage is determined, the one central theme is that the leverage should be kept constant. This is necessary to optimize the growth rate whether or not we have the maximum drawdown constraint. Keeping a constant leverage may sound rather mundane, but can be counterintuitive when put into action. For example, if you have a long stock portfolio, and your P&L was positive in the last trading period, the constant leverage requirement forces you to buy more stocks for this period. However, if your P&L was negative in the last period, it forces you to sell stocks into the loss."
So I think Lean is behaving correctly, and Quantopian was the one with the strange behavior.
To keep leverage constant we'll need to rebalance every day, and to obtain a straight line, plot the leverage just after the rebalance.
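A quick numeric illustration of why this is counterintuitive (assumed numbers, not from the backtest):
# 2x leverage on 100k equity means 200k of stock. After a +10% move, holdings = 220k
# and equity = 120k, so leverage drifts down to ~1.83; restoring 2x means buying more.
equity, lev = 100_000.0, 2.0
holdings = equity * lev          # 200k initial exposure
pnl = holdings * 0.10            # +10% move: 20k profit
holdings += pnl                  # 220k
equity += pnl                    # 120k
print(holdings / equity)         # ~1.83, below the 2.0 target
print(equity * lev - holdings)   # ~20k more must be bought to restore constant leverage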
Vladimir