This thread is meant to continue the development of the In & Out strategy started on Quantopian. The first challenge for us will probably be to translate our ideas to QC code.
I'll start by attaching the version Bob Bob kindly translated at Vladimir's request.
Vladimir:
About your KeyError, did you also initialize UUP like this?
self.UUP = self.AddEquity('UUP', res).Symbol
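For reference, a minimal sketch of the kind of registration I mean in Initialize (assuming daily resolution and the usual GLD/SLV tickers); every symbol the signal logic later touches has to be added this way, otherwise the price/history lookups raise a KeyError:

def Initialize(self):
    res = Resolution.Daily  # assumed resolution
    # register every ticker used by the signal logic
    self.UUP = self.AddEquity('UUP', res).Symbol    # dollar index ETF
    self.GOLD = self.AddEquity('GLD', res).Symbol   # gold
    self.SLVA = self.AddEquity('SLV', res).Symbol   # silver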
S.T.E
Hi Guy - Would you mind uploading your version so we can see what you have done differently from the shared algo?
Thanks!
Nathan Swenson
Menno, I read your cautioning posts a couple of times, and you seem to be saying that the out-of-sample performs even better than the in-sample. From that you conclude that one should NOT use this strategy. I must not be understanding you correctly. The small out-of-sample period I tested was also better; I take that as a good thing. Regarding the low trade count, if you take the original version, which also performs well, it has around 5x the number of trades.
Edit: I see. You are saying that even with meaningless random signals, parameter fitting can produce good results anyway. Interesting. I wonder how randomized the signal data really is. Sounds suspicious.
Nathan Swenson
I should add, this strategy's performance is highly affected by the Out holdings. By switching to IEF in place of the TLT/IEF combo, you take out a big part of what makes this system work. IEF has a much smaller impact than TLT.
Matthew Wormington
Nathan Swenson Agreed, per earlier comments, and the Out holdings are probably where the backtests will fall short with bond yields where they are. What other alternative Out assets are available that have not been so impacted by the Fed? Also, what do folks think about using the signals just to avoid drawdowns from sharp drops, perhaps together with something like a 200-day SMA for prolonged downturns? That is, use In-and-Out as an early warning indicator and then continue to stay out if the In holdings are below the 200-day SMA, as in a recession. In accounts such as a 401(k) there is a limited asset selection, so being able to time the market even with just SPY-like assets might do better than buy-and-hold over the long term, especially with 60/40 not working so well. Not a 1000% gain use case, I know, but I wonder what folks think for accounts with limited asset choice.
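Roughly, the combined rule I have in mind would look like this (a standalone sketch with made-up prices; in_and_out_says_out stands in for the existing signal):

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
spy = pd.Series(300 + rng.normal(0, 2, 300).cumsum())  # stand-in for daily SPY closes
in_and_out_says_out = False                            # early warning from the In & Out signal

below_sma200 = spy.iloc[-1] < spy.rolling(200).mean().iloc[-1]  # prolonged-downturn filter
stay_out = in_and_out_says_out or below_sma200                  # out on either condition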
Nathan Swenson
Menno, thank you for your feedback on this. You obviously have a great deal of experience in this field. I guess we will find out as we get live data rolling in. I don't have a lot of experience with long-term, low-trade-count algos. I generally create intraday algos built around price action and market internals such as ADV/DECL, VIX, and Cumulative Delta. The good thing about intraday is that you quickly know whether the algo is working, due to the high trade count in a short period of live trading. With these one-trade-a-month types, it takes a long time to validate. You seem quite certain of your findings, which makes me question things again.
Goldie Yalamanchi
Matthew Wormington you could always just roll over your 401k, or a portion of it, into any online broker as a rollover IRA; then you have total control -- except you can't short.
Menno Dreischor Are you saying that running backtests with too many parameters, over too wide a range, creates a false overperformance that doesn't hold up when extended to out-of-sample data? Which out-of-sample time frame did you try? I tried other years or sub-years as a subset, and they had mixed performance, like the period from 2015-2017.
Interestingly, there was an earlier version of this algorithm that used just an SMA 200 cross on the S&P 500 to determine the in or out state, and it performed OK. I wonder if there is a purer way to combine something like an SMA cross and maybe a VIX % change to determine an OUT state that isn't overly curve-fit.
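Something like this is the kind of simple OUT condition I mean (a standalone sketch with made-up data; the 20% VIX jump is an arbitrary placeholder threshold):

import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
spx = pd.Series(3000 + rng.normal(0, 10, 250).cumsum())  # stand-in for S&P 500 closes
vix = pd.Series(20 + np.abs(rng.normal(0, 3, 250)))      # stand-in for VIX closes

below_sma = spx.iloc[-1] < spx.rolling(200).mean().iloc[-1]  # SMA 200 cross
vix_spike = vix.pct_change().iloc[-1] > 0.20                 # VIX up >20% day-over-day
go_out = below_sma or vix_spike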
Matthew Wormington
Goldie Yalamanchi thanks for the suggestion, but I was just using a 401k as an example where you don't have freedom over asset selection ;-) I'd certainly not risk my retirement any more than I would with a trip to Vegas and a spin of a roulette wheel. I'm trying to understand how things are impacted by the timing of the in/out signal rather than the choice of assets, so SPY to cash seems like the simplest case. The original versions of the algo seemed like a neat idea: come up with some possible leading economic indicators that might get you out of some quick drawdowns. Doing that alone, e.g., avoiding the fast 2009 drop in Menno's post, and nothing else, might be more achievable and realistic than some of the huge gains that have been shown as the discussion has progressed.
Jimothy
Hi all, I was wondering if we could prevent big drops of 20% or more, like in March of 2020, by using a stop loss. I tried doing something like this:
self.stop_price = self.Securities[self.STKS].Price * 0.90
self.price = self.Securities[self.STKS].Price
if self.price <= self.stop_price:
    # note: SetHoldings expects a portfolio weight, not a share quantity
    self.SetHoldings(self.STKS, -self.Portfolio[self.STKS].Quantity)
But I can't seem to get it to work. Is there a better way to do this in QC?
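For reference, here is a minimal self-contained sketch of the pattern I'm aiming for (the SPY ticker and the 10% level are just placeholders), anchoring the stop to the entry price and exiting with Liquidate:

from AlgorithmImports import *  # standard QC cloud import

class StopLossExample(QCAlgorithm):

    def Initialize(self):
        self.SetStartDate(2020, 1, 1)
        self.SetCash(100000)
        self.STKS = self.AddEquity("SPY", Resolution.Daily).Symbol
        self.stop_price = None  # set when a position is opened

    def OnData(self, data):
        if not self.Portfolio[self.STKS].Invested:
            self.SetHoldings(self.STKS, 1.0)
            # anchor the stop 10% below the entry price, not the current price
            self.stop_price = self.Securities[self.STKS].Price * 0.90
        elif self.stop_price and self.Securities[self.STKS].Price <= self.stop_price:
            self.Liquidate(self.STKS)  # exit the entire position

A broker-side alternative would be placing a StopMarketOrder at entry instead of checking the price in OnData.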
Joshua Tsai
One thing I'd like to note is that in real life, you can't always count on correlations between assets continuing. Thus, if you can find assets that are highly correlated and logically consistent (their correlation makes sense), you can argue that even for a strategy built on the possibly false assumption that our signals precede the SPY, the basic idea (that drops in a lot of corresponding assets point to possible bear markets) could work. After all, we can be reasonably certain that those correlations will continue working in the future. Of course, it does raise the question of why you're trying to use the other ETFs as predictors...
Guy Fleury
Had to test the other side of scalability. This time I ran the strategy with $10 million as the initial stake. No other modification.
Total net profit came in at 120,219.296%, which would tend to confirm the strategy's upside scalability. This translated into a net profit of $1,243,714,357.98 over the 13.24 years.
The Sharpe ratio remained the same at 1.855, as did the beta at 0.369. The win rate also came out the same at 65%. All of this indicates the trade mechanics and portfolio metrics were about the same as in the $10k case, which would suggest there was not much incremental risk, since those numbers stayed the same. The main change was in the trade execution, in the bet-sizing department (larger bets). CAGR stood at 73.366%, very close to the $10k scenario's 73.358%.
This does demonstrate that the strategy could scale from $10k to $10M (a factor of 1,000) and still perform as expected. Accepting its scalability, I even tried an initial capital selected almost at random ($237,815) and got back 120,215.286% net profit, again indicating the strategy's scalability. Its CAGR was 73.365%, also very close to both the $10k and the $10M scenarios.
Note that the initial stake is not a program decision. That decision is up to the strategy designer, and therefore we are the ones making it.
I will proceed to the next level of testing in order to find the limits and boundaries of this program, and then scale back to whatever performance level I find acceptable. I know it will involve some compromise in the risk/reward space; we all have to make those choices. It does take time to explore a strategy's potential, pitfalls, and shortcomings, and then code improvements.
@S.T.E, sorry, but understandably, after having transformed a basic trading strategy above a certain performance level I do not put out code. However, I do provide outcome examples of what could be done and some data on what makes it reasonable and possible.
Joshua Tsai
Guy Fleury Would you give the max drawdown? The current Sharpe stands at about 1.85, so it seems like simply increasing leverage and making some minor modifications would achieve similar results?
Nathan Swenson
Perhaps we should have a thread for tracking live results? Anyway, the last trade was on 10/6, into bonds. I am in the aggressive setup, so the TMF entry on 10/6 was 37.06. I started mid-cycle and entered at 34.10. In either case, the trade is looking good. If you are using the default conservative setup, the entries on 10/6 were TLT: $159.15 and IEF: $120.97. I'm still uncertain whether my waitdays is really the same as in a long-term run, or whether it is tied to my mid-cycle start. We shall see if the move back to IN occurs soon.
Aalap Sharma
+1 to that Nathan.
My live algo entered the TMF position today @37.46 on both Alpaca and QuantConnect paper trading.
Guy Fleury
@Joshua, max drawdown was about 0.54. But that might not matter so much at the beginning of a testing process. The strategy starts during the 2008 financial crisis, and some drawdowns were unavoidable if not unpredictable. The phase to reduce max drawdown comes later in my testing process.
You are pushing on a trading strategy and forcing it to seek volatility. Not only seeking it but amplifying it through leverage. You first want to see how far it can go, with and without constraints, and then refine your objectives and protective measures.
First show that the strategy has something, then give it the restrictions you want, and see if there is anything left. At this stage, I find the strategy rather promising. That might change going forward, after more tests and a better understanding of what the trading strategy really does. Afterward, its benefits, if there are any, will be compared to other strategies anyway.
The average win per trade was 16.53%, compared to an average loss of -5.84%, with a 65% win rate. That is the reason the strategy outperformed. Nonetheless, the average portfolio beta (0.369) showed less volatility than a market surrogate like SPY, which has a beta of 1.0.
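As a rough expectancy check (assuming these are simple per-trade averages): 0.65 × 16.53% + 0.35 × (−5.84%) ≈ 10.74% − 2.04% ≈ +8.7% expected per trade, which is the edge that compounded.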
At the beginning of the battery of tests I tend to apply to a trading strategy, I treat drawdown as something that is "correctable" later on by applying better protective measures. I often borrow them from other programs that have shown better trend-following procedures. I am still a novice at using QC, so it will take time to adapt.
Obviously, we design trading strategies to make money with the lowest acceptable risk. It does not matter so much how we do it, as long as it is done honestly and without going bankrupt. We might not know the future, but one thing we do want is to not lose our trading capital over the long term. That is the reason we spread out our bets over time and across tradable assets. We also test our trading methods on historical data to see whether our strategies would at least have survived over extended periods of time, and how well.
@Menno, I do not believe in simple trading systems. Millions of us have tried that for decades and decades, and look at the results... If those simple systems were that good, we would never even consider trying to go beyond them; we all knew that already. We are analyzing one of the most complex and chaotic systems out there, with millions and millions of participants, and if we want something simple and permanent out of it, could I say: think again. We have not even scratched the surface of the possibilities, even after over 200 years of trying. Nonetheless, somehow, somewhere, someone will find something interestingly intricate and push forward.
Tristan F
Menno Dreischor good points. I got comfortable with this strategy because it appears robust even with significant reductions in degrees of freedom.
For example, in the attached version, all logic related to varying the number of days out of the market is removed. This drops the following parameters: the maximum number of days out of the market, the time decay, and the 3 conditions associated with extending that number of days.
What we're left with is a strategy with the following thesis: if recent 3-month returns in certain indicator assets (metals, natural resources...), or relationships between related assets (silver less gold...), hit extremes (below the 1st percentile over the last year), derisk for 15 days; otherwise risk on. The following levers still remain:
1. the lookback for the indicator returns (~3 months)
2. the extremeness threshold (1st percentile)
3. the window over which that percentile is measured (1 year)
4. the number of days to derisk (15)
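In code, the core of that signal is roughly the following (a standalone sketch with made-up data, just to show the mechanics of levers 1-4):

import numpy as np
import pandas as pd

# returns_sample: ~3-month returns of each indicator series,
# one row per trading day over the past year (levers 1 and 3)
rng = np.random.default_rng(0)
returns_sample = pd.DataFrame(rng.normal(0.0, 0.05, size=(252, 3)),
                              columns=['metals', 'natres', 'G_S'])

pctl_b = np.nanpercentile(returns_sample, 1, axis=0)  # lever 2: 1st percentile
extreme_b = returns_sample.iloc[-1] < pctl_b          # any indicator at an extreme low today?
days_out = 15 if extreme_b.any() else 0               # lever 4: derisk window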
With this simplification, we have a strategy with a Sharpe ratio of 1.85 since 2008. This is almost as good as the 1.9 Sharpe of the original strategy. I've tested a few variations in the numerical parameters above (#1-4), and the results still hold up well. Unfortunately, QC doesn't have a way to test parameter ranges in backtests, so there's no way to test robustness more systematically. Instead of parameter optimization, do you run any tests for robustness?
Leif Trulsson
Menno Dreischor by showing that the strategy/algorithm gives good results even with random data, you have actually demonstrated the strength of the strategy/algorithm. The strength of the algorithm per se does not lie in the data itself, but in how the data is handled, and in particular this part:
hist_shift = hist.apply(lambda x: (x.shift(65) + x.shift(64) + x.shift(63) + x.shift(62) + x.shift(61) +
                                   x.shift(60) + x.shift(59) + x.shift(58) + x.shift(57) + x.shift(56) +
                                   x.shift(55)) / 11)
returns_sample = (hist / hist_shift - 1)

# Reverse code USDX: sort largest changes to bottom
returns_sample[self.USDX] = returns_sample[self.USDX] * (-1)

# For pairs, take returns differential, reverse coded
returns_sample['G_S'] = -(returns_sample[self.GOLD] - returns_sample[self.SLVA])
returns_sample['U_I'] = -(returns_sample[self.UTIL] - returns_sample[self.INDU])
returns_sample['C_A'] = -(returns_sample[self.SHCU] - returns_sample[self.RICU])
self.pairlist = ['G_S', 'U_I', 'C_A']

# Extreme observations; statist. significance = 1%
pctl_b = np.nanpercentile(returns_sample, 1, axis=0)
extreme_b = returns_sample.iloc[-1] < pctl_b

# Determine waitdays empirically via safe haven excess returns, 50% decay
self.WDadjvar = int(
    max(0.50 * self.WDadjvar,
        self.INI_WAIT_DAYS * max(1,
            np.where((returns_sample[self.GOLD].iloc[-1] > 0) &
                     (returns_sample[self.SLVA].iloc[-1] < 0) &
                     (returns_sample[self.SLVA].iloc[-2] > 0), self.INI_WAIT_DAYS, 1),
            np.where((returns_sample[self.UTIL].iloc[-1] > 0) &
                     (returns_sample[self.INDU].iloc[-1] < 0) &
                     (returns_sample[self.INDU].iloc[-2] > 0), self.INI_WAIT_DAYS, 1),
            np.where((returns_sample[self.SHCU].iloc[-1] > 0) &
                     (returns_sample[self.RICU].iloc[-1] < 0) &
                     (returns_sample[self.RICU].iloc[-2] > 0), self.INI_WAIT_DAYS, 1)))
)
adjwaitdays = min(60, self.WDadjvar)
in rebalance_when_out_of_the_market.
"Absence of evidence is not evidence of absence"
Vladimir
Menno Dreischor
John von Neumann famously said:
"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk."
In an algorithm, any word is a parameter.
Joshua Tsai
While returns drop significantly if you change the parameters, the PSR remains quite high even when I shift them, so I'd say the strategy is rather robust (if slightly overfit). Thus, returns are likely to be lower in the future but should still beat the S&P 500.
Vladimir
Leif Trulsson,
Can you explain why, in rebalance_when_out_of_the_market, adjwaitdays can reach a value of 225
and is then limited to 60 on the next line?
Vladimir
Leif Trulsson,
Why is adjwaitdays today calculated based on the ratio of price to an 11-day moving average from 55 days ago?
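If I read the shift chain correctly, that hist_shift line is equivalent to:

# the average of x.shift(55) through x.shift(65), i.e. the 11-day
# moving average ending 55 trading days ago
hist_shift = hist.rolling(11).mean().shift(55)

so today's "return" is measured against a price level from roughly three months back.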