This thread is meant to continue the development of the In & Out strategy started on Quantopian. The first challenge for us will probably be to translate our ideas to QC code.
I'll start by attaching the version that Bob Bob kindly translated at Vladimir's request.
Vladimir:
About your key error, did you also initialize UUP like this?
self.UUP = self.AddEquity('UUP', res).Symbol
Miguel Palanca
Hi Kamal, do you have a backtest or code for the above?
Kamal G
Thanks for replying Miguel. I know this algorithm is from another thread but the problem happens sporadically on algorithms in this thread too.
On one backtest the error was the following (I can't attach a backtest with an error).
BacktestingRealTimeHandler.Run(): There was an error in a scheduled event EveryDay: SPY: 100 min after MarketOpen. The error was ValueError : cannot convert float NaN to integer
And then on the following backtest with no changes made, the backtest ran successfully.
Peter Guenther
Great analysis, Strongs, thanks for sharing it with us! I was musing about whether one should follow 50:50 the Distilled Bear or ROC on the one hand and the In & Out on the other hand. To participate a bit in both worlds. The Distilled Bear and ROC outperformed leading up to 2020, while the In & Out outperformed in 2020. So, one could create a tiered in & out regime, which is 0% in (when both in & outs are 'out'), 50% in (when one of the in & out is in/out), or 100% in (when both in & outs are in).
Great point regarding inflation. I have used the ETF RINF before. It might be useful to provide a signal and funnel us into, say, gold instead of bonds. Alternatively, one could go into TIPS (Treasury Inflation-Protected Security). I will definitely give that a shot.
Peter Guenther
Thanks for sharing this issue, Kamal G!
Since the message specifically mentions the conversion to an integer, I was wondering whether the issue might be related to the calculation of the volatility (vola) and the subsequent steps, i.e. the following code:
vola = self.history[[self.MKT]].pct_change().std() * np.sqrt(252)
wait_days = int(vola * BASE_RET)
period = int((1.0 - vola) * BASE_RET)
We try to convert to an integer to calculate wait_days and period.
I'm not sure why the error occurs, since this actually should work alright, but maybe it could help to use a dropna() again when calculating vola, i.e. along these lines:
vola = self.history[[self.MKT]].pct_change().dropna().std() * np.sqrt(252)
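To illustrate the failure mode, here is a minimal stand-in with made-up prices and an assumed value for BASE_RET (the actual constant lives in the algo). std() already skips the leading NaN that pct_change() produces, but if the history column were entirely NaN, vola would be NaN and the int() conversion would raise exactly this ValueError:

```python
import numpy as np
import pandas as pd

BASE_RET = 85  # assumed value; the real constant is defined in the algo

# Toy price history standing in for self.history[[self.MKT]]
prices = pd.DataFrame({'SPY': [300.0, 303.0, 301.5, 305.0, 304.0]})

# pct_change() always yields a leading NaN; std() skips NaNs by default,
# but an explicit dropna() also guards against all-NaN history columns
vola = float(prices.pct_change().dropna().std().iloc[0]) * np.sqrt(252)

wait_days = int(vola * BASE_RET)
period = int((1.0 - vola) * BASE_RET)

# With an all-NaN column, vola would be NaN and int(vola * BASE_RET)
# would raise "ValueError: cannot convert float NaN to integer"
print(wait_days, period)
```

This reproduces the arithmetic of the three lines above, so it is easy to see that any NaN surviving into vola poisons both int() conversions.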
Jared Broad
Just a little update: we've been using this as a benchmark to optimize a real-world application of Python on QC. We've made it run 100% faster over the last 2 months. Behind the scenes, we've migrated to a new framework (.NET) and optimized the Python bridge by 55%. 🏎
The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by QuantConnect. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. QuantConnect makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances. All investments involve risk, including loss of principal. You should consult with an investment professional before making any investment decisions.
Peter Guenther
Wow, this is amazing! Thanks a lot, Jared Broad and team, for all your relentless work. Fantastic innovations and improvements!
Peter Guenther
Kamal G, I think you are right about the line. We can try the same thing here, i.e. remove the NaNs.
See the attached backtest. Let me/us know whether this fixes the issue or whether it continues to occur.
Kamal G
@Peter Guenther, That didn't work either, but I worked it out with the help of @Alex Catarino
Basically, the entire self.history(*) call was returning NaN values on some backtests. Alex advised that it was due to multiple time resolutions on the same equity, in this case "SPY".
I was initially using Hour for the Asset, Daily for the MKT and Minute for the SPY Put Option. Changing them all to Hour, only on SPY, resulted in no failed backtests!!! (It was driving me up the wall). The backtests return very similar returns and drawdowns after the change too, so that's pleasing.
# Problematic setup: three different resolutions subscribed for the same symbol
spy = self.AddEquity("SPY", Resolution.Minute)
self.STK1 = self.AddEquity('SPY', Resolution.Hour).Symbol
self.MKT = self.AddEquity('SPY', Resolution.Daily).Symbol
Thanks all.
Peter Guenther
Thanks for sharing these details, Kamal G! This is really useful since others may be scratching their heads about the very same thing, so thanks again for reporting back to this thread.
Guy Fleury
On The Use of a Rebalancer, a Flipper, and a Flusher
I published a new article dealing with the IN & OUT trading strategy. In it, I try to provide a better understanding of the trade mechanics in order to better “control” the future outcome of this trading strategy.
Follow the link below:
https://alphapowertrading.com/index.php/2-uncategorised/408-on-the-use-of-a-rebalancer-a-flipper-and-a-flusher
Often, a different look at a problem can help us better understand our own methods.
Abbi McKann
Guy,
This looks rather vacuous. For what it's worth, in my opinion a little bit of math notation and a lot of superfluous words do not together (nor apart) amount to a meaningful furtherance of the discussion. In this post on ‘AlphaPowerTrading’ (by the way, what could that possibly mean??) exactly nothing is added to the conversation. To be fair, a very circuitous and nonsensical evaluation of the logic of this algorithm is rendered, but it seems to be far below the level of the conversation here.
Chak
Hey Abbi,
When you have the chance, explore the development of this trading strategy since its inception on QuantConnect and Quantopian, as well as an individual's intellectual and technical contributions to it. I might be wrong here, but Guy's blog dates back to 2010 and he has produced other written works, some of which can be purchased as books. This could mean that he knows what he's talking about. Most people would think so.
Guy Fleury
@Abbi,
The origin of 'AlphaPowerTrading' comes from a tribute to the 'alpha' as defined by Jensen in the late '60s as the excess return over and above the average market return. However, his paper concluded that the average alpha was negative (by about -1.7%), meaning that professional money managers, on average, did not even cover trading expenses.
A lot of fund managers at the time did not like his conclusions, since they put little value on their money management skills. And from his premises, we saw the emergence of index funds. An 'if you cannot beat them, at least join them' kind of mentality. Trillions are now managed that way.
The 'power' part comes from the alpha's compounding impact over time, as illustrated in the following formula: F(t) = F_0 ∙ (1 + r_m + α − exp)^t, where the 'alpha' is added to the average market return r_m. An 'expense' component is included to represent the operating costs of gaining that added return. It is easy to see that if the alpha is greater than the expenses incurred (α > |exp|), it would improve performance over the long term, even for a small positive alpha value. The impact gets greater as time increases. Getting the higher alpha requires a probabilistic edge, some sustainable skills, or better long-term methods of play.
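The compounding effect is easy to see with a quick numerical sketch of the formula (all figures are hypothetical, chosen only to show how a small net alpha widens the gap over time):

```python
# Hypothetical illustration of F(t) = F_0 * (1 + r_m + alpha - exp)^t
F0 = 100_000      # starting capital (made up)
r_m = 0.10        # assumed long-term average market return
alpha = 0.02      # assumed alpha
exp_ = 0.005      # assumed operating expenses ('exp' in the formula)
t = 30            # years

market_only = F0 * (1 + r_m) ** t
with_alpha = F0 * (1 + r_m + alpha - exp_) ** t

# Even a 1.5% net alpha compounds into a large absolute difference over 30 years
print(f"market only:    {market_only:,.0f}")
print(f"with net alpha: {with_alpha:,.0f}")
```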
The Jensen alpha is different from the 'alpha streams' described and used in QC where any source of profit becomes an 'alpha stream' even if it turns out that a portfolio might produce less than the average market return. Jensen's idea was simple: if there are management skills, it will show in a positive alpha α > |exp|. The average market return was easily obtainable using mutual or managed funds. So, r_m was a trivial component of the equation since it was available with practically no effort or skills.
I like Mr. Buffett's methods of play where compounding and time are put to the forefront. He has managed an 'alpha' of about 10 points over his 54 years or so on the job with a CAGR close to 20% (10% coming from the average market return and 10% from his alpha skills after trading expenses).
So, for me, he is the benchmark. If over the long term your trading strategy cannot outperform Mr. Buffett's investment methods, you technically missed the boat since there was an easy solution available that would have outperformed your own trading methods. A long-term structured trading plan is what is needed, and backtesting can help you find it. At least, that is how I use this strategy and many others.
You might want to read my 2007 paper on the subject. It is an old paper but still relevant. It is titled appropriately: Alpha Power
Todd
There have been discussions across this and other threads regarding bonds (TLT / SHY) and what we don't know about the future. I didn't want to sidetrack the focus of this thread, so I created a new thread and posted a simple tactical bond strategy for the community that pulls from bond ETFs across maturities, credit quality, and yield. It's a simple and automated way to play the yield curve.
Peter Guenther
Welcome to the discussion, Todd. Fantastic work on this bond strategy and thanks a lot for sharing it with the community! You are absolutely right: there is some debate in this thread and the "Amazing returns = …" thread that the 'out' side of the in & out strategy has been relatively neglected and that there might be room for improvement/optimization. So, your strategy is definitely timely. Some comments recommend looking beyond bonds and considering additional alternative assets (e.g., gold), so there might even be room for an 'alternative asset rotator' which could then be combined with / plugged into the in & out algo.
Han Ruobin
Hi Peter Guenther, thanks so much for starting this thread. I've briefly looked through the historical discussion from Quantopian times, and also at the newer updates. I have a small point to make about the use of the one-percentile for determining the threshold values for extreme_b. You used the 1% statistical significance to explain why the bottom one percentile was taken to be the threshold. However, if that were the case, I think a normal distribution should have been assumed, and the threshold taken to be -2.58 standard deviations (z-score threshold for a one-tailed test requiring a 1% confidence level) away from the mean of the rolling window. I performed a backtest replacing line 225:
with the following:
I've attached a backtest (code taken from the algo you posted on Jan 2021 on the Amazing returns = … thread), but there doesn't seem to be a huge difference. Nonetheless I think there is a slight misuse of the statistical confidence concept that I wish to clarify. Or I could have misunderstood why the one-percentile was used, in which case I would also like to clarify :)
I also would like to ask if there is any reason why the initial 'Debt' and 'Tips' (obtained when self.dcount == 0) are always used in comparison with the current 'Debt' and 'Tips' to obtain the median, which in turn will be used to determine if there is any inflation at the current moment. It seems more intuitive to me to compare the current prices with prices from perhaps a year or two ago. Considering that the algo runs from 2008 to the present, I would not expect prices from 10 years ago to be relevant to making trading decisions in the present. I also do not understand why division was involved, so I hope I can get some clarification on this.
Thanks so much for sharing this with the community! I think it's the first time I've seen an algo that makes use of price signals from a variety of sources to make trading decisions, so it's very interesting!
Peter Guenther
Thanks for sharing these observations, Han Ruobin!
Valid idea regarding changing to a ‘mean minus x-times standard deviation’ logic.
In terms of your question concerning misuse, or different use, of stats concepts: Using the 1% extreme from the observed returns sample can have the advantage that we do not need to make any assumption about the distribution of the underlying returns. In contrast, when using the ‘mean – 2.58*sd’ we need to assume that stock returns are normally distributed which does not always hold. For instance, see ‘fat tails’ in returns distributions. Using the 1% extreme of the observed returns takes the returns distribution as is and does not require any distributional assumptions. Not sure whether the following is a completely sound comparison, but if you are into statistics, you could compare typical significance testing (estimate/sd) which is usually based on a normal distribution assumption vs significance testing based on bootstrapping which, similar to the In & Out algo approach, is based on the empirical distribution of the observed data (i.e., also holds for non-normal data).
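A small simulation can make this concrete. The sketch below uses a Student's t distribution (df=5) as a stand-in for fat-tailed market returns; one side note on the numbers in this thread: the one-tailed 1% z-score is about 2.33, while 2.58 corresponds to the two-tailed test.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fat-tailed 'returns': Student's t with 5 degrees of freedom, scaled;
# purely illustrative, standing in for real daily returns
returns = rng.standard_t(df=5, size=100_000) * 0.01

# Empirical 1% cut-off: no distributional assumption needed
empirical_cut = np.percentile(returns, 1)

# Normal-assumption cut-off (one-tailed 1% z-score is ~2.33)
normal_cut = returns.mean() - 2.326 * returns.std()

# With fat tails, the empirical 1% tail sits further out than the
# normal approximation suggests
print(empirical_cut, normal_cut)
```

In other words, under non-normal returns the 'mean minus z-times sd' rule flags a different (here, smaller) set of days as extreme than the empirical 1% tail does.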
In terms of 'Debt' and 'Tips', annual resets are also a valid idea and worth a try. If the backtest starts no earlier than Jan 2012, the preference may be to use RINF (see an earlier algo version) since it directly measures inflation. Regarding why we divide by the base level: this is to calculate a return (relative to the base level). For example, if the base level is $100 and, after a year, the ETF lists at $110, the calculation yields 1.1 (i.e. 10% above base level). Expressing prices as returns helps make ETFs with different price levels comparable. To measure inflation, we subtract the 'Tips' return (bond yield without inflation) from the 'Debt' return (bond yield plus inflation). The reason for this calculation is that when inflation expectations are increasing in the market, bond yields increase while TIPS do not. Regarding using the 2008 base level: both the 'Debt' underlying (SHY) and the 'Tips' underlying (TIP) do not increase extensively over time, so using the 2008 base level vs annual resets might be valid to some degree. Anyway, it's definitely worth a try to use annual resets.
Good observations, keep them coming and let me know if things don’t add up and how your tests played out!
Han Ruobin
Hi Peter Guenther thank you for your replies!
I have downloaded some data for backtesting on my PC instead of using the QC engines. In my backtest, I decided to go with my definition for short-term inflation using SHY-TIPS but only looking at the 187 most recent returns. I also assumed no slippage and trading fees (not very realistic oops, but I don't think those would be a big problem when performing so few trades and trading very liquid ETFs).
I decided to investigate how the width of the rolling window of prices (in the original algo this is 11) and the lookback period (in the original algo this is 55? I think) used to determine returns (returns=hist/hist_shift) affects the performance of the in-out strat. I compared it against just holding TQQQ from the same starting period (sometime in January 2010), and I have attached an image of the results. I would attach the file if anyone can tell me how I can do it.
The appearance of a band is rather interesting, and hopefully this could be helpful in dealing with the issue of having fixed constants (why was a window size of 11 lagged by 55 days used to determine returns?). It seems to me that a general rule for the band would be k_min < window width + lookback < k_max. I suppose strictly speaking the lookback period is also partially determined by the size of the window (in the above data set, the lookback period is defined as the number of trading days between the most recent end of the window and the current day), so maybe by presenting the data as such I might get a clearer picture of what is happening. I'll update it here sometime later. I'll also explore whether the waiting time affects the band.
What I think would be helpful / interesting would be an explanation about why there is such a band. My intuitive explanation for why returns for window_width+lookback > k_max is disappointing is because the algo looks too far back (window_width also affects how far back the algo looks) and data in the past would have become irrelevant. I do not yet have an explanation for why window_width+lookback < k_min could also yield disappointing results. I don't think it is noise because if it were, I should be seeing a lot more greens at wider windows.
Peter Guenther
Really great work there, Han Ruobin! Nice illustration using the green coloured returns. Since this strategy tries to time the market, I reckon that there will always be a 'perfect timing' in terms of lookback/window (and other) settings. The green diagonal band is indeed quite interesting and somewhat reassuring, since it shows that there is not only one perfect lookback/window combo; there seem to be several combos that work similarly well. This is definitely a new interesting insight. Also, great work formalizing the diagonal band. Yes, it's interesting to muse about this a bit more. One could argue that, at the end of the day, we need to select a specific combo from the band, so a selection criterion would be needed to determine this selection. Alternatively, you could select multiple (or all, if the calculation effort is worth it) combos from the band (according to your formula for the band) and use the results from these combos to determine whether to go 'out' of the market or stay 'in', e.g. as a majority call based on the different results from the band combos.
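The majority-call idea could be sketched like this (a hypothetical helper; the per-combo in/out signals would come from running the algo's channel logic once per (window, lookback) pair):

```python
def majority_in(signals):
    """True = stay 'in' the market if a strict majority of combos vote 'in'."""
    votes = sum(bool(s) for s in signals)
    return votes * 2 > len(signals)

# Hypothetical votes from three (window, lookback) combos in the band
print(majority_in([True, True, False]))   # two of three vote 'in'
print(majority_in([True, False, False]))  # only one of three votes 'in'
```

A strict majority (ties count as 'out') is the conservative choice here, since the strategy's default reaction to ambiguity is to leave the market.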
Regarding percentiles and distribution assumptions: Hmm ... I would argue that percentiles carry no distribution assumption, neither explicit nor implicit, because the, say, 1% most extreme observations are the 1% most extreme observations no matter what the returns distribution looks like. They are the 1% most extreme observations on the left-hand side, always, whether the returns follow a normal distribution, Chi2, Poisson, uniform distribution, or any other distribution. In contrast, when we work with -2.58*sd, we only really know what we will be getting if the returns are normally distributed. For any other distribution, -2.58*sd could be anything, really. For a normal distribution, it means the 1% most extreme (smallest) observations, but for a Chi2 distribution it means something else, and again something different for a uniform distribution, etc.
Regarding TIPS and SHY: I think you are right that this should be the other way around, but for a different reason. As I understand it, inflation expectation changes do not move TIPS since these have an inflation protection guarantee. In contrast, changes in inflation expectations move SHY. Specifically, and this is the reason why the calculation needs to be reversed, SHY (i.e. bond prices) goes down as inflation expectations go up since investors demand additional returns to be compensated for inflation. So, let’s say inflation expectations increase by 1%, then SHY (i.e., bond prices) should drop by 1% (i.e., the bond yield increases by 1% to compensate for inflation) while TIPS should stay constant, meaning that more negative values in (SHY-TIPS) indicate higher inflation expectations. So, when we check for above-median inflation expectations, we should either check for (SHY-TIPS) below its historic median or, equivalently, check for whether the reverse-coded difference (i.e., -(SHY-TIPS) = (TIPS-SHY)) is above its historic median. Thus, since we currently work with an above-median type of check in the algo, the easiest fix seems to be to calculate the return difference reverse-coded via TIPS-SHY instead of SHY-TIPS.
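The reverse-coded check could look like this (a sketch with entirely made-up prices and a hypothetical signal history, just to show the direction of the calculation):

```python
import numpy as np

# SHY (bond prices) falls as inflation expectations rise; TIPS are protected
shy_base, tip_base = 80.0, 100.0   # made-up base-period prices
shy_now,  tip_now  = 78.4, 100.0   # made-up current prices

shy_ret = shy_now / shy_base   # 0.98 -> 2% below base level
tip_ret = tip_now / tip_base   # 1.00 -> unchanged

# Reverse-coded difference (TIPS - SHY): higher = higher inflation expectations
infl_signal = tip_ret - shy_ret

# Compare against the historic median of the signal (hypothetical history)
history = [0.000, 0.005, -0.002, 0.010]
inflation_high = infl_signal > np.median(history)
print(infl_signal, inflation_high)
```

With the difference coded this way, the algo's existing above-median check keeps its meaning: a signal above its historic median flags elevated inflation expectations.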
Chak
Actually, percentile has an implicit assumption of a normal distribution. If you want to customize your own distribution, then you need to use the first, second, third, etc. moments to obtain non-normal distributions.