I am trying to train a model with an LSTM. The dataset is 730 days of history (Open, High, Low, Close, Volume - multivariate) and I train for 30 epochs. But the training seems to time out after 30 minutes when using the Train method, and this is with just one symbol.
Ideally I would like to train multiple symbols.
Any idea how I can do this without it timing out?
Omegab
The free tier has a timeout of 30 minutes for Machine Learning:
https://www.quantconnect.com/pricing
Patrick Li
For me, after training the LSTM for 10 minutes, the kernel died...
Jared Broad
The free tier is limited in RAM and bandwidth, but it still has quite a lot of horsepower considering! If you'd like to help, please take a peek at LEAN and see where you can make it run more efficiently and increase its speed. Those contributions will earn free QC time and make the free tier more usable.
Adam W
Patrick Li
You can try training it in the Research environment, then saving the model's parameters to the ObjectStore and loading them in your algorithm.
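A minimal sketch of that workflow, assuming a Keras LSTM, daily OHLCV windows, and the ObjectStore.GetFilePath pattern from the QuantConnect docs (the "spy-lstm.keras" key, the 30-day lookback, and the tiny network are placeholders):

```python
# --- Research environment (QuantBook notebook) ---
from AlgorithmImports import *
import numpy as np
from tensorflow import keras

qb = QuantBook()
symbol = qb.AddEquity("SPY").Symbol
history = qb.History(symbol, 730, Resolution.Daily)   # 730 days of OHLCV

# Build rolling 30-day windows of the five features (feature scaling omitted for brevity)
ohlcv = history.loc[symbol][["open", "high", "low", "close", "volume"]].values
lookback = 30
X = np.array([ohlcv[i:i + lookback] for i in range(len(ohlcv) - lookback)])
y = ohlcv[lookback:, 3:4]                              # next-day close as the target

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(lookback, 5)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=30, batch_size=32, verbose=0)

# Persist the trained model through the ObjectStore
model_key = "spy-lstm.keras"
model.save(qb.ObjectStore.GetFilePath(model_key))

# --- Later, in the algorithm's Initialize() ---
# file_path = self.ObjectStore.GetFilePath("spy-lstm.keras")
# self.model = keras.models.load_model(file_path)
```

Loading the pre-trained model in Initialize keeps the heavy fitting out of the backtest itself, which is what avoids the 30-minute Train limit.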
Also, implementing deep learning models in native TensorFlow can help speed things up if you haven't tried that already (I'm assuming you're using Keras), as well as standard NN tuning techniques for performance (e.g. choice of optimizer, batch size, architecture, layer/batch normalization, etc.).
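If Keras's fit loop turns out to be the bottleneck, one common approach (not specific to QuantConnect) is a custom training loop with a tf.function-compiled train step and a batched tf.data pipeline. This sketch reuses the X and y arrays built in the snippet above; the batch size and epoch count are placeholders:

```python
import numpy as np
import tensorflow as tf

# Assumes X with shape (samples, 30, 5) and y with shape (samples, 1) already exist.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(30, 5)),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function  # traces the step into a graph once, cutting per-batch Python overhead
def train_step(x_batch, y_batch):
    with tf.GradientTape() as tape:
        preds = model(x_batch, training=True)
        loss = loss_fn(y_batch, preds)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

dataset = (tf.data.Dataset.from_tensor_slices((X.astype(np.float32), y.astype(np.float32)))
           .shuffle(1024)
           .batch(64)
           .prefetch(tf.data.AUTOTUNE))

for epoch in range(30):
    for x_batch, y_batch in dataset:
        loss = train_step(x_batch, y_batch)
```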