Hi everyone, I'm new to the field of price prediction modeling and am currently experimenting with LSTM models. I found a few posts online, like this one: https://towardsdatascience.com/lstm...stock-prices-using-an-lstm-model-6223e9644a2f . However, they all suffer from the problem that their predictions are not recursive. I took the trained model from the above-mentioned post and ran a recursive prediction; the results are wrong and not usable. I wonder if anyone can recommend a few time-series price prediction models? Thanks!

https://www.elitetrader.com/et/thre...dict-market-moves.357176/page-35#post-5357934
https://www.elitetrader.com/et/thre...dict-market-moves.357176/page-36#post-5358066
The above model isn't recursive, so it would need to be regenerated (e.g., after every bar).

Thanks for the pointers. It seems you are hinting at decomposing the price movement in the frequency domain? I did some searching online and found some conflicting opinions on using the Fourier transform, like this one: https://journals.vgtu.lt/index.php/JBEM/article/download/2243/1799/ That's very interesting. What's your experience using it in practice?

By "recursive" I mean to feed the output of one day as the input to the next day. This way you can predict many days in the future and use that to evaluate whether your model really "understands" the underlying dynamics or simply making silly guesses. The website doesn't provide a model, but it's fairly trivial to follow to train your own.

The model is not created from a Fourier transform, and Fourier transforms (at least the common Fast Fourier Transform) are not good choices for modeling prices, according to John Ehlers' "Rocket Science for Traders: Digital Signal Processing Applications." This method models asset prices as a parabolic trend (often close to linear) plus a cyclical part (a sum of a few sinusoids). To create the function, I choose the window of data (89 closing prices in my example) and the number of sinusoids (3 in my example). Then software finds the remaining parameters through genetic optimization to attempt to get a good fit. The number of sinusoids is kept small so the fit stays fairly smooth.

Here is another example, this time for MDYV (SPDR S&P 400 Mid Cap Value ETF) over 89 calendar days from 20210107 through 20210405, using data adjusted for dividends and splits and interpolating close prices between non-trading days. [Raw price chart]

The fitted function for the close prices is

Code:
y = 56.4195823669
  + 0.1046799496 * x
  + 0.0002184126 * x^2
  + 1.0908385515 * cos(twopi / 28.9732589925 * x + 4.5311441422)
  + 0.8659873009 * cos(twopi / 64.9392941518 * x + 0.3379080296)
  + 0.7130651474 * cos(twopi / 16.9317009049 * x + 0.6330339909) ;

[Chart of the prices and fitted curve with the parabolic trend subtracted]

This suggests going long tomorrow, 20210406, and exiting on 20210416 to capture the next cyclic segment predicted to rise.

[Chart of the three cyclical parts]

Notice there is no single, dominant cycle (conventional cycle analysis often assumes there is one), and the cycle with the largest period (64.9392941518 calendar days) is longer than half the data window (89 calendar days). A Fourier transform would not be able to find a period longer than half the data size.

I haven't been using this method very long. And like everything else, it works -- sometimes.
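For anyone who wants to experiment, here is a minimal sketch of fitting that same functional form (parabola plus a sum of cosines). This is not the poster's software; it uses SciPy's `differential_evolution` as the genetic-style global optimizer, and the bounds and optimizer settings are my own assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

TWO_PI = 2.0 * np.pi

def model(params, x, n_waves):
    """Parabolic trend plus a sum of n_waves sinusoids."""
    a0, a1, a2 = params[:3]
    y = a0 + a1 * x + a2 * x ** 2
    for k in range(n_waves):
        amp, period, phase = params[3 + 3 * k : 6 + 3 * k]
        y = y + amp * np.cos(TWO_PI / period * x + phase)
    return y

def fit_prices(prices, n_waves=3, seed=0):
    """Fit trend + sinusoids to a window of closes by minimizing RMSE."""
    x = np.arange(len(prices), dtype=float)

    def rmse(p):
        return np.sqrt(np.mean((model(p, x, n_waves) - prices) ** 2))

    span = prices.max() - prices.min()
    bounds = [(prices.min(), prices.max()),   # a0: intercept
              (-1.0, 1.0),                    # a1: linear slope
              (-0.01, 0.01)]                  # a2: curvature
    for _ in range(n_waves):
        bounds += [(0.0, span),               # amplitude
                   (4.0, 2.0 * len(prices)),  # period -- allowed to exceed half the window
                   (0.0, TWO_PI)]             # phase
    result = differential_evolution(rmse, bounds, seed=seed, maxiter=2000, tol=1e-10)
    return result.x, result.fun  # fitted parameters, final RMSE
```

Evaluating the fitted `model` at x values past the end of the window gives the forward projection; note that, unlike an FFT, the period bounds here allow cycles longer than half the data window.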

If the first forecast is, say, 80% accurate, then the next bar will be about 64% accurate (0.8 x 0.8), and so on. It won't necessarily be that the model is 'bad'; it could be that the model isn't being used in the best way possible. In the above example, I'd suggest only using the first forecast, then updating the data so that the next forecast is made from actual data rather than forecasted data. In this case, you'd only be able to forecast one bar ahead at a time, but you'd get the greatest accuracy. I've found it better not to use a 'recursive' approach; I'd use a new model instead: one model to forecast one bar ahead, a separate model to forecast two bars ahead, etc. Or one model to forecast all the days ahead that are desired. Hence, I don't use a time-series approach. About how many iterations/cycles/hours/minutes did you train the model for, and with how much data?
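A sketch of the one-bar-ahead scheme described above, where the forecast is discarded and the actual close is fed in before the next forecast (`model_fn` is again a hypothetical stand-in for any trained one-step forecaster):

```python
import numpy as np

def rolling_one_step(model_fn, window, actuals):
    """Forecast one bar ahead at a time: after each forecast, the *actual*
    observed value (not the forecast) is appended to the input window."""
    w = list(window)
    preds = []
    for actual in actuals:
        preds.append(model_fn(np.array(w)))
        w = w[1:] + [actual]  # actual data replaces the forecast as input
    return preds

# Stand-in "model": naive linear extrapolation from the last two points.
extrapolate = lambda w: 2 * w[-1] - w[-2]
print(rolling_one_step(extrapolate, [1.0, 2.0, 3.0], [4.0, 5.5, 6.0]))  # [4.0, 5.0, 7.0]
```

Unlike the recursive loop, each forecast here is conditioned only on real data, so errors do not compound from bar to bar.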

I played with a similar approach years ago, but not in a robust way. How accurate are the models? Rather than limiting your models to cos and x^2, have you considered using 'all' functions and genetic programming?

I ran some longer-term tests (20 years of data, or back to the start date) with EEM (iShares MSCI Emerging Markets ETF), GLD (SPDR Gold Trust), IWM (iShares Russell 2000 ETF), and SPY (SPDR S&P 500 ETF). The tests simulated trades entering on the close when the next predicted inflection point in the detrended curve was the next day. The simulated exit was on the following predicted inflection point. About 60% of the tests were profitable. On the surface, the method could be robust because it's basically predicting trend retracements. And the amount of fitting is limited because of the relatively small number of terms in the fitted functions; this is visible in the fitted curve images because the curve is often not that close to the data it was fitted to.

I have tried genetic programming.
https://www.elitetrader.com/et/thre...your-edge-for-2019.329802/page-8#post-4809209
https://www.elitetrader.com/et/thre...ine-learning-for-astronomical-profits.334373/
https://www.elitetrader.com/et/threads/oscillators.337471/page-23#post-4960785
https://www.elitetrader.com/et/threads/machine-learning-for-price-wave-analysis.339646/#post-4997882
And you even liked some of the posts! Those models tended to overfit and were not very good on unseen data -- except, strangely, the astrology-based one. https://www.elitetrader.com/et/thre...tronomical-profits.334373/page-2#post-4896025 It might be an interesting exercise to see if genetic programming based just on time could fit a time series and make a good prediction.
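For reference, the predicted inflection points of the detrended (cyclical) curve can be located from sign changes in its discrete second difference. A minimal sketch of that idea (my own construction, not the code used in the tests):

```python
import numpy as np

def cyclical(x, waves):
    """Sum of cosine components; waves = [(amplitude, period, phase), ...]."""
    y = np.zeros_like(x, dtype=float)
    for amp, period, phase in waves:
        y = y + amp * np.cos(2.0 * np.pi / period * x + phase)
    return y

def inflection_points(y):
    """Approximate indices where the curve's curvature changes sign."""
    d2 = np.diff(y, 2)  # discrete second derivative
    s = np.sign(d2)
    return [i + 1 for i in range(len(s) - 1) if s[i] != 0 and s[i] != s[i + 1]]

# One 20-bar cycle: inflections fall every half period.
x = np.arange(61, dtype=float)
print(inflection_points(cyclical(x, [(1.0, 20.0, 0.3)])))  # [4, 14, 24, 34, 44, 54]
```

Evaluating the fitted cyclical part a few bars past the end of the window and scanning for the next sign change gives the predicted entry and exit dates used in the simulation above.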

Of course that algorithm is "recursive". Every one is. You can just include the point prediction in your source data for inference and obtain the next prediction. What is the issue? As other users suggested, it's best to replace the prediction with the actual value as input to the next point prediction. How frequently you recalibrate or retrain the model is up to the user.