Backtesting vs. Neural vs. Genetics

Discussion in 'Trading Software' started by mmillar, Apr 29, 2004.

  1. mmillar

    Can someone please explain the differences between...

    1. Normal backtesting (such as with TradeStation).
    2. Neural networks
    3. Genetic algorithms


    My (cynical) assumption is that using neural and genetic stuff is the same as backtesting - just faster and with more curve fitting! Is that true or am I missing something? Is there some advantage of one above the other?

    Thanks
     
  2. Backtesting is just testing - run the system(s) & measure its/their performance.

    Genetic systems do more than this - they use backtesting to determine the best performing systems. Then they breed from these systems, backtest the results - and repeat for many generations.

    Neural systems are a little different - but again they USE backtesting.

    I've never used TradeStation so I don't know if it just does backtesting or also includes some system optimisation based on the backtesting (genetic, neural or other).
     
  3. I did a lot of backtesting, strategy development and the whole lot. The reply to your question would take about a book to answer. But let's try to summarise.
    TradeStation (TS) will do backtesting on a given stock, index, whatever, but uses the "brute force" method. Moreover, it doesn't allow for price modulation. What's that? Well, it means that when you run a backtest 10 times on the same dataset you will get EXACTLY the same results - which is the ideal path to curve fitting. When you have a tool that allows price modulation, you can add small "variations" to your dataset, which helps avoid strong curve fitting. Needless to say, after you have done all this you should still test your strategy with out-of-sample data.

    Now about the optimisation part. Suppose you want to test the effect of a couple of input parameters in your strategy, and let's say you will test each input parameter on 10 different values. 1 input parameter = 10 tests. 2 input parameters = 10x10 = 100 tests. n input parameters = 10^n tests. In other words, as soon as you want to test more than 5-6 inputs with TS, you're into hours (if not days) of calculations (TS doesn't use multiprocessor capabilities). So how do you tackle, say, a run with 10 million tests? One of the things you can do is use a tool that uses genetic algorithms. It starts testing with a couple of values, and when it finds "interesting results", it focuses in on those areas. So in a way it is "smarter" than brute force and therefore needs MUCH less time to run HUGE simulations.
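
    Here's a rough sketch in Python of the genetic idea (just an illustration, not how Saphir XP or OptiMax actually work - the backtest() function below is a dummy placeholder you'd swap for your own system):

        import random

        # Placeholder fitness function: run a backtest with these inputs and
        # return a score (e.g. net profit). The quadratic below is a dummy
        # surface with a peak at fast=7, slow=30.
        def backtest(params):
            fast, slow = params
            return -(fast - 7) ** 2 - (slow - 30) ** 2

        def random_params():
            return (random.randint(2, 20), random.randint(10, 100))

        def mutate(params):
            fast, slow = params
            return (max(2, fast + random.randint(-2, 2)),
                    max(10, slow + random.randint(-5, 5)))

        def crossover(a, b):
            # take the fast MA length from one parent, the slow from the other
            return (a[0], b[1])

        population = [random_params() for _ in range(50)]
        for generation in range(30):
            # score every candidate with a backtest and keep the best performers...
            parents = sorted(population, key=backtest, reverse=True)[:10]
            # ...then breed and mutate them to form the next generation
            population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                                    for _ in range(40)]

        print("best inputs found:", max(population, key=backtest))

    Instead of visiting every combination, the search spends its backtests around the regions that already look promising - that's the whole speed advantage.
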
    Look at products like saphir XP and optimax, and you will find lots more information there. I use these extensively because TS alone just cannot hack it.

    Sorry to be a little bit short. Maybe one day I'll write a book about it..:)

    TFD
     
  4. mmillar

    Thanks for your reply 1contract.

    With TradeStation (and I assume other backtest software) I can vary my inputs. So, to test a moving average crossover system I can vary the first MA from, say, 2-5 and the second MA from 10-50. It then produces a set of results (2/10 cross, 2/11 cross, ..., 2/50 cross, 3/10 cross, ..., 5/50 cross) and I can pick the 'best' result from that.
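
    In code terms, that brute-force sweep is roughly this (Python, with a deliberately crude long-only backtest and made-up price data, just for illustration):

        # Exhaustive sweep over MA crossover inputs, like TradeStation's optimiser does.
        def sma(prices, n, i):
            return sum(prices[i - n + 1:i + 1]) / n

        def backtest(closes, fast, slow):
            # long when the fast MA is above the slow MA, flat otherwise
            profit = 0.0
            for i in range(slow, len(closes) - 1):
                if sma(closes, fast, i) > sma(closes, slow, i):
                    profit += closes[i + 1] - closes[i]
            return profit

        closes = [100 + (i % 17) * 0.5 for i in range(300)]   # stand-in price data

        results = {(fast, slow): backtest(closes, fast, slow)
                   for fast in range(2, 6)       # 2-5
                   for slow in range(10, 51)}    # 10-50
        best = max(results, key=results.get)
        print("best crossover:", best, "profit:", round(results[best], 2))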

    What do Genetic algorithms and Neural networks do differently?
     
  5. mmillar

    Thanks flyingdutch. I look forward to your book :)

    I get the bit about speed.

    But I've never heard of 'modulation'. Wouldn't varying my data slightly (running a perl script on my ohlc data to vary it by a small random amount) have the same effect?
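
    Something like this is what I have in mind (Python rather than Perl for brevity - the column names and the 0.1% jitter size are just guesses):

        import csv, random

        # Read an OHLC csv, nudge each price by up to +/-0.1%, write it back out,
        # then re-run the backtest on the jittered file.
        with open("ohlc.csv", newline="") as f:
            bars = list(csv.DictReader(f))

        for bar in bars:
            for field in ("open", "high", "low", "close"):
                bar[field] = float(bar[field]) * (1 + random.uniform(-0.001, 0.001))

        with open("ohlc_jittered.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=bars[0].keys())
            writer.writeheader()
            writer.writerows(bars)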


    Some products were recently mentioned here..

    http://www.elitetrader.com/vb/showthread.php?s=&threadid=31917

    ...and you can buy third party addons for TradeStation.

    I'll also have a look at the two products you mentioned.

    Thanks
     

  6. From your description it seems that TS does the backtesting & you do the optimising (i.e. choose a range of strategies, test them & then choose the best).

    Genetic systems do the optimisation for you - using algorithms inspired by evolution by natural selection. Same for neural systems - but this time using algorithms inspired (loosely) by neurons.
     
  7. Genetic algorithms are a combinatorial optimization technique (simulated annealing, which is based on thermodynamic principles, is another). They allow you to find near-optimal (if not optimal) solutions to combinatorial problems that are, for all intents and purposes, computationally intractable - meaning you either need a supercomputer and come back in a year for the answer, or it would take millions of years to solve with the fastest computers available.

    If you had a problem with a few hundred variables that can take millions of values each, well, you get the idea, it will take a very long time to solve. Of course there are search space reduction techniques to make such combinatorial searches feasible, such as the ones mentioned above. It depends on the problem, there are others.

    In the case of trading system development, you might want to find the (near-)optimal system inputs and their parameters, i.e. MA period, RSI period, threshold levels, ratios, etc.

    If overdone, a curve fitted system is produced and the consequences are often dire....

    Neural Networks, on the other hand, produce a non-linear model of the system that you are trying to model, based on the input data. Many statistical techniques/models are linear; however, real-life systems are often anything but linear. Neural Nets can be trained to identify such non-linear models, and the interactions of inputs. For example, you could develop an S&P system that takes as inputs, apart from change in close price, volume oscillators, etc, things like interest rates, Put/Call Ratio, USD/YEN, etc, and determine some sort of relationship.

    Of course, in real life nothing is easy and you have to figure out what makes good inputs, what preprocessing is required, and how to do input reduction (say via principal component analysis). Also, the type of Neural Net and its architecture can have significant effects on the resultant model, i.e. some NNs model temporal relationships well (TDNNs), some do not. You have to pick the right ones. This simply takes knowledge of the subject and experience.

    It takes some work, but in the end you could develop a decent intermarket NN system. You need enough data for training + training-validation (walk-forward testing whilst still in learning mode) and then a pure out-of-sample period.
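
    As a rough illustration of that split (using scikit-learn's MLPRegressor as a stand-in for a proper NN package, with made-up intermarket features and a fabricated target - just to show the mechanics):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Fake intermarket feature matrix: [d_close, put/call ratio, usd/jpy change, rates]
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 4))
        y = X @ np.array([0.5, -0.3, 0.2, 0.1]) + 0.1 * rng.normal(size=1000)  # made-up target

        # Chronological split: training, training-validation (used while still tuning),
        # and a pure out-of-sample period that is only touched once at the very end.
        X_train, X_valid, X_oos = X[:600], X[600:800], X[800:]
        y_train, y_valid, y_oos = y[:600], y[600:800], y[800:]

        net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
        net.fit(X_train, y_train)
        print("validation R^2:   ", round(net.score(X_valid, y_valid), 3))
        print("out-of-sample R^2: ", round(net.score(X_oos, y_oos), 3))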

    It is also difficult to develop models that last a long time. In my experience, if you develop a NN model that has lasted for 2 years you are doing well. Typically they require rework and retraining as market conditions change and the model adapts. Again that raises the question: how much OOS data is required to validate a model and allow you to trade it before it breaks down?

    However, beware, to quote someone from their website article: "If there is a relationship in the data then a Neural Net will find it. Also, if there is No relationship in the data, the Neural Net will find something"!!!

    Basically, if you think you can just throw OHLCV at a Neural Net thinking it's time to buy that island, then I'm afraid you are mistaken. Otherwise all those Neural Net academics would be lighting cigars with $100 bills :D

    Hope this helps...
     
  8. Backtesting is a general term that can encompass many techniques.

    Artificial neural networks (ANNs) are one such technique. In short, training an ANN means optimising an objective function over the net's parameters (called weights). Because the number of weights can be huge, genetic algorithms can be used to find them.

    So it's just a tool, and as with any tool you have to feed it something meaningful or you are like a monkey with a bunch of pencils :D
    See ANNs: A Little Knowledge Can Be A Dangerous Thing (he even takes trading as an example)
    Posted by Dr. Halbert White
    http://www.secondmoment.org/articles/ann.php
    Excerpt
    The Bad News
    “The real drawback to using artificial neural networks,” White continues, “is that to do the estimation, you must train the network, which requires optimizing a nonconvex function, something that people in the field know is not an easy problem. Typically what this means is that you can fairly easily arrive at a local optimum—often a good local optimum—but depending where you start, the results can differ. In other words, if you start on two different points in the network weight space and go through the training exercise or the optimization routine, you might end up with two different sets of estimated coefficients or trained network weights.

    “One means for dealing with this is the so-called multistart method, which is a provably effective way to arrive at a global optimum. Basically what you do is begin with multiple starting points and then let the thing go and see which ones converge and which ones don’t. Of the ones that do converge, you pick the best one, or as Leo Brieman (of Stanford University) has suggested, you can combine them. The idea is to take a whole bunch of neural networks that you’ve trained—maybe 500 or 1000—and average the results so that in the end what you have is something much more reliable and robust than the result of training just one network. Still, while this is a workable solution, in my mind it’s not very appealing because what you’ve done is taken a problem that may take two or three hours to solve and multiplied it by a thousand, and that’s just computing time. There’s also the time to do the optimization itself, which typically involves a great deal of tweaking, not to mention false starts.”
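
    (As an aside, here is a crude sketch of that multistart idea in Python - scikit-learn's MLPRegressor is used purely as a stand-in for whatever net you actually train, and the data is fabricated:)

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Multistart: train the same network from several random initialisations,
        # then either keep the best run or average the predictions of all of them.
        def multistart_fit(X, y, n_starts=10):
            nets = [MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                 random_state=seed).fit(X, y)
                    for seed in range(n_starts)]
            best = max(nets, key=lambda net: net.score(X, y))   # best local optimum
            ensemble = lambda X_new: np.mean([net.predict(X_new) for net in nets], axis=0)
            return best, ensemble

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
        best_net, ensemble_predict = multistart_fit(X, y)
        print("best single start R^2:", round(best_net.score(X, y), 3))
        print("first ensemble prediction:", round(float(ensemble_predict(X[:5])[0]), 3))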

    Over Fitting
    Another means for dealing with the problem of local versus global optima is to perturb the point in question and to see if the function continues to return to that same point. But here again, White sees that as extremely time intensive. Moreover, as he goes on to point out, “the nature of the optima on a very fine scale can be quite irregular. You can find yourself hopping in and out of very small local optima, like a saw tooth. The finer and finer the resolution, it can still persist. So what does it mean in this case for it to be an optimum? The right answer is probably some smoothing of the objective function, which is perhaps what you should care about.”

    “The thing about neural networks is that they are a great data mining tool if you have the time and patience. So you can turn out a whole bunch of different models. For instance, you might try different tweaks for the training parameters, or different preprocessing steps, or give the learning algorithm different sets of inputs to play with. As you go through the process you are generating models that work well or not so well. If they do not work so well you can keep going until you get something that does work well. The danger though, is that ultimately you will end up with a network that is basically fitting the noise. Even cross validation is not by itself a guarantee against this, because once you go back and revisit the cross validation set over and over again, you’re going to be fitting the noise that it contains. So there is this pitfall.”


    “Let me give you an illustration. Let’s suppose you want to trade the S&P 500 on a daily basis. Basically what you want to know is your average continuously compounded return from one day to the next, which is calculated as the logarithm of the net asset value on day t divided by the net asset value on day t minus 1. You want to maximize this average over a particular time frame, which tells you what your target is, and then to some degree you are going to be focusing on being able to predict when that’s large or small. One simple way that people attempt to do this is with what is called a moving average indicator. A moving average is an average of prices over a period of n days, and by comparing the moving average over n days with a moving average for a smaller number of days you supposedly get an indication of whether the market is headed up or down. If the short moving average is above the long moving average, it means that market is moving up and you want to buy. If the short moving average is below the long moving average it means the market is moving down, and you want to sell. So, for example, you can pick two days for the short moving average and 10 days for the long moving average and buy or sell whenever the short moving average touches the long moving average. This will create a series of buy and sell strategies, which will determine your portfolio position in time, which in turn will determine what your net asset values are, which in turn will determine what your performance measure is. Next you can look over all the different combinations of numbers of days for the short and long moving averages to see which one gives you the best performance. The ultimate issue though, is whether the apparently good performance you ultimately find is real, or is it in line with the random variation you would expect to find, given all the scenarios you’ve looked at. The sad answer is usually that it’s in line with the random variation.”


    Gold or Fool’s Gold?
    “There are entire literatures and industries built around so called technical trading indicators, and a lot of new versions of things that you can buy to put into your program that will crank out these indicators and that will calculate the profits you would have made if you only you had done that ahead of time. And it’s not simply stock market predictions. Whether it’s credit card fraud, mortgage fraud, CRM (Customer Relationship Management)—the danger with neural networks is that if there is something there, you will find it, and if there is nothing there you will also find something. So while there’s no question that neural networks are a powerful tool for discovering relationships within a collection of data, their very power makes them dangerous. Great care has to be taken to ensure that what one finds is truly gold and not simply fool’s gold. "