Some systems were removed. Their positions were taken over by other systems:
System 1245186707: position in AWM (11000 shares) taken by 11474209709737240394 at price 0.27
System 3811011792: position in BAH (5000 shares) taken by 2813552119 at price 0.589
System 1093642095: position in BAH (5084 shares) taken by 14883623285626620515 at price 0.589
System 618370572: position in BAH (5000 shares) taken by 4138994883218683219 at price 0.589
System 618370572: position in CFI (13636 shares) taken by 2813552119 at price 0.216
System 1093642095: position in WIS (6171 shares) taken by 2813552119 at price 0.36
System 3811011792: position in WIS (4166 shares) taken by 209666634 at price 0.36
System 3811011792: position in GTN (5504 shares) taken by 209666634 at price 0.519
System 209666634 bought 6521 shares of SLZ this morning at 0.45. No more positions in SLZ, as it's already at (almost) 10% of capital.
I wanted to see if systems trained on more recent data have any advantage over systems trained on older data. So I split my data into parts:

1. First training part: years 2008-2014
2. Second training part: years 2009-2015
3. Third training part: years 2010-2016
4. Fourth training part: years 2011-2017
5. Fifth training part: years 2012-2018
6. Control training part: years 2008-2018

And the last part: testing years 2018-2023 for the out-of-sample test.

Then I trained/created a population of systems on each training part. As soon as I had 50 decent systems for each part, I stopped the training and tested those systems on the out-of-sample years. After executing the systems on the out-of-sample years, I had 50 values per part for each metric: profit factor, information ratio, final NAV and more. Then I ran a two-sample t-test to check whether there is any significant difference in out-of-sample performance between each population and the population trained on the control years.

I was a bit surprised to see that there is no big difference between the means for any metric (in other words, there is no reason to reject the null hypothesis that the population means are equal). The only funny thing was that the population trained on the most recent years (2012-2018) performed significantly worse on the out-of-sample years than the others (but was still profitable).

This gave me some food for thought. First of all, this process works well enough and finds systems that usually perform well out-of-sample. Also, it doesn't really matter what years I use for training. So I'm going to switch to systems trained on 2008-2024. That is, there will be no testing period, and I will be using fresh data to update systems' weights and maybe also reject systems that break down. Any thoughts?
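For readers unfamiliar with the comparison described above, here is a minimal sketch of Welch's two-sample t-statistic applied to one out-of-sample metric (say, profit factor) for two populations of 50 systems each. The metric values below are made-up illustrative numbers, not the poster's actual results, and with real data you'd get an exact p-value from something like scipy.stats.ttest_ind.

```python
# Hypothetical illustration: compare an out-of-sample metric between a
# population trained on a sub-period and the control population.
# The numbers are invented for illustration only.
from statistics import mean, variance

pf_recent  = [1.10, 0.95, 1.25, 1.05, 1.18, 1.02, 1.12, 0.98]  # e.g. trained on 2012-2018
pf_control = [1.22, 1.31, 1.08, 1.27, 1.19, 1.35, 1.24, 1.29]  # e.g. trained on 2008-2018

def welch_t(a, b):
    """Welch's t-statistic (no equal-variance assumption)."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

t = welch_t(pf_recent, pf_control)
# A |t| well above ~2 suggests rejecting the null hypothesis of equal
# means at roughly the 5% level; a small |t| means failing to reject it.
print(f"t = {t:.2f}")
```

"Failing to reject" here matches the post's finding: the means of the sub-period populations were statistically indistinguishable from the control population's.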
My opinion is that using more data for training is good, because shorter training periods are more likely to result in overfitting. But trading real money without having a testing period seems risky. I wrote something related to this area in this post.
Interesting. You mentioned some good anti-overfitting tricks, here is one more: pretend some signals didn't happen (randomly). I also sometimes change prices randomly by some small percentage. A system needs to be resistant to such randomness. It's safe to say that all my systems are traded without checking on a testing period - I won't remove a system because it failed. Instead I will change the process to have fewer such systems next time I run training. Also, I noticed such systems were worse-than-average on training data and therefore will have lower weights.
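The two tricks mentioned above (randomly dropping signals and jittering prices) could be sketched like this. The function name and parameters are my own illustration, not the poster's actual code:

```python
# Sketch of two robustness tricks: pretend some signals didn't happen,
# and shift each price by a small random percentage. A system that stays
# profitable under many such perturbations is less likely to be overfit.
import random

def perturb(signals, prices, drop_prob=0.1, jitter_pct=0.01, seed=None):
    """Return a noisy copy of (signals, prices) for one training run."""
    rng = random.Random(seed)
    # Randomly ignore some signals
    noisy_signals = [s for s in signals if rng.random() >= drop_prob]
    # Shift each price by up to +/- jitter_pct
    noisy_prices = [p * (1 + rng.uniform(-jitter_pct, jitter_pct)) for p in prices]
    return noisy_signals, noisy_prices

signals = ["buy AWM", "sell BAH", "buy CFI", "sell WIS"]
prices = [0.27, 0.589, 0.216, 0.36]
s, p = perturb(signals, prices, drop_prob=0.25, jitter_pct=0.02, seed=42)
```

Running training several times with different seeds and keeping only systems that survive all runs is one plausible way to apply this.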
Those are interesting ideas. I think I've read about adding noise to inputs but never tried it -- maybe I will in the future. I'm not so sure about ignoring some of the signals during training. One other thing I've added recently is sometimes randomly ordering the input records. The fitness function in my system uses a variation of the Ulcer Performance Index, which depends on the order of the inputs. I think training on randomly shuffled input records might also help avoid overfitting.

It seems your system is choosing penny stocks, some of which are thinly traded. For example, if I understand the symbols correctly, these trades used a large proportion of the daily volumes:

https://finance.yahoo.com/quote/CFI.WA/history?p=CFI.WA
https://finance.yahoo.com/quote/KCH.WA/history?p=KCH.WA

This might make it hard to unload the positions at a good price.
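To illustrate the order dependence mentioned above: the Ulcer Index (the denominator of the Ulcer Performance Index, which divides annualized excess return by it) is built from drawdowns, so the same set of returns in a different order generally yields a different value. This is a minimal textbook-style implementation of my own, not the poster's fitness function:

```python
# The Ulcer Index is the root-mean-square percentage drawdown of an
# equity curve; reordering the returns changes the drawdowns and hence
# the index, which is why shuffling input records perturbs the fitness.
import math
import random

def ulcer_index(equity):
    """sqrt(mean squared percentage drawdown) over an equity curve."""
    peak = equity[0]
    sq_dd = []
    for v in equity:
        peak = max(peak, v)
        sq_dd.append(((v - peak) / peak * 100) ** 2)
    return math.sqrt(sum(sq_dd) / len(sq_dd))

def equity_from_returns(returns, start=100.0):
    eq, v = [], start
    for r in returns:
        v *= 1 + r
        eq.append(v)
    return eq

returns = [0.02, -0.05, 0.01, 0.03, -0.04, 0.02, 0.01, -0.03]
ui_original = ulcer_index(equity_from_returns(returns))
shuffled = returns[:]
random.Random(1).shuffle(shuffled)
ui_shuffled = ulcer_index(equity_from_returns(shuffled))
# Same returns, different order -> (generally) a different Ulcer Index
```

A monotonically rising equity curve has an Ulcer Index of zero, since it never spends time below its running peak.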
Thanks for pointing me to this, it looks very helpful. I never liked penalizing upward volatility and hence never used the Sharpe ratio. I might have to modify UPI as well to penalize long drawdowns a bit more. Yes, for some reason my systems pick "cheap" stocks. I tried to fight this by multiplying the prices for each market by some random constant, but it didn't help at all. The liquidity shouldn't be so bad, though: there is a filter that ignores an entry signal if recent turnover is below a certain value. But obviously I can't see into the future, and liquidity may drop very badly with me on board. The biggest offender right now is WIS, whose liquidity is so low that it hasn't even opened the last few days.
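The turnover filter described above might look something like this. The window and threshold are illustrative assumptions, not the poster's actual values:

```python
# Sketch of a liquidity filter: skip an entry signal when the recent
# average daily turnover (price * volume) is below a threshold.
# Note it only looks backward; liquidity can still dry up after entry.
def passes_turnover_filter(prices, volumes, window=20, min_turnover=50_000.0):
    """True if mean daily turnover over the last `window` bars is acceptable."""
    turnover = [p * v for p, v in zip(prices[-window:], volumes[-window:])]
    return sum(turnover) / len(turnover) >= min_turnover

liquid   = passes_turnover_filter([10.0] * 20, [100_000] * 20)  # avg 1,000,000 -> True
illiquid = passes_turnover_filter([0.36] * 20, [10_000] * 20)   # avg 3,600 -> False
```

As the post notes, a backward-looking filter like this cannot prevent the WIS situation, where liquidity collapsed only after the position was already on.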