Discretionary Performance vs Backtest

Discussion in 'Strategy Building' started by Dollardogs, Dec 30, 2024.

  1. Dollardogs

    Dollardogs

    Question about how to make realistic performance projections. I'm still a relative newbie with about 2 years of daytrading under my belt where I was constantly system hopping. I finally settled into a system I like this past quarter. My system projects about 500 ticks per month on the futures contract I trade. But it's semi-discretionary (discretionary entries, systematic exits). This past quarter I only averaged 1/3 of what the system is capable of, about 160 ticks per month. You know how it goes. You miss a couple entries you shoulda taken. You take a couple you shouldn't have, etc etc.

    What I'm wondering is how close to 100% of projections one can/should hope/expect to get. 50%, aka 250 ticks per month, would be perfectly fine, but is 60% or 70% possible? I know better than to hope for 100%, but I have no idea statistically what a good rule of thumb is for this. And I just want to have some basis for gauging whether I'm improving month by month in 2025 at the pace I should be.
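     One simple way to frame this is a monthly "capture ratio": live ticks divided by the ticks the backtest says the system produced that month. A minimal sketch in Python (the monthly numbers below are made up for illustration, not real results):

```python
# Capture ratio: live performance as a fraction of backtest performance.
# All numbers below are hypothetical examples.

def capture_ratio(live_ticks: float, backtest_ticks: float) -> float:
    """Fraction of the system's backtested ticks actually captured live."""
    if backtest_ticks == 0:
        raise ValueError("backtest produced zero ticks; ratio undefined")
    return live_ticks / backtest_ticks

# One quarter of (made-up) monthly results: (live ticks, backtest ticks)
months = {"Oct": (150, 510), "Nov": (165, 495), "Dec": (165, 500)}

for month, (live, backtest) in months.items():
    print(f"{month}: {capture_ratio(live, backtest):.0%}")

# 160/500 = 32% is roughly the starting point described above;
# a ratio rising month over month is the improvement signal.
print(f"{capture_ratio(160, 500):.0%}")  # 32%
```

     Tracking one number per month like this makes "am I improving at the pace I should be" a concrete comparison instead of a feeling.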

    Thanks in advance for any advice, and Happy New Year!
     
  2. 2rosy

    2rosy

    backtest the system without discretion. are results > live?
     

  3. It’s going to depend on how reliable your back-testing is, and how you’ve done it, and that in turn rests on a wide variety of other factors (a wider variety than many people think!).

     You seem to have that disparity the right way round, anyway: exits affect most people’s results far more, overall, than entries.

     We all hear you there, I think.

     I think 70+% should in principle be possible, if your back-testing avoids all the classic and obvious traps and problems, and if it’s not all horribly fast-moving.

     Good luck and happy New Year!
     
  4. Dollardogs

    Dollardogs

    Thanks! 70% sounds like a nice 'stretch goal,' that would be great if I could get there eventually. But before I start sizing up, I want to consistently be hitting that 50% threshold at least.

    I backtest 2 ways. I trade off the 1m chart, so I click thru it quickly 1 candle at a time looking for my setups; I did that going 180 days back. And then I also have a trading partner who reviews the charts blind for me after I've traded, then we compare my results to hers. All quarter she was averaging 500 to 550 ticks per month. She tries to make her decisions quickly as she's clicking thru the 1m candles, so there's at least a little of the same time pressure I feel live to make fairly quick entry decisions. It's not too high-octane of a system, so I think the backtest results aren't too far removed from reality. I only average 2 trades a day, even if the decisions have to be made quickly once I see my trigger candle.
     
  5. The best author I found on this subject is Ernest Chan.

    https://www.goodreads.com/author/list/8158172.Ernest_P_Chan

    He goes into detail about building your own backtests.

    The main points are:

    - Curate your data properly. Do not trust free datasets.
    - Avoid overfitting.
    - Run forward tests.
    - Stress your algos against many markets to find flaws.
     
  6. tomorton

    tomorton

    I don't understand the concept of using discretionary entries.
     
  7. Dollardogs

    Dollardogs

    Don't know his books, I'll look them up, thanks for the rec. I wonder if my approach already counts as forward testing, having a 2nd trader trade the same chart semi-live?
     
  8. Dollardogs

    Dollardogs

    Fascinating!
     
  9. You will solve many of your concerns with those books.

    People use "forward testing" to mean different ways of testing your algo. One of them is a discretionary approach like the one you are doing; having a second person try your algo can be part of that as well.

    But the basic definition of forward testing is running your algo on a chunk of the dataset that was held out from the initial training data.

    Let's say you have a dataset: you use 80% of it to train your algo, then use the remaining 20% to run a forward test. Then compare the two results and see if the forward test performs as expected.
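    As a minimal sketch of that split (the `bars` data and the `run_backtest` function here are hypothetical stand-ins for whatever the real dataset and backtest logic are):

```python
# Hold-out split for forward testing: tune on the first 80% of the
# data, then evaluate once on the untouched final 20%.

def split_dataset(bars, train_frac=0.8):
    """Split time-ordered bars into train and hold-out sets (no shuffling,
    so the hold-out set is strictly later in time than the training set)."""
    cut = int(len(bars) * train_frac)
    return bars[:cut], bars[cut:]

def run_backtest(bars):
    # Stand-in for a real backtest: here we just sum per-bar tick results.
    return sum(bars)

# Made-up per-bar tick results for 10 periods.
bars = [5, -2, 7, 1, -3, 4, 6, -1, 2, 3]
train, holdout = split_dataset(bars)
print(len(train), len(holdout))                      # 8 2
print(run_backtest(train), run_backtest(holdout))    # 17 5
```

    The key design point is that the split preserves time order: shuffling before splitting would leak future information into the training set. A hold-out result far below the in-sample result is a warning sign of overfitting.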

    https://academy.ftmo.com/lesson/forward-testing-of-trading-strategies/

    The main reason for a forward test is to avoid overfitting. One of the main issues with developing algorithms from data is that the algo might fit the dataset perfectly and give you false hope that it is performing well. Using a second dataset that was left aside in the initial process gives you a second opinion on the performance.

    Everything I am saying is in those books, anyway.
     
  10. #10     Dec 31, 2024