I assume the comparison (SR 1.0 vs 1.4) was done on a backtest over many decades (since 1970?). What is the correlation between the old and new approach? Perhaps it is quite high, in which case even a 10 year comparison is meaningful? How do they compare if you only take the 2010s? (2010-2019, no 2020 or 2021 allowed!) Curious because the 2010s were such a difficult period.
I'm looking at this topic https://www.elitetrader.com/et/threads/btc-8-000-000-usd.363495/ and apparently Yahoo had an $8,000,000 price for BTCUSD. What is the best way to automatically recognize such errors in price data and filter them out? One solution is a price band of X% (say +50%/-50%) relative to the price X days ago, or relative to a 5-day MA. If you also trade stocks, things can get even more complicated (stock splits, reverse stock splits). Any other ideas?
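Roughly what I have in mind for the simple band idea (just a sketch with made-up parameters, assuming daily closes in a pandas Series; flag anything more than 50% away from a trailing 5-day MA):

Code:
import pandas as pd

def flag_price_spikes(prices: pd.Series, window: int = 5, max_move: float = 0.5) -> pd.Series:
    """Return a boolean Series: True where the price is more than max_move
    (e.g. 50%) away from the trailing `window`-day moving average."""
    trailing_ma = prices.shift(1).rolling(window).mean()  # MA excluding today's print
    relative_gap = (prices / trailing_ma - 1.0).abs()
    return relative_gap > max_move

A bogus $8,000,000 BTC print would sit thousands of percent above any recent MA and get flagged immediately, but of course this kind of band would also trip on a genuine unadjusted stock split.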
I do something like that. I basically look at the return since the last recorded price and translate that into a daily change (because my data is irregular, and it could be an hour or a few days since the last price), then check whether that exceeds N * the standard deviation of recent daily price changes. https://github.com/robcarver17/pysystemtrade/blob/master/syscore/merge_data.py#L181 With N = 8, on a day like yesterday (a big day for energy markets), I get about 4 prices to check. Maybe a few dozen checks a year. Of course you can set the filter tighter if you like. It's true that it's a massive hassle with stocks. GAT
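In case it helps, the logic boiled down to a few lines looks something like this (a sketch only - the real implementation is in the file linked above and handles more edge cases; it assumes a DatetimeIndex, and here I simply divide each change by the number of whole days elapsed, clipped to a minimum of one):

Code:
import pandas as pd

def flag_spikes_vs_vol(prices: pd.Series, n_std: float = 8.0, vol_window: int = 20) -> pd.Series:
    """True where the implied daily price change exceeds n_std times the
    recent standard deviation of daily changes, allowing for irregular gaps."""
    days_elapsed = prices.index.to_series().diff().dt.days.clip(lower=1)
    change_per_day = prices.diff() / days_elapsed   # per-day change since last recorded price
    recent_vol = change_per_day.shift(1).rolling(vol_window).std()
    return change_per_day.abs() > n_std * recent_vol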
I just looked at an extremely simple strategy, basically the 'no rule rule' from ST - we just take a risk adjusted long only position (equivalent to always having a forecast of +10). And that also shows the pattern of falling skew over time. So it's not something weird about the strategy - it's the underlying instrument returns. Next I'm going to try and work out if it's due to more negatively skewed instruments coming into the portfolio, or because positive skew is fading in markets generally. GAT
Ignore my previous post: when I look at monthly skew there is no clear effect. Also, if I look at the average rolling monthly skew across instruments, that shows no pattern. So it's more complicated than that. Next I'll try the same checks on some simple trend following systems. GAT
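For anyone who wants to reproduce the check, 'average rolling monthly skew across instruments' means something like this (a quick sketch, not the exact code I ran; the 12 month window and the compounding of daily into monthly returns are arbitrary choices, and a DatetimeIndex is assumed):

Code:
import pandas as pd

def average_rolling_monthly_skew(daily_returns: pd.DataFrame, window_months: int = 12) -> pd.Series:
    """daily_returns: one column of daily percentage returns per instrument.
    Compound to monthly returns, take the rolling skew per instrument,
    then average across instruments at each month end."""
    monthly = (1.0 + daily_returns).resample("M").prod() - 1.0
    rolling_skew = monthly.rolling(window_months).skew()
    return rolling_skew.mean(axis=1)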
Time for my annual introspective post.

I won't dwell on performance, since I plan to do my usual full report in April, but I'm down 9% for the year with a ~15% drawdown. That will be my largest calendar year loss, and actually only my second losing year, the other being 2018 when I was down about 7%. In fact these annual figures help to put things in perspective, and we definitely see the positive skew coming through here:

Code:
capital_value
2014-12-31    36.2
2015-12-31    29.4
2016-12-31    16.4
2017-12-31     7.7
2018-12-31    -7.1
2019-12-31    21.8
2020-12-31    13.2
2021-12-31    -9.3

Here's the post I did last year, with notes on how it actually turned out:

"So what does 2021 look like (ignoring, again, the pandemic sized pachyderm in the parlour)? Much to my surprise my teaching contract was renewed, so I have to prep my course which starts in late January. That will be a bit more work than usual since it's going to be online, and I need to create a lot more resources (not easy, as there is a lot of confusion about exactly how this is going to be delivered, plus a bewildering array of advice when I just want to be told what to do)."

Teaching happened. Online teaching was absolutely awful.

"The first of my new trading PCs should be delivered on Monday, so I'll be configuring that, and perhaps even building another one (see previous posts)."

I didn't build a PC, but I do have two new PCs running now.

"I'm about halfway through a refactoring/tidying up of pysystemtrade to ensure everything is reasonably bullet proof and also extendable. Then there's a relatively short list of features I want to add to it before heading into my first piece of new trading system research, what I've called 'stage 5' of the 'project'. Most of these are things to make the system easier to run on a day to day basis; the only one that might cause some pain is upgrading pandas to 1.0."

That all went well. The pandas upgrade was relatively painless. Pysystemtrade is pretty stable and, with a couple of exceptions, is nicely refactored, although I am a bad person because I keep letting the tests rot... It seems to have a very active userbase of perhaps half a dozen people, so it's sort of proper open source now, given I'm not the only one actually committing code. It even has its own YouTube channel (sort of).

"Stage 5 basically involves an optimiser that utilises small amounts of capital more efficiently, and can thus 'trade' (or at least generate signals for) a very large number of markets (including markets I can't actually trade, because the undelayed data is too expensive or the lot size too big). I'm fairly confident I can finish stage 5 and get something implemented fairly quickly."

Well... this took a while. In fact it wasn't until June that I started on this, and it wasn't until November that I actually got something working. Not mentioned above, but related to it, I did add a huge number of new markets to my system (which, with the dynamic optimisation, I can now actually trade). I would say I've now got every liquid market I would be happy to trade, and I've even added some that I wouldn't want to trade.

"My other major goal is to get a proposal written for my next book, and start writing it. Working backwards, if I can get a proposal done by April then the book will come out in October 2022. So there will probably be some overlap; I find I'm happier if I'm not 100% on one project at a time (context switching in weekly blocks works pretty well)."

It won't be a massive surprise to hear that this was also delayed.
I eventually signed the contract for this book in mid September, and I've been writing it mostly 'full-time'.

What will 2022 bring?

I don't plan to make any major changes to my current strategy, with the exception of perhaps adding a few more instruments. I'm going to spend a little time investigating the backtest drawdown and seeing if the risk overlay that I dropped from the strategy would have helped at all with Black Friday.

I am down to teach again in January, and it's back to face to face; well, sort of - some of my students will still be overseas, so I'm likely to be trying to do a sort of in-person and live Zoom simulcast. Again it's not clear exactly how the tech will work, so it's likely to be a baptism of fire.

The book is clearly going to be the biggest project. My deadline for the book is next summer, which means publishing in October will probably be a stretch - although December might not be. For the next 6 months or so I need to focus mainly on that. How confident am I of meeting that deadline? I'd say fairly confident, even though on paper I haven't finished part one of what will be a five part book (part one will be the longest, though). There has been a bit of backtracking, trying to work out exactly how to present stuff and changing the order, but now that's done it's just a question of bashing through the outline. The hard part will be the chapters on spread trading, since I will have to implement that in backtesting before I write it. Ditto for faster mean reversion. This is going to be a long book - part one is already over 150 pages. Although I usually edit down the length once I've finished, even so, it's going to be a blockbuster!

I don't plan to do anything major with pysystemtrade. One issue is the dependency on arctic, and therefore on an old version of pandas (discussion here). If I'm forced to, I might have to do something about this, but ideally I'd like to put it off until Q3 or next year. Having said that, there are a few little features on my to-do list, so I'll probably add some of those when I get bored of writing English and want to write Python.

I'm still down to do monthly TTU podcasts. I do enjoy teasing the 'pure' trend followers. I actually did a couple of in-person conferences in London late last year, and I've got a couple of guest lectures already booked in for next year. Overseas travel still seems difficult, however. I did get invited to speak in Barcelona, but decided it was too much hassle with all the tests and changing regulations.

Once the manuscript is handed in I'll have a little more time (although I'll still have work to do with editing, and creating the web page for the book along with spreadsheets and Python code). As I still have nearly two years left on my Barchart subscription, I think I will pause the 'adding new markets' project for Q1 and Q2 so I have a little more time for the book. In Q3 I also want to implement the new strategies I will have developed for the book.

All that is left to say is merry Christmas and a happy new year, good luck with your trading, and let's hope that omicron is just a last twitch in the tail of the dragon's corpse, rather than the dragon waking up again... GAT
I've been running Dynamic Optimization on paper for a couple of weeks now and it seems to be working fine, not trading too much, not crashing. CPU usage is high but that's expected (I'm running it every 10 seconds or so). The backtest also showed improved performance.

One thing is a bit concerning though: my tracking error of the current portfolio versus the portfolio with unrounded optimal positions ('tracking error / unrounded') is almost never below the buffer, so the system always has to run the full DO, which most of the time, after returning to the edge of the buffer using the adjustment factor, again produces the same portfolio as my current one. I expressed all risks (portfolio sigmas) in dollars and converted to daily, so my buffer level is $251/day (buffer size = 0.0175, even slightly larger than recommended, times $230,000 base capital, divided by 16 to convert annual to daily = $251).

Below is a typical example where the 'current minus unrounded' gap portfolio has a risk above the buffer value (644.95 > 251), so the system ran a full optimization, which produced a slightly different integer portfolio, but after applying the adjustment factor it became exactly my current portfolio again, so basically I shouldn't have bothered to run DO this time at all. Don't know, maybe it's just an effect of rounding small positions, or my logic/calcs are off.

Also, strangely, when I calculate the risks of the current (optimal integer) and unrounded-ideal portfolios separately, the difference between them is much less than the risk of the gap portfolio ('tracking error / unrounded') (e.g. 1063.61 - 957.46 = 106.15 vs 644.95 in this case, and similarly for the whole backtest on the graph below). Is this normal?
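For reference, this is the calculation I'm comparing (toy positions and covariance, just to show the two quantities; sigma here stands for the daily dollar covariance matrix of per-contract returns):

Code:
import numpy as np

def portfolio_risk(weights: np.ndarray, sigma: np.ndarray) -> float:
    """Daily portfolio risk in dollars: sqrt(w' Sigma w)."""
    return float(np.sqrt(weights @ sigma @ weights))

# Toy numbers, not my real positions
sigma = np.array([[400.0, 120.0],
                  [120.0, 900.0]])       # daily dollar covariance for two instruments
current = np.array([3.0, -2.0])          # current integer positions (contracts)
unrounded = np.array([2.4, -2.9])        # unrounded optimal positions

tracking_error_vs_unrounded = portfolio_risk(current - unrounded, sigma)   # risk of the gap portfolio
risk_difference = portfolio_risk(current, sigma) - portfolio_risk(unrounded, sigma)
print(tracking_error_vs_unrounded, risk_difference)   # not the same quantity in general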
I realized that when the difference between the optimized-integer and current portfolios is only 1 contract (in some or all instruments) and the adjustment factor is less than 0.5, the system will not adjust any positions (1 multiplied by something below 0.5 rounds to zero). So there are long-running situations where the 'tracking error / unrounded' is double the buffer or even slightly higher, but all the instrument-level differences are just one contract, and until 'tracking error / rounded' becomes double the buffer (and therefore the adjustment factor > 0.5), none of these 1-contract differences will be corrected.

Because of that, I'm considering making the initial if-check require 'tracking error / unrounded' to be greater than 2x the buffer (or even 2.5x) before running the optimization, because if it's less than 2x the buffer, the adjustment factor will end up below 0.5 and none of the 1-contract differences (which are the most common) will be corrected anyway.
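A toy example of what I mean when every instrument-level difference is one contract:

Code:
import numpy as np

# Gap between the optimised-integer and current positions: one contract per instrument
gap = np.array([1, -1, 1, 1, -1])

adjustment_factor = 0.4          # anything below 0.5

# Trades after applying the adjustment factor and rounding to whole contracts
required_trades = np.round(adjustment_factor * gap)
print(required_trades)           # all zeros, so no positions get adjusted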
How much impact do higher interest rates (+5%, like in 2006 or the 70s) have on futures trading strategies? High interest rates are great for the spare cash in a trading account, but what about the returns on the futures trading itself?