Hi @globalarbtrader, hope you're well! I recently read your Systematic Trading (a great introductory read to the world of systematic trading, by the way) and now I am trying to get familiar with pysystemtrade to understand how the program actually runs. To do this, I am trying to debug the system in prebakedsystems.py (the code I am running is shown below; everything else has been commented out) so I can follow the flow of the program.

Code:
from systems.provided.futures_chapter15.estimatedsystem import futures_system

system = futures_system()
instrument_list = system.combForecast.parent.get_instrument_list()
result = system.accounts.portfolio().sharpe()
print(result)

I have set breakpoints on all of the lines above, and I have also set breakpoints within the system.accounts.portfolio() method and at some key points in the code that I know for sure must be getting hit, e.g. within the forecastScaleCap.get_capped_forecast() method. However, when I launch my debugger (the launch.json configuration is shown below), only the breakpoints in prebakedsystems.py are hit; none of the breakpoints further downstream get hit. When I try to step into the system.accounts.portfolio() method, the debugger just stays in the prebakedsystems.py file.

Launch.json configuration:

Code:
{
    "name": "pysystemtrade",
    "type": "python",
    "request": "launch",
    "program": "/Users/darshildholakia/Documents/pysystemtrade/examples/introduction/prebakedsystems.py",
    "justMyCode": true,
    "subProcess": true,
},

Can anyone advise me on what I'm doing wrong and how to fix this issue? Apologies if this is a basic question, but I would appreciate any help on this. Thanks in advance, Darshil
Seems like a Stack Exchange question, since it's clearly the debugger config that is at fault (I assume that if you run it without the breakpoints it produces output). Rob
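For anyone hitting the same thing, one likely culprit (a guess, not a confirmed fix) is the "justMyCode" flag in the config above: VS Code's Python debugger skips breakpoints in anything it considers library code, which would include pysystemtrade if it was pip-installed into site-packages rather than run from a source checkout. Setting it to false should let the downstream breakpoints bind:

Code:
{
    "name": "pysystemtrade",
    "type": "python",
    "request": "launch",
    "program": "/Users/darshildholakia/Documents/pysystemtrade/examples/introduction/prebakedsystems.py",
    "justMyCode": false,
    "subProcess": true
},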
@globalarbtrader I have been working my way through AFT. I am working on Strategy 2 (buy and hold with risk scaling) and Strategy 3 (buy and hold with variable risk scaling), and I am running into difficulty calculating the daily return of the strategy; I am not sure I am conceptualising the problem properly. Suppose I have a strategy holding N contracts of micro ES. Take Strategy 3, where I am holding a variable long position, say 30 micro contracts. I am assuming that to calculate the return you would need to calculate the notional for the held contracts, which would be 30 * Multiplier * Price(ES). To calculate the daily return in $ I would multiply this number by the daily % return of ES. But to convert this back to percentage terms I would need to divide the $ return by (30 * Multiplier * Price(ES)), which would just leave the term % Change(ES). This seems incorrect, because say on one day we are holding 2 contracts and the next day we are holding 4 contracts: we would expect the return on the second day to be twice that of the first if the price returns on both days are the same. My other thought was to calculate the daily % return for N contracts held as (N * % Change(ES) * Multiplier * Price(ES)) / (Multiplier * Price(ES)) = N * % Change(ES). This also feels wrong. I am trying to keep my development consistent with what is in the book. Am I thinking about daily strategy returns properly?
Hi Rob, Thanks for all your great work. What are your thoughts on using hybrid optimisation methods and mixing handcrafting (old style) with your new way of clustering? I mean that since correlation matrices are noisy, clustering is not always superior to naive allocation methods (1/N, for example). Do you think it's a terrible idea to expand on this and use clustering only when we see enough structure in our correlation data? Your question might be, 'How do we see such structure?' The answer would be monitoring a series of features of the asset return correlation matrices: sparsity, min/max/median/average correlation, eigenvalues of the matrix, etc. We could even look at the cophenetic correlation to see how well the dendrogram represents structure in our data (a sketch of this is below). By monitoring these features of our matrices, we could know when hierarchical methods work best and rely more on them, and know when there isn't much structure in our data. In this second case, we could use your handcrafting approach as you described in your first book (i.e., a series of pre-set groupings applied following the taxonomy of our data). In your new version of handcrafting, your approach uses clustering and fully accounts for parameter uncertainty and return variability, but it effectively enforces a specific structure regardless of what correlations we observe. What do you think of a more explicit way of measuring structure in the data, switching to a data-driven method when we detect structure and defaulting back to a pre-set weight structure when we don't? Cheers!
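To make the 'measuring structure' idea concrete, here is a minimal sketch of one such feature, the cophenetic correlation, using scipy. The distance transform sqrt(0.5 * (1 - rho)) and the 'average' linkage are my own assumptions, not from either book:

Code:
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import squareform

def structure_score(corr_matrix):
    # corr_matrix: square numpy array of asset return correlations
    # map correlations to distances; sqrt(0.5*(1-rho)) is a standard choice
    dist = np.sqrt(0.5 * (1.0 - corr_matrix))
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)
    tree = linkage(condensed, method="average")
    # cophenetic correlation: values near 1 mean the dendrogram faithfully
    # represents the pairwise distances; low values suggest noise
    score, _ = cophenet(tree, condensed)
    return score

A rule along the lines you describe might then cluster when structure_score is above some threshold and fall back to pre-set groupings otherwise, with the threshold itself needing to be calibrated.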
I don't actually use % of instrument returns when calculating returns; I only mention them in the book because some people prefer to use them when calculating volatility for position sizing. So the $ return = 30 * multiplier * (price of ES today - price of ES yesterday), and the % return will be that $ figure divided by your $ capital, NOT your notional position. Rob
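In code, that calculation might look like the sketch below. The names are mine, and shifting the position series by one day, so that today's P&L accrues to the position held going into today, is my assumption about the intended convention:

Code:
import pandas as pd

def daily_strategy_returns(prices, positions, multiplier, capital):
    # $ P&L = position held going into today * point move * multiplier
    dollar_pnl = positions.shift(1) * multiplier * prices.diff()
    # % return is measured against account capital, not notional exposure
    return dollar_pnl / capital

With this convention, holding 4 contracts on a day with the same price move as a 2-contract day does produce twice the $ and % return, as the earlier post expected.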
With a large and interesting enough set of assets, like my portfolio of futures, you don't really get much of a departure from 1/N with clustering, so this research area isn't very interesting except for some extreme, arbitrary examples. There is also the fact that correlations are relatively predictable, and unlike in a classic MVO, the resulting instrument weights aren't super sensitive to perturbations in the correlation matrix. But I'll indulge you for 10 seconds. One option would be to do the clustering on the correlation matrix, but with some measure of uncertainty on the correlations. We could, for example, draw the correlations from a sampling distribution, form different clusters, and then take the average of the resulting instrument weights (sketched below). Or we could do as you suggest, and require a minimum threshold before a particular group splits into a cluster, rather than enforcing clusters of a specific size. I wouldn't waste any time on this myself, but feel free to have a go! Rob
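A minimal sketch of that first option, assuming a simple bootstrap of the return history as the sampling distribution and equal weights across and then within clusters; the function names, the Ward linkage, and the default of three clusters are all arbitrary choices of mine:

Code:
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_weights(corr, n_clusters=3):
    # corr: pandas DataFrame of correlations, instruments on both axes
    dist = np.sqrt(0.5 * (1.0 - corr.values))
    np.fill_diagonal(dist, 0.0)
    tree = linkage(squareform(dist, checks=False), method="ward")
    labels = fcluster(tree, t=n_clusters, criterion="maxclust")
    # equal weight across clusters, then equal weight within each cluster
    weights = pd.Series(0.0, index=corr.index)
    for label in np.unique(labels):
        members = corr.index[labels == label]
        weights[members] = 1.0 / (len(np.unique(labels)) * len(members))
    return weights

def resampled_cluster_weights(returns, n_draws=100, n_clusters=3):
    # bootstrap the return history, recluster each draw, average the weights
    draws = [
        cluster_weights(
            returns.sample(len(returns), replace=True).corr(), n_clusters
        )
        for _ in range(n_draws)
    ]
    return pd.concat(draws, axis=1).mean(axis=1)

Instruments that only sometimes split into their own cluster end up with weights between the two regimes, which is the averaging effect the post describes.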
Hi Rob, I am running your "safer fast mean reversion" strategy 27 in a paper trading account. If all goes well, I am planning to run it live on 250K of capital using 7 instruments (MXP, GAS_US_mini, US10, SP500_micro, VIX, CORN, GOLD_micro). Will 7 instruments be enough to dial back the negative skew exhibited by the strategy and obtain a Sharpe close to that of the Jumbo portfolio? Thanks.
Great stuff; I'm still being lazy about it. Have you tried backtesting it at all to see how it behaved historically?
The graphs here answer your question for SR: https://qoppac.blogspot.com/2023/03/i-got-more-than-99-instruments-in-my.html (they're for trend following, but there won't be much difference). For skew I couldn't say, as I haven't tested it, and with such a small sample your mileage will vary a lot for any given time period of live trading. GAT
Symbol name change noticed! The March contract symbol name is ZN MAR 24, while the June contract symbol name is ZNM4. I'm not sure if any other contracts have the new naming convention. Did anyone get an email about this change from IB?