Hi Craig. There is some math and guesswork behind it which I am not willing to discuss here. Plus, I know other traders using conceptually the same method on larger accounts, so it is also based on precedent. Val
That is... frustrating. I've been asked this question a number of times, and have only been able to shrug my shoulders. It's obviously a function of the ADVs of the issues traded, but how one goes from that to an estimate of overall capacity is not obvious. Is anybody else prepared to give an answer here? It hardly seems like leaking information.
The general idea is that at a very small scale you can size every position by the strength of the alpha and ignore liquidity constraints. As GMV increases you have to apply some sort of liquidity scaling (let's take a simple one: capping size at 1% of ADV). So at a very large scale you can only size fully by the strength of your alphas in, say, 20 stocks, and the rest of the positions are clipped by liquidity. That should give you a good place to start. For example, a very simple approach would be to backtest with increasing GMV (while keeping the same liquidity scaling approach) and see how your performance deteriorates.
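A minimal sketch of that capacity test in Python. All names and numbers here are illustrative (a two-stock universe, a 1% ADV cap, equal alphas), not anyone's actual parameters; the point is just to show how the deployable fraction of GMV decays as the account grows:

```python
# Sketch: how a liquidity cap erodes alpha-proportional sizing as GMV grows.
# alphas/advs are hypothetical inputs; the 1% ADV cap is the simple example above.

def target_positions(alphas, advs, gmv, adv_cap=0.01):
    """Size positions proportional to alpha strength, then clip each at adv_cap * ADV."""
    total_alpha = sum(abs(a) for a in alphas)
    sized = []
    for alpha, adv in zip(alphas, advs):
        desired = gmv * abs(alpha) / total_alpha   # alpha-proportional dollar size
        clipped = min(desired, adv_cap * adv)      # liquidity constraint
        sized.append(clipped)
    return sized

def deployable_fraction(alphas, advs, gmv):
    """Fraction of target GMV you can actually deploy at this account size."""
    return sum(target_positions(alphas, advs, gmv)) / gmv

# Two stocks with equal alpha: one liquid, one illiquid (dollar ADV).
alphas = [1.0, 1.0]
advs = [50_000_000, 2_000_000]

print(deployable_fraction(alphas, advs, 30_000))      # small account: fully deployed (1.0)
print(deployable_fraction(alphas, advs, 10_000_000))  # large account: mostly clipped
```

Sweeping `gmv` over a grid and re-running the backtest at each level, as suggested above, would turn this into a crude but concrete capacity curve.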
@ValeryN could you share some thoughts about your research and monitoring tools? I.e., how you display stuff intraday, alerting systems, backtest exploration tools? I am primarily interested in the infrastructure aspects (I understand that the actual details can be very proprietary), so languages/services etc. I am planning to revamp that aspect of my business and could definitely use some fresh ideas.
This was one idea that did occur to me; however, I thought it would be regarded as too crude. But using this idea it would be simple to plot various metrics as a function of account size.
Do you try to keep your account market neutral? Do you hold short positions overnight? How large is each position? Thanks
For research I use RealTest, a strategy portfolio backtesting/research tool @mhparker created. For some unique stuff I'll just write something custom or get by with Google spreadsheets plus data I collect myself. My local environment is a MacBook with Parallels if I need to use Windows tools.

For infra/production I have two VMs for full automation, both on DigitalOcean. VM1 runs the backtesting software (RealTest) to update daily data, do scans, and find candidates for all strategies. Results are uploaded to an AWS S3 bucket. Automation there is pretty straightforward: Windows Task Scheduler calls a sequence every 30 mins. VM2 is Linux with the whole trading stack, including IB and my execution software. It is containerized via Docker, so it's easy to bring the whole stack up/down and scale it, with logging, monitoring, DBs, broker software, etc. My execution app picks up the trading plan with a partial EOD snapshot from S3 on a schedule. This app then covers all pre-market/trading-session/after-hours routines: entries, position management, alerting, stats, some data collection, and reporting.

The human interface to the app is Slack. Pre-defined reports are generated at intervals depending on the time of day, and notifications are sent on key events. There are two Slack bots to remote-control things: one manages the whole stack, the other is specific to the execution app. But I rarely use them, as things are supposed to just work; mostly it's to generate a trades report once every couple of weeks.

There are three tiers of health monitoring. The first is built into the app, which will email+Slack me if something critical happens. A second app will attempt to auto-heal if a health check fails for any of the Docker containers. Pingdom SaaS rules them all: if the VMs are unavailable or health checks don't pass, I get an SMS. Detailed app logs are shipped to Loggly in near real time. The execution app is built in Kotlin; the DB is Mongo.
The DB has a bunch of stuff in it but is optional, in the sense that everything is designed so the app can pick up state from either the broker or the DB to get by intraday, for extra resiliency. That design has saved me numerous times, as once in a while either of them has incorrect data or is unavailable.

Bonus: there is also an "updater" app that facilitates the stack's configuration updates, including changes to the execution app. A simple CI/CD: if I need to make a change, I never need to log into the servers. I just push code to the git repository from my local IDE, and the updated stack is up and running in ~1 minute.

Below is a screenshot of the operational Slack channel on a quiet day. Unless there were critical notifications via email/SMS, I don't look at it until 2 hours after the market close, when the slippage report is generated. If I have time I'll scroll through the day to see how the story unfolded. Val
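The tiered monitoring described above (in-app checks, an auto-heal app bouncing unhealthy containers, Pingdom as the outer watchdog) can be sketched roughly like this. To be clear, the container names, ports, and `/health` path below are my own made-up assumptions, not Val's actual setup:

```python
# Sketch of a health-check / auto-heal loop for a dockerized trading stack.
# Names and endpoints are hypothetical.
import subprocess
import urllib.request

CONTAINERS = {
    "execution-app": "http://localhost:8080/health",
    "mongo-sidecar": "http://localhost:8081/health",
}

def is_healthy(url, timeout=5):
    """Tier 1: the app's own health endpoint must answer HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def heal(statuses):
    """Tier 2: given {container: healthy?}, return the ones to restart."""
    return [name for name, ok in statuses.items() if not ok]

def restart(container):
    """Auto-heal by bouncing the container. Tier 3 (an external pinger
    like Pingdom) covers the case where this script is itself down."""
    subprocess.run(["docker", "restart", container], check=True)

# Dry-run example: decide what to restart from a snapshot of health results.
snapshot = {"execution-app": True, "mongo-sidecar": False}
print(heal(snapshot))  # ['mongo-sidecar']
```

In a real deployment this would run on a timer (cron or a systemd timer) and log its decisions, so the outer watchdog has something to verify against.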
Yes. Yes. Currently <$50k per position and always under 10% of account value. The exact max % depends on the strategy.
I splurged on coding this weekend. To save a bit of time on the weekly/monthly routine, I automated generation of the PL/DD graphs. In the past, once a week I'd export trades from the execution software, do some data massaging to put them into RealTest format, import new market data, and run a test to reproduce the equity curve from individual trades. Now I just need to open one page and it's all there. Next time I splurge it will likely be to automate model vs. live comparison.

While doing this I came up with a format to visualize individual trades' PL per strategy, plus combined. The idea was to combine visual and quantified variability: the "candle" wicks represent the upper/lower fence, and the body spans the Q1 and Q3 quartiles (Wiki article). This is for the last 3 months; PL is in % relative to account size.

And here are the new PL/DD graphs per strategy, with combined info represented as bars (intentionally barely visible so it doesn't distract). Combined DD is yet to be done; according to RealTest it was ~3% max for that period. The same stats are generated for every account, which will save even more time.
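For anyone wanting to reproduce that candle format: the body and wicks are the standard Tukey box-plot quantities (Q1/Q3 and the fences at 1.5 IQR beyond them). A minimal sketch, where the trade PL numbers are made up for illustration:

```python
# Tukey box-plot quantities behind a per-strategy PL "candle":
# body = Q1..Q3, wicks = lower/upper fence (Q1 - 1.5*IQR, Q3 + 1.5*IQR).
import statistics

def candle(pnls):
    """Return (lower_fence, q1, q3, upper_fence) for one strategy's trades."""
    q1, _, q3 = statistics.quantiles(pnls, n=4)  # quartiles (default 'exclusive' method)
    iqr = q3 - q1
    return (q1 - 1.5 * iqr, q1, q3, q3 + 1.5 * iqr)

# Hypothetical trade PLs in % of account size over a quarter.
trades = [-0.8, -0.3, 0.1, 0.2, 0.4, 0.5, 0.9, 1.4]
print(candle(trades))
```

Plotting one such tuple per strategy as an OHLC-style candle (plus one for the combined book) gives the visual-plus-quantified variability described above.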