Sorry, I'm gradually reading through this thread from both ends. It's important to say we have to separate out two components here. The first is 'how large should a trade be, given how risky it is, my account size, and my risk target?'; the second is 'should I adjust my trade size given a forecast?'. I'm not dogmatic about always doing the second part, cf. the fixed forecasts in Systematic Trading and the 'starter system' in Leveraged Trading (which only has 'all in' trades with a fixed risk that get closed by a stop loss). But I firmly believe you should properly size positions for risk, independent of whether you are doing the forecast adjustment thing. And that means using a fixed % of your account value is wrong, and potentially dangerous. The correct % of your account value to use will depend on:

- the risk of the instrument (this matters if the 1% is based on the exposure you are taking; if it's based on the risk you are taking, you can ignore it)
- the forecast horizon (the faster you are trading, the smaller the % risk on each trade)
- your risk target
- the number of instruments traded and any expected diversification benefits

Summing all these up might give you 1% as the right answer, but it's unlikely... GAT
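To make the first component concrete, here is a minimal Python sketch (my own illustration, not GAT's code; the function names and numbers are made up) contrasting a naive fixed-percent rule with volatility-scaled sizing driven by instrument risk, a risk target, and a diversification multiplier. Forecast scaling and the trading-speed adjustment from the list above are deliberately left out.

```python
# Minimal sketch of risk-based position sizing vs. a fixed-% rule.
# All names and numbers are illustrative assumptions, not anyone's production code.

def fixed_fraction_size(capital: float, price: float, fraction: float = 0.01) -> float:
    """Naive rule: take exposure worth a fixed % of the account, ignoring instrument risk."""
    return (capital * fraction) / price

def vol_scaled_size(capital: float, price: float,
                    annual_vol_pct: float,          # instrument's annualised volatility, e.g. 0.20
                    risk_target_pct: float,         # desired annualised risk on capital, e.g. 0.25
                    diversification_mult: float = 1.0) -> float:
    """Size the position so its expected cash volatility matches the risk target,
    scaled up by an (assumed) diversification multiplier across instruments."""
    target_cash_vol = capital * risk_target_pct * diversification_mult
    cash_vol_per_unit = price * annual_vol_pct
    return target_cash_vol / cash_vol_per_unit

# Example: $100k account, $50 instrument
capital, price = 100_000, 50.0
print(fixed_fraction_size(capital, price))                   # 20 units, whatever the risk
print(vol_scaled_size(capital, price, annual_vol_pct=0.20,
                      risk_target_pct=0.25))                  # 2500 units for a 20%-vol instrument
```

The point of the sketch is that the 'right' percentage falls out of the risk calculation (and changes with volatility, risk target, and diversification) rather than being a fixed input to it.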
Just to say that seeing people implement my ideas with their own code is utterly brilliant - the whole idea of the stuff I do is to inspire people to go off and build their own systems: both writing their own code and developing their own strategies. Sadly not everyone has those skills, at least initially, which is why I've also shared my code. But I will never be one of those guys who sells a shrink-wrapped system that does XYZ and expects that is what people will use (apart from anything else, it's a tough way to make not much money, writing production-quality paid-for software and having to support it). GAT
So it turned out that I did not have stop limits working in live trading. That was not fun to discover. Fixed the bug. FML.
Re-ran some tests. It looks like Arctic's main benefit is that it works with pandas dataframes natively, which neatly steps around my SQLite <-> pandas dataframe problem; the query performance as a result was nearly instantaneous. I'll start using Arctic alongside SQLite to store ticks and see if it makes me more productive. For completeness, here is how the data is stored in MongoDB:

1. Each date range is a document
2. Each column is a compressed, base64-encoded value - presumably a serialized portion of the dataframe

So the query efficiency really does seem to come from storing pandas dataframes natively. This is fine, I guess. It is 100% worth it to me to make the transition to MongoDB for tick data. But then I realized... I could just do the same thing with SQLite: https://www.sqlite.org/json1.html

So the question comes back down to buy vs. build. Hmm...
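As a thought experiment on the 'do the same thing with SQLite' idea, here is a rough sketch of what that might look like - my assumption of the layout, not the poster's code, and using a plain BLOB column rather than the JSON1 extension for simplicity. It mimics Arctic's chunk-per-date-range design: each row holds one compressed, serialized dataframe chunk keyed by symbol and date range, so a range query only has to fetch and decompress the overlapping chunks.

```python
# Sketch: Arctic-style chunked tick storage on top of SQLite.
# Table name, chunking frequency, and the compression choice are all assumptions.
import pickle
import sqlite3
import zlib

import pandas as pd

conn = sqlite3.connect("ticks.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS chunks (
        symbol      TEXT,
        chunk_start TEXT,   -- ISO timestamp of the first tick in the chunk
        chunk_end   TEXT,   -- ISO timestamp of the last tick in the chunk
        payload     BLOB,   -- compressed, pickled dataframe chunk
        PRIMARY KEY (symbol, chunk_start)
    )
""")

def write_chunks(symbol: str, df: pd.DataFrame, freq: str = "D") -> None:
    """Split a tick dataframe (DatetimeIndex) into per-period chunks and store each one."""
    for _, chunk in df.groupby(pd.Grouper(freq=freq)):
        if chunk.empty:
            continue
        payload = zlib.compress(pickle.dumps(chunk, protocol=pickle.HIGHEST_PROTOCOL))
        conn.execute(
            "INSERT OR REPLACE INTO chunks VALUES (?, ?, ?, ?)",
            (symbol, chunk.index[0].isoformat(), chunk.index[-1].isoformat(), payload),
        )
    conn.commit()

def read_range(symbol: str, start: str, end: str) -> pd.DataFrame:
    """Fetch only the chunks overlapping [start, end] (full ISO timestamps, so
    lexicographic comparison matches chronological order) and reassemble them."""
    rows = conn.execute(
        "SELECT payload FROM chunks WHERE symbol=? AND chunk_end>=? AND chunk_start<=?",
        (symbol, start, end),
    ).fetchall()
    if not rows:
        return pd.DataFrame()
    out = pd.concat([pickle.loads(zlib.decompress(r[0])) for r in rows]).sort_index()
    return out.loc[start:end]
```

The chunk period (`freq`) is the main tuning knob: bigger chunks mean fewer rows to fetch and fewer decompress calls when reading long histories, at the cost of decompressing more than needed for narrow queries.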
I expect this code will change, but this is all of it: And the little "app" to convert the data: Loading about 90 days of ticks for a currency (with some gaps), around 8 million rows, takes about 7 seconds unoptimized. I suspect pickling is a big part of the cost, but when I used feather as the binary format, I got segfaults. I'll look into it again later, but I'm very happy with this. Glad I did the investigation into how Arctic works.
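Since pickling is the suspected bottleneck and feather segfaulted, a quick in-memory round-trip comparison would confirm or rule out serialization as the cost. Here is a small, self-contained sketch (illustrative data; assumes pyarrow is installed for feather support) - not the poster's benchmark, just the shape of one.

```python
# Sketch: time pickle vs. feather round-trips on a synthetic tick-like dataframe.
import io
import pickle
import time

import numpy as np
import pandas as pd

n = 1_000_000
df = pd.DataFrame(
    {"price": np.random.random(n) * 100, "size": np.random.randint(1, 1000, n)},
    index=pd.date_range("2021-01-01", periods=n, freq="10ms"),
)

def timed(label, fn):
    t0 = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - t0:.3f}s")

def pickle_roundtrip():
    pickle.loads(pickle.dumps(df, protocol=pickle.HIGHEST_PROTOCOL))

def feather_roundtrip():
    buf = io.BytesIO()
    df.reset_index().to_feather(buf)   # feather stores plain columns, so move the index into one
    buf.seek(0)
    pd.read_feather(buf)

timed("pickle ", pickle_roundtrip)
timed("feather", feather_roundtrip)
```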
After updating the chunk size, I can now load months of ticks in milliseconds. Wheeeeeee. And I don't need to run MongoDB.
Super interesting result... the database grew by only 5G (from 100G) after converting all the data overnight, which means that my dumdum compression works surprisingly well.
Did some hard benchmarking of my duct-tape (unoptimized) solution against Arctic and discovered that mine gets about half the performance of Arctic. The majority of the slowdown in my solution is due to the compression being used, which is bz2, because I'm stuck in the 90s. Now I'll have to recompress using lz4.
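For reference, here is the kind of quick comparison that would quantify the bz2 vs. lz4 trade-off on a pickled tick chunk - a sketch with illustrative data, assuming the python-lz4 package (`lz4.frame`) is installed. lz4 generally decompresses far faster than bz2 at the cost of a somewhat worse compression ratio.

```python
# Sketch: compare bz2 and lz4 on one pickled, synthetic tick chunk.
import bz2
import pickle
import time

import lz4.frame
import numpy as np
import pandas as pd

n = 1_000_000
chunk = pickle.dumps(
    pd.DataFrame({"price": np.random.random(n) * 100,
                  "size": np.random.randint(1, 1000, n)}),
    protocol=pickle.HIGHEST_PROTOCOL,
)

for name, compress, decompress in [("bz2", bz2.compress, bz2.decompress),
                                   ("lz4", lz4.frame.compress, lz4.frame.decompress)]:
    t0 = time.perf_counter()
    blob = compress(chunk)
    t1 = time.perf_counter()
    decompress(blob)
    t2 = time.perf_counter()
    print(f"{name}: ratio {len(chunk) / len(blob):.1f}x, "
          f"compress {t1 - t0:.2f}s, decompress {t2 - t1:.2f}s")
```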