Developing "Spartan"

Discussion in 'Journals' started by nooby_mcnoob, Feb 3, 2019.

  1. Sorry, I'm gradually reading through this thread from both ends :)

    It's important to say we have to separate out two components here - the first is 'how large should a trade be, given how risky it is, my account size, and my risk target'; and the second is 'should I adjust my trade size given a forecast'. I'm not dogmatic about always doing the second part of this, cf. the fixed forecasts in Systematic Trading and the 'starter system' in Leveraged Trading (which only has 'all in' trades with a fixed risk that get closed by a stop loss).

    But I firmly believe you should properly size positions for risk, independent of whether you are doing the forecast adjustment thing. And that means using a fixed % of your account value is wrong, and potentially dangerous. The correct % of your account value to use will depend on:

    - risk of the instrument (depending on whether the 1% is based on the exposure you are taking or the risk you are taking - if the latter, you can ignore this)
    - forecast horizon (the faster you are trading, the smaller the % risk on each trade)
    - risk target
    - number of instruments traded and any expected diversification benefits

    Summing all these up might give you 1% as the right answer, but it's unlikely...
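    To make that concrete, here is a minimal sketch of how some of those inputs might combine into a position size (hypothetical function and parameter names, not the exact formulas from the books):

    Code:
    # Hypothetical sketch: size a position so it runs at its share of the risk target.
    # All names are illustrative; diversification_mult stands in for the expected
    # diversification benefit across the instruments traded.
    def position_size(account_value, risk_target, instrument_vol,
                      price, point_value, diversification_mult=1.0):
        cash_vol_target = account_value * risk_target * diversification_mult
        instrument_cash_vol = instrument_vol * price * point_value  # annualised cash vol per unit
        return cash_vol_target / instrument_cash_vol

    # Example: $100k account, 20% annual risk target, instrument with 16% vol.
    # The implied % of capital at risk per position is nothing like a fixed 1%.
    units = position_size(100_000, 0.20, 0.16, price=50.0, point_value=10.0)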

    GAT
     
    #251     Nov 15, 2019
  2. Just to say that seeing people implement my ideas with their own code is utterly brilliant - the whole idea of the stuff I do is to inspire people to go off and build their own systems: both writing their own code and developing their own strategies.

    Sadly not everyone has those skills, at least initially, which is why I've also shared my code. But I will never be one of those guys who sells a shrink-wrapped system that does XYZ and expects that is what people will use (apart from anything else, writing production-quality paid-for software and having to support it is a tough way to make not much money).

    GAT
     
    #252     Nov 15, 2019
  3. So it turned out that I did not have stop limits working in live trading.

    That was not fun to discover.

    Fixed bug.

    FML.
     
    #253     Nov 16, 2019
  4. Re-ran some tests. It looks like Arctic's main benefit is that it works with pandas dataframes natively, which neatly steps around my SQLite <-> pandas dataframe problem.

    The query performance as a result was nearly instantaneous. Will start using Arctic alongside SQLite to store ticks and see if it makes me more productive.

    For completeness, here is how the data is stored in MongoDB:

    1. Each date range is a document
    2. Each column is a compressed, base64-encoded value - presumably a serialized portion of the dataframe

    So the query efficiency really seems to come down to the fact that it stores pandas dataframes natively. This is fine, I guess.
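    For illustration, a chunk document has roughly this shape (a reconstruction from poking at the collection; the key names and payloads are made up, not Arctic's documented schema):

    Code:
    # Hypothetical shape of one date-range chunk document (illustrative only).
    chunk_doc = {
        "symbol": "EURUSD",
        "start": "2019-08-19T00:00:00",
        "end": "2019-08-25T23:59:59",
        "data": {
            # one entry per dataframe column: compressed bytes, base64-encoded
            "bid": "eJzT0yMAAGTvBe8=",   # placeholder, not a real payload
            "ask": "eJzT0yMAAGTvBe8=",
            "size": "eJzT0yMAAGTvBe8=",
        },
    }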

    It is 100% worth it to me to transition to MongoDB for tick data. But then I realized... I could just do the same thing with SQLite: https://www.sqlite.org/json1.html
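    Something along these lines is what I mean - a minimal sketch only, assuming an SQLite build with the JSON1 extension; the table and column names are made up:

    Code:
    import json
    import sqlite3

    # Sketch: one row per date-range chunk, columns stored as JSON arrays.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE ticks (symbol TEXT, chunk_start TEXT, chunk_end TEXT, doc TEXT)")

    chunk = {"time": ["2019-11-16T09:30:00", "2019-11-16T09:30:01"],
             "bid": [1.1051, 1.1052], "ask": [1.1053, 1.1054]}
    con.execute("INSERT INTO ticks VALUES (?, ?, ?, ?)",
                ("EURUSD", "2019-11-16", "2019-11-17", json.dumps(chunk)))

    # Pull a single column back out with json_extract
    bids = con.execute("SELECT json_extract(doc, '$.bid') FROM ticks WHERE symbol = ?",
                       ("EURUSD",)).fetchall()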

    So the question comes back down to buy vs build.

    Hmm...
     
    #254     Nov 16, 2019
  5. Well, that was easy (I chose build). It took all of one hour to write and test; now to convert the data...
     
    #255     Nov 16, 2019
  6. I expect this code will change, but this is all of it:

    [attached screenshot: upload_2019-11-16_21-2-55.png]

    And the little "app" to convert the data:

    [attached screenshot: upload_2019-11-16_21-3-5.png]
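    For anyone who can't see the screenshots, the storage layer boils down to something like this (a rough sketch of the approach, not the exact code above; names are illustrative):

    Code:
    import bz2
    import pickle
    import sqlite3
    import pandas as pd

    # One row per (symbol, date-range) chunk; the dataframe slice is pickled
    # and compressed into a BLOB.
    def open_store(path):
        con = sqlite3.connect(path)
        con.execute("""CREATE TABLE IF NOT EXISTS tick_chunks (
                           symbol TEXT, chunk_start TEXT, chunk_end TEXT, payload BLOB)""")
        return con

    def write_chunk(con, symbol, df):
        payload = bz2.compress(pickle.dumps(df))
        con.execute("INSERT INTO tick_chunks VALUES (?, ?, ?, ?)",
                    (symbol, str(df.index[0]), str(df.index[-1]), payload))
        con.commit()

    def read_range(con, symbol, start, end):
        rows = con.execute(
            "SELECT payload FROM tick_chunks "
            "WHERE symbol = ? AND chunk_end >= ? AND chunk_start <= ?",
            (symbol, start, end))
        frames = [pickle.loads(bz2.decompress(blob)) for (blob,) in rows]
        return pd.concat(frames).loc[start:end] if frames else pd.DataFrame()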

    Loading about 90 days of ticks for a currency (with some gaps), around 8 million rows, takes about 7 seconds unoptimized. I expect pickling is a big part of that, but when I used Feather as the binary format I got segfaults. I will look into it again later, but I'm very happy with this. Glad I did the investigation into how Arctic works.

    [attached screenshot: upload_2019-11-16_21-0-45.png]
     
    #256     Nov 16, 2019
  7. After updating the chunk size, I can now load months of ticks in milliseconds. Wheeeeeee. And I don't need to run MongoDB.
     
    #257     Nov 16, 2019
  8. Super interesting result... the database size went up by only 5 GB from 100 GB after converting all the data overnight, which means that my dumdum compression works surprisingly well.
     
    #258     Nov 17, 2019
  9. Did some hard benchmarking of my duct-tape (unoptimized) solution vs Arctic and discovered that my solution runs at about half the speed of Arctic. The majority of the slowdown is due to the compression being used, which is bz2, because I'm an idiot stuck in the 90s. Now I will have to recompress using lz4.
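    Recompressing is basically a one-pass loop over the stored blobs - a sketch, assuming the lz4 package and the tick_chunks layout from the earlier sketch, not the actual migration script:

    Code:
    import bz2
    import sqlite3
    import lz4.frame  # pip install lz4

    # Re-encode every bz2 payload as lz4 in place.
    con = sqlite3.connect("ticks.db")
    for rowid, blob in con.execute("SELECT rowid, payload FROM tick_chunks").fetchall():
        recompressed = lz4.frame.compress(bz2.decompress(blob))
        con.execute("UPDATE tick_chunks SET payload = ? WHERE rowid = ?",
                    (recompressed, rowid))
    con.commit()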

    [attached screenshot: upload_2019-11-17_8-48-39.png]
     
    #259     Nov 17, 2019
  10. After switching to lz4 compression, my method is now 2x faster than Arctic.

    Next!
     
    #260     Nov 17, 2019