personal back-testing setup

Discussion in 'Hardware' started by sle, Jan 1, 2018.

  1. sle

    In cross-sectional cases where there are relatively few assets and a lot of data, it's fairly easy to chunk it (many commercial/open-source libraries do that for you). In longitudinal cases (time series against time series, which is where these large matrices come from) it naturally becomes a "special case", since I only care about a relatively small number of principal components. There are various tricks worked out by people who deal with really big datasets (e.g. the genomics and image-analysis crowds). It boils down to some sort of random subsampling or "compression" of your matrix - I think I'm using a compression-type algorithm, though I don't remember where I got it from.
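
    A minimal sketch of that kind of "compression" trick (a randomized SVD in the style of Halko, Martinsson & Tropp), written in NumPy. This is a generic illustration rather than the specific code referenced above; the matrix X, the target rank k and the oversampling p are placeholders.

    import numpy as np

    def randomized_svd(X, k, p=10, n_iter=2, seed=0):
        """Approximate top-k SVD of X (m x n) via a random range sketch."""
        rng = np.random.default_rng(seed)
        m, n = X.shape
        # Compress the range of X down to k + p dimensions with a Gaussian test matrix.
        Omega = rng.standard_normal((n, k + p))
        Y = X @ Omega
        # A couple of power iterations sharpen the basis when singular values decay slowly.
        for _ in range(n_iter):
            Y = X @ (X.T @ Y)
        Q, _ = np.linalg.qr(Y)          # orthonormal basis for the sketched range
        B = Q.T @ X                     # small (k + p) x n matrix
        U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ U_b)[:, :k], s[:k], Vt[:k, :]

    # Toy usage: leading principal components of a centered data panel (placeholder data).
    rng = np.random.default_rng(1)
    X = rng.standard_normal((5000, 2000))
    U, s, Vt = randomized_svd(X - X.mean(axis=0), k=5)

    The full matrix is only touched through a handful of matrix multiplies, so the dense SVD runs on a (k + p) x n matrix instead of the original; scikit-learn's TruncatedSVD with algorithm="randomized" does essentially the same thing if you'd rather not hand-roll it.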
     
    #21     Jan 2, 2018
  2. Mysteron

    BS
     
    #22     Jan 2, 2018
  3. sle

    You have a better numerical recipe you'd like to suggest, I take it?
     
    #23     Jan 2, 2018
  4. Mysteron

     
    #24     Jan 2, 2018
  5. sle

    I was specifically asked by @truetype how I manage to perform SVD on very large matrices to get principal components. I gave a general answer because (a) I don't know the details, since I am using someone else's code, and (b) this particular calculation no longer matters much to me, as that strategy has bled out and died. If that failed to impress, I am sorry and will try harder next time :)
     
    #25     Jan 2, 2018
    IAS_LLC likes this.
  6. sle

    So, stupid question - with the new hardware exploits (Meltdown/Spectre), Xeon chips should be cheaper now, right?
     
    #26     Jan 5, 2018