My next motherboard

Discussion in 'Hardware' started by nitro, Feb 21, 2004.

  1. nitro



    :eek: :eek: :eek:

    nitro :cool:
     
    #31     Feb 22, 2004
  2. Hi nitro,

    I have been following your rather costly hardware odyssey. You may think that I am a simpleton, but what makes you believe or hope that "unleashing 1 BILLION operations a second" on a big heap of data will give you some kind of edge?

    Be good,

    nononsense
     
    #32     Feb 22, 2004
  3. nitro

    Hi nononsense,

    1) It is amazing what we see if we look.
    2) The reason for fast CPUs and fast, carefully crafted programs is that the "market," like in quantum mechanics, does not like to leave things untidy for "too long."

    nitro
     
    #33     Feb 22, 2004
  4. prophet

    Here is some explanation of the win32 priorities:
    http://msdn.microsoft.com/library/d...setpriority_method_in_class_win32_process.asp

    If the kernel or TCP/IP stack were not running above "high" priority, then any user process at "high" priority would cause the TCP/IP stack to freeze. This obviously doesn't happen. I run the TWS API at high priority all the time with no ill effect on its net connection quality or on the network connection quality of processes running at "normal" or below.
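
    For illustration, raising a process to "high" priority is a single documented Win32 call. A minimal sketch (SetPriorityClass and GetCurrentProcess are the real Win32 functions; everything around them here is only a placeholder, not the actual TWS setup):

        // Minimal sketch: raise the current process to HIGH_PRIORITY_CLASS.
        // Error handling is kept to the bare minimum for brevity.
        #include <windows.h>
        #include <stdio.h>

        int main()
        {
            if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS)) {
                printf("SetPriorityClass failed: %lu\n", GetLastError());
                return 1;
            }
            printf("Now running at HIGH_PRIORITY_CLASS.\n");
            // ... latency-sensitive work would run here ...
            return 0;
        }

    Kernel-mode code, interrupts and driver work are still serviced regardless of user-mode priority classes, which is why the TCP/IP stack keeps running.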

    I tried to send a reply to your PM but your mailbox is full.
     
    #34     Feb 22, 2004
  5. Hi nitro,

    I can't agree more with you on your first point.

    As to your second point, I must say that calling in quantum mechanics to make your point about the "market" is plain "scientism". Quantum mechanics is a physical theory that has evolved over more than a century now. The people who contributed to quantum mechanics were rather precise and rigorous in every single one of their steps. So I fail to see any orderly line of reasoning connecting what you call "the market" with quantum mechanics.

    Now as to "the market" proper, nobody can exclude the possibility of cranking out some profit by "unleashing one BILLION cpu operations a second" at it. But the likelihood of this is not obviously greater than that of making a profit by unleashing only 0.1, or for that matter 10, billion ops/sec!

    Coming back to your first point: "It is amazing what we see if we simply open our eyes", I would add that unleashing excessive computing power without simply opening our eyes is perilous to profits.

    Be good,

    nononsense
     
    #35     Feb 22, 2004
  6. prophet

    As far as backtesting goes, it's a HUGE edge. Instead of trying to think about what might work or looking for patterns manually, I find success through finding ways to test the most parameters per second.

    For example, I'll take a batch of 17 tick-based system trade lists, generate an intraday tick-based equity series for each system (6.7M ticks over 161 days), and test each of these against 1014 stop-loss combinations. This process takes about 16 minutes using highly optimized code whose inner loops seem to fit into the L1 cache. I also use some subtle optimizations where I can infer the outcome of certain parameter settings without testing them.
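
    The inner loop of that kind of stop-loss sweep can be kept very small, which is why it fits in cache. A rough sketch of the idea in C++ (the data layout, stop handling and fill assumptions here are hypothetical, not prophet's actual code):

        // Sketch: evaluate one equity path against one stop-loss level.
        // Walk the tick-by-tick P&L and cut it off the first time the
        // drawdown from the running peak reaches the stop.
        #include <cstddef>
        #include <vector>

        double resultWithStop(const std::vector<double>& equity, double stop)
        {
            double peak = 0.0;
            for (double e : equity) {
                if (e > peak) peak = e;
                if (peak - e >= stop)          // drawdown hit the stop
                    return peak - stop;        // assume a fill at the stop level
            }
            return equity.empty() ? 0.0 : equity.back();   // stop never hit
        }

        // Sweep every stop level over every system's equity series.
        std::vector<double> sweep(const std::vector<std::vector<double>>& systems,
                                  const std::vector<double>& stops)
        {
            std::vector<double> totals(stops.size(), 0.0);
            for (const auto& eq : systems)
                for (std::size_t s = 0; s < stops.size(); ++s)
                    totals[s] += resultWithStop(eq, stops[s]);
            return totals;
        }

    The hot loop touches only a contiguous array of doubles and a couple of scalars, so it stays cache-friendly; per-trade or per-day resets are omitted in this sketch.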
     
    #36     Feb 22, 2004
  7. prophet

    I don't understand how, in general, numerical intensity is perilous to profits, or why there should be some "adequate" level of computation.

    Say you have a small edge which is valid over many issues. It makes great sense to scale it to a large number of issues and then build from there, maybe generating whole-market forecasts or examining profitability patterns across market breadth.

    My systems work best on the most liquid futures markets. Even for a handful of markets I could envision calculating a million different indicators (per tick, per market) and feeding these into a 1M-input network. I have already done this for up to 100 inputs. If done right, the network can be very robust (not overfit) and computation time (backtesting and real-time evaluation) scales as n*log(n), where n is the number of inputs (or number of issues). Thus it's quite manageable and easy to test and understand given enough CPU power.

    The pitfall is designing a system which is overly complex, taking forever to debug, test and optimize... say something that has n^2 running time or complexity.

    I don't think any argument can be made saying a certain amount of processing is enough, unless you assume the programmer is foolish. Thus the real argument is for designing robust systems that are not a computational burden to test or manage given the amount of CPU power available.
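
    For a sense of the scale behind that n*log(n) versus n^2 distinction, here is a back-of-the-envelope comparison (the "units of work" are of course only an abstraction):

        // At n = 1,000,000 inputs, n*log2(n) is roughly 2e7 units of work
        // while n^2 is 1e12 -- nearly five orders of magnitude apart, which
        // is the gap between "manageable" and "hopeless" on one machine.
        #include <cmath>
        #include <cstdio>

        int main()
        {
            double n = 1e6;                      // number of inputs / indicators
            double nlogn = n * std::log2(n);     // ~2.0e7
            double nsq   = n * n;                //  1.0e12
            std::printf("n*log2(n) = %.2e  n^2 = %.2e  ratio = %.0f\n",
                        nlogn, nsq, nsq / nlogn);
            return 0;
        }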
     
    #37     Feb 22, 2004
  8. Hi prophet,

    Here I can agree with you, as you do not dish up a wildly irrelevant analogy but talk about a case that, if needed, you could document in detail. I can more or less follow your reasoning, since you told us some time ago about your research methods. If I recall correctly, though, you did not dwell very much on computer technology. For many of us, computers are obviously an important part of our lives. Some can do with less, some need more to turn a profit.

    In fact, what made me curious about nitro's postings is that he seems to believe that extreme computational power is going to lead to success. I am still curious about his ways. His quantum mechanics example is a non sequitur in proving his point.

    Be good,

    nononsense
     
    #38     Feb 22, 2004
  9. prophet

    Exactly true. Profitable systems can be found at all levels of complexity.
     
    #39     Feb 22, 2004
  10. The fastest hardware in the world won't replace a positive mental attitude and the ability to execute a system.

    Those who focus on hardware perfection over mastering the natural flaws in human hardware are missing the point entirely.

     
    #40     Feb 22, 2004