Pseudo-random runs around Visual Studios

Discussion in 'Journals' started by TSGannGalt, May 29, 2009.

  1. Have you been drinking tonight?
     
    #61     Jul 23, 2009
  2. No...

    I'm on shrooms and vicodin, hallucinating a lot of shit. Plus, it's still 2PM here in Asia...
     
    #62     Jul 23, 2009
  3. Deterministic programming framework to help your computer do stuff for you:

    http://en.wikipedia.org/wiki/Procedural_Reasoning_System

    Starting point:

    http://en.wikipedia.org/wiki/Intelligent_agent

    Personally, the following has been my biggest challenge. I study / research this on my free time considering that I have to pay my bills and need to keep my trading operation running:

    http://en.wikipedia.org/wiki/Cognitive_architecture

    ===========================

    An "OVER-SIMPLIFIED" usage of the above would be:

    - Price Action. As a casual discretionary trader, I use price action to scalp and swing intraday markets using the DOM and charts. I rationally determine whether a support/resistance is going to hold or break out. Obviously, I have a cumulative memory of the size of price moves, remembering TOS/Depth Volume. I can have a separate agent analyzing this factor, as "a character", and have that agent cumulatively learning it.

    Each model uses the "feedback" from the agent based on the nature of the models (statistical).

    - Stock picking. Statistically, a specific product may be unreasonable to trade. But depending on the condition the stock can be very reasonable to trade. A discretionary example would be a stock that just released their earnings or when a stock that has been consistent with their "relationship" has hit a major outlier... (like a stock that historically never hit 3-sigma with another stock pair is at 5-sigma and stays there for a significant amount of time... or it suddenly is on the largest %Loser/Winner list).

    There's no prediction involved but a deterministic "feedback" provided to the models regarding the aftermath...
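    To make the idea concrete, here is a minimal sketch (not from the post; all names like `DepthAgent` and the threshold are illustrative assumptions) of one such agent: it cumulatively remembers traded volume near a price level and emits a deterministic hold/break feedback signal to a model, with no prediction involved.

```python
# Hypothetical sketch of one "agent": it cumulatively remembers
# TOS/depth volume at each price level and feeds a deterministic
# hold/break signal back to a trading model. The class name and the
# threshold value are illustrative, not from the original post.
from collections import defaultdict

class DepthAgent:
    def __init__(self, hold_threshold=10_000):
        # cumulative memory of traded volume per price level
        self.volume_at_level = defaultdict(int)
        # assumed cutoff above which the agent considers a level "held"
        self.hold_threshold = hold_threshold

    def observe(self, price_level, traded_volume):
        # Accumulate observed volume at each level over time.
        self.volume_at_level[price_level] += traded_volume

    def feedback(self, price_level):
        # Deterministic feedback, not a prediction: enough absorbed
        # volume -> expect the level to hold, otherwise expect a break.
        vol = self.volume_at_level[price_level]
        return "hold" if vol >= self.hold_threshold else "break"

agent = DepthAgent()
agent.observe(1250.0, 4_000)
agent.observe(1250.0, 7_500)
print(agent.feedback(1250.0))  # cumulative volume over threshold -> "hold"
print(agent.feedback(1251.0))  # nothing observed at this level -> "break"
```

    Each model then consumes this feedback according to its own (statistical) nature, as described above.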
     
    #63     Jul 23, 2009
  4. The two biggest challenges I have are using my resources in the most efficient manner and knowing when I have enough expertise in a subject to integrate it into my trading. I'm always trying to juggle learning about computation, statistical methods and researching the markets. I haven't

    For example, there are different degrees of robustness, and I always feel like nothing is ever robust enough: first, because I'm always learning and able to improve my models; second, because in trading there are no clear standards of expertise; and finally, because I don't know how to get help from other traders, since I'm paranoid about sharing my work.

    It seems you have taught yourself programming and are now able to automate everything. During this period, how did you plan your learning process with respect to statistics, computation, market research and the overall implementation?
     
    #64     Jul 23, 2009
  5. 1. I don't have everything automated. Let's assume that the trading process has 5 steps (Step A...E). I also have multiple routines for each step.

    When I'm running a Market Making model, I have Steps A, B, D automated, while C and E require human intervention. For a simple intraday scalping model, I have C, D and E automated, while Steps A and B require human intervention. The list goes on... depending on the type of model and the style of trading I'm trying to automate, some aspects are automated and some are not.

    Let's assume that Step C represents an on/off switch for a model, and that I delegate Step C to myself for my intraday mean reversion models. Obviously, late last year was a disaster for a lot of the RTM models due to the crazy volatility. The models were not able to cope with the series of outliers, so I manually turned all of them off. I don't have the ability/resources to have the AIs detect the market condition from the news and economic conditions we went through.

    Though, for outright momo. scalping, I have Step C completely automated. I have the AI looking into the order flow, etc., to determine whether or not the models should be trading the current market. I had the momentum-dependent scalp models riding and fading momentum while the RTM was shut down. (The fact of the matter is intraday momentum is easier to "handle" than volatility... because I am more experienced with it.)
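    The A–E split described above can be sketched as a simple table of which steps each model automates; everything else gets flagged for human intervention. This is a toy illustration, not the author's actual system, and the model names are placeholders.

```python
# Illustrative sketch of the Step A..E split: each model declares which
# steps are automated; the dispatcher routes the rest to a human.
PIPELINE = ["A", "B", "C", "D", "E"]

AUTOMATED = {
    "market_making":  {"A", "B", "D"},  # C and E stay manual
    "intraday_scalp": {"C", "D", "E"},  # A and B stay manual
}

def run_pipeline(model):
    # Return, in order, which actor handles each step of the process.
    return [(step, "auto" if step in AUTOMATED[model] else "human")
            for step in PIPELINE]

print(run_pipeline("market_making"))
# [('A', 'auto'), ('B', 'auto'), ('C', 'human'), ('D', 'auto'), ('E', 'human')]
```

    Keeping the automation plan as data like this makes it cheap to move a step (like the Step C on/off switch) between human and machine per model.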

    2. Learning process... I really don't know what to say. I have a set of things I want to do, and I learn the skills to implement them. I wanted to develop models like Toby Crabel and Alan Crary, so I learned statistics. I wanted to develop models like Mark Brown, so I learned about AIs and other IT perks. I needed to automate my trades, so I learned how to program in C#. I needed to implement a high speed OES/OMS, so I learned a bunch of stuff and realized that I can't get this done solo, so I worked out a deal with the sell-side institutions to get them implemented (they want my business as much as I need their inside expertise on high volume exchange connectivity technologies...). I think it goes with any kind of learning process that you need a goal first and then work your way up.

    It all comes down to the fact that I'm very lazy and I hate manual routines.
     
    #65     Jul 23, 2009
  6. Thanks!
     
    #66     Jul 24, 2009
  7. I'm reading too much BS on ET about... obviously it's nothing new, but... the significance of backtesting on long historical data and running an out-of-sample test.

    Too many posters here present 10+ years of historical backtesting as a necessity, but most of them aren't in a position to give anyone advice because they have it all wrong.

    The length of the initial testing does not matter.

    Taking a 10+ year initial backtest and out-of-sample testing it on recent data does not matter by itself. It's the "mind" put in between the initial backtest and the out-of-sample test... even more, the implementation criteria, that matters...

    Seriously... who has tested this and is confident enough to provide proof that backtesting on a long dataset = probably robust???? I've done my homework, but ET tells me different things... It's not funny how much BS I read on ET regarding conventional knowledge... No one tests... or most pretend like they do... It's pathetic!!!

    I'll post some more... in the next few days... * I have a life and need to pay my bills...

    Seriously... all these motha' fuckers need to test. How many times do I have to mention, "TEST EVERYTHING!!!" Do I need to show all you fuckers that out-of-sample testing is "trivial" and testing models over long periods of time is "trivial"... (Meaning there's some truth to it, if you realize the flaws of running it and objectively reduce the risk...) On the flip-side, it's kinda kewl that I'm still smarter than EVERYONE in ET.

    I am the ET Gawd!!!! (next to Baron)
     
    #67     Aug 4, 2009
  8. The following is a test I've done...

    1. The top row is the length of the test period over which the model initially ran its backtest. The left column is the model that the program optimized.

    So the computer would randomly pick a starting time for the test and run the backtest over that length of time. The result is the average annualized return for each test. The test was run 100,000 times for each cell to keep the statistical measure as unbiased as possible. In other words, the program iterated 4,200,000 times to come up with the figures. Not to mention... it also randomly picked the market to trade, out of 50 markets across futures, equities and FX...

    2. From each iteration, it took the best-fitting optimization parameters and ran them out-of-sample for a year. The second set of numbers is the return averaged over the set of 100,000 iterations.

    3. Finally, I took another year and checked whether a model would be profitable if both the initial test and the out-of-sample test were profitable...

    The result is quite interesting, not for me but I think it would be for some numb minds... there's not much edge in running an out-of-sample model on a reasonable amount of historical data....
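    The shape of the experiment can be reconstructed in miniature (this is a toy sketch, not the author's code, and the return-generating process here is deliberately pure noise): randomly pick a backtest window, take the best result, out-of-sample it for a year, then check whether passing both filters predicts the following year.

```python
# Toy reconstruction of the experiment's structure: "backtest" over a
# window of a given length, out-of-sample for a year, then tally whether
# in-sample + out-of-sample profitability predicts the following year.
# Returns are random noise, which is the null hypothesis the post argues:
# on noise, the conditional hit rate stays near 50% regardless of the
# backtest length. All numbers here are illustrative assumptions.
import random

random.seed(0)

def annual_return():
    # placeholder for a model's realized annual return on random data
    return random.gauss(0.0, 0.15)

def one_trial(backtest_years):
    insample = sum(annual_return() for _ in range(backtest_years)) / backtest_years
    outsample = annual_return()   # next year, with the best-fit parameters
    followup = annual_return()    # the year after that
    return insample, outsample, followup

def conditional_hit_rate(backtest_years, trials=100_000):
    # P(follow-up year profitable | backtest AND out-of-sample profitable)
    passed = hits = 0
    for _ in range(trials):
        ins, oos, fwd = one_trial(backtest_years)
        if ins > 0 and oos > 0:   # model passes both filters
            passed += 1
            hits += fwd > 0
    return hits / passed

# Both a short and a long backtest window land near 0.5 on noise.
print(conditional_hit_rate(2))
print(conditional_hit_rate(10))
```

    The point of the sketch: if the underlying returns carry no edge, neither a longer backtest nor an out-of-sample pass raises the odds for the following year, which is the bias the post says simple objective testing exposes.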

    So yeah... all you fuckers debating me about using a lot of historical data are a bunch of morons. EAT SHIT!!!!! Who the fuck started talking about out-of-sample testing and long historical backtesting? Are they running tests??? NO

    But really... there's an exposable bias here. And it doesn't require any hardcore IT stuff like I always mention. It's a matter of simple "objective testing" using your brain...

    More coming soooooooooooooon....
     
    #68     Aug 4, 2009