Just another trading platform - But this time different!

Discussion in 'Automated Trading' started by mhtrader, Jan 19, 2011.

  1. mhtrader

    Hi guys,

    I'm building a trading platform that will shake things up. At this moment I'm working on the back-testing part.

    Why another platform? Because I know where the industry is lacking!

    Here are some of the features:
    -User-written programs/strategies can run on the server AND on the client computer.
    -User-written programs are not scripts. They are COMPILED for the CPU they run on. No Java and no .NET; this is the real deal!! It means you will be able to use your 64-bit computer(s) at full steam.
    -Event-based strategies: your program will react to events like orderFilled and newTick, and the behavior is guaranteed to be the same for back-testing and real-time (see the sketch after this list).
    -Use of a huge amount of cached calculation. We already have the back-end for this. (As of now we haven't decided whether optimization will run on the client side only, but that could be the case.)
    -You can use more than one computer for total collaboration and sharing of calculation. They will use an architecture similar to the one we use for our servers.
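
    To make the event-based point concrete, here is a minimal sketch of what such an interface could look like. The names (Strategy, onNewTick, onOrderFilled, BuyTheDip) are illustrative, not our actual API. The key idea: a backtest replay loop and a live feed drive the same strategy object the same way, which is what makes the behavior identical in both modes.

    Code:
    #include <cstdio>

    // Hypothetical event interface; names are illustrative only.
    struct Tick { double price; long long volume; };

    struct Strategy {
        virtual void onNewTick(const Tick& t) = 0;
        virtual void onOrderFilled(int orderId, double fillPrice) = 0;
        virtual ~Strategy() = default;
    };

    struct BuyTheDip : Strategy {
        double lastPrice = 0.0;
        void onNewTick(const Tick& t) override {
            // Toy logic: react to a 1% drop from the previous tick.
            if (lastPrice > 0.0 && t.price < lastPrice * 0.99)
                std::printf("dip at %.2f: would submit a buy order\n", t.price);
            lastPrice = t.price;
        }
        void onOrderFilled(int orderId, double fillPrice) override {
            std::printf("order %d filled at %.2f\n", orderId, fillPrice);
        }
    };

    int main() {
        BuyTheDip s;
        // In a backtest this loop reads recorded ticks; live, the feed
        // handler makes the same calls, so behavior is identical.
        Tick replay[] = {{100.0, 1}, {98.5, 2}, {99.0, 1}};
        for (const Tick& t : replay) s.onNewTick(t);
        return 0;
    }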

    We are planning to launch a Beta before the end of the year.

    I would like to hear some opinions and recommendations.

    Thanks,
    ~mhtrader~


    PS: I think I posted this in the wrong forum before.
     
  2. Any strategy written in a user-facing strategy language will not be flexible enough to build a decent strategy that makes money. Why bother?
     
  3. mhtrader

    Hi Runningbear,

    I'm not sure what your experience is. I came here looking for recommendations and ideas about must-have features that no one is offering. I'm most interested in the features that most platform-development companies deem "impossible" to build with current technology.

    I don't agree with you. I worked 8 straight years in that area (backtesting, compilers, huge amounts of real-time data, you name it), and I never found a platform up to the task... I would rather not hint at where I come from, for obvious reasons.

    By the way, any user-written language can be compiled to machine code; EasyLanguage (EL) is an example of it (though it still has a lot of limitations).

    Comments on my previous post or any recommendations are welcome.

    Thanks,
    ~mhtrader~
     
  4. If you want any respect (or response) from the guys in this forum who actually know what they are talking about, I suggest you do some basic research first.

    The .NET JIT compiler turns bytecode into native code that is specific to the machine's CPU architecture. In some instances it can make use of CPU optimisations which are very tricky to exploit from statically compiled C++. Thus, in some instances, .NET code will actually run faster than compiled C++.
     
  5. Right, .NET is perfectly fine for the purpose.

    Some of my "users" are still wondering how I can process such amounts of data, like 100 or more years of millisecond tick data, at such incredible speed in a fully graphical environment without blowing up or taking much memory. Well, .NET is the answer (though clearly you must also know what you are doing)!
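
    For what it's worth, the main trick behind processing decades of tick data in bounded memory is language-agnostic: stream the data in fixed-size chunks instead of loading it all. A minimal sketch (in C++ only for concreteness; the file name and record layout are made up):

    Code:
    #include <cstdio>
    #include <vector>

    // Hypothetical fixed-size tick record and file name.
    struct Tick { long long ts_ms; double price; };

    int main() {
        std::FILE* f = std::fopen("ticks.bin", "rb");
        if (!f) return 1;
        // Read in chunks: memory use is bounded by the buffer,
        // not by the size of the history on disk.
        std::vector<Tick> buf(64 * 1024);
        double sum = 0.0;
        long long n = 0;
        std::size_t got;
        while ((got = std::fread(buf.data(), sizeof(Tick), buf.size(), f)) > 0) {
            for (std::size_t i = 0; i < got; ++i) { sum += buf[i].price; ++n; }
        }
        std::fclose(f);
        if (n > 0) std::printf("ticks=%lld avg=%.4f\n", n, sum / n);
        return 0;
    }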

    Especially if you work with Windows, it is for sure currently the best way.

    Tom
     
  6. That's true.

    If not even the creator of the program is able to provide a profitable strategy, how could one expect the final user, who has little clue about what is going on (and probably 1/(10^6)-th of the specific knowledge of the creator :)), to be profitable?

    Tom
     
  7. It is true that the .NET JIT does some special optimizations:
    * Processor-specific optimizations: at run time, the JIT knows whether or not it can make use of SSE or 3DNow! instructions. Your executable will be compiled specifically for a P4, an Athlon, or any future processor family. You deploy once, and the same code will improve along with the JIT and the user's machine.
    * Optimizing away levels of indirection, since function and object locations are available at run time (a statically compiled analogue is sketched after this list).
    * The JIT can perform optimizations across assemblies, providing many of the benefits you get when compiling a program with static libraries while maintaining the flexibility and small footprint of using dynamic ones.
    * Aggressively inlining functions that are called most often, since the JIT is aware of control flow at run time. These optimizations can provide a substantial speed boost, and there is a lot of room for additional improvement in vNext.
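
    The indirection point has a compile-time analogue in C++ that makes it easy to see what the JIT buys you at run time. A minimal sketch (the Indicator/Ema names are invented): marking a class final lets an ahead-of-time optimizer devirtualize and inline calls it can trace to that type, while the .NET JIT can reach the same conclusion without any annotation because it sees the concrete object type while the program runs.

    Code:
    #include <cstdio>

    struct Indicator {
        virtual double value(double px) = 0;
        virtual ~Indicator() = default;
    };

    // 'final' lets the static optimizer devirtualize (and then inline)
    // calls it can prove reach an Ema.
    struct Ema final : Indicator {
        double alpha;
        double state = 0.0;
        explicit Ema(double a) : alpha(a) {}
        double value(double px) override {
            state += alpha * (px - state);
            return state;
        }
    };

    int main() {
        Ema ema(0.1);
        std::printf("%f\n", ema.value(100.0));   // direct call, inlinable
        Indicator* ind = &ema;
        std::printf("%f\n", ind->value(101.0));  // indirect unless proven
        return 0;
    }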

    It is also true that the JIT optimizer is, rightly, limited to shallow, high-return/low-cost optimizations. This limitation comes from the fact that the user is already waiting, so it is counterproductive to spend a few seconds of compile time to save a few milliseconds of run time.

    Dr. Dobb's had an article showing various primitive benchmarks (relevant here, since trading systems do a lot of math and not much else). In the first few figures, C++ on Linux kicked butt; C++ on Windows was usually 2nd, C# 1.1 3rd, C# 2.0 4th, and all the Javas were far behind.

    I am a fan of C#, but I have seen platforms before that were extensible using C++ for performance's sake. More power to him.
     
  8. LeeD

    Hello, an interesting post. Is it going to be an in-house platform or are you planning to sell it? If it is in-house, then it's very difficult to claim "unique features": you can't compare it to every algo-trading shop around, as people keep their secrets. If it is for sale, let's go through the features.

    On strategies running on the server AND on the client: this is how all "server" software is created. The developer's PC acts as a server for the developer; once the software is ready, it's deployed to the server.

    On compiled user strategies: every platform that accepts trading strategies compiled into a DLL does it, starting with TradeStation, which did it over 10 years ago...

    On another topic, I don't share your frowning upon .NET and Java. It's true that they are not the leading technologies in HFT: because of processes happening internally in the virtual machine, it is near impossible to guarantee response time. But then Windows is a handicap too, because of multiple processes having to share the processor, and so is Linux.

    An event-based approach was a part of QuantFactory and OpenQuant for over 5 years and is a part of the current NinjaTrader.

    On cached calculations: if you are talking about simple things like aggregation (say, building 60-tick bars from tick data; a sketch follows below), specialised database products do the aggregation in less time than it takes a layman's application to read the aggregated data from disk (and these databases cache things too). Further, the most time-consuming calculations are likely happening inside the user's trading strategy. Do you actually parse and optimise the user's code before it is compiled so that it can take advantage of caching? That is a very non-trivial task, with a high risk of making the resulting code slower.
    Also, every platform that allows optimising trading strategy parameters uses some sort of caching for data. If users choose to do some extra work, a number of platforms let them manually cache further intermediate calculations. To reiterate, this feature would only be new if it could parse user-written code and automatically decide what to cache based on the logic.
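
    For reference, here is roughly what the tick-bar aggregation mentioned above looks like; a minimal one-pass sketch, not any particular product's implementation:

    Code:
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Tick { double price; long long volume; };
    struct Bar { double open, high, low, close; long long volume; };

    // Fold a tick stream into n-tick bars: one pass, O(1) work per tick.
    std::vector<Bar> toTickBars(const std::vector<Tick>& ticks, int n) {
        std::vector<Bar> bars;
        int count = 0;
        for (const Tick& t : ticks) {
            if (count == 0)
                bars.push_back({t.price, t.price, t.price, t.price, 0});
            Bar& b = bars.back();
            b.high = std::max(b.high, t.price);
            b.low = std::min(b.low, t.price);
            b.close = t.price;
            b.volume += t.volume;
            if (++count == n) count = 0;
        }
        return bars;
    }

    int main() {
        std::vector<Tick> ticks = {{100, 1}, {101, 2}, {99, 1}, {102, 3}};
        for (const Bar& b : toTickBars(ticks, 2))  // 2-tick bars for brevity
            std::printf("O=%.1f H=%.1f L=%.1f C=%.1f V=%lld\n",
                        b.open, b.high, b.low, b.close, b.volume);
        return 0;
    }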

    Distributed optimisation... This is something NeoTicker has been doing for 6 years or more.

    I don't mean to be negative. Creating a trading platform from scratch is an interesting and challenging endeavour. Good luck making a profitable enterprise too.

    However, as I have pointed out, every one of the features exists in one of the well-established platforms. Every feature (except for running code on a server) is available in a "retail" platform.

    OK, I don't know of a platform that has all these features in one, but is that enough for a unique selling point?
     
  9. You make an important point, LeeD, about response time. Garbage collection causes major response-time problems: being per-process, it halts all of your threads, and a collection can take long enough to cause communication issues beyond mere latency.
    Accessing a trading system class through virtual members or events works out to about the same cost. One would have to compile a largish chunk of platform code into the DLL to avoid the run-time costs of extensibility; the sketch below shows the boundary in question.
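
    A sketch of the conventional shape of such a DLL boundary, to make the cost concrete; every name here is invented for illustration, not any real platform's API:

    Code:
    // The host resolves one flat C entry point (GetProcAddress/dlsym)
    // and then pays one indirect virtual call per event, the run-time
    // cost of extensibility described above.
    struct Tick { double price; long long volume; };

    struct IStrategy {
        virtual void onTick(const Tick& t) = 0;
        virtual void onOrderFilled(int orderId, double fillPrice) = 0;
        virtual ~IStrategy() = default;
    };

    // Inside the DLL: the user's strategy plus whatever platform code it
    // statically links to avoid crossing the boundary on every call.
    struct MyStrategy final : IStrategy {
        void onTick(const Tick&) override {}
        void onOrderFilled(int, double) override {}
    };

    extern "C" IStrategy* createStrategy() {
        // Caller takes ownership; a matching destroy function would
        // normally live in the DLL too, to keep allocators paired.
        return new MyStrategy();
    }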
    At CSI, I wrote the interpreted back-testing system so that the customer would not need object-oriented programming knowledge. I initially used Perl, with a whole back-testing/scanning/optimization framework written in Perl. To open things up to more languages, I also offered the Microsoft Script Control, which allowed VBScript, JScript/JavaScript, PerlScript and, extensibly, others. To encourage adoption, I then wrote another option: an interpreter for VBScript with extensions that made most EZ Language code work. It even had an Edit and Continue ability, well ahead of its time. The facility was never widely adopted; wrong audience. If I still worked there, I would go the C# route, since the compiler is freely available and performance isn't that critical when back-testing/scanning. For HFT, I can't see using anything that doesn't run unmanaged.
     
  10. rosy2

    Highly agree. Most C++ written in this industry is not targeting any particular CPU; it's just loaded up with STL and Boost. As for the JVM, the longer it runs, the faster it gets.
     