Old-timers in the industry: what is the best way to architect code?

Discussion in 'Automated Trading' started by garchbrooks, Feb 27, 2010.

  1. And by architect, I mean how do you avoid bubble-ups from core code in the trading world affecting everything downstream across all the various interfaces you use to trade?

    I know the obvious answer, but everyone knows the obvious answer, and yet it seems unavoidable that small feature changes eventually require recompilation and redesign across the board. In the boring software world, it was okay to use stuff like COM and upgrade interfaces and such; however, given that performance is very important, there's less incentive to use COM-style interfaces when building all the various components.

    What I find is that "old assumptions" can dog new trading systems as code evolves, and rewrites are expensive, time consuming, and error prone. And feature-bloat isn't something you can address just by playing games with compile-time preprocessor defines.

    Or, let's rephrase the question a bit -- say you bought into the popular technique of the day: is it then unavoidable that in five years' time you are locked into yesterday's technique? At what point do you call it a day and do a rewrite?
     
  2. The only way to reduce the risk of a huge rewrite is to have your software composed of loosely coupled modules. You may remove and/or update a module, but the rest is unaffected.

    That architecture is also much more scalable, which is good on multi-core systems.

    None of that should be news to someone with modest programming skills.
     
  3. pkwumo

    Yes, I'll second that reply. Loose coupling and tight cohesion are the hallmarks of robust software design. In the context of trading systems, this requires you to have a decent middleware component that allows loosely coupled trading server processes to publish and subscribe messages to each other. Building the messaging middleware is the hard part in my view. If you don't have the resources to build this yourself, then take a look at some of the messaging vendors (Solace, TIBCO, etc.).
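
    To make that concrete, here is a rough in-process sketch of the kind of pub/sub seam being described. The names (message_bus, handler, etc.) are made up for illustration; real middleware such as Solace or TIBCO would carry this across processes with queues and wire serialization.

    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    // Components talk to the bus, never directly to each other.
    class message_bus {
    public:
        using handler = std::function<void(const std::string& payload)>;

        // A consumer registers interest in a topic without knowing who publishes it.
        void subscribe(const std::string& topic, handler h) {
            subscribers_[topic].push_back(std::move(h));
        }

        // A producer publishes without knowing who (if anyone) is listening.
        void publish(const std::string& topic, const std::string& payload) const {
            auto it = subscribers_.find(topic);
            if (it == subscribers_.end()) return;
            for (const auto& h : it->second) h(payload);
        }

    private:
        std::map<std::string, std::vector<handler>> subscribers_;
    };

    Swapping out, say, the order router then only touches whatever subscribes to its topics, not every module in the system.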
     
  4. Well, this is what I was referring to when I said the obvious answer. But it isn't so simple, at least, I don't think so.

    Let me give you a (possibly poor) example: Win32. If you, for example, designed around an event-based system, but then find that certain scenarios make the event multiplexing in a loop seriously inefficient, you have a core, fundamental problem. Ok, fine you say, you can change that event loop, maybe add threads, and so on and so forth -- but, unfortunately, you wrote a bunch of other modules elsewhere that post events to this event loop. Now that change has to bubble up to your modules, and your modules need a shim or a new piece of code inserted to facilitate a transition from the old interface. And if you have a module on top of a module on top of the module that worked around that event loop and posted events to it, you have a more serious problem.

    But there are more examples I've seen in the wild than just this, so I'm not referring to just this as an example. I'm saying that there are real paradigm shifts that happen where fundamental assumptions made about the structure of the code can cause serious issues later on.
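
    For reference, the kind of seam everyone assumes will contain such a change looks roughly like the sketch below (names made up). The point above is that when the assumption behind the seam itself breaks -- say, posting has to become batched or lock-free -- even this doesn't save you.

    #include <mutex>
    #include <queue>

    struct event { int type; /* payload omitted */ };

    // The only thing upstream modules are compiled against.
    class event_sink {
    public:
        virtual ~event_sink() = default;
        virtual void post(const event& e) = 0;
    };

    // The original single-threaded loop and a later thread-safe replacement can
    // both sit behind the same interface, so modules that post events don't change.
    class locked_queue_sink : public event_sink {
    public:
        void post(const event& e) override {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(e);
        }
    private:
        std::mutex mutex_;
        std::queue<event> queue_;
    };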
     
  5. (Moreover, these "fundamental assumptions" may have been insignificant issues at the time of design.)
     
  6. ::however, given that performance is very important, there's less
    ::incentive to use COM-style interfaces building all the various
    ::components.

    And why would that be?

    Seriously, get the basics straight.

    * COM in the core is just the use of function tables (vtables). Basically using "interfaces" in the language. This is the core - the rest is tacked on.
    * Compilers are very good at optimizing out small function calls to other classes.

    So, if you develop COM style (i.e. using interfaces etc.) in a language like C++ or even C#.... the runtime may kill most of the overhead for you (yes, even in C# - not in the debugger, but attaching to a real process you often see "function could not be evaluated because it has been inlined" or so - basically no single stepping anymore).

    Plus, the overhead is trivial. Unless you are hosted at the exchange, a 0.5ms or so overhead will not make any difference. Not if you are behind a 40ms network delay to start with ;)

    Go proper OO, identify components, use unit tests to make sure the basics (not the trading - that may be too complicated) work well (stuff like order handling etc.). Care about performance when you realize you have a problem there - most likely you will not (that is, unless you write UI - then better get that fully decoupled from the backend trading, thread-wise, and use multiple message pumps).
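
    For the avoidance of doubt, "COM style in the core" means nothing more exotic than the following sketch: an abstract interface (i.e. a vtable) with the implementation hidden behind it. The names are illustrative.

    class IOrderHandler {
    public:
        virtual ~IOrderHandler() = default;
        virtual void submit(long qty, double price) = 0;
        virtual void cancel(long order_id) = 0;
    };

    class FixOrderHandler : public IOrderHandler {
    public:
        void submit(long qty, double price) override { /* build and send the order */ }
        void cancel(long order_id) override { /* send the cancel request */ }
    };

    // Callers are written against the interface; a decent optimizer can
    // devirtualize and inline these calls when the concrete type is known.
    void flatten(IOrderHandler& handler, long position) {
        if (position != 0) handler.submit(-position, 0.0 /* market */);
    }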
     
  7. gigi

    NetTecture is right, COM is not slow. As a matter of fact, DirectX is COM based. You probably mean OLE Automation, and that indeed is slow (because it's dynamic), but that's something built on top of raw COM.

    That said, I'm not recommending COM, but not for performance reasons.

    About compile times, who actually cares? If you have a true money-making system, you have money for a build cluster (we used IncrediBuild, it does wonders). Again, I'm not advocating slow build times.

    And performance: if you actually need it, as in HFT, then EVERYTHING you do is with an eye on performance. That means no stupid design patterns, inversion of control, layer based architecture and so on.... You'll have a monolithic trading kernel with everything integrated in it, and with special access to the hardware.

    And if you don't do HFT, you might as well code in Python, since language speed will not be an issue.

    What I mean is that if you need performance, you throw all the "good" practices away and you start to micro-optimize the code. I've been there, I've squeezed the last bit of performance from a system (for a different industry, not for trading/finance) using stuff like threads, custom pooled memory allocators, SSE and GPUs. The code was clean and easily maintained, but there were no deep virtual class hierarchies and stuff like that.
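
    As a flavour of what a "custom pooled memory allocator" means in practice, here is a bare-bones sketch. It is illustrative only -- no thread safety, no growth, no alignment handling -- and the names are made up.

    #include <cstddef>
    #include <vector>

    // Pre-allocate one contiguous block of fixed-size slots and hand them out
    // from a free list, so the hot path never touches malloc/free.
    class fixed_pool {
    public:
        fixed_pool(std::size_t slot_size, std::size_t slot_count)
            : storage_(slot_size * slot_count), slot_size_(slot_size) {
            for (std::size_t i = 0; i < slot_count; ++i)
                free_list_.push_back(storage_.data() + i * slot_size_);
        }

        void* allocate() {
            if (free_list_.empty()) return nullptr;   // pool exhausted
            void* p = free_list_.back();
            free_list_.pop_back();
            return p;
        }

        void deallocate(void* p) { free_list_.push_back(static_cast<char*>(p)); }

    private:
        std::vector<char> storage_;      // allocated once, up front
        std::size_t slot_size_;
        std::vector<char*> free_list_;   // O(1) push/pop, no system calls
    };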
     
  8. 377OHMS

    One observation I can contribute is that most of the time the engineering trade-off is modularity vs. speed. Most of the attempts at modularity I have been involved with end up a little slower than something completely nested.

    I personally like processes that go out and locate the objects of interest and create them if they don't exist, a la OOP constructors. Sometimes those processes can't be implemented in the main thread due to the overhead needed to find those objects.

    I go as modular as can be tolerated given conditions.
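
    Something along these lines is what locate-or-create amounts to (sketch only, names made up); the lookup and construction is exactly the part that may be too expensive for the main thread.

    #include <map>
    #include <memory>
    #include <string>

    struct instrument { std::string symbol; /* contract specs, tick size, etc. */ };

    // A keyed registry: callers ask for the object of interest and never care
    // whether it already existed or had to be constructed on the spot.
    class instrument_registry {
    public:
        std::shared_ptr<instrument> get(const std::string& symbol) {
            auto it = instruments_.find(symbol);
            if (it != instruments_.end()) return it->second;
            auto obj = std::make_shared<instrument>();
            obj->symbol = symbol;
            instruments_[symbol] = obj;
            return obj;
        }
    private:
        std::map<std::string, std::shared_ptr<instrument>> instruments_;
    };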
     
  9. The real key to high-performance, real-time and low-latency systems is the decoupling I mentioned. If there's a component "in the middle" that slows things down, it means you either didn't design that module right, or that the overall module breakdown is poor.

    For my trading software, I have spent hundreds of hours making it responsive and fault tolerant. I know no other way to deal with it than to constantly think, re-think, and measure performance.

    Just one bit of empirical evidence of how hard it can be to scale and run fast: I used to run my core live trading engine on a dual-core Xeon system. I basically ran something like 300 systems in real time, each taking an average of 5 ms to generate orders.

    I then upgraded to an 8-core Xeon system and expected it to be faster, but it wasn't. It was actually a little slower! As it turned out, one of my scalability assumptions that held on a 2-core Xeon didn't hold on an 8-core. Following the advice I gave initially, I simply duplicated (actually triplicated) the module that was now the bottleneck, and the system was screaming again.
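
    The fix looked roughly like the sketch below (heavily simplified, names made up): run N identical copies of the hot module and shard the trading systems across them, instead of funnelling everything through one instance.

    #include <cstddef>
    #include <memory>
    #include <vector>

    class signal_engine {        // stand-in for the module that became the bottleneck
    public:
        void process(int system_id) { /* generate orders for one trading system */ }
    };

    class sharded_engine {
    public:
        explicit sharded_engine(std::size_t copies) {
            for (std::size_t i = 0; i < copies; ++i)
                workers_.push_back(std::make_unique<signal_engine>());
        }

        // Each trading system is pinned to one copy, so the copies never contend
        // with each other and the extra cores actually get used.
        void process(int system_id) {
            workers_[static_cast<std::size_t>(system_id) % workers_.size()]->process(system_id);
        }

    private:
        std::vector<std::unique_ptr<signal_engine>> workers_;
    };

    In the real system each copy sits on its own thread or core; the point is only that callers cannot tell the duplication happened.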

    There is never *one* answer, and what's optimal today will likely be sub-optimal tomorrow. Just try to load a system with a dual-channel memory controller and compare to a tri-channel memory controller (such as an i920) and you'll be surprised how the stress points move.

    Now let's make some money with all these cycles being spent!
     
  10. cashcow

    As per the other threads, good software design principles are best. Most of all, to stop degradation of a system I would recommend the following (in C++, but the same principles apply in other languages):

    Make everything private/protected unless it actually is required to be exposed, and ensure that any changes to private scope are cleared with an architect first. For example:

    class result_source {   // illustrative class name
    public:
        // Public non-virtual entry point; the overridable part stays protected.
        int get_result() { return this->inner_get_result(); }

    protected:
        virtual int inner_get_result() = 0;   // pure virtual - body supplied below
    };

    // Yes, pure virtuals can have a body, but it must be defined out of class.
    int result_source::inner_get_result()
    {
        throw my_bad_result_exception();
    }

    The above stops virtuals cluttering the public interface - a common problem when the virtuals are intended for internal use but start getting called from everywhere.

    Also, in C++ try to use the private implementation (pimpl) idiom. Not only does this decrease compile times for large projects, but it also ensures that private implementation details don't leak into the header.
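
    A minimal sketch of the idiom (illustrative class, one header/source pair assumed): only the pointer appears in the header, so the private members can change without forcing clients to recompile.

    // --- order_book.h ---
    #include <memory>

    class order_book {
    public:
        order_book();
        ~order_book();                 // defined in the .cpp, where impl is complete
        void add(double price, long qty);
    private:
        struct impl;                   // only the name appears in the header
        std::unique_ptr<impl> impl_;
    };

    // --- order_book.cpp ---
    #include <map>

    struct order_book::impl {
        std::map<double, long> levels; // private details live here, invisible to clients
    };

    order_book::order_book() : impl_(new impl) {}
    order_book::~order_book() = default;
    void order_book::add(double price, long qty) { impl_->levels[price] += qty; }

    Changing impl now only recompiles order_book.cpp, not every file that includes the header.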

    Although the two examples above stop code degeneration, they do (sometimes - most compilers can optimize some of it away) incur a small performance hit.

    As for performance, a good modular design can generally be made more performant. For most projects this should be dealt with afterwards; a good profiler (such as Intel VTune) allows bottlenecks to be easily identified and targeted. A common mistake is to target performance too much before the code is written. Often the performance bottlenecks are not where you expected.
    Of course, some advanced projects do require an amount of design specifically targeted at performance, but on the whole a lot of time is wasted on premature optimization.
     