Dedicated Line to IB Data Centers

Discussion in 'Automated Trading' started by jhukov, Oct 7, 2014.

  1. Butterfly

    Again, more confusion from the usual MS drones here. Strict typing is NOT going to bring any improvement in code execution, but it's a great technique for code integrity. If only you had actual programming experience, you would know that.

    Another confusion here is the idea that Python could replace Windows-based development code. The "heavy" apps on Windows will have thousands of lines because that's how the Windows development "framework" is structured, therefore you will need a Visual Studio type of environment for development.

    But again, in a headless server environment the code is much simpler; you no longer need to generate thousands of lines for silly Windows decorum, placement, and interactions.

    So in the case of a simple OMS, the front end could be a simple Java app that would manage the order parameters and feedback through different windows, and then the actual code to process the orders, with attached algos if they were needed, would run on a headless server in Python.

    and speaking of dying languages like C# and Java in a Windows environment, HTML5 is taking over as a rich client, so the code logic running inside your browser, mostly JS, will not even need hundreds of thousands of lines to deliver the same functions and the same feel as a Windows "heavy" app.

    the fashion for those "fat" languages is over; it's so 1990s, and we are in the 2010s
     
    #71     Oct 21, 2014
  2. Are you retarded?

    a) Dynamic typing is unrelated to code integrity: code integrity reflects the quality of code and is usually the result of unit testing and the passing thereof. If anything, dynamic typing leads to more bugs, not fewer. With compile-time type checking you avoid bugs rather than introducing more. Every beginning programmer would know that.

    b) The development environment has nothing to do with how complex an application is. Inarguably (and almost every C++ coder concedes the point), Visual Studio is one of the most advanced and feature-rich development environments there is. Just think about debugging async or parallel code, or debugging a race condition. Show me an IDE that is more feature-rich than VS and its many add-on products. Where else can you visualize task execution and completion? And yes, huge and complex applications require a feature-rich IDE, no question.

    c) You need the same thousands or millions of "silly lines of code" (as you put it) whether you run code on a Linux box or a Windows box in order to run a complex application. Any complex application will necessarily have lots of lines of code, whether it be Python, C++, C#, or Java. Code reuse is encouraged in any development environment, which is precisely why nobody nowadays writes multi-threaded and async apps without the toolboxes and libraries that come with C++, Java, or C#.

    d) There is no simple OMS. The OMS is one of, if not THE, most complex and error-prone components of ANY execution and trading environment. Again, NOBODY (except maybe you) writes an OMS in Python. I think there are plenty of seasoned developers and HFT traders on this thread who have already pointed out that you are simply mistaken.

    e) Nobody in a bank or hedge fund environment writes HTML5 code for anything other than front-end data serving. All the hard work and heavy lifting in the background runs on C++, Java, or C# algorithms.
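    Point (a) can be sketched in a few lines of Python (the function and values here are made up purely for illustration): the kind of type mismatch that a statically checked language rejects at compile time only surfaces at runtime in Python, and only on the code path that triggers it.

    ```python
    # Hypothetical example: a type bug that compile-time checking would reject
    # outright, but that dynamic typing lets ship and fail only at runtime.
    def add_fee(price, fee):
        return price + fee

    print(add_fee(100.0, 2.5))   # works fine: 102.5

    try:
        add_fee(100.0, "2.5")    # same call shape, wrong type
    except TypeError as exc:
        print("caught only at runtime:", exc)
    ```

    In C# or Java the second call would not compile; here it passes every check until that exact line executes.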

    You are either stirring up a debate knowing full well how false your claims are, or you are indeed too amateurish and dumb to realize how stupid your claims are. And for reference, I never claimed I have not written code in C++ or C#. I said CLEARLY that I did not write any code in previous trading positions at hedge funds and banks, because I was paid to manage risk and generate PnL, not to write code. That never precluded the fact that I possess extensive coding skills, and I have subsequently written a whole trading and development environment, while outsourcing certain bits and pieces for lack of time and resources. Get a life!!! You are the one who exposes himself as a know-nothing!!! I will not reply to ANY of your subsequent drivel and accusations on this thread.


     
    #72     Oct 21, 2014
  3. Butterfly

    god, you just proved your incompetence in programming again LOL

    a) you are confusing dynamic and strict typing LOL, and you are making exactly the argument I was referring to about strict typing: code integrity, avoiding silly bugs due to poor type assignment or transformation (polymorphism).

    b) VS is a giant Piece of Garbage that only a clueless Microsoftie could put up with. There is another world out there; get out LOL

    c) the core of an app doesn't need to have thousands of lines when it's properly structured. Design patterns, ever heard of them? Of course not LOL

    d) You confuse OMS and EMS, as usual. Not the first time.

    e) Not today, nobody does, but it's coming. Again, look at what MIT is doing; they are building projects and apps that run entirely in a browser. Back in the late 90s, the big banks were late to the game and didn't embrace "web technologies" until very much later. This web revolution will be no different. You won't see HTML5 trading apps in big banks and hedge funds today, but they will come one day. Btw, did you know that the really heavy lifting in banks is still done on AS/400 and z/OS? LOL
     
    #73     Oct 21, 2014
  4. Butterfly

    dude, you are a psycho. You can't claim to run risk reports and generate PnL (isn't that the work of an admin clerk these days, btw) and then claim some kind of authority on code writing when you never had any professional experience in the matter. Building "hello world" apps in Visual Studio with C# or C++ doesn't make you a frigging programmer, you fool. Your angry reactions to all this just speak volumes about your true ignorance LOL.
     
    #74     Oct 21, 2014
  5. hft_boy

    Can't believe I'm getting caught up in this thread, but... volpunter said he (she?) managed risk and generated PnL. That's huge value, and entirely different from generating risk and PnL reports. Vicious though some of his posts may be, you would do well to read them.

    Also, I am not an MS drone. Dynamic typing, at least in the CPython implementation, literally slows down your code by orders of magnitude. If you don't believe me, run "perf record python app.py" and see for yourself (that requires linux-tools, btw). And that's not to mention the potential blowup waiting to happen because of the lack of compile-time checks on... well, anything.
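    As a rough illustration of that interpreter overhead from inside Python itself (exact numbers vary by machine; the workload and sizes here are arbitrary), compare a pure-Python reduction against CPython's C-implemented builtin doing the same work:

    ```python
    import timeit

    def py_sum(n):
        # every iteration goes through the bytecode interpreter,
        # with boxed-integer arithmetic on each add
        total = 0
        for i in range(n):
            total += i
        return total

    N = 100_000
    t_loop = timeit.timeit(lambda: py_sum(N), number=20)
    t_builtin = timeit.timeit(lambda: sum(range(N)), number=20)  # loop runs in C
    print(f"interpreted / builtin: {t_loop / t_builtin:.1f}x slower")
    ```

    The ratio is typically several-fold on stock CPython; profiling the same script under perf shows the time going into the interpreter loop rather than the arithmetic.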
     
    #75     Oct 21, 2014
  6. Butterfly

    surely you can't believe verbatim what volpunter is saying,

    he claims to know programming and to have managed software design, and yet he never wrote a single line of code professionally, by his own admission

    I mean WTF do you think he is kidding ?

    as for the CPython implementation, I guess it's not the language itself but maybe the compiler's lack of optimization? It's like saying a SQL query in a database would be faster for a lookup on a strict numeric column vs a "decimal" column store. The different data types of the column do not optimize the query speed, but they do optimize the storage, which indirectly could be said to "save" query speed, even though in reality it doesn't in any significant way. That was maybe true 25 years ago, though, hence the huge success of Oracle RDBMS back then.

    strict data types would bring code integrity, like they do in a database, and storage efficiency, like they do in a database. Do we get faster "memory" access because a strict type is more "optimized" in its addressing? Certainly, but not significantly so on today's CPUs.

    In comparison, today it's all about Big Data, with its poorly optimized (read: not strict) data structures and "ad hoc" data stores that make queries on that data extremely fast. Oracle, in that regard, is everything except Big Data.

    So in conclusion, what's important in programming languages these days is NOT strict and optimized data types, but how the code is structured and "executed" at run time.
     
    Last edited: Oct 21, 2014
    #76     Oct 21, 2014
  7. rohan2008

    Don't want to get into this debate/discussion, but let me add some numbers from my own experience for the benefit of everyone... I worked for a company that offers Tier 1/2 type storage products to banks and other institutions. They developed an all-flash SAN solution from scratch, and I was responsible for designing part of it. In the early days of the project we built an in-house data IO generator in Python that produces very specific dedup and compression patterns for our QA; it was able to generate about 4,000 IOPS (input/output operations per second)... which was awesome at the time... Slowly, as our product matured, we needed more load with more patterns... In our line of work, even a cache miss, which takes ~10 ns on our multi-core HW, is a big deal. We tried our best to increase the IOPS but were out of luck, and were only able to achieve 12,000 using all the tricks we could think of. We then redesigned the IO load generator from scratch on C++/Linux and were able to bump it to 250K IOPS... the only downside was that the C++ version took 10 times longer to develop than the Python version. Python and other web technologies sound very attractive on paper, but when you start designing a serious application, as time progresses you will slowly realize their shortcomings in terms of performance...

    I am not sure about the Windows operating system, although I do use it for MFC programming for fun, but take a highly optimized version of Linux and compare the performance of Python code against code written in C... you will see the difference right away. Besides, you can also write Linux drivers in C to get around the kernel context-switch overhead, or even have your own C application running on an isolated core (using isolcpus) that takes over the devices by mmapping their IOMMU... this is how we extract every clock cycle from the CPU. You can't do this type of systems programming in Python... unless you use libraries, which in turn are written in C.
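    That last point — Python punting the real work to C — can be sketched with the standard-library ctypes module. A minimal example on a Unix-like system (the string is arbitrary):

    ```python
    import ctypes

    # On Linux/macOS, CDLL(None) exposes the symbols of the C runtime the
    # interpreter itself is linked against, including libc's strlen.
    libc = ctypes.CDLL(None)
    libc.strlen.argtypes = [ctypes.c_char_p]
    libc.strlen.restype = ctypes.c_size_t

    n = libc.strlen(b"isolcpus")
    print(n)  # 8 -- the byte count, computed entirely in compiled C
    ```

    This is exactly the escape hatch the post describes: the performance-critical work happens in C; Python is only the glue.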

    Python is a good language, and it is great for automation that doesn't require performance... I don't think any of C/C++/Java can beat it in terms of flexibility, and it's definitely the scripting language of my choice if I were to pick one for the situation.
     
    #77     Oct 21, 2014
  8. hft_boy
    Actually, what I believe he said is that when he worked at banks he did not write code but worked in more of a trading capacity. Later he developed a trading system from scratch, which takes quite a bit of programming knowledge. It is possible I misread, but he repeated it several times and was very adamant about it.

    There isn't really such a thing as faster or slower languages; maybe faster or slower runtimes. But in this case, the point I was making earlier was that due to the semantics of the language it is difficult to write a compiler which optimizes it well. It is nearly impossible to optimize the code statically. If you are really bright you might be able to create a JIT for Python which ameliorates much of the runtime inefficiency (by using heuristics to eliminate checks and optimize accesses). But like I said, this has drawbacks, because running Python in a JITted runtime would likely make it difficult to interface with existing C libraries. In any case, this is all hypothetical, because such a magic JIT does not exist to my knowledge.

    Actually this makes no sense. Today's CPUs are extremely sensitive to memory layout and access patterns. This is why pointer-based data structures are usually slow: they are full of data dependencies (and FYI, languages like Java or Python, which require objects to be allocated on the heap, are seriously susceptible to this). In addition, you have to trust the compactor to lay everything out so that memory access is optimal... I say just do the memory layout yourself.
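    The pointer-per-element layout is easy to measure from Python itself (sizes are CPython implementation details and vary by build; the element count is arbitrary):

    ```python
    import sys
    from array import array

    n = 100_000
    boxed = list(range(n))         # a C array of pointers to heap int objects
    packed = array("i", range(n))  # contiguous 4-byte C ints

    # the list's own buffer holds only the pointers; every int is a separate
    # heap allocation that must be chased through a pointer on each access
    boxed_total = sys.getsizeof(boxed) + sum(sys.getsizeof(x) for x in boxed)
    packed_total = sys.getsizeof(packed)
    print(f"list of objects is ~{boxed_total / packed_total:.0f}x larger")
    ```

    On a 64-bit CPython the boxed version is several times larger, and, worse for the cache, the objects are scattered across the heap rather than laid out contiguously.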

    If you want an example, consider that an int in x86 is 4 bytes. I don't remember the exact numbers, but if you have an int in Python it will allocate a PyInt, which extends PyObject or something... and it will take up something like 64 bytes. So your data structures are going to be huge and not fit in the fast upper-level caches. Then if you want to compare two PyInts, the runtime will call something like PyInt.richcompare. So you need a pointer dereference and a function call, instead of just calling x86 cmp. Calling cmp will probably take one cycle, and loading two ints from L1 is almost instant: say a nanosecond for the whole thing, and in a pipeline significantly less. On the other hand, if you're doing the equivalent operation in Python, you have to allocate and fill in the function frame, which is going to take several cycles, and you are going to have to dereference the pointer. If you're lucky it will be in L1, but the memory management is so sloppy it will probably be in main memory. This is going to take something like 30 ns.
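    The order of magnitude can be checked directly (exact figures are CPython build details; on a 64-bit CPython 3, a small int is 28 bytes rather than the 64 guessed above, but the point stands):

    ```python
    import struct
    import sys

    print(struct.calcsize("i"))   # 4  -- a native C int
    print(sys.getsizeof(12345))   # 28 on 64-bit CPython 3: a full heap object
                                  # (refcount + type pointer + digit storage)
    ```

    So even the cheapest Python value carries roughly 7x the footprint of the machine type it wraps, before any pointer dereference or dispatch cost.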

    Further, if you have a bunch of quick non-dependent operations in x86, they can be pipelined and reordered. Not the case for your Python stuff, because each operation is filling up multiple pipelines, lol.

    Sorry for the rant, but it is totally not the case that memory access "does not matter" on today's CPUs. There are other costs, such as longer development time (although I see this as being important for QA), but the decision about how much development time to spend is a business decision.
     
    #78     Oct 21, 2014
  9. Butterfly

    Given the HW interaction, and the fact that C compilers have been around and improving since the beginning of time, I don't see how you could even consider bringing Python into such a project in a production environment.

    In banking, Python is becoming the new language of choice. I don't think I ever mentioned using Python in a high-performance hardware solution, where C/C++ would be more appropriate because of the maturity of the compilers for such environments.
     
    #79     Oct 22, 2014
  10. Butterfly

    You are making exactly my point: today's CPUs are so sophisticated about memory layout and access patterns, as you put it, that "strict" typing in a language has no impact on code execution performance; it will all be optimized by the CPU.

    now, are we confusing native types with "process"? The native type allocation goes through a few steps and might take a bit more "RAM" to process, but again it has no significant impact on the speed of the code. The type will not be "64 bytes" as you put it; a few objects might be created in between, but they will be automatically destroyed. In terms of processing, yes, it takes more time, but not significantly enough to be meaningful outside of a lab test. Good point about the JIT for Python; maybe someone is already working on one.

    You are speaking old-school programming; you can't think like that anymore. Execution at the CPU level will be very different from what you "think" it will do. It's all optimized at the core. See my argument below.

    it doesn't matter on today's CPUs because it's something we can't control and something we know will be optimized properly and automatically by the CPU. Speaking of cycles in programming like we used to in the good old days is no longer possible, simply because the CPU has a life of its own and will reorder it all in the end; even the compiler is no longer "writing" for the CPU, but simply "piping" instructions that the CPU will re-order itself. The CPU is in control, not us, not the compiler, not even the syntax of the programming language.
     
    #80     Oct 22, 2014