Here is an example of a trivial reverse-engineering project: Decompilation in Practice: Reversing helpctr.exe. The following example illustrates a reverse engineering session against helpctr.exe, a Microsoft program provided with the Windows XP OS. The program happens to have a security vulnerability known as a buffer overflow. This particular vulnerability was made public quite some time ago, so revealing it here does not pose a real security threat. What is important for our purposes is describing the process of revealing the fault through reverse engineering. We use IDA Pro to disassemble the target software. The target program produces a special debug file called a Dr. Watson log. We use only IDA and the information in the debug log to locate the exact coding error that caused the problem. Note that no source code is publicly available for the target software. Figure 3-4 shows IDA in action. http://www.informit.com/articles/article.aspx?p=353553&seqNum=8
It looks like reverse engineering is not that complicated. So does that mean a very profitable hedge fund algorithm should never be colocated with the broker? In that case, if the hedge fund has to build its own data line to the exchange, it could be too costly.
Not really. The ultimate solution is to split the system into multiple separate parts on separate colocated servers at separate hosting firms, all of course unknown to each other. Each part is totally useless without access to the others. That, coupled with a significant ratio of dummy trades which are washed out as soon as they are entered by another totally independent system, would make the system pretty secure. This is under the assumption that the major value of the system is its ability to understand what is happening now and act on it a few milliseconds before the majority of traders. However, if you have a system that can successfully predict future market events not visible in current activity, then all of this is totally unnecessary. For example, if you have a predictive model that can look at current activity and predict that, say, 12 bars in the future the market will be significantly higher, then it doesn't matter whether an order takes 400 milliseconds or 10 seconds to reach the execution queue (assuming you are trading 5-minute bars or longer), and you don't need to worry about the whole issue of colocation. Keep your machine locked in an undisclosed location like your basement, or at a hosting service disguised as an inventory control system.
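The latency argument in the post above can be made concrete with a little arithmetic. This is just a sketch using the figures from the discussion (5-minute bars, a 12-bar predictive horizon, 400 ms vs. 10 s order routing delay); the numbers and names are illustrative, not from any real trading system:

```python
# Why order latency is negligible when a model predicts several bars ahead.
# Assumed figures (from the forum post above, not from a real system):
#   - 5-minute bars
#   - the model predicts 12 bars into the future
#   - routing delays of 400 ms and 10 s are compared

BAR_SECONDS = 5 * 60                          # one 5-minute bar = 300 s
HORIZON_BARS = 12                             # model's predictive horizon
HORIZON_SECONDS = BAR_SECONDS * HORIZON_BARS  # 3600 s of lead time

def latency_fraction(latency_seconds: float) -> float:
    """Order latency expressed as a fraction of the predictive horizon."""
    return latency_seconds / HORIZON_SECONDS

for latency in (0.4, 10.0):  # 400 ms vs. 10 s to reach the execution queue
    print(f"{latency:>5} s latency = {latency_fraction(latency):.4%} of horizon")
```

Even the slow 10-second route consumes well under 1% of the hour of lead time the model provides, which is the sense in which a 50 ms colocation edge becomes irrelevant for such a strategy.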
The major benefit of colocation is speed. If you colocate servers with several brokers and then require those separate servers to communicate with each other, that will erase the speed advantage of colocation.
Correct. However, where I am in Chicago there are multiple colocation companies within a few hundred yards of the CBOT and other exchanges. Like I said, which is more valuable: 1) keeping the trading strategy private, or 2) gaining 50 ms? If the strategy is based on the 50 ms, then it's probably not much of a strategy anyway... it's just being at the front of the stampede. In that case keeping it private doesn't much matter anyway.
Makes sense. I have a friend working in GS's HFT group; their latency is around 10 microseconds. Though they make billions, they do not have any strategy that is not publicly known.
Yes, after 12 pages of posts and discussion on the topic we have the tentative conclusion that the topic is not very relevant: 1) If you have a strategy made from mostly public components with only reactive capacity, there is no need to keep it secret; or 2) if you have a model with superior predictive capacity, there is no need for it to be at the head of the stampede, so keep your server locked in your basement. The topic is moot, IMO.
On principle I'd agree. Those with unique proprietary systems that are exceptionally profitable are very, very quiet and almost impossible to locate. Since they make what many would consider science-fiction returns, they have no need to be marketed, revealed, or rented to the public. These Dark Systems are much like Dark Matter in physics... out there but totally invisible.