 Forums ›› Tools of the Trade ›› Data Sets and Feeds ›› Tick Database, Now Want to Run SQL

Page 12 of 12
NetTecture   Registered: Mar 2009 Posts: 1010 07-27-12 05:39 AM

Quote from amazingIndustry: Getting rid of all double/float variable types may not apply to all trading strategies. You could go lower and lower level and end up coding assembler, but that would get into a very deep discussion about the pros and cons. Not everything has to run on integer values; the speedup sometimes warrants it, yes, but often it is negligible.

Actually it applies wonderfully to trading. Start with bars - open, high, low, close are TICKS, not arbitrary numbers. So store and aggregate them in ticks. Then expand the resolution by a fixed factor - something like another 6 digits - and use integers from there on. That is a fixed factor from the original bar to the expanded-resolution one. But instead of a decimal factor like 1,000,000, use a power of 2, so going back down to the original tick resolution is a simple shift. You may not be able to avoid ALL divisions, but you can avoid most of them.

Obviously that does not help if some overly smart developers decided to use floats all over their framework. Any price comparison after a mathematical operation in floats is EXTREMELY expensive, because you not only have to do the operation, you also have to round the result back to the proper resolution - which wastes a LOT of cycles. Even something like (A + B) * TickSize in float is NOT guaranteed to equal A * TickSize + B * TickSize - the two floats may be CLOSE, but not IDENTICAL. Do it in ints instead, and compare ((A * TickSize) >> X) + ((B * TickSize) >> X). The funny thing is that >> X is a VERY cheap operation these days thanks to barrel shifters - most likely one clock tick. The float alternative is Math.Round(A * TickSize, x) + Math.Round(B * TickSize, x), and you do NOT want to know the cost of Math.Round. It is EXPENSIVE.

That is not so relevant for real-time processing, but for extensive backtests plus optimization it can cut processing times by a factor of 10 and more, depending on how heavy the arithmetic is ;) On modern Opterons you get an additional benefit: the two cores of a Bulldozer module share ONE FPU - but have two separate integer units ;)
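A minimal Python sketch of the scheme above, under stated assumptions (`SCALE_SHIFT`, the helper names, and the 10-tick example are illustrative, not from any real framework): prices live as integer tick counts expanded by a power-of-2 factor, so dropping back to tick resolution is a single cheap shift, while accumulating prices as floats drifts off the tick grid and would need an explicit round after every operation.

```python
SCALE_SHIFT = 20          # expand resolution by 2**20 instead of a decimal 10**6
SCALE = 1 << SCALE_SHIFT

def to_scaled(ticks: int) -> int:
    """Expand an integer tick count into the high-resolution integer domain."""
    return ticks << SCALE_SHIFT

def to_ticks(scaled: int) -> int:
    """Drop back to tick resolution -- a single shift, no division needed."""
    return scaled >> SCALE_SHIFT

# Float version: summing ten ticks of 0.1 does not land exactly on 1.0,
# so a plain equality comparison against a price level is unsafe.
float_sum = sum(0.1 for _ in range(10))
assert float_sum != 1.0   # off in the last bits

# Integer version: tick arithmetic is exact, so == comparisons are safe.
tick_sum = sum(to_scaled(1) for _ in range(10))
assert to_ticks(tick_sum) == 10
```

The same shift trick carries over directly to C# or C++; the point is that the scale factor being a power of 2 turns every "round to tick resolution" into one shift instruction.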
NetTecture   Registered: Mar 2009 Posts: 1010 07-27-12 06:24 AM

I agree. And there is ONE thing that solves it efficiently - a profiler. Plus possibly some simple mathematical optimizations. For example, I know one library where a moving average is calculated by adding up all the elements in the window and then dividing by the number of elements - every single time. Instead, add them up once, remember the sum for the next run, then subtract the element falling out of the window and add the one coming in. For a longer moving average that is significant (50 additions versus 1 addition and 1 subtraction). But a profiler will point you straight at where the problem is. The one from Visual Studio (mind you, not in the lower tiers) is VERY good at that and ALSO properly helps with debugging issues in the TPL. Now we just need Visual Studio 2012 released soonish... its profilers are a LOT better than the ones from 2010.
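The rolling-sum optimization described above can be sketched as follows (`RollingMean` is a hypothetical name for illustration, not the library being criticized): the window sum is maintained incrementally, so each update costs one addition and one subtraction regardless of the period, instead of re-summing the whole window.

```python
from collections import deque

class RollingMean:
    """Moving average with O(1) updates: keep the window sum incrementally."""

    def __init__(self, period: int):
        self.period = period
        self.window = deque()
        self.total = 0.0

    def update(self, value: float) -> float:
        self.window.append(value)
        self.total += value                      # one addition...
        if len(self.window) > self.period:
            self.total -= self.window.popleft()  # ...one subtraction
        return self.total / len(self.window)

ma = RollingMean(3)
print([ma.update(x) for x in [1, 2, 3, 4, 5]])
# -> [1.0, 1.5, 2.0, 3.0, 4.0]
```

One caveat worth noting: with floats the running total slowly accumulates rounding error, which is another argument for doing the sum in integer ticks as discussed earlier in the thread.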
bidask201   Registered: May 2012 Posts: 19 08-02-12 04:20 AM

Take a look at Vectorwise for a SQL solution focused on optimizing for modern processors. Even so, it is still only about 3x the speed of other databases on TPC-H. And be ready to hand over huge sums of money.