Folks, an idea occurred to me today, and I'm wondering if there's any interest. Suppose we have software engineered to test strategies using tick data that also supports massively parallel processing. In other words, say you want to test your money management strategy on 100 instruments using tick data going back 5 years. Now let's say we add parallel computing so that it can use any number of servers to help with those iterations, with at least 25 to 100 computers all working on the problem at the same time. Then it finishes in seconds. (A rough sketch of what I mean follows below.)

Now here's my question: it would cost some money to get that many servers, and you certainly wouldn't use them all the time, or even a fraction of the time you're paying for. What if we pooled resources through a monthly subscription to access the server farm? Since with so many CPUs everyone's tests would finish in seconds, it could handle many scores or perhaps hundreds of people using the service. Of course, additional CPUs could be added to handle the load. It would be cool if we could do that and keep it affordable, at the same or lower price than other trading platforms.

Please, what's your opinion? Worthwhile, or a waste?

Sincerely, Wayne
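A minimal sketch of that parallelism, assuming a toy per-instrument backtest (the symbols, strategy rule, and tick stream below are made up for illustration; this is not TickZOOM code). Because each instrument's test is independent, wall-clock time shrinks roughly in proportion to the number of workers:

```python
# Toy sketch: fan per-instrument backtests out across worker processes.
# A real server farm would fan the same jobs out across machines rather
# than local cores, but the shape of the problem is the same.
from multiprocessing import Pool
import random

def run_backtest(symbol):
    """Replay a made-up tick stream for one instrument and return net P&L."""
    rng = random.Random(symbol)                 # deterministic per instrument
    profit, position = 0.0, 0
    for _ in range(1_000_000):                  # stand-in for years of ticks
        change = rng.gauss(0, 1)
        profit += position * change
        position = 1 if change > 0 else -1      # toy momentum rule
    return symbol, profit

if __name__ == "__main__":
    symbols = [f"SYM{i:03d}" for i in range(100)]   # "100 instruments"
    with Pool() as pool:                        # one worker per CPU core
        results = pool.map(run_backtest, symbols)
    for symbol, profit in results:
        print(symbol, round(profit, 2))
```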
Rent Your Own Supercomputer for $2.77 per Hour
http://news.softpedia.com/news/Rent-Your-Own-Supercomputer-for-2-77-per-Hour-82166.shtml
http://futurismic.com/2007/10/17/astrophysicist-replaces-supercomputer-with-eight-playstation-3s/
OMG. I wasn't the first with the idea. That's awesome. All we have to do is make TickZOOM capable of interfacing with these, and then users can rent time... I can't wait to try it out myself! $2.77 for an hour sounds affordable, but I wonder how many CPUs you get for that hour. How do you even measure an hour when you're running in parallel? A program that runs in 1 hour on my PC: does it still cost $2.77 even though it runs in 2 seconds on a supercomputer? I tried reading the article but didn't find any explanation of how the pricing is measured. Still, this is exciting to know about. Wayne
Really, you couldn't find the pricing when you 'tried' to read the article? To quote: "Every processor-hour you use the supercomputer, you pay $2.77." Hence, 168 * $2.77 = $465. To repeat: for every processor you use on the supercomputer, you pay $2.77 to rent it for an hour.

To compare to Amazon EC2 ... for $465 an hour, you could get 581 Extra Large High CPU instances or Extra Large Standard instances, which have the following stats:

Extra Large Standard: 15 GB of memory, 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform

Extra Large High CPU: 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform

I'm going to give the win to Amazon on this one...
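To put Wayne's earlier question in numbers: if billing really is by processor-hour, and assuming the work parallelizes perfectly and usage is prorated (which neither article promises), the same total work costs about the same no matter how many CPUs it's spread over; only the wall-clock time changes. A rough sketch:

```python
# Back-of-the-envelope processor-hour math. Assumes perfect parallelism and
# prorated usage; real services may round up to minimum billing increments.
RATE_PER_PROC_HOUR = 2.77          # $ per processor-hour, from the article

def cost(single_cpu_hours, num_processors):
    wall_clock_hours = single_cpu_hours / num_processors
    processor_hours = wall_clock_hours * num_processors   # == single_cpu_hours
    return wall_clock_hours, processor_hours * RATE_PER_PROC_HOUR

for procs in (1, 100, 168):
    wall, dollars = cost(1.0, procs)   # a job that takes 1 hour on 1 CPU
    print(f"{procs:4d} CPUs: {wall * 3600:8.1f} s wall clock, ${dollars:.2f}")
```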
Super cool. The BitTorrent example did it for me. Wow. Imagine a tick testing engine enabled with "cloud computing" technology, so that a test on many instruments and years of tick data that would normally take 30 minutes happens in a few seconds.

I'm not sure the $2.77/hr works either, unless they only bill what you use. Imagine I code up a strategy, test it in 2 seconds, and see I need to fix some things. Maybe that takes me 30 minutes of putzing around, and then I get a phone call. Now my hour is up. The other services sound better where they track what you use, like a utility such as electricity or water. Ideally it would be in 1-second increments. I remember when the phone company used to bill in 1-minute increments; competition forced them to bill on actual usage, to the second. Do that with supercomputing and it could get very, very interesting. Same goes for the GPU idea. I'm all for speed. (A toy illustration of why the billing increment matters follows below.)

People have recommended I focus on building and maintaining the TickZOOM engine and let other people build the GUI. Sounds good to me, but we're going to need a better GUI. Wayne
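Here is the toy illustration mentioned above. The rate is the article's figure and the run time is made up; the point is only how much the rounding rule changes the bill for a short burst on many CPUs:

```python
# A 2-second test burst on 100 CPUs: rounding up to whole processor-hours
# bills 100 of them, while per-second proration bills only the CPU time used.
import math

RATE_PER_PROC_HOUR = 2.77
CPUS = 100
RUN_SECONDS = 2

hourly = CPUS * math.ceil(RUN_SECONDS / 3600) * RATE_PER_PROC_HOUR
per_second = CPUS * (RUN_SECONDS / 3600) * RATE_PER_PROC_HOUR

print(f"rounded up to the hour: ${hourly:.2f}")      # $277.00
print(f"billed per second:      ${per_second:.4f}")  # about $0.15
```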
If you decided to go the route of building your own server farm for this, which wouldn't be a bad idea (a server farm dedicated specifically to TickZoom), one issue you will run into is the data. Sharing data with users like this, especially 5 years' worth of tick data, is VERY expensive. Data vendors charge INSANE amounts of money if you plan to redistribute their data, and with a server farm like this it would make sense to have all the data already available. Just something to think about. For instance, I've recently gotten quotes in the range of $12k a year just for end-of-day data licensed for sharing. I imagine tick-level data would be MUCH, MUCH more. Then again, if you have a few thousand "subscribers" to your server farm it may not be, but if you're looking at < 100 subscribers, data costs will eat you alive...
Dude, no disrespect, but I'm seeing "headless chicken syndrome" here. I have read your recent posts, and you have some good ideas, but I see a classic example of feature creep before the project has even begun. Define a subset of deliverables that makes sense and is achievable, and make a start. You can add all the fairy dust you wish for later. Yes, you do have to plan for some features now, otherwise there could be major architectural rework further down the track, but you need to make a start and not let this turn into analysis paralysis and an acute case of featurecreepitis (the mother of all trading SW). Just MHO.
Yes, I agree. What will help is when I have the website up where people can post these issues as "research". I realize you can't see the system or code yet, but it's only a stone's throw from doing most of this stuff. All I have left to do now before releasing it is to refactor the API for custom code; the way I have it at the moment won't be very forward compatible, and I have a better way. Maybe I should go incognito till I finish? That's a fair point. Wayne
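For illustration only (this is a generic pattern, not Wayne's actual API): one common way to keep a custom-code API forward compatible is to have user strategies override a few narrow hooks on an engine-owned base class, so the engine can add features behind that base class without breaking strategies written against older versions. A minimal sketch:

```python
# Sketch of the hook-based pattern, not the real TickZOOM API.
class StrategyBase:
    """Engine-owned base class; user code never touches engine internals."""
    def on_tick(self, price):
        pass            # defaults do nothing, so new hooks stay optional

    def on_fill(self, quantity, price):
        pass

class MyStrategy(StrategyBase):
    """User code: overrides only the hooks it cares about."""
    def __init__(self):
        self.last_price = None

    def on_tick(self, price):
        if self.last_price is not None and price > self.last_price:
            print(f"tick up to {price}")
        self.last_price = price

# Toy driver standing in for the engine's tick replay loop.
if __name__ == "__main__":
    strategy = MyStrategy()
    for price in (100.0, 100.5, 100.25, 101.0):
        strategy.on_tick(price)
```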