Issues installing SQL Server on a NAS

Discussion in 'Automated Trading' started by nitro, Nov 22, 2008.

  1. Nitro, how did you get time to put up 10,000 posts ;-) :p
     
    #11     Dec 1, 2008
  2. nitro

    Ok, but still worth researching. Thanks.
     
    #12     Dec 1, 2008
  3. "Openfiler" -- it supports CIFS, so it might actually work...
     
    #13     Dec 1, 2008
  4. Oh, and ALL of our servers are using, I think, two filers: email, users' home directories, Citrix terminals, about a hundred SQL Server databases...

    So if you wanna go with NetApps you won't be makin' a mistake.

    My 2 cents
     
    #14     Dec 1, 2008
  5. nitro

    Thx for the response. I am looking into

    Coraid:

    http://www.coraid.com/

    IBM

    http://www-03.ibm.com/systems/storage/disk/ds3000/ds3400/index.html

    and the NetApp S family

    http://www.netapp.com/us/products/storage-systems/s-family/

    Then the choices explode. Do I want SAS or SATA drives? Do I want to attach through iSCSI, FC, or GigE? Do I want SMB or CIFS, etc.? What RAID level support? Do I want real-time recovery capabilities?

    The decision-complexity [hyper]surface is probably 100-dimensional. And that is just the hardware!
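    As a rough illustration of just one of those dimensions, here is a toy sketch of usable capacity at the common RAID levels; the drive count and drive size are hypothetical, not from this thread:

    ```python
    # Toy comparison of usable capacity at common RAID levels for a
    # hypothetical shelf of 8 x 750 GB drives (numbers are illustrative only).
    DRIVES, DRIVE_GB = 8, 750

    layouts = {
        "RAID 0  (stripe, no redundancy)": DRIVES * DRIVE_GB,
        "RAID 10 (mirror + stripe)":       DRIVES // 2 * DRIVE_GB,
        "RAID 5  (single parity)":         (DRIVES - 1) * DRIVE_GB,
        "RAID 6  (double parity)":         (DRIVES - 2) * DRIVE_GB,
    }
    for name, usable_gb in layouts.items():
        print(f"{name}: ~{usable_gb} GB usable")
    ```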
     
    #15     Dec 2, 2008
  6. sam028

    To give the right answers, it's important to have some facts:
    - size of the SQL Server database(s)?
    - how many clients need access to the database(s)?
    - what throughput will you need? 10 Mb/s for each client desktop, 100 Mb/s?
    - do you need a failover/high-availability solution?
    - etc, etc...

    Without this information, it's hard to size a solution...
    Anyway, the simplest solutions are the most reliable.

    So you might want to avoid GPFS, iSCSI and SAN solutions (great throughput, but complex and expensive).
    A "simple" solution with a "small" NAS box, on a dedicated Gigabit network, with SAS disks and two good SAS controllers (two for redundancy, a large memory cache, all elements hot-swappable, ...) may be enough for you, and can deliver excellent throughput.
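    As a rough illustration of the throughput questions above (reading the "10 Mb/s / 100 Mb/s" as megabits per second), here is a toy Python sketch of how many desktops a dedicated GigE link can feed; the ~30% protocol overhead is an assumption and varies with CIFS/NFS tuning:

    ```python
    # Toy link-sizing sketch for a dedicated Gigabit NAS network.
    # The per-client rates mirror the 10 Mbit/s and 100 Mbit/s figures above;
    # the protocol overhead factor is a rough assumption.
    LINK_MBIT_S = 1000        # one dedicated Gigabit Ethernet link
    PROTOCOL_OVERHEAD = 0.30  # assume ~30% lost to CIFS/TCP overhead

    usable_mb_s = LINK_MBIT_S / 8 * (1 - PROTOCOL_OVERHEAD)   # ~87 MB/s usable

    for per_client_mbit_s in (10, 100):
        per_client_mb_s = per_client_mbit_s / 8
        clients = usable_mb_s / per_client_mb_s
        print(f"At {per_client_mbit_s:3d} Mbit/s per client, one GigE link serves ~{clients:.0f} clients")
    ```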

    I've been working for a long time with NetApp, only high-end products, and I really like their products. But I have no idea about their small solutions...
     
    #16     Dec 2, 2008
  7. nitro

    Thx for the response.

    I think you are right. I have done some computations and back-of-the-envelope estimates, and come to the conclusion that writing about 1 MByte a second is a high-end safe number [during bursts] that I need to keep up with in real time. Therefore, a 10 GigE NAS or SAN is fine, and that radically reduces the dimension of possibilities. I expect to have to deal with about 10 GB of data a day max on average. The SQL tables will be, say, approximately 6,000,000 rows per symbol per year max.

    I am far more bound by write throughput than read throughput, since I am doing the reading during offline analysis later.
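    For what it's worth, here are the same numbers as a quick Python sanity check; the 250 trading days, 6.5-hour session, and ~100 bytes per row are assumptions, the rest are the figures above:

    ```python
    # Back-of-the-envelope sizing from the figures quoted above.
    # Assumptions (not from the thread): ~250 trading days/year,
    # a 6.5-hour session, and ~100 bytes per stored row.
    BURST_WRITE_MB_S = 1            # quoted worst-case burst write rate
    DAILY_VOLUME_GB = 10            # quoted max data per day
    ROWS_PER_SYMBOL_YR = 6_000_000  # quoted rows per symbol per year

    TRADING_DAYS_YR = 250           # assumption
    SESSION_HOURS = 6.5             # assumption
    BYTES_PER_ROW = 100             # assumption; depends on the table schema

    yearly_tb = DAILY_VOLUME_GB * TRADING_DAYS_YR / 1024                # ~2.4 TB/year
    avg_write_mb_s = DAILY_VOLUME_GB * 1024 / (SESSION_HOURS * 3600)    # ~0.44 MB/s sustained
    per_symbol_gb_yr = ROWS_PER_SYMBOL_YR * BYTES_PER_ROW / 1024**3     # ~0.56 GB/symbol/year

    print(f"Yearly volume      : ~{yearly_tb:.1f} TB")
    print(f"Avg. session write : ~{avg_write_mb_s:.2f} MB/s (bursts up to {BURST_WRITE_MB_S} MB/s)")
    print(f"Per-symbol storage : ~{per_symbol_gb_yr:.2f} GB/year at {BYTES_PER_ROW} B/row")
    ```

    So under these assumptions the write rate is trivially low for any of these boxes; the pressure point is the roughly 2.5 TB/year of storage and how it gets indexed.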
     
    #17     Dec 2, 2008
  8. That's a lotta data, nitro. Depending on the number of symbols and how exactly you're indexing the tables, you could have a bit of a problem with throughput. I wouldn't skimp on the storage solution in that case.

    Plus, yer lookin' at 2.5 TB in a year. That's a lot for SQL Server to handle.

    Just my humble opinion, though. Sam, what do you think?
     
    #18     Dec 2, 2008
  9. sam028

    Well, it's hard to say without more details. I have some large databases, a few million rows, and have no problem with access times. But it's not because my hardware is great; it's just because my SQL queries are correctly written and the database is well designed.
    It's very easy to have very, very bad response times with a 10 MB database...
    So yes, you can have horrible response times on a few-TB database. And it's possible to have good response times with good indexes, good SQL queries, a well-tuned database, enough memory on the database server for its internal caches, etc, etc.
    I'm not an SQL Server expert; I have only worked with U*X-hosted databases (Oracle, Sybase, Postgres, MySQL, ...), but a few-TB database might not be a problem for it either.
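    To make the index point concrete, here is a minimal sketch in Python with pyodbc; the server, database, table, and column names are all hypothetical, and the ODBC driver string depends on what is installed:

    ```python
    # Minimal sketch of the "good index" point above: a tick table clustered
    # on (symbol, trade_time) so per-symbol range reads stay cheap.
    # Server, database, table, and column names are hypothetical.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=myserver;DATABASE=ticks;Trusted_Connection=yes"
    )
    cur = conn.cursor()

    cur.execute("""
        CREATE TABLE dbo.trade_ticks (
            symbol     varchar(12)   NOT NULL,
            trade_time datetime      NOT NULL,
            price      decimal(18,6) NOT NULL,
            size       int           NOT NULL
        )
    """)

    # Clustered index matching the offline-analysis access pattern:
    # "all ticks for one symbol over a date range".
    cur.execute("""
        CREATE CLUSTERED INDEX ix_trade_ticks_symbol_time
            ON dbo.trade_ticks (symbol, trade_time)
    """)
    conn.commit()

    # A range query the clustered index answers with a narrow seek
    # instead of a scan over a multi-TB table.
    cur.execute(
        "SELECT price, size FROM dbo.trade_ticks "
        "WHERE symbol = ? AND trade_time >= ? AND trade_time < ?",
        ("MSFT", "2008-12-01", "2008-12-02"),
    )
    rows = cur.fetchall()
    ```

    With the clustered key matching the typical query, the per-symbol reads stay narrow seeks even as the table grows toward a few TB.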
     
    #19     Dec 2, 2008