For the security paranoid, bare metal is better. I saw the comment that someone at a shady hosting provider could just as easily take your drives, but that's not entirely true. To copy your data from a bare metal server, they'd have to take it offline; to copy it from a VPS, you'd never know they did it. Your server going offline unexpectedly at least tells you something's up. Furthermore, full disk encryption will protect a bare metal system but not a VPS: the provider can snapshot a running VPS, and that snapshot contains the decryption keys sitting in memory. That said, I don't think this is worth much worry. Data centers have so many customers that they're unlikely to even notice you unless it's some super small-time place, and most data center techs haven't the slightest clue about or interest in trading anyway. The other benefit is performance. If microseconds count, bare metal wins, and if you're using specialized hardware you get capabilities a VPS can't offer. Practically, unless you're doing something very hardware-specific, microseconds cost you money, or you're truly security paranoid, a VPS offers so many conveniences over bare metal that I'd just stick with a VPS.
I guess I'm getting sold more on them. Since I'm new to all of them: which one requires the smallest footprint (I'd like to avoid another server and run the management part on my Mac), and which one is free or lowest cost?
All of them have free options which would be suitable for a handful of servers. I don't like the Puppet or Chef architecture and was leaning more towards Ansible and SaltStack. I ultimately chose SaltStack (it's open source as well). That's not to say SaltStack is the best choice for you, it's just what I went with. They all do effectively the same job, but the SaltStack approach makes more logical sense to my brain, so that's why I went with it.

The catch with these provisioning systems is that you have to write out your package install, file copy, etc. instructions using their specific methods. It's a learning curve, and each one is different.

The learning curve to get up and running on Docker can be much lower, though. With Docker you have a Dockerfile, which at a basic level is just all the commands you would run when first setting up a new VM. So all those yum install/remove and file copy commands you'd do by hand go in the Dockerfile. You then run docker build, which executes those commands inside a container and creates your Docker image from the result. The Docker image is like a VPS image: you can boot it on any host machine that supports Docker (and you can install Docker on a Linux VM at any hosting provider).

While Docker requires some Linux kernel features to operate, the Docker team has made the process damn near seamless on Mac. Once you install it, Docker uses OSX's native virtualization to run an extremely lightweight Linux VM transparently and runs the Docker commands inside it. If I hadn't told you that's how it works, you'd have no idea it's actually running Docker via a Linux VM; you just run your docker commands in the OSX Terminal.

All of these systems do have commercial support and paid tiers, but that's more for larger installations where you're managing lots of servers. Any of these routes will take you longer to get up and running initially, but the benefits are enough that I think you should seriously take a look at them. Since you're redoing everything now anyway, it seems like an optimal time to go that route if you choose to.
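To make that concrete, here's roughly what a bare-bones Dockerfile looks like. The package names and config file are just placeholders for whatever you'd normally set up by hand:

```
# Start from a base image, then run the same commands you'd
# run by hand on a fresh VM (placeholder packages).
FROM centos:7

RUN yum install -y java-1.8.0-openjdk mariadb-server && \
    yum clean all

# Copy your configs into the image
COPY my.cnf /etc/my.cnf.d/custom.cnf
```

Then `docker build -t mybox .` produces the image, and `docker run -it mybox` boots a container from it on any Docker-capable host.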
I forgot to address the footprint bit. I can't speak to the specifics of Chef/Puppet/Ansible (though I'm pretty sure they support this mode as well), but SaltStack supports what's called "masterless" mode, where you don't need a separate management server. Their websites tout using a separate management server to push config changes out, but that really only makes sense when you have enough servers that the extra management system saves time over running the provisioning on each server individually. In your case, it's not worth the hassle. In masterless mode, you install SaltStack on the machine, copy over the config directory, and run salt locally, so there are no external management resources required. And when running in masterless mode, there's no daemon or anything that keeps running after the provisioning is complete; in fact, you can uninstall SaltStack afterwards if you want to.

For Docker: when running on a non-Linux host (i.e. OSX), it does run a lightweight Linux VM, but on Linux it just uses the kernel's container features to run the containers. Chef/Puppet/Ansible/SaltStack, by contrast, are merely there to automate package installation/removal/configuration; they don't add an extra virtualization/containerization layer the way Docker does.
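For illustration, masterless Salt boils down to something like this. The state name and packages are made up for the example; the default state directory is /srv/salt:

```
# /srv/salt/top.sls -- which states apply to which machines
base:
  '*':
    - trading

# /srv/salt/trading.sls -- the actual install/config instructions
mariadb-server:
  pkg.installed: []

mariadb:
  service.running:
    - enable: True
    - require:
      - pkg: mariadb-server
```

```
# install salt locally, then apply the states with no master involved
sudo yum install -y salt-minion
sudo salt-call --local state.apply
```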
I'd love to use Docker. I was under the impression it wouldn't work due to the graphical UI requirements of TWS, but I just did some googling and it seems that was very wrong - there are definitely some examples out there, though I'm not sure how well they work. I'll give that a try first. I'd love to have TWS in a container that takes credentials, allows for two-factor auth, and just exposes the API port.
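For reference, the shape of what I'm after is something like this. The image name and env vars are hypothetical, just to show the idea; the community images from that googling each document their own:

```
# Hypothetical image and variable names -- illustration only.
# 7497 is TWS's default paper-trading API port (7496 for live).
docker run -d \
  --name tws \
  -e TWS_USER=myuser \
  -e TWS_PASSWORD=... \
  -p 127.0.0.1:7497:7497 \
  example/tws-headless
```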
Docker has been more of a pain in the ass in production in my experience; you can read plenty of similar feedback on popular HFT / trading blogs. Ansible is easy to manage, and the whole idea is to go master/slave (as opposed to masterless): just run it on your dev machine and hook your server up to it. When you want to update your server, run the updated software on a new server first, see if it works, and upgrade the original server accordingly. Automated software deployment really IS a game changer; don't skip it to save 2-3 days of work now when it can save you 5-15 days in 6 months.
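The dev-machine-as-master workflow is literally just this (hostname is a placeholder):

```
# inventory.ini on your dev machine
[trading]
server1.example.com
```

```
# sanity-check the SSH connection, then push your playbook
ansible -i inventory.ini trading -m ping
ansible-playbook -i inventory.ini site.yml
```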
Well - obviously I have a hard time listening - and I ignored your input. I built a couple of Docker images locally, but just as I was getting ready to push them onto the VPS, Docker wouldn't install nicely there. A couple of FAQs and support tickets later, I found out it's not supported at this particular VPS provider. Since most of my stuff requires only super basic configuration changes and file swaps, I've decided to just script the entire install. I'm pretty sure it'll blow up once the Java or MySQL versions change, but it's so much faster to do it that way and possibly deal with the fallout later than to go through the learning curve of Salt Open or basic Chef, especially since I already have a second server running in standby mode.
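The script is nothing fancy; a trimmed-down sketch, assuming CentOS 7 and placeholder packages:

```
#!/usr/bin/env bash
# install.sh -- scripted setup instead of Docker/Salt/Chef.
# This is exactly the part that breaks when Java/MySQL versions move on.
set -euo pipefail

yum install -y java-1.8.0-openjdk mariadb-server
systemctl enable mariadb
systemctl start mariadb

# basic configuration changes and file swaps
cp conf/my.cnf /etc/my.cnf.d/custom.cnf
systemctl restart mariadb
```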
Yes, but script it using Ansible; it will take you an hour to figure out, and they have videos and good docs. I regret not having used Ansible / Chef / Puppet waaay earlier because I was too lazy to read the documentation.
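For what it's worth, the scripted install above ports over almost line for line; a sketch with the same placeholder packages, untested:

```
# site.yml
- hosts: trading
  become: yes
  tasks:
    - name: install java and mariadb
      yum:
        name:
          - java-1.8.0-openjdk
          - mariadb-server
        state: present

    - name: start and enable mariadb
      service:
        name: mariadb
        state: started
        enabled: yes

    - name: copy custom mysql config
      copy:
        src: conf/my.cnf
        dest: /etc/my.cnf.d/custom.cnf
```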
Well - you do have a decent track record after being right the first time - so I'll listen. So far the stack is pretty slim anyway: secured SSH (moved to a redirected port), got rid of the stuff I didn't want or need (SMTP), installed some basic security monitoring (Splunk), and deployed MySQL on it, following https://www.digitalocean.com/community/tutorials/how-to-install-mysql-on-centos-7. All of the above needed less than a handful of firewall rule changes, and it should be easy to port this into any of these automation tools.
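For the curious, the SSH/firewall part amounts to something like this (the port number is just an example):

```
# /etc/ssh/sshd_config -- move the port, lock down logins
Port 2222
PermitRootLogin no
PasswordAuthentication no
```

```
# CentOS 7 firewalld: allow the new port, drop the stock ssh service
sudo firewall-cmd --permanent --add-port=2222/tcp
sudo firewall-cmd --permanent --remove-service=ssh
sudo firewall-cmd --reload
# with SELinux enforcing, also register the new port before restarting
sudo semanage port -a -t ssh_port_t -p tcp 2222
sudo systemctl restart sshd
```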
Not much progress on this. Got accounting for one strategy moved over to the new environment. TWS is up and running, but no execution code is in place yet. Going to work on quote feeds next.