I was asked today to spec and configure a GPU box, and I've never messed with GPU computing at all. Outside of the easy and obvious stuff (buying the GPUs, sticking them into a machine, putting an OS on it), I have a few questions that hopefully someone can comment on. Happy to document, or at least post pics of the build(s), specs, etc.

The biggest thing I'm wondering is whether you can assign GPU cores to processes the way you can with CPU cores. For example, if I have a 12-core server (host) and I put in two GPU units (yes, I have two PCIe x16 slots), can I assign blocks of GPU cores to specific applications?

Is anyone using VMware with GPUs? I know you can pass the whole device through to a VM/VPS, but can you let the hypervisor control the GPU and then use something like vCloud Director to scale up 20-30 "on demand" VM/VPS machines, using the physical server's CPU cores for the operating systems but blocks of 50-100 GPU cores for the computing/processing work? Or do I have to configure it to just throw the jobs/work at both GPU units and let them figure out the hierarchy and order themselves?

Final question is that of memory (RAM). Normally on the big-RAM boxes I run, it's optimal to divide the RAM into blocks per CPU (with 2 CPUs and 64GB of RAM, you give each CPU a 32GB allocation instead of using the RAM as one giant block like a normal desktop). Should I be giving extra memory to the GPUs (usually through the OS, not in the BIOS)? Also, if I have two GPUs and two CPUs and I divide the RAM in half and then allocate some extra (1MB per core? 10MB per GPU core?), do I then need to assign each GPU to its own CPU at the OS level? And regarding the memory on these GPU units: can it be upgraded, or is it soldered onto the GPU board? It seems foolish to spend $2,200 on a 448-core GPU that only has 6GB of GDDR5, or 2GB of GDDR5 for 256 cores on the alternative we're looking at.

OK, this is actually the last question. I've been shopping for old/legacy PCI video cards (yes, PCI) because I assumed these compute cards prefer to be dedicated GPUs rather than actually driving monitors. Does it make any difference in performance (this will be racked, so there will never be a monitor physically attached) if the OS thinks the GPU is also driving a monitor vs. treating it as a dedicated compute device?

Looking forward to comments - thx.
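
P.S. On the core-assignment question: my reading of the CUDA docs so far (and I could be completely off here) is that you don't carve off blocks of 50-100 cores for a process at all. Each process or host thread picks a whole device with cudaSetDevice() (or gets locked to one card via the CUDA_VISIBLE_DEVICES environment variable), and the card's own scheduler spreads the kernel's thread blocks across whatever cores that device has. Here's a rough, untested sketch of what I think that looks like; the saxpy kernel and the device-number argument are just placeholders of mine:

```
// rough sketch, untested -- assumes the CUDA toolkit is installed;
// the kernel and the device-number argument are my placeholders
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// trivial kernel: y = a*x + y, just to have something to launch
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main(int argc, char **argv) {
    int count = 0;
    cudaGetDeviceCount(&count);                 // how many GPUs the runtime can see
    int dev = (argc > 1) ? atoi(argv[1]) : 0;   // each process picks ONE whole device...
    cudaSetDevice(dev % count);                 // ...you don't pin individual GPU cores

    const int n = 1 << 20;
    float *x = NULL, *y = NULL;
    cudaMalloc((void **)&x, n * sizeof(float)); // this memory lives on the selected card
    cudaMalloc((void **)&y, n * sizeof(float));

    // the card's own scheduler decides which cores run these thread blocks
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

So to use both cards I'd apparently run two instances (or two host threads), one told to use device 0 and one device 1, or set CUDA_VISIBLE_DEVICES per service. If the VMware/vGPU route changes any of this, I'm all ears.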
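
And on the GPU memory question: everything I've seen suggests the GDDR5 is soldered to the board, so the 6GB or 2GB is what you live with. The most I think you can do from software is check what each card reports about itself. Another untested sketch, same assumptions as above:

```
// untested sketch -- prints what each card reports about itself
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, d);
        printf("GPU %d: %s, %d multiprocessors, %.1f GB on-board memory\n",
               d, p.name, p.multiProcessorCount,
               p.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```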