@Kim_Bottu … Virtually Gung Ho
This article does not give you a complete list of what to do when, but I hope it will make you rethink your socket vs core vs NUMA allocations.
When you read an article about using sockets versus cores, the conclusion is almost always that the choice only matters where licensing requires it: if your application is licensed per socket, it may be better and cheaper to add cores instead of sockets to your VM.
Since that statement was first made years ago, VMs have grown A LOT compared to the VMs we had to deal with back then, and the way VMs deal with CPUs and RAM has changed a lot as well. A VM using 12 CPUs and 384 GB of RAM is not that exceptional anymore.
In light of this you should consider a few things before you decide which CPU type allocation you will apply: sockets vs cores.
1. If your license only supports a single socket, you will have to allocate cores.
2. How many sockets and cores does your physical host have?
Your VM is a general purpose VM using up to 8 cores and 32 GB of RAM. Licensing is not dependent on the number of sockets. The physical host has 2 sockets with 8 cores each and 512 GB of RAM.
In this case, it doesn’t really matter whether you add sockets or cores. Unless you change the vNUMA settings, vSphere will place this VM on whichever NUMA node it thinks the VM will run optimally on, without interfering with the other workloads.
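The "it doesn't matter" conclusion boils down to a simple fit check: the VM fits entirely inside either NUMA node. Here is a minimal sketch of that check, assuming 8 cores per socket and evenly populated DIMMs (the function name and defaults are illustrative, not a vSphere API):

```python
# Hypothetical sketch: does a VM fit inside one NUMA node of this host?
# Host from the example: 2 sockets, 8 cores per socket, 512 GB RAM total.

def fits_single_numa_node(vm_cpus, vm_ram_gb,
                          host_sockets=2, cores_per_socket=8,
                          host_ram_gb=512):
    """Return True if the VM fits within one NUMA node (one socket and
    its local memory), so the scheduler never has to span nodes."""
    ram_per_node = host_ram_gb / host_sockets  # assumes even DIMM population
    return vm_cpus <= cores_per_socket and vm_ram_gb <= ram_per_node

# The 8 vCPU / 32 GB general-purpose VM from the example:
print(fits_single_numa_node(8, 32))    # True: either NUMA node can host it

# The 12 vCPU / 384 GB VM mentioned earlier would not fit:
print(fits_single_numa_node(12, 384))  # False
```

When the answer is True, the socket/core split is largely cosmetic and vSphere's NUMA scheduler can do its job either way.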
Your VM is a general purpose VM using up to 8 cores and 32 GB of RAM. Licensing is not dependent on the number of sockets. In addition, your VM environment is running at 82% of its CPU workload capacity, and this particular VM will put a heavy load on the physical server. The physical host has 2 sockets with 8 cores each and 512 GB of RAM.
In this case the thing to watch out for is the CPU workload. Not only are the pCPUs already under heavy load, but this VM really needs a lot of juice. Rather than setting a reservation for this VM, you could also give it 2 sockets and divide the cores over both sockets. The advantage is that when the physical CPU has a hard time processing all requests, the VM has access to the local L2 cache of all the cores it uses, spread evenly over both sockets. L2 cache is still faster than the LLC (Last Level Cache, aka L3 cache) and much faster than regular RAM. Under severe CPU workloads, this could actually help this VM. So why spread the cores over both sockets if each core has its own local L2 cache? If one particular socket has more work than the other, the L2 cache on the other socket's cores remains quickly accessible. While this *might* induce co-stop, not all applications will use all cores at the same time. So, check with the vendor.
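The "divide the cores over both sockets" idea can be pictured as a simple layout calculation: the vCPU count must divide evenly over the chosen number of virtual sockets, and each virtual socket gets an equal share of cores. This is a hypothetical sketch of that arithmetic (not a vSphere API call):

```python
# Hypothetical sketch: how a sockets/cores choice lays out a VM's vCPUs.

def vcpu_layout(total_vcpus, sockets):
    """Divide vCPUs evenly over virtual sockets. vSphere requires the
    vCPU count to be divisible by the cores-per-socket value chosen."""
    if total_vcpus % sockets:
        raise ValueError("vCPU count must divide evenly over sockets")
    cores_per_socket = total_vcpus // sockets
    return {f"socket{s}": [s * cores_per_socket + c
                           for c in range(cores_per_socket)]
            for s in range(sockets)}

# The 8 vCPU VM from the example, spread over both sockets:
print(vcpu_layout(8, 2))
# {'socket0': [0, 1, 2, 3], 'socket1': [4, 5, 6, 7]}
```

With 2 virtual sockets of 4 cores, each half of the VM's work can land on a different physical socket, which is the cache-locality effect described above.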
To give you a better view of exactly how cores see their cache, here is another example using the mobile Clarksfield Core i7 processor.
Your VM is a big DB server using 6 cores and 384 GB of RAM. Licensing is not dependent on the number of sockets. The physical server is running at 60% of its CPU capacity and 40% of its memory capacity.
Personally I think it is better to give this VM 2 sockets with 3 cores each. Each socket can access its own RAM locally, so half of the instructions would be handled by one socket and the other half by the other. If you instead make this a single-socket, six-core VM, part of the RAM will be accessed over the interconnect between the sockets, which makes RAM access slower. Of course, always check with the vendor.
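The memory-locality argument here is simple arithmetic. A minimal sketch, assuming the host's 512 GB is split evenly between the two NUMA nodes (256 GB local to each socket) and that the VM's memory is divided evenly over its virtual sockets:

```python
# Hypothetical sketch of the memory-locality argument in this example.
# Host: 2 sockets, 512 GB RAM => ~256 GB local to each NUMA node
# (assuming evenly populated DIMMs). VM: 6 vCPUs, 384 GB RAM.

HOST_RAM_GB = 512
NUMA_NODES = 2
VM_RAM_GB = 384

ram_per_node = HOST_RAM_GB / NUMA_NODES          # 256 GB per node

# One virtual socket: the VM's memory is anchored to a single node,
# so anything beyond that node's local RAM crosses the interconnect.
remote_gb_single = max(0, VM_RAM_GB - ram_per_node)

# Two virtual sockets, 3 cores each: each half of the VM's memory
# (192 GB) fits inside its own node's local 256 GB.
remote_gb_dual = max(0, VM_RAM_GB / NUMA_NODES - ram_per_node) * NUMA_NODES

print(remote_gb_single)  # 128.0 GB would be accessed over the interconnect
print(remote_gb_dual)    # 0.0 GB -- all accesses stay node-local
```

In other words, the single-socket layout forces roughly 128 GB of the database's memory to be remote, while the 2 x 3 layout keeps everything node-local, which is exactly the point made above.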
Recommended articles and books: