The IBM Enterprise X-Architecture supports servers with up to four nodes (also called CECs or SMP Expansion
Complexes in IBM terminology). Each node can contain up to four Intel Xeon MP processors for a total of 16
CPUs. The next-generation IBM eServer x445 uses an enhanced version of the Enterprise X-Architecture, and
scales to eight nodes with up to four Xeon MP processors for a total of 32 CPUs. The third-generation IBM
eServer x460 provides similar scalability but also supports 64-bit Xeon MP processors. The high scalability of
all these systems stems from the Enterprise X-Architecture’s NUMA design, which it shares with IBM’s high-end
POWER4-based pSeries servers.
AMD Opteron-Based Systems
AMD Opteron-based systems, such as the HP ProLiant DL585 Server, also provide NUMA support.
The BIOS setting for node interleaving determines whether the system behaves more like a NUMA system or
more like a Uniform Memory Architecture (UMA) system. For details, see the HP ProLiant DL585 Server
technology brief and the HP ROM-Based Setup Utility User Guide, both available on the HP Web site.
By default, node interleaving is disabled, so each processor has its own memory. The BIOS builds a System
Resource Allocation Table (SRAT), so the ESX/ESXi host detects the system as NUMA and applies NUMA
optimizations. If you enable node interleaving (also known as interleaved memory), the BIOS does not build
an SRAT, so the ESX/ESXi host does not detect the system as NUMA.
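You can verify which mode the host detected. As a minimal sketch, assuming classic ESX with a service console
(not available on ESXi), the esxtop memory screen includes per-node NUMA statistics only when the host has
detected the system as NUMA:
   # From the ESX service console:
   esxtop
   # Press "m" to switch to the memory screen. A NUMA statistics line
   # (per-node local memory and usage) appears only if an SRAT was found
   # and the host is applying NUMA optimizations.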
Currently shipping Opteron processors have up to four cores per socket. When node interleaving is disabled,
each socket has its own local memory, and memory attached to the other sockets is remote. Single-core Opteron
systems therefore have one processor core per NUMA node, and dual-core Opteron systems have two cores per
NUMA node.
SMP virtual machines (which have two virtual processors) cannot reside within a NUMA node that has a single
core, such as a node on a single-core Opteron system. As a result, the ESX/ESXi NUMA scheduler cannot
manage them. Virtual machines that are not managed by the NUMA scheduler still run correctly, but they do
not benefit from the ESX/ESXi NUMA optimizations. Uniprocessor virtual machines (with a single virtual
processor) can reside within a single NUMA node and are managed by the ESX/ESXi NUMA scheduler.
NOTE For small Opteron systems, NUMA rebalancing is now disabled by default to ensure scheduling fairness.
Use the Numa.RebalanceCoresTotal and Numa.RebalanceCoresNode options to change this behavior.
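As a sketch of how you might change these options from the service console, the following assumes that
esxcfg-advcfg exposes the vSphere Client option names Numa.RebalanceCoresTotal and
Numa.RebalanceCoresNode under /Numa/... paths; the paths and values shown are illustrative, not
recommendations:
   # Read the current thresholds (assumed option paths)
   esxcfg-advcfg -g /Numa/RebalanceCoresTotal
   esxcfg-advcfg -g /Numa/RebalanceCoresNode
   # Example values only: permit rebalancing on a host with at least
   # two total cores and one core per node
   esxcfg-advcfg -s 2 /Numa/RebalanceCoresTotal
   esxcfg-advcfg -s 1 /Numa/RebalanceCoresNode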
Specifying NUMA Controls
If your applications use large amounts of memory or your host runs a small number of virtual machines, you
might want to optimize performance by specifying virtual machine CPU and memory placement explicitly.
This is useful if a virtual machine runs a memory-intensive workload, such as an in-memory database or a
scientific computing application with a large data set. You might also want to optimize NUMA placements
manually if the system workload is known to be simple and unchanging. For example, an eight-processor
system running eight virtual machines with similar workloads is easy to optimize explicitly.
NOTE In most situations, an ESX/ESXi host’s automatic NUMA optimizations result in good performance.
ESX/ESXi provides two sets of controls for NUMA placement, so that administrators can control memory and
processor placement of a virtual machine.
The vSphere Client allows you to specify two options.
CPU Affinity
A virtual machine should use only the processors on a given node.
Memory Affinity
The server should allocate memory only on the specified node.
If you set both options before a virtual machine starts, the virtual machine runs only on the selected node and
all of its memory is allocated locally.
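The same placement can also be expressed as configuration parameters in the virtual machine’s .vmx file. The
following is a sketch that assumes a two-node dual-core Opteron host on which NUMA node 1 contains physical
CPUs 2 and 3; adjust the numbers to match your topology:
   # Run the virtual machine only on the CPUs of NUMA node 1
   sched.cpu.affinity = "2,3"
   # Allocate the virtual machine's memory only from NUMA node 1
   sched.mem.affinity = "1"
If only one of the two parameters is set, the virtual machine can end up running on one node while drawing
memory from another, so the options are normally set together.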