Posts Tagged ‘memory’

vNUMA: What it is and why it matters

September 19, 2011
By joshsimons

In vSphere 5, we introduced vNUMA, which allows interested guest operating systems to see that they are running on a NUMA (Non-Uniform Memory Access) topology. For those not familiar, here is a one-diagram NUMA explanation. As you can see, in the UMA case, the cost of accessing a particular memory address is the same regardless of which socket your program is running on. In the NUMA case, however, it does matter. With memory attached directly to each socket, there can be significant performance penalties if an application generates large numbers of non-local memory accesses. The ESX hypervisor has been NUMA-aware for quite some time, making memory and CPU allocation decisions based on its full understanding of the topology...
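
To make the local-versus-remote distinction concrete, here is a minimal sketch (not from the original post) that uses Linux's libnuma to inspect the topology a vNUMA-enabled guest exposes and to place an allocation on a specific node:

```c
/* Sketch: inspecting the NUMA topology visible to the OS and pinning an
 * allocation to one node. Assumes Linux with libnuma installed;
 * build with: gcc numa_demo.c -o numa_demo -lnuma */
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "No NUMA topology visible to this OS.\n");
        return 1;
    }

    printf("NUMA nodes visible: %d\n", numa_max_node() + 1);

    /* Allocate 64 MB explicitly on node 0: accesses from CPUs on node 0
     * are local (fast); accesses from other nodes are remote (slower). */
    size_t sz = 64UL * 1024 * 1024;
    char *buf = numa_alloc_onnode(sz, 0);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }

    /* Touch each page so it is actually backed by memory on node 0. */
    for (size_t i = 0; i < sz; i += 4096)
        buf[i] = 0;

    printf("CPU 0 sits on node %d\n", numa_node_of_cpu(0));

    numa_free(buf, sz);
    return 0;
}
```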

Read more

Future Memory Technology

July 29, 2011
By joshsimons

Dean Klein, VP of Memory System Development at Micron, gave the Thursday keynote at ISC ’11 last week. He spoke about future memory technologies with an emphasis on what will be required for exascale HPC systems. I had often heard that memory consumes roughly half the power in a typical system. While I realize that reading data objects smaller than a cache line still results in an entire cache line being transferred from memory, which wastes both system bandwidth and power, I did not realize that the nature of DRAM designs causes a similar issue (called “overfetch”) at a lower level, in which up to 256 times more DRAM bits than have been requested from the part are lit up, which also...
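
For intuition, both levels of waste can be reproduced with back-of-the-envelope arithmetic. The sketch below assumes a 64-byte cache line, an eight-chip rank, and a 16 Kbit row per part; these are illustrative values I have chosen, not figures from the talk:

```c
/* Back-of-the-envelope "overfetch" arithmetic. The 64-byte cache line is
 * typical of x86; the rank width and row size are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    const double load_bits      = 8 * 8;     /* an 8-byte load             */
    const double line_bits      = 64 * 8;    /* one 64-byte cache line     */
    const double chips_per_rank = 8;         /* assumed x8 DIMM rank       */
    const double row_bits       = 16 * 1024; /* assumed 16 Kbit row / part */

    /* Level 1: the memory system always moves a whole cache line,
     * even for a single 8-byte load. */
    printf("cache-line overfetch: %.0fx\n", line_bits / load_bits);    /* 8x */

    /* Level 2: each DRAM part activates an entire row just to deliver
     * its slice of that cache line. */
    double bits_per_part = line_bits / chips_per_rank;                 /* 64 */
    printf("DRAM row overfetch:   %.0fx\n", row_bits / bits_per_part); /* 256x */
    return 0;
}
```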

Read more

Memory Virtualization and HPC Performance

December 2, 2010
By joshsimons

Jeff Buell in our performance group just posted another excellent HPC performance piece on VROOM. This study examines the effects of memory virtualization on HPL and StarRandomAccess, two HPC benchmarks with very different memory access patterns. The results show that hardware-based memory virtualization does not always lead to the best overall performance, especially with HPC applications whose performance is dominated by TLB misses. By reverting to software-based memory virtualization in such cases, Jeff shows that the performance disparity between virtual and native is essentially eliminated. Fascinating stuff. Read his article here. I should also mention that SC ’10 in New Orleans was Jeff’s first SC conference. Read his trip report here.
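
A quick note on why TLB misses matter so much here: under hardware-assisted memory virtualization, each TLB miss triggers a two-dimensional page walk through both guest and nested page tables, so a workload that misses the TLB on nearly every reference pays that cost constantly. The toy sketch below reproduces a StarRandomAccess-style access pattern; it is my own illustration, not Jeff's benchmark code or the HPCC benchmark itself:

```c
/* Toy version of a StarRandomAccess-style access pattern: random 64-bit
 * updates scattered across a table far larger than the TLB can cover,
 * so nearly every access is a TLB miss. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t table_len = 1u << 26;              /* 64 Mi words = 512 MB */
    uint64_t *table = malloc(table_len * sizeof *table);
    if (table == NULL) { perror("malloc"); return 1; }

    for (size_t i = 0; i < table_len; i++)
        table[i] = i;

    /* Simple xorshift generator: each update lands on an effectively
     * random page of the table. */
    uint64_t x = 0x9e3779b97f4a7c15ULL;
    for (size_t i = 0; i < table_len; i++) {
        x ^= x << 13; x ^= x >> 7; x ^= x << 17;
        table[x & (table_len - 1)] ^= x;
    }

    printf("table[0] = %llu\n", (unsigned long long)table[0]);
    free(table);
    return 0;
}
```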

Read more