Starting with the Xeon E5 processors “Sandy Bridge EP” in 2012, all of Intel’s mainstream multicore server processors have included a distributed L3 cache with distributed coherence processing. The L3 cache is divided into “slices”, which are distributed around the chip — typically one “slice” for each processor core.
Each core’s L1 and L2 caches are local and private, but outside the L2 cache addresses are distributed in a random-looking way across the L3 slices all over the chip.
As an easy case, for the Xeon Gold 6142 processor (1st generation Xeon Scalable Processor with 16 cores and 16 L3 slices), every aligned group of 16 cache line addresses is mapped so that one of those 16 cache lines is assigned to each of the 16 L3 slices, using an undocumented permutation generator. The total number of possible permutations of the L3 slice numbers [0…15] is 16! (almost 21 trillion), but measurements on the hardware show that only 16 unique permutations are actually used. The observed sequences are the “binary permutations” of the sequence [0,1,2,3,…,15]. The “binary permutation” operator can be described in several different ways, but the structure is simple (a short code sketch follows the list below):
binary permutation “0” of a sequence is just the original sequence
binary permutation “1” of a sequence swaps elements in each even/odd pair, e.g.
Binary permutation 1 of [0,1,2,3,4,5,6,7] is [1,0,3,2,5,4,7,6]
binary permutation “2” of a sequence swaps pairs of elements in each set of four, e.g.,
Binary permutation 2 of [0,1,2,3,4,5,6,7] is [2,3,0,1,6,7,4,5]
binary permutation “3” of a sequence performs both permutation “1” and permutation “2”
binary permutation “4” of a sequence swaps 4-element halves of each set of 8 elements, e.g.,
Binary permutation 4 of [0,1,2,3,4,5,6,7] is [4,5,6,7,0,1,2,3]
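One way to read the rules above is that binary permutation “k” simply XORs each element’s index with k. A minimal C sketch of that interpretation (mine, not Intel documentation), which reproduces the examples given above:

```c
#include <stdio.h>

/* "Binary permutation" k of sequence s: t[i] = s[i ^ k].
 * Permutation 1 swaps even/odd pairs, permutation 2 swaps pairs of
 * pairs, permutation 4 swaps 4-element halves of each set of 8, etc. */
void binary_permutation(const int *s, int *t, int n, int k)
{
    for (int i = 0; i < n; i++)
        t[i] = s[i ^ k];
}

int main(void)
{
    int s[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    int t[8];
    for (int k = 0; k < 8; k++) {
        binary_permutation(s, t, 8, k);
        printf("binary permutation %d:", k);
        for (int i = 0; i < 8; i++)
            printf(" %d", t[i]);
        printf("\n");
    }
    return 0;
}
```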
Binary permutation operators are very cheap to implement in hardware, but are limited to sequence lengths that are a power of 2. When the number of slices is not a power of 2, using binary permutations requires that you create a power-of-2-length sequence that is longer than the number of slices, and which contains each slice number approximately the same number of times. As an example, the Xeon Scalable Processors (gen1 and gen2) with 24 L3 slices use a 512-element sequence that contains each of the 24 slice numbers either 21 or 22 times (16 of the values appear 21 times each and the other 8 appear 22 times each, for a total of 512). This almost-uniform “base sequence” is then permuted using the 512 binary permutations that are possible for a 512-element sequence.
Intel does not publish the length of the base sequences, the values in the base sequences, or the formulas used to determine which binary permutation of the base sequence will be used for any particular address in memory.
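For concreteness, the overall “base sequence plus binary permutation” model can be sketched as below, using the simple 16-slice case where the base sequence is just [0…15]. This is only a structural sketch based on my reading of the model: the selector masks are placeholders (zeros), the choice of index bits is an assumption, and the real values must be taken from the measured data in the technical report linked below.

```c
#include <stdint.h>
#include <stdio.h>

/* Structural sketch of the "base sequence plus binary permutation" model.
 * BASE_LEN, base_sequence[], and selector_mask[] are placeholders here;
 * the measured values for each processor are in the data files linked
 * below, and none of this is published by Intel.                        */
#define BASE_LEN 16                 /* power-of-2 base sequence length */
#define SELECTOR_BITS 4             /* log2(BASE_LEN) selector bits    */

static const uint8_t base_sequence[BASE_LEN] =
    { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 };

static const uint64_t selector_mask[SELECTOR_BITS] =
    { 0, 0, 0, 0 };                 /* placeholders, NOT real mask values */

uint32_t l3_slice(uint64_t paddr)
{
    uint64_t line  = paddr >> 6;                  /* cache-line address            */
    uint32_t index = (uint32_t)(line % BASE_LEN); /* position in base sequence
                                                     (assumed: low line-address bits) */

    /* Assumed form of the selector: each bit is the parity (XOR-reduction)
     * of the physical address ANDed with a per-bit mask.                   */
    uint32_t selector = 0;
    for (int b = 0; b < SELECTOR_BITS; b++)
        selector |= (uint32_t)__builtin_parityll(paddr & selector_mask[b]) << b;

    /* Binary permutation "selector" = XOR of the index with the selector. */
    return base_sequence[index ^ selector];
}

int main(void)
{
    printf("slice for 0x12345640 -> %u\n", l3_slice(0x12345640ULL));
    return 0;
}
```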
Over the years, a few folks have investigated the properties of these mappings, and have published a small number of results — typically for smaller L3 slice counts.
Today I am happy to announce the availability of the full base sequences and the binary permutation selector equations for many Intel processors. The set of systems includes:
Xeon Scalable Processors (gen1 “Skylake Xeon” and gen2 “Cascade Lake Xeon”) with 14, 16, 18, 20, 22, 24, 26, 28 L3 slices
Xeon Scalable Processors (gen3 “Ice Lake Xeon”) with 28 L3 slices
Xeon Phi x200 processors (“Knights Landing”)
The results for the Xeon Scalable Processors (all generations) are based on my own measurements. The results for the Xeon Phi x200 are based on the mappings published by Kommrusch, et al. (e.g., https://arxiv.org/abs/2011.05422), but re-interpreted in terms of the “base sequence” plus “binary permutation” model used for the other processors.
The technical report and data files (base sequences and permutation selector masks for each processor) are available at https://hdl.handle.net/2152/87595
Have fun!
Next up — using these address-to-L3/CHA mapping results to understand observed L3 and Snoop Filter conflicts in these processors….
I recently saw a reference to a future Intel “Atom” core called “Tremont” and ran across an interesting new instruction, “CLDEMOTE”, that will be supported in “Future Tremont and later” microarchitectures (ref: “Intel® Architecture Instruction Set Extensions and Future Features Programming Reference”, document 319433-035, October 2018).
The “CLDEMOTE” instruction is a “hint” to the hardware that it might help performance to move a cache line from the cache level(s) closest to the core to a cache level that is further from the core.
What might such a hint be good for? There are two obvious use cases:
Temporal Locality Control: The cache line is expected to be re-used, but not so soon that it should remain in the closest/smallest cache.
Cache-to-Cache Intervention Optimization: The cache line is expected to be accessed soon by a different core, and cache-to-cache interventions may be faster if the data is not in the closest level(s) of cache.
Intel’s instruction description mentions this use case explicitly.
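As a sketch of the second use case, a producer might fill a buffer and then demote the lines before signaling a consumer on another core. The intrinsic is _mm_cldemote() (immintrin.h, compile with -mcldemote); the buffer and flag names here are hypothetical, and since CLDEMOTE is only a hint the hardware is free to ignore it.

```c
#include <immintrin.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical producer routine: write a buffer, then hint that the
 * lines should be demoted out of the producer's L1/L2 so a consumer
 * on another core can get them more cheaply.  CLDEMOTE is a hint and
 * may be ignored; correctness does not depend on it.                 */
void produce(double *buf, size_t n, atomic_int *flag)
{
    for (size_t i = 0; i < n; i++)
        buf[i] = (double)i;                        /* fill the buffer        */

    for (size_t i = 0; i < n * sizeof(double); i += 64)
        _mm_cldemote((const char *)buf + i);       /* demote each cache line */

    atomic_store_explicit(flag, 1, memory_order_release);  /* signal consumer */
}
```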
If you are not a “cache hint instruction” enthusiast, this may not seem like a big deal, but it actually represents a relatively important shift in instruction design philosophy.
Instructions that directly pertain to caching can be grouped into three categories:
Mandatory Control
The specified cache transaction must take place to guarantee correctness.
E.g., In a system with some non-volatile memory, a processor must have a way to guarantee that dirty data has been written from the (volatile) caches to the non-volatile memory. The Intel CLWB instruction was added for this purpose — see https://software.intel.com/en-us/blogs/2016/09/12/deprecate-pcommit-instruction
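A minimal sketch of this mandatory-control case using the CLWB intrinsic (immintrin.h, compile with -mclwb). A 64-byte cache line is assumed, and the platform-specific persistence-domain details of a real non-volatile memory system are omitted.

```c
#include <immintrin.h>
#include <stddef.h>

/* Write back any dirty cache lines covering a region so the data
 * reaches (non-volatile) memory.  CLWB writes the line back without
 * necessarily evicting it; the SFENCE orders the writebacks with
 * respect to subsequent stores.                                      */
void flush_to_persistence(const void *p, size_t bytes)
{
    const char *cp = (const char *)p;
    for (size_t i = 0; i < bytes; i += 64)
        _mm_clwb((void *)(cp + i));
    _mm_sfence();
}
```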
“Direct” Hints
A cache transaction is requested, but it is not required for correctness.
The instruction definition is written in terms of specific transactions on a model architecture (with caveats that an implementation might do something different).
E.g., Intel’s PREFETCHW instruction requests that the cache line be loaded in anticipation of a store to the line.
This allows the cache line to be brought into the processor in advance of the store.
More importantly, it also allows the cache coherence transactions associated with obtaining exclusive access to the cache line to be started in advance of the store.
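A sketch of the prefetch-for-write idea using the gcc/clang builtin; whether this actually emits PREFETCHW depends on the compiler flags and target, and the prefetch distance is just a guess.

```c
/* Update an array in place, requesting write ownership of lines ahead
 * of the stores.  __builtin_prefetch(p, 1, 3) requests a prefetch with
 * write intent; on x86 targets with PREFETCHW enabled (-mprfchw or a
 * suitable -march) this can start the read-for-ownership early.       */
void scale_in_place(double *a, long n, double s)
{
    for (long i = 0; i < n; i++) {
        if (i + 128 < n)                            /* ~16 cache lines ahead (a guess) */
            __builtin_prefetch(&a[i + 128], 1, 3);  /* second argument 1 = write intent */
        a[i] *= s;
    }
}
```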
“Indirect” Hints
A cache transaction is requested, but it is not required for correctness.
The instruction definition is written in terms of the semantics of the program, not in terms of specific cache transactions (though specific transactions might be provided as an example).
E.g., “Push For Sharing Instruction” (U.S. Patent 8099557) is a hint to the processor that the current process is finished working on a cache line and that another processing core in the same coherence domain is expected to access the cache line next. The hardware should move the cache line and/or modify its state to minimize the overhead that the other core will incur in accessing this line.
I was the lead inventor on this patent, which was filed in 2008 while I was working at AMD.
The patent was an attempt to generalize my earlier U.S. Patent 7194587, “Localized Cache Block Flush Instruction”, filed while I was at IBM in 2003.
Intel’s CLDEMOTE instruction is clearly very similar to my “Push For Sharing” instruction in both philosophy and intent.
Even though I have contributed to several patents on cache control instructions, I have decidedly mixed feelings about the entire approach. There are several issues at play here:
The gap between processor and memory performance continues to increase, making performance more and more sensitive to the effective use of caches.
Cache hierarchies are continuing to increase in complexity, making it more difficult to understand what to do for optimal performance — even if precise control were available.
This has led to the near-extinction of “bottom-up” performance analysis in computing — first among customers, but also among vendors.
The cost of designing processors continues to increase, with caches & coherence playing a significant role in the increased cost of design and validation.
Protocols must become more complex to deal with increasing core counts and cache sizes without sacrificing frequency and/or latency.
Implementations must have more complex behavior to deal with the increasing opportunities for contention — dynamically adaptive prefetching, dynamically adaptive coherence policies.
Implementations must become increasingly effective at power management without degrading cache performance.
Technology trends show that the power associated with data motion (including cache coherence traffic) has come to far exceed the power required by computations, and that the ratio will continue to increase.
This does not currently dominate costs (as discussed in the SC16 talk cited above), but that is in large part because the processors have remained expensive!
Decreasing processor cost will require simpler designs — this decreases the development cost that must be recovered and simultaneously reduces the barriers to entry into the market for processors (allowing more competition and innovation).
Cache hints are only weakly effective at improving performance, but contribute to the increasing costs of design, validation, and power. More of the same is not an answer — new thinking is required.
If one starts from current technology (rather than the technology of the late 1980’s), one would naturally design architectures to address the primary challenges:
“Vertical” movement of data (i.e., “private” data moving up and down the levels of a memory hierarchy) must be explicitly controllable.
“Horizontal” movement of data (e.g., “shared” data used to communicate between processing elements) must be explicitly controllable.
Continuing to apply “band-aids” to the transparent caching architecture of the 1980’s will not help move the industry toward the next disruptive innovation.
Here are the annotated slides from my SC18 presentation on Snoop Filter Conflicts that cause performance variability in HPL and DGEMM on the Xeon Platinum 8160 processor.
This slide presentation includes data (not included in the paper) showing that Snoop Filter Conflicts occur in all Intel Scalable Processors (a.k.a., “Skylake Xeon”) with 18 cores or more, and also occur on the Xeon Phi x200 processors (“Knights Landing”).
The published paper is available (with ACM subscription) at https://dl.acm.org/citation.cfm?id=3291680
This is less boring than it sounds!
A more exciting version of the title.
This story is very abridged — please read the paper!
Execution times only — no performance counters yet.
500 nodes tested, but only 392 nodes had the 7 runs needed for a good computation of the median performance.
Dozens of different nodes showed slowdowns of greater than 5%.
I measured memory bandwidth first simply because I had the tools to do this easily.
Read memory controller performance counters before and after each execution and compute DRAM traffic.
Write traffic was almost constant — only the read traffic showed significant variability.
It is important to decouple the sockets for several reasons. (1) Each socket manages its frequency independently to remain within the Running Average Power Limit. (2) Cache coherence is managed differently within and between sockets.
The performance counter infrastructure is at https://github.com/jdmccalpin/periodic-performance-counters
Over 25,000 DGEMM runs in total, generating over 240 GiB of performance counter output.
I already saw that slow runs were associated with higher DRAM traffic, but needed to find out which level(s) of the cache were experiencing extra load misses.
The strongest correlation between execution time and cache miss counts was with L2 misses (measured here as L2 cache fills).
The variation of L2 fills for the full-speed runs is surprisingly large, but the slow runs all have L2 fill counts that are at least 1.5x the minimum value.
Some runs tolerate increased L2 fill counts up to 2x the minimum value, but all cases with >2x L2 fills are slow.
This chart looks at the sum of L2 fills for all the cores on the chip — next I will look at whether these misses are uniform across the cores.
I picked 15-20 cases in which a “good” trial (at or above median performance) was followed immediately by a “slow” trial (at least 20% below median performance).
This shows the L2 Fills by core for a “good” trial — the red dashed line corresponds to the minimum L2 fill count from the previous chart divided by 24 to get the minimum per-core value.
Different sets of cores and different numbers of cores had high counts in each run — even on the same node.
This adds the “slow” execution that immediately followed the “good” execution.
For the slow runs, most of the cores had highly elevated L2 counts. Again, different sets of cores and different numbers of cores had high counts in each run.
This data provides a critical clue: Since L2 caches are private and 2MiB pages fully control the L2 cache index bits, the extra L2 cache misses must be caused by something *external* to the cores.
The Snoop Filter is essentially the same as the directory tags of the inclusive L3 cache of previous Intel Xeon processors, but without room to store the data for the cache lines tracked.
The key concept is “inclusivity” — lines tracked by a Snoop Filter entry must be invalidated before that Snoop Filter entry can be freed to track a different cache line address.
I initially found some poorly documented core counters that looked like they were related to Snoop Filter evictions, then later found counters in the “uncore” that count Snoop Filter evictions directly.
This allowed direct confirmation of my hypothesis, as summarized in the next slides.
About 1% of the runs are more than 10% slower than the fastest run.
Snoop Filter Evictions clearly account for the majority of the excess L2 fills.
But there is one more feature of the “slow” runs….
For all of the “slow” runs, the DRAM traffic is increased. This means that a fraction of the data evicted from the L2 caches by the Snoop Filter evictions was also evicted from the L3 cache, and so must be retrieved from DRAM.
At high Snoop Filter conflict rates (>4e10 Snoop Filter evictions), all of the cases have elevated DRAM traffic, with something like 10%-15% of the Snoop Filter evictions missing in the L3 cache.
There are some cases in the range of 100-110 seconds that have elevated snoop filter evictions but not elevated DRAM reads, and these show minimal slowdowns.
This suggests that DGEMM can tolerate the extra latency of L2 miss/L3 hit for its data, but not the extra latency of L2 miss/L3 miss/DRAM hit.
Based on my experience in processor design groups at SGI, IBM, and AMD, I wondered if using contiguous physical addresses might avoid these snoop filter conflicts….
Baseline with 2MiB pages.
With 1GiB pages, the tail almost completely disappears in both width and depth.
Zooming in on the slowest 10% of the runs shows no evidence of systematic slowdowns when using 1GiB pages.
The performance counter data confirms that the snoop filter eviction rate is very small.
So we have a fix for single-socket DGEMM, what about HPL?
Intel provided a test version of their optimized HPL benchmark in December 2017 that supported 1GiB pages.
First, I verified that the performance variability for single-node (2-socket) HPL runs was eliminated by using 1GiB pages.
The variation across nodes is strong (due to different thermal characteristics), but the variation across runs on each node is extremely small.
The 8.6% range of average performance for this set of 31 nodes increases to >12% when considering the full 1736-node SKX partition of the Stampede2 system.
So we have a fix for single-node HPL, what about the full system?
Intel provided a full version of their optimized HPL benchmark in March 2018 and we ran it on the full system in April 2018.
The estimated breakdown of performance improvement into individual contributions is a ballpark estimate — it would be a huge project to measure the details at this scale.
The “practical peak performance” of this system is 8.77 PFLOPS on the KNLs plus 3.73 PFLOPS on the SKX nodes, for 12.5 PFLOPS “practical peak”. The 10.68 PFLOPS obtained is about 85% of this peak performance.
During the review of the paper, I was able to simplify the test further to allow quick testing on other systems (and larger ensembles).
This is mostly new material (not in the paper).
https://github.com/jdmccalpin/SKX-SF-Conflicts
This lets me more directly address my hypothesis about conflicts with contiguous physical addresses, since each 1GiB page is much larger than the 24 MiB of aggregate L2 cache.
It turns out I was wrong — Snoop Filter Conflicts can occur with contiguous physical addresses on this processor.
The pattern repeats every 256 MiB.
If the re-used space is in the 1st 32 MiB of any 1GiB page, there will be no Snoop Filter Conflicts.
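The actual test code is in the repository linked above; a stripped-down sketch of the idea (allocate one 1 GiB page, then repeatedly re-use a cache-contained block at a chosen offset within that page) might look like this, assuming 1 GiB hugepages have been reserved by the kernel.

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT 26
#endif
#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB (30 << MAP_HUGE_SHIFT)
#endif

/* Sketch only (the real test is in the SKX-SF-Conflicts repository):
 * allocate one 1 GiB page and repeatedly re-use a cache-contained
 * block starting at a chosen offset inside that page.  Requires 1 GiB
 * hugepages reserved by the kernel (e.g., hugepagesz=1G hugepages=N). */
int main(int argc, char **argv)
{
    size_t page   = 1UL << 30;                                  /* 1 GiB                 */
    size_t offset = (argc > 1) ? strtoul(argv[1], NULL, 0) : 0; /* offset under test     */
    size_t ws     = 16UL << 20;                                 /* ~16 MiB re-used block (example) */

    if (offset + ws > page) { fprintf(stderr, "offset too large\n"); return 1; }

    char *base = mmap(NULL, page, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
                      -1, 0);
    if (base == MAP_FAILED) { perror("mmap(1GiB page)"); return 1; }

    volatile char *p = base + offset;
    for (int rep = 0; rep < 1000; rep++)        /* re-use the same cache lines */
        for (size_t i = 0; i < ws; i += 64)     /* touch one byte per line     */
            p[i]++;

    munmap(base, page);
    return 0;
}
```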
What about other processors?
I tested Skylake Xeon processors with 14, 16, 18, 20, 22, 24, 26, 28 cores, and a 68-core KNL (Xeon Phi 7250).
These four processors are the only ones that show Snoop Filter Conflicts with contiguous physical addresses.
But with random 2MiB pages, all processors with more than 16 cores show Snoop Filter conflicts for some combinations of addresses…..
These are average L2 miss rates — individual cores can have significantly higher miss rates (and the maximum miss rate may be the controlling factor in performance for multi-threaded codes).
The details are interesting, but no time in the current presentation….
Overall, the uncertainty associated with this performance variability is probably more important than the performance loss.
Using performance counter measurements to look for codes that are subject to this issue is a serious “needle in haystack” problem — it is probably easier to choose codes that might have the properties above and test them explicitly.
Cache-contained shallow water model, cache-contained FFTs.
The new DGEMM implementation uses dynamic scheduling of the block updates to decouple the memory access patterns. There is no guarantee that this will alleviate the Snoop Filter Conflict problem, but in this case it does.
I now have a model that predicts all Snoop Filter Conflicts involving 2MiB pages on the 24-core SKX processors.
Unfortunately, the zonesort approach won’t work under Linux because pages are allocated on a “first-come, first-served” basis, so the fine control required is not possible.
An OS with support for page coloring (such as BSD) could be modified to provide this mitigation.
Again, the inability of Linux to use the virtual address as a criterion for selecting the physical page to use will prevent any sorting-based approach from working.
Intel has prepared a response. If you are interested, you should ask your Intel representative for a copy.
Memory systems using caches have a lot more potential flexibility than most implementations are able to exploit – you get the standard behavior all the time, even if an alternative behavior would be allowable and desirable in a specific circumstance. One area in which many vendors have provided an alternative to the standard behavior is in “non-temporal” or “streaming” stores. These are sometimes also referred to as “cache-bypassing” or “non-allocating” stores, and even “Non-Globally-Ordered stores” in some cases.
The availability of streaming stores on various processors can often be tied to the improvements in performance that they provide on the STREAM benchmark, but there are several approaches to implementing this functionality, with some subtleties that may not be obvious. (The underlying mechanism of write-combining was developed long before, and for other reasons, but exposing it to the user in standard cached memory space required the motivation provided by a highly visible benchmark.)
Before getting into details, it is helpful to review the standard behavior of a “typical” cached system for store operations. Most recent systems use a “write-allocate” policy — when a store misses the cache, the cache line containing the store’s target address is read into the cache, then the parts of the line that are receiving the new data are updated. These “write-allocate” cache policies have several advantages (mostly related to implementation correctness), but if the code overwrites all the data in the cache line, then reading the cache line from memory could be considered an unnecessary performance overhead. It is this extra overhead of read transactions that makes the subject of streaming stores important for the STREAM benchmark.
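As an illustration (not the STREAM source itself), here is a copy kernel written two ways with AVX intrinsics: with ordinary cached stores, each destination line is read into the cache (write-allocate) and later written back, while the non-temporal version writes the destination without reading it first. Alignment to 32 bytes and a length that is a multiple of 4 are assumed.

```c
#include <immintrin.h>
#include <stddef.h>

/* Copy kernel two ways.  With cached stores, each destination line is
 * read (write-allocate) and later written back: three memory transfers
 * per line copied.  With non-temporal stores the destination read is
 * skipped: two transfers per line copied.  Compile with AVX enabled.   */
void copy_cached(double * restrict c, const double * restrict a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        c[i] = a[i];                         /* ordinary (write-allocate) stores */
}

void copy_streaming(double * restrict c, const double * restrict a, size_t n)
{
    for (size_t i = 0; i < n; i += 4)
        _mm256_stream_pd(&c[i], _mm256_load_pd(&a[i]));  /* non-temporal stores */
    _mm_sfence();   /* order/drain the write-combining buffers */
}
```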
As in many subjects, a lot of mistakes can be avoided by looking carefully at what the various terms mean — considering both similarities and differences.
“Non-temporal store” means that the data being stored is not going to be read again soon (i.e., no “temporal locality”).
So there is no benefit to keeping the data in the processor’s cache(s), and there may be a penalty if the stored data displaces other useful data from the cache(s).
“Streaming store” is suggestive of stores to contiguous addresses (i.e., high “spatial locality”).
“Cache-bypassing store” says that at least some aspects of the transaction bypass the cache(s).
“Non-allocating store” says that a store that misses in a cache will not load the corresponding cache line into the cache before performing the store.
This implies that there is some other sort of structure to hold the data from the stores until it is sent to memory. This may be called a “store buffer”, a “write buffer”, or a “write-combining buffer”.
“Non-globally-ordered store” says that the results of an NGO store might appear in a different order (relative to ordinary stores or to other NGO stores) on other processors.
All stores must appear in program order on the processor performing the stores, but may only need to appear in the same order on other processors in some special cases, such as stores related to interprocessor communication and synchronization.
There are at least three issues raised by these terms:
Caching: Do we want the result of the store to displace something else from the cache?
Allocating: Can we tolerate the extra memory traffic incurred by reading the cache line before overwriting it?
Ordering: Do the results of this store need to appear in order with respect to other stores?
Different applications may have very different requirements with respect to these issues and may be best served by different implementations. For example, if the priority is to prevent the stored data from displacing other data in the processor cache, then it may suffice to put the data in the cache, but mark it as Least-Recently-Used, so that it will displace as little useful data as possible. In the case of the STREAM benchmark, the usual priority is the question of allocation — we want to avoid reading the data before over-writing it.
While I am not quite apologizing for that, it is clear that STREAM over-represents stores in general, and store misses in particular, relative to the high-bandwidth applications that I intended it to represent.
To understand the benefit of non-temporal stores, you need to understand if you are operating in (A) a concurrency-limited regime or (B) a bandwidth-limited regime. (This is not a short story, but you can learn a lot about how real systems work from studying it.)
(Note: The performance numbers below are from the original private version of this post created on 2015-05-29. Newer systems require more concurrency — numbers will be updated when I have a few minutes to dig them up….)
(A) For a single thread you are almost always working in a concurrency-limited regime. For example, my Intel Xeon E5-2680 (Sandy Bridge EP) dual-socket systems have an idle memory latency to local memory of 79 ns and a peak DRAM bandwidth of 51.2 GB/s (per socket). If we assume that some cache miss handling buffer must be occupied for approximately one latency per transaction, then queuing theory dictates that you must maintain 79 ns * 51.2 GB/s = 4045 Bytes “in flight” at all times to “fill the memory pipeline” or “tolerate the latency”. This rounds up to 64 cache lines in flight, while a single core only supports 10 concurrent L1 Data Cache misses. In the absence of other mechanisms to move data, this limits a single thread to a read bandwidth of 10 lines * 64 Bytes/line / 79 ns = 8.1 GB/s.
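The arithmetic in the preceding paragraph is just Little’s Law; for reference:

```c
#include <stdio.h>

/* Little's Law arithmetic from the paragraph above: bytes in flight =
 * latency * bandwidth, and the single-thread bandwidth bound =
 * (miss buffers * line size) / latency.                              */
int main(void)
{
    double latency_ns = 79.0;        /* idle local-memory latency      */
    double peak_GBs   = 51.2;        /* per-socket DRAM bandwidth      */
    double line       = 64.0;        /* cache line size in bytes       */
    double l1_mshrs   = 10.0;        /* concurrent L1D misses per core */

    double inflight_bytes = latency_ns * peak_GBs;        /* ns * GB/s = bytes */
    double inflight_lines = inflight_bytes / line;
    double one_thread_GBs = l1_mshrs * line / latency_ns; /* bytes/ns = GB/s   */

    printf("bytes in flight: %.0f (%.1f lines)\n", inflight_bytes, inflight_lines);
    printf("1-thread bound:  %.1f GB/s\n", one_thread_GBs);
    return 0;
}
```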
Store misses are essentially the same as load misses in this analysis.
L1 hardware prefetchers don’t help performance here because they share the same 10 L1 cache miss buffers. L2 hardware prefetchers do help because bringing the data closer reduces the occupancy required by each cache miss transaction. Unfortunately, L2 hardware prefetchers are not directly controllable, so experimentation is required. The best read bandwidth that I have been able to achieve on this system is about 17.8 GB/s, which corresponds to an “effective concurrency” of about 22 cache lines or an “effective latency” of about 36 ns (for each of the 10 concurrent L1 misses).
L2 hardware prefetchers on this system are also able to perform prefetches for store miss streams, thus reducing the occupancy for store misses and increasing the store miss bandwidth.
For non-temporal stores there is no concept corresponding to “prefetch”, so you are stuck with whatever buffer occupancy the hardware gives you. Note that since the non-temporal store does not bring data *to* the processor, there is no reason for its occupancy to be tied to the memory latency. One would expect a cache miss buffer holding a non-temporal store to have a significantly shorter occupancy than one handling a load miss, since it is allocated, filled, and then transfers its contents out to the memory controller (which presumably has many more buffers than the 10 cache miss buffers that the core supports).
But, while non-temporal stores are expected to have a shorter buffer occupancy than that of a cache miss that goes all the way to memory, it is not at all clear whether they will have a shorter buffer occupancy than a store miss that activates an L2 hardware prefetch engine. It turns out that on the Xeon E5-2680 (Sandy Bridge EP), non-temporal stores have *higher* occupancy than store misses that activate the L2 hardware prefetcher, so using non-temporal stores slows down the performance of each of the four STREAM kernels. I don’t have all the detailed numbers in front of me, but IIRC, STREAM Triad runs at about 10 GB/s with one thread on a Xeon E5-2680 when using non-temporal stores and at between 12-14 GB/s when *not* using non-temporal stores.
This result is not a general rule. On the Intel Xeon E3-1270 (also a Sandy Bridge core, but with the “client” L3 and memory controller), the occupancy of non-temporal stores in the L1 cache miss buffers appears to be much shorter, so there is not an occupancy-induced slowdown. On the older AMD processors (K8 and Family 10h), non-temporal stores used a set of four “write-combining registers” that were independent of the eight buffers used for L1 data cache misses. In this case the use of non-temporal stores allowed one thread to have more concurrent memory transactions, so it almost always helped performance.
(B) STREAM is typically run using as many cores as is necessary to achieve the maximum sustainable memory bandwidth. In this bandwidth-limited regime non-temporal stores play a very different role. When using all the cores there are no longer concurrency limits, but there is a big difference in bulk memory traffic. Store misses must read the target lines into the L1 cache before overwriting them, while non-temporal stores avoid this (semantically unnecessary) load from memory. Thus for the STREAM Copy and Scale kernels, using cached stores results in two memory reads and one memory write, while using non-temporal stores requires only one memory read and one memory write — a 3:2 ratio. Similarly, the STREAM Add and Triad kernels transfer 4:3 as much data when using cached stores compared to non-temporal stores.
On most systems the reported performance ratios for STREAM (using all processors) with and without non-temporal stores are very close to these 3:2 and 4:3 ratios. It is also typically the case that STREAM results (again using all processors) are about the same for all four kernels when non-temporal stores are used, while (due to the differing ratios of extra read traffic) the Add and Triad kernels are typically ~1/3 faster on systems that use cached stores.
The Xeon Phi x200 (Knights Landing) has a lot of modes of operation (selected at boot time), and the latency and bandwidth characteristics are slightly different for each mode.
It is also important to remember that the latency can be different for each physical address, depending on the location of the requesting core, the location of the coherence agent responsible for that address, and the location of the memory controller for that address. Intel has not publicly disclosed the mapping of core numbers (APIC IDs) to physical locations on the chip or the locations of coherence agents (CHA boxes) on the chip, nor has it disclosed the hash functions used to map physical addresses to coherence agents and to map physical addresses to MCDRAM or DDR4 memory controllers. (In some modes of operation the memory mappings are trivial, but not in all modes.)
The modes that are important are:
“Flat” vs “Cache”
In “Flat” mode, MCDRAM memory is used as directly accessible memory, occupying the upper 16 GiB of physical address space.
The OS exposes this memory as being on “NUMA node 1”, so it can be accessed using the standard NUMA control facilities (e.g., numactl).
Sustained bandwidth from MCDRAM is highest in “Flat” mode.
In “Cache” mode, MCDRAM memory is used as an L3 cache for the main DDR4 memory.
In this mode the MCDRAM is invisible and effectively uncontrollable. I will discuss the performance characteristics of Cache mode at a later date.
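For Flat mode, a code can bind its allocations to MCDRAM either externally (e.g., numactl --membind=1 ./a.out) or programmatically with libnuma; a minimal sketch (link with -lnuma):

```c
#include <numa.h>
#include <stddef.h>
#include <stdio.h>

/* In Flat mode the MCDRAM appears as NUMA node 1, so it can be
 * targeted with the standard NUMA facilities.  Release the memory
 * with numa_free(ptr, n * sizeof(double)) when done.               */
double *alloc_in_mcdram(size_t n)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma not available\n");
        return NULL;
    }
    return (double *) numa_alloc_onnode(n * sizeof(double), 1 /* MCDRAM node */);
}
```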
“All-to-All” vs “Quadrant”
In “All-to-All” mode, consecutive physical (cache-line) addresses are assigned to coherence controllers (CHA boxes) distributed across the entire chip using an undocumented hash function, and consecutive physical (cache-line) addresses are assigned to memory controllers (MCDRAM or DDR4) distributed across the entire chip.
Initial testing indicates that addresses mapped to MCDRAM are distributed across the 8 MCDRAM controllers using a simple modulo-8 function on the 3 bits above the cache line address.
In “Quadrant” mode, consecutive physical (cache-line) addresses are assigned to coherence controllers distributed across the entire chip, but each address is assigned to one of the MCDRAM controllers in the same “quadrant” as the coherence controller.
This reduces the number of “hops” required for request/response/coherence messages on the mesh, and should reduce both latency and contention.
Initial testing indicates that addresses mapped to MCDRAM are hashed across the 8 controllers using a complex hash function based on many high-order address bits.
Conjecture: This was done to allow the assignment of addresses to coherence agents to remain the same, with the “same quadrant” property enforced by changing the MCDRAM controller owning the address, rather than by changing the coherence agent owning the address.
“Sub-NUMA-Cluster”
There are several of these modes, only one of which will be discussed here.
“Sub-NUMA-Cluster 4” (SNC4) mode divides the chip into four “quadrants”, each of which acts like a NUMA node in a multi-socket system.
“node 0” owns the 1st quarter of contiguous physical address space.
The cores belonging to “node 0” are “close to” MCDRAM controllers 0 and 1.
Initial tests indicate that consecutive cache-line addresses are mapped to MCDRAM controllers 0/1 using a simple even/odd interleave.
The physical addresses that belong to “node 0” are mapped to coherence agents that are also located “close to” MCDRAM controllers 0 and 1.
Ditto for nodes 1, 2, and 3.
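Based on the initial tests described above, the two simple interleaves can be sketched as below. These are inferences from measurements, not Intel documentation, and the SNC4 version generalizes the node-0 observation (controllers 0/1, even/odd) to the other quadrants, which is an assumption.

```c
#include <stdint.h>

/* Flat-All-to-All: MCDRAM controller appears to be a modulo-8 function
 * of the 3 bits above the cache-line offset (initial test result).     */
static inline int mcdram_ctrl_all2all(uint64_t paddr)
{
    return (int)((paddr >> 6) & 0x7);        /* cache-line address mod 8 */
}

/* SNC4: within a quadrant, consecutive cache-line addresses appear to
 * alternate between that quadrant's two MCDRAM controllers (observed
 * for node 0; assumed to generalize to quadrants 1-3).                 */
static inline int mcdram_ctrl_snc4(uint64_t paddr, int quadrant)
{
    return 2 * quadrant + (int)((paddr >> 6) & 0x1);  /* even/odd interleave */
}
```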
The Knights Landing system at TACC uses the Xeon Phi 7250 processor (68 cores, 1.4 GHz nominal).
My preferred latency tester provides the values in the table below for data mapped to MCDRAM memory. The values presented are averaged over many addresses, with the ranges showing the variation of average latency across cores.
Mode of Operation                 Flat-Quadrant   Flat-All2All   SNC4 local   SNC4 remote
MCDRAM maximum latency (ns)           156.1           158.3          153.6        164.7
MCDRAM average latency (ns)           154.0           155.9          150.5        156.8
MCDRAM minimum latency (ns)           152.3           154.4          148.3        150.3
MCDRAM standard deviation (ns)          1.0             1.0            0.9          3.1
Caveats:
My latency tester uses permutations of even-numbered cache lines in various sized address range blocks, so it is not guaranteed that my averages are uniformly distributed over all the coherence agents.
Variability across nodes is not entirely negligible, in part because different nodes have different patterns of disabled tiles.
E.g., Four of the 38 tiles are disabled on each Xeon Phi 7250 processor.
Run-to-run variability is typically small (1-2 ns) when using large pages, but there are certain idiosyncrasies that have yet to be explained.
Note that even though the average latency differences are quite small across these modes of operation, the sustained bandwidth differences are much larger. The decreased number of “hops” required for coherence transactions in “Quadrant” and “SNC-4” modes reduces contention on the mesh links and thereby allows higher sustained bandwidths. The difference between sustained bandwidth in Flat-All-to-All and Flat-Quadrant modes suggests that contention on the non-data mesh links (address, acknowledge, and invalidate) is more important than contention on the data transfer links (which should be the same for those two modes of operation). I will post more details to my blog as they become available….
The corresponding data for addresses mapped to DDR4 memory are included in the table below:
Mode of Operation                 Flat-Quadrant   Flat-All2All   SNC4 local   SNC4 remote
DDR4 maximum latency (ns)             133.3           136.8          130.0        141.5
DDR4 average latency (ns)             130.4           131.8          128.2        133.1
DDR4 minimum latency (ns)             128.2           128.5          125.4        126.5
DDR4 standard deviation (ns)            1.2             2.4            1.1          3.1
There is negligible sustained bandwidth variability across modes for data in DDR4 memory because the DDR4 memory runs out of bandwidth long before the mesh runs out of bandwidth.
In a recent Intel Software Developer Forum discussion (https://software.intel.com/en-us/forums/intel-moderncode-for-parallel-architectures/topic/700477), I put together a few notes on the steps required for a single-producer, single-consumer communication using separate cache lines for “data” and “flag” values.
Although this was not a carefully-considered formal analysis, I think it is worth re-posting here as a reminder of the ridiculous complexity of implementing even the simplest communication operations in shared memory on cached systems.
I usually implement a producer/consumer code using “data” and “flag” in separate cache lines. This enables the consumer to spin on the flag while the producer updates the data. When the data is ready, the producer writes the flag variable. At a low level, the steps are:
The producer executes a store instruction, which misses in its L1 Data Cache and L2 cache, thus generating an RFO transaction on the ring.
The data for the store is held in a store buffer at least until the producer core has write permission on the cache line in its own L1 Data Cache.
The data for the store may be held in the store buffer for longer periods, awaiting other potential stores to the same cache line.
The RFO traverses the ring to find the L3 slice that owns the physical address being used, and the RFO hits in the L3.
The L3 has the data, but also sees that it is marked as being in a clean state in one or more processor private caches, so it issues an invalidate on the cache line containing the flag variable.
The L3 may or may not have a way of tracking whether the cache line containing the flag variable is shared in another chip.
The L3 may have directories that track which cores may have a copy of the cache line containing the flag variable. If there are directories, the invalidates may be targeted at the specific caches that may hold the cache line, rather than being broadcast to all cores.
The L3 may send the data for the cache line containing the flag to the producer’s cache at this time, but that data cannot be used until the coherence transactions are complete.
The consumer receives the invalidate, invalidates its copy of the flag line, and responds to the L3 that the line has been invalidated.
Immediately after responding to the L3 that the line has been invalidated, the spin loop in the consumer tries to load the flag variable again.
The L3 receives the invalidation acknowledgements from all the cores that may have had a shared copy of the line, and notifies the producer core that it now has write permission on the cache line containing the flag data.
If you are lucky, the producer core will write the flag variable from the store buffer to the L1 Data Cache immediately on receipt of permission.
If you are unlucky, the producing core may lose the line containing the flag variable before the store buffer is written to the L1 Data Cache. I don’t know if Intel processors can do this, but I know some processors that can lose the line before dumping the store buffer.
Very shortly after sending the write permission notification to the producer core, the L3 will receive a read request from the consumer core for the same cache line containing the flag.
Depending on the implementation, several different things might happen.
One option is for the L3 to hold the line containing the flag variable in a “transient” state while it waits for an acknowledgement from the Producer core that it has received the write permission message. In this case the L3 will either:
Stall the read request from the consumer core, or
NACK the read request from the consumer core (i.e., tell it to try again).
Another option is for the L3 to immediately process the read request and send an intervention/downgrade request for the cache line containing the flag variable to the producer core’s cache.
In the “lucky” case, the intervention/downgrade request generated by the read from the consumer core will get the new value of the cache line containing the flag variable and return it to the consumer core and to the L3 slice that owns that physical address.
Various implementations have specific ordering requirements here that determine whether the cache line must be sent to the L3 first, then to the consumer core, or whether it can be sent to both at the same time.
Some implementations require an extra handshaking step after the consumer core receives the data, before the L3 will give it permission to use the data. (This is more common in the case of a store than a read.)
Finally the consumer core gets the new value of the flag variable and sees that it has changed! The data is now ready!
The spin loop on the consumer core now exits, which incurs a 20-cycle mispredicted branch delay.
The consumer core now executes a load instruction to get the data. This misses in the consumer’s L1 and L2 caches and generates a read request on the ring.
The read request traverses the ring to the slice that owns the physical address of the cache line containing the data (which may be a different slice than the one controlling the cache line containing the flag), and the read request hits in the L3.
The data in the L3 is stale, but the L3 knows exactly which core has write permission on the cache line containing the data, so it issues an intervention/downgrade on the cache line containing the data and targeting the cache of the producer core.
The cache(s) of the producer core receive the intervention/downgrade request and return the new value of the cache line containing the data variable to the L3, simultaneously downgrading the permissions on the line so that it is now “read-only” in the producer’s caches.
As was the case for the cache line containing the flag variable, the cache line containing the data variable makes its way to both the L3 slice that owns the physical address and the consumer core that requested the data.
The cache line containing the data arrives in the consumer core’s cache, and the load instruction is allowed to complete.
Once the consumer core has gotten the data safely into a register, it typically has to re-write the flag variable to let the producer core know that the value has been consumed and that the producer core is free to write to the cache line containing the data variable again.
This requires the consumer to make an “upgrade” request on the cache line containing the flag, so it can write to it. This is similar to the sequence above, but since the consumer already has the data, it does not need the L3 to send it again — it only needs to wait until all other cores have acknowledged the invalidate before it can write to the flag line.
Double-buffering can be used to avoid this extra transaction — if the consumer uses a different set of addresses to send data back to the producer, then the fact that the producer has received another message from the consumer means that the consumer must have finished using the original data buffer, so it is safe for the producer to use it again.
There are many variants and many details that can be non-intuitive in an actual implementation. These often involve extra round trips required to ensure ordering in ugly corner cases. A common example is maintaining global ordering across stores that are handled by different coherence controllers. This can be different L3 slices (and/or Home Agents) in a single package, or the more difficult case of stores that alternate across independent packages.
There are fewer steps in the case where the “data” and “flag” are in the same cache line, but extra care needs to be taken in that case because it is easier for the polling activity of the consumer to take the cache line away from the producer before it has finished doing the updates to the data part of the cache line. This can result in more performance variability and reduced total performance, especially in cases with multiple producers and multiple consumers (with locks, etc.).
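For reference, here is a minimal C11 sketch of the separate-line “data”/“flag” pattern whose coherence traffic is walked through above. The names are illustrative; a real implementation would add back-off in the spin loop and, as noted above, double-buffering to avoid the final flag hand-back.

```c
#include <stdatomic.h>
#include <stdalign.h>

/* "data" and "flag" in separate cache lines: the consumer spins on the
 * flag line while the producer updates the data line, then the producer
 * publishes the flag.  The release/acquire pair provides the ordering. */
struct channel {
    alignas(64) double data[8];        /* one cache line of payload   */
    alignas(64) atomic_int flag;       /* separate line for the flag  */
};

void producer(struct channel *ch, const double *src)
{
    for (int i = 0; i < 8; i++)
        ch->data[i] = src[i];                          /* update the data line  */
    atomic_store_explicit(&ch->flag, 1,
                          memory_order_release);       /* then publish the flag */
}

void consumer(struct channel *ch, double *dst)
{
    while (atomic_load_explicit(&ch->flag, memory_order_acquire) == 0)
        ;                                              /* spin on the flag line */
    for (int i = 0; i < 8; i++)
        dst[i] = ch->data[i];                          /* read the data line    */
    atomic_store_explicit(&ch->flag, 0,
                          memory_order_release);       /* hand the buffer back  */
}
```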