(These are old notes for relatively old systems — I just found this in my “drafts” folder and decided to switch the status to “public” so I can find it again!)
Some notes on how to determine the DRAM and Memory Controller configuration for a system using AMD Opteron/Phenom or other Family 10h processors. All of this information is available in AMD’s publication: “BIOS and Kernel Developer’s Guide (BKDG) For AMD Family 10h Processors”, document number 31116. I am using “Rev 3.48 – April 22, 2010”
Background: How to Read and Interpret the PCI Configuration Bits
The processor configuration bits are available in PCI configuration space and can be read with the “lspci” program. Unfortunately it requires a relatively new kernel (2.6.26 or newer) to read the extended configuration bits (i.e., those at offsets of 256 Bytes or more) — I will try to mark the problematic configuration bits as I go along. To get the configuration bits, run “lspci -xxxx” (as root) and save the text output.
The “lspci” program prints out the output by bus, device, function, and offset. Here we are only interested in the processor configurations, so we look through the output until we get to a line that looks like:
00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
The initial characters of the line are interpreted as “bus 0”, “device 18”, “function 0”, followed by a text label for this PCI configuration space region.
The following lines will look like:
00: 22 10 00 12 00 00 10 00 00 00 00 06 00 00 80 00
10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
30: 00 00 00 00 80 00 00 00 00 00 00 00 00 00 00 00
These are hexadecimal dumps of the values at various “offsets” into the PCI configuration space.
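If you would rather script the decoding than read the dump by hand, these hex lines are easy to parse. Here is a minimal Python sketch (my own helper, not anything from the BKDG or pciutils) that collects the bytes for one bus:device.function into a dictionary keyed by offset:

```python
import re

def parse_lspci_dump(text, device="00:18.0"):
    """Collect the 'lspci -xxxx' hex dump for one bus:device.function
    into a dict mapping byte offset -> byte value."""
    config = {}
    in_device = False
    for line in text.splitlines():
        # Device header lines look like "00:18.0 Host bridge: ..."
        if re.match(r"^[0-9a-f]{2}:[0-9a-f]{2}\.[0-7] ", line):
            in_device = line.startswith(device + " ")
            continue
        # Hex dump lines look like "80: 55 00 00 00 ..."
        m = re.match(r"^([0-9a-f]+):((?: [0-9a-f]{2})+)\s*$", line)
        if in_device and m:
            base = int(m.group(1), 16)
            for i, byte in enumerate(m.group(2).split()):
                config[base + i] = int(byte, 16)
    return config
```

Usage would be something like `parse_lspci_dump(open("lspci.txt").read(), "00:18.2")` to pull out the bytes for memory controller Function 2.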
The values are organized with the higher addresses and most significant bits to the right (except that within each 2-digit hexadecimal number the least significant bits are to the right!). These PCI configuration space values are organized into 32-bit “registers”, so the first line above corresponds to
| Offset | bits 7:0 | bits 15:8 | bits 23:16 | bits 31:24 |
|---|---|---|---|---|
| 00 | 22 | 10 | 00 | 12 |
| 04 | 00 | 00 | 10 | 00 |
| 08 | 00 | 00 | 00 | 06 |
| 0C | 00 | 00 | 80 | 00 |
This first line of output corresponds to four 32-bit PCI configuration space registers, shown as offsets 00, 04, 08, and 0C in the table. The BKDG describes these as:
F0x00 Device/Vendor ID Register
Reset: 1200 1022h. <-- note the “little-endian” ordering
F0x04 Status/Command Register
Reset: 0010 0000h. Bit[20] is set to indicate the existence of a PCI-defined capability block.
F0x08 Class Code/Revision ID Register
Reset: 0600 0000h.
F0x0C Header Type Register
Reset: 0080 0000h.
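To turn those bytes into the 32-bit register values the BKDG documents, assemble them in little-endian order. A short sketch (reusing the hypothetical parse_lspci_dump helper above), with a check against the F0x00 reset value:

```python
def read_reg32(config, offset):
    """Assemble a 32-bit little-endian register from four config-space bytes."""
    return (config[offset]
            | (config[offset + 1] << 8)
            | (config[offset + 2] << 16)
            | (config[offset + 3] << 24))

# The bytes "22 10 00 12" at offset 00 assemble to 0x12001022, matching the
# F0x00 Device/Vendor ID reset value of 1200 1022h
# (vendor 1022h = AMD, device 1200h = Family 10h HT configuration).
example = {0x00: 0x22, 0x01: 0x10, 0x02: 0x00, 0x03: 0x12}
assert read_reg32(example, 0x00) == 0x12001022
```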
Important Memory Controller and DRAM Configuration Bits
Information About Installed Hardware
The first things to look at are the properties of the installed hardware. The DIMM configuration information is contained in the F2x[1,0]80 DRAM Bank Address Mapping Register. The Family 10h Opterons have two memory controllers — the information for controller 0 is located at Function 2, Offset 080, while the information for controller 1 is located at Function 2, Offset 180. Note that these offsets are specified in hexadecimal, so offsets of 100h (= 256 decimal) or greater are located in the extended PCI configuration area and will not be included in the output of “lspci” on systems running 2.6.25 or earlier Linux kernels. For this configuration register the inability to read the controller 1 values is not likely to be a problem — if the DIMMs have been properly installed in matching pairs, then the two memory controllers will be configured identically by the BIOS.
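As an alternative to scraping “lspci” output, the same registers can usually be read directly (as root) through sysfs. This is only a sketch of the idea — on older kernels the sysfs config file exposes just the first 256 bytes, which is the same limitation mentioned above, so the controller 1 copy at offset 180h may still be unreadable:

```python
import struct

def read_pci_reg32(bdf, offset):
    """Read one 32-bit register from PCI configuration space via sysfs.
    Needs root; on older kernels the config file is only 256 bytes long,
    so extended offsets such as 0x180 will come back short."""
    with open("/sys/bus/pci/devices/0000:%s/config" % bdf, "rb") as f:
        f.seek(offset)
        data = f.read(4)
    if len(data) < 4:
        raise IOError("offset 0x%x not readable on this kernel" % offset)
    return struct.unpack("<I", data)[0]

# DRAM Bank Address Mapping: controller 0 at F2x080, controller 1 at F2x180.
dct0_map = read_pci_reg32("00:18.2", 0x080)
```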
For the system I am looking at today, the output of lspci for Function 2, offset 80 is
80: 55 00 00 00
Comparing the description of the bit field mappings from the BKDG (page 238) with my data for this configuration register and the device information in Table 85 (page 239) of the BKDG gives:
| Bits | Field Name | Value | Meaning |
|---|---|---|---|
| 31:16 | Reserved | 0000h | (it is nice to see that these reserved bits are actually zero) |
| 15:12 | Dimm3AddrMap | 0h | ignored because this DIMM is not populated |
| 11:8 | Dimm2AddrMap | 0h | ignored because this DIMM is not populated |
| 7:4 | Dimm1AddrMap | 5h = 0101b | CS size = 1 GB, Device size/width = 1G, x8, Bank Address bits = 15,14,13 |
| 3:0 | Dimm0AddrMap | 5h = 0101b | CS size = 1 GB, Device size/width = 1G, x8, Bank Address bits = 15,14,13 |
These interpretations make sense. The DIMMs are composed of 1 Gbit DRAM chips, each with 8 output bits (“x8”). To create a 64-bit DIMM, 8 of these DRAM chips work together as a single “rank” (actually there are 9 chips in a rank, to provide extra bits for ECC error correction). This “rank” will then have a capacity of 1 Gbit/chip * 8 chips = 1 GiB. (Here I am using the newer notation to distinguish between binary and decimal sizes — see http://en.wikipedia.org/wiki/Gibibyte for more information.)
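To make the nibble decoding concrete, here is a small sketch that splits the register value 0000 0055h read above into its four DimmNAddrMap fields. The value-to-geometry lookup contains only the entry observed on this system; the full mapping is Table 85 of the BKDG:

```python
def decode_bank_addr_map(reg):
    """Split F2x[1,0]80 (DRAM Bank Address Mapping) into its four
    4-bit DimmNAddrMap fields (bits 15:0; bits 31:16 are reserved)."""
    return {n: (reg >> (4 * n)) & 0xF for n in range(4)}

# Partial excerpt of BKDG Table 85 -- only the value seen on this system.
ADDR_MAP_MEANING = {
    0x5: "CS size 1 GB, 1 Gb x8 parts, bank address bits 15,14,13",
}

for dimm, value in sorted(decode_bank_addr_map(0x00000055).items()):
    meaning = ADDR_MAP_MEANING.get(value, "see BKDG Table 85")
    print("Dimm%dAddrMap = %Xh : %s" % (dimm, value, meaning))
```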
A few things to note:
- The bits in this configuration register don’t tell whether or not a DIMM is installed. The value of ‘0’ for Dimm2AddrMap and Dimm3AddrMap could correspond to a 128 MiB chip select size composed of 256 Mb parts with x16 width. It is quite unlikely that anyone will run across such a DIMM in their system — 256 Mbit parts are old technology and x16 width is quite unusual outside of the embedded processor space — but there may be no guarantee that the BIOS will set the bits here to such an easily identified value in the event that no DIMM is installed in that slot, so you do need to look elsewhere to be sure of the configuration.
- The bits in this configuration register don’t tell how many “ranks” are included on each DIMM. A DIMM can be constructed with 1, 2, or 4 ranks (though 1 and 2 are by far the most common), so you need to check elsewhere to find the number of ranks.
- In any system using both DRAM channels (either ganged or unganged-but-interleaved — see below) the address bit numbers in the table above must be incremented by one. The BKDG includes a comment at the bottom of page 238 that the address bits only need to be incremented when running in ganged mode. I think this is incorrect — the “effective” bank size is doubled (corresponding to incrementing the bank address bits) by the use of two DRAM channels, whether the interleaving is within a cache line (as in ganged mode) or between cache lines (as in unganged-but-interleaved mode).
Information About Configuration of Hardware
Memory Controller Channel Interleave or Ganging
There are a number of common configuration choices that determine how the hardware makes use of the two 64-bit DRAM channels.
- One feature is called “DRAM controller ganging”, which sets up the two DRAM channels to work in lockstep, with each cache line being split between the two channels. This feature is typically activated when the strongest error-correction features are desired. The downside of this approach is that each memory controller transfers only 32 Bytes per cache line, which corresponds to four 8-Byte bursts. This is the shortest burst length supported by DDR2 memory and makes the DRAM bus overheads relatively larger. DRAM controller ganging is enabled if F2x110 DRAM Controller Select Low Register Bit 4: DctGangEn: “DRAM controller ganging enable”, is set.
- If the memory controllers are not ganged, the BIOS will attempt to set up “DRAM channel interleaving”. In this mode, each channel transfers full cache lines, and cache lines are distributed evenly between the two channels.
DRAM channel interleaving is enabled if F2x110 DRAM Controller Select Low Register Bit 5: DctDatIntLv: “DRAM controller data interleave enable” is set. There are a number of options controlling exactly how the cache lines are mapped to the two DRAM channels. These options are controlled by F2x110 DRAM Controller Select Low Register Bits 7:6 DctSelIntLvAddr: “DRAM controller select channel interleave address bit”. The recommended option is the value “10”, which causes the DRAM channel select bit to be computed as the Exclusive-OR of address bits 20:16 and bit 6. When the result is “1”, channel 1 is selected; otherwise channel 0 is selected. Bit 6 is the lowest address bit above the 64-Byte cache line offset, so using bit 6 alone would cause cache lines to be mapped odd/even to the two DRAM channels. By computing the channel select bit as the Exclusive-OR of six address bits, it is much less likely that an access pattern will repeatedly access only one of the two DRAM channels.
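Putting the F2x110 fields together, here is a sketch that decodes the three bit fields discussed above and computes the channel select bit for the recommended DctSelIntLvAddr = 10b setting (the field names follow the BKDG; the helper functions are mine):

```python
def decode_dct_select_low(reg):
    """Pull out the F2x110 (DRAM Controller Select Low) fields discussed above."""
    return {
        "DctGangEn":       (reg >> 4) & 0x1,  # bit 4: channels ganged (lockstep)
        "DctDatIntLv":     (reg >> 5) & 0x1,  # bit 5: channel interleave enabled
        "DctSelIntLvAddr": (reg >> 6) & 0x3,  # bits 7:6: interleave address select
    }

def channel_select(addr):
    """Channel select bit for DctSelIntLvAddr = 10b: the Exclusive-OR of
    physical address bits 20:16 and bit 6.  A result of 1 selects channel 1."""
    sel = (addr >> 6) & 1
    for bit in range(16, 21):
        sel ^= (addr >> bit) & 1
    return sel

# Consecutive 64-Byte cache lines alternate channels while bits 20:16 stay
# constant, but the extra XOR terms perturb the mapping for larger strides.
print([channel_select(line * 64) for line in range(8)])
```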