DDR4 SDRAM
Double Data Rate 4 Synchronous Dynamic Random-Access Memory (DDR4 SDRAM) is a type of synchronous dynamic random-access memory with a high bandwidth ("double data rate") interface.
Released to the market in 2014, it is a variant of dynamic random-access memory (DRAM), some of which have been in use since the early 1970s, and a higher-speed successor to the DDR2 and DDR3 technologies.
DDR4 is not compatible with any earlier type of random-access memory (RAM) due to different signaling voltage and physical interface, among other factors.
DDR4 SDRAM was released to the public market in Q2 2014, focusing on ECC memory, while the non-ECC DDR4 modules became available in Q3 2014, accompanying the launch of Haswell-E processors that require DDR4 memory.
Features
The primary advantages of DDR4 over its predecessor, DDR3, include higher module density and lower voltage requirements, coupled with higher data rate transfer speeds. The DDR4 standard allows for DIMMs of up to 64 GB in capacity, compared to DDR3's maximum of 16 GB per DIMM.
Unlike previous generations of DDR memory, prefetch has not been increased above the 8n used in DDR3; the basic burst size is eight 64-bit words, and higher bandwidths are achieved by sending more read/write commands per second. To allow this, the standard divides the DRAM banks into two or four selectable bank groups, where transfers to different bank groups may be done more rapidly.
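As an informal illustration of why bank grouping helps (this model and its cycle counts are assumptions for illustration, not values taken from the JEDEC specification), consecutive column commands to the same bank group must be spaced further apart than commands that alternate between groups:

```python
# Illustrative sketch: back-to-back READ/WRITE commands to the *same* bank group
# must be separated by the long delay tCCD_L, while commands to *different*
# bank groups only need the short delay tCCD_S. The cycle counts below are
# assumed placeholder values, not figures from the JEDEC specification.

TCCD_S = 4  # assumed short column-to-column delay, in clock cycles
TCCD_L = 6  # assumed long column-to-column delay, in clock cycles

def column_command_gap(prev_bank_group: int, next_bank_group: int) -> int:
    """Minimum clock cycles between two consecutive column commands."""
    return TCCD_L if prev_bank_group == next_bank_group else TCCD_S

# Hammering a single bank group vs. alternating between two groups:
same_group = sum(column_command_gap(0, 0) for _ in range(8))                 # 48 cycles
alternating = sum(column_command_gap(i % 2, (i + 1) % 2) for i in range(8))  # 32 cycles
print(same_group, alternating)
```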
Because power consumption increases with speed, the reduced voltage allows higher speed operation without unreasonable power and cooling requirements.
DDR4 RAM operates at a voltage of 1.2 V and supports I/O clock frequencies between 800 and 1600 MHz (DDR4-1600 through DDR4-3200). Compared to DDR3, which operates at 1.5 V with frequencies from 400 to 1067 MHz (DDR3-800 through DDR3-2133), DDR4 offers better performance and energy efficiency. DDR4 speeds are advertised as double the base clock rate due to its Double Data Rate (DDR) nature, with common speeds including DDR4-2400 and DDR4-3200, and higher speeds like DDR4-4266 and DDR4-5000 available at a premium. Unlike DDR3, DDR4 does not have a low voltage variant; it consistently operates at 1.2 V. Additionally, DDR4 supports larger memory capacities and, through its bank-group architecture, sustains higher effective bandwidth, enhancing both performance and system flexibility.
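As a small worked example of the naming convention (the helper function below is purely illustrative), the advertised DDR4-xxxx figure is simply twice the I/O clock frequency, because data is transferred on both clock edges:

```python
# Minimal sketch of "double data rate" naming: the DDR4-xxxx figure is twice
# the I/O clock, since one transfer occurs on each clock edge.

def ddr4_label(io_clock_mhz: int) -> str:
    return f"DDR4-{io_clock_mhz * 2}"

print(ddr4_label(1200))  # DDR4-2400
print(ddr4_label(1600))  # DDR4-3200
```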
Timeline
2005: Standards body JEDEC began working on a successor to DDR3 around 2005, about 2 years before the launch of DDR3 in 2007. The high-level architecture of DDR4 was planned for completion in 2008.
2007: Some advance information was published in 2007, and a guest speaker from Qimonda provided further public details in a presentation at the August 2008 San Francisco Intel Developer Forum (IDF). DDR4 was described as involving a 30 nm process at 1.2 volts, with bus frequencies of 2133 MT/s "regular" speed and 3200 MT/s "enthusiast" speed, and reaching market in 2012, before transitioning to 1 volt in 2013.
2009: In February, Samsung validated 40 nm DRAM chips, considered a "significant step" towards DDR4 development since in 2009, DRAM chips were only beginning to migrate to a 50 nm process.
2010: Subsequently, further details were revealed at MemCon 2010, Tokyo (a computer memory industry event), where a presentation by a JEDEC director titled "Time to rethink DDR4", with a slide titled "New roadmap: More realistic roadmap is 2015", led some websites to report that the introduction of DDR4 was probably or definitely delayed until 2015. However, DDR4 test samples were announced in line with the original schedule in early 2011, at which time manufacturers began to advise that large-scale commercial production and release to market were scheduled for 2012.
2011: In January, Samsung announced the completion and release for testing of a 2 GB DDR4 DRAM module based on a process between 30 and 39 nm. It has a maximum data transfer rate of 2133 MT/s at 1.2 V, uses pseudo open drain technology (adapted from graphics DDR memory) and draws 40% less power than an equivalent DDR3 module. In April, Hynix announced the production of 2 GB DDR4 modules at 2400 MT/s, also running at 1.2 V on a process between 30 and 39 nm (exact process unspecified), adding that it anticipated commencing high volume production in the second half of 2012. Semiconductor processes for DDR4 were expected to transition to sub-30 nm at some point between late 2012 and 2014.
2012: In May, Micron announced it was aiming at starting production in late 2012 of 30 nm modules. In July, Samsung announced that it would begin sampling the industry's first 16 GB registered dual inline memory modules (RDIMMs) using DDR4 SDRAM for enterprise server systems. In September, JEDEC released the final specification of DDR4.
2013: DDR4 was expected to represent 5% of the DRAM market in 2013, and to reach mass market adoption and 50% market penetration around 2015; as of 2013, however, adoption of DDR4 had been delayed and it was no longer expected to reach a majority of the market until 2016 or later. The transition from DDR3 to DDR4 was thus expected to take longer than the approximately five years DDR3 needed to overtake DDR2 in the mass market. In part, this is because moving to DDR4 requires changes to other components, so the rest of the computer system would also need to be updated to work with the new memory.
2014: In April, Hynix announced that it had developed the world's first 128 GB module, then the highest density available, based on 8 Gbit DDR4 chips built with 20 nm technology. The module works at 2133 MT/s with a 64-bit I/O and processes up to 17 GB of data per second.
2016: In April, Samsung announced that they had begun to mass-produce DRAM on a "10 nm-class" process, by which they mean the 1x nm node regime of 16 nm to 19 nm, which supports a 30% faster data transfer rate of 3,200 Mbit/s. Previously, a size of 20 nm was used.
2020: DDR5 RAM was formally introduced by the JEDEC Solid State Technology Association in July 2020 as the successor to DDR4. The DDR5 standard builds on the advancements of DDR4 with notable improvements in bandwidth, efficiency, and capacity, offering a base data rate of 4800 MT/s and supporting higher speeds as the technology matures. DDR5 also features enhanced power management, increased burst length, and improved prefetch capabilities, making it suitable for applications ranging from high-performance gaming to data-intensive computing.
= Market perception and adoption =
In April 2013, a news writer at International Data Group (IDG) – an American technology research business originally part of IDC – published an analysis of market perceptions of DDR4 SDRAM. The conclusions were that the increasing popularity of mobile computing and other devices using slower but low-powered memory, the slowing of growth in the traditional desktop computing sector, and the consolidation of the memory manufacturing marketplace, meant that margins on RAM were tight.
As a result, the desired premium pricing for the new technology was harder to achieve, and capacity had shifted to other sectors. SDRAM manufacturers and chipset creators were, to an extent, "stuck between a rock and a hard place" where "nobody wants to pay a premium for DDR4 products, and manufacturers don't want to make the memory if they are not going to get a premium", according to Mike Howard from iSuppli. A switch in consumer sentiment toward desktop computing and release of processors having DDR4 support by Intel and AMD could therefore potentially lead to "aggressive" growth.
Intel's 2014 Haswell roadmap revealed the company's first use of DDR4 SDRAM, in Haswell-EP processors.
AMD's Ryzen processors, revealed in 2016 and shipped in 2017, use DDR4 SDRAM.
Operation
DDR4 RAM operates with a primary supply voltage of 1.2 V and an auxiliary 2.5 V supply (VPP) for wordline boosting. This contrasts with DDR3, which runs at 1.5 V, with a lower-voltage 1.35 V variant (DDR3L). DDR4 was introduced with a minimum transfer rate of 2133 MT/s, roughly the point at which DDR3 was reaching its practical limit, and is expected to reach up to 4266 MT/s. Notable improvements in DDR4 include increased data transfer rates and enhanced efficiency. Early DDR4 samples, such as those from Samsung in January 2011, showed a CAS latency of 13 clock cycles, a relative increase comparable to that seen in the DDR2 to DDR3 transition. Additionally, DDR4 supports higher capacities and improves signal integrity at the module level with tighter pin spacing (0.85 mm vs. 1.0 mm), slightly increased height (31.25 mm vs. 30.35 mm), and increased thickness (1.2 mm vs. 1.0 mm) for better signal routing and performance.
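For a rough sense of what a CAS latency figure means in absolute time (a back-of-the-envelope sketch; the conversion follows directly from the double-data-rate clocking described above):

```python
# One clock cycle covers two transfers, so the clock period in nanoseconds is
# 2000 / (data rate in MT/s). CAS latency in ns is then CL times that period.

def cas_latency_ns(cl_cycles: int, data_rate_mts: float) -> float:
    clock_period_ns = 2000 / data_rate_mts
    return cl_cycles * clock_period_ns

# Early DDR4-2133 samples at CL13 correspond to roughly 12.2 ns:
print(round(cas_latency_ns(13, 2133), 1))
```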
Internal banks are increased to 16 (4 bank select bits), with up to 8 ranks per DIMM.
Protocol changes include:
Parity on the command/address bus
Data bus inversion (like GDDR4); a brief sketch of the idea follows this list
CRC on the data bus
Independent programming of individual DRAMs on a DIMM, to allow better control of on-die termination.
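The data bus inversion feature can be sketched as follows. This is a simplified model of the idea rather than the exact logic of the JEDEC specification: with pseudo-open-drain I/O, driving a 0 consumes more power than a 1, so a byte that would carry more than four 0 bits is transmitted inverted with the DBI flag asserted.

```python
# Simplified data bus inversion (DBI) model: invert a byte when it would carry
# more than four 0 bits, and signal the inversion on a separate DBI flag.
# Illustrative only; the real DDR4 behaviour is defined by the JEDEC standard.

def dbi_encode(byte: int) -> tuple[int, bool]:
    zeros = 8 - bin(byte & 0xFF).count("1")
    if zeros > 4:
        return (~byte) & 0xFF, True   # send the inverted byte, assert DBI
    return byte & 0xFF, False

def dbi_decode(byte: int, dbi: bool) -> int:
    return (~byte) & 0xFF if dbi else byte

data = 0b00000110                      # six zero bits, so worth inverting
encoded, flag = dbi_encode(data)
assert dbi_decode(encoded, flag) == data
print(f"{encoded:08b}", flag)          # 11111001 True
```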
Increased memory density is anticipated, possibly using TSV ("through-silicon via") or other 3D stacking processes. The DDR4 specification will include standardized 3D stacking "from the start" according to JEDEC, with provision for up to 8 stacked dies. X-bit Labs predicted that "as a result DDR4 memory chips with very high density will become relatively inexpensive".
Switched memory banks are also an anticipated option for servers.
In 2008, the book Wafer Level 3-D ICs Process Technology highlighted concerns about the increasing die area consumption due to non-scaling analog elements like charge pumps, voltage regulators, and additional circuitry. These components, including CRC error-detection, on-die termination, burst hardware, programmable pipelines, low impedance, and a greater need for sense amplifiers (driven by reduced bits per bitline due to lower voltage), have significantly increased bandwidth but at the cost of occupying more die area. Consequently, the proportion of die allocated to the memory array itself has decreased over time: from 70–78% for SDRAM and DDR1 to 47% for DDR2, 38% for DDR3, and potentially less than 30% for DDR4.
The specification defined standards for ×4, ×8 and ×16 memory devices with capacities of 2, 4, 8 and 16 Gbit.
In addition to bandwidth and capacity variants, DDR4 modules can optionally implement:
ECC, an extra data byte lane used for correcting minor errors and detecting major errors for better reliability. Modules with ECC are identified by an additional letter in their designation; for example, PC4-19200E (also written PC4-19200 ECC) is a PC4-19200 module with ECC.
Registered (or buffered) RAM enhances signal integrity, which can improve clock rates and allow for higher physical slot capacity, by buffering signals electrically. This comes at the cost of an additional clock cycle of latency. These modules are identified by an "R" in their designation, such as PC4-19200R. Typically, modules with this designation are also ECC (Error-Correcting Code) Registered, though the 'E' for ECC may not always be included in the designation. Conversely, non-registered RAM, also known as unbuffered RAM, is identified by a "U" in the designation. e.g. PC4-19200U.
Load-reduced modules, designated by "LR", which are similar to registered/buffered memory in that LRDIMM modules buffer both control and data lines while retaining the parallel nature of all signals. As such, LRDIMM memory provides larger overall maximum memory capacities while addressing some of the performance and power consumption issues of FB-DIMM memory induced by the required conversion between serial and parallel signal forms. (A sketch parsing these module designations follows below.)
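The following sketch parses the simplified module designations used in this article; the regular expression and field names are illustrative, and real vendor part numbers vary:

```python
# Parse simplified DDR4 module designations such as PC4-19200U, PC4-19200R,
# PC4-19200E and PC4-19200LR, as described in the list above.
import re

def parse_module(designation: str) -> dict:
    m = re.fullmatch(r"PC4-(\d+)(E|R|U|LR)?", designation)
    if not m:
        raise ValueError(f"unrecognized designation: {designation}")
    peak_mb_s, suffix = int(m.group(1)), m.group(2) or ""
    return {
        "peak_transfer_MB_s": peak_mb_s,
        "data_rate_MT_s": peak_mb_s // 8,   # 64-bit (8-byte) module data bus
        "ecc": suffix == "E",
        "registered": suffix == "R",
        "load_reduced": suffix == "LR",
    }

print(parse_module("PC4-19200R"))
```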
= Command encoding =
Although it still operates in fundamentally the same way, DDR4 makes one major change to the command formats used by previous SDRAM generations. A new command signal, ACT, is low to indicate the activate (open row) command.
The activate command requires more address bits than any other (18 row address bits in a 16 Gbit part), so the standard RAS, CAS, and WE active low signals are shared with high-order address bits that are not used when ACT is high. The combination of RAS=L and CAS=WE=H that previously encoded an activate command is unused.
As in previous SDRAM encodings, A10 is used to select command variants: auto-precharge on read and write commands, and one bank vs. all banks for the precharge command. It also selects two variants of the ZQ calibration command.
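An abridged decoder conveys the idea. The signal names follow the description above, but the truth table below is simplified and omits several commands, so it should not be read as the full JEDEC encoding:

```python
# Simplified DDR4 command decode. When ACT_n is low the command is ACTIVATE and
# the RAS_n/CAS_n/WE_n pins carry the high-order row address bits A16-A14; when
# ACT_n is high the classic RAS/CAS/WE encoding applies, with A10 selecting
# auto-precharge or all-bank variants. Abridged for illustration.

def decode(act_n: int, ras_n: int, cas_n: int, we_n: int, a10: int) -> str:
    if act_n == 0:
        return "ACTIVATE (RAS_n/CAS_n/WE_n reused as row address bits A16-A14)"
    table = {
        (0, 0, 0): "MODE REGISTER SET",
        (0, 0, 1): "REFRESH",
        (0, 1, 0): "PRECHARGE ALL BANKS" if a10 else "PRECHARGE ONE BANK",
        (1, 0, 1): "READ with auto-precharge" if a10 else "READ",
        (1, 0, 0): "WRITE with auto-precharge" if a10 else "WRITE",
        (1, 1, 1): "NO OPERATION / DESELECT",
    }
    return table.get((ras_n, cas_n, we_n), "reserved/other")

print(decode(act_n=1, ras_n=1, cas_n=0, we_n=1, a10=1))  # READ with auto-precharge
print(decode(act_n=0, ras_n=0, cas_n=1, we_n=0, a10=0))  # ACTIVATE
```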
As in DDR3, A12 is used to request burst chop: truncation of an 8-transfer burst after four transfers. Although the bank is still busy and unavailable for other commands until eight transfer times have elapsed, a different bank can be accessed.
Also, the number of bank addresses has been increased greatly. There are four bank select bits to select up to 16 banks within each DRAM: two bank address bits (BA0, BA1), and two bank group bits (BG0, BG1). There are additional timing restrictions when accessing banks within the same bank group; it is faster to access a bank in a different bank group.
In addition, there are three chip select signals (C0, C1, C2), allowing up to eight stacked chips to be placed inside a single DRAM package. These effectively act as three more bank select bits, bringing the total to seven (128 possible banks).
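A small arithmetic sketch of the resulting address space (the bit packing below is illustrative, not a mandated layout):

```python
# 3 chip-select bits + 2 bank-group bits + 2 bank-address bits = 7 select bits,
# i.e. up to 128 addressable banks behind one DRAM package position.

def flat_bank_index(c: int, bg: int, ba: int) -> int:
    assert 0 <= c < 8 and 0 <= bg < 4 and 0 <= ba < 4
    return (c << 4) | (bg << 2) | ba    # illustrative bit packing

print(flat_bank_index(c=7, bg=3, ba=3))  # 127, the highest of 2**7 = 128 banks
```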
Standard transfer rates are 1600, 1866, 2133, 2400, 2666, 2933, and 3200 MT/s (12⁄15, 14⁄15, 16⁄15, 18⁄15, 20⁄15, 22⁄15, and 24⁄15 GHz clock frequencies, double data rate), with speeds up to DDR4-4800 (2400 MHz clock) commercially available.
= Design considerations =
The DDR4 team at Micron Technology identified some key points for IC and PCB design:
IC design:
VrefDQ calibration (DDR4 "requires that VrefDQ calibration be performed by the controller");
New addressing schemes ("bank grouping", ACT to replace RAS, CAS, and WE commands, PAR and Alert for error checking and DBI for data bus inversion);
New power saving features (low-power auto self-refresh, temperature-controlled refresh, fine-granularity refresh, data-bus inversion, and CMD/ADDR latency).
Circuit board design:
New power supplies (VDD/VDDQ at 1.2 V and wordline boost, known as VPP, at 2.5 V);
VrefDQ must be supplied internally by the DRAM, while VrefCA is supplied externally from the board;
DQ pins terminate high using pseudo-open-drain I/O (this differs from the CA pins in DDR3 which are center-tapped to VTT).
Rowhammer mitigation techniques include larger storage capacitors, address-line modifications that apply address space layout randomization, and dual-voltage I/O lines that further isolate potential boundary conditions which might result in instability at high write/read speeds.
Modules
= Module packaging =
DDR4 memory is supplied in 288-pin dual in-line memory modules (DIMMs), similar in size to 240-pin DDR3 DIMMs. DDR4 RAM modules feature pins that are spaced more closely at 0.85 mm compared to the 1.0 mm spacing in DDR3, allowing for a higher pin density within the same standard DIMM length of 133.35 mm (5¼ inches). The height of DDR4 modules is slightly increased to 31.25 mm (1.23 inches) from 30.35 mm (1.2 inches) to facilitate easier signal routing. Additionally, the thickness of DDR4 modules has been increased to 1.2 mm from 1.0 mm to support more signal layers, enhancing overall performance and reliability. DDR4 DIMM modules have a slightly curved edge connector so that not all of the pins are engaged at the same time during module insertion, lowering the insertion force.
DDR4 SO-DIMMs have 260 pins instead of the 204 pins of DDR3 SO-DIMMs, spaced at 0.5 rather than 0.6 mm, and are 2.0 mm wider (69.6 versus 67.6 mm), but remain the same 30 mm in height.
For its Skylake microarchitecture, Intel designed a SO-DIMM package named UniDIMM, which can be populated with either DDR3 or DDR4 chips. At the same time, the integrated memory controller (IMC) of Skylake CPUs is announced to be capable of working with either type of memory. The purpose of UniDIMMs is to help in the market transition from DDR3 to DDR4, where pricing and availability may make it undesirable to switch the RAM type. UniDIMMs have the same dimensions and number of pins as regular DDR4 SO-DIMMs, but the edge connector's notch is placed differently to avoid accidental use in incompatible DDR4 SO-DIMM sockets.
= JEDEC standard DDR4 module =
CAS latency (CL): clock cycles between sending a column address to the memory and the beginning of the data in response.
tRCD: clock cycles between row activate and reads/writes.
tRP: clock cycles between row precharge and activate.
DDR4-xxxx denotes per-bit data transfer rate, and is normally used to describe DDR chips. PC4-xxxxx denotes overall transfer rate, in megabytes per second, and applies only to modules (assembled DIMMs). Because DDR4 memory modules transfer data on a bus that is 8 bytes (64 data bits) wide, module peak transfer rate is calculated by taking transfers per second and multiplying by eight.
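A short worked example of that relationship (the helper below is illustrative; note that marketed module names sometimes round the arithmetic result, e.g. DDR4-2133 modules are commonly labelled PC4-17000):

```python
# PC4-xxxxx is the peak module transfer rate in MB/s: the per-pin data rate in
# MT/s multiplied by the 8-byte (64-bit) module data bus.

def pc4_name(data_rate_mts: int) -> str:
    return f"PC4-{data_rate_mts * 8}"

for rate in (1600, 2400, 3200):
    print(f"DDR4-{rate} -> {pc4_name(rate)}")
# DDR4-1600 -> PC4-12800, DDR4-2400 -> PC4-19200, DDR4-3200 -> PC4-25600
```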
Successor
At the 2016 Intel Developer Forum, the future of DDR5 SDRAM was discussed. At the time, the specifications were expected to be finalized by the end of 2016, but no modules were expected to be available before 2020. Other memory technologies – namely HBM in version 3 and 4 – aiming to replace DDR4 have also been proposed.
In 2011, JEDEC introduced the Wide I/O 2 standard, which features stacked memory dies placed directly on top of the CPU within the same package. This configuration provides higher bandwidth and improved power efficiency compared to DDR4 SDRAM, thanks to its wide interface and short signal lengths. Wide I/O 2 aims to replace various mobile DDRX SDRAM standards used in high-performance embedded and mobile devices like smartphones.
In parallel, Hynix developed High Bandwidth Memory (HBM), standardized as JEDEC JESD235. Both Wide I/O 2 and HBM utilize a very wide parallel memory interface—up to 512 bits for Wide I/O 2 compared to 64 bits for DDR4—although they operate at lower frequencies than DDR4. Wide I/O 2 is designed for high-performance, compact devices, often integrated into processors or system on a chip (SoC) packages. In contrast, HBM targets graphics memory and general computing, while Hybrid Memory Cube (HMC) is aimed at high-end servers and enterprise applications.
Micron Technology's Hybrid Memory Cube (HMC) stacked memory uses a serial interface. Many other computer buses have migrated towards replacing parallel buses with serial buses, for example by the evolution of Serial ATA replacing Parallel ATA, PCI Express replacing PCI, and serial ports replacing parallel ports. In general, serial buses are easier to scale up and have fewer wires/traces, making circuit boards using them easier to design.
In the longer term, experts speculate that non-volatile RAM types like PCM (phase-change memory), RRAM (resistive random-access memory), or MRAM (magnetoresistive random-access memory) could replace DDR4 SDRAM and its successors.
GDDR5 SGRAM is a DDR3-based type of synchronous graphics RAM; it was introduced before DDR4 and is not a successor to DDR4.
See also
Synchronous dynamic random-access memory – main article for DDR memory types
List of interface bit rates
Memory timings
Notes
References
External links
Main Memory: DDR3 & DDR4 SDRAM, JEDEC, DDR4 SDRAM STANDARD (JESD79-4)
DDR4 (PDF) (white paper), Corsair Components, archived from the original (PDF) on October 10, 2014.