Highlighted notes while studying Concurrent Data Structures:
DDR4 SDRAM
Source: Wikipedia
Double Data Rate 4 Synchronous Dynamic Random-Access Memory, officially abbreviated as DDR4 SDRAM, is a type of synchronous dynamic random-access memory with a high bandwidth ("double data rate") interface.
Wikipedia is a free online encyclopedia, created and edited by volunteers around the world and hosted by the Wikimedia Foundation.
Highlighted notes while studying Concurrent Data Structures:
DDR3 SDRAM
Source: Wikipedia
Double Data Rate 3 Synchronous Dynamic Random-Access Memory, officially abbreviated as DDR3 SDRAM, is a type of synchronous dynamic random-access memory (SDRAM) with a high bandwidth ("double data rate") interface, and has been in use since 2007. It is the higher-speed successor to DDR and DDR2 and predecessor to DDR4 synchronous dynamic random-access memory (SDRAM) chips. DDR3 SDRAM is neither forward nor backward compatible with any earlier type of random-access memory (RAM) because of different signaling voltages, timings, and other factors.
The document describes the specifications and operations of Double Data Rate (DDR) SDRAM memory. It details features like double data rate architecture, burst lengths, CAS latencies, commands like read, write, refresh, and initialization procedures. It provides timing diagrams for different memory operations.
DDR memory is a type of RAM that delivers higher performance than single-data-rate memory by performing two data transfers per clock cycle without doubling the clock speed. The interface consists of over 130 signals and uses mode and extended mode registers to control operation. RAM in general comes in SRAM and DRAM varieties; DRAM dominates main memory because of its higher density and lower cost per bit, though it requires periodic refreshing to prevent data loss.
SDRAMs are classified into different generations: SDR SDRAM, DDR1, DDR2, DDR3, and DDR4. SDRAM synchronizes itself with the CPU clock to allow faster memory access. DDR1 achieves higher transfer rates by double-pumping the data bus. DDR2 further increases speed through lower power usage and an internal clock running at half the external clock rate. DDR3 and DDR4 continue to improve speed and bandwidth through higher data transfer rates and lower voltage requirements. No generation is compatible with the previous ones because of changes in signaling and interfaces.
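The per-generation speedups above come down to simple arithmetic: a double-data-rate interface moves two words per I/O clock, so peak bandwidth is clock rate x 2 x bus width. A minimal sketch in Python (the speed grades chosen are common illustrative examples, not taken from the source):

```python
# Peak transfer rate and bandwidth for double-data-rate memory.
# Two transfers happen per clock cycle, so the effective rate (MT/s)
# is twice the I/O bus clock (MHz).

def peak_bandwidth_mb_s(io_clock_mhz, bus_width_bits=64):
    """Peak bandwidth in MB/s for a DDR interface with the given I/O clock."""
    transfers_per_second = io_clock_mhz * 1_000_000 * 2   # double data rate
    return transfers_per_second * bus_width_bits // 8 // 1_000_000

# Illustrative I/O clocks for one speed grade of each generation:
examples = {
    "DDR-400":   200,    # 200 MHz I/O clock -> 400 MT/s -> PC-3200
    "DDR2-800":  400,
    "DDR3-1600": 800,
    "DDR4-3200": 1600,
}
for name, clock in examples.items():
    print(f"{name}: {peak_bandwidth_mb_s(clock)} MB/s")
```

With a 64-bit module, this reproduces the familiar module names: DDR3-1600 gives 12800 MB/s, i.e. PC3-12800.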
Computer memory, also known as RAM, is temporary storage that allows the computer to perform tasks by holding instructions and data in an easily accessible location. There are two main types of computer memory: volatile and non-volatile. Volatile memory, like RAM, loses its contents when power is removed, while non-volatile types like ROM retain data without power. Over time, RAM technologies have evolved from SIMMs to DIMMs and from SDRAM to DDR, DDR2, and DDR3, with each generation offering faster speeds and higher capacities. Proper identification and installation of the correct RAM type is important for system functionality and performance.
DDR3 is an evolution of DDR2 RAM that provides faster speeds, lower power consumption, and other improvements. Key features of DDR3 include data rates up to 1600 MT/s (an 800 MHz I/O clock), a lower supply voltage of 1.5 V, an 8-bit prefetch, on-die termination for better signal quality, and fly-by topology. DDR3 also adds read/write leveling to calibrate timing, lower-swing signaling standards for reduced power and noise, and improved routing guidelines.
The IDT DDR4 RCD register and DB data buffer enable RDIMMs and LRDIMMs to reach faster speeds and deeper memory configurations. This video explains the DDR4 feature enhancements of IDT's DDR4 RCD and DB compared to earlier DDR3 technology. An introduction to some of the available LeCroy testing and debug tools completes the video. Presented by Douglas Malech, Product Marketing Manager at IDT, and Mike Micheletti, Product Manager at Teledyne LeCroy. To learn more about IDT's leading portfolio of memory interface products, visit www.idt.com/go/MIP.
Designed a fully customized 128x10b SRAM in Cadence, constructing the schematic and Virtuoso layout of the memory cell array (6T cells), row and column decoders, pre-charge circuit, write circuit, and sense amplifier. Manually placed and routed all components, performed DRC and LVS debugging of the schematic and layout, and ran PEX to generate the final netlist. Simulated the extracted design in HSPICE/Spectre to verify correct functionality and to analyze the best-case read and write cycles and the worst-case read and write timing. Timing and power consumption were analyzed with PrimeTime static timing analysis (STA).
The document introduces dual-port RAM (DPRAM), which is a single static RAM array that can be accessed by two sets of address, data, and control signals simultaneously. This increases bandwidth and offers shorter development times than alternatives. DPRAM uses an 8-transistor cell compared to 6 for regular SRAM, allowing two independent ports to read and write at the same time using different clocks. Applications include cellular base stations, routers, and video conferencing where high-speed concurrent access is needed.
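The defining property of a DPRAM (one storage array reachable through two independent address/data/control interfaces) can be modelled behaviourally; the port names and simplified access handling below are illustrative assumptions, not from the document:

```python
class DualPortRAM:
    """Behavioural model of a dual-port RAM: a single storage array
    accessed through two independent address/data/control interfaces."""

    def __init__(self, depth=256, width_bits=8):
        self.mem = [0] * depth
        self.mask = (1 << width_bits) - 1

    # Each port can read (data=None) or write, independently of the other.
    def port_a(self, addr, data=None):
        return self._access(addr, data)

    def port_b(self, addr, data=None):
        return self._access(addr, data)

    def _access(self, addr, data):
        if data is None:
            return self.mem[addr]          # read cycle
        self.mem[addr] = data & self.mask  # write cycle
        return data & self.mask

ram = DualPortRAM()
ram.port_a(0x10, 0xAB)         # port A writes a cell...
print(hex(ram.port_b(0x10)))   # ...port B reads the same cell back
```

In real silicon the two ports run on different clocks and the 8T cell resolves simultaneous access electrically; a software model like this only captures the shared-array behaviour, not the timing.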
PCI Express is a high-speed serial computer expansion bus standard that was created to replace older standards like PCI, PCI-X, and AGP. It provides dedicated bandwidth to devices through the use of lanes and is commonly used as the interface for graphics cards, hard drives, and other peripherals. PCIe has gone through several generations that have increased its maximum bandwidth. It uses a layered protocol architecture and is designed for compatibility while providing scalable bandwidth and other advantages over older standards.
The document discusses various types of computer memory technologies, including RAM types like DRAM, SRAM, DDR, DDR2, and DDR3. It explains the memory hierarchy from registers to cache to main memory to disks. Key points covered include how DRAM works using capacitors that must be periodically refreshed, advantages of SDRAM over regular DRAM like pipelining commands. Generations of DDR memory are compared in terms of clock speeds, data rates, and other features.
The document discusses a 5T SRAM cell for embedded cache memory. It begins by explaining the basic operations of memory and different types of memory like RAM and ROM. It then discusses the structure and operation of a typical 6T SRAM cell. It introduces a 5T SRAM cell that aims to reduce leakage and increase density compared to 6T cells. The document outlines the read and write operations of the 5T cell and provides results of implementing the cell showing improvements in leakage and area. It concludes by discussing potential applications and areas for future work.
High Bandwidth Memory (HBM) is a high-speed stacked memory interface used in high-performance graphics cards and supercomputers. HBM achieves higher bandwidth than GDDR5 using 3D stacking of DRAM dies and through-silicon vias. The first HBM was produced in 2013, and the technology has since progressed through HBM2, HBM2E, and upcoming HBMnext standards, doubling bandwidth with each generation. HBM is used to provide massive memory bandwidth for applications such as graphics processing and AI.
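The bandwidth advantage over GDDR5 is largely a width story: an HBM stack exposes a very wide (1024-bit) interface at a modest per-pin rate, while a GDDR5 chip drives a narrow 32-bit interface much faster. A rough comparison, using typical published per-pin rates as assumed inputs:

```python
def interface_bandwidth_gb_s(pins, gbit_per_pin):
    """Peak bandwidth in GB/s: interface width (pins) x per-pin rate (Gbit/s) / 8."""
    return pins * gbit_per_pin / 8

# One first-generation HBM stack: 1024-bit interface at 1 Gbit/s per pin.
hbm1_stack = interface_bandwidth_gb_s(1024, 1.0)
# One GDDR5 chip: 32-bit interface at a much higher 7 Gbit/s per pin.
gddr5_chip = interface_bandwidth_gb_s(32, 7.0)
print(hbm1_stack, gddr5_chip)   # GB/s per device: the wide, slow stack wins
```

The 3D stacking and through-silicon vias are what make the 1024-bit interface physically feasible; routing that many traces on a PCB would not be.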
The document discusses direct memory access (DMA) and DMA controllers. It explains that DMA allows hardware subsystems like disk drives and graphics cards to access main memory independently of the CPU. This is useful because it allows data transfers to occur in parallel with other CPU operations, improving overall system performance. A DMA controller generates memory addresses and initiates read/write cycles. It has registers that specify the I/O port, transfer direction, and number of bytes to transfer per burst. DMA controllers use different transfer modes like burst, cycle stealing, and transparent to move blocks of data efficiently between peripheral devices and memory.
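The register model described above (source, destination, byte count) lends itself to a behavioural sketch covering two of the transfer modes mentioned; the class and method names here are hypothetical, for illustration only:

```python
class DMAController:
    """Behavioural sketch of a DMA controller: software programs the
    source, destination, and transfer count, then the controller moves
    data without per-word CPU involvement."""

    def __init__(self, memory):
        self.memory = memory        # shared "main memory" (a plain list)
        self.src = self.dst = self.count = 0

    def program(self, src, dst, count):
        self.src, self.dst, self.count = src, dst, count

    def burst(self):
        # Burst mode: hold the bus and move the whole block at once.
        while self.count:
            self.cycle_steal()

    def cycle_steal(self):
        # Cycle-stealing mode: transfer one word per bus cycle,
        # releasing the bus to the CPU in between.
        if self.count:
            self.memory[self.dst] = self.memory[self.src]
            self.src += 1
            self.dst += 1
            self.count -= 1

mem = list(range(16)) + [0] * 16
dma = DMAController(mem)
dma.program(src=0, dst=16, count=16)
dma.burst()
print(mem[16:32])   # block copied from addresses 0..15 with no CPU copy loop
```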
PCIe is a standard expansion card interface introduced in 2004 to replace PCI and PCI-X. It uses serial instead of parallel communication and is scalable, allowing for higher maximum system bandwidth. The presentation discusses the history of expansion card standards leading to PCIe, including ISA, EISA, VESA, PCI, and PCI-X. It also covers key aspects of PCIe such as the root complex, endpoints, switches, lanes, bus:device.function notation, enumeration, and address spaces such as configuration space.
eMMC 5.0 is the latest generation of embedded NAND Flash IP. Arasan provides a complete solution including digital controllers for host and device, the mixed PHY I/O and pads, software drivers, hardware validation and support.
This document summarizes different types of random access memory (RAM), including static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), and double data rate SDRAM (DDR SDRAM). It describes the basic operation and characteristics of each type of RAM, such as the use of transistors and capacitors, refresh requirements, packaging, and timing. Key details covered include the differences between SRAM and DRAM, DRAM refresh requirements, DRAM and SDRAM timing diagrams, and how DDR SDRAM transfers data on both clock edges.
This document summarizes the key aspects of a DDR2 SDRAM controller, including:
1) It describes the differences between DDR1 and DDR2 memory technologies, such as lower power consumption and higher data rates in DDR2.
2) It provides a block diagram of the main components and I/O signals of a DDR2 SDRAM controller.
3) It explains the basic functionality of a DDR2 SDRAM controller, including initialization, refresh operations, and read and write operations.
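The initialization/refresh/read/write split in points 1) to 3) maps naturally onto a small state machine. The sketch below is a generic simplification with assumed state names and a toy refresh counter, not the specific controller the document describes:

```python
from enum import Enum, auto

class State(Enum):
    INIT = auto()
    IDLE = auto()
    REFRESH = auto()
    READ = auto()
    WRITE = auto()

class DDR2ControllerFSM:
    """Generic top-level state machine sketch for a DDR2 controller:
    initialize once, then arbitrate between refresh and read/write requests."""

    def __init__(self, refresh_interval=8):
        self.state = State.INIT
        self.refresh_interval = refresh_interval   # cycles between refreshes
        self.cycles_since_refresh = 0

    def step(self, request=None):
        if self.state == State.INIT:
            self.state = State.IDLE                # init sequence complete
        elif self.cycles_since_refresh >= self.refresh_interval:
            self.state = State.REFRESH             # refresh pre-empts requests
            self.cycles_since_refresh = 0
        elif request == "read":
            self.state = State.READ
        elif request == "write":
            self.state = State.WRITE
        else:
            self.state = State.IDLE
        self.cycles_since_refresh += 1
        return self.state

ctrl = DDR2ControllerFSM()
print(ctrl.step())         # initialization done, controller goes idle
print(ctrl.step("read"))   # a read request is serviced
```

A real controller's init state itself decomposes into the JEDEC-mandated sequence (precharge all, load mode registers, refresh cycles), which this sketch collapses into a single transition.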
Semiconductor memories have become essential in electronics as processors have become more common and software more sophisticated, greatly increasing the need for memory. There are several types of semiconductor memory technologies that have emerged to meet different needs, including DRAM, SRAM, SDRAM, EEPROM, flash memory, and the newer MRAM. Each type has its advantages for different applications like main memory, caches, and non-volatile storage.
The document describes a memory controller for DDR SDRAM implemented in Verilog HDL. DDR SDRAM transfers data on both the rising and falling edges of the clock, giving it twice the bandwidth of SDR SDRAM at the same clock frequency. The controller generates the timing and control signals to properly initialize and refresh the memory and to handle read and write operations. Simulation and synthesis of the controller design are done using the Xilinx ISE 14.5 software.
The AXI protocol specification describes an advanced bus architecture with burst-based transactions using separate address/control and data phases over independent channels. It supports features like out-of-order transaction completion, exclusive access for atomic operations, cache coherency, and a low power interface. The AXI protocol is commonly used in System-on-Chip designs for high performance embedded processors and peripherals.
The document discusses the key aspects of the PCIe transaction layer including:
- It defines the packet format and different transaction types for memory, I/O, configuration and messages.
- Rules are specified for TLPs with data payloads, digest rules, address-based and ID-based routing.
- Transaction descriptors contain the transaction ID, attributes and traffic class fields.
- Memory, I/O and configuration request rules and completion rules are also outlined.
Explains cache memory with a diagram and demonstrates hit ratio and miss penalty with an example. Discusses the different types of cache mapping: direct mapping, fully associative mapping, and set-associative mapping. Covers temporal and spatial locality of reference in cache memory. Explains the cache write policies, write-through and write-back, and shows the differences between a unified cache and a split cache.
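Direct mapping in particular reduces to slicing the address into tag, index, and offset fields. A minimal sketch with assumed sizes (16-byte lines, 256 lines) that also counts hits and misses:

```python
LINE_SIZE = 16      # bytes per cache line (assumed for illustration)
NUM_LINES = 256     # lines in a direct-mapped cache (assumed)

def split_address(addr):
    """Decompose an address into (tag, index, offset) for a direct-mapped cache."""
    offset = addr % LINE_SIZE
    index = (addr // LINE_SIZE) % NUM_LINES
    tag = addr // (LINE_SIZE * NUM_LINES)
    return tag, index, offset

# A tiny cache model: each entry remembers only the tag stored there.
cache = [None] * NUM_LINES
hits = misses = 0
for addr in [0x0000, 0x0004, 0x1000, 0x0008]:   # 0x0000 and 0x1000 share index 0
    tag, index, _ = split_address(addr)
    if cache[index] == tag:
        hits += 1                # spatial locality: 0x0004 hits the line 0x0000 loaded
    else:
        misses += 1
        cache[index] = tag       # on a miss, the line is fetched and replaces the old tag

print(hits, misses)
```

The trace shows both effects from the summary: the hit on 0x0004 is spatial locality at work, while the miss on 0x0008 after touching 0x1000 is a conflict miss, the classic weakness of direct mapping that set-associative mapping addresses.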
This document is a presentation about memory and storage. It begins by defining memory as temporary storage used to run programs and defining storage as long-term storage like a hard drive. It then discusses the structure of storage and memory, including primary, secondary, and tertiary levels. The main types of memory - RAM and ROM - are described. RAM is volatile and used for active programs, while ROM is non-volatile and holds startup programs. Various storage devices like optical discs, magnetic disks, and flash memory are also outlined.
GDDR4 SDRAM is a type of graphics card memory that was intended to replace GDDR3. In 2005, Samsung developed the first 256-Mbit GDDR4 chip running at 2.5 Gbit/s. GDDR4 introduced technologies like Data Bus Inversion and Multi-Preamble to reduce power consumption and improve performance. While it achieved higher speeds and bandwidth than GDDR3, GDDR4 was quickly replaced by GDDR5 within a year as manufacturers like Qimonda moved directly to the newer standard.
Highlighted notes while studying Concurrent Data Structures:
DDR SDRAM
Source: Wikipedia
Double Data Rate Synchronous Dynamic Random-Access Memory, officially abbreviated as DDR SDRAM, is a double data rate (DDR) synchronous dynamic random-access memory (SDRAM) class of memory integrated circuits used in computers. DDR SDRAM, also retroactively called DDR1 SDRAM, has been superseded by DDR2 SDRAM, DDR3 SDRAM, and DDR4 SDRAM, and soon will be superseded by DDR5 SDRAM. None of its successors are forward or backward compatible with DDR1 SDRAM, meaning DDR2, DDR3, DDR4 and DDR5 memory modules will not work in DDR1-equipped motherboards, and vice versa.
Highlighted notes while studying Concurrent Data Structures:
GDDR5 SDRAM
Source: Wikipedia
GDDR5 SDRAM, an abbreviation for Graphics Double Data Rate 5 Synchronous Dynamic Random-Access Memory, is a modern type of synchronous graphics random-access memory (SGRAM) with a high bandwidth ("double data rate") interface designed for use in graphics cards, game consoles, and high-performance computing.[1] It is a type of GDDR SDRAM (graphics DDR SDRAM).
PowerEdge Rack and Tower Server Masters - AMD Server Memory.pptx (NeoKenj)
This document provides an overview of AMD server memory options for Dell PowerEdge servers, including:
- Details on 2nd generation EPYC memory configurations and benefits like increased memory speeds and bandwidth
- Examples of memory technologies, capacity options, and population rules for configuring Dell PowerEdge rack and tower servers equipped with AMD EPYC processors
- Charts showing the memory support for different PowerEdge server models, including up to 4TB of memory support on some 2-socket models
1) DDR memory technology enables memory subsystems to transfer data at twice the rate of single-data-rate memory by transferring data on both the rising and falling edges of the clock. This improves performance but also makes design and debugging more challenging due to reduced timing margins.
2) Debugging DDR memory modules requires examining components like the PLL to ensure proper clock generation and alignment, termination resistors to optimize timing, and registers to confirm signals are latched within specifications. Tuning elements like feedback capacitors and resistors can help optimize timing.
3) Testing tools are needed to thoroughly evaluate DDR memory, including memory testers, stress tests, and equipment to measure clock signals on DIMMs independently of a system.
Design, Validation and Correlation of Characterized SODIMM Modules Supporting... (IOSR Journals)
Abstract: In any computing environment, it is necessary for the processor to have fast, accessible RAM that allows temporary storage of data. The DDR3 SODIMM module is a key component in the memory interface and is becoming increasingly important in enabling higher speeds. At the bandwidths and speeds of more than 1 GHz that DDR3 enables, it poses more and more high-speed signaling and design challenges. A characterized SODIMM module needs to be designed to understand and analyze the impact of SODIMM parameters at higher speeds and thereby define a more robust memory interface. This includes simulation, board design, validation, and correlation of results, and involves high-speed simulation and validation methodologies. Keywords: Validation, Correlation, DDR3, Characterized SODIMM, Signal Integrity
This document provides information about basic computer components and types of computers. It discusses the basic competencies required for computer operations as well as common and core competencies. It then defines what a computer is, its main parts including hardware and software, and types of computers such as laptops, desktops, tablets, and more. The rest of the document describes the basic components of a desktop computer in detail, including the monitor, keyboard, mouse, motherboard, RAM, power supply, CPU, hard disk drive, and optical drive. Memory types such as SIMMs, SDRAM, RDRAM, DDR, DDR2, DDR3, and DDR4 are also explained.
Modeling of DDR4 Memory and Advanced Verifications of DDR4 Memory Subsystem (IRJET Journal)
The document describes the modeling and verification of DDR4 memory systems. It discusses the objectives of building an accurate SystemVerilog model of a DDR4 memory that improves data rates compared to previous generations. The model is developed according to JEDEC DDR4 specifications and is verified against an existing memory controller model using simulation tools. The document provides details on DDR4 memory architecture and operation, including initialization procedures, command protocols, and verification scenarios to validate correct read and write functionality.
The document discusses 3D memory technologies that can provide alternatives to traditional scaling approaches. It proposes using a shared lithography approach where the same lithography steps are used across multiple memory layers to reduce costs. This approach is already being used successfully in 3D NAND flash memory. The document explores how resistive RAM (RRAM) could potentially be used to build a 3D cross-point memory or 3D 1T-1R memory with shared lithography steps to provide a lower-cost memory solution between DRAM and NAND flash in the memory hierarchy. Significant research is still needed to develop an RRAM-based 3D memory that can meet requirements for endurance, latency, and retention time.
This document provides an overview of the DRAM module market and discusses various module configurations for different applications. It describes the transition from DDR1 to DDR2 and upcoming shift to DDR3. Key markets discussed include personal computers, servers, networking equipment, and peripherals. For servers, it notes ongoing debate around using fully buffered DIMMs (FB-DIMMs) versus registered DIMMs (RDIMMs). New module formats like mini-RDIMMs and 72b SO-RDIMMs are presented as solutions for networking routers and other embedded applications.
This technical note discusses NAND flash memory. It covers the basics of NAND flash including its organization into blocks and pages. NAND flash has faster write and erase speeds than NOR flash, making it suitable for storage applications. The note describes NAND flash commands like read, program, erase and explains addressing. Partial page programming is also covered, allowing smaller amounts of data to be written than a full page.
In this presentation, Yasunori Goto and Qi Fuli talk about the basics of NVDIMM, the RAS issues of Non-Volatile DIMM (NVDIMM), and the features that have been built and are being developed for it.
NVDIMM has recently been expected to become a new class of device. Though a CPU can read/write an NVDIMM directly like RAM, the data of an NVDIMM persists across power-down or reboot. An in-memory database is therefore one good example use case of NVDIMM.
Since many people have made great efforts for Linux, NVDIMM drivers, filesystems, management commands, and many libraries have been well developed over the past few years.
However, Yasunori Goto found some issues with the RAS (Reliability, Availability, and Serviceability) features of NVDIMM, because the NVDIMM behaves like a mixture of storage and RAM. For example, NVDIMM does not have a hotplug feature because it is inserted in a DIMM slot like RAM, but its data must be backed up/restored like storage.
New Memory Solutions for Enterprise Computing (Intel IT Center)
The document discusses memory solutions and trends in enterprise computing. It forecasts strong growth in the total available market for DRAM through 2016. It outlines opportunities for Micron to provide memory architectures like DDR4, small form factor DIMMs for microservers, and non-volatile DIMMs that combine DRAM and NAND flash.
The document describes the memory hierarchy in computers from fastest to slowest: CPU caches (L1, L2, L3), main memory (RAM), virtual memory, and permanent storage (hard disks). L1 cache is built into the CPU and holds frequently used data for very fast access. Main memory (RAM) is where operating systems and active programs are run but is slower than cache. Virtual memory manages RAM use through disk storage. Permanent storage on disks retains data even when powered off but is the slowest to access.
Advanced and innovative features of DDR4 designs enable high-speed operation and broad applicability in a variety of products, including servers, laptops, desktop PCs, and consumer electronics. DDR4 aims at simplifying migration and enabling adoption of an industry-wide standard.
HAMR, HDMR, DuraWrite, SHIELD, and RAISE are technologies with which Seagate confronts the ever-growing volume of data in enterprises. In this webinar, you will learn directly from the manufacturer what lies behind them.
About TrueTime, Spanner, Clock synchronization, CAP theorem, Two-phase lockin... (Subhajit Sahu)
TrueTime is a service that enables the use of globally synchronized clocks, with bounded error. It returns a time interval that is guaranteed to contain the clock’s actual time for some time during the call’s execution. If two intervals do not overlap, then we know calls were definitely ordered in real time. In general, synchronized clocks can be used to avoid communication in a distributed system.
The underlying source of time is a combination of GPS receivers and atomic clocks. As there are “time masters” in every datacenter (redundantly), it is likely that both sides of a partition would continue to enjoy accurate time. Individual nodes however need network connectivity to the masters, and without it their clocks will drift. Thus, during a partition their intervals slowly grow wider over time, based on bounds on the rate of local clock drift. Operations depending on TrueTime, such as Paxos leader election or transaction commits, thus have to wait a little longer, but the operation still completes (assuming the 2PC and quorum communication are working).
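The interval reasoning above can be sketched as follows. This is an illustrative model only, not Spanner's actual API; the names `TTInterval` and `definitely_before` are mine.

```python
# Hypothetical sketch of TrueTime-style interval reasoning. TT.now() is
# modeled as an interval [earliest, latest] guaranteed to contain true time.
from collections import namedtuple

TTInterval = namedtuple("TTInterval", ["earliest", "latest"])

def definitely_before(a, b):
    """True only if interval a ends before interval b begins,
    i.e. the two events were certainly ordered in real time."""
    return a.latest < b.earliest

# Non-overlapping intervals: ordering is certain.
a = TTInterval(earliest=100.0, latest=105.0)
b = TTInterval(earliest=106.0, latest=110.0)
print(definitely_before(a, b))  # True

# Overlapping intervals: no ordering can be concluded either way.
c = TTInterval(earliest=104.0, latest=109.0)
print(definitely_before(a, c))  # False
```

During a partition, the interval widths grow with the local drift bound, so `definitely_before` returns False more often and operations must wait longer before committing.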
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ... (Subhajit Sahu)
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation of ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition of the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. Slowdown on the GPU is likely caused by a large submission of small workloads, and is expected to be a non-issue when the computation is performed on massive graphs.
Adjusting Bitset for graph : SHORT REPORT / NOTES (Subhajit Sahu)
Compressed Sparse Row (CSR) is an adjacency-list based graph representation that is commonly used for efficient graph computations. Unfortunately, using CSR for dynamic graphs is impractical since addition/deletion of a single edge can require on average (N+M)/2 memory accesses, in order to update source-offsets and destination-indices. A common approach is therefore to store edge-lists/destination-indices as an array of arrays, where each edge-list is an array belonging to a vertex. While this is good enough for small graphs, it quickly becomes a bottleneck for large graphs. What causes this bottleneck depends on whether the edge-lists are sorted or unsorted. If they are sorted, checking for an edge requires about log(E) memory accesses, but adding an edge on average requires E/2 accesses, where E is the number of edges of a given vertex. Note that both addition and deletion of edges in a dynamic graph require checking for an existing edge, before adding or deleting it. If edge lists are unsorted, checking for an edge requires around E/2 memory accesses, but adding an edge requires only 1 memory access.
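As a concrete sketch of the trade-offs above (the layout and names are illustrative), CSR packs all edge lists into two flat arrays, while a sorted per-vertex edge list trades cheap lookups for costly insertions:

```python
# Minimal sketch of the two representations discussed above.
import bisect

# CSR: offsets[v]..offsets[v+1] delimit v's neighbours in indices[].
offsets = [0, 2, 4, 5]          # 3 vertices
indices = [1, 2, 0, 2, 0]       # edges: 0->1, 0->2, 1->0, 1->2, 2->0

def has_edge_csr(u, v):
    row = indices[offsets[u]:offsets[u + 1]]
    return v in row  # O(E_u) scan; ~log(E_u) if the row is kept sorted

def has_edge_sorted(edge_list, v):
    # Sorted per-vertex edge list: ~log(E) accesses to check an edge,
    # but an insertion shifts ~E/2 elements on average to stay sorted.
    i = bisect.bisect_left(edge_list, v)
    return i < len(edge_list) and edge_list[i] == v

print(has_edge_csr(0, 2))          # True
print(has_edge_sorted([0, 2], 1))  # False
```

Adding an edge to the CSR form itself would require shifting `indices` and rewriting every later offset, which is exactly the (N+M)/2-access cost the note describes.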
Techniques to optimize the pagerank algorithm usually fall in two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before pagerank computation to improve performance. Final ranks of chain nodes can be easily calculated. This could reduce both the iteration time, and the number of iterations. If a graph has no dangling nodes, pagerank of each strongly connected component can be computed in topological order. This could help reduce the iteration time, no. of iterations, and also enable multi-iteration concurrency in pagerank computation. The combination of all of the above methods is the STICD algorithm. [sticd] For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
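For reference, here is a minimal pull-based power-iteration PageRank, the baseline that the optimizations above improve upon (the damping factor and tolerance are assumed values):

```python
# Minimal pull-based PageRank sketch; each iteration recomputes every
# vertex, which is what convergence/chain/SCC tricks above try to avoid.
def pagerank(neighbors_in, out_degree, d=0.85, tol=1e-10, iters=100):
    n = len(neighbors_in)
    r = [1.0 / n] * n
    for _ in range(iters):
        rn = []
        for v in range(n):
            s = sum(r[u] / out_degree[u] for u in neighbors_in[v])
            rn.append((1 - d) / n + d * s)
        err = sum(abs(rn[v] - r[v]) for v in range(n))
        r = rn
        if err < tol:   # already-converged vertices could be skipped per-vertex
            break
    return r

# Tiny 3-vertex cycle 0 -> 1 -> 2 -> 0: all ranks equal by symmetry.
ranks = pagerank(neighbors_in=[[2], [0], [1]], out_degree=[1, 1, 1])
print([round(x, 6) for x in ranks])  # [0.333333, 0.333333, 0.333333]
```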
Adjusting primitives for graph : SHORT REPORT / NOTES (Subhajit Sahu)
Graph algorithms, like PageRank, often build on Compressed Sparse Row (CSR), an adjacency-list based graph representation that is commonly used for efficient graph computations.
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
Experiments with Primitive operations : SHORT REPORT / NOTES (Subhajit Sahu)
This includes:
- Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
- Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
- Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
- Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
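The sum (reduce) experiments above can be illustrated with a toy sketch: a flat sequential loop versus a chunked two-level reduction, where the chunk size loosely stands in for a CUDA launch configuration. This is a schematic analogy in Python, not the CUDA code itself.

```python
# Two strategies for the sum (reduce) primitive.
def sum_sequential(xs):
    total = 0.0
    for x in xs:      # one running accumulator, like a single-threaded loop
        total += x
    return total

def sum_chunked(xs, chunk=4):
    # Each chunk plays the role of a thread block producing a partial sum;
    # a second-level reduction then combines the partials.
    partials = [sum(xs[i:i + chunk]) for i in range(0, len(xs), chunk)]
    return sum(partials)

data = [0.5] * 10
print(sum_sequential(data), sum_chunked(data, chunk=3))  # 5.0 5.0
```

Chunked (pairwise-style) reduction also tends to accumulate less floating-point error than a single running sum, which is relevant to the float vs bfloat16 storage comparison.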
Adjusting OpenMP PageRank : SHORT REPORT / NOTES (Subhajit Sahu)
For massive graphs that fit in RAM, but not in GPU memory, it is possible to take advantage of a shared memory system with multiple CPUs, each with multiple cores, to accelerate pagerank computation. If the NUMA architecture of the system is properly taken into account with good vertex partitioning, the speedup can be significant. To take steps in this direction, experiments are conducted to implement pagerank in OpenMP using two different approaches, uniform and hybrid. The uniform approach runs all primitives required for pagerank in OpenMP mode (with multiple threads). On the other hand, the hybrid approach runs certain primitives in sequential mode (i.e., sumAt, multiply).
word2vec, node2vec, graph2vec, X2vec: Towards a Theory of Vector Embeddings o... (Subhajit Sahu)
Below are the important points I note from the 2020 paper by Martin Grohe:
- 1-WL distinguishes almost all graphs, in a probabilistic sense
- Classical WL is two dimensional Weisfeiler-Leman
- DeepWL is an unlimited version of WL that runs in polynomial time.
- Knowledge graphs are essentially graphs with vertex/edge attributes
ABSTRACT:
Vector representations of graphs and relational structures, whether handcrafted feature vectors or learned representations, enable us to apply standard data analysis and machine learning techniques to the structures. A wide range of methods for generating such embeddings have been studied in the machine learning and knowledge representation literature. However, vector embeddings have received relatively little attention from a theoretical point of view.
Starting with a survey of embedding techniques that have been used in practice, in this paper we propose two theoretical approaches that we see as central for understanding the foundations of vector embeddings. We draw connections between the various approaches and suggest directions for future research.
DyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTES (Subhajit Sahu)
https://gist.github.com/wolfram77/54c4a14d9ea547183c6c7b3518bf9cd1
There exist a number of dynamic graph generators. The Barabási-Albert model iteratively attaches new vertices to pre-existing vertices in the graph using preferential attachment (edges to high-degree vertices are more likely: rich get richer, the Pareto principle). However, graph size increases monotonically, and the density of the graph keeps increasing (sparsity decreasing).
Gorke's model uses a defined clustering to uniformly add vertices and edges. Purohit's model uses motifs (e.g., triangles) to mimic properties of existing dynamic graphs, such as growth rate, structure, and degree distribution. Kronecker graph generators are used to increase the size of a given graph, with a power-law distribution.
To generate dynamic graphs, we must choose a metric to compare two graphs. Common metrics include diameter, clustering coefficient (modularity?), triangle counting (triangle density?), and degree distribution.
In this paper, the authors propose DyGraph, a dynamic graph generator that uses degree distribution as the only metric. The authors observe that many real-world graphs differ from the power-law distribution at the tail end. To address this issue, they propose binning, where the vertices beyond a certain degree (minDeg = min(deg) s.t. |V(deg)| < H, where H~10 is the number of vertices with a given degree below which are binned) are grouped into bins of degree-width binWidth, max-degree localMax, and number of degrees in bin with at least one vertex binSize (to keep track of sparsity). This helps the authors to generate graphs with a more realistic degree distribution.
The process of generating a dynamic graph is as follows. First the difference between the desired and the current degree distribution is calculated. The authors then create an edge-addition set where each vertex is present as many times as the number of additional incident edges it must receive. Edges are then created by connecting two vertices randomly from this set, and removing both from the set once connected. Currently, the authors reject self-loops and duplicate edges. Removal of edges is done in a similar fashion.
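A hedged sketch of that edge-addition step, with variable names of my own choosing; the paper's actual handling of rejected pairs may differ (here they are simply dropped rather than retried):

```python
# Edge-addition-set sketch: each vertex appears in the pool once per
# additional incident edge it needs; random pairs become new edges,
# rejecting self-loops and duplicates.
import random

def add_edges(deficit, existing, seed=42):
    """deficit: vertex -> number of extra incident edges required."""
    rng = random.Random(seed)
    pool = [v for v, k in deficit.items() for _ in range(k)]
    edges = []
    while len(pool) >= 2:
        rng.shuffle(pool)
        u, v = pool.pop(), pool.pop()
        if u != v and (u, v) not in existing and (v, u) not in existing:
            edges.append((u, v))
            existing.add((u, v))
        # rejected pairs are dropped in this sketch, so the loop terminates
    return edges

new = add_edges({0: 1, 1: 2, 2: 1}, existing=set())
print(all(u != v for u, v in new))  # True
```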
Authors observe that adding edges with power-law properties dominates the execution time, and consider parallelizing DyGraph as part of future work.
My notes on shared memory parallelism.
Shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. Shared memory is an efficient means of passing data between programs. Using memory for communication inside a single program, e.g. among its multiple threads, is also referred to as shared memory [REF].
A Dynamic Algorithm for Local Community Detection in Graphs : NOTES (Subhajit Sahu)
**Community detection methods** can be *global* or *local*. **Global community detection methods** divide the entire graph into groups. Existing global algorithms include:
- Random walk methods
- Spectral partitioning
- Label propagation
- Greedy agglomerative and divisive algorithms
- Clique percolation
https://gist.github.com/wolfram77/b4316609265b5b9f88027bbc491f80b6
There is a growing body of work in *detecting overlapping communities*. **Seed set expansion** is a **local community detection method** where relevant *seed vertices* of interest are picked and *expanded to form communities* surrounding them. The quality of each community is measured using a *fitness function*.
**Modularity** is a *fitness function* which compares the number of intra-community edges to the expected number in a random-null model. **Conductance** is another popular fitness score that measures the community cut or inter-community edges. Many *overlapping community detection* methods **use a modified ratio** of intra-community edges to all edges with at least one endpoint in the community.
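As a concrete example, one common formulation of conductance (the paper may use a variant) divides the community's cut edges by its volume:

```python
# Conductance of a community S: edges leaving S / total degree of S.
def conductance(adj, community):
    S = set(community)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    volume = sum(len(adj[u]) for u in S)
    return cut / volume if volume else 0.0

# Two triangles joined by a single bridge edge 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(conductance(adj, [0, 1, 2]))  # 1/7 = 0.142857...
```

Lower conductance means a better-separated community; a seed-set expansion keeps adding vertices while the score improves.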
Andersen et al. use a **Spectral PageRank-Nibble method** which minimizes conductance and is formed by adding vertices in order of decreasing PageRank values. Andersen and Lang develop a **random walk approach** in which some vertices in the seed set may not be placed in the final community. Clauset gives a **greedy method** that *starts from a single vertex* and then iteratively adds neighboring vertices *maximizing the local modularity score*. Riedy et al. **expand multiple vertices** via maximizing modularity.
Several algorithms for **detecting global, overlapping communities** use a *greedy*, *agglomerative approach* and run *multiple separate seed set expansions*. Lancichinetti et al. run **greedy seed set expansions**, each with a *single seed vertex*. Overlapping communities are produced by sequentially running expansions from a node not yet in a community. Lee et al. use **maximal cliques as seed sets**. Havemann et al. **greedily expand cliques**.
The authors of this paper discuss a dynamic approach for **community detection using seed set expansion**. Simply marking the neighbours of changed vertices is a **naive approach**, and has *severe shortcomings*. This is because *communities can split apart*. The simple updating method *may fail even when it outputs a valid community* in the graph.
Scalable Static and Dynamic Community Detection Using Grappolo : NOTES (Subhajit Sahu)
A **community** (in a network) is a subset of nodes which are _strongly connected among themselves_, but _weakly connected to others_. Neither the number of output communities nor their size distribution is known a priori. Community detection methods can be divisive or agglomerative. **Divisive methods** use _betweenness centrality_ to **identify and remove bridges** between communities. **Agglomerative methods** greedily **merge two communities** that provide maximum gain in _modularity_. Newman and Girvan have introduced the **modularity metric**. The problem of community detection is then reduced to the problem of modularity maximization which is **NP-complete**. The **Louvain method** is a variant of the _agglomerative strategy_, in that it is a _multi-level heuristic_.
https://gist.github.com/wolfram77/917a1a4a429e89a0f2a1911cea56314d
In this paper, the authors discuss **four heuristics** for Community detection using the _Louvain algorithm_ implemented upon recently developed **Grappolo**, which is a parallel variant of the Louvain algorithm. They are:
- Vertex following and Minimum label
- Data caching
- Graph coloring
- Threshold scaling
With the **Vertex following** heuristic, the _input is preprocessed_ and all single-degree vertices are merged with their corresponding neighbours. This helps reduce the number of vertices considered in each iteration, and also helps initial seeds of communities to be formed. With the **Minimum label heuristic**, when a vertex is making the decision to move to a community and multiple communities provide the same modularity gain, the community with the smallest id is chosen. This helps _minimize or prevent community swaps_. With the **Data caching** heuristic, community information is stored in a vector instead of a map, and is reused in each iteration, but with some additional cost. With the **Vertex ordering via Graph coloring** heuristic, _distance-k coloring_ of graphs is performed in order to group vertices into colors. Then, each set of vertices (by color) is processed _concurrently_, and synchronization is performed after that. This enables us to mimic the behaviour of the serial algorithm. Finally, with the **Threshold scaling** heuristic, _successively smaller values of modularity threshold_ are used as the algorithm progresses. This allows the algorithm to converge faster, and has been observed to yield a good modularity score as well.
From the results, it appears that _graph coloring_ and _threshold scaling_ heuristics do not always provide a speedup and this depends upon the nature of the graph. It would be interesting to compare the heuristics against baseline approaches. Future work can include _distributed memory implementations_, and _community detection on streaming graphs_.
Application Areas of Community Detection: A Review : NOTES (Subhajit Sahu)
This is a short review of Community detection methods (on graphs), and their applications. A **community** is a subset of a network whose members are *highly connected* among themselves, but *loosely connected* to others outside their community. Different community detection methods *can return differing communities*, because these algorithms are **heuristic-based**. **Dynamic community detection** involves tracking the *evolution of community structure* over time.
https://gist.github.com/wolfram77/09e64d6ba3ef080db5558feb2d32fdc0
Communities can be of the following **types**:
- Disjoint
- Overlapping
- Hierarchical
- Local
The following **static** community detection **methods** exist:
- Spectral-based
- Statistical inference
- Optimization
- Dynamics-based
The following **dynamic** community detection **methods** exist:
- Independent community detection and matching
- Dependent community detection (evolutionary)
- Simultaneous community detection on all snapshots
- Dynamic community detection on temporal networks
**Applications** of community detection include:
- Criminal identification
- Fraud detection
- Criminal activities detection
- Bot detection
- Dynamics of epidemic spreading (dynamic)
- Cancer/tumor detection
- Tissue/organ detection
- Evolution of influence (dynamic)
- Astroturfing
- Customer segmentation
- Recommendation systems
- Social network analysis (both)
- Network summarization
- Privacy, group segmentation
- Link prediction (both)
- Community evolution prediction (dynamic, hot field)
<br>
<br>
## References
- [Application Areas of Community Detection: A Review : PAPER](https://ieeexplore.ieee.org/document/8625349)
This paper discusses a GPU implementation of the Louvain community detection algorithm. The Louvain algorithm obtains hierarchical communities as a dendrogram through modularity optimization. Given an undirected weighted graph, all vertices are first considered to be their own communities. In the first phase, each vertex greedily decides to move to the community of one of its neighbours which gives the greatest increase in modularity. If moving to no neighbour's community leads to an increase in modularity, the vertex chooses to stay with its own community. This is done sequentially for all the vertices. If the total change in modularity is more than a certain threshold, this phase is repeated. Once this local moving phase is complete, all vertices have formed their first hierarchy of communities. The next phase is called the aggregation phase, where all the vertices belonging to a community are collapsed into a single super-vertex, such that edges between communities are represented as edges between respective super-vertices (edge weights are combined), and edges within each community are represented as self-loops in respective super-vertices (again, edge weights are combined). Together, the local moving and the aggregation phases constitute a stage. This super-vertex graph is then used as input for the next stage. This process continues until the increase in modularity is below a certain threshold. As a result from each stage, we have a hierarchy of community memberships for each vertex as a dendrogram.
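A simplified single-pass sketch of the local-moving phase on an unweighted graph. This omits the aggregation phase and uses one common form of the modularity-gain comparison; it is illustrative, not the paper's GPU implementation.

```python
# One pass of Louvain-style local moving: each vertex greedily joins the
# neighbouring community with the highest modularity gain, staying on ties.
def local_moving_pass(adj, comm):
    m2 = sum(len(adj[u]) for u in adj)        # 2m for an unweighted graph
    deg = {u: len(adj[u]) for u in adj}
    tot = {}                                   # total degree per community
    for u in adj:
        tot[comm[u]] = tot.get(comm[u], 0) + deg[u]
    for u in adj:
        c0 = comm[u]
        tot[c0] -= deg[u]                      # take u out of its community
        k = {}                                 # edges from u into each community
        for v in adj[u]:
            k[comm[v]] = k.get(comm[v], 0) + 1
        # gain of joining c, up to a common factor: k_uc - tot_c * deg_u / 2m
        best = c0
        best_gain = k.get(c0, 0) - tot.get(c0, 0) * deg[u] / m2
        for c, kuc in k.items():
            gain = kuc - tot.get(c, 0) * deg[u] / m2
            if gain > best_gain:
                best, best_gain = c, gain
        comm[u] = best
        tot[best] = tot.get(best, 0) + deg[u]
    return comm

# Two triangles joined by a single bridge edge (2-3); singleton start.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
comm = local_moving_pass(adj, {u: u for u in adj})
print(comm[0] == comm[1] == comm[2], comm[4] == comm[5])  # True True
```

Repeating this pass until the total modularity gain falls below a threshold, then aggregating communities into super-vertices, gives one full Louvain stage.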
Approaches to perform the Louvain algorithm can be divided into coarse-grained and fine-grained. Coarse-grained approaches process a set of vertices in parallel, while fine-grained approaches process all vertices in parallel. A coarse-grained hybrid-GPU algorithm using multiple GPUs has been implemented by Cheong et al., which grabbed my attention. In addition, their algorithm does not use hashing for the local moving phase, but instead sorts each neighbour list based on the community id of each vertex.
https://gist.github.com/wolfram77/7e72c9b8c18c18ab908ae76262099329
Survey for extra-child-process package : NOTES (Subhajit Sahu)
Useful additions to inbuilt child_process module.
📦 Node.js, 📜 Files, 📰 Docs.
Please see attached PDF for literature survey.
https://gist.github.com/wolfram77/d936da570d7bf73f95d1513d4368573e
Dynamic Batch Parallel Algorithms for Updating PageRank : POSTER (Subhajit Sahu)
This paper presents two algorithms for efficiently computing PageRank on dynamically updating graphs in a batched manner: DynamicLevelwisePR and DynamicMonolithicPR. DynamicLevelwisePR processes vertices level-by-level based on strongly connected components and avoids recomputing converged vertices on the CPU. DynamicMonolithicPR uses a full power iteration approach on the GPU that partitions vertices by in-degree and skips unaffected vertices. Evaluation on real-world graphs shows the batched algorithms provide speedups of up to 4000x over single-edge updates and outperform other state-of-the-art dynamic PageRank algorithms.
Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up... (Subhajit Sahu)
For the PhD forum an abstract submission is required by 10th May, and poster by 15th May. The event is on 30th May.
https://gist.github.com/wolfram77/1c1f730d20b51e0d2c6d477fd3713024
Fast Incremental Community Detection on Dynamic Graphs : NOTES (Subhajit Sahu)
In this paper, the authors describe two approaches for dynamic community detection using the CNM algorithm. CNM is a hierarchical, agglomerative algorithm that greedily maximizes modularity. They define two approaches: BasicDyn and FastDyn. BasicDyn backtracks merges of communities until each marked (changed) vertex is its own singleton community. FastDyn undoes a merge only if the quality of merge, as measured by the induced change in modularity, has significantly decreased compared to when the merge initially took place. FastDyn also allows more than two vertices to contract together if in the previous time step these vertices eventually ended up contracted in the same community. In the static case, merging several vertices together in one contraction phase could lead to deteriorating results. FastDyn is able to do this, however, because it uses information from the merges of the previous time step. Intuitively, merges that previously occurred are more likely to be acceptable later.
https://gist.github.com/wolfram77/1856b108334cc822cdddfdfa7334792a
Building a Raspberry Pi Robot with Dot NET 8, Blazor and SignalR - Slides Onl... (Peter Gallagher)
In this session, delivered at Leeds IoT, I talk about how you can control a 3D-printed robot arm with a Raspberry Pi, .NET 8, Blazor and SignalR.
I also show how you can use a Unity app on a Meta Quest 3 to control the arm in VR too.
You can find the GitHub repo and workshop instructions here:
https://bit.ly/dotnetrobotgithub
02/10/2020 DDR4 SDRAM - Wikipedia (https://en.wikipedia.org/wiki/DDR4_SDRAM)
DDR4 SDRAM
Double Data Rate 4 Synchronous Dynamic Random-Access Memory
Type of RAM (pictured: 8 GiB DDR4-2133 ECC 1.2 V RDIMM)
- Developer: JEDEC
- Type: Synchronous dynamic random-access memory (SDRAM)
- Generation: 4th generation
- Release date: 2014
- Standards: DDR4-1600 (PC4-12800), DDR4-1866 (PC4-14900), DDR4-2133 (PC4-17000), DDR4-2400 (PC4-19200), DDR4-2666 (PC4-21333), DDR4-2933 (PC4-23466), DDR4-3200 (PC4-25600)
- Clock rate: 800–1600 MHz
- Voltage reference: 1.2 V
- Predecessor: DDR3 SDRAM (2007)
- Successor: DDR5 SDRAM (2020)
DDR4 SDRAM

Double Data Rate 4 Synchronous Dynamic Random-Access Memory, officially abbreviated as DDR4 SDRAM, is a type of synchronous dynamic random-access memory with a high bandwidth ("double data rate") interface.

Released to the market in 2014,[1][2][3] it is a variant of dynamic random-access memory (DRAM), of which some have been in use since the early 1970s,[4] and a higher-speed successor to the DDR2 and DDR3 technologies.

DDR4 is not compatible with any earlier type of random-access memory (RAM) due to different signaling voltage and physical interface, besides other factors.

DDR4 SDRAM was released to the public market in Q2 2014, focusing on ECC memory,[5] while the non-ECC DDR4 modules became available in Q3 2014, accompanying the launch of Haswell-E processors that require DDR4 memory.[6]
Contents
- Features
- Timeline
- Market perception and adoption
- Operation
- Command encoding
- Design considerations
- Module packaging
- Modules
- JEDEC standard DDR4 module
- Successor
- See also
- Notes
- References
- External links

Features

The primary advantages of DDR4 over its predecessor, DDR3, include higher module density and lower voltage requirements, coupled with higher data rate transfer speeds. The DDR4 standard allows for DIMMs of up to 64 GiB in capacity, compared to DDR3's maximum of 16 GiB per DIMM.[7]
The first DDR4 memory module prototype was manufactured by Samsung and announced in January 2011.[b]

[Image: Physical comparison of DDR, DDR2, DDR3, and DDR4 SDRAM]

Unlike previous generations of DDR memory, prefetch has not been increased above the 8n used in DDR3;[8]:16 the basic burst size is eight words, and higher bandwidths are achieved by sending more read/write commands per second. To allow this, the standard divides the DRAM banks into two or four selectable bank groups,[9] where transfers to different bank groups may be done more rapidly.

Because power consumption increases with speed, the reduced voltage allows higher speed operation without unreasonable power and cooling requirements.

DDR4 operates at a voltage of 1.2 V with a frequency between 800 and 1600 MHz (DDR4-1600 through DDR4-3200), compared to frequencies between 400 and 1067 MHz (DDR3-800 through DDR3-2133)[10][a] and the 1.5 V voltage requirement of DDR3. Due to the nature of DDR, speeds are typically advertised as doubles of these numbers (DDR3-1600 and DDR4-2400 are common, with DDR4-3200, DDR4-4800 and DDR4-5000 available at high cost). Unlike DDR3's 1.35 V low voltage standard DDR3L, there is no DDR4L low voltage version of DDR4.[12][13]
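The naming arithmetic in this passage can be checked directly: the I/O clock doubles into the MT/s figure, and the PC4 label is the peak bandwidth in MB/s over a 64-bit (8-byte) bus. Note that some official labels round the result (e.g. PC4-17000 for DDR4-2133).

```python
# Worked example of DDR4 speed-grade naming (illustrative helper of my own).
def ddr4_labels(io_clock_mhz):
    mts = io_clock_mhz * 2   # double data rate: two transfers per clock
    pc4 = mts * 8            # 8 bytes per transfer -> peak MB/s
    return f"DDR4-{mts}", f"PC4-{pc4}"

print(ddr4_labels(800))    # ('DDR4-1600', 'PC4-12800')
print(ddr4_labels(1600))   # ('DDR4-3200', 'PC4-25600')
```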
2005: standards body JEDEC began working
on a successor to DDR3 around 2005,[15]
about 2 years before the launch of DDR3 in
2007.[16][17] The high-level architecture of
DDR4 was planned for completion in 2008.[18]
2007: some advance information was
published in 2007,[19] and a guest speaker
from Qimonda provided further public details
in a presentation at the August 2008 San
Francisco Intel Developer Forum (IDF).[19][20][21][22] DDR4
was described as involving a 30 nm process at 1.2 volts, with
bus frequencies of 2133 MT/s "regular" speed and 3200 MT/s
"enthusiast" speed, and reaching market in 2012, before
transitioning to 1 volt in 2013.[20][22]
2009: in February, Samsung validated 40 nm DRAM chips,
considered a "significant step" towards DDR4
development[23] since in 2009, DRAM chips were only
beginning to migrate to a 50 nm process.[24]
2010: subsequently, further details were revealed at MemCon
2010, Tokyo (a computer memory industry event), at which a
presentation by a JEDEC director titled "Time to rethink
DDR4"[25] with a slide titled "New roadmap: More realistic
roadmap is 2015" led some websites to report that the
introduction of DDR4 was probably[26] or definitely[27][28]
delayed until 2015. However, DDR4 test samples were
announced in line with the original schedule in early 2011 at
which time manufacturers began to advise that large scale
commercial production and release to market was scheduled
for 2012.[1]
2011: in January, Samsung announced the completion and
release for testing of a 2 GiB DDR4 DRAM module based on
a process between 30 and 39 nm.[29] It has a maximum data transfer rate of 2133 MT/s at 1.2 V,
uses pseudo open drain technology (adapted from graphics DDR memory[30]) and draws 40%
less power than an equivalent DDR3 module.[29][31][32]
In April, Hynix announced the production of 2 GiB DDR4 modules at 2400 MT/s, also running at
1.2 V on a process between 30 and 39 nm (exact process unspecified),[1] adding that it
anticipated commencing high volume production in the second half of 2012.[1] Semiconductor
processes for DDR4 are expected to transition to sub-30 nm at some point between late 2012 and
2014.[33][34]
Front and back of 8 GB DDR4 memory modules
2012: in May, Micron announced[2] that it was aiming to start production of 30 nm modules in late
2012.
In July, Samsung announced that it would begin sampling the
industry's first 16 GiB registered dual inline memory modules
(RDIMMs) using DDR4 SDRAM for enterprise server
systems.[35][36]
In September, JEDEC released the final specification of
DDR4.[37]
2013: DDR4 was expected to represent 5% of the DRAM
market in 2013,[1] and to reach mass market adoption and
50% market penetration around 2015;[1] as of 2013, however,
adoption of DDR4 had been delayed and it was no longer expected to reach a majority of the
market until 2016 or later.[38] The transition from DDR3 to DDR4 is thus taking longer than the
approximately five years taken for DDR3 to achieve mass market transition over DDR2.[33] In
part, this is because changes required to other components would affect all other parts of
computer systems, which would need to be updated to work with DDR4.[39]
2014: in April, Hynix announced that it had developed the world's first 128 GiB module, the
highest density to date, based on 8 Gibit DDR4 using 20 nm technology. The module works at
2133 MHz, with a 64-bit I/O, and processes up to 17 GB of data per second.
2016: in April, Samsung announced that they had begun to mass-produce DRAM on a "10 nm-
class" process, by which they mean the 1x nm node regime of 16 nm to 19 nm, which supports a
30% faster data transfer rate of 3,200 megabits per second. Previously, a size of 20 nm was
used.[40][41]
Market perception and adoption

In April 2013, a news writer at International Data Group (IDG), an American technology research business
originally part of IDC, published an analysis of market perceptions of DDR4 SDRAM.[42] The
conclusions were that the increasing popularity of mobile computing and other devices using slower but low-
powered memory, the slowing of growth in the traditional desktop computing sector, and the consolidation of
the memory manufacturing marketplace, meant that margins on RAM were tight.
As a result, the desired premium pricing for the new technology was harder to achieve, and capacity had shifted
to other sectors. SDRAM manufacturers and chipset creators were, to an extent, "stuck between a rock and a
hard place" where "nobody wants to pay a premium for DDR4 products, and manufacturers don't want to make
the memory if they are not going to get a premium", according to Mike Howard from iSuppli.[42] A switch in
market sentiment toward desktop computing and release of processors having DDR4 support by Intel and AMD
could therefore potentially lead to "aggressive" growth.[42]
Intel's 2014 Haswell roadmap revealed the company's first use of DDR4 SDRAM in Haswell-EP
processors.[43]
AMD's Ryzen processors, revealed in 2016 and shipped in 2017, use DDR4 SDRAM.[44]
Operation

DDR4 chips use a 1.2 V supply[8]:16[45][46] with a 2.5 V auxiliary supply for wordline boost called VPP,[8]:16 as
compared with the standard 1.5 V of DDR3 chips, with lower voltage variants at 1.35 V appearing in 2013.
DDR4 was expected to be introduced at transfer rates of 2133 MT/s,[8]:18 estimated to rise to a potential
4266 MT/s[39] by 2013. The minimum transfer rate of 2133 MT/s was said to be due to progress made in DDR3
speeds which, being likely to reach 2133 MT/s, left little commercial benefit to specifying DDR4 below this
speed.[33][39] Techgage interpreted Samsung's January 2011 engineering sample as having CAS latency of 13
clock cycles, described as being comparable to the move from DDR2 to DDR3.[30]
Internal banks are increased to 16 (4 bank select bits), with up to 8 ranks per DIMM.[8]:16
Protocol changes include:[8]:20
Parity on the command/address bus
Data bus inversion (like GDDR4)
CRC on the data bus
Independent programming of individual DRAMs on a DIMM, to allow better control of on-die
termination.
Increased memory density is anticipated, possibly using TSV ("through-silicon via") or other 3D stacking
processes.[33][39][47][48] The DDR4 specification will include standardized 3D stacking "from the start"
according to JEDEC,[48] with provision for up to 8 stacked dies.[8]:12 X-bit Labs predicted that "as a result
DDR4 memory chips with very high density will become relatively inexpensive".[39]
Switched memory banks are also an anticipated option for servers.[33][47]
In 2008 concerns were raised in the book Wafer Level 3-D ICs Process Technology that non-scaling analog
elements such as charge pumps and voltage regulators, and additional circuitry "have allowed significant
increases in bandwidth but they consume much more die area". Examples include CRC error-detection, on-die
termination, burst hardware, programmable pipelines, low impedance, and increasing need for sense amps
(attributed to a decline in bits per bitline due to low voltage). The authors noted that, as a result, the amount of
die used for the memory array itself has declined over time from 70–78% for SDRAM and DDR1, to 47% for
DDR2, to 38% for DDR3 and to potentially less than 30% for DDR4.[49]
The specification defined standards for ×4, ×8 and ×16 memory devices with capacities of 2, 4, 8 and 16
Gibit.[50]
Command encoding

Although it still operates in fundamentally the same way, DDR4 makes one major change to the command
formats used by previous SDRAM generations. A new command signal, ACT, is low to indicate the activate
(open row) command.
The activate command requires more address bits than any other (18 row address bits in a 16 Gb part), so the
standard RAS, CAS, and WE active low signals are shared with high-order address bits that are not used when
ACT is high. The combination of RAS=L and CAS=WE=H that previously encoded an activate command is
unused.
As in previous SDRAM encodings, A10 is used to select command variants: auto-precharge on read and write
commands, and one bank vs. all banks for the precharge command. It also selects two variants of the ZQ
calibration command.
As in DDR3, A12 is used to request burst chop: truncation of an 8-transfer burst after four transfers. Although
the bank is still busy and unavailable for other commands until eight transfer times have elapsed, a different
bank can be accessed.
Also, the number of bank addresses has been increased greatly. There are four bank select bits to select up to 16
banks within each DRAM: two bank address bits (BA0, BA1), and two bank group bits (BG0, BG1). There are
additional timing restrictions when accessing banks within the same bank group; it is faster to access a bank in
a different bank group.

DDR4 command encoding[51]

Command                     | CS | BG1–0, BA1–0 | ACT | A17 | A16/RAS | A15/CAS | A14/WE | A13 | A12/BC | A11 | A10/AP | A9–0
Deselect (no operation)     | H  | X        | X | X | X | X | X | X  | X  | X | X    | X
Activate: open a row        | L  | Bank     | L | ————————————— Row address —————————————
No operation                | L  | V        | H | V | H | H | H | V  | V  | V | V    | V
ZQ calibration              | L  | V        | H | V | H | H | L | V  | V  | V | Long | V
Read (BC, burst chop)       | L  | Bank     | H | V | H | L | H | V  | BC | V | AP   | Column
Write (AP, auto-precharge)  | L  | Bank     | H | V | H | L | L | V  | BC | V | AP   | Column
Unassigned, reserved        | L  | V        | H | V | L | H | H | V  | V  | V | V    | V
Precharge all banks         | L  | V        | H | V | L | H | L | V  | V  | V | H    | V
Precharge one bank          | L  | Bank     | H | V | L | H | L | V  | V  | V | L    | V
Refresh                     | L  | V        | H | V | L | L | H | V  | V  | V | V    | V
Mode register set (MR0–MR6) | L  | Register | H | L | L | L | L | ———————— Data ————————

Signal level: H = high · L = low · V = valid (either low or high) · X = irrelevant.
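The fixed-level part of the command truth table can be expressed as a small decoder. This is an illustrative sketch, not production code: it covers only CS, ACT and the RAS/CAS/WE roles multiplexed onto A16/A15/A14, and ignores the A10/A12 variant bits (auto-precharge, burst chop, one bank vs. all banks) and the address fields.

```python
def decode_command(cs, act, ras, cas, we):
    """Decode one DDR4 command from signal levels given as 'L' or 'H'.
    ras/cas/we are the roles multiplexed onto A16/A15/A14; they are only
    interpreted as command bits when ACT is high."""
    if cs == 'H':
        return "Deselect (no operation)"   # chip not selected
    if act == 'L':
        return "Activate (open a row)"     # A16-A14 carry row address bits
    return {
        ('H', 'H', 'H'): "No operation",
        ('H', 'H', 'L'): "ZQ calibration",
        ('H', 'L', 'H'): "Read",
        ('H', 'L', 'L'): "Write",
        ('L', 'H', 'L'): "Precharge",
        ('L', 'L', 'H'): "Refresh",
        ('L', 'L', 'L'): "Mode register set",
    }.get((ras, cas, we), "Reserved")
```

Note that the RAS=L, CAS=WE=H combination, which encoded activate in earlier SDRAM generations, falls through to "Reserved" here, matching the table.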
In addition, there are three chip select signals (C0, C1, C2), allowing up to eight stacked chips to be placed
inside a single DRAM package. These effectively act as three more bank select bits, bringing the total to seven
(128 possible banks).
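The seven-bits-in-total arithmetic above can be sketched directly. The helper `flat_bank_index` and its particular bit layout are illustrative assumptions, not part of the DDR4 specification; only the bit counts come from the text.

```python
# Bit counts from the text: 2 bank-group bits (BG1-0), 2 bank-address bits
# (BA1-0), and 3 chip-select bits (C2-0) for up to 8 stacked chips.
BG_BITS, BA_BITS, CS_BITS = 2, 2, 3

def flat_bank_index(bg, ba, chip):
    """Combine bank group (0-3), bank (0-3), and stacked-chip id (0-7)
    into one flat selector; the packing order here is arbitrary."""
    assert 0 <= bg < 1 << BG_BITS
    assert 0 <= ba < 1 << BA_BITS
    assert 0 <= chip < 1 << CS_BITS
    return (chip << (BG_BITS + BA_BITS)) | (bg << BA_BITS) | ba

total_banks = 1 << (BG_BITS + BA_BITS + CS_BITS)  # 2**7 = 128
```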
Standard transfer rates are 1600, 1866, 2133, 2400, 2666, 2933, and 3200 MT/s[51][52] (12⁄15, 14⁄15, 16⁄15, 18⁄15,
20⁄15, 22⁄15, and 24⁄15 GHz clock frequencies, double data rate), with speeds up to DDR4-4800 (2400 MHz
clock) commercially available.[53]
Design considerations

The DDR4 team at Micron Technology identified some key points for IC and PCB design:[54]
IC design:[54]
VrefDQ calibration (DDR4 "requires that VrefDQ calibration be performed by the controller");
New addressing schemes ("bank grouping", ACT to replace RAS, CAS, and WE commands, PAR
and Alert for error checking and DBI for data bus inversion);
New power saving features (low-power auto self-refresh, temperature-controlled refresh, fine-
granularity refresh, data-bus inversion, and CMD/ADDR latency).
Circuit board design:[54]
New power supplies (VDD/VDDQ at 1.2 V and wordline boost, known as VPP, at 2.5 V);
VrefDQ must be supplied internally to the DRAM while VrefCA is supplied externally from the
board;
DQ pins terminate high using pseudo-open-drain I/O (this differs from the CA pins in DDR3 which
are center-tapped to VTT).[54]
Rowhammer mitigation techniques include larger storage capacitors, address lines modified to use address
space layout randomization, and dual-voltage I/O lines that further isolate potential boundary conditions
which might result in instability at high write/read speeds.
Module packaging

DDR4 memory is supplied in 288-pin dual in-line memory modules (DIMMs), similar in size to 240-pin DDR3
DIMMs. The pins are spaced more closely (0.85 mm instead of 1.0) to fit the increased number within the same
5¼ inch (133.35 mm) standard DIMM length, but the height is increased slightly (31.25 mm/1.23 in instead of
30.35 mm/1.2 in) to make signal routing easier, and the thickness is also increased (to 1.2 mm from 1.0) to
accommodate more signal layers.[55] DDR4 DIMM modules have a slightly curved edge connector so not all of
the pins are engaged at the same time during module insertion, lowering the insertion force.[14]
DDR4 SO-DIMMs have 260 pins instead of the 204 pins of DDR3 SO-DIMMs, spaced at 0.5 rather than
0.6 mm, and are 2.0 mm wider (69.6 versus 67.6 mm), but remain the same 30 mm in height.[56]
For its Skylake microarchitecture, Intel designed a SO-DIMM package named UniDIMM, which can be
populated with either DDR3 or DDR4 chips. At the same time, the integrated memory controller (IMC) of
Skylake CPUs is announced to be capable of working with either type of memory. The purpose of UniDIMMs
is to help in the market transition from DDR3 to DDR4, where pricing and availability may make it undesirable
to switch the RAM type. UniDIMMs have the same dimensions and number of pins as regular DDR4 SO-
DIMMs, but the edge connector's notch is placed differently to avoid accidental use in incompatible DDR4 SO-
DIMM sockets.[57]
Modules

CAS latency (CL)
Clock cycles between sending a column address to the memory and the beginning of the data
in response
tRCD
Clock cycles between row activate and reads/writes
tRP
Clock cycles between row precharge and activate
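These cycle-count timings convert to nanoseconds by dividing by the I/O bus clock. A minimal sketch, where `cas_latency_ns` is an illustrative helper and the clock is taken as half the MT/s figure:

```python
def cas_latency_ns(cl_cycles, data_rate_mts):
    """First-word CAS latency in nanoseconds: CL clock cycles divided by
    the I/O bus clock (half the MT/s figure, in MHz)."""
    io_clock_mhz = data_rate_mts / 2
    return cl_cycles / io_clock_mhz * 1000

# DDR4-1600 CL12 and DDR4-3200 CL22 both work out to the same wall-clock
# latency, which is why absolute CAS latency in ns has stayed roughly flat
# across speed grades.
slow = cas_latency_ns(12, 1600)   # 15 ns
fast = cas_latency_ns(22, 3200)   # 13.75 ns
```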
DDR4-xxxx denotes per-bit data transfer rate, and is normally used to describe DDR chips. PC4-xxxxx denotes
overall transfer rate, in megabytes per second, and applies only to modules (assembled DIMMs). Because
DDR4 memory modules transfer data on a bus that is 8 bytes (64 data bits) wide, module peak transfer rate is
calculated by taking transfers per second and multiplying by eight.[58]

JEDEC standard DDR4 module

Standard name                  | Memory clock (MHz) | I/O bus clock (MHz) | Data rate (MT/s) | Module name | Peak transfer rate (MB/s) | Timings (CL-tRCD-tRP)                     | CAS latency (ns)
DDR4-1600J*, K, L              | 200                | 800                 | 1600             | PC4-12800   | 12800                     | 10-10-10 / 11-11-11 / 12-12-12            | 12.5 / 13.75 / 15
DDR4-1866L*, M, N              | 233.33             | 933.33              | 1866.67          | PC4-14900   | 14933.33                  | 12-12-12 / 13-13-13 / 14-14-14            | 12.857 / 13.929 / 15
DDR4-2133N*, P, R              | 266.67             | 1066.67             | 2133.33          | PC4-17000   | 17066.67                  | 14-14-14 / 15-15-15 / 16-16-16            | 13.125 / 14.063 / 15
DDR4-2400P*, R, T, U           | 300                | 1200                | 2400             | PC4-19200   | 19200                     | 15-15-15 / 16-16-16 / 17-17-17 / 18-18-18 | 12.5 / 13.32 / 14.16 / 15
DDR4-2666T, U, V, W            | 333.33             | 1333.33             | 2666.67          | PC4-21333   | 21333.33                  | 17-17-17 / 18-18-18 / 19-19-19 / 20-20-20 | 12.75 / 13.50 / 14.25 / 15
DDR4-2933V, W, Y, AA           | 366.67             | 1466.67             | 2933.33          | PC4-23466   | 23466.67                  | 19-19-19 / 20-20-20 / 21-21-21 / 22-22-22 | 12.96 / 13.64 / 14.32 / 15
DDR4-3200W, AA, AC             | 400                | 1600                | 3200             | PC4-25600   | 25600                     | 20-20-20 / 22-22-22 / 24-24-24            | 12.5 / 13.75 / 15
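The multiply-by-eight rule can be sketched directly; `peak_mb_per_s` and `pc4_module_name` are illustrative helpers, and note that some official module names round the result (e.g. PC4-14900 for 1866.67 MT/s rather than PC4-14933).

```python
def peak_mb_per_s(data_rate_mts):
    """Peak module transfer rate in MB/s: 8 bytes move per transfer on a
    64-bit (8-byte) data bus."""
    return data_rate_mts * 8

def pc4_module_name(data_rate_mts):
    """Nominal PC4 module name; real JEDEC names sometimes round this
    figure (PC4-14900, PC4-17000)."""
    return f"PC4-{int(data_rate_mts * 8)}"

name = pc4_module_name(3200)   # "PC4-25600"
```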
Successor

At the 2016 Intel Developer Forum, the future of DDR5 SDRAM was discussed. The specifications were
finalized at the end of 2016, but no modules were expected to be available before 2020.[59] Other memory
technologies aiming to replace DDR4, namely HBM in versions 3 and 4,[60] have also been proposed.
In 2011, JEDEC published the Wide I/O 2 standard; it stacks multiple memory dies, but does that directly on
top of the CPU and in the same package. This memory layout provides higher bandwidth and better power
performance than DDR4 SDRAM, and allows a wide interface with short signal lengths. It primarily aims to
replace various mobile DDRX SDRAM standards used in high-performance embedded and mobile devices,
such as smartphones.[61][62] Hynix proposed similar High Bandwidth Memory (HBM), which was published as
JEDEC JESD235. Both Wide I/O 2 and HBM use a very wide parallel memory interface, up to 512 bits wide
for Wide I/O 2 (compared to 64 bits for DDR4), running at a lower frequency than DDR4.[63] Wide I/O 2 is
targeted at high-performance compact devices such as smartphones, where it will be integrated into the
processor or system on a chip (SoC) packages. HBM is targeted at graphics memory and general computing,
while HMC targets high-end servers and enterprise applications.[63]
Micron Technology's Hybrid Memory Cube (HMC) stacked memory uses a serial interface. Many other
computer buses have migrated towards replacing parallel buses with serial buses, for example by the evolution
of Serial ATA replacing Parallel ATA, PCI Express replacing PCI, and serial ports replacing parallel ports. In
general, serial buses are easier to scale up and have fewer wires/traces, making circuit boards using them easier
to design.[64][65][66]
In the longer term, experts speculate that non-volatile RAM types like PCM (phase-change memory), RRAM
(resistive random-access memory), or MRAM (magnetoresistive random-access memory) could replace DDR4
SDRAM and its successors.[67]
GDDR5 SGRAM is a graphics type of DDR3 synchronous graphics RAM, which was introduced before
DDR4, and is not a successor to DDR4.
See also

Synchronous dynamic random access memory – main article for DDR memory types
List of device bandwidths
Memory timings
Notes

a. Some factory-overclocked DDR3 memory modules operate at higher frequencies, up to
1600 MHz.[11]
b. As a prototype, this DDR4 memory module has a flat edge connector at the bottom, while
production DDR4 DIMM modules have a slightly curved edge connector so not all of the pins are
engaged at a time during module insertion, lowering the insertion force.[14]
References

1. Marc (2011-04-05). "Hynix produces its first DDR4 modules" (https://web.archive.org/web/201204
15182459/http://www.behardware.com/news/11425/hynix-produces-its-first-ddr4-modules.html).
Be hardware. Archived from the original (http://www.behardware.com/news/11425/hynix-produces
-its-first-ddr4-modules.html) on 2012-04-15. Retrieved 2012-04-14.
2. Micron teases working DDR4 RAM (https://www.engadget.com/2012/05/08/micron-teases-workin
g-ddr4-ram-module), Engadget, 2012-05-08, retrieved 2012-05-08
3. "Samsung mass-produces DDR4" (https://arstechnica.com/gadgets/2013/08/samsung-mass-prod
uces-ddr4-which-still-has-nowhere-to-go/). Retrieved 2013-08-31.
4. The DRAM Story (http://www.ieee.org/portal/cms_docs_societies/sscs/PrintEditions/200801.pdf)
(PDF), IEEE, 2008, p. 10, retrieved 2012-01-23
5. "Crucial DDR4 Server Memory Now Available" (http://globenewswire.com/news-release/2014/06/
02/641205/10083787/en/Crucial-DDR4-Server-Memory-Now-Available.html). Globe newswire. 2
June 2014. Retrieved 12 December 2014.
6. btarunr (14 September 2014). "How Intel Plans to Transition Between DDR3 and DDR4 for the
Mainstream" (https://www.techpowerup.com/205231/how-intel-plans-to-transition-between-ddr3-a
nd-ddr4-for-the-mainstream.html). TechPowerUp. Retrieved 28 April 2015.
7. Wang, David (12 March 2013). "Why migrate to DDR4?" (https://www.eetimes.com/document.as
p?doc_id=1280577). Inphi Corp. – via EE Times.
8. Jung, JY (2012-09-11), "How DRAM Advancements are Impacting Server Infrastructure", Intel
Developer Forum 2012 (https://web.archive.org/web/20121127074139/https://intel.activeevents.c
om/sf12/scheduler/catalog.do), Intel, Samsung; Active events, archived from the original (https://i
ntel.activeevents.com/sf12/scheduler/catalog.do) on 2012-11-27, retrieved 2012-09-15
9. "Main Memory: DDR4 & DDR5 SDRAM" (https://www.jedec.org/category/technology-focus-area/
main-memory-ddr3-ddr4-sdram). JEDEC. Retrieved 2012-04-14.
10. "DDR3 SDRAM Standard JESD79-3F, sec. Table 69 – Timing Parameters by Speed Bin" (https://
www.jedec.org/standards-documents/docs/jesd-79-3d). JEDEC. July 2012. Retrieved
2015-07-18.
11. "Vengeance LP Memory — 8GB 1600MHz CL9 DDR3 (CML8GX3M1A1600C9)" (http://www.cors
air.com/en/vengeance-lp-memory-8gb-1600mhz-cl9-ddr3-cml8gx3m1a1600c9). Corsair.
Retrieved 17 July 2015.
12. "DDR4 – Advantages of Migrating from DDR3" (https://www.micron.com/products/dram/ddr3-to-d
dr4), Products, retrieved 2014-08-20.
13. "Corsair unleashes world's fastest DDR4 RAM and 16GB costs more than your gaming PC
(probably) | TechRadar" (https://www.techradar.com/amp/news/corsair-unleashes-worlds-fastest-r
am-and-16gb-costs-more-than-your-gaming-pc-probably). www.techradar.com.
14. "Molex DDR4 DIMM Sockets, Halogen-free" (http://www.arroweurope.com/services/arrow-downlo
ad-center.html?tx_sfdownloadcenter%5Bdownload%5D=86). Arrow Europe. Molex. 2012.
Retrieved 2015-06-22.
15. Sobolev, Vyacheslav (2005-05-31). "JEDEC: Memory standards on the way" (https://web.archive.
org/web/20131203181702/http://de.viatech.com/de/company/events/vtf2005/interview_desi_rhod
en.jsp). Digitimes. Via tech. Archived from the original (http://de.viatech.com/de/company/events/
vtf2005/interview_desi_rhoden.jsp) on 2013-12-03. Retrieved 2011-04-28. "Initial investigations
have already started on memory technology beyond DDR3. JEDEC always has about three
generations of memory in various stages of the standardization process: current generation, next
generation, and future."
16. "DDR3: Frequently asked questions" (https://web.archive.org/web/20110728020853/http://www.ki
ngston.com/channelmarketingcenter/hyperx/literature/MKF_1223-1_DDR3_FAQ.pdf) (PDF).
Kingston Technology. Archived from the original (http://www.kingston.com/channelmarketingcente
r/hyperx/literature/MKF_1223-1_DDR3_FAQ.pdf) (PDF) on 2011-07-28. Retrieved 2011-04-28.
"DDR3 memory launched in June 2007"
17. Valich, Theo (2007-05-02). "DDR3 launch set for May 9th" (http://www.theinquirer.net/inquirer/new
s/1016272/ddr3-launch-set-may-9th). The Inquirer. Retrieved 2011-04-28.
18. Hammerschmidt, Christoph (2007-08-29). "Non-volatile memory is the secret star at JEDEC
meeting" (https://www.eetimes.com/document.asp?doc_id=1248476). EE Times. Retrieved
2011-04-28.
19. "DDR4 – the successor to DDR3 memory" (https://web.archive.org/web/20110526181258/http://w
ww.h-online.com/newsticker/news/item/IDF-DDR4-the-successor-to-DDR3-memory-
736983.html). The "H" (online ed.). 2008-08-21. Archived from the original (http://www.h-online.co
m/newsticker/news/item/IDF-DDR4-the-successor-to-DDR3-memory-736983.html) on 26 May
2011. Retrieved 2011-04-28. "The JEDEC standardisation committee cited similar figures around
one year ago"
20. Graham-Smith, Darien (2008-08-19). "IDF: DDR3 won't catch up with DDR2 during 2009" (https://
web.archive.org/web/20110607101302/http://www.pcpro.co.uk/news/220257/idf-ddr3-wont-catch-
up-with-ddr2-during-2009). PC Pro. Archived from the original (http://www.pcpro.co.uk/news/2202
57/idf-ddr3-wont-catch-up-with-ddr2-during-2009) on 2011-06-07. Retrieved 2011-04-28.
21. Volker, Rißka (2008-08-21). "IDF: DDR4 als Hauptspeicher ab 2012" (https://www.computerbase.
de/2008-08/idf-ddr4-als-hauptspeicher-ab-2012/) [Intel Developer Forum: DDR4 as the main
memory from 2012]. Computerbase (in German). DE. Retrieved 2011-04-28. (English (https://tran
slate.google.com/translate?hl=en&sl=de&tl=en&u=https%3A%2F%2Fwww.computerbase.de%2F
2008-08%2Fidf-ddr4-als-hauptspeicher-ab-2012%2F))
22. Novakovic, Nebojsa (2008-08-19). "Qimonda: DDR3 moving forward" (http://www.theinquirer.net/i
nquirer/news/1012591/qimonda-ddr3-moving-forward). The Inquirer. Retrieved 2011-04-28.
23. Gruener, Wolfgang (February 4, 2009). "Samsung hints to DDR4 with first validated 40 nm
DRAM" (https://web.archive.org/web/20090524133306/http://www.tgdaily.com/content/view/4131
6/139/). TG daily. Archived from the original (http://www.tgdaily.com/content/view/41316/139/) on
May 24, 2009. Retrieved 2009-06-16.
24. Jansen, Ng (January 20, 2009). "DDR3 Will be Cheaper, Faster in 2009" (https://web.archive.org/
web/20090622084614/http://www.dailytech.com/DDR3+Will+be+Cheaper+Faster+in+2009/article
13977.htm). Dailytech. Archived from the original (http://www.dailytech.com/DDR3+Will+be+Chea
per+Faster+in+2009/article13977.htm) on June 22, 2009. Retrieved 2009-06-17.
25. Gervasi, Bill. "Time to rethink DDR4" (http://discobolusdesigns.com/personal/20100721a_gervasi
_rethinking_ddr4.pdf) (PDF). July 2010. Discobolus Designs. Retrieved 2011-04-29.
26. "DDR4-Speicher kommt wohl später als bisher geplant" (http://www.heise.de/newsticker/meldung/
DDR4-Speicher-kommt-wohl-spaeter-als-bisher-geplant-1060545.html) [DDR4 memory is
probably later than previously planned]. Heise (in German). DE. 2010-08-17. Retrieved
2011-04-29. (English (https://translate.google.com/translate?hl=en&sl=de&tl=en&u=http%3A%2
F%2Fwww.heise.de%2Fnewsticker%2Fmeldung%2FDDR4-Speicher-kommt-wohl-spaeter-als-bis
her-geplant-1060545.html))
27. Nilsson, Lars-Göran (2010-08-16). "DDR4 not expected until 2015" (http://semiaccurate.com/201
0/08/16/ddr4-not-expected-until-2015/). Semi accurate. Retrieved 2011-04-29.
28. 'annihilator' (2010-08-18). "DDR4 memory in Works, Will reach 4.266 GHz" (http://wccftech.com/2
010/08/18/ddr4-memory-works-reach-4266ghz/). WCCF tech. Retrieved 2011-04-29.
29. "Samsung Develops Industry's First DDR4 DRAM, Using 30nm Class Technology" (http://www.sa
msung.com/us/business/semiconductor/newsView.do?news_id=1202). Samsung. 2011-04-11.
Retrieved 26 April 2011.
30. Perry, Ryan (2011-01-06). "Samsung Develops the First 30nm DDR4 DRAM" (http://techgage.co
m/news/samsung_develops_the_first_30nm_ddr4_dram/). Tech gage. Retrieved 2011-04-29.
31. "Samsung Develops Industry's First DDR4 DRAM, Using 30 nm Class Technology" (http://www.sa
msung.com/us/business/semiconductor/newsView.do?news_id=1202) (press release). Samsung.
2011-01-04. Retrieved 2011-03-13.
32. Protalinski, Emil (2011-01-04), Samsung develops DDR4 memory, up to 40% more efficient (htt
p://www.techspot.com/news/41818-samsung-develops-ddr4-memory-up-to-40-more-efficient.htm
l), Techspot, retrieved 2012-01-23
33. 後藤, 弘茂[Gotou Shigehiro]. "メモリ4Gbps時代へと向かう次世代メモリDDR4" (https://pc.watc
h.impress.co.jp/docs/column/kaigai/387444.html) [Towards Next-Generation 4Gbps DDR4
Memory]. 2010-08-16 (in Japanese). JP: PC Watch. Retrieved 2011-04-25. (English translation (h
ttps://translate.google.com/translate?js=y&prev=_t&hl=en&ie=UTF-8&layout=1&eotf=1&u=http%3
A%2F%2Fpc.watch.impress.co.jp%2Fdocs%2Fcolumn%2Fkaigai%2F20100816_387444.html&sl
=ja&tl=en))
34. "Diagram: Anticipated DDR4 timeline" (http://pc.watch.impress.co.jp/img/pcw/docs/387/444/html/k
aigai-09.jpg.html). 2010-08-16. JP: PC Watch. Retrieved 2011-04-25.
35. "Samsung Samples Industry's First DDR4 Memory Modules for Servers" (https://web.archive.org/
web/20131104021100/http://www.xbitlabs.com/news/memory/display/20120702221021_Samsung
_Samples_Industry_s_First_DDR4_Memory_Modules_for_Servers.html) (press release).
Samsung. Archived from the original (http://www.xbitlabs.com/news/memory/display/2012070222
1021_Samsung_Samples_Industry_s_First_DDR4_Memory_Modules_for_Servers.html) on
2013-11-04.
36. "Samsung Samples Industry's First 16-Gigabyte Server Modules Based on DDR4 Memory
technology" (http://www.samsung.com/global/business/semiconductor/news-events/press-release
s/detail?newsId=11701) (press release). Samsung.
37. Emily Desjardins (25 September 2012). "JEDEC Announces Publication of DDR4 Standard" (http
s://www.jedec.org/news/pressreleases/jedec-announces-publication-ddr4-standard). JEDEC.
Retrieved 5 April 2019.
38. Shah, Agam (April 12, 2013), "Adoption of DDR4 memory faces delays" (http://www.techhive.co
m/article/2034175/adoption-of-ddr4-memory-facing-delays.html), TechHive, IDG, retrieved
June 30, 2013.
39. Shilov, Anton (2010-08-16), Next-Generation DDR4 Memory to Reach 4.266 GHz (https://web.arc
hive.org/web/20101219085440/http://www.xbitlabs.com/news/memory/display/20100816124343_
Next_Generation_DDR4_Memory_to_Reach_4_266GHz_Report.html), Xbit labs, archived from
the original (http://www.xbitlabs.com/news/memory/display/20100816124343_Next_Generation_
DDR4_Memory_to_Reach_4_266GHz_Report.html) on 2010-12-19, retrieved 2011-01-03
40. "Samsung Begins Production of 10-Nanometer Class DRAM" (https://ddr4.org/samsung-mass-pr
oducing-first-10-nanometer-class-dram/). Official DDR4 Memory Technology News Blog. 2016-
05-21. Retrieved 2016-05-23.
41. "1xnm DRAM Challenges" (http://semiengineering.com/1xnm-dram-challenges/). Semiconductor
Engineering. 2016-02-18. Retrieved 2016-06-28.
42. Shah, Agam (2013-04-12). "Adoption of DDR4 memory faces delays" (https://www.pcworld.com/a
rticle/2034175/adoption-of-ddr4-memory-facing-delays.html). IDG News. Retrieved 22 April 2013.
43. "Haswell-E – Intel's First 8 Core Desktop Processor Exposed" (https://www.techpowerup.com/185
719/haswell-e-intels-first-8-core-desktop-processor-exposed). TechPowerUp.
44. "AMD's Zen processors to feature up to 32 cores, 8-channel DDR4" (http://www.techspot.com/ne
ws/63796-amd-zen-cpu-up-32-cores.html).
45. Looking forward to DDR4 (http://www.pcpro.co.uk/news/220257/idf-ddr3-wont-catchup-with-ddr2-
during-2009.html), UK: PC pro, 2008-08-19, retrieved 2012-01-23
46. IDF: DDR4 – the successor to DDR3 memory (http://www.heise-online.co.uk/news/IDF-DDR4-the
-successor-to-DDR3-memory--/111367) (online ed.), UK: Heise, 2008-08-21, retrieved
2012-01-23
47. Swinburne, Richard (2010-08-26). "DDR4: What we can Expect" (http://www.bit-tech.net/hardwar
e/memory/2010/08/26/ddr4-what-we-can-expect/1). Bit tech. Retrieved 2011-04-28. Page 1 (htt
p://www.bit-tech.net/hardware/memory/2010/08/26/ddr4-what-we-can-expect/2), 2 (http://www.bit-
tech.net/hardware/memory/2010/08/26/ddr4-what-we-can-expect/2), 3 (http://www.bit-tech.net/har
dware/memory/2010/08/26/ddr4-what-we-can-expect/3).
48. "JEDEC Announces Broad Spectrum of 3D-IC Standards Development" (http://www.jedec.org/ne
ws/pressreleases/jedec-announces-broad-spectrum-3d-ic-standards-development) (press
release). JEDEC. 2011-03-17. Retrieved 26 April 2011.
49. Tan, Gutmann; Tan, Reif (2008). Wafer Level 3-D ICs Process Technology (https://books.google.c
om/books?id=fhen8HeoC1AC&pg=PA278). Springer. p. 278 (sections 12.3.4–12.3.5). ISBN 978-
0-38776534-1.
50. JESD79-4 – JEDEC Standard DDR4 SDRAM September 2012 (https://doc.xdevs.com/doc/Stand
ards/DDR4/JESD79-4%20DDR4%20SDRAM.pdf) (PDF), X devs.
51. JEDEC Standard JESD79-4: DDR4 SDRAM (https://www.jedec.org/standards-documents/docs/je
sd79-4a), JEDEC Solid State Technology Association, September 2012, retrieved 2012-10-11.
Username "cypherpunks" and password "cypherpunks" will allow download.
52. JEDEC Standard JESD79-4B: DDR4 SDRAM (https://www.jedec.org/system/files/docs/JESD79-4
B.pdf) (PDF), JEDEC Solid State Technology Association, June 2017, retrieved 2017-08-18.
Username "cypherpunks" and password "cypherpunks" will allow download.
53. Lynch, Steven (19 June 2017). "G.Skill Brought Its Blazing Fast DDR4-4800 To Computex" (http
s://www.tomshardware.com/news/gskill-ddr4-4800-memory-computex,34825.html). Tom's
Hardware.
54. "Want the latest scoop on DDR4 DRAM? Here are some technical answers from the Micron team
of interest to IC, system, and pcb designers" (https://web.archive.org/web/20131202235148/http://
denalimemoryreport.com/2012/07/26/want-the-latest-scoop-on-ddr4-dram-here-are-some-technic
al-answers-from-the-micron-team-of-interest-to-ic-system-and-pcb-designers/). Denali Memory
Report, a memory market reporting site. 2012-07-26. Archived from the original (http://denalimem
oryreport.com/2012/07/26/want-the-latest-scoop-on-ddr4-dram-here-are-some-technical-answers
-from-the-micron-team-of-interest-to-ic-system-and-pcb-designers/) on 2013-12-02. Retrieved
22 April 2013.
55. MO-309E (http://www.jedec.org/sites/default/files/docs/MO-309E.pdf) (PDF) (whitepaper),
JEDEC, retrieved Aug 20, 2014.
56. "DDR4 SDRAM SO-DIMM (MTA18ASF1G72HZ, 8 GiB) Datasheet" (https://web.archive.org/web/
20141129035318/http://www.micron.com/-/media/documents/products/data%20sheet/modules/so
dimm/ddr4/asf18c1gx72hz.pdf) (PDF). Micron Technology. 2014-09-10. Archived from the original
(http://www.micron.com/-/media/documents/products/data%20sheet/modules/sodimm/ddr4/asf18
c1gx72hz.pdf) (PDF) on 2014-11-29. Retrieved 2014-11-20.
57. "How Intel Plans to Transition Between DDR3 and DDR4 for the Mainstream" (https://www.techpo
werup.com/205231/how-intel-plans-to-transition-between-ddr3-and-ddr4-for-the-mainstream).
Tech Power Up.
Main Memory: DDR3 & DDR4 SDRAM (http://www.jedec.org/category/technology-focus-area/mai
n-memory-ddr3-ddr4-sdram), JEDEC, DDR4 SDRAM STANDARD (JESD79-4) (http://www.jedec.
org/standards-documents/docs/jesd79-4)
DDR4 (https://web.archive.org/web/20141010000932/http://www.corsair.com/~/media/Corsair/do
wnload-files/manuals/dram/DDR4-White-Paper.pdf) (PDF) (white paper), Corsair Components,
archived from the original (http://www.corsair.com/~/media/Corsair/download-files/manuals/dram/
DDR4-White-Paper.pdf) (PDF) on October 10, 2014.
Retrieved from "https://en.wikipedia.org/w/index.php?title=DDR4_SDRAM&oldid=977723265"
This page was last edited on 10 September 2020, at 15:44 (UTC).
Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. By using this
site, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of the Wikimedia
Foundation, Inc., a non-profit organization.
58. Denneman, Frank (2015-02-25). "Memory Deep Dive: DDR4 Memory" (http://frankdenneman.nl/2
015/02/25/memory-deep-dive-ddr4/). frankdenneman.nl. Retrieved 2017-05-14.
59. "Arbeitsspeicher: DDR5 nähert sich langsam der Marktreife" (http://www.golem.de/news/arbeitssp
eicher-ddr5-naehert-sich-langsam-der-marktreife-1608-122737.html). Golem.de.
60. Rißka, Volker. " "DDR is over": HBM3/HBM4 bringt Bandbreite für High-End-Systeme" (https://ww
w.computerbase.de/2018-03/ddr-hbm3-hbm4-ram/). ComputerBase.
61. Bailey, Brian. "Is Wide I/O a game changer?" (http://www.edn.com/electronics-blogs/practical-chip
-design/4374004/Is-Wide-I-O-a-game-changer-). EDN.
62. "JEDEC Publishes Breakthrough Standard for Wide I/O Mobile DRAM" (http://www.jedec.org/new
s/pressreleases/jedec-publishes-breakthrough-standard-wide-io-mobile-dram). Jedec.
63. "Beyond DDR4: The differences between Wide I/O, HBM, and Hybrid Memory Cube" (http://www.
extremetech.com/computing/197720-beyond-ddr4-understand-the-differences-between-wide-io-h
bm-and-hybrid-memory-cube). Extreme Tech. Retrieved 25 January 2015.
64. "Xilinx Ltd – Goodbye DDR, hello serial memory" (http://www.epdtonthenet.net/article/85020/Goo
dbye-DDR-hello-serial-memory.aspx). EPDT on the Net.
65. Schmitz, Tamara (October 27, 2014). "The Rise of Serial Memory and the Future of DDR" (http://
www.xilinx.com/support/documentation/white_papers/wp456-DDR-serial-mem.pdf) (PDF).
Retrieved March 1, 2015.
66. "Bye-Bye DDRn Protocol?" (http://www.semiwiki.com/forum/content/3315-bye-bye-ddrn-protocol.
html). SemiWiki.
67. "DRAM will live on as DDR5 memory is slated to reach computers in 2020" (http://www.pcworld.c
om/article/3109505/components/dram-will-live-on-as-ddr5-memory-is-slated-to-reach-computers-i
n-2020.html).
External links