• Definition
• Working
• Levels
• Organization
• Applications
INTRODUCTION
• Direct Mapping
• Associative Mapping
• Set-associative Mapping
MAPPING TECHNIQUES
CACHE COHERENCY
• Spatial Locality of reference
• Temporal Locality of reference
LOCALITY OF REFERENCE
CACHE PERFORMANCE
Cache memory is a small-sized type of volatile computer
memory that provides high-speed data access to a
processor and stores frequently used computer programs,
applications and data.
It stores and retains data only while the computer is
powered on.
Cache memory is used to reduce the average time to
access data from the Main memory.
The cache is a smaller and faster memory which stores
copies of the data from frequently used main memory
locations.
• A Level 1 cache (L1 cache) is a memory cache built directly into the microprocessor. It stores the microprocessor's recently accessed information, so it is also called the primary cache.
• It is also referred to as the internal cache or system cache.
• The L1 cache is the fastest cache memory, since it is built into the chip with a zero wait-state interface; this also makes it the most expensive of the CPU caches.
• It stores data the processor accessed recently and instructions that must execute immediately, and it is the first cache to be checked when the processor performs an instruction.
• In more recent microprocessors, the L1 cache is divided equally into two: one cache for program data and another for instructions.
• It is implemented with static random access memory (SRAM) and comes in different sizes depending on the grade of the processor.
• A Level 2 cache (L2 cache) is a CPU cache located outside the microprocessor core, although it is found in the same processor chip package. Earlier L2 designs placed the cache on the motherboard, which made it quite slow.
• Including an L2 cache is very common in modern CPU designs. It is not as fast as the L1 cache, but because it sits outside the core its capacity can be larger, and it is still much faster than main memory.
• A Level 2 cache is also called the secondary cache or an external cache.
• The Level 2 cache serves as a bridge across the processor-memory performance gap.
• Its main goal is to provide the necessary stored information to the processor without interruptions, delays, or wait-states.
• Modern microprocessors sometimes include a feature called data pre-fetching; the L2 cache supports it by buffering the program instructions and data the processor requests from memory, serving as a waiting area closer than the RAM.
• A Level 3 (L3) cache is a specialized cache used by the CPU. It was traditionally built onto the motherboard, and in certain processors it is found within the CPU module itself.
• It works together with the L1 and L2 caches to improve computer performance by preventing bottlenecks caused by the fetch-and-execute cycle taking too long.
• The L3 cache sits between the main memory (RAM) and the L1 and L2 caches of the processor module.
• It serves as another staging area for processor commands and frequently used data, preventing bottlenecks that would result from fetching these data from main memory.
• The cache memory can usually store a reasonable number of blocks at any given time, but this number is small compared to the total number of blocks in main memory.
• The correspondence between the main memory blocks and those in the cache is specified by a mapping function.
• A primary cache is always located on the processor chip. This cache is small, and its access time is comparable to that of processor registers.
• A secondary cache is placed between the primary cache and the rest of the memory. It is referred to as the Level 2 (L2) cache. Often, the Level 2 cache is also housed on the processor chip.
The mapping of data from the main
memory to the cache memory is
referred to as cache
memory mapping.
• The simplest technique, known as direct mapping, maps each block of main memory into only one possible cache line.
• In direct mapping, each memory block is assigned to a specific line in the cache.
• If a line is already occupied by a memory block when a new block needs to be loaded, the old block is evicted.
• The address is split into two parts: an index field and a tag field.
• The tag field is stored in the cache along with the block's data; the index field selects the cache line.
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of lines in cache = m = 2^r
Size of tag = (s - r) bits
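The direct-mapping address breakdown above can be sketched in code. This is a minimal illustration, assuming example field widths w = 2, r = 14, and s = 22 (a 24-bit address with 4-byte blocks and 2^14 cache lines); the widths and the address used are chosen only for demonstration.

```python
# Direct mapping: split a (s + w)-bit address into tag, line, and word fields.
W, R = 2, 14                 # word bits and line (index) bits; tag takes s - r bits

def direct_map_fields(address):
    word = address & ((1 << W) - 1)          # lowest w bits: word within the block
    line = (address >> W) & ((1 << R) - 1)   # next r bits: which cache line the block maps to
    tag = address >> (W + R)                 # remaining bits: stored in the cache for comparison
    return tag, line, word

tag, line, word = direct_map_fields(0x16339C)  # an arbitrary example address
```

Each main memory block can land in exactly one line (`line`), which is why two blocks sharing an index evict one another even when the rest of the cache is empty.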
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of lines in cache = undetermined
Size of tag = s bits
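Because the tag is the entire block address, an associative lookup must compare the incoming tag against every stored tag (done in parallel in hardware). A minimal sketch, with illustrative line contents:

```python
W = 2  # word field width; in associative mapping everything above it is the tag

def assoc_fields(address):
    return address >> W, address & ((1 << W) - 1)   # (tag, word)

def assoc_lookup(cache, address):
    """cache maps slot index -> stored tag; a hit requires matching ANY stored tag."""
    tag, _ = assoc_fields(address)
    return any(stored == tag for stored in cache.values())

cache = {0: 0x58CE7, 1: 0x3FFFF}       # two occupied lines (illustrative tags)
hit = assoc_lookup(cache, 0x16339C)    # this address carries tag 0x58CE7
```

A block may occupy any line, so associative mapping has no conflict misses, at the cost of comparing every tag on every access.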
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of blocks in main memory = 2^s
Number of lines in set = k
Number of sets = v = 2^d
Size of tag = (s - d) bits
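The set-associative field arithmetic above can be sketched the same way. This sketch assumes example widths w = 2 and d = 13 (so 2^13 sets); a lookup compares the tag only against the k tags in one set, not the whole cache.

```python
W, D = 2, 13   # word bits and set bits (assumed example values)

def set_assoc_fields(address):
    word = address & ((1 << W) - 1)
    set_index = (address >> W) & ((1 << D) - 1)  # d bits choose the set
    tag = address >> (W + D)                     # (s - d) bits, compared within the set only
    return tag, set_index, word

def set_lookup(sets, address):
    """sets maps set_index -> list of up to k stored tags."""
    tag, set_index, _ = set_assoc_fields(address)
    return tag in sets.get(set_index, [])
```

With k = 1 this degenerates to direct mapping; with one set holding every line it becomes fully associative, which is why set-associative mapping is described as a compromise between the two.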
• Locality of reference refers to the phenomenon in which a computer program tends to access the same set of memory locations over a particular period of time.
• It also refers to the tendency of a program to access instructions whose addresses are near one another.
• The property of locality of reference is mainly exhibited by loops and subroutine calls in a program.
• In the case of loops, the central processing unit repeatedly refers to the set of instructions that constitute the loop.
• In the case of subroutine calls, the same set of instructions is fetched from memory every time the subroutine is invoked.
• References to data items also become localized: the same data item is referenced again and again.
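Both kinds of locality are visible in an ordinary loop. The sketch below tallies the addresses a simple loop over an array would touch; the addresses are illustrative, and the tally is just a count of references, not a cache simulation.

```python
from collections import Counter

def reference_trace():
    """Yield the memory references of a loop walking an array: the loop body shows
    temporal locality (same instruction addresses each iteration) and the array
    walk shows spatial locality (adjacent data addresses)."""
    loop_body = [0x100, 0x104, 0x108]           # instruction addresses of the loop (illustrative)
    data = [0x2000 + 4 * i for i in range(4)]   # four consecutive array elements
    for element in data:
        for instr in loop_body:
            yield instr                         # re-fetched on every iteration (temporal)
        yield element                           # each element once, but neighbors adjacent (spatial)

counts = Counter(reference_trace())             # instruction addresses dominate the tally
```

A cache exploits exactly this pattern: the repeated instruction fetches hit after the first miss, and loading a whole block brings in the neighboring array elements before they are requested.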
• In the figure, the CPU wants to read or fetch data or an instruction. It first accesses the cache memory, since the cache is nearest to it and provides very fast access. If the required data or instruction is found there, it is fetched; this situation is known as a cache hit. If it is not found in the cache memory, the situation is known as a cache miss. Main memory is then searched for the required data or instruction, and if it is found, one of two things happens:
• The first option is for the CPU to fetch the required data or instruction, use it, and discard it. But when the same data or instruction is required again, the CPU has to access the same main memory location, and main memory is the slowest level to access.
• The second option is to also store the data or instruction in the cache memory, so that if it is needed again in the near future it can be fetched much faster.
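The second option (keep a copy so the next access hits) is what real caches do, and it can be sketched in a few lines. A toy read-through cache, with a dictionary standing in for cache storage and a list for main memory:

```python
class SimpleCache:
    """Toy read-through cache: probe the cache first; on a miss, fetch from
    main memory and keep a copy so the next access to the same address hits."""
    def __init__(self, main_memory):
        self.main_memory = main_memory   # stand-in for slow RAM
        self.cache = {}                  # stand-in for fast cache storage
        self.hits = self.misses = 0

    def read(self, address):
        if address in self.cache:        # cache hit: served without touching RAM
            self.hits += 1
        else:                            # cache miss: go to main memory, then cache the value
            self.misses += 1
            self.cache[address] = self.main_memory[address]
        return self.cache[address]

mem = list(range(100))
c = SimpleCache(mem)
c.read(5)   # first access misses and fills the cache
c.read(5)   # second access hits
```

This sketch has unbounded capacity; a real cache of fixed size would also need an eviction policy for the case covered under direct mapping above.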
• The performance of the cache is measured in terms of hit ratio. When the CPU refers to memory and finds the data or instruction in the cache memory, it is known as a cache hit.
• If the desired data or instruction is not found in the cache memory and the CPU refers to the main memory to find it, it is known as a cache miss.
• Hits + Misses = total CPU references
• Hit ratio (h) = Hits / (Hits + Misses)
• The memory system consists of two levels: cache and main memory. If Tc is the time to access the cache memory and Tm is the time to access the main memory, then the average access time is:
• Tavg = average time to access memory
• Tavg = h*Tc + (1 - h)*(Tm + Tc)
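The hit-ratio and average-access-time formulas above can be checked numerically. The timings below (Tc = 10 ns, Tm = 100 ns) and the hit/miss counts are illustrative values, not measurements.

```python
def hit_ratio(hits, misses):
    # h = Hits / (Hits + Misses)
    return hits / (hits + misses)

def avg_access_time(h, tc, tm):
    # Tavg = h*Tc + (1 - h)*(Tm + Tc): a miss pays for the cache probe AND the memory access
    return h * tc + (1 - h) * (tm + tc)

h = hit_ratio(hits=90, misses=10)          # h = 0.9
tavg = avg_access_time(h, tc=10, tm=100)   # 0.9*10 + 0.1*(100 + 10) = 20.0 ns
```

Note how strongly Tavg depends on h: even a 90% hit ratio leaves the average at twice the cache access time, which is why locality of reference matters so much.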
REFERENCES
• https://www.techopedia.com
• https://www.geeksforgeeks.org/cache-memory
• https://en.wikipedia.org/wiki/Cache_coherence