• Definition
• Working
• Levels
• Organization
• Application
INTRODUCTION
• Direct Mapping
• Associative Mapping
• Set-associative Mapping
MAPPING TECHNIQUES
CACHE COHERENCY
• Spatial Locality of reference
• Temporal Locality of reference
LOCALITY OF REFERENCE
CACHE PERFORMANCE
Cache memory is a small, volatile computer memory that
provides high-speed data access to a processor and stores
frequently used programs, applications, and data.
It retains data only while the computer is powered on.
Cache memory is used to reduce the average time to
access data from the main memory.
The cache is a smaller, faster memory that stores copies
of data from frequently used main memory locations.
• A Level 1 cache (L1 cache) is a memory cache built directly into the
microprocessor and used to store the microprocessor's recently accessed
information; for this reason it is also called the primary cache.
• It is also referred to as the internal cache or system cache.
• The L1 cache is the fastest cache memory, since it is built into the chip
with a zero wait-state interface, which also makes it the most expensive
of the CPU caches.
• It stores data that the processor accessed recently and critical files that
need to be executed immediately, and it is the first cache to be accessed
when the processor executes an instruction.
• In more recent microprocessors, the L1 cache is divided equally into two:
one cache for program data and another cache for instructions.
• It is implemented with static random access memory (SRAM), which comes
in different sizes depending on the grade of the processor.
• A Level 2 cache (L2 cache) is a CPU cache memory located outside and
separate from the microprocessor chip core, although it is found in the
same processor chip package. Earlier L2 cache designs placed it on the
motherboard, which made it quite slow.
• Including an L2 cache is very common in modern CPU designs. Although it
may not be as fast as the L1 cache, being outside the core means its
capacity can be increased, and it is still faster than main memory.
• A Level 2 cache is also called the secondary cache or an external cache.
• The Level 2 cache serves as a bridge across the processor and memory
performance gap.
• Its main goal is to supply stored information to the processor without
interruptions, delays, or wait-states.
• Modern microprocessors sometimes include a feature called data
pre-fetching, and the L2 cache boosts this feature by buffering the
program instructions and data that the processor requests from memory,
serving as a closer waiting area than the RAM.
• A Level 3 (L3) cache is a specialized cache used by the CPU that is
usually built onto the motherboard and, in certain special processors,
within the CPU module itself.
• It works together with the L1 and L2 caches to improve computer
performance by preventing bottlenecks caused by the fetch-and-execute
cycle taking too long.
• The L3 cache is usually placed between the main memory (RAM) and the
L1 and L2 caches of the processor module.
• This serves as another bridge that parks information such as processor
commands and frequently used data, preventing bottlenecks that would
otherwise result from fetching these data from main memory.
• Usually, the cache memory can store a reasonable number of blocks at
any given time, but this number is small compared to the total number
of blocks in main memory.
• The correspondence between the main memory blocks and those in the
cache is specified by a mapping function.
• A primary cache is always located on the processor chip. This cache is
small, and its access time is comparable to that of processor registers.
• A secondary cache is placed between the primary cache and the rest of
the memory. It is referred to as the Level 2 (L2) cache. Often, the
Level 2 cache is also housed on the processor chip.
The transfer of data
from the main memory to
the cache memory is
referred to as cache
memory mapping.
• The simplest technique, known as direct mapping, maps each block of main memory into only one possible
cache line.
• In direct mapping, each memory block is assigned to one specific line in the cache.
• If a line is already occupied by a memory block when a new block needs to be loaded, the old block is
discarded.
• An address is split into two parts: an index field and a tag field.
• The tag field is stored in the cache along with the data, while the index field selects the cache line.
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of lines in cache = m = 2^r
Size of tag = (s - r) bits
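The field widths above can be turned into a small address decoder. This is an illustrative sketch only: the values of s, w, and r are assumptions chosen for the example, not taken from the slides.

```python
# Hypothetical sketch: decode an (s + w)-bit address into the direct-mapped
# fields described above. Field widths are example values: w = 2 word bits,
# r = 10 line bits, and a tag of (s - r) = 6 bits with s = 16.
s, w, r = 16, 2, 10

def decode_direct(addr):
    word = addr & ((1 << w) - 1)         # lowest w bits: word within the block
    line = (addr >> w) & ((1 << r) - 1)  # next r bits: which cache line the block maps to
    tag = addr >> (w + r)                # remaining (s - r) bits: stored tag for comparison
    return tag, line, word

# An 18-bit alternating-bit address, split into its three fields.
tag, line, word = decode_direct(0b101010101010101010)
```

A lookup would then compare `tag` against the tag stored in cache line `line`; a match is a hit, a mismatch means the old block in that line is discarded.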
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of lines in cache = undetermined
Size of tag = s bits
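In associative mapping the entire s-bit field acts as the tag and is compared against every line at once. A minimal sketch, assuming a dictionary as a stand-in for the hardware's parallel tag comparison (class and method names are illustrative):

```python
# Minimal sketch of fully associative mapping: any block can occupy any
# line, so a lookup compares the full s-bit tag against all stored tags.
w = 2  # block/line size = 2**w words (assumed value)

class FullyAssociativeCache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = {}  # tag -> block data; dict lookup models parallel tag compare

    def lookup(self, addr):
        tag = addr >> w          # the whole remaining address is the tag
        return tag in self.lines

    def fill(self, addr, block):
        tag = addr >> w
        if len(self.lines) >= self.num_lines:
            # Evict the oldest entry (a simple FIFO stand-in for a real
            # replacement policy, chosen only for illustration).
            self.lines.pop(next(iter(self.lines)))
        self.lines[tag] = block
```

Note that two addresses inside the same block (e.g. `0b10100` and `0b10111` with w = 2) share a tag, so filling one makes the other a hit.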
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of blocks in main memory = 2^s
Number of lines in set = k
Number of sets = v = 2^d
Size of tag = (s - d) bits
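Set-associative mapping decodes an address the same way as direct mapping, except the middle d bits select a set of k lines rather than a single line. A sketch with assumed field widths (the values of w and d are examples, not from the slides):

```python
# Illustrative decomposition for k-way set-associative mapping:
# w word bits, d set-index bits, and a tag of the remaining (s - d) bits.
w, d = 2, 8  # example widths: 2**d = 256 sets, 2**w = 4 words per block

def decode_set_assoc(addr):
    word = addr & ((1 << w) - 1)             # word within the block
    set_idx = (addr >> w) & ((1 << d) - 1)   # which set the block maps to
    tag = addr >> (w + d)                    # compared against all k lines in that set
    return tag, set_idx, word
```

On a lookup, the tag is compared against the k tags stored in set `set_idx`; a match in any of those k lines is a hit.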
• Locality of reference refers to the phenomenon in which
a computer program tends to access the same set of memory
locations over a particular period of time (temporal locality).
• It also refers to the tendency of a program to access
instructions whose addresses are near one another
(spatial locality).
• The property of locality of reference is mainly exhibited
by loops and subroutine calls in a program.
• In the case of loops, the central
processing unit repeatedly refers
to the set of instructions that
constitute the loop.
• In the case of subroutine calls,
the same set of instructions is
fetched from memory every time the
subroutine is invoked.
• References to data items also get
localized, which means the same
data item is referenced again and
again.
• In the figure, the CPU wants to read or fetch data or an
instruction. It first accesses the cache memory, since the
cache is near the CPU and provides very fast access. If the
required data or instruction is found there, it is fetched;
this situation is known as a cache hit. If the required data
or instruction is not found in the cache memory, the
situation is known as a cache miss. The main memory is then
searched for the required data or instruction, and if it is
found, one of two things can happen:
• The first way is for the CPU to fetch the required data or
instruction, use it, and stop there. But when the same data
or instruction is required again, the CPU has to access the
same main memory location again, and main memory is the
slowest to access.
• The second way is to also store the data or instruction in
the cache memory, so that if it is needed again in the near
future it can be fetched much faster.
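The read flow above can be sketched in miniature. This is an illustrative model only: the dictionaries stand in for the cache and main memory, and the addresses and values are made up.

```python
# Minimal sketch of the read flow described above: on a miss the value is
# fetched from "main memory" and a copy is kept in the cache (the "second
# way"), so a repeat access becomes a hit.
main_memory = {addr: addr * 10 for addr in range(16)}  # fake memory contents
cache = {}

def read(addr):
    if addr in cache:              # cache hit: fast path
        return cache[addr], "hit"
    value = main_memory[addr]      # cache miss: go to slower main memory
    cache[addr] = value            # keep a copy for future accesses
    return value, "miss"
```

Reading the same address twice shows the effect: the first access misses and fills the cache, the second hits.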
• The performance of the cache is measured in terms of the hit
ratio. When the CPU refers to memory and finds the data or
instruction in the cache memory, it is known as a cache hit.
• If the desired data or instruction is not found in the cache
memory and the CPU refers to the main memory to find it, it is
known as a cache miss.
• Hits + Misses = Total CPU references
• Hit ratio (h) = Hits / (Hits + Misses)
• The memory system consists of two levels: cache and main
memory. If Tc is the time to access the cache memory and Tm is
the time to access the main memory, then we can write:
• Tavg = average time to access memory
• Tavg = h*Tc + (1 - h)*(Tm + Tc)
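A quick worked example of the formula above, with assumed timings (10 ns cache, 100 ns main memory, 90% hit ratio; these numbers are for illustration only):

```python
# Worked example of Tavg = h*Tc + (1 - h)*(Tm + Tc): a hit costs only the
# cache access, while a miss pays the cache probe plus the main-memory access.
def avg_access_time(h, tc, tm):
    return h * tc + (1 - h) * (tm + tc)

t = avg_access_time(0.9, 10, 100)  # 0.9*10 + 0.1*(100 + 10) = 20.0 ns
```

Note how strongly the hit ratio dominates: even though main memory is 10x slower than the cache, a 90% hit ratio keeps the average access time at only twice the cache access time.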
https://www.techopedia.com
https://www.geeksforgeeks.org/cache-memory
https://en.wikipedia.org/wiki/Cache_coherence