Cache Memory

V. Saranya
AP/CSE
Sri Vidya College of Engineering and Technology, Virudhunagar
Cache Memories
 The processor is much faster than the main memory.
    As a result, the processor has to spend much of its time waiting while instructions and data
     are being fetched from the main memory.
    This is a major obstacle to achieving good performance.
 The speed of the main memory cannot be increased
   beyond a certain point.
 Cache memory is an architectural arrangement that
   makes the main memory appear faster to the
   processor than it really is.
 Cache memory is based on a property of computer
   programs known as “locality of reference”.
Locality of Reference
 Analysis of programs indicates that many instructions in
  localized areas of a program are executed repeatedly during
  some period of time, while the others are accessed relatively
  less frequently.
   These instructions may be the ones in a loop, nested loop
     or few procedures calling each other repeatedly.
   This is called “locality of reference”.
 Temporal locality of reference:
   Recently executed instruction is likely to be executed
     again very soon.
 Spatial locality of reference:
    Instructions with addresses close to a recently executed instruction
      are likely to be executed soon (see the sketch below).
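Not part of the original slides: a minimal C example of the two kinds of locality. The array name and sizes are arbitrary; the point is that the few loop instructions are fetched over and over (temporal locality) while the data accesses walk through consecutive addresses (spatial locality).

```c
#include <stdio.h>

int main(void) {
    int a[1024];
    long sum = 0;

    /* Consecutive addresses are touched one after another: spatial locality. */
    for (int i = 0; i < 1024; i++)
        a[i] = i;

    /* The same handful of loop instructions executes 1024 times: temporal
       locality.  a[i] and a[i+1] usually sit in the same cache block.        */
    for (int i = 0; i < 1024; i++)
        sum += a[i];

    printf("sum = %ld\n", sum);
    return 0;
}
```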
Cache memories

[Figure: Processor ↔ Cache ↔ Main memory]
• When the processor issues a Read request, a block of words is transferred from the
  main memory to the cache, one word at a time.
• Subsequent references to the data in this block of words are found in the
  cache.
• At any given time, only some blocks in the main memory are held in the
  cache. Which blocks in the main memory are in the cache is determined
  by a “mapping function”.
• When the cache is full, and a block of words needs to be transferred
  from the main memory, some block of words in the cache must be
  replaced. This is determined by a “replacement algorithm”.
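As a rough illustration (not taken from the slides), one way a simulator might represent the cache described above is an array of 128 blocks of 16 words, each carrying a tag, a valid bit, and a dirty bit. All names and fields here are assumptions chosen to match the example used later in the slides.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WORDS_PER_BLOCK  16   /* block size used in the later example  */
#define NUM_CACHE_BLOCKS 128  /* cache size used in the later example  */

/* One cache block: the tag records which main-memory block is held, the valid
   bit says whether the entry holds real data, and the dirty bit marks a block
   modified under the write-back protocol.                                      */
typedef struct {
    uint16_t tag;
    bool     valid;
    bool     dirty;
    uint16_t data[WORDS_PER_BLOCK];
} cache_block_t;

static cache_block_t cache[NUM_CACHE_BLOCKS];

int main(void) {
    /* Entries start out invalid; a mapping function decides which entries a
       memory block may occupy, and a replacement algorithm picks the victim
       when all candidate entries are in use.                                  */
    for (int i = 0; i < NUM_CACHE_BLOCKS; i++)
        cache[i].valid = false;

    printf("cache: %d blocks x %d words = %d words\n",
           NUM_CACHE_BLOCKS, WORDS_PER_BLOCK, NUM_CACHE_BLOCKS * WORDS_PER_BLOCK);
    return 0;
}
```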
Cache hit
• The existence of a cache is transparent to the processor. The processor
  issues Read and Write requests in the same manner.
• If the data is in the cache, it is called a Read or Write hit.
• Read hit:
   The data is obtained from the cache.
• Write hit:
   The cache has a replica (copy) of the contents of the main memory.
   The contents of the cache and the main memory may be updated
     simultaneously. This is the write-through protocol.
   Alternatively, update only the contents of the cache and mark it as
     updated by setting a bit known as the dirty bit or modified bit. The
     contents of the main memory are then updated when this block is
     replaced. This is the write-back or copy-back protocol. (Both
     protocols are sketched below.)
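Not from the slides: a small C sketch of how a write hit could be handled under each protocol, reusing the illustrative cache_block_t layout from the earlier sketch and a simulated main-memory array. All names are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WORDS_PER_BLOCK 16

typedef struct {
    uint16_t tag;
    bool     valid;
    bool     dirty;
    uint16_t data[WORDS_PER_BLOCK];
} cache_block_t;

static uint16_t main_memory[1 << 16];    /* simulated 64K-word main memory */

/* Write hit, write-through: update the cache copy and main memory together. */
static void write_hit_through(cache_block_t *blk, unsigned word,
                              uint16_t addr, uint16_t value) {
    blk->data[word]   = value;
    main_memory[addr] = value;           /* memory stays consistent */
}

/* Write hit, write-back: update only the cache and set the dirty bit; main
   memory is brought up to date later, when this block is replaced.          */
static void write_hit_back(cache_block_t *blk, unsigned word, uint16_t value) {
    blk->data[word] = value;
    blk->dirty = true;                   /* remember the block was modified */
}

int main(void) {
    cache_block_t blk = { .tag = 0, .valid = true };

    write_hit_through(&blk, 3, 0x0013, 42);
    printf("write-through: cache=%u memory=%u dirty=%d\n",
           blk.data[3], main_memory[0x0013], blk.dirty);

    write_hit_back(&blk, 4, 99);
    printf("write-back:    cache=%u memory=%u dirty=%d\n",
           blk.data[4], main_memory[0x0014], blk.dirty);
    return 0;
}
```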
Cache miss
• If the data is not present in the cache, then a Read miss or
  Write miss occurs.
• Read miss:
   The block of words containing the requested word is transferred from
    the main memory.
   After the block is transferred, the desired word is forwarded to the
    processor.
   The desired word may also be forwarded to the processor as soon as it
    arrives, without waiting for the entire block to be transferred. This
    is called load-through or early restart (sketched below).

• Write miss:
   If the write-through protocol is used, then the contents of the main
    memory are updated directly.
   If the write-back protocol is used, the block containing the
    addressed word is first brought into the cache, and the desired word
    is then overwritten with the new information.
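A sketch (not from the slides) of a read miss with load-through: the requested word is returned as soon as it arrives while the rest of the block is filled afterwards. The block layout, the tag computation (which assumes the 5-bit tag of the direct-mapped example given later), and the helper names are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WORDS_PER_BLOCK 16

typedef struct {
    uint16_t tag;
    bool     valid;
    bool     dirty;
    uint16_t data[WORDS_PER_BLOCK];
} cache_block_t;

static uint16_t main_memory[1 << 16];    /* simulated 64K-word main memory */

/* Read miss with load-through (early restart): forward the requested word to
   the processor first, then keep filling the rest of the block.              */
static uint16_t read_miss_load_through(cache_block_t *blk, uint16_t addr) {
    uint16_t block_base = addr & ~(WORDS_PER_BLOCK - 1); /* first word of block */
    unsigned wanted     = addr & (WORDS_PER_BLOCK - 1);  /* word within block   */

    uint16_t result   = main_memory[block_base + wanted]; /* forwarded first    */
    blk->data[wanted] = result;

    for (unsigned w = 0; w < WORDS_PER_BLOCK; w++)        /* fill the remainder */
        if (w != wanted)
            blk->data[w] = main_memory[block_base + w];

    blk->tag   = addr >> 11;   /* high-order tag bits (direct-mapped example)   */
    blk->valid = true;
    blk->dirty = false;
    return result;
}

int main(void) {
    main_memory[0x1234] = 77;
    cache_block_t blk = {0};
    printf("read miss returned %u\n", read_miss_load_through(&blk, 0x1234));
    return 0;
}
```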
Cache Coherence Problem

• A bit called the valid bit is provided for each block.
• If the block contains valid data, then the bit is set to 1; otherwise
  it is 0.
• Valid bits are set to 0 when the power is first turned on.
• When a block is loaded into the cache for the first time, the
  valid bit is set to 1.
• Data transfers between main memory and disk occur
  directly, bypassing the cache.
• When the data on a disk changes, the main memory block is
  also updated.
Cache Coherence Problem
• However, if the data is also resident in the cache, then the valid
  bit is set to 0.
• What happens if the data on the disk and in the main memory
  changes and the write-back protocol is being used?
• In this case, the data in the cache may also have changed, which is
  indicated by the dirty bit.
• The copies of the data in the cache and the main memory are then
  different. This is called the cache coherence problem.
• One option is to force a write-back before the main memory is
  updated from the disk (sketched below).
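A sketch (not from the slides) of the precaution described above: before a disk transfer overwrites a main-memory block, a dirty cached copy is written back and the cache entry is invalidated. The function and field names are hypothetical.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define WORDS_PER_BLOCK 16

typedef struct {
    uint16_t tag;
    bool     valid;
    bool     dirty;
    uint16_t data[WORDS_PER_BLOCK];
} cache_block_t;

static uint16_t main_memory[1 << 16];    /* simulated 64K-word main memory */

/* Hypothetical hook called just before a disk transfer overwrites the main-
   memory block starting at block_base; blk is the cache entry (if any) that
   currently holds a copy of that block.                                      */
static void before_disk_overwrites_block(cache_block_t *blk, uint16_t block_base) {
    if (blk == NULL || !blk->valid)
        return;

    /* Write-back protocol and a modified copy: flush it first, so the disk
       data is not applied on top of stale main-memory contents.             */
    if (blk->dirty) {
        for (unsigned w = 0; w < WORDS_PER_BLOCK; w++)
            main_memory[block_base + w] = blk->data[w];
        blk->dirty = false;
    }

    /* Either way, the cached copy is about to become stale: invalidate it.  */
    blk->valid = false;
}

int main(void) {
    cache_block_t blk = { .valid = true, .dirty = true };
    before_disk_overwrites_block(&blk, 0x0100);
    return blk.valid;   /* 0: the entry was invalidated */
}
```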
Mapping functions
 Mapping functions determine how memory
  blocks are placed in the cache.
 A simple processor example (the arithmetic is checked in the sketch after this list):
    Cache consisting of 128 blocks of 16 words each.
    Total size of cache is 2048 (2K) words.
    Main memory is addressable by a 16-bit address.
    Main memory has 64K words.
    Main memory has 4K blocks of 16 words each.
 Three mapping functions:
  Direct mapping
  Associative mapping
  Set-associative mapping.
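The numbers in the example above follow from one another; the compile-time checks below (C11, purely illustrative) make the arithmetic explicit.

```c
#include <assert.h>

#define ADDRESS_BITS     16    /* main memory is addressed by a 16-bit address */
#define WORDS_PER_BLOCK  16
#define NUM_CACHE_BLOCKS 128

enum {
    CACHE_WORDS   = NUM_CACHE_BLOCKS * WORDS_PER_BLOCK,  /* 128 * 16 = 2048 (2K)   */
    MEMORY_WORDS  = 1 << ADDRESS_BITS,                    /* 2^16     = 65536 (64K) */
    MEMORY_BLOCKS = MEMORY_WORDS / WORDS_PER_BLOCK        /* 64K / 16 = 4096 (4K)   */
};

static_assert(CACHE_WORDS   == 2048,  "cache holds 2K words");
static_assert(MEMORY_WORDS  == 65536, "main memory holds 64K words");
static_assert(MEMORY_BLOCKS == 4096,  "main memory has 4K blocks of 16 words");

int main(void) { return 0; }
```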
Direct mapping
[Figure: Direct mapping. 128 tagged cache blocks; main memory blocks 0–4095.
 Main memory address: Tag (5 bits) | Block (7 bits) | Word (4 bits)]
Direct mapping
• More than one memory block is mapped onto the same position in the
  cache.
• This may lead to contention for cache blocks even if the cache is not full.
• Contention is resolved by allowing the new block to replace the old block,
  leading to a trivial replacement algorithm.
• The memory address is divided into three fields:
   - The low-order 4 bits determine one of the 16 words in a block.
   - When a new block is brought into the cache, the next 7 bits
     determine which cache block this new block is placed in.
   - The high-order 5 bits determine which of the possible 32 blocks is
     currently present in the cache. These are the tag bits.
• Simple to implement but not very flexible (see the sketch below).
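A sketch (not from the slides) of the address split used by this direct-mapped example: a 5-bit tag, a 7-bit cache-block number, and a 4-bit word offset. The helper names are made up.

```c
#include <stdint.h>
#include <stdio.h>

/* Field widths from the direct-mapping example: 16-bit address =
   5-bit tag | 7-bit cache-block number | 4-bit word-within-block.  */
#define WORD_BITS  4
#define BLOCK_BITS 7

static unsigned word_field (uint16_t addr) { return addr & 0xF; }
static unsigned block_field(uint16_t addr) { return (addr >> WORD_BITS) & 0x7F; }
static unsigned tag_field  (uint16_t addr) { return addr >> (WORD_BITS + BLOCK_BITS); }

int main(void) {
    uint16_t addr = 0xABCD;   /* arbitrary 16-bit address */
    printf("addr 0x%04X -> tag %u, cache block %u, word %u\n",
           addr, tag_field(addr), block_field(addr), word_field(addr));
    /* A lookup is a hit if the entry at block_field(addr) is valid and its
       stored tag equals tag_field(addr); otherwise the old block is simply
       replaced by the new one.                                              */
    return 0;
}
```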
Associative mapping
[Figure: Associative mapping. Any main memory block (0–4095) may occupy any of the
 128 tagged cache blocks. Main memory address: Tag (12 bits) | Word (4 bits)]
Associative mapping
• A main memory block can be placed into any cache position.
• The memory address is divided into two fields:
   - The low-order 4 bits identify the word within a block.
   - The high-order 12 bits, the tag bits, identify a memory block
     when it is resident in the cache.
• Flexible, and uses cache space efficiently.
• Replacement algorithms can be used to replace an existing
  block in the cache when the cache is full.
• The cost is higher than for a direct-mapped cache because of the need
  to search all 128 tag patterns to determine whether a given block
  is in the cache (see the sketch below).
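A sketch (not from the slides) of an associative lookup with the 12-bit tag and 4-bit word fields above. Hardware compares all 128 tags in parallel; this illustrative code can only loop over them, and all names are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_CACHE_BLOCKS 128

/* Illustrative entry: only the fields needed for the tag search. */
typedef struct {
    uint16_t tag;      /* high-order 12 bits of the block's memory address */
    bool     valid;
} assoc_entry_t;

static assoc_entry_t cache[NUM_CACHE_BLOCKS];

/* Associative lookup: the 12-bit tag must be compared against every cache
   entry (hardware does this in parallel; software can only loop).           */
static int assoc_lookup(uint16_t addr) {
    uint16_t tag = addr >> 4;                /* 16-bit address minus 4 word bits */
    for (int i = 0; i < NUM_CACHE_BLOCKS; i++)
        if (cache[i].valid && cache[i].tag == tag)
            return i;                        /* hit: block found at entry i */
    return -1;                               /* miss */
}

int main(void) {
    cache[42].tag   = 0x0123;                /* pretend memory block 0x123 is cached */
    cache[42].valid = true;
    printf("lookup(0x1234) -> entry %d\n", assoc_lookup(0x1234));  /* tag 0x123: hit */
    printf("lookup(0x5678) -> entry %d\n", assoc_lookup(0x5678));  /* miss           */
    return 0;
}
```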
Set-Associative mapping
[Figure: Set-associative mapping. 64 sets of two tagged cache blocks each; main memory
 blocks 0–4095. Main memory address: Tag (6 bits) | Set (6 bits) | Word (4 bits)]
• Blocks of the cache are grouped into sets.
• The mapping function allows a block of the main memory to reside
  in any block of a specific set.
• In this example, the cache is divided into 64 sets, with two blocks per set.
• Memory blocks 0, 64, 128, etc. map to set 0, and each can
  occupy either of the two positions in that set.
• The memory address is divided into three fields:
    - The low-order 4 bits identify the word within a block.
    - The next 6-bit field determines the set number.
    - The high-order 6 bits are compared to the tag
      fields of the two blocks in the selected set.
• Set-associative mapping is a combination of direct and associative
  mapping (see the sketch below).
• The number of blocks per set is a design parameter.
   - One extreme is to have all the blocks in one set,
     requiring no set bits (fully associative mapping).
   - The other extreme is to have one block per set, which is
     the same as direct mapping.
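A sketch (not from the slides) of the two-way set-associative example: the 6-bit set field selects one of the 64 sets, and only that set's two tags are compared against the 6-bit tag field. Names and layout are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SETS     64
#define WAYS_PER_SET 2

typedef struct {
    uint16_t tag;     /* high-order 6 bits of the address */
    bool     valid;
} sa_entry_t;

static sa_entry_t cache[NUM_SETS][WAYS_PER_SET];

/* 16-bit address = 6-bit tag | 6-bit set | 4-bit word (per the example). */
static unsigned set_of(uint16_t addr) { return (addr >> 4) & 0x3F; }
static unsigned tag_of(uint16_t addr) { return addr >> 10; }

/* Lookup: only the two blocks of the selected set need their tags compared. */
static int sa_lookup(uint16_t addr) {
    unsigned set = set_of(addr), tag = tag_of(addr);
    for (int way = 0; way < WAYS_PER_SET; way++)
        if (cache[set][way].valid && cache[set][way].tag == tag)
            return way;                      /* hit in this way of the set */
    return -1;                               /* miss: the replacement algorithm
                                                chooses which way to evict   */
}

int main(void) {
    /* Memory blocks 0 and 64 both map to set 0 but can occupy different ways. */
    cache[0][0] = (sa_entry_t){ .tag = 0, .valid = true };   /* memory block 0  */
    cache[0][1] = (sa_entry_t){ .tag = 1, .valid = true };   /* memory block 64 */
    printf("addr 0x0000 -> way %d\n", sa_lookup(0x0000));    /* block 0:  way 0 */
    printf("addr 0x0400 -> way %d\n", sa_lookup(0x0400));    /* block 64: way 1 */
    return 0;
}
```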
