Yogesh S. Watile, A. S. Khobragade / International Journal of Engineering Research and Applications (IJERA)
ISSN: 2248-9622, www.ijera.com, Vol. 3, Issue 3, May-Jun 2013, pp. 283-286

FPGA Implementation of Cache Memory

Yogesh S. Watile*, A. S. Khobragade**
*(Student, M.Tech, Department of Electronics (VLSI), Priyadarshini College of Engineering, Nagpur, India)
**(Professor, Department of Electronics (VLSI), Priyadarshini College of Engineering, Nagpur, India)

Abstract
We describe a cache memory design suitable for use in an FPGA-based cache controller and processor. A cache is an on-chip memory element used to store data; it serves as a buffer between a CPU and its main memory and synchronizes the data transfer rate between the two. Because the cache is closer to the microprocessor, it is faster than RAM and main memory. The advantage of storing data in a cache, compared to RAM, is faster retrieval; the disadvantage is on-chip energy consumption. This work proposes an efficient cache memory, implemented on an FPGA, for detecting the miss rate while keeping power consumption low. We believe that our implementation achieves low complexity and low energy consumption in terms of FPGA resource usage.

Keywords - cache tag memory, cache controller, counter, cache tag comparator

I. INTRODUCTION
Field-programmable gate arrays (FPGAs) have recently been garnering attention for their successful use in computing applications. Indeed, recent work has shown that implementing computations in FPGA hardware can bring orders-of-magnitude improvements in energy efficiency and throughput compared with realizing the same computations in software on a conventional processor.
While a variety of memory architectures are possible in processor/accelerator systems, a commonly used approach is one in which data shared between the processor and accelerators resides in a shared memory hierarchy comprising a cache and main memory. The advantage of such a model is its simplicity, since cache coherency mechanisms are not required; we use the shared memory model as the basis of our initial investigation, with our results being applicable to future multi-cache scenarios [1]. In our work, data shared among the processor and parallel accelerators is accessed through a shared L1 cache, implemented using on-FPGA memory.

A cache is an on-chip memory element used to store data, and a cache controller is used for tracking the induced miss rate in the cache. When data requested by the microprocessor is present in the cache, the event is called a "cache hit". This paper deals with the design of an efficient cache memory for detecting the miss rate with low power consumption; in future work this cache memory may be used to design an FPGA-based cache controller.

II. SYSTEM ARCHITECTURE OVERVIEW
The cache controller communicates between the microprocessor and the cache memory to carry out memory-related operations. The functionality of the design is explained below.

Fig 1 - system block diagram

The cache controller receives the address that the microprocessor wants to access and looks for it in the L1 cache. If the address is present in L1, the data at that location is provided to the microprocessor via the data bus. If the address is not found in L1, a cache miss occurs and the controller looks for the same address in the L2 cache. If the address is present in L2, the data at that location is provided to the microprocessor, and the same data replaces an entry in L1.
If the address is not found in the L2 cache, a cache miss occurs.

In this work we design a single cache memory for detecting the miss rate with low power consumption. This work will later be used in the design of FPGA-based 2-way, 4-way, and 8-way set-associative cache memories, and in a cache controller for tracking the induced miss rate.
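The L1/L2 lookup flow described above can be sketched as a small software model (a hypothetical Python analogue of the controller's behaviour, not the VHDL design itself; the dictionaries standing in for the caches and the `stats` counters are assumptions of this sketch):

```python
# Illustrative model of the two-level lookup: serve from L1 on a hit,
# fall back to L2, then to main memory, replacing the entry in L1.
def lookup(address, l1, l2, main_memory, stats):
    """Return the data for `address`, updating L1 on a miss."""
    if address in l1:                       # L1 hit: serve via the data bus
        stats["l1_hits"] += 1
        return l1[address]
    stats["l1_misses"] += 1                 # L1 miss: look in L2
    if address in l2:
        stats["l2_hits"] += 1
        data = l2[address]
    else:                                   # L2 miss: fetch from main memory
        stats["l2_misses"] += 1
        data = main_memory[address]
        l2[address] = data
    l1[address] = data                      # the same data replaces an L1 entry
    return data
```

A miss-rate estimate then follows directly from the counters, e.g. `stats["l1_misses"] / (stats["l1_hits"] + stats["l1_misses"])`.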
III. DESIGNED WORK
In our designed work, we designed the L1 cache memory using an FPGA. With reference to this design, a set-associative cache memory can be designed in future, as set-associative caches strike a good balance between lower miss rate and cost [5]. The cache memory consists of a cache tag memory, cache data memory, cache tag comparator, and counter. Since our work deals with the detection of cache misses, we designed the cache tag memory, the cache tag comparator, and a 10-bit counter; the cache data memory is not needed for detecting misses, because addresses are stored in the tag memory while data is stored in the data memory. The address requested by the microprocessor is compared with the main-memory addresses stored in the tag memory.

The cache tag memory stores addresses of data in main memory. It is modelled using the behavioural style of modelling in VHDL (for example, a 22-bit cache tag memory).

Figure 2: cache tag memory

Figure 2 shows the diagram of the 22-bit cache tag memory. When 'we' = 1 it performs a write operation, and when 'we' = 0 it performs a read operation. The data stored in the tag memory are addresses of main memory. A 10-bit counter generates the 10-bit address for the write and read operations of the cache tag memory; it is also modelled using the behavioural style of modelling in VHDL.

Figure 3: 10-bit counter

Figure 3 shows the diagram of the 10-bit counter. On each clock pulse the counter is incremented by one, providing 10-bit addresses to the cache tag memory during write and read operations.
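The behaviour of the tag memory and the 10-bit counter can be sketched in software (a Python stand-in for the VHDL entities; the class names and the 1024-entry depth implied by a 10-bit address are assumptions of this sketch):

```python
# Behavioural sketch: a tag memory addressed by a 10-bit counter,
# storing 22-bit tags (addresses of data in main memory).
class TagMemory:
    def __init__(self, depth=1024, tag_bits=22):
        self.tag_bits = tag_bits
        self.mem = [0] * depth              # one tag per line

    def access(self, we, addr, tag_in=0):
        """we=1 writes tag_in at addr; we=0 reads the tag at addr."""
        if we:
            self.mem[addr] = tag_in & ((1 << self.tag_bits) - 1)
            return None
        return self.mem[addr]

class Counter10:
    """10-bit counter: increments on each clock pulse, wraps at 2**10."""
    def __init__(self):
        self.value = 0

    def tick(self):
        self.value = (self.value + 1) % (1 << 10)
        return self.value
```

In the hardware, the counter's output drives the tag memory's address port during both writing and reading, which the model mirrors by passing `Counter10.value` as `addr`.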
The 22-bit cache tag comparator is used to compare the address requested by the microprocessor with the address stored in the tag memory. It is modelled using the behavioural style of modelling in VHDL.

Figure 4: 22-bit cache tag comparator

Figure 4 shows the diagram of the 22-bit cache tag comparator, which compares the 22-bit address requested by the microprocessor with the 22-bit address stored in the tag memory. The address requested by the microprocessor is 32 bits wide, of which 22 bits are compared with the tag address, 8 bits are used as the set address, and 2 bits are used as the word-offset address. To design a set-associative cache memory, a number of sets of such cache memories must be built, which will be our future work.

Figure 5: physical memory address

The implementation of the cache memory requires the cache tag memory, the 10-bit counter, and the cache tag comparator, as shown in Figure 6.

IV. SIMULATION RESULTS
Figure 7 shows the simulation results of the cache memory. The cache memory is written in VHDL; the ModelSim platform is used for simulation, and the Xilinx platform is used for synthesis. As shown in the simulation results, if the address requested by the microprocessor matches an address stored in the cache tag memory, a cache hit occurs; otherwise a cache miss occurs.
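The 32-bit address split (22-bit tag, 8-bit set, 2-bit word offset; 22 + 8 + 2 = 32) and the tag comparison can be sketched as follows. The widths come from the text; placing the tag in the high-order bits is an assumption of this sketch:

```python
# Split a 32-bit physical address into tag / set / word-offset fields
# using the widths given in the text; compare tags for hit/miss.
TAG_BITS, SET_BITS, OFFSET_BITS = 22, 8, 2

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    set_index = (addr >> OFFSET_BITS) & ((1 << SET_BITS) - 1)
    tag = (addr >> (OFFSET_BITS + SET_BITS)) & ((1 << TAG_BITS) - 1)
    return tag, set_index, offset

def tag_compare(requested_tag, stored_tag):
    """Model of the 22-bit comparator: 1 on a cache hit, 0 on a miss."""
    return 1 if requested_tag == stored_tag else 0
```

A set-associative version would apply `tag_compare` once per way within the set selected by `set_index`, which matches the paper's plan of replicating the cache memory per set.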
Figure 6: cache memory

Figure 7: simulation results

V. CONCLUSION
In this paper, we have presented the design of a cache memory on an FPGA for detecting cache misses. Such an approach would be of great utility to many modern embedded applications, for which both high performance and low power are of great importance. In future work this cache memory may be used to design an FPGA-based set-associative cache memory and a cache controller for tracking the induced miss rate.

REFERENCES
[1] Jongsok Choi, Kevin Nam, Andrew Canis, Jason Anderson, Stephen Brown, and Tomasz Czajkowski, "Impact of Cache Architecture and Interface on Performance and Area of FPGA-Based Processor/Parallel-Accelerator Systems", ECE Department, University of Toronto, Toronto, ON, Canada; Altera Toronto Technology Centre, Toronto, ON, Canada.
[2] Maciej Kurek, Ioannis Ilkos, Wayne Luk, "Customizable Security-Aware Cache for FPGA-Based Soft Processors", Department of Computing, Imperial College London.
[3] Nawaf Almoosa, Yorai Wardi, and Sudhakar Yalamanchili, "Controller Design for Tracking Induced Miss-Rates in Cache Memories", 2010 8th IEEE International Conference on Control and Automation, Xiamen, China, June 9-11, 2010.
[4] John P. Hayes, Computer Architecture and Organisation, McGraw-Hill.
[5] John L. Hennessy and David A. Patterson, Computer Architecture, Fourth Edition.
[6] Vipin S. Bhure, Praveen R. Chakole, "Design of Cache Controller for Multi-core Processor System", International Journal of Electronics and Computer Science Engineering, ISSN: 2277-1956.