Cache is a small amount of fast memory located close to the CPU that stores frequently accessed instructions and data. It speeds up processing by allowing the CPU to access needed information more quickly than from main memory. Caches exploit the principle of locality of reference: programs tend to access the same data and instructions repeatedly over short periods. There are multiple cache levels, with the L1 cache being the fastest but smallest and the L3 cache being the largest but slowest. Caching improves performance dramatically by fulfilling over 90% of memory requests from the small cache rather than requiring slower access to main memory.
About Cache Memory
Working of cache memory
Levels of cache memory
Mapping techniques for cache memory
1. Direct mapping
2. Fully associative mapping
3. Set associative mapping
Cache memory organization
Cache coherency
Everything in detail
2. What is a Cache?
07/07/12
The cache is a very high-speed, expensive piece of memory used to speed up the memory retrieval process. Due to its higher cost, the CPU comes with a relatively small amount of cache compared with the main memory. Without cache memory, every time the CPU requested data, the request would go to the main memory and the data would then be sent back across the system bus to the CPU. This is a slow process. The idea behind introducing a cache is that this extremely fast memory stores data that is frequently accessed and, if possible, the data around it, to achieve the quickest possible response time for the CPU.
3. Role of Cache in Computers
In early PCs, the various components had one thing in common: they were all really slow. The processor ran at 8 MHz or less and took many clock cycles to get anything done. In fact, on some machines the memory was faster than the processor.
With the advancement of technology, the speed of every component has increased drastically. Processors now run much faster than everything else in the computer, so one of the key goals in modern system design is to ensure that, to whatever extent possible, the processor is not slowed down by the storage devices it works with. Slowdowns mean wasted processor cycles, where the CPU can't do anything because it is sitting and waiting for information it needs.
The best way to keep the processor from having to wait is to make everything it uses as fast as it is. But that would be very expensive. There is a good compromise, however. Instead of making all 64 MB of memory out of this faster, expensive memory, you make a smaller piece, say 256 KB. Then you find a smart algorithm that lets you use this 256 KB in such a way that you get almost as much benefit from it as you would if the whole 64 MB were made from the faster memory. How do you do this? By using this small 256 KB cache to hold the information most recently used by the processor. Computer science shows that, in general, a processor is much more likely to need again information it has recently used than a random piece of information in memory. This is the principle behind caching.
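The idea of a small store that holds the most recently used information can be sketched as a tiny least-recently-used (LRU) cache. This is a minimal software illustration of the policy, not how hardware caches are actually built:

```python
from collections import OrderedDict

class LRUCache:
    """A tiny least-recently-used cache: keeps only the most recently
    accessed items, evicting the oldest when capacity is exceeded."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None                   # miss: caller must go to main memory
        self.store.move_to_end(key)       # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used item

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes the most recently used entry
cache.put("c", 3)      # capacity exceeded: evicts "b", not "a"
assert cache.get("b") is None and cache.get("a") == 1
```

Because the processor is likely to reuse recent data, even this simple "keep what was touched last" policy captures most accesses.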
4. Types of Cache Memory
• Memory Cache: A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM.
• Disk Cache: Disk caching works on the same principle as memory caching, but instead of using high-speed SRAM, a disk cache uses conventional main memory. The most recently accessed data from the disk (as well as adjacent sectors) is stored in a memory buffer. When a program needs to access data from the disk, it first checks the disk cache to see if the data is there. Disk caching can dramatically improve the performance of applications, because accessing a byte of data in RAM can be thousands of times faster than accessing a byte on a hard disk.
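The disk-cache behaviour described above — check a buffer in main memory before going to the slow disk — can be sketched as follows. This is a simplified model; `read_sector_from_disk` is a hypothetical stand-in for real disk I/O:

```python
disk_cache = {}  # sector number -> data held in a main-memory buffer

def read_sector_from_disk(sector):
    """Hypothetical slow disk access, simulated here with a formatted string."""
    return f"data-{sector}"

def read_sector(sector):
    if sector in disk_cache:                  # fast path: hit in the RAM buffer
        return disk_cache[sector]
    data = read_sector_from_disk(sector)      # slow path: go out to the disk
    disk_cache[sector] = data                 # keep it for next time
    # (a real disk cache would also prefetch adjacent sectors)
    return data

read_sector(7)                       # first access: fetched from "disk"
assert 7 in disk_cache               # now resident in the memory buffer
assert read_sector(7) == "data-7"    # second access: served from memory
```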
5. Levels of Cache: Cache memory is categorized in levels based on its closeness and accessibility to the microprocessor. There are three levels of cache.
• Level 1 (L1) Cache: This cache is built into the processor and is made of SRAM (static RAM). Each time the processor requests information from memory, the cache controller on the chip uses special circuitry to first check whether the data is already in the cache. If it is present, the system is spared a time-consuming access to the main memory. In a typical CPU, primary cache ranges in size from 8 to 64 KB, with larger amounts on newer processors. This type of cache memory is very fast because it runs at the speed of the processor, since it is integrated into it.
• Level 2 (L2) Cache: The L2 cache is larger but slower than the L1 cache. It is used to catch recent accesses that are not picked up by the L1 cache and is usually 64 KB to 2 MB in size. An L2 cache is also found on the CPU. When L1 and L2 caches are used together, information missing from the L1 cache can be retrieved quickly from the L2 cache. Like L1 caches, L2 caches are composed of SRAM, but they are much larger. L2 is usually a separate SRAM chip placed between the CPU and the DRAM (main memory).
• Level 3 (L3) Cache: L3 cache memory is an enhanced form of memory present on the motherboard of the computer. It is an extra cache built in between the processor and main memory to speed up processing operations. It retrieves data and instructions much more quickly than the main memory, reducing the gap between request and retrieval. L3 caches used with processors nowadays hold more than 3 MB of storage.
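The search order described above — check L1 first, then L2, then L3, and fall back to main memory only on a miss at every level — can be sketched as a chain of lookups. The sizes and fill policy here are illustrative only:

```python
# Each cache level modelled as a simple dict; faster, smaller levels first.
l1, l2, l3 = {}, {}, {}
main_memory = {addr: addr * 10 for addr in range(100)}  # illustrative contents

def load(addr):
    """Check each cache level in order; on a full miss, fill the caches."""
    for level in (l1, l2, l3):
        if addr in level:
            return level[addr]        # hit: spared the trip to main memory
    value = main_memory[addr]         # missed every level: slow RAM access
    l1[addr] = l2[addr] = l3[addr] = value  # populate caches for next time
    return value

load(42)              # first access misses all three levels
assert 42 in l1       # subsequent accesses will hit in L1
assert load(42) == 420
```

Real hierarchies differ in fill and eviction policy (e.g. which levels a line is written into), but the lookup order is the essential idea.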
7. Principle behind Cache Memory
Cache is a really amazing technology. A 512 KB level 2 cache, caching 64 MB of system memory, can supply the information that the processor requests 90-95% of the time. The level 2 cache is less than 1% of the size of the memory it is caching, yet it registers a hit on over 90% of requests. That's pretty efficient, and it is the reason why caching is so important.
The reason this happens is a computer science principle called locality of reference. It states that even within very large programs with several megabytes of instructions, only small portions of the code generally get used at once. Programs tend to spend long periods working in one small area of the code, often performing the same work many times over with slightly different data, and then move to another area. This occurs because of "loops", which are what programs use to do work many times in rapid succession.
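The payoff of a 90-95% hit rate can be made concrete with the standard effective-access-time calculation. The 2 ns and 60 ns latencies below are illustrative values, not figures from the slide:

```python
def effective_access_time(hit_ratio, cache_ns, memory_ns):
    """Average memory access time for a given cache hit ratio."""
    return hit_ratio * cache_ns + (1 - hit_ratio) * memory_ns

# Illustrative latencies: 2 ns for the cache, 60 ns for main memory.
fast = effective_access_time(0.95, 2, 60)   # roughly 4.9 ns on average
slow = effective_access_time(0.0, 2, 60)    # 60 ns: every request goes to RAM
print(f"with 95% hits: {fast:.1f} ns, with no cache: {slow:.1f} ns")
```

With these numbers, a cache covering only 95% of requests cuts the average access time by more than a factor of ten, which is why such a small cache delivers most of the benefit of making all the memory fast.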
8. Locality of Reference
Let's take a look at the following pseudo-code to see how locality of reference works:

Output to screen "Enter a number between 1 and 100"
Read input from user
Put value from user in variable X
Put value 100 in variable Y
Put value 1 in variable Z
Loop Y number of times
    Divide Z by X
    If the remainder of the division = 0 then output "Z is a multiple of X"
    Add 1 to Z
Return to loop
End

This small program asks the user to enter a number between 1 and 100. It reads the value entered by the user. Then, the program divides every number between 1 and 100 by the number entered by the user and checks whether the remainder is 0. If so, the program outputs "Z is a multiple of X", for every number between 1 and 100. Then the program ends.
Now it is easy to see that of the 11 lines of this program, the loop part (lines 7 to 9) is executed 100 times, while all the other lines are executed only once. Lines 7 to 9 will run significantly faster because of caching. This program is very small and can easily fit entirely in the smallest of L1 caches, but even if the program were huge, the result would remain the same. When you program, a lot of the action takes place inside loops. This approximately 95%-to-5% ratio is what we call locality of reference, and it is why a cache works so efficiently. It is also why such a small cache can efficiently cache such a large memory system. You can see why it is not worth building a computer with the fastest memory everywhere: we can deliver 95 percent of the effectiveness for a fraction of the cost.
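The pseudo-code above maps directly onto a short runnable program. This is a sketch in Python, with the user's input hard-coded to 7 so it runs non-interactively:

```python
x = 7                       # value the user would enter (between 1 and 100)
z = 1
multiples = []
for _ in range(100):        # loop Y = 100 times
    if z % x == 0:          # remainder of Z divided by X is 0
        multiples.append(z)  # "Z is a multiple of X"
    z += 1                  # add 1 to Z
print(multiples)            # the multiples of 7 between 1 and 100
```

The loop body is the hot spot: it runs 100 times while every other line runs once, so it is the part of the program the cache keeps resident.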
9. Importance of Cache
Cache is responsible for a great deal of the system performance improvement in today's PCs. The cache is a buffer of sorts between the very fast processor and the relatively slow memory that serves it. The presence of the cache allows the processor to do its work while waiting for memory far less often than it otherwise would. Without a cache the computer would be very slow and all our work would be delayed. So the cache is a very important part of our computer system.