The document discusses various topics related to operating systems including threads vs processes, scheduling policies, and different operating systems like Windows, Mac OS, and Linux. It provides information on CPU scheduling policies like priority scheduling and shortest remaining time first. It also discusses concepts like context switching, threads, and multi-level feedback queue scheduling.
These are my short notes on operating systems, written at the B.Sc. level; they are intended for B.Sc.(H) Computer Science students. The notes use very simple language so that students can understand them easily.
Operating system structures are all described in detail.
System Components
Operating System Services
System Calls
System Programs
System Structure
Virtual Machines
System Design and Implementation
Maximum CPU utilization obtained with multiprogramming
CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait
CPU burst followed by I/O burst
CPU burst distribution is of main concern
2. Topics
• Thread vs Process
• Pre-emptive vs Non-pre-emptive
• Scheduler vs Dispatcher
• CPU Scheduling Policies
3. What is OS?
• Operating System as a Resource Manager
- It manages computer hardware and software resources.
• An Operating System (OS) is an interface between a user and hardware.
Functions of Operating System:
• Memory Management
• Processor Management
• Device Management
• File Management
4. Microsoft Windows
• Microsoft created the Windows operating system in the mid-1980s. Over the years, there have been many different versions of Windows, but the most recent ones are Windows 10 (released in 2015) and Windows 8 (released in 2012).
5. Mac OS X
• Mac OS is a line of operating systems created by Apple. It comes preloaded on all new
Macintosh computers, or Macs.
• Mac OS X users account for less than 10% of global operating systems—much lower than
the percentage of Windows users (more than 80%).
• One reason for this is that Apple computers tend to be more expensive. However, many
people do prefer the look and feel of Mac OS X over Windows.
6. Linux
• Linux is a family of open-source operating systems, which means they can be modified
and distributed by anyone around the world. This is different from proprietary
software like Windows, which can only be modified by the company that owns it. The
advantages of Linux are that it is free, and there are many different distributions—or
versions—you can choose from.
• Linux users account for less than 2% of global operating systems.
• However, most servers run Linux because it's relatively easy to customize.
7. Five Reasons Linux Beats Windows for Servers
Stability
• Linux systems are well known for their ability to run for years without failure; in fact, many
Linux users have never seen a crash.
• That's great for users of every kind, but it's particularly valuable for small and medium-
sized businesses, for which downtime can have disastrous consequences.
Security
• Linux is also innately more secure than Windows. That's due largely to the fact that Linux, which is based on Unix, was designed from the start to be a multiuser operating system. Only the administrator, or root user, has administrative privileges, and fewer users and applications have permission to access the kernel or each other. That keeps everything modular and protected.
• Linux also gets attacked less frequently by viruses and malware, and vulnerabilities tend to be found and fixed more quickly by its legions of developers and users.
8. Hardware
• Windows typically requires frequent hardware upgrades to accommodate its ever-increasing
resource demands whereas Linux is slim, trim, flexible and scalable, and it performs admirably
on just about any computer, regardless of processor or machine architecture.
• Linux can also be easily reconfigured to include only the services needed for your business's
purposes, thus further reducing memory requirements, improving performance and keeping
things even simpler.
Performance:
• While there is some debate about which operating system performs better, in our experience both perform comparably in low-stress conditions; however, UNIX servers under high load (which is what matters) are superior to Windows.
Price:
• Windows costs a significant amount of money; on the other hand, Linux is a free operating system to download, install and operate. Windows hosting ends up being a more expensive platform.
• The Linux kernel, and the GNU utilities and libraries which accompany it in most distributions,
are entirely free and open source. You can download and install GNU/Linux distributions without
purchase. Some companies offer paid support for their Linux distributions, but the underlying
software is still free to download and install. Microsoft Windows usually costs between $99.00
and $199.00 USD for each licensed copy.
10. CPU – I/O Burst Cycle
Process execution consists of a cycle of CPU
execution and I/O wait.
13. Performance metrics for CPU scheduling
• CPU utilization: Percentage of the time that CPU is busy.
• Throughput: Number of processes completed per unit time
• Turnaround time: Time from submission of a process to its complete execution.
• Wait time: Time spent waiting in the ready queue.
• Response time: Time interval from submission of job to first response.
• Fairness: Giving each process a fair share of CPU.
Goal:
• Maximize CPU utilization, Throughput, and fairness.
• Minimize turnaround time, waiting time, and response time.
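As a hedged illustration (the process names and burst times below are invented, not from the slides), here is how these metrics fall out of a simple non-preemptive FIFO schedule where all processes arrive at time 0:

```python
# A minimal sketch: computing response, waiting, and turnaround time for a
# non-preemptive FIFO schedule where every process arrives at time 0.
def fifo_metrics(bursts):
    """bursts: list of (name, burst_time) pairs, in arrival/queue order."""
    t = 0
    metrics = {}
    for name, burst in bursts:
        response = t                      # first gets the CPU at time t
        t += burst                        # runs to completion (non-preemptive)
        turnaround = t                    # arrival was at time 0
        waiting = turnaround - burst      # time spent in the ready queue
        metrics[name] = {"response": response, "waiting": waiting,
                         "turnaround": turnaround}
    return metrics

print(fifo_metrics([("A", 4), ("B", 2), ("C", 3)]))
```

Note how every metric for a later process grows with the bursts of the processes ahead of it, which is exactly why minimizing waiting time matters.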
14. Deterministic modeling example:
• Suppose we have processes A, B, and C,
submitted at time 0
• We want to know the response time, waiting
time, and turnaround time of process A
Gantt chart (visualizing how the processes execute; one time unit per slot):
A B C A B C A C A C → Time
For A (slots 0, 3, 6, 8): response time = 0, waiting time = 5, turnaround time = 9.
15. Deterministic modeling example
• Suppose we have processes A, B, and C,
submitted at time 0
• We want to know the response time, waiting
time, and turnaround time of process B
Gantt chart: A B C A B C A C A C → Time
For B (slots 1, 4): response time = 1, waiting time = 3, turnaround time = 5.
16. Deterministic modeling example
• Suppose we have processes A, B, and C,
submitted at time 0
• We want to know the response time, waiting
time, and turnaround time of process C
Gantt chart: A B C A B C A C A C → Time
For C (slots 2, 5, 7, 9): response time = 2, waiting time = 6, turnaround time = 10.
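The Gantt chart used in these examples ("A B C A B C A C A C", one time unit per slot, all processes submitted at time 0) can be turned into the three metrics mechanically; a small sketch:

```python
# Derive response, waiting, and turnaround time for each process directly
# from a Gantt chart, assuming one time unit per slot and arrival at time 0.
def metrics_from_gantt(slots):
    out = {}
    for name in set(slots):
        first = slots.index(name)                       # first CPU slot
        last = len(slots) - 1 - slots[::-1].index(name) # last CPU slot
        burst = slots.count(name)                       # total CPU time used
        turnaround = last + 1                           # completes at end of last slot
        out[name] = {"response": first,
                     "waiting": turnaround - burst,
                     "turnaround": turnaround}
    return out

chart = "A B C A B C A C A C".split()
print(metrics_from_gantt(chart))
# A: response 0, waiting 5, turnaround 9 -- matching the hand calculation
```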
18. Pre-emptive vs Non-pre-emptive Scheduling

Pre-emptive:
• The process is allocated the processor for a fixed time slot.
• Desirable, as it provides better control: no single process can monopolize the processor for very long.
• Requires a clock interrupt to occur at the end of each time interval, and a proper time-slot length has to be decided.
• On being pre-empted, a process can either exit or re-enter the Ready state.

Non-pre-emptive:
• The process is allocated the processor for as long as it wants; the CPU is given up only when the currently executing process releases it voluntarily. A running process executes till completion and cannot be interrupted.
• Not desirable, since one process can monopolize the processor for a very long time.
• On giving up the CPU, a process can either exit or enter the Wait state.
19. Scheduling Queues
• All processes, when they enter the system, are stored in the job queue.
• Processes in the Ready state are placed in the ready queue.
• Processes waiting for a device to become available are placed in device
queues. There are unique device queues for each I/O device available.
20. Scheduling Policies
• FIFO (first in, first out) NON-PREEMPTIVE
• Round robin PREEMPTIVE
• SJF (shortest job first) NON-PREEMPTIVE
• SRF (shortest remaining time first) PREEMPTIVE
• Priority Scheduling NON-PREEMPTIVE/PREEMPTIVE
• Multilevel feedback queues
What scheduling algorithms does the Linux kernel use?
The Completely Fair Scheduler (CFS).
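As a sketch of one of the policies above, here is non-preemptive SJF for jobs that all arrive at time 0 (the job names and burst times are illustrative):

```python
# A minimal sketch of non-preemptive SJF (shortest job first): with all jobs
# arriving at time 0, the scheduler simply runs them in order of burst length.
def sjf(bursts):
    """bursts: dict of job name -> burst time; returns (order, waiting times)."""
    order = sorted(bursts, key=lambda name: bursts[name])  # shortest first
    t, waiting = 0, {}
    for name in order:
        waiting[name] = t          # time spent in the ready queue so far
        t += bursts[name]          # job runs to completion
    return order, waiting

order, waiting = sjf({"P1": 6, "P2": 8, "P3": 3})
print(order)     # shortest job (P3) goes first
print(waiting)   # P3 waits 0, P1 waits 3, P2 waits 9
```

Running short jobs first minimizes the average waiting time for this arrival pattern, which is why SJF is a useful benchmark even though exact burst lengths are unknown in practice.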
21. Multi-Level Feedback Queue Scheduling
It uses several ready queues and associates a different priority with each queue, so that different scheduling algorithms can be used for the various types of processes.
The algorithm chooses the process with the highest priority from the occupied queues and runs that process either pre-emptively or non-pre-emptively.
If a process uses too much CPU time it will be moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue.
Disadvantage of plain multi-level queue scheduling: starvation of the low-priority queues. Multilevel feedback queue scheduling removes this starvation problem, as it allows a process to move between queues.
23. In implementing a multilevel feedback queue, there are various parameters that we need to
define:
The number of queues
The scheduling algorithm for each queue
The method used to demote processes to lower priority queues
The method used to promote processes to a higher priority queue (presumably by some
form of aging)
The method used to determine which queue a process will enter
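A toy sketch of these parameters in action, assuming two queues, a short quantum at the top level, and demotion on quantum expiry (all parameter values are invented for illustration):

```python
from collections import deque

# A toy two-level feedback queue: level 0 has a short quantum; a process that
# uses its whole quantum is demoted to level 1, which has a longer quantum.
def mlfq(bursts, quanta=(2, 4)):
    """bursts: dict name -> burst time; returns the (name, run_length) trace."""
    queues = [deque(bursts.items()), deque()]   # level 0 (high), level 1 (low)
    remaining = dict(bursts)
    trace = []
    while any(queues):
        level = 0 if queues[0] else 1           # always serve level 0 first
        name, _ = queues[level].popleft()
        run = min(quanta[level], remaining[name])
        remaining[name] -= run
        trace.append((name, run))
        if remaining[name] > 0:                 # used the full quantum: demote
            queues[min(level + 1, 1)].append((name, remaining[name]))
    return trace

print(mlfq({"A": 5, "B": 2}))
# A runs 2 units and is demoted; B finishes in its quantum; A finishes at level 1
```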
24. Priority Scheduling
• Priority scheduling is a method of scheduling processes based on priority.
• It is often possible that a priority scheduling algorithm can make a low-priority
process wait indefinitely i.e. Starvation
• A remedy to starvation is aging, which is a technique used to gradually increase
the priority of those processes that wait for long periods in the system.
• Priority scheduling can be either of the following:
- Pre-emptive
- Non-pre-emptive
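A minimal sketch of the aging remedy described above: each time the scheduler makes a pick, every other ready process's effective priority improves, so a low-priority process cannot wait indefinitely (here lower number = higher priority; the boost rate is an invented parameter):

```python
# A minimal sketch of priority scheduling with aging. Each waiting process's
# effective priority improves by age_boost per tick spent waiting, which
# prevents starvation of low-priority processes.
def pick_with_aging(ready, age_boost=1):
    """ready: dict name -> [base_priority, ticks_waited]; returns the pick."""
    chosen = min(ready,
                 key=lambda n: ready[n][0] - age_boost * ready[n][1])
    for name in ready:
        if name != chosen:
            ready[name][1] += 1     # everyone not chosen waits one more tick
    return chosen

ready = {"low": [10, 0], "hi": [1, 0]}
picks = [pick_with_aging(ready) for _ in range(10)]
print(picks)    # "hi" wins at first, but "low" eventually gets the CPU
```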
25. Disadvantage: Convoy Effect (Short Process Behind Long Process)
The convoy effect is the slowing down of the whole operating system because of a few slow processes: a slow process occupies the CPU while the fast processes queue up behind it, waiting.
27. Advantage: High Throughput
Disadvantage: Starvation
If short processes are continually added to the CPU scheduler, the waiting time of long processes keeps increasing.
28. Shortest Remaining Time First
• Pre-emptive version of SJF.
• Selects for execution the process that has the smallest remaining run time.
• Shortest remaining time first is optimal, and it mostly gives the minimum average waiting time over a given set of processes.
Advantages: high throughput; no starvation.
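A hedged sketch of SRTF: at every time unit the scheduler re-checks which arrived process has the least remaining time (the arrival times and bursts below are illustrative):

```python
# A unit-time simulation of SRTF (pre-emptive SJF): at each tick, run the
# arrived process with the smallest remaining time.
def srtf(jobs):
    """jobs: dict name -> (arrival, burst); returns the per-unit schedule."""
    remaining = {n: b for n, (a, b) in jobs.items()}
    t, schedule = 0, []
    while any(remaining.values()):
        ready = [n for n in remaining
                 if jobs[n][0] <= t and remaining[n] > 0]
        if not ready:
            schedule.append(None)     # CPU idle: nothing has arrived yet
            t += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1       # run the chosen process one time unit
        schedule.append(current)
        t += 1
    return schedule

print(srtf({"A": (0, 4), "B": (1, 2)}))
# B arrives at t=1 with less remaining time than A, so it pre-empts A
```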
29. Context Switching
• When CPU switches from one
process to another, then system
must save the state of old process
and load the saved state for the
new process.
• Context Switch time is pure
overhead as the system does no
useful work while switching.
• It is the process of storing and
restoring the state of a process so
that execution can be resumed
from the same point at a later
time.
30. Thread
• A thread is an LWP (Light-Weight Process) and is the basic unit of CPU utilization.
• A thread shares with its peer threads its code section, data section and other OS resources (such as open files).
• It comprises: a thread ID, a program counter, a register set, and a stack.
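The sharing described above can be seen with Python's threading module: peer threads live in one address space, so they all see the same objects (the worker function and the shared list are illustrative):

```python
import threading

# Peer threads share the process's data: all three workers append to the
# same list object, because they run in one address space.
shared = []

def worker(tag):
    shared.append(tag)      # same data section for every peer thread

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))       # every thread's write landed in the one list
```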
31. Thread vs Process

Thread:
• A thread is often referred to as a lightweight process; a thread is a subset (part) of a process.
• A thread can communicate with other threads (of the same process) directly, e.g. in Java by using methods like wait(), notify(), notifyAll().
• New threads are easily created.
• Threads have control over the other threads of the same process.
• A thread uses the process's address space and shares it with the other threads of that process.
• Minimal overhead while switching.
• Any change in the main thread (cancellation, priority change, etc.) may affect the behaviour of the other threads of the process. Threads are dependent.

Process:
• A process is sometimes referred to as a task; a program in execution is often referred to as a process.
• A process can communicate with other processes only by using inter-process communication.
• The creation of a new process requires duplication of the parent process.
• A process does not have control over sibling processes; it has control over its child processes only.
• Each process has its own address space.
• Significant amount of overhead while switching.
• Any change in the parent process does not affect child processes. Processes are independent.
32. There are two types of threads :
• User Threads
• Kernel Threads
• User threads are implemented above the kernel and without kernel support. These are the threads that application programmers use in their programs.
• Kernel threads are supported within the kernel of the OS itself. All modern OSs support
kernel level threads, allowing the kernel to perform multiple simultaneous tasks and/or to
service multiple kernel system calls simultaneously.
34. Problem With Concurrent Execution
• Concurrent processes (or threads) often need to share data (maintained either in shared
memory or files) and resources.
• If there is no controlled access to shared data, some processes will obtain an inconsistent
view of this data.
• The action performed by concurrent processes will then depend on the order in which
their execution is interleaved.
36. Race Condition
• Situations like this, where processes access the same resource concurrently and the outcome of execution depends on the particular order in which the accesses take place, are called race conditions.
• Critical section (CS) is a section of code where multiple threads access the same
resource concurrently and where the sequence of execution for the threads
makes a difference in the result.
• How must the processes coordinate (or synchronise) in order to guard against
race conditions?
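A deterministic illustration of such a race: rather than relying on timing, the read-modify-write steps of two notional threads are interleaved by hand, producing a lost update:

```python
# A deterministic illustration of a race condition: two "threads" each do a
# read-modify-write of a shared counter, but both read before either writes,
# so the second write overwrites the first (a lost update).
counter = 0

t1_local = counter          # Thread 1 reads 0
t2_local = counter          # Thread 2 reads 0 (before T1 writes back)
counter = t1_local + 1      # Thread 1 writes 1
counter = t2_local + 1      # Thread 2 writes 1 -- T1's increment is lost

print(counter)              # 1, not the expected 2
```

With a different interleaving (T1's write happening before T2's read) the result would be 2, which is exactly the order-dependence the definition above describes.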
37. Critical Section and Mutual Exclusion
• When a process executes code that manipulates shared data (or a shared resource), we say that the process is in its critical section (CS) (for that shared data).
• Mutual Exclusion: The execution of critical sections must be mutually exclusive:
at any time, only one process is allowed to execute in its critical section (even
with multiple CPUs).
• Each process must therefore request permission to enter its critical section (CS).
40. Solution to the critical section problem/Avoiding Race Condition
• Mutual Exclusion
• At any time, at most one process can be in its critical section (CS)
• Bounded Waiting
• There must exist a bound on the number of times that other processes are allowed to enter their CS after a process has made a request to enter its CS and before that request is granted.
• Otherwise the process will suffer from starvation.
• Progress
• Only processes that are not executing in their CS can participate in the decision of who will
enter next in the CS. This selection cannot be postponed indefinitely.
• Hence, we must have no deadlock
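A sketch of mutual exclusion in practice, using Python's threading.Lock (one possible mechanism, chosen for illustration; the slides do not prescribe one): the increment of the shared counter is the critical section.

```python
import threading

# Mutual exclusion with a lock: the += on the shared counter is the critical
# section, and the lock guarantees at most one thread executes it at a time.
counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:              # request permission to enter the CS
            counter += 1        # shared data manipulated here
                                # lock released on leaving the with-block

threads = [threading.Thread(target=increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                  # always 40000 when the CS is protected
```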
41. Deadlock
• A deadlock is a situation in which a set of processes is blocked because each process is holding a resource while waiting to acquire a resource held by another process.
42. • Mutual exclusion condition
A resource cannot be used by more than one process at a time.
• Hold and wait condition
Processes already holding resources may request for new resources.
• No pre-emption condition
Only a process holding a resource may release it.
• Circular wait condition
Two or more processes form a circular chain where each process waits
for a resource that the next process in the chain holds.
44. Deadlock Prevention
Mutual Exclusion
• It is not always possible to prevent deadlock by preventing mutual exclusion (making all resources shareable), as certain resources cannot be shared safely. Resources such as printers require exclusive access by a single process.
Hold and Wait
We will see two approaches, but both have their disadvantages.
1. A process acquires all the resources it needs before it starts execution. This avoids deadlock, but results in reduced throughput, as resources are held by processes even when they are not needed; they could have been used by other processes during this time.
2. The second approach is to request a resource only when it is free. This may result in starvation, as all the required resources might not be free at the same time.
45. No pre-emption
1. Allow pre-emption: Steal the resource.
2. Release all other resources before requesting a resource which is not readily available.
Circular Wait
To avoid circular wait, resources may be numbered, and we can require that each process request resources only in increasing order of these numbers. The ordering scheme may itself increase complexity and may also lead to poor resource utilization.
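A small sketch of this ordering idea, with two Python locks standing in for resources: both threads acquire the locks in increasing resource number, so no circular chain of waits can form (the lock names and ordering table are illustrative):

```python
import threading

# Circular-wait prevention by resource ordering: resources are numbered and
# every thread acquires locks strictly in increasing order of their number.
lock_a, lock_b = threading.Lock(), threading.Lock()
ORDER = {id(lock_a): 0, id(lock_b): 1}   # the imposed resource numbering

def acquire_in_order(*locks):
    for lock in sorted(locks, key=lambda l: ORDER[id(l)]):
        lock.acquire()                   # always a before b: no cycle possible

def release_all(*locks):
    for lock in locks:
        lock.release()

done = []

def task(name, first, second):
    acquire_in_order(first, second)      # requested order is normalized
    done.append(name)
    release_all(first, second)

t1 = threading.Thread(target=task, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=task, args=("t2", lock_b, lock_a))  # reversed request
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))                      # both finish: no deadlock
```

Without the normalization, t1 holding a while t2 holds b could produce exactly the circular wait described above.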
46. Deadlock Recovery
• Once a deadlock is detected, you will have to break the deadlock.
• It can be done through different ways :
- Pre-emption: We can take a resource from one process and give it to another. This will resolve the deadlock situation, but sometimes it does cause problems.
- Rollback : System can periodically make a record of the state of each process and
when deadlock occurs, roll everything back to the last checkpoint, and restart,
but allocating resources differently so that deadlock does not occur.
- Kill one or more processes - This is the simplest way, but it works. Aborting one or
more processes to break the circular wait condition causing the deadlock.
47. Semaphore
• A semaphore is a protected integer variable that can facilitate and restrict access to
shared resources in a multi-processing environment.
Two kinds of semaphores:
• Binary semaphores : They represent two possible states (generally 0 or 1; locked or
unlocked).
• Counting semaphores : Counting semaphores represent multiple resources.
48. How to solve CS problem with semaphore?
A semaphore can only be accessed using the following operations: wait() and signal().
• WAIT: wait() is called when a process wants access to a resource. If the semaphore's value is zero, the process must wait until the resource becomes available. The wait operation decrements the semaphore value by 1.
• SIGNAL: signal() is called when a process is done using a resource. The signal operation
increments the semaphore value by 1.
49. EXPLANATION
• In this implementation, a process wanting to enter its critical section has to acquire the binary semaphore, which then gives it mutual exclusion until it signals that it is done.
• For example: we have a binary semaphore s (initialized to 1) and two processes, P1 and P2, that want to enter their critical sections at the same time. P1 first calls wait(s). The value of s is decremented to 0 and P1 enters its critical section.
• While P1 is in its critical section, P2 calls wait(s), but because the value of s is zero, it must
wait until P1 finishes its critical section and executes signal(s). When P1 calls signal, the
value of s is incremented to 1, and P2 can then proceed to execute in its critical section
(after decrementing the semaphore again).
• Mutual exclusion is achieved because only one process can be in its critical section at any
time.
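The P1/P2 walk-through can be sketched with Python's threading.Semaphore, where acquire() plays the role of wait(s) and release() the role of signal(s):

```python
import threading

# A binary semaphore guarding a critical section: acquire() is wait(s),
# release() is signal(s). Enter/leave events strictly alternate per process.
s = threading.Semaphore(1)
order = []

def process(name):
    s.acquire()                       # wait(s): blocks while s == 0
    order.append(name + " enters CS")
    order.append(name + " leaves CS") # still inside the critical section
    s.release()                       # signal(s): lets the other process in

p1 = threading.Thread(target=process, args=("P1",))
p2 = threading.Thread(target=process, args=("P2",))
p1.start(); p2.start()
p1.join(); p2.join()
print(order)   # whichever process enters first also leaves before the other enters
```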
50. Producer Consumer Problem (Also known as the
bounded-buffer problem)
• In computing, the producer–consumer problem is a classic example of a multi-process
synchronization problem. The problem describes two processes, the producer and the
consumer, which share a common, fixed-size buffer used as a queue.
- The producer's job is to generate data and put it into the buffer (one item at a go).
- The consumer's job is to consume the data (i.e. remove it from the buffer).
3 Problems/Objectives:
- Producers can store only when there is an empty slot. We have to make sure that
the producer won’t try to add data into the buffer if it’s full.
- Consumers can remove only when there is a full slot. We have to make sure that
the consumer won’t try to remove data from an empty buffer.
- We have to coordinate the producer and consumer to avoid clashes: mutual exclusion on the buffer has to be enforced.
51. Solution
• The producer can go to sleep if the buffer is full. The next time the consumer removes an
item from the buffer, it notifies the producer, who starts to fill the buffer again. In the
same way, the consumer can go to sleep if it finds the buffer to be empty. The next time
the producer puts data into the buffer, it wakes up the sleeping consumer.
An inadequate solution could result in a deadlock where both processes
are waiting to be awakened.
52. Outline of the semaphore solution:
- /* wait for an empty space */ : check for overflow (the producer blocks when the buffer is full).
- /* wait for a stored item */ : check for underflow (the consumer blocks when the buffer is empty).
- A binary semaphore provides mutual exclusion for the critical section (CS).
53. • We have solved the problem using three semaphores.
INITIALIZATION:
• shared binary semaphore mutex = 1;
• shared counting semaphore empty = MAX;
• shared counting semaphore full = 0;
The counting semaphore empty keeps track of “empty slots”, and the
counting semaphore full keeps track of “full slots”.
55. Mutex vs Semaphore
Using Mutex:
• A mutex provides mutual exclusion: either the producer or the consumer can hold the key (mutex)
and proceed with its work. While the producer is filling the buffer, the consumer must wait,
and vice versa.
• At any point of time, only one thread can work with the entire buffer. The concept can be
generalized using semaphore.
Using Semaphore:
• A semaphore is a generalized mutex. Instead of a single buffer, we can split the 4 KB buffer
into four 1 KB buffers (identical resources). A counting semaphore can be associated with these
four buffers, so the consumer and producer can work on different buffers at the same time.
Strictly speaking, a mutex is a locking mechanism and
a semaphore is a signalling mechanism.
60. Virtual Memory
• A computer can address more memory than the amount physically installed on the
system. This extra memory is called virtual memory, and it is a section of a hard
disk that is set up to emulate the computer's RAM.
• The main visible advantage of this scheme is that programs can be larger than physical
memory.
• Virtual memory serves two purposes :
- First, it allows us to extend the use of physical memory by using disk.
- Second, it allows us to have memory protection, because each virtual
address is translated to a physical address.
61. Paging
• Paging technique plays an important role in implementing virtual memory.
• With paging, the address space of a process is divided into a sequence of fixed-size units called
“pages”.
• Similarly, main memory is divided into small fixed-sized blocks of (physical) memory called
“frames” and the size of a frame is kept the same as that of a page to have optimum utilization
of the main memory and to avoid external fragmentation.
• Then when a process is loaded it gets divided into pages which are the same size as those
frames. Pages are then loaded into the frames.
Advantage:
• Paging is a memory management mechanism that allows the physical address space of a
process to be non-contiguous.
63. Demand Paging
• Demand paging follows that pages should only be brought into memory if the
executing process demands them.
Disadvantages:
- Memory management with page replacement algorithms becomes slightly more complex.
- Thrashing may occur due to repeated page faults.
65. Thrashing
• If the system has to swap pages at such a high rate that a major chunk of CPU time
is spent on swapping, this state is known as thrashing. Effectively, during thrashing
the CPU spends less time on actual productive work and more time on swapping.
66. Address Translation
• A page address is called a logical address and is represented by a page number and an offset:
Logical Address = (Page number, Page offset)
• A frame address is called a physical address and is represented by a frame number and an offset:
Physical Address = (Frame number, Page offset)
• A data structure called the page map table is used to keep track of the relation between a
page of a process and a frame in physical memory.
67. Advantages and Disadvantages of Paging
• Paging reduces external fragmentation, but still suffers from internal fragmentation.
• Paging is simple to implement and is regarded as an efficient memory management
technique.
• Due to equal size of the pages and frames, swapping becomes very easy.
• The page table requires extra memory space, so paging may not be a good choice for a system
with a small amount of RAM.
68. Secondary vs Primary Memory
Secondary memory consists of permanent or persistent storage devices, such as hard disk
drives (HDDs), solid-state drives, optical discs, and other internal/external storage media.
Primary memory (RAM), by contrast, is volatile and is accessed directly by the CPU.
72. External vs Internal Fragmentation
• External fragmentation
Total memory space is enough to satisfy a request or to reside a process in it, but it
is not contiguous so it can not be used. It happens when the memory allocator
leaves sections of unused memory blocks between portions of allocated memory.
• Internal fragmentation
The memory block assigned to a process is bigger than the process asked for. Some
portion of the block is left unused, and it cannot be used by any other process.
This usually happens because the allocator's design stipulates that memory must
be cut into blocks of certain sizes -- for example, blocks may be required to be
multiples of four, eight or 16 bytes. When this occurs, a client that needs
57 bytes of memory, for example, may be allocated a block that contains 60 bytes,
or even 64. The extra bytes that the client doesn't need go to waste.