NADAR SARASWATI COLLEGE
OF ARTS AND SCIENCE
DISTRIBUTED OPERATING SYSTEM
PROCESS SCHEDULING, MEMORY MANAGEMENT AND RELIABILITY/FAULT
TOLERANCE
PRESENTED BY:
M.PRATHIYATHI
I M.Sc. CS
PROCESS SCHEDULING
ISSUES IN PROCESSOR SCHEDULING
Memory stalling: When a processor accesses memory and has to wait for data to
become available, this is called memory stalling. This can happen for a number of
reasons, such as when data is not in the cache memory. The processor can spend up
to half of its time waiting for data to become available.
Starvation: When a low-priority process waits for the CPU but never gets it, this
is called starvation. It can happen if high-priority processes keep arriving in
the ready queue, or if a process with a long burst time occupies the CPU.
Overhead: Preemptive scheduling can have overhead from scheduling processes.
• Multi-core CPUs: Multi-core CPUs can make scheduling issues worse.
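Starvation is commonly mitigated with aging: the longer a process waits, the more urgent it becomes. A minimal sketch, assuming a simple numeric priority where lower means more urgent (the `Proc` class and aging step are illustrative, not from any real scheduler):

```python
from dataclasses import dataclass

@dataclass
class Proc:
    name: str
    priority: int   # lower number = higher priority
    burst: int      # remaining CPU time units

def run(ready, aging_step=1):
    order = []
    while ready:
        ready.sort(key=lambda p: p.priority)   # stable sort: ties keep order
        current = ready.pop(0)                 # dispatch the most urgent
        order.append(current.name)
        current.burst -= 1
        for p in ready:                        # aging: waiting raises urgency
            p.priority -= aging_step
        if current.burst > 0:
            ready.append(current)
    return order

# Without aging, "low" would wait until "high" finished all 10 units;
# with aging it gets the CPU after a bounded wait.
print(run([Proc("high", 0, 10), Proc("low", 5, 2)]))
```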
COSCHEDULING IN THE MEDUSA OPERATING
SYSTEM
Medusa is a distributed operating system built for the Cm* multiprocessor. It
uses coscheduling: the cooperating activities of a task force are scheduled to
run on different processors at the same time, so that no activity blocks waiting
for a partner that is not currently running. Scheduling is a fundamental
operation in operating systems that involves assigning tasks to CPUs and
switching between them. It's important for both system efficiency and
application performance.
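The coscheduling idea named in the slide title above can be sketched as filling each time slice with activities of a single task force, one per processor. This is a toy illustration; the function and data names are made up:

```python
def coschedule(task_forces, num_processors):
    """task_forces: dict of force name -> number of activities.
    Returns a list of (force, activities_run) time slices."""
    slices = []
    for force, size in task_forces.items():
        remaining = size
        while remaining > 0:
            # each slice runs up to num_processors activities of ONE force,
            # so its members always execute simultaneously
            batch = min(remaining, num_processors)
            slices.append((force, batch))
            remaining -= batch
    return slices

# Force A's five activities need two slices on four processors; B fits in one.
print(coschedule({"A": 5, "B": 2}, 4))
```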
Operating systems can have up to three types of schedulers:
Long-term scheduler: Also known as the high-level or admission scheduler
Mid-term or medium-term scheduler
Short-term scheduler
• The names of these schedulers indicate how frequently their functions are
performed.
SMART SCHEDULING
Smart scheduling is a process that uses technology and data to optimize schedules and
assign work. It can be used in a variety of contexts, including field services, utilities, and
manufacturing. Smart scheduling can help businesses save time, money, and resources by:
Improving resource utilization: Smart scheduling can consider employee availability and
task priorities to ensure efficient resource allocation.
Reducing idle time: Smart scheduling can help reduce idle time and overtime by allocating
the right engineer to a job.
Improving customer experience: Smart scheduling can improve the customer experience by
reducing no-shows and improving worker response time.
Automating tasks: Smart scheduling can automate tasks that might otherwise be done
manually.
• Improving productivity: Smart scheduling vendors report productivity gains of roughly 20 to 30 percent in some deployments.
SCHEDULING IN THE NYU
ULTRACOMPUTER
Scheduling in the NYU Ultracomputer is the process of assigning
resources to perform tasks. The NYU Ultracomputer is a shared
memory MIMD parallel computer with thousands of processors
connected by an Omega network. The Symunix operating system on the
NYU Ultracomputer uses a scheduler to perform this task.
In computing, scheduling is the process of assigning resources, such as
processors, network links, or expansion cards, to perform tasks, such as
threads, processes, or data flows.
AFFINITY BASED SCHEDULING
Affinity scheduling in an operating system is the process of
assigning computing tasks to computing nodes where they
can be executed more efficiently. The scheduling is based on
aspects of the task or node that make execution more
efficient, such as the speed or overheads of the node’s
resources.
Affinity scheduling can help with: Reducing cache thrashing,
Maximizing data reuse, and Distributing workload based on
core efficiency.
There are two types of processor affinity:
Hard affinity: Ensures that a process is always scheduled on the same processor
Soft affinity: Attempts to schedule a process on the same CPU, but may move it
to a different processor if necessary
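On Linux, hard affinity can be approximated with the CPU-affinity mask: pinning a process to one CPU forces the scheduler to keep it there. A sketch using the standard-library call `os.sched_setaffinity` (Linux only; the helper name is illustrative):

```python
import os

def pin_to_cpu(cpu):
    """'Hard affinity': restrict the calling process to a single CPU."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {cpu})       # pid 0 = the current process
        return os.sched_getaffinity(0)       # the new allowed-CPU set
    return None  # platform without affinity control: the scheduler decides

if hasattr(os, "sched_getaffinity"):
    print("allowed CPUs before:", os.sched_getaffinity(0))
    print("allowed CPUs after: ", pin_to_cpu(0))
```

Soft affinity is the default behavior: the kernel *prefers* the last CPU a process ran on (to keep its cache warm) but migrates it when load balancing requires.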
Some other scheduling algorithms used in operating systems include:
First-Come, First-Served: The first task to arrive is the first to be completed
Shortest Job First: The smallest tasks are completed first
Priority Scheduling: Important tasks are completed before less important ones
Round Robin: Each task gets a short turn, so all tasks are treated fairly
• Multilevel Queue: Tasks are grouped by type or importance, and each group
has its own rules for how tasks are completed
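The Round Robin policy from the list above can be sketched in a few lines, assuming a fixed time quantum:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: dict of task name -> CPU time needed. Returns the run order."""
    queue = deque(bursts.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                 # task gets one quantum
        if remaining > quantum:
            queue.append((name, remaining - quantum))  # back of the queue
    return order

# A, B, C each take a short turn; A and C come back for more.
print(round_robin({"A": 3, "B": 1, "C": 2}, quantum=1))
```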
SCHEDULING IN MACH OPERATING
SYSTEM
Scheduling in Mach OS is a system of run queues with different priorities that
are handled in various ways. Mach OS uses a hybrid scheduler that combines
preemptive and cooperative scheduling algorithms:
Preemptive scheduling
The system can interrupt a running process at any time and switch to
another one, based on priority and fairness.
Cooperative scheduling
• One process controls multiple cooperative threads. Each process has its
own copy of the Thread Manager, which schedules that process's threads
cooperatively: a thread runs until it voluntarily yields.
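The run-queue structure described above can be sketched as an array of FIFO queues indexed by priority, with the dispatcher scanning from the highest-priority non-empty queue. The queue count and thread names are illustrative:

```python
from collections import deque

NUM_PRIORITIES = 32          # e.g. 0 (highest) .. 31 (lowest)

class RunQueues:
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_PRIORITIES)]

    def enqueue(self, thread, priority):
        self.queues[priority].append(thread)

    def dispatch(self):
        for q in self.queues:            # scan from highest priority down
            if q:
                return q.popleft()
        return None                      # nothing runnable: idle

rq = RunQueues()
rq.enqueue("pager", 4)
rq.enqueue("shell", 10)
print(rq.dispatch())   # the higher-priority "pager" is dispatched first
```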
MEMORY MANAGEMENT IN THE
MACH OPERATING SYSTEM
DESIGN ISSUES
The Mach Operating System has had several design issues related to memory management, including:
One-size-fits-all solution
Mach's microkernel has little knowledge of the operating-system personality running on top of it, so it applies a single,
general-purpose paging policy to everything. This can lead to performance problems.
Expensive IPC calls
Mach’s support for multiprocessor systems can lead to expensive IPC calls. This is because Mach maps memory between
programs, and any “cache miss” slows down IPC calls.
Hardware-defined data structures
Mach’s pmap module manipulates hardware-defined in-memory structures, which control the state of an internal MMU TLB.
However, each hardware architecture has shortcomings, which can be more pronounced in a multiprocessor.
Mach’s memory management system is based on paging and uses memory objects. The code for Mach’s memory management is split
into three parts:
pmap module: Runs in the kernel and manages the MMU
Machine-independent kernel code: Processes page faults, manages address maps, and replaces pages
Memory manager: Runs as a user process and manages the backing store (disk)
THE MACH KERNEL
The Mach kernel supports memory management in several ways, including:
Virtual memory management
The Mach kernel uses virtual addresses to allow processes to access more memory than is physically installed.
Memory objects
Memory objects are the internal units of memory management, and are represented by ports. IPC messages are
sent to ports to request operations on the memory object.
Task and thread management
The Mach kernel manages threads, and is responsible for creating and destroying them.
Process management
The Mach kernel supports process management primitives, such as returning a list of a process's threads and
setting thread priorities.
TASK ADDRESS SPACE
In an operating system (OS), a task address space is the set of virtual addresses that a task uses to
refer to memory. The operating system assigns an address space to a user or program; within it, each
task is represented by a task control block (TCB), and one program may start multiple tasks.
Here are some other things to know about address spaces:
Address space types
Address spaces can be flat or segmented. Flat address spaces use integers that start at zero and
increase incrementally. Segmented address spaces use independent segments with offsets or
values added to create secondary addresses.
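The segmented translation described above can be sketched as base + offset with a limit check. The segment table below is made up for illustration:

```python
# Each entry maps a segment number to its base address and length.
SEGMENT_TABLE = {
    0: {"base": 0x1000, "limit": 0x400},   # e.g. a code segment
    1: {"base": 0x8000, "limit": 0x200},   # e.g. a data segment
}

def translate(segment, offset):
    """Turn a (segment, offset) virtual address into a physical address."""
    entry = SEGMENT_TABLE[segment]
    if offset >= entry["limit"]:
        raise MemoryError("segmentation fault: offset outside segment")
    return entry["base"] + offset

print(hex(translate(1, 0x10)))   # 0x8000 + 0x10
```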
Memory management
The OS’s memory management function allocates memory to processes, tracks the status of
each memory location, and frees memory when it’s no longer needed.
MEMORY PROTECTION AND SHARING
Memory protection and sharing are key components of modern operating systems (OS) that ensure
programs run securely and efficiently:
Memory protection
Prevents unauthorized access to memory, which protects the OS, other processes, and programs from
bugs and malware. Memory protection can be achieved through hardware or software techniques, such as
virtual memory, access control bits, base and limit registers, or the memory management unit (MMU).
Memory sharing
Allows controlled data sharing between programs while still respecting memory protection. For example,
two processes can share the same segment, but each process will keep the segment’s name and address in
its segment tables.
Memory protection is essential for the security and stability of an OS. It can help prevent errors and
malicious attacks, and can enhance the OS’s performance.
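The access-control-bits technique mentioned above can be sketched as a per-page permission set that a (simulated) MMU checks on every access. Page numbers and permission names here are illustrative:

```python
# Per-page protection bits: "r" = read, "w" = write, "x" = execute.
PAGE_PERMS = {
    0: {"r", "x"},        # read-only shared code page
    1: {"r", "w"},        # private data page
}

def access(page, mode):
    """Check an access against the page's protection bits."""
    if mode not in PAGE_PERMS.get(page, set()):
        raise PermissionError(f"protection fault: {mode} on page {page}")
    return f"{mode} ok on page {page}"

print(access(1, "w"))
# access(0, "w") would raise a protection fault: code pages are not writable
```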
MACHINE INDEPENDENCE
Machine independence in memory management in an operating system (OS) refers to the separation of software memory
management from hardware support. This approach allows for the unbiased examination of different hardware memory
management schemes, especially for multiprocessors.
Here are some other details about machine independence and memory management:
Machine-dependent software
Software that can only run on a specific computer.
Machine-independent software
Software that can run on multiple computer architectures, also known as cross-platform.
Machine language
A language that is closely related to the structure of a particular computer. Programs written in machine language are not
portable and cannot be run on other computers.
Memory management
A process that includes tracking memory bytes, allocating and deallocating memory spaces, managing swap spaces, and
implementing memory allocation policies.
FAULT TOLERANCE AND RELIABILITY
DESIGN ISSUES
Here are some design issues to consider when creating a fault-tolerant system:
Cost
Fault-tolerant systems can be expensive because they require the continuous maintenance and operation of redundant
components.
Downtime
Fault-tolerant systems should work continuously with no downtime, even when a component fails.
Space
Fault-tolerant systems require additional hardware and equipment, which can take up valuable space in data centers.
Maintenance
Fault-tolerant systems require regular maintenance and testing of their components.
Measuring fault tolerance
It can be difficult to measure the probability that fault detection, diagnosis, repair, configuration, and recovery
algorithms are operating correctly.
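The cost-versus-redundancy trade-off above can be quantified: with N redundant components, each independently available with probability a, the system is up unless all N fail. The numbers below are illustrative:

```python
def system_availability(a, n):
    """Availability of n parallel redundant components, each available
    with probability a (assuming independent failures)."""
    return 1 - (1 - a) ** n

# Each extra redundant component buys roughly two more "nines".
for n in (1, 2, 3):
    print(n, round(system_availability(0.99, n), 6))
```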
THE SEQUOIA ARCHITECTURE
In the fault-tolerance context, the Sequoia architecture refers to a tightly
coupled, fault-tolerant multiprocessor designed for transaction processing.
Its main ideas include:
Self-checking hardware: processor and memory elements detect their own faults,
so corrupted data is not propagated through the system
Cache-based checkpointing: main memory always holds a consistent checkpoint;
uncommitted work is confined to processor caches and flushed atomically, so
after a fault the system can roll back to the checkpoint and resume
Redundancy: failed elements can be isolated and their work restarted on the
remaining hardware
The name Sequoia is also used for an unrelated Blue Gene/Q supercomputer at
Lawrence Livermore National Laboratory, with 1,572,864 processor cores and
1.5 PiB of memory, designed to be highly energy efficient.
FAULT DETECTION AND RECOVERY
Fault detection, isolation, and recovery (FDIR) is a technique that helps identify and fix faults in a system as
quickly as possible. It's a subfield of control engineering that involves three steps:
Fault detection: Detecting faults, such as a sensor failure or physical component malfunction
Fault isolation: Isolating the source of the fault
Recovery: Recovering from the fault and mitigating its effect on the system
FDIR can help reduce diagnostic time and downtime, and increase system availability. It can also be
beneficial for systems that are difficult to maintain, such as on-orbit systems.
Some fault-detection techniques include: Manual detection, Built-in testing (BIT), and Voting
scheme.
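The voting scheme mentioned above is often realized as triple modular redundancy: three replicas compute the result and a majority vote masks a single faulty replica. A minimal sketch:

```python
from collections import Counter

def vote(results):
    """Majority vote over replica outputs; masks a minority of faults."""
    value, count = Counter(results).most_common(1)[0]
    if count * 2 <= len(results):
        raise RuntimeError("no majority: fault cannot be masked")
    return value

print(vote([42, 42, 7]))   # the one faulty replica is outvoted
```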
In an operating system, deadlock detection and recovery is a method used to find and resolve deadlocks.
Deadlocks happen when processes are stuck, waiting for each other to release resources they need.
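Deadlock detection is typically done by building a wait-for graph (each edge means "process P waits for a resource held by process Q") and searching it for a cycle. A sketch using depth-first search:

```python
def has_deadlock(wait_for):
    """wait_for: dict of process -> list of processes it waits on.
    Returns True if the wait-for graph contains a cycle (a deadlock)."""
    visited, on_stack = set(), set()

    def dfs(p):
        visited.add(p)
        on_stack.add(p)                      # p is on the current DFS path
        for q in wait_for.get(p, ()):
            if q in on_stack or (q not in visited and dfs(q)):
                return True                  # back edge: cycle found
        on_stack.discard(p)
        return False

    return any(p not in visited and dfs(p) for p in wait_for)

print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))   # True: mutual wait
print(has_deadlock({"P1": ["P2"], "P2": []}))       # False: P2 can finish
```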
THANK YOU
