The document discusses processes and process management. It covers key concepts like process state, process scheduling queues, and context switching. It also describes how processes are created and terminated, and the two main models of interprocess communication: shared memory and message passing. Process communication can be direct or indirect and use different buffering and synchronization approaches.
Allocation of processors to processes in distributed systems; strategies and algorithms for processor allocation; design and implementation issues of those strategies.
Threads,
system model,
processor allocation,
scheduling in distributed systems,
load balancing and sharing approaches,
fault tolerance,
real-time distributed systems,
process migration and related issues
Sara Afshar: Scheduling and Resource Sharing in Multiprocessor Real-Time Systems
PhD Candidate,
Department of Computer science
Mälardalen University
Time: Tuesday, Dec. 30, 2014, 11:30 a.m.
Location: Computer Engineering Department, Urmia University
Abstract:
The processor is the brain of a computer system. Usually, one or more programs run on a processor, where each program is typically responsible for performing a particular task or function of the system. The performance of all the tasks together results in the system functionality. In many computer systems, it is not enough that all tasks deliver correct output; it is also crucial that these activities complete on time. Systems with such timing requirements are known as real-time systems. A scheduler is responsible for scheduling all tasks on the processor, i.e., it dictates which task to run and when, to ensure that all tasks are carried out on time. Typically, such tasks/programs need to use the computer system’s hardware and software resources to perform their calculations. Examples of resources shared among programs are I/O devices, buffers and memories. The technology used to manage shared resources is known as a resource-sharing synchronization protocol.
In recent years, a shift from single-processor platforms to multiprocessor platforms has become inevitable due to the availability of multiprocessor chips and requirements for increased performance. Scheduling and resource-sharing protocols have been well studied for uniprocessor systems; however, in the context of multiprocessors, such techniques are still not fully mature. The shift towards multi-core technology has revealed the demand for real-time scheduling algorithms, along with synchronization protocols, to support real-time applications on multiprocessors, both with and without dependencies.
In this talk, we first give an introduction to real-time embedded systems. Next, we look at scheduling and resource-sharing policies on uniprocessor platforms. Finally, we discuss the extension of these policies to multiprocessor platforms and present the recent challenges that have arisen in this context.
Biography:
Sara Afshar is a PhD student at Mälardalen University. She received her B.Sc. degree in Electrical Engineering from Tabriz University, Iran, in 2002, and worked at various engineering companies until 2009. In 2010 she started her M.Sc. in Embedded Systems at Mälardalen University; she obtained her Master’s degree in 2012 and began her PhD studies there in the same year. She is currently working on resource sharing in multiprocessor systems as part of the Complex Real-Time Embedded Systems group at Mälardalen University.
Unit III
STORAGE MANAGEMENT
Main Memory – Contiguous Memory Allocation, Segmentation, Paging, 32- and 64-bit architecture examples; Virtual Memory – Demand Paging, Page Replacement, Allocation, Thrashing; Allocating Kernel Memory; OS Examples.
This is a presentation for Chapter 7 Distributed system management
Book: DISTRIBUTED COMPUTING, Sunita Mahajan & Seema Shah
Prepared by Students of Computer Science, Ain Shams University - Cairo - Egypt
Operating System 19: Interacting Processes and IPC (Vaibhav Khanna)
Processes within a system may be independent or cooperating
Cooperating process can affect or be affected by other processes, including sharing data
Reasons for cooperating processes:
Information sharing
Computation speedup
Modularity
Convenience
Cooperating processes need interprocess communication (IPC)
Two models of IPC
Shared memory
Message passing
Operating System 18: Process Creation and Termination (Vaibhav Khanna)
Information associated with each process
(also called task control block)
Process state – running, waiting, etc.
Program counter – location of the next instruction to execute
CPU registers – contents of all process-centric registers
CPU scheduling information – priorities, scheduling queue pointers
Memory-management information – memory allocated to the process
Accounting information – CPU used, clock time elapsed since start, time limits
I/O status information – I/O devices allocated to process, list of open files
The objectives of these slides are -
- To introduce the notion of a process - a program in execution, which forms the basis of all computation
- To describe the various features of processes, including scheduling, creation and termination, and communication
- To explore interprocess communication using shared memory and message passing
A process is a program in execution. It is a unit of work within the system. A program is a passive entity; a process is an active entity.
Process needs resources to accomplish its task
CPU, memory, I/O, files
Initialization data
Process termination requires reclaim of any reusable resources
Single-threaded process has one program counter specifying location of next instruction to execute
Process executes instructions sequentially, one at a time, until completion
Multi-threaded process has one program counter per thread
Typically system has many processes, some user, some operating system running concurrently on one or more CPUs
Concurrency by multiplexing the CPUs among the processes / threads
Deadlocks (operating system)
To develop a description of deadlocks, which prevent sets of concurrent processes from completing their tasks
To present a number of different methods for preventing, avoiding, or detecting deadlocks in a computer system
4. Process Concept
• An operating system executes a variety of programs:
– Batch system – jobs
– Time-shared systems – user programs or tasks
• Textbook uses the terms job and process almost interchangeably
• Process – a program in execution; process execution must
progress in sequential fashion
• A process includes:
– program counter and process registers
– stack and heap
– text section
– data section
6. Process State
• As a process executes, it changes state
– new: The process is being created
– running: Instructions are being executed
– waiting: The process is waiting for some event to occur (such as an
I/O completion or reception of a signal)
– ready: The process is waiting to be assigned to a processor
– terminated: The process has finished execution
7. Process Control Block (PCB)
Information associated with each process
• Process state
• Program counter
• CPU registers
• CPU scheduling information
• Memory-management information
• Accounting information
• I/O status information
9. Context Switch
• When CPU switches to another process (because of an interrupt), the
system must save the state of the old process and load the saved state
for the new process
• Context-switch time is overhead; the system does no useful work while
switching
• Time dependent on hardware support
11. Process Scheduling Queues
• Job queue – set of all processes in the system
• Ready queue – set of all processes residing in main memory, ready
and waiting to execute
• Device queues – set of processes waiting for an I/O device
• Processes migrate among the various queues
14. Schedulers
• Long-term scheduler (or job scheduler) – selects which
processes should be brought into the ready queue
• Short-term scheduler (or CPU scheduler) – selects which
process should be executed next and allocates CPU (covered in
Chapter 5)
15. Schedulers (Cont.)
• Short-term scheduler is invoked very frequently (milliseconds) (must
be fast)
• Long-term scheduler is invoked very infrequently (seconds, minutes)
(may be slow)
• The long-term scheduler controls the degree of multiprogramming
• Processes can be described as either:
– I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts
– CPU-bound process – spends more time doing computations; few
very long CPU bursts
• Some systems have no long-term scheduler
– Every new process is loaded into memory
– System stability is affected by physical limitations and the
self-adjusting nature of the human user
17. Process Creation
• Parent process creates children processes, which, in turn create other
processes, forming a tree of processes
• Resource sharing options
– Parent and children share all resources
– Children share subset of parent’s resources
– Parent and child share no resources
• Execution options
– Parent and children execute concurrently
– Parent waits until some or all of its children have terminated
18. Process Creation (Cont.)
• Address space options
– Child process is a duplicate of the parent process (same program and
data)
– Child process has a new program loaded into it
• UNIX example
– fork() system call creates a new process
– exec() system call used after a fork() to replace the memory space of
the process with a new program
20. C Program Forking Separate Process
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
pid_t processID;
processID = fork(); // Create a child process
if (processID < 0)
{ // Error occurred
fprintf(stderr, "Fork failed\n");
exit(-1);
} // End if
else if (processID == 0)
{ // Inside the child process (no execlp() this time)
printf("(Child) I am doing something\n");
} // End else if
else
{ // Inside the parent process
printf("(Parent) I am waiting for PID#%d to finish\n", processID);
wait(NULL);
printf("\n(Parent) I have finished waiting; the child is done\n");
exit(0);
} // End else
return 0;
} // End main
22. Process Termination
• Process executes last statement and asks the operating system to terminate it
(via exit)
– Exit (or return) status value from child is received by the parent (via
wait())
– Process’ resources are deallocated by operating system
• Parent may terminate execution of children processes (kill() function)
– Child has exceeded allocated resources
– Task assigned to child is no longer required
– If parent is exiting
• Some operating systems do not allow a child to continue if its parent
terminates
– All children terminated - cascading termination
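The exit/wait handshake described above can be sketched in C: the child passes a status value to exit(), and the parent collects it with waitpid(). This is a minimal sketch; the helper name run_child_and_get_status is ours, not from the slides.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdlib.h>

/* Fork a child that exits with `code`; return the status the parent
 * collects via waitpid(). Returns -1 on fork failure or abnormal exit. */
int run_child_and_get_status(int code)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;            /* fork failed */
    if (pid == 0)
        exit(code);           /* child: "executes last statement" and exits */

    int status;
    waitpid(pid, &status, 0); /* parent: blocks until the child terminates */
    if (WIFEXITED(status))
        return WEXITSTATUS(status); /* the child's exit status value */
    return -1;
}
```

Note that exit status values are truncated to the range 0–255 on POSIX systems, which is why small status codes are conventional.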
24. Cooperating Processes
• An independent process is one that cannot affect or be affected by the
execution of another process
• A cooperating process can affect or be affected by the execution of
another process in the system
• Advantages of process cooperation
– Information sharing (of the same piece of data)
– Computation speed-up (break a task into smaller subtasks)
– Modularity (dividing up the system functions)
– Convenience (to do multiple tasks simultaneously)
• Two fundamental models of interprocess communication
– Shared memory (a region of memory is shared)
– Message passing (exchange of messages between processes)
26. Shared Memory Systems
• Shared memory requires communicating processes to establish a
region of shared memory
• Information is exchanged by reading and writing data in the
shared memory
• A common paradigm for cooperating processes is the producer-consumer problem
• A producer process produces information that is consumed by a
consumer process
– unbounded-buffer places no practical limit on the size of the
buffer
– bounded-buffer assumes that there is a fixed buffer size
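The bounded-buffer variant can be sketched as a circular buffer in C. This is an in-process sketch of the buffer logic only; real shared-memory IPC would place the struct in a segment shared between the producer and consumer (e.g. via mmap or shmget). The names bounded_buffer, buffer_put and buffer_get are ours, not from the slides.

```c
#include <stddef.h>

#define BUFFER_SIZE 8   /* fixed capacity: this is the "bounded" part */

/* Circular bounded buffer: `in` is the next free slot (producer side),
 * `out` is the next full slot (consumer side). One slot is left empty
 * so that in == out unambiguously means "empty". */
typedef struct {
    int items[BUFFER_SIZE];
    size_t in;
    size_t out;
} bounded_buffer;

/* Producer: returns 0 on success, -1 if full (producer must wait). */
int buffer_put(bounded_buffer *b, int item)
{
    if ((b->in + 1) % BUFFER_SIZE == b->out)
        return -1;                       /* buffer full */
    b->items[b->in] = item;
    b->in = (b->in + 1) % BUFFER_SIZE;
    return 0;
}

/* Consumer: returns 0 on success, -1 if empty (consumer must wait). */
int buffer_get(bounded_buffer *b, int *item)
{
    if (b->in == b->out)
        return -1;                       /* buffer empty */
    *item = b->items[b->out];
    b->out = (b->out + 1) % BUFFER_SIZE;
    return 0;
}
```

With the one-empty-slot convention, a buffer of size n holds at most n-1 items; the unbounded-buffer variant would simply never report "full".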
27. Message-Passing Systems
• Mechanism to allow processes to communicate and to synchronize their actions
• No address space needs to be shared; this is particularly useful in a distributed
processing environment (e.g., a chat program)
• Message-passing facility provides two operations:
– send(message) – message size can be fixed or variable
– receive(message)
• If P and Q wish to communicate, they need to:
– establish a communication link between them
– exchange messages via send/receive
• Logical implementation of communication link
– Direct or indirect communication
– Synchronous or asynchronous communication
– Automatic or explicit buffering
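A minimal send/receive pair can be sketched with a UNIX pipe, a kernel-provided channel that needs no shared address space: the child send()s by writing to the pipe and the parent receive()s by reading, blocking until a message is available. The helper name pipe_message_demo is ours, and the sketch assumes a POSIX system.

```c
#include <unistd.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Child sends `msg` through a pipe; parent receives it into `out`
 * (NUL-terminated). Returns the number of bytes received, or -1. */
ssize_t pipe_message_demo(const char *msg, char *out, size_t outsz)
{
    int fd[2];
    if (pipe(fd) < 0)                        /* establish the link */
        return -1;

    pid_t pid = fork();
    if (pid < 0)
        return -1;

    if (pid == 0) {                          /* child: the sender */
        close(fd[0]);                        /* close unused read end */
        write(fd[1], msg, strlen(msg));      /* send(message) */
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                            /* parent: close write end */
    ssize_t n = read(fd[0], out, outsz - 1); /* receive(message): blocks */
    close(fd[0]);
    if (n < 0)
        return -1;
    out[n] = '\0';
    waitpid(pid, NULL, 0);                   /* reap the child */
    return n;
}
```

In the taxonomy above, this link is established explicitly, is unidirectional, and the blocking read() gives synchronous receive semantics.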
28. Implementation Questions
• How are links established?
• Can a link be associated with more than two processes?
• How many links can there be between every pair of communicating
processes?
• What is the capacity of a link?
• Is the size of a message that the link can accommodate fixed or variable?
• Is a link unidirectional or bi-directional?
29. Direct Communication
• Processes must name each other explicitly:
– send (P, message) – send a message to process P
– receive(Q, message) – receive a message from process Q
• Properties of communication link
– Links are established automatically between every pair of processes
that want to communicate
– A link is associated with exactly one pair of communicating processes
– Between each pair there exists exactly one link
– The link may be unidirectional, but is usually bi-directional
• Disadvantages
– Limited modularity of the resulting process definitions
– Hard-coding of identifiers is less desirable than indirection
techniques
30. Indirect Communication
• Messages are directed and received from mailboxes (also referred to
as ports)
– Each mailbox has a unique id
– Processes can communicate only if they share a mailbox
• send(A, message) – send message to mailbox A
• receive(A, message) – receive message from mailbox A
• Properties of communication link
– Link is established between a pair of processes only if both have a
shared mailbox
– A link may be associated with more than two processes
– Between each pair of processes, there may be many different
links, with each link corresponding to one mailbox
31. Indirect Communication (continued)
• For a shared mailbox, messages are received based on the
following methods:
– Allow a link to be associated with at most two processes
– Allow at most one process at a time to execute a receive()
operation
– Allow the system to select arbitrarily which process will receive
the message (e.g., a round robin approach)
• Mechanisms provided by the operating system
– Create a new mailbox
– Send and receive messages through the mailbox
– Destroy a mailbox
32. Synchronization
• Message passing may be either blocking or non-blocking
• Blocking is considered synchronous
– Blocking send has the sender block until the message is received
– Blocking receive has the receiver block until a message is available
• Non-blocking is considered asynchronous
– Non-blocking send has the sender send the message and continue
– Non-blocking receive has the receiver receive a valid message or
null
• When both send() and receive() are blocking, we have a rendezvous
between the sender and the receiver
33. Buffering
• Whether communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue
• These queues can be implemented in three ways:
1. Zero capacity – the queue has a maximum length of zero
- Sender must block until the recipient receives the message
2. Bounded capacity – the queue has a finite length of n
- Sender must wait if queue is full
3. Unbounded capacity – the queue length is unlimited
- Sender never blocks