NETWORKING 
THREADS 
PREPARED BY 
Nilesh Pawar
INTRODUCTION TO THREADS : 
 A thread is a flow of execution through the process code. 
 A thread is also called a lightweight process. 
 Threads provide a way to improve application performance through parallelism. 
 They also improve the performance of the operating system. 
 Each thread represents a separate flow of control. 
 All threads within a process share the same global memory.
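A minimal sketch of the last point in POSIX C (not part of the original slides; the names are illustrative): two threads update one global counter, so they really do see the same memory, and a mutex keeps the updates safe. Built with cc -pthread, it prints counter = 200000.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;                 /* shared by every thread in the process */

    static void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);
            counter++;                       /* same memory seen by both threads */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* 200000: both threads updated one global */
        return 0;
    }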
STATES OF A THREAD : 
 Every thread is in one of the following four states : 
• CREATE STATE 
• RUNNING STATE 
• BLOCKED STATE 
• DEAD STATE
 CREATE STATE : 
A new thread has just been created; this is the birth of the thread. Calling start() moves it out of this state and makes it runnable. 
 RUNNING STATE : 
The thread is currently executing, i.e. it has the CPU. This is the state in which the thread's run() method does its work. 
 BLOCKED STATE : 
While a thread is running, it may move to the blocked state, for example when a higher-priority thread arrives or when it must wait for an event. It then waits until it can be scheduled again. 
 DEAD STATE : 
When a thread completes its work, it enters the dead state; this is the end of the thread's life.
THREAD LIFE CYCLE : 
A thread moves through the states CREATED → RUNNABLE → BLOCKED → DEAD: 
• Thread() creates the thread (CREATED); start() makes it RUNNABLE. 
• sleep() or wait() moves a RUNNABLE thread to BLOCKED; notify() returns it to RUNNABLE. 
• The thread becomes DEAD when its run() method terminates.
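The Thread()/start()/wait()/notify() names above follow the Java Thread API. A rough POSIX C sketch of the RUNNABLE → BLOCKED → RUNNABLE part of the cycle, using a condition variable in place of wait()/notify() (all names below are illustrative, not part of the original slides):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int ready = 0;

    static void *waiter(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!ready)                      /* BLOCKED: waiting to be notified */
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
        printf("notified: back to RUNNABLE, then DEAD on return\n");
        return NULL;                        /* DEAD: the start routine terminates */
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, waiter, NULL);   /* CREATED -> RUNNABLE */
        sleep(1);                                   /* let the thread reach the wait */
        pthread_mutex_lock(&lock);
        ready = 1;
        pthread_cond_signal(&cond);                 /* the "notify" */
        pthread_mutex_unlock(&lock);
        pthread_join(tid, NULL);
        return 0;
    }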
TYPES OF THREADS : 
 There are two basic types of threads : 
• Kernel-level threads 
• User-level threads
KERNEL LEVEL THREAD 
 These are threads managed by the operating system and implemented inside the kernel, the operating system's core. 
 Kernel-level threads guarantee access to multiple processors, but computing performance is lower because of the load placed on the system. 
 They are more expensive than user-level threads.
ADVANTAGES 
 The kernel has full knowledge of all threads. 
 Especially good for applications that frequently block. 
 No run-time system is needed for these threads. 
DISADVANTAGES 
 More expensive. 
 They are slower and less efficient than user-level threads. 
 They require a full thread control block (TCB) for each thread to maintain information about the threads. 
 More complex.
USER LEVEL THREAD 
 These are user-managed threads. 
 These threads are invisible to the operating system. 
 User-level threads have extremely low overhead and can achieve high performance in computation. 
 A single thread can gain exclusive access to the CPU and prevent the other threads in the process from obtaining it. 
 Finally, access to multiple processors is not guaranteed, since the operating system is not aware of the existence of these threads.
ADVANTAGES 
 Do not require modifications to the operating system. 
 Simple representation. 
 Simple management. 
 Fast and efficient. 
 Relatively cheaper than kernel-level threads. 
DISADVANTAGES 
 Like everything else, user-level threads are not a perfect solution. 
 They are not well integrated with the OS. 
 There is a lack of coordination between the threads and the operating system kernel.
HOW DOES A THREAD RUN ? 
The thread class has a run() method; run() is executed when the thread's start() method is invoked. 
The thread terminates when its run() method terminates. 
To prevent a thread from terminating, the run() method must not end; run() methods therefore often contain an endless loop. 
One thread starts another by calling its start() method. The sequence of events can be confusing to those more familiar with a single-threaded model.
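A minimal POSIX C sketch of the same idea (the start()/run() names in this slide follow the Java Thread API): the thread lives as long as its start routine runs, a loop keeps it alive, and returning from the routine ends it. The keep_running flag is a simplification; production code would use an atomic or a mutex.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile int keep_running = 1;       /* cleared by main to end the loop */

    static void *run(void *arg) {
        (void)arg;
        while (keep_running) {                  /* the loop prevents early termination */
            /* ... the thread's ongoing work ... */
            usleep(100 * 1000);
        }
        return NULL;                            /* returning ends the thread */
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, run, NULL);  /* analogue of start() */
        sleep(1);
        keep_running = 0;                       /* let run() return */
        pthread_join(tid, NULL);
        printf("thread terminated because its start routine returned\n");
        return 0;
    }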
WHAT IS MULTITHREADING ? 
 Multithreading is similar to multiprocessing. 
 Multithreading is a technique by which a single set of code can be used by several processors at different stages of execution. 
 Multithreading is the ability of a program to manage multiple requests by the same user. 
 Each user request for a program or system service (and here a user can also be another program) is kept track of as a thread with a separate identity. 
 Each process has its own address/memory space. 
 The OS's scheduler decides when each process is executed.
EXAMPLE OF MULTITHREADING
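A small POSIX C sketch of multithreading (illustrative only; the long-running task is hypothetical): the heavy work runs in its own thread while the main thread stays responsive to the user.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *long_task(void *arg) {
        (void)arg;
        sleep(3);                        /* stands in for heavy processing */
        printf("long task finished\n");
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, long_task, NULL);

        /* The main thread keeps serving the user while the task runs. */
        for (int i = 0; i < 3; i++) {
            printf("still responsive...\n");
            sleep(1);
        }
        pthread_join(tid, NULL);
        return 0;
    }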
WHY USE MULTITHREADING ? 
 In a single-threaded application, one thread of execution must do everything. 
 If the application has several tasks to perform, those tasks are performed only when that single thread can get to them. 
 A single task that requires a lot of processing can make the entire application appear "sluggish" or unresponsive. 
 In a multithreaded application, each task can be performed by a separate thread. 
 If one thread is executing a long task, the rest of the application does not have to wait for it to finish.
MULTITHREADING MODELS 
 There are three multithreading models, i.e. three ways of relating user threads to kernel threads : 
One-to-one relationship. 
Many-to-one relationship. 
Many-to-many relationship.
ONE-TO-ONE RELATIONSHIP 
 The one-to-one model creates a separate kernel thread to handle each user thread. 
 The one-to-one model overcomes the problems of blocking system calls and of splitting processes across multiple CPUs that affect the many-to-one model (described below). 
 Most implementations of this model place a limit on how many threads can be created. 
 Linux, and Windows from Windows 95 through XP, implement the one-to-one model for threads.
MANY-TO-ONE RELATIONSHIP 
 In the many-to-one model, many user-level threads are all mapped onto a single kernel thread. 
 Thread management is handled by the thread library in user space, which is very efficient. 
 Because a single kernel thread can operate on only one CPU at a time, the many-to-one model does not allow individual processes to be split across multiple CPUs. 
 Green threads on Solaris and GNU Portable Threads implemented the many-to-one model in the past, but few systems continue to use it today.
MANY-TO-MANY RELATIONSHIP 
 Users have no restrictions on the number of threads created. 
 Blocking kernel system calls do not block the entire process. 
 Processes can be split across multiple processors. 
 The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads, combining the best features of the one-to-one and many-to-one models.
THREADING ISSUES 
 Several issues arise in multithreaded programs : 
The fork() and exec() system calls. 
Signal handling. 
Thread cancellation. 
Thread-specific data. 
Scheduler activations.
THE FORK() AND EXEC() SYSTEM CALLS 
 fork() is used to create a separate, duplicate process on UNIX systems. 
 Some UNIX systems provide two versions of fork() : one that duplicates all threads, and another that duplicates only the thread that invoked fork(). 
 If a thread invokes exec(), the program specified as a parameter to exec() replaces the entire process, including all of its threads. 
 If exec() is called immediately after forking, duplicating all threads is unnecessary, since the program passed to exec() will replace the whole process anyway.
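A sketch of the usual pattern in C (assuming POSIX; on Linux, fork() duplicates only the calling thread): calling exec() right after fork() replaces the whole child process image, so duplicating the other threads would have been wasted work.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();              /* duplicate the process */
        if (pid == 0) {
            /* Child: exec() replaces the whole process image, so any
             * threads that existed in the parent are irrelevant here. */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");            /* reached only if exec fails */
            return 1;
        }
        waitpid(pid, NULL, 0);           /* parent waits for the child */
        return 0;
    }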
SIGNAL HANDLING 
 When a signal arrives in a multithreaded process, it can be delivered to one thread, to every thread, or to a designated signal-handling thread; the best choice may depend on which specific signal is involved. 
 UNIX allows individual threads to indicate which signals they are accepting and which they are blocking. However, a signal is delivered to only one thread, generally the first thread that is accepting that particular signal. 
 Windows does not support signals, but they can be emulated using Asynchronous Procedure Calls (APCs). 
 APCs are delivered to specific threads, not processes.
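One common POSIX pattern, sketched below with illustrative names, directs a signal to a single designated thread: every thread blocks SIGUSR1, and one dedicated thread accepts it with sigwait().

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    static void *signal_thread(void *arg) {
        sigset_t *set = arg;
        int sig;
        sigwait(set, &sig);              /* blocks until SIGUSR1 is delivered */
        printf("signal %d handled by the dedicated thread\n", sig);
        return NULL;
    }

    int main(void) {
        sigset_t set;
        pthread_t tid;

        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        /* Block SIGUSR1 in the main thread; new threads inherit this mask. */
        pthread_sigmask(SIG_BLOCK, &set, NULL);

        pthread_create(&tid, NULL, signal_thread, &set);
        pthread_kill(tid, SIGUSR1);      /* send SIGUSR1 to the handler thread */
        pthread_join(tid, NULL);
        return 0;
    }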
THREAD CANCELLATION 
 Threads that are no longer needed may be cancelled by another thread in one of two ways : 
 Asynchronous cancellation cancels the thread immediately. 
 Deferred cancellation sets a flag indicating that the thread should cancel itself when convenient; it is then up to the cancelled thread to check this flag periodically and exit cleanly when it sees the flag set. 
 (Shared) resource allocation and inter-thread data transfers can be problematic with asynchronous cancellation.
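A short deferred-cancellation sketch with Pthreads (the worker and its loop are illustrative): pthread_cancel() only marks the thread, and the worker acts on the request at a cancellation point such as pthread_testcancel().

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg) {
        (void)arg;
        for (;;) {
            /* ... do a chunk of work ... */
            usleep(1000);                /* small pause so the loop is not a busy-wait */
            pthread_testcancel();        /* safe point to act on a pending cancel */
        }
        return NULL;                     /* not reached */
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        sleep(1);                        /* let the worker run for a moment */
        pthread_cancel(tid);             /* request deferred cancellation */
        pthread_join(tid, NULL);         /* wait until the worker has exited */
        printf("worker cancelled\n");
        return 0;
    }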
THREAD SPECIFIC DATA 
 Most data is shared among threads, and this is one of the major benefits of using threads in the first place. 
 Sometimes, however, threads also need thread-specific data. 
 Most major thread libraries (Pthreads, Win32, Java) provide support for thread-specific data, known as thread-local storage, or TLS. 
 Note that TLS is more like static data than local variables, because it does not cease to exist when a function ends.
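A minimal Pthreads sketch of thread-specific data (names are illustrative): both threads use the same key, but each pthread_getspecific() call returns the value that the calling thread stored.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_key_t key;

    static void *worker(void *arg) {
        pthread_setspecific(key, arg);               /* this thread's private value */
        int *mine = pthread_getspecific(key);
        printf("thread sees its own value: %d\n", *mine);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int a = 1, b = 2;

        pthread_key_create(&key, NULL);              /* one key, per-thread values */
        pthread_create(&t1, NULL, worker, &a);
        pthread_create(&t2, NULL, worker, &b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        pthread_key_delete(key);
        return 0;
    }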
SCHEDULER ACTIVATIONS 
 Many thread implementations provide a virtual processor as an interface between the user threads and the kernel threads, particularly in the many-to-many or two-tier models. 
 This virtual processor is known as a lightweight process (LWP). 
 There is a one-to-one correspondence between LWPs and kernel threads. 
 The number of kernel threads available (and hence the number of LWPs) may change dynamically. 
 The application (the user-level thread library) maps user threads onto available LWPs; kernel threads are scheduled onto the real processor(s) by the OS. 
 Scheduler activations provide upcalls, a communication mechanism from the kernel to the thread library, which the library uses to adjust the number of kernel threads it requires.
THREAD SCHEDULING 
 Scheduling is the method by which threads, processes, or data flows are given access to system resources (e.g. processor time, communication bandwidth). 
 Two aspects of thread scheduling are discussed here : 
Contention scope. 
Pthread scheduling.
CONTENTION SCOPE 
 One distinction between user-level and kernel-level threads lies in how they are scheduled. 
 On systems implementing the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP, a scheme known as process contention scope (PCS). 
 To decide which kernel thread to schedule onto a CPU, the kernel uses system contention scope (SCS) : the scheduler selects the runnable thread with the highest priority to run.
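In the Pthread API, the contention scope is chosen through a thread attribute. A small sketch (note that Linux supports only system scope, so requesting PTHREAD_SCOPE_PROCESS may fail there; the thread function is illustrative):

    #include <pthread.h>
    #include <stdio.h>

    static void *work(void *arg) { (void)arg; return NULL; }

    int main(void) {
        pthread_attr_t attr;
        pthread_t tid;
        int scope;

        pthread_attr_init(&attr);
        pthread_attr_getscope(&attr, &scope);        /* report the default scope */
        printf("default scope: %s\n",
               scope == PTHREAD_SCOPE_SYSTEM ? "system (SCS)" : "process (PCS)");

        /* Ask for system contention scope explicitly. */
        if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
            fprintf(stderr, "could not set system scope\n");

        pthread_create(&tid, &attr, work, NULL);
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }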
P-THREADS 
 The POSIX standard (IEEE 1003.1c) defines the specification for Pthreads, not the implementation. 
 Pthreads are available on Solaris, Linux, Mac OS X, Tru64, and via public-domain shareware for Windows. 
 Global variables are shared amongst all threads. 
 One thread can wait for the others to rejoin before continuing. 
 Pthreads begin execution in a specified function.
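A minimal Pthreads sketch of these points (the summation task is just an example): the new thread begins execution in runner(), updates a shared global, and main waits for it to rejoin with pthread_join() before printing the result.

    #include <pthread.h>
    #include <stdio.h>

    static int shared_total = 0;             /* global: visible to all threads */

    static void *runner(void *arg) {
        int n = *(int *)arg;
        for (int i = 1; i <= n; i++)
            shared_total += i;               /* only this thread touches it here */
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        int upper = 10;

        pthread_create(&tid, NULL, runner, &upper);  /* begin in runner() */
        pthread_join(tid, NULL);                     /* wait for it to rejoin */
        printf("sum 1..%d = %d\n", upper, shared_total);
        return 0;
    }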
THANK YOU..
