Processes and Threads in Windows Vista



  1. Processes and Threads in Windows Vista
  2. Agenda
     • Fundamental Concepts
     • IPC
     • Synchronization
     • Implementation
       – Process Creation
       – Scheduling
  3. Fundamental Concepts
     • In Windows Vista, processes are containers for programs.
     • Each process includes:
       – A virtual address space
       – Handles to kernel-mode objects
       – Threads and the resources needed for their execution
     • Each process has user-mode system data called the process environment block (PEB), which includes:
       – A list of loaded modules
       – The current working directory
       – Pointers to the process's heaps
  4. Fundamental Concepts
     • Threads are the kernel's abstraction for scheduling the CPU.
     • Threads can be affinitized to run only on certain processors.
     • Each thread has two separate call stacks: one for execution in user mode and one for kernel mode.
  5. Fundamental Concepts
     • There is also a thread environment block (TEB) that keeps user-mode data, including thread-local storage and other per-thread fields.
     • Another data structure, shared between kernel mode and user mode, is the user shared data, which contains various time values, version information, the amount of physical memory, and a number of shared flags.
  6. Fundamental Concepts: Processes
     • Processes are created from section objects, each of which describes a memory object backed by a file on disk.
     • Creating a process involves:
       – Mapping the section into the new process
       – Allocating virtual memory
       – Writing parameters and environment data
       – Duplicating file descriptors
       – Creating threads
  7. Fundamental Concepts: Jobs and Fibers
     • Definition: a job is a group of processes.
     • The main function of a job is to apply constraints to the threads its processes contain, such as:
       – Limiting resource use
       – Preventing threads from accessing system objects by enforcing a restricted token
     • Once a process is in a job, any processes its threads create are also in the job.
     • Problem: a process can belong to only one job, so conflicts would arise if multiple jobs attempted to manage the same process.
  8. Fundamental Concepts: Fibers
     • Definition: a fiber is a unit of execution that must be manually scheduled by the application.
     • Fibers are created by allocating a stack and a user-mode fiber data structure for storing registers and data; they can also be created independently of threads.
     • A fiber will not run until another running fiber in the thread explicitly calls SwitchToFiber.
     • Advantage: switching between fibers is easier and faster than switching between threads.
  9. Fundamental Concepts: Jobs and Fibers
     • Disadvantage: fibers need a lot of synchronization to make sure they do not interfere with each other.
     • Solution: create only as many threads as there are processors to run them, and affinitize the threads so that each runs only on a distinct set of available processors.
  10. Fundamental Concepts: Jobs and Fibers
  11. Fundamental Concepts: Threads
      • Every process starts out with one thread and can create more threads dynamically.
      • The OS always selects a thread to run, never a process.
      • Threads have scheduling states, whereas processes do not.
      • Each thread has an ID, which is taken from the same space as process IDs.
      • The system uses a handle table with pointer fields to look up a process or thread by ID.
  12. Fundamental Concepts: Threads
      • There are two types of threads: normal threads and system threads.
      • Normal thread:
        – Normally runs in user mode, but switches to kernel mode when it makes a system call.
        – Has two stacks: one for user mode and one for kernel mode.
      • System thread:
        – Runs only in kernel mode.
        – Is not associated with any user process.
  13. Fundamental Concepts: Threads
      • The thread CONTEXT includes the thread's register set, the kernel stack, a thread environment block, and a user stack in the address space of the thread's process.
      • Running threads use the access token of their containing process (or, in client/server scenarios, that of a client).
      • Threads are also the normal focal point for I/O.
      • Any thread can access all the objects that belong to its process.
  14. Fundamental Concepts
  15. InterProcess Communication (IPC): Overview
      • How does one process pass information to another in Windows Vista?
      • There are six main ways to pass information from process to process.
  16. InterProcess Communication (IPC) in Windows Vista
      Windows Vista provides mechanisms for facilitating communication and data sharing between applications. Applications that use IPC are typically categorized as clients or servers:
      – Client: an application or process that requests a service from some other application or process.
      – Server: an application or process that responds to client requests.
      As developers, we must choose suitable IPC methods for our applications.
  17. InterProcess Communication (IPC): Pipes
      There are two types: (anonymous) pipes and named pipes.
      Pipes have two modes:
      – Byte mode: works the same way as in UNIX. A child process can inherit a communication channel from its parent; data written to one end of the pipe can be read at the other. To exchange data in both directions (duplex operation), you must create two pipes.
      – Message mode: somewhat similar, but preserves message boundaries so that separate messages can be recognized. If we send 4 writes of 128 bytes each, they will be read as 4 separate 128-byte messages, not as one 512-byte message.
      Named pipes have the same two modes as regular pipes, but can also be used over a network:
      – Used to transfer data between unrelated processes on different computers.
      – Data is exchanged by performing read and write operations on the pipe.
  18. InterProcess Communication (IPC): Mailslots
      • A feature of the OS/2 operating system, implemented in Windows for compatibility.
      • Similar to pipes, but not identical: communication is one way. Two-way communication is still possible, because a process can be both a mailslot server and a mailslot client.
      • Can broadcast a message to many receivers instead of one (if they use the same mailslot name).
      • Can be used over a network, but delivery of messages is not guaranteed.
      • Mailslots and named pipes are implemented as file systems, which allows them to be accessed over the network using existing remote file system protocols.
  19. InterProcess Communication (IPC): Sockets
      • Connect processes on different machines; generally used in a networking context.
      • Provide two-way communication channels.
      • Can also be used to connect processes on the same machine, although this is more complicated than using pipes.
      • Reads and writes on a socket transfer data between the two processes.
      • Windows Sockets are based on the sockets first popularized by Berkeley Software Distribution (BSD).
  20. InterProcess Communication (IPC): Remote Procedure Calls
      • RPC enables applications to call functions remotely.
      • Operates between processes on a single computer or on different computers across a network.
      • Supports data conversion for different hardware architectures and for byte ordering between dissimilar environments.
      • In Windows, the transport can take many forms:
        – TCP/IP sockets
        – Named pipes
        – Advanced Local Procedure Call (ALPC), a message-passing facility in kernel mode
  21. InterProcess Communication (IPC): Shared Files
      • Processes can share objects, including section objects, which can be mapped into the virtual address spaces of different processes at the same time.
      • All writes done by one process then appear in the address spaces of the other processes.
  23. Critical Sections
      • The simplest type of thread synchronization object.
      • Can be used only by the threads of a single process (unlike a mutex).
      • Provide a slightly faster, more efficient mechanism for mutual-exclusion synchronization.
      • There is no way to tell whether a critical section has been abandoned.
      • Critical sections are not kernel-mode objects.
  24. Suppose one thread writes to a linked list and another reads from it. To prevent the two threads from accessing the list at exactly the same time, you can protect the list with a critical section. The following example uses a globally declared CCriticalSection object to demonstrate how:

      // Global data
      CCriticalSection g_cs;

      // Thread A                     // Thread B
      g_cs.Lock();                    g_cs.Lock();
      // Write to the linked list.    // Read from the linked list.
      g_cs.Unlock();                  g_cs.Unlock();
  25. Figure 17-3. Protecting a shared resource with a critical section.
  26. Mutexes
      • Mutexes are kernel-mode objects.
      • Used to gain exclusive access to a resource shared by two or more threads.
      • Can be used to synchronize threads running in the same process or in different processes.
      • Suppose two applications use a block of shared memory to exchange data. Inside that shared memory is a linked list that must be protected against concurrent thread accesses. A critical section won't work because it can't reach across process boundaries, but a mutex will do the job nicely. Here's what you do in each process before reading or writing the linked list:

        // Global data
        CMutex g_mutex(FALSE, _T("MyMutex"));

        g_mutex.Lock();
        // Read or write the linked list.
        g_mutex.Unlock();
  27. • The first parameter passed to the CMutex constructor specifies whether the mutex is initially locked (TRUE) or unlocked (FALSE).
      • The second parameter specifies the mutex's name, which is required if the mutex is used to synchronize threads in two different processes.
      • You pick the name, but both processes must specify the same name so that the two CMutex objects will reference the same mutex object in the Windows kernel.
      • Naturally, Lock blocks on a mutex locked by another thread, and Unlock frees the mutex so that others can lock it.
  28. Note that there is one other difference between mutexes and critical sections: if a thread locks a critical section and terminates without unlocking it, other threads waiting for the critical section to come free will block indefinitely. However, if a thread that locks a mutex fails to unlock it before terminating, the system deems the mutex to be "abandoned" and automatically frees it so that waiting threads can resume.
  29. Events
      • Events are a way of signaling one thread from another, allowing one thread to wait or sleep until it is signaled by another thread.
      • The producing thread, Thread A, generates some data and puts it in a shared working space. In this example, the consuming thread, Thread B, is sleeping on the event (waiting for the event to trigger):
  30. [Diagram: Thread A writes data into the shared working space while Thread B sleeps on the event.]
  31. Once the producing thread has finished writing data, it triggers the event. [Diagram: Thread A triggers the event.]
  32. This signals the consuming thread, Thread B, thereby waking it up. [Diagram: the event wakes Thread B.]
  33. Once the consuming thread, Thread B, has woken up, it starts doing work. The assumption is that the producing thread will no longer touch the data. [Diagram: Thread B processes the data.]
  34. Here's an example involving one thread (Thread A) that fills a buffer with data and another thread (Thread B) that does something with that data. Assume that Thread B must wait for a signal from Thread A saying that the buffer is initialized and ready to go. An auto-reset event is the perfect tool for the job:

      // Global data
      CEvent g_event;          // Auto-reset, initially nonsignaled

      // Thread A
      InitBuffer(&buffer);     // Initialize the buffer.
      g_event.SetEvent();      // Release Thread B.

      // Thread B
      g_event.Lock();          // Wait for the signal.
  35. Semaphores
      • A semaphore is like a mutex that multiple threads can hold at once.
      • Events, critical sections, and mutexes are "all or nothing" objects, in the sense that Lock blocks on them if any other thread has them locked. Semaphores are different.
      • Semaphores maintain resource counts representing the number of resources available.
      • Locking a semaphore decrements its resource count, and unlocking a semaphore increments the resource count.
      • Can be used to synchronize threads within a process or threads that belong to different processes.
  36. To limit the time that Lock will wait for the semaphore's resource count to become nonzero, you can pass a maximum wait time (in milliseconds, as always) to the Lock function.
      Figure 17-6. Using a semaphore to guard a shared resource.
  37. Glossary
      • Dynamic-link library (DLL): a dynamically loaded library shared by applications.
      • API: application programming interface; supports communication between processes.
      • Subsystem: provides the operating environment for applications.
      • Section object: represents a region of shared memory.
      • Process object: used to examine and control a process.
      • Thread object: used to control a thread.
      • Process Environment Block (PEB): holds data used throughout a process's lifetime.
      • Thread Environment Block (TEB): holds data for the running threads.
  38. Process Creation
      A new process is created when another process makes a Win32 CreateProcess call. Creating a new process involves several steps:
      1. Convert the Win32 pathname to an NT pathname.
      2. Open the EXE file and create a section object.
      3. Create the process object.
      4. Create the thread object.
      5. Perform checks.
      6. Shim the application if necessary.
  39. Scheduling: Contents
      1. Overview
      2. Scheduling Priorities
      3. Scheduling Algorithm
      4. Priority Boosting
      5. Priority Inversion
  40. Scheduling: Overview
      • Windows schedules threads, not processes.
      • The scheduler is called when:
        – The currently running thread blocks on a semaphore, mutex, event, I/O, etc.
        – The thread signals an object (e.g., does an up on a semaphore)
        – The quantum expires
        – An I/O operation completes
        – A timed wait expires
  41. Scheduling: Overview
      • Scheduling is preemptive, priority-based, and round-robin at the highest priority.
      • The scheduler tries to keep a thread on its ideal processor to improve performance, on both (a) Symmetric Multiprocessing (SMP) and (b) Non-Uniform Memory Access (NUMA) systems.
      • Processes and threads can specify an affinity mask to run only on certain processors: SetProcessAffinityMask(), SetThreadAffinityMask(), etc.
  42. Scheduling: Priorities
      • Threads are scheduled to run based on their scheduling priority; each thread is assigned one.
      • Priority levels range from 0 (lowest) to 31 (highest), with a corresponding 32 queues.
        – Priorities are divided into 6 priority classes, which apply to processes.
        – Within each class, there are 7 priority levels, which apply to the threads in that process.
      • Base priority (of a thread) = F(priority class, priority level) = constant.
      • Dynamic priority = base priority + boost amount; this is what determines which thread executes.
  43. Scheduling: Algorithm
      • The system treats all threads with the same priority as equal.
      • The system assigns time slices in round-robin fashion to all threads with the highest priority.
        – If none of these threads is ready to run, the system assigns time slices in round-robin fashion to all threads with the next highest priority.
        – If a higher-priority thread becomes ready to run, the system stops executing the lower-priority thread (without letting it finish its time slice) and assigns a full time slice to the higher-priority thread.
  44. Scheduling: Algorithm
      Figure 11-28. Thread priorities in Windows Vista.
  45. Scheduling: Priority Boosting
      • Applied only to non-realtime threads, in cases such as:
        – GUI foreground threads (GUI input, mouse messages, ...)
        – Threads waking from a wait (disk operation completion +1, keyboard input +6, sound +8, ...)
      • After raising a thread's dynamic priority, the scheduler reduces that priority by one level each time the thread completes a time slice, until the thread drops back to its base priority.
  46. Scheduling: Priority Inversion
      • Priority inversion occurs when a mutex or critical section held by a lower-priority thread delays the running of a higher-priority thread while both are contending for the same resource.
        – Thread 1 has high priority.
        – Thread 2 has medium priority.
        – Thread 3 has low priority.
      • Win32 solution: randomly boost the priority of the ready threads (in this case, the low-priority lock holders).
        – The low-priority threads run long enough to exit the critical section, and the high-priority thread can then enter it.
        – If a low-priority thread does not get enough CPU time to exit the critical section the first time, it will get another chance during the next round of scheduling.
  47. Summary
      • Fundamental Concepts
      • IPC
      • Synchronization
      • Implementation
        – Process Creation
        – Scheduling
      Q&A