§ Multitasking is a method by which multiple tasks share resources such as a CPU. On a computer with a single CPU, only one task is said to be running at any point in time, meaning the CPU is actively executing instructions for that task. Multitasking solves the problem of sharing the CPU by scheduling which task runs at any given moment and when a waiting task gets its turn.
§ Improved perceived responsiveness of the application.
§ Improved real-time performance of the application on multicore systems.
To run a program, the system needs to create an environment for task execution. This environment consists of memory resources, input/output capabilities, and access to various system resources, including kernel services. This environment is called a process.
In contrast to a process, which gets its own address space and context, a thread uses the address space of its parent process and shares part of its context. Independent threads created within a single process can therefore access the same data.
§ Threads need fewer system resources (creation time and physical resources) than processes.
§ Inter-thread communication is easier than interprocess communication.
§ Threads add uncertainty to your code.
§ Threads still introduce a significant amount of overhead to your process.
§ Operation objects
§ Grand Central Dispatch (GCD)
§ Idle-time notifications
§ Asynchronous functions
Threads in iOS
§ Cocoa threads
§ POSIX threads
Cocoa implements threads using the NSThread class.
POSIX threads provide a C-based interface for creating threads.
This technology can actually be used in any type of application (including Cocoa and Cocoa Touch applications) and might be more convenient if you are writing your software for multiple platforms. The POSIX routine you use to create threads is called, appropriately enough, pthread_create.
Thread entry routine
§ Creating an Autorelease Pool
§ Setting Up an Exception Handler
§ Setting Up a Run Loop
Atomic operations are a simple form of synchronization that work on simple data types. The advantage of atomic operations is that they do not block competing threads. For simple operations, such as incrementing a counter variable, this can lead to much better performance than taking a lock.
Memory barriers and volatile variables
§ A memory barrier acts like a fence, forcing the processor to complete any load and store operations positioned in front of the barrier before it is allowed to perform load and store operations positioned after the barrier.
§ Applying the volatile keyword to a variable forces the compiler to load that variable from memory each time it is used.
§ Recursive lock
§ Read-write lock
§ Distributed lock
§ Spin lock
§ Double-checked lock
A condition is another type of semaphore that allows threads to signal each other when a certain condition is true. Conditions are typically used to indicate the availability of a resource or to ensure that tasks are performed in a specific order. When a thread tests a condition, it blocks unless that condition is already true.
§ Direct messaging
§ Global variables, shared memory, and objects
§ Run loop sources
§ Ports and sockets
§ Message queues
§ Cocoa distributed objects
Operation objects
§ An operation object is a wrapper for a task that would normally be executed on a secondary thread.
§ This wrapper hides the thread-management aspects of performing the task, leaving you free to focus on the task itself.
Dispatch queues
§ With GCD, you define the task you want to perform and add it to a work queue, which handles the scheduling of your task on an appropriate thread.
§ Work queues take into account the number of available cores and the current load to execute your tasks more efficiently than you could do yourself using threads.