Chapter 6: Synchronization
Primary source: Silberschatz, 8th ed.
Chapter 6 Topics: Process Synchronization
 Background
 The Critical-Section Problem
 Peterson’s Solution
 Synchronization Hardware
 Semaphores
 Classic Problems of Synchronization
 Monitors
 Synchronization Examples
 Atomic Transactions
Objectives
After studying this material, students should be able to:
 Understand the critical-section problem and the various
solutions that can be used to guarantee the consistency
of shared data
 Present software and hardware solutions to the
critical-section problem
 Understand the concept of an atomic transaction and
describe mechanisms for guaranteeing
atomicity.
 Understand the classic problems of synchronization
Overview (1)
 OS protection:
 An independent process neither affects nor is affected by the
execution or data of other processes.
 "Concurrent processes"
 OS: able to run many processes at the same time
 Processes cooperate: sharing data, dividing tasks,
passing information, etc.
 A process => may affect other processes through data/information
that is deliberately shared
 Cooperating processes – a set of processes
designed to work together to
carry out a particular task.
Overview (2)
 Benefits of cooperation between processes
 Information sharing: files, DB => used jointly
 Computation speed-up: parallel processes
 Modularity: a large application => partitioned into many processes.
 Convenience: a collection of processes => a typical working environment.
 "Cooperating processes"
 How are the processes coordinated? Accessing/updating data
 Goal of the program/task: data integrity and consistency can be
guaranteed
Background
 Guaranteeing data consistency:
 Programs/tasks must produce correct results
every time
 Deterministic: the same input must give the same
result (according to the program's logic/algorithm).
 Example: Producer – Consumer
 Two processes: the producer => generates information;
the consumer => uses the information
 Information sharing: a buffer => a place to store
the data
 unbounded buffer: places no practical limit
on the size of the buffer
 bounded buffer: assumes there is a fixed
buffer size
Background
 Concurrent access to shared data can result in data
inconsistency.
 Maintaining data consistency requires mechanisms that
guarantee the orderly execution of cooperating
processes.
 Suppose that we want to provide a solution to the
consumer-producer problem that fills all the buffers. We
can do so by having an integer count that keeps
track of the number of full buffers.
 Initially, count is set to 0. It is incremented by the producer
after it produces a new buffer and decremented
by the consumer after it consumes a buffer.
Bounded Buffer (1)
 Buffer implementation:
 IPC: inter-process communication through messages to
read/write the buffer
 Shared memory: the programmer explicitly "declares"
the data that can be accessed jointly.
 A buffer of size n => can hold n items
 The producer puts data into the buffer => increments "counter"
(the number of items)
 The consumer takes data from the buffer => decrements
"counter"
 The buffer and "counter" => shared data (updated by 2 processes)
Bounded Buffer (2)
 Shared data
type item = … ;
var buffer: array [0..n-1] of item;
    in, out: 0..n-1;
    counter: 0..n;
    in, out, counter := 0;
 Producer process
repeat
    …
    produce an item in nextp
    …
    while counter = n do no-op;
    buffer[in] := nextp;
    in := (in + 1) mod n;
    counter := counter + 1;
until false;
Bounded Buffer (3)
 Consumer process
repeat
    while counter = 0 do no-op;
    nextc := buffer[out];
    out := (out + 1) mod n;
    counter := counter - 1;
    …
    consume the item in nextc
    …
until false;
Bounded Buffer (4)
 Is the operation guaranteed to be correct when the
processes run concurrently?
 Suppose: counter = 5
 Producer: counter := counter + 1;
 Consumer: counter := counter - 1;
 What is the final value of counter?
 Concurrent operation of P & C =>
 One high-level-language operation => a sequence of machine
instructions: "increment counter"
Load Reg1, Counter
Add Reg1, 1
Store Counter, Reg1
Bounded Buffer (5)
 "decrement counter"
Load Reg2, Counter
Subtract Reg2, 1
Store Counter, Reg2
 Execution of P & C depends on the scheduler (they can be interleaved)
 T0: Producer: Load Reg1, Counter (Reg1 = 5)
 T1: Producer: Add Reg1, 1 (Reg1 = 6)
 T2: Consumer: Load Reg2, Counter (Reg2 = 5)
 T3: Consumer: Subtract Reg2, 1 (Reg2 = 4)
 T4: Producer: Store Counter, Reg1 (Counter = 6)
 T5: Consumer: Store Counter, Reg2 (Counter = 4)
Producer
while (true) {
    /* produce an item and put in nextProduced */
    while (count == BUFFER_SIZE)
        ; // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}
Consumer
while (true) {
    while (count == 0)
        ; // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}
Race Condition
 Concurrent C & P
 The shared data "counter" can end up with the value 4, 5,
or 6
 The result can be wrong and inconsistent
 Race condition:
 A situation in which more than one process updates data
concurrently and the result depends heavily on the
order in which the processes get the CPU (run)
 The result is nondeterministic and not always correct
 Preventing race conditions: synchronize the processes when they
update shared data
Race Condition
 In the producer/consumer program we can see the statements count++ and
count-- which may be implemented in machine language as follows:
 count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1
 count-- could be implemented as
register2 = count
register2 = register2 - 1
count = register2
 If the instructions of count++ and count-- are executed concurrently,
it is hard to know the actual value of count, so the value of
count becomes inconsistent.
 Consider the following example:
 Consider this execution interleaving with “count = 5” initially:
S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6}
S5: consumer execute count = register2 {count = 4}
 In the example above, count ends up with two different values: 6 (when
count++ completes) and 4 (when count-- completes).
 This makes the value of count inconsistent.
 Note that the final value of count depends on which statement is executed
last.
 Therefore we need synchronization: an effort to make cooperating processes
execute in an orderly fashion, in order to prevent the situation called a
race condition (see the sketch below).
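To make the race concrete, here is a minimal sketch in C with POSIX threads (our illustration; it is not part of the original slides): two threads repeatedly increment and decrement a shared count. Without the mutex calls the final value varies from run to run; with them it is always 0.

#include <pthread.h>
#include <stdio.h>

/* Shared counter, plus a mutex protecting it in the synchronized version. */
static long count = 0;
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&count_lock);   /* remove these two calls to observe the race */
        count++;
        pthread_mutex_unlock(&count_lock);
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&count_lock);
        count--;
        pthread_mutex_unlock(&count_lock);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    /* With the mutex the result is always 0; without it, it varies. */
    printf("count = %ld\n", count);
    return 0;
}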
Synchronization
 Synchronization:
 Coordinating access to shared data, e.g. so that only one process
can use a shared variable at a time.
 For example, the operations on the variable "counter" must be guaranteed
to execute as a single unit (atomically):
 counter := counter + 1;
 counter := counter - 1;
 Synchronization is an important issue in the
design/implementation of an OS (shared resources, data, and
multitasking).
The Critical-Section Problem
 The problem is caused by a race condition among processes that run
concurrently without synchronization.
 The final value depends on which process executes last.
 How do we deal with race conditions?
 The key is to find a way to prevent more than one process
from writing to or reading the same data or file at the
same time.
 We need mutual exclusion: a guarantee that if one
process is using a shared variable or file (also used by
another process), the other processes are excluded from doing the same work.
 Several processes have a code segment in which, if that segment is
executed, the processes may modify shared variables, update
a table, write to a file, and so on.
 This code segment is called the critical section.
 Such a situation can lead to the danger of a race condition.
The Critical-Section Problem
 n processes try to use shared data at the same time
 Each process has code that accesses/manipulates that shared
data => its "critical section"
 Problem: guarantee that while one process is
"executing" in its "critical section", no other process is
allowed to enter the critical-section "code" of that process.
 Structure of process Pi
Solution to Critical-Section Problem
 The solution to the critical-section problem is to
design a protocol that the processes can
use to cooperate.
 Each process must 'request permission' to enter its critical
section.
 The section of code implementing this request is called the
entry section.
 The end of the critical section is called the exit section.
 The remaining code is called the remainder section.
Solution to the Critical-Section Problem
 Idea:
 Ensure "exclusive" use of the shared
variable
 Guarantee that other processes can still use the shared
variable
 A solution to the "critical section problem" must satisfy:
1. Mutual exclusion: if process Pi is "executing"
in its "critical section" (of process Pi), then no
other process may "execute" in its own
critical section.
2. Progress: if no process is executing in its
critical section and more than one
process wants to enter its critical section, then
the selection of which process may enter the critical section
cannot be postponed indefinitely.
Solution (cont.)
3. Bounded waiting: there is a bound on how
long a process must wait for its turn
to access the "critical section" while
other processes are being granted access
to the critical section.
 Guarantees that a process can eventually access the "critical
section" (no starvation: a process appearing
to wait forever for its request to enter the
critical section to be granted).
 No assumption is made about the relative
execution speed of the n processes.
Solution to Critical-Section Problem
 A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the
processes that will enter the critical section next cannot be postponed
indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the N processes
Solving the Critical-Section Problem: Two-Process Solutions
 There are two kinds of solutions to the critical-section problem:
 Software solutions
 Using algorithms whose correctness does not
depend on any assumption other than that each process runs
at a nonzero speed
 Hardware solutions
 Relying on certain machine instructions, for example
disabling interrupts or locking a particular variable.
A Simple Solution: the Two-Process Case
 Only 2 processes
 General structure of the program code of Pi and Pj:
 Software solution: design a program algorithm that solves the critical-
section problem
 The processes may use "common variables" to build that algorithm.
Algorithm 1
 Shared variables:
 int turn;
initially turn = 0
 turn == i => Pi may enter its critical section
 Process Pi
do {
    while (turn != i) ;
        critical section
    turn = j;
        remainder section
} while (1);
 Mutual exclusion is satisfied, but progress is not (strict
alternation: if Pj never wants to enter, Pi cannot enter twice in a row)
Algorithm 2
 Shared variables
 boolean flag[2];
initially flag[0] = flag[1] = false.
 flag[i] = true => Pi is ready to enter its critical section
 Process Pi
do {
    flag[i] = true;
    while (flag[j]) ;
        critical section
    flag[i] = false;
        remainder section
} while (1);
 Mutual exclusion is satisfied, but progress is not (if both
processes set their flags at the same time, both loop forever).
Algorithm 3
 Combines the shared variables of algorithms 1 and 2.
 Process Pi
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j) ;
        critical section
    flag[i] = false;
        remainder section
} while (1);
 All three requirements are met; this solves the critical-section problem
for two processes (Peterson's algorithm)
Bakery Algorithm
Critical section for n processes
 Before a process may enter its "critical
section", it must obtain a "number"
(ticket).
 The process holding the smallest number may enter the critical
section.
 If processes Pi and Pj receive the same number: if i < j,
then Pi is served first; otherwise Pj is served first
 The numbering scheme always generates numbers in nondecreasing
order, e.g. 1,2,3,3,3,3,4,5...
Bakery Algorithm (2)
 Notation: < is the lexicographical order on (ticket #, process id #)
 (a,b) < (c,d) if a < c, or if a = c and b < d
 max(a0, …, an-1) is a number k such that k ≥ ai for
i = 0, …, n – 1
 Shared data
var choosing: array [0..n – 1] of boolean;
    number: array [0..n – 1] of integer;
 Initialized: choosing => false; number => 0
Bakery Algorithm (3)
do {
    choosing[i] = true;
    number[i] = max(number[0], number[1], …, number[n – 1]) + 1;
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j]) ;
        while ((number[j] != 0) &&
               ((number[j], j) < (number[i], i))) ;
    }
        critical section
    number[i] = 0;
        remainder section
} while (1);
Hardware Synchronization
 Requires hardware (processor) support
 In the form of a special "instruction": test-and-set
 Guarantees an atomic operation (a single unit): test a value and change
that value
 Test-and-Set can be described by the following code:
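The code figure for this slide is not present in the text; the definition below is the one given on the TestAndSet Instruction slide later in this chapter, with comments added:

boolean TestAndSet (boolean *target)
{
    boolean rv = *target;   /* remember the old value of the lock      */
    *target = TRUE;         /* set the lock                            */
    return rv;              /* both steps execute as one atomic action */
}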
Test-and-Set (mutual exclusion)
 Mutual exclusion can then be implemented:
 Use shared data,
the variable lock: boolean (initially false)
 lock guards the critical section
 Process Pi:
do {
    while (TestAndSet(&lock)) ;
        critical section
    lock = false;
        remainder section
} while (1);
Semaphore
 A synchronization tool that does not require busy
waiting
 Semaphore S – integer variable
 Access to the variable S is guaranteed through two atomic operations:
 wait(S):  while S ≤ 0 do no-op;
           S := S – 1;
 signal(S): S := S + 1;
Example: n processes
 Shared variables
 var mutex : semaphore
 initially mutex = 1
 Process Pi
do {
    wait(mutex);
        critical section
    signal(mutex);
        remainder section
} while (1);
Semaphore Implementation
 A semaphore is defined as a record:
typedef struct {
    int value;
    struct process *L;
} semaphore;
 Two simple operations are assumed:
 block suspends the process that invokes it
 wakeup(P) resumes the execution of a blocked process P
Semaphore Implementation (2)
 The semaphore operations then become:
wait(S):
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block;
    }
signal(S):
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
Classic Problems of Synchronization
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem
Bounded-Buffer Problem
 Shared data
semaphore full, empty, mutex;
Initially:
full = 0, empty = n, mutex = 1
Bounded-Buffer Problem: Producer-Consumer
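The code figure for this slide is not present in the text; the sketch below follows the semaphore-based producer and consumer shown later in this chapter (the buffer-handling comments stand in for the actual buffer code):

/* Producer */
do {
    /* produce an item in nextp */
    wait(empty);          /* wait for a free slot           */
    wait(mutex);          /* enter the critical section     */
    /* add nextp to the buffer */
    signal(mutex);
    signal(full);         /* one more full slot             */
} while (1);

/* Consumer */
do {
    wait(full);           /* wait for a full slot           */
    wait(mutex);
    /* remove an item from the buffer into nextc */
    signal(mutex);
    signal(empty);        /* one more free slot             */
    /* consume the item in nextc */
} while (1);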
Readers-Writers Problem
 Shared data
semaphore mutex, wrt;
Initially
mutex = 1, wrt = 1, readcount = 0
Readers-Writers Problem (2)
 Writer process
wait(wrt);
…
writing is performed
…
signal(wrt);
 Reader process
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
…
reading is performed
…
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);
Dining-Philosophers Problem
 Shared data
semaphore chopstick[5];
All initialized to 1
Dining-Philosophers Problem
 Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    …
    eat
    …
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    …
    think
    …
} while (1);
High-Level Solutions
 Motivation:
 The wait(S) and signal(S) operations are scattered throughout the program code =>
direct manipulation of the semaphore data structure
 What if the HLL (programming) environment provided help
for synchronization?
 High-level languages provide special constructs to
guarantee synchronization between processes and threads
 For example:
 Monitors & condition variables
 Conditional critical regions
Monitor
 A monitor synchronizes a number of processes:
 at any moment only one is active inside the monitor and the others
wait
 Part of the programming language (e.g. Java).
 It is the compiler's job to guarantee this by
translating it into "low-level synchronization" (semaphores,
instruction sets, etc.)
 It is enough to state (declare) that a section/function
is a monitor => this requires that only one process
be inside that monitor (section) at a time
Monitor (2)
Monitor (3)
 Processes must be synchronized inside the monitor:
 Satisfies the critical-section solution.
 A process may wait inside the monitor.
 Mechanism: there are (condition) variables on which a process
can test/wait before accessing the "critical
section"
var x, y: condition
Monitor (4)
 Condition: makes it easier for the programmer to write code in the
monitor.
For example: var x: condition;
 A condition variable can only be manipulated with the operations wait()
and signal()
 x.wait(): if called by a process, that process is
suspended until another process calls x.signal()
 x.signal() resumes exactly one process that is
waiting (suspended) (if no process is waiting, it
has no effect at all)
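As a rough analogue (our illustration, not part of the slides): with POSIX threads, x.wait() and x.signal() correspond approximately to pthread_cond_wait() and pthread_cond_signal() used under a mutex that plays the role of the monitor lock; condition_holds() below is a hypothetical predicate.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;  /* the monitor's implicit lock */
static pthread_cond_t  x            = PTHREAD_COND_INITIALIZER;   /* condition variable x        */

bool condition_holds(void);   /* hypothetical predicate, defined elsewhere */

void waiting_side(void) {
    pthread_mutex_lock(&monitor_lock);
    while (!condition_holds())
        pthread_cond_wait(&x, &monitor_lock);  /* like x.wait(): releases the lock and suspends */
    /* ... proceed inside the "monitor" ... */
    pthread_mutex_unlock(&monitor_lock);
}

void signalling_side(void) {
    pthread_mutex_lock(&monitor_lock);
    /* ... make condition_holds() true ... */
    pthread_cond_signal(&x);                   /* like x.signal(): wakes at most one waiter */
    pthread_mutex_unlock(&monitor_lock);
}

Unlike in a Hoare-style monitor, the signalling thread keeps running after pthread_cond_signal(), so the waiter rechecks its condition in a loop.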
Monitor Schematic
Solving the Critical-Section Problem: Peterson's Solution
 Two process solution
 Assume that the LOAD and STORE instructions are atomic; that is,
cannot be interrupted.
 The two processes share two variables:
 int turn;
 Boolean flag[2]
 The variable turn indicates whose turn it is to enter the critical
section.
 The flag array is used to indicate if a process is ready to enter the
critical section. flag[i] = true implies that process Pi is ready!
PROCESS SYNCHRONIZATION
Here's an example of a simple piece of code containing the components
required in a critical section (a two-process software solution):
do {
    while (turn != i);    /* entry section */
    /* critical section */
    turn = j;             /* exit section */
    /* remainder section */
} while (TRUE);
Algorithm for Process Pi
do {
flag[i] = TRUE;
turn = j;
while (flag[j] && turn == j);
critical section
flag[i] = FALSE;
remainder section
} while (TRUE);
Synchronization Hardware
 Many systems provide hardware support for critical
section code
 Uniprocessors – could disable interrupts
 Currently running code would execute without
preemption
 Generally too inefficient on multiprocessor systems
 Operating systems using this not broadly scalable
 Modern machines provide special atomic hardware
instructions
 Atomic = non-interruptable
 Either test memory word and set value
 Or swap contents of two memory words
Solution to Critical-section Problem Using Locks
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
TestAndSet Instruction
 Definition:
boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
Solution using TestAndSet
 Shared boolean variable lock, initialized to FALSE.
 Solution:
do {
while ( TestAndSet (&lock ))
; // do nothing
// critical section
lock = FALSE;
// remainder section
} while (TRUE);
Swap Instruction
 Definition:
void Swap (boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
Solution using Swap
 Shared Boolean variable lock initialized to FALSE; Each process has a
local Boolean variable key
 Solution:
do {
key =TRUE;
while ( key == TRUE)
Swap (&lock, &key );
// critical section
lock = FALSE;
// remainder section
} while (TRUE);
Bounded-waiting Mutual Exclusion with TestandSet()
do {
waiting[i] = TRUE;
key = TRUE;
while (waiting[i] && key)
key = TestAndSet(&lock);
waiting[i] = FALSE;
// critical section
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = FALSE;
else
waiting[j] = FALSE;
// remainder section
} while (TRUE);
Semaphore
 A synchronization tool that does not require busy waiting
 Semaphore S – integer variable
 Two standard operations modify S: wait() and signal()
 Originally called P() and V()
 Less complicated
 Can only be accessed via two indivisible (atomic) operations
 wait (S) {
while S <= 0
; // no-op
S--;
}
 signal (S) {
S++;
}
Semaphore as General Synchronization Tool
 Counting semaphore – integer value can range over an unrestricted domain
 Binary semaphore – integer value can range only between 0
and 1; can be simpler to implement
 Also known as mutex locks
 Can implement a counting semaphore S as a binary semaphore
 Provides mutual exclusion
Semaphore mutex; // initialized to 1
do {
wait (mutex);
// Critical Section
signal (mutex);
// remainder section
} while (TRUE);
Semaphore Implementation
 Must guarantee that no two processes can execute wait() and
signal() on the same semaphore at the same time
 Thus, the implementation becomes a critical-section problem in which the wait and
signal code are placed in the critical section
 Could now have busy waiting in critical section implementation
 But implementation code is short
 Little busy waiting if critical section rarely occupied
 Note that applications may spend lots of time in critical sections and therefore
this is not a good solution.
Semaphore Implementation with no Busy waiting
 With each semaphore there is an associated waiting queue. Each
entry in a waiting queue has two data items:
 value (of type integer)
 pointer to next record in the list
 Two operations:
 block – place the process invoking the operation on the
appropriate waiting queue.
 wakeup – remove one of processes in the waiting queue and place
it in the ready queue.
Semaphore Implementation with no Busy waiting (Cont.)
 Implementation of wait:
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
 Implementation of signal:
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
Deadlock and Starvation
 Deadlock – two or more processes are waiting indefinitely for an event that can be
caused by only one of the waiting processes
 Let S and Q be two semaphores initialized to 1
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
. .
. .
signal (S); signal (Q);
signal (Q); signal (S);
 Starvation – indefinite blocking. A process may never be removed from the
semaphore queue in which it is suspended
 Priority Inversion - Scheduling problem when lower-priority process holds a lock
needed by higher-priority process
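A runnable version of the S/Q scenario above, using POSIX unnamed semaphores (our illustration; with unlucky interleaving the two threads block forever on each other's semaphore):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t S, Q;   /* both initialized to 1, as in the slide */

static void *p0(void *arg) {
    sem_wait(&S);
    sem_wait(&Q);     /* blocks forever once P1 holds Q and waits for S */
    /* ... */
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

static void *p1(void *arg) {
    sem_wait(&Q);
    sem_wait(&S);
    /* ... */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);   /* with unlucky scheduling, this join never returns */
    pthread_join(t1, NULL);
    puts("no deadlock this run");
    return 0;
}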
Classical Problems of Synchronization
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem
Bounded-Buffer Problem
 N buffers, each can hold one item
 Semaphore mutex initialized to the value 1
 Semaphore full initialized to the value 0
 Semaphore empty initialized to the value N.
Bounded Buffer Problem (Cont.)
 The structure of the producer process
do {
// produce an item in nextp
wait (empty);
wait (mutex);
// add the item to the buffer
signal (mutex);
signal (full);
} while (TRUE);
Bounded Buffer Problem (Cont.)
 The structure of the consumer process
do {
wait (full);
wait (mutex);
// remove an item from buffer to nextc
signal (mutex);
signal (empty);
// consume the item in nextc
} while (TRUE);
Readers-Writers Problem
 A data set is shared among a number of concurrent processes
 Readers – only read the data set; they do not perform any updates
 Writers – can both read and write
 Problem – allow multiple readers to read at the same time. Only one
single writer can access the shared data at the same time
 Shared Data
 Data set
 Semaphore mutex initialized to 1
 Semaphore wrt initialized to 1
 Integer readcount initialized to 0
Readers-Writers Problem (Cont.)
 The structure of a writer process
do {
wait (wrt) ;
// writing is performed
signal (wrt) ;
} while (TRUE);
Readers-Writers Problem (Cont.)
 The structure of a reader process
do {
wait (mutex) ;
readcount ++ ;
if (readcount == 1)
wait (wrt) ;
signal (mutex) ;
// reading is performed
wait (mutex) ;
readcount -- ;
if (readcount == 0)
signal (wrt) ;
signal (mutex) ;
} while (TRUE);
Dining-Philosophers Problem
 Shared data
 Bowl of rice (data set)
 Semaphore chopstick [5] initialized to 1
Dining-Philosophers Problem (Cont.)
 The structure of Philosopher i:
do {
wait ( chopstick[i] );
wait ( chopstick[ (i + 1) % 5] );
// eat
signal ( chopstick[i] );
signal (chopstick[ (i + 1) % 5] );
// think
} while (TRUE);
Problems with Semaphores
 Incorrect use of semaphore operations:
 signal (mutex) …. wait (mutex)
 wait (mutex) … wait (mutex)
 Omitting of wait (mutex) or signal (mutex) (or both)
Monitors
 A high-level abstraction that provides a convenient and effective mechanism
for process synchronization
 Only one process may be active within the monitor at a time
monitor monitor-name
{
// shared variable declarations
procedure P1 (…) { …. }
…
procedure Pn (…) {……}
Initialization code ( ….) { … }
…
}
Schematic view of a Monitor
Condition Variables
 condition x, y;
 Two operations on a condition variable:
 x.wait () – a process that invokes the operation is
suspended.
 x.signal () – resumes one of processes (if any) that
invoked x.wait ()
Monitor with Condition Variables
Solution to Dining Philosophers
monitor DP
{
enum { THINKING, HUNGRY, EATING } state [5] ;
condition self [5];
void pickup (int i) {
state[i] = HUNGRY;
test(i);
if (state[i] != EATING) self[i].wait();
}
void putdown (int i) {
state[i] = THINKING;
// test left and right neighbors
test((i + 4) % 5);
test((i + 1) % 5);
}
Solution to Dining Philosophers (cont)
void test (int i) {
if ( (state[(i + 4) % 5] != EATING) &&
(state[i] == HUNGRY) &&
(state[(i + 1) % 5] != EATING) ) {
state[i] = EATING ;
self[i].signal () ;
}
}
initialization_code() {
for (int i = 0; i < 5; i++)
state[i] = THINKING;
}
}
Solution to Dining Philosophers (cont)
 Each philosopher i invokes the operations pickup()
and putdown() in the following sequence:
DiningPhilosophers.pickup (i);
EAT
DiningPhilosophers.putdown (i);
Monitor Implementation Using
Semaphores
 Variables
semaphore mutex; // (initially = 1)
semaphore next; // (initially = 0)
int next_count = 0;
 Each procedure F will be replaced by
wait(mutex);
…
body of F;
…
if (next_count > 0)
    signal(next);
else
    signal(mutex);
 Mutual exclusion within a monitor is ensured.
Monitor Implementation
 For each condition variable x, we have:
semaphore x_sem; // (initially = 0)
int x_count = 0;
 The operation x.wait can be implemented as:
x_count++;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
wait(x_sem);
x_count--;
Monitor Implementation
 The operation x.signal can be implemented as:
if (x_count > 0) {
next_count++;
signal(x_sem);
wait(next);
next_count--;
}
A Monitor to Allocate Single Resource
monitor ResourceAllocator
{
boolean busy;
condition x;
void acquire(int time) {
if (busy)
x.wait(time);
busy = TRUE;
}
void release() {
busy = FALSE;
x.signal();
}
initialization code() {
busy = FALSE;
}
}
Synchronization Examples
 Solaris
 Windows XP
 Linux
 Pthreads
Solaris Synchronization
 Implements a variety of locks to support multitasking,
multithreading (including real-time threads), and
multiprocessing
 Uses adaptive mutexes for efficiency when protecting data
from short code segments
 Uses condition variables and readers-writers locks when
longer sections of code need access to data
 Uses turnstiles to order the list of threads waiting to acquire
either an adaptive mutex or reader-writer lock
Windows XP Synchronization
 Uses interrupt masks to protect access to global
resources on uniprocessor systems
 Uses spinlocks on multiprocessor systems
 Also provides dispatcher objects which may act as
either mutexes or semaphores
 Dispatcher objects may also provide events
 An event acts much like a condition variable
Linux Synchronization
 Linux:
 Prior to kernel version 2.6, disables interrupts to implement
short critical sections
 Version 2.6 and later, fully preemptive
 Linux provides:
 semaphores
 spin locks
Pthreads Synchronization
 Pthreads API is OS-independent
 It provides:
 mutex locks
 condition variables
 Non-portable extensions include:
 read-write locks
 spin locks
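A brief sketch of the primitives listed above (our illustration, not taken from the slides): a portable mutex lock protecting a critical section and, as one of the extensions, a spin lock that busy-waits instead of blocking.

#include <pthread.h>

/* Portable: mutex lock. */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

/* Extension: spin lock (busy-waits instead of blocking). */
static pthread_spinlock_t s;

void init_locks(void) {
    pthread_spin_init(&s, PTHREAD_PROCESS_PRIVATE);  /* spin locks need explicit initialization */
}

void with_mutex(int *shared) {
    pthread_mutex_lock(&m);      /* entry section   */
    (*shared)++;                 /* critical section */
    pthread_mutex_unlock(&m);    /* exit section    */
}

void with_spinlock(int *shared) {
    pthread_spin_lock(&s);
    (*shared)++;
    pthread_spin_unlock(&s);
}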
Atomic Transactions
 System Model
 Log-based Recovery
 Checkpoints
 Concurrent AtomicTransactions
System Model
 Assures that operations happen as a single logical unit of work, in its
entirety, or not at all
 Related to field of database systems
 Challenge is assuring atomicity despite computer system failures
 Transaction - collection of instructions or operations that performs single
logical function
 Here we are concerned with changes to stable storage – disk
 Transaction is series of read and write operations
 Terminated by commit (transaction successful) or abort (transaction
failed) operation
 Aborted transaction must be rolled back to undo any changes it
performed
Types of Storage Media
 Volatile storage – information stored here does not survive system crashes
 Example: main memory, cache
 Nonvolatile storage – Information usually survives crashes
 Example: disk and tape
 Stable storage – Information never lost
 Not actually possible, so approximated via replication or RAID to devices
with independent failure modes
Goal is to assure transaction atomicity where failures cause loss of
information on volatile storage
Log-Based Recovery
 Record to stable storage information about all modifications by a transaction
 Most common is write-ahead logging
 Log on stable storage, each log record describes single transaction write
operation, including
 Transaction name
 Data item name
 Old value
 New value
 <Ti starts> written to log when transaction Ti starts
 <Ti commits> written when Ti commits
 Log entry must reach stable storage before operation on data occurs
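As an illustration only (the record layout and the append_to_stable_log() helper are our assumptions, not from the slides), a log record and the write-ahead rule might look like this in C:

#include <stdio.h>

struct log_record {
    char transaction[16];   /* transaction name, e.g. "T0" */
    char item[16];          /* data item name              */
    int  old_value;         /* value before the write      */
    int  new_value;         /* value after the write       */
};

/* Hypothetical helper that forces the record onto stable storage. */
void append_to_stable_log(const struct log_record *r);

/* Write-ahead rule: the log record reaches stable storage
   before the data item itself is modified. */
void logged_write(const char *t, const char *item, int *data, int new_value) {
    struct log_record r;
    snprintf(r.transaction, sizeof r.transaction, "%s", t);
    snprintf(r.item, sizeof r.item, "%s", item);
    r.old_value = *data;
    r.new_value = new_value;
    append_to_stable_log(&r);   /* log first ...              */
    *data = new_value;          /* ... then perform the write */
}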
Log-Based Recovery Algorithm
 Using the log, system can handle any volatile memory errors
 Undo(Ti) restores value of all data updated by Ti
 Redo(Ti) sets values of all data in transaction Ti to new values
 Undo(Ti) and redo(Ti) must be idempotent
 Multiple executions must have the same result as one execution
 If system fails, restore state of all updated data via log
 If log contains <Ti starts> without <Ti commits>, undo(Ti)
 If log contains <Ti starts> and <Ti commits>, redo(Ti)
Checkpoints
 Log could become long, and recovery could take long
 Checkpoints shorten log and recovery time.
 Checkpoint scheme:
1. Output all log records currently in volatile storage to stable storage
2. Output all modified data from volatile to stable storage
3. Output a log record <checkpoint> to the log on stable storage
 Now recovery only includes Ti such that Ti started executing before the
most recent checkpoint, and all transactions after Ti. All other transactions
are already on stable storage
Concurrent Transactions
 Must be equivalent to serial execution – serializability
 Could perform all transactions in critical section
 Inefficient, too restrictive
 Concurrency-control algorithms provide serializability
Serializability
 Consider two data items A and B
 Consider transactions T0 and T1
 Execute T0, T1 atomically
 Execution sequence called schedule
 Atomically executed transaction order called serial schedule
 For N transactions, there are N! valid serial schedules
Schedule 1: T0 then T1
Nonserial Schedule
 A nonserial schedule allows overlapped execution
 Resulting execution not necessarily incorrect
 Consider schedule S, operations Oi, Oj
 Conflict if access same data item, with at least one write
 If Oi, Oj consecutive and operations of different transactions & Oi and Oj
don’t conflict
 Then S’ with swapped order Oj Oi equivalent to S
 If S can become S’ via swapping nonconflicting operations
 S is conflict serializable
Schedule 2: Concurrent Serializable Schedule
Locking Protocol
 Ensure serializability by associating lock with each data item
 Follow locking protocol for access control
 Locks
 Shared – Ti has shared-mode lock (S) on item Q; Ti can read Q but not write Q
 Exclusive – Ti has exclusive-mode lock (X) on Q; Ti can read and write Q
 Require every transaction on item Q acquire appropriate lock
 If lock already held, new request may have to wait
 Similar to readers-writers algorithm
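The shared (S) and exclusive (X) modes above map directly onto POSIX read-write locks; a small sketch (our comparison, not from the slide):

#include <pthread.h>

static pthread_rwlock_t q_lock = PTHREAD_RWLOCK_INITIALIZER;
static int Q;                       /* the data item */

int read_Q(void) {                  /* shared-mode (S) lock: many readers at once    */
    pthread_rwlock_rdlock(&q_lock);
    int v = Q;
    pthread_rwlock_unlock(&q_lock);
    return v;
}

void write_Q(int v) {               /* exclusive-mode (X) lock: one writer, no readers */
    pthread_rwlock_wrlock(&q_lock);
    Q = v;
    pthread_rwlock_unlock(&q_lock);
}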
Two-phase Locking Protocol
 Generally ensures conflict serializability
 Each transaction issues lock and unlock requests in two phases
 Growing – obtaining locks
 Shrinking – releasing locks
 Does not prevent deadlock
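A sketch of the two phases with ordinary locks (data items A and B and the transfer() transaction are our own illustration):

#include <pthread.h>

static pthread_mutex_t lock_A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_B = PTHREAD_MUTEX_INITIALIZER;
static int A, B;

void transfer(int amount) {
    /* Growing phase: acquire every lock the transaction needs. */
    pthread_mutex_lock(&lock_A);
    pthread_mutex_lock(&lock_B);

    A -= amount;                  /* the transaction's reads and writes */
    B += amount;

    /* Shrinking phase: after the first unlock, no new lock may be acquired. */
    pthread_mutex_unlock(&lock_B);
    pthread_mutex_unlock(&lock_A);
    /* Note: 2PL alone does not prevent deadlock; another transaction
       locking B then A can deadlock with this one. */
}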
Timestamp-based Protocols
 Select order among transactions in advance – timestamp-ordering
 Transaction Ti is associated with timestamp TS(Ti) before Ti starts
 TS(Ti) < TS(Tj) if Ti entered the system before Tj
 TS can be generated from system clock or as logical counter incremented at
each entry of transaction
 Timestamps determine serializability order
 If TS(Ti) < TS(Tj), the system must ensure the produced schedule is equivalent to a serial
schedule where Ti appears before Tj
Timestamp-based Protocol Implementation
 Data item Q gets two timestamps
 W-timestamp(Q) – largest timestamp of any transaction that executed write(Q)
successfully
 R-timestamp(Q) – largest timestamp of successful read(Q)
 Updated whenever read(Q) or write(Q) executed
 Timestamp-ordering protocol assures any conflicting read and write executed in
timestamp order
 Suppose Ti executes read(Q)
 If TS(Ti) < W-timestamp(Q), Ti needs to read a value of Q that was already
overwritten
 read operation rejected and Ti rolled back
 If TS(Ti) ≥ W-timestamp(Q)
 read executed, R-timestamp(Q) set to max(R-timestamp(Q), TS(Ti))
Timestamp-ordering Protocol
 Suppose Ti executes write(Q)
 If TS(Ti) < R-timestamp(Q), the value of Q produced by Ti was needed previously and Ti
assumed it would never be produced
 Write operation rejected, Ti rolled back
 If TS(Ti) < W-timestamp(Q), Ti is attempting to write an obsolete value of Q
 Write operation rejected and Ti rolled back
 Otherwise, write executed
 Any rolled-back transaction Ti is assigned a new timestamp and restarted
 Algorithm ensures conflict serializability and freedom from deadlock
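A compact sketch of the read and write checks just described (the struct layout is an assumption; returning false stands in for rejecting the operation and rolling Ti back):

#include <stdbool.h>

struct item { int value; int r_ts; int w_ts; };   /* R-timestamp(Q), W-timestamp(Q) */

static int max(int a, int b) { return a > b ? a : b; }

/* Returns false if Ti must be rolled back (and restarted with a new timestamp). */
bool ts_read(struct item *q, int ts_ti, int *out) {
    if (ts_ti < q->w_ts)            /* Q was already overwritten by a younger writer */
        return false;
    *out = q->value;
    q->r_ts = max(q->r_ts, ts_ti);
    return true;
}

bool ts_write(struct item *q, int ts_ti, int value) {
    if (ts_ti < q->r_ts)            /* a younger reader already needed the old value */
        return false;
    if (ts_ti < q->w_ts)            /* Ti would write an obsolete value of Q         */
        return false;
    q->value = value;
    q->w_ts = ts_ti;
    return true;
}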
Schedule Possible Under Timestamp Protocol
End of Chapter 6

09 sinkronisasi proses

  • 1.
    Bab 6: Sinkronisasi SumberUtama: Silberschatz ed.8
  • 2.
    Materi Bab 6:Sinkronisasi Proses  Background  The Critical-Section Problem  Peterson’s Solution  Synchronization Hardware  Semaphores  Classic Problems of Synchronization  Monitors  Synchronization Examples  Atomic Transactions
  • 3.
    Objectives Setelah memelajari materiini, mahasiswa mampu:  Memahami masalah ‘critical-section‘ yang memiliki berbagai solusi yang dapat digunakan untuk menjamin konsistensi‘shared data’  Menyajikan berbagai solusi software dan hardware pada masalah ‘critical-section ‘  Memahami konsep dari suatu transaksi atomik dan menggambarkan mekanisme untuk menjamin atomisitas.  Memahami masalah-masalah klasik dari sinkronisasi
  • 4.
    Overview (1) 4  ProteksiOS:  Independent process tidak terpengaruh atau dapat mempengaruhi eksekusi/data proses lain.  “Concurrent Process”  OS: mampu membuat banyak proses pada satu saat  Proses-proses bekerja-sama: sharing data, pembagian task, passing informasi dll  Proses => mempengaruhi proses lain dalam menggunakan data/informasi yang sengaja di-”share”  Cooperating process – sekumpulan proses yang dirancang untuk saling bekerja-sama untuk mengerjakan task tertentu.
  • 5.
    Overview (2) 5  Keuntungankerja-sama antar proses  Information sharing: file, DB => digunakan bersama  Computation speed-up: parallel proses  Modularity: aplikasi besar => dipartisi dalam banyak proses.  Convenience: kumpulan proses => tipikal lingkungan kerja.  “Cooperating Process”  Bagaimana koordinasi antar proses? Akses/Update data  Tujuan program/task: integritas, konsistensi data dapat dijamin
  • 6.
    Latar Belakang 6  Menjaminkonsistensi data:  Program/task-task dapat menghasilkan operasi yang benar setiap waktu  Deterministik:untuk input yang sama hasil harus sama (sesuai dengan logika/algoritma program).  Contoh: Producer – Consumer  Dua proses: producer => menghasilkan informasi; consumer => menggunakan informasi  Sharing informasi: buffer => tempat penyimpanan data  unbounded-buffer,penempatan tidak pada limit praktis dari ukuran buffer  bounded-buffer diasumsikan terdapat ukuran buffer yang tetap
  • 7.
    Latar Belakang  Akseskonkuren untuk ‘shared data’ bisa menghasilkan data yang inkonsisten (data inconsistency ).  Pengelolaan konsistensi data memerlukan mekanisme yang menjamin eksekusi proses-proses yang koorperasi (saling bekerja sama) secara terurut.  Andaikan bahwa kita hendak memberi sebuah solusi kepada masalah consumer-produser yang mengisikan semua buffer. Kita dapat melakukan demikian dengan memiliki suatu bilangan hitungan integer(integer count) yang mencatat jumlah buffer yang penuh.  Awalnya, hitungan di-set ke 0. Bilangan dinaikkan oleh producer setelah ia menghasilkan sebuah buffer baru dan diturunkan oleh consumer setelah ia menkonsumsi sebuah buffer.
  • 8.
    Bounded Buffer (1) 8 Implementasi buffer:  IPC: komunikasi antar proses melalui messages membaca/menulis buffer  Shared memory: programmer secara eksplisit melakukan “deklarasi” data yang dapat diakses secara bersama.  Buffer dengan ukuran n => mampu menampung n data  Producer mengisi data buffer => increment “counter” (jumlah data)  Consumer mengambil data buffer => decrement “counter”  Buffer,“counter” => shared data (update oleh 2 proses)
  • 9.
    Bounded Buffer (2) 9 Shared data type item = … ; var buffer array in, out: 0..n-1; counter: 0..n; in, out, counter := 0;  Producer process repeat … produce an item in nextp … while counter = n do no-op; buffer [in] := nextp; in := in + 1 mod n; counter := counter +1; until false;
  • 10.
    Bounded Buffer (3) 10 Consumer process repeat while counter = 0 do no-op; nextc := buffer [out]; out := out + 1 mod n; counter := counter – 1; … consume the item in nextc … until false;
  • 11.
    Bounded Buffer (4) 11 Apakah terdapat jaminan operasi akan benar jika berjalan concurrent?  Misalkan:counter = 5  Producer: counter = counter + 1;  Consumer: counter = counter - 1;  Nilai akhir dari counter?  Operasi concurrent P & C =>  Operasi dari high level language => sekumpulan instruksi mesin:“increment counter” Load Reg1, Counter Add Reg1, 1 Store Counter, Reg1
  • 12.
    Bounded Buffer (5) 12 “decrement counter” Load Reg2, Counter Subtract Reg2, 1 Store Counter, Reg2  Eksekusi P & C tergantung scheduler (dapat gantian)  T0: Producer : Load Reg1, Counter (Reg1 = 5)  T1: Producer :Add Reg1, 1 (Reg1 = 6)  T2: Consumer: Loag Reg2, Counter (Reg2 = 5)  T3: Consumer: Subtract Reg1, 1 (Reg2 = 4)  T4: Producer: Store Counter, Reg1 (Counter = 6)  T5: Consumer: Store Counter, Reg2 (Counter = 4)
  • 13.
    Producer while (true) { /*produce an item and put in nextProduced */ while (count == BUFFER_SIZE) ; // do nothing buffer [in] = nextProduced; in = (in + 1) % BUFFER_SIZE; count++; }
  • 14.
    Consumer while (true) { while(count == 0) ; // do nothing nextConsumed = buffer[out]; out = (out + 1) % BUFFER_SIZE; count--; /* consume the item in nextConsumed }
  • 15.
    Race Condition 15  ConcurrentC & P  Shared data “counter” dapat berakhir dengan nilai: 4, atau 5, atau 6  Hasilnya dapat salah dan tidak konsisten  Race Condition:  Keadaan dimana lebih dari satu proses meng-update data secara “concurrent” dan hasilnya sangat bergantung dari urutan proses mendapat jatah CPU (run)  Hasilnya tidak menentu dan tidak selalu benar  Mencegah race condition: sinkronisasi proses dalam meng- update shared data
  • 16.
    Race Condition  Padaprogram producer/consumer dapat kita lihat terdapat perintah count++ dan count- - yang dapat diimplementasikan dengan bahasa mesin sebagai berikut:  count++ could be implemented as register1 = count register1 = register1 + 1 count = register1  count-- could be implemented as register2 = count register2 = register2 - 1 count = register2  Dapat dilihat jika perintahdari count+ + dan count - - dieksekusi secara bersama, maka akan sulit untuk m engetahui nilai count sebenarnya , sehingga nilai dari count itu akan menjadi tidak konsisten.  Marilah kita lihat contoh berikut:
  • 17.
     Consider thisexecution interleaving with “count = 5” initially: S0: producer execute register1 = count {register1 = 5} S1: producer execute register1 = register1 + 1 {register1 = 6} S2: consumer execute register2 = count {register2 = 5} S3: consumer execute register2 = register2 - 1 {register2 = 4} S4: producer execute count = register1 {count = 6 } S5: consumer execute count = register2 {count = 4}  Pada contoh di atas dapat dilihat bahwa count memilki nilai dua nilai yaitu bernilai 5 (pada saat count + + dieksekusi) dan bernilai 4 (pada saat count- - dieksekusi).  Hal ini menyebabkan nilai dari count tsb inkonsisten.  Perhatikan bahwa nilai dari count akan bergantung pada perintah terakhir yang dieksekusi.  Oleh karenanya, kita membutuhkan sinkronisasi yang merupakan upaya yang dilakukan agar proses-proses yang saling bekerja bersama-sama dieksekusi secara beraturan (orderly) demi mencegah timbulnya keadaan yang disebut Race Condition.
  • 18.
    Sinkronisasi 18  Sinkronisasi:  Koordinasiakses ke shared data, misalkan hanya satu proses yang dapat menggunakah shared var.  Contoh operasi terhadap var.“counter” harus dijamin di- eksekusi dalam satu kesatuan (atomik) :  counter := counter + 1;  counter := counter - 1;  Sinkronisasi merupakan “issue” penting dalam rancangan/implementasi OS (shared resources, data, dan multitasking).
  • 19.
    Problem Critical Section Problem ini karena adanya suatu race conditon pada suatu proses yang dilakukan secara konkuren yang mengakibatkan tidak sinkron.  Nilai akhir tegantung pada proses mana yang terakhir dieksekusi.  Bagaimana cara mengatasi race condition?  Kuncinya adalah menemukan jalan untuk mencegah lebih dari suatu proses melakukan proses tulis atau baca kepada data atau berkas pada saat yang bersamaan.  Perlu adanya Mutual Exclusion yaitu suatu cara yang menjamin jika ada suatu proses yang menggunakan variabel atau berkas yang sama (digunakan juga oleh proses lain), maka proses lain akan dikeluarkan dari pekerjaan yang sama.  Karena beberapa proses memiliki suatu segmen kode dimana jika segmen itu dieksekusi, maka proses-proses itu dapat saling mengubah variabel, mengupdate suatu tabel, menulis ke suatu file dsb.  Segmen kode ini dinamakan critical section.  Hal demikian, dapat membawa ke dalam bahaya race condition.
  • 20.
    Masalah Critical Section 20 n proses mencoba menggunakan shared data bersamaan  Setiap proses mempunyai “code” yang mengakses/ manipulasi shared data tersebut => “critical section”  Problem: Menjamin jika ada satu proses yang sedang  “eksekusi” pada bagian “critical section” tidak ada proses lain yang diperbolehkan masuk ke “code” critical section dari proses tersebut.  Structure of process Pi
  • 21.
    Solution to Critical-SectionProblem  Solusi untuk memecahkan critical section adalah dengan mendesain sebuah protokol di mana proses-proses dapat menggunakannya secara bersama-sama.  Setiap proses harus ‘meminta izin’ untuk memasuki critical section-nya.  Bagian dari kode yang mengimplementasikan izin ini disebut entry section.  Akhir dari critical section disebut exit section.  Bagian kode selanjutnya disebut remainder section.
  • 22.
    Solusi Masalah CriticalSection 22  Ide :  Mencakup pemakaian secara “exclusive” dari shared variable tersebut  Menjamin proses lain dapat menggunakan shared variable tersebut  Solusi“critical section problem” harus memenuhi: 1. Mutual Exclusion: Jika proses Pi sedang “eksekusi” pada bagian “critical section” (dari proses Pi) maka tidak ada proses proses lain dapat “eksekusi” pada bagian critical section dari proses-proses tersebut. 2. Progress: Jika tidak ada proses sedang eksekusi pada critical section-nya dan jika terdapat lebih dari satu proses lain yang ingin masuk ke critical section, maka pemilihan siapa yang berhak masuk ke critical section tidak dapat ditunda tanpa terbatas.
  • 23.
    Solusi (cont.) 23 3. BoundedWaiting:Terdapatbatasan berapa lama suatu proses harus menunggu giliran untuk mengakses “critical section” – jika seandainya proses lain yang diberikan hak akses ke critical section.  Menjamin proses dapat mengakses ke “critical section” (tidak mengalami starvation: proses se-olah berhenti menunggu request akses ke critical section diperbolehkan).  Tidak ada asumsi mengenai kecepatan eksekusi proses proses n tersebut.
  • 24.
    Solution to Critical-SectionProblem  Solusi dari masalah Critical-Section Problem harus memenuhi tiga syarat berikut: 1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections 2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely 3. BoundedWaiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted  Assume that each process executes at a nonzero speed  No assumption concerning relative speed of the N processes
  • 25.
    Pemecahan Masalah CriticalSection: Solusi untuk dua proses  Ada dua jenis solusi masalah critical section, yaitu:  Solusi perangkat lunak  Dengan menggunakan algoritma-algoritma yang nilai kebenarannya tidak tergantung pada asumsi-asumsi lain, selain bahwa setiap proses berjalan pada kecepatan yang bukan nol  Solusi perangkat keras  Tergantung pada beberapa instruksi mesin tertentu, misalnya dengan me- nonaktifkan interupsi atau dengan mengunci suatu variabel tertentu.
  • 26.
    Solusi Sederhana :Kasus 2 proses 26  Hanya 2 proses  Struktur umum dari program code Pi dan Pj:  Software solution: merancang algoritma program untuk solusi critical section  Proses dapat mengunakan “common var.” untuk menyusun algoritma tsb.
  • 27.
    Algoritma 1 27  Sharedvariables:  int turn; initially turn = 0  turn - i  Pi dapat masuk ke criticalsection  Process Pi do { while (turn != i) ; critical section turn = j; reminder section } while (1);  Mutual exclusion terpenuhi, tetapi menentang progress
  • 28.
    Algoritma 2 28  Sharedvariables  boolean flag[2]; initially flag [0] = flag [1] = false.  flag [i] = true  Pi siap dimasukkan ke dalam critical section  Process Pi do { flag[i] := true; while (flag[j]) ; critical section flag [i] = false; remainder section } while (1);  Mutual exclusion terpenuhi tetapi progress belum terpenuhi.
  • 29.
    Algoritma 3 29  Kombinasishared variables dari algoritma 1 and 2.  Process Pi do { flag [i]:= true; turn = j; while (flag [j] and turn = j) ; critical section flag [i] = false; remainder section } while (1);  Ketiga kebutuhan terpenuhi, solusi masalah critical section pada dua proses
  • 30.
    Algoritma Bakery 30 Critical sectionuntuk n proses  Sebelum proses akan masuk ke dalam “critical section”, maka proses harus mendapatkan “nomor” (tiket).  Proses dengan nomor terkecil berhak masuk ke critical section.  Jika proses Pi dan Pj menerima nomor yang sama, jika i < j, maka Pi dilayani pertama; jika tidak Pj dilayani pertama  Skema penomoran selalu dibuat secara berurutan, misalnya 1,2,3,3,3,3,4,5...
  • 31.
    Algoritma Bakery (2) 31 Notasi < urutan lexicographical (ticket #, process id #)  (a,b) < c,d) jika a < c atau jika a = c and b < d  max (a0,…, an-1) dimana a adalah nomor, k, seperti pada k  ai untuk i - 0, …, n – 1  Shared data var choosing: array [0..n – 1] of boolean number: array [0..n – 1] of integer,  Initialized: choosing =: false ; number => 0
  • 32.
    Algoritma Bakery (3) 32 do{ choosing[i] = true; number[i] = max(number[0],number[1], …, number [n – 1])+1; choosing[i] = false; for (j = 0; j < n; j++) { while (choosing[j]) ; while ((number[j] != 0) && (number[j,j] < number[i,i])) ; } critical section number[i] = 0; remainder section } while (1);
  • 33.
    Sinkronisasi Hardware 33  Memerlukandukungan hardware (prosesor)  Dalam bentuk “instruction set” khusus: test-and-set  Menjamin operasi atomik (satu kesatuan): test nilai dan ubah nilai tersebu  Test-and-Set dapat dianalogikan dengan kode:
  • 34.
    Test-and-Set (mutual exclusion) 34 Mutual exclusion dapat diterapkan:  Gunakan shared data, variabel: lock: boolean (initially false)  lock: menjaga critical section  Process Pi: do { while (TestAndSet(lock)) ; critical section lock = false; remainder section }
  • 35.
    Semaphore 35  Perangkat sinkronisasiyang tidak membutuhkan busy waiting  Semaphore S – integer variable  Dapat dijamin akses ke var. S oleh dua operasi atomik:  wait (S): while S ≤ 0 do no-op; S := S – 1;  signal (S): S := S + 1;
  • 36.
    Contoh : nproses 36  Shared variables  var mutex : semaphore  initially mutex = 1  Process Pi do { wait(mutex); critical section signal(mutex); remainder section } while (1);
  • 37.
    Implementasi Semaphore 37  Didefinisikansebuah Semaphore dengan sebuah record typedef struct { int value; struct process *L; } semaphore;  Diasumsikan terdapat 2 operasi sederhana :  block menhambat proses yang akan masuk  wakeup(P) memulai eksekusi pada proses P yang di block
  • 38.
    Implementasi Semaphore (2) 38 Operasi Semaphore-nya menjadi : wait(S): S.value--; if (S.value < 0) { add this process to S.L; block; } signal(S): S.value++; if (S.value <= 0) { remove a process P from S.L; wakeup(P); }
  • 39.
    Masalah Klasik Sinkronisasi 39 Bounded-Buffer Problem  Readers and Writers Problem  Dining-Philosophers Problem
  • 40.
    Bounded-Buffer Problem 40  Shareddata semaphore full, empty, mutex; Initially: full = 0, empty = n, mutex = 1
  • 41.
    Bounded-Buffer Problem :Producer- Consumer 41
  • 42.
    Readers-Writers Problem 42  Shareddata semaphore mutex, wrt; Initially mutex = 1, wrt = 1, readcount = 0
  • 43.
    Readers-Writers Problem (2) 43 Writters Process wait(wrt); … writing is performed … signal(wrt);  Readers Process wait(mutex); readcount++; if (readcount == 1) wait(rt); signal(mutex); … reading is performed … wait(mutex); readcount--; if (readcount == 0) signal(wrt); signal(mutex):
  • 44.
    Dining-Philosophers Problem 44  Shareddata semaphore chopstick[5]; Semua inisialisasi bernilai 1
  • 45.
    Dining-Philosophers Problem 45  Philosopheri: do { wait(chopstick[i]) wait(chopstick[(i+1) % 5]) … eat … signal(chopstick[i]); signal(chopstick[(i+1) % 5]); … think … } while (1);
  • 46.
    Solusi Tingkat Tinggi 46 Motif:  Operasi wait(S) dan signal(S) tersebar pada code program => manipulasi langsung struktur data semaphore  Bagaimana jika terdapat bantuan dari lingkungan HLL (programming) untuk sinkronisasi ?  Pemrograman tingkat tinggi disediakan sintaks-sintaks khusus untuk menjamin sinkronisasi antar proses, thread  Misalnya:  Monitor & Condition  Conditional Critical Region
  • 47.
    Monitor 47  Monitor mensinkronisasisejumlah proses:  suatu saat hanya satu yang aktif dalam monitor dan yang lain menunggu  Bagian dari bahasa program (mis. Java).  Tugas compiler menjamin hal tersebut terjadi dengan menerjemahkan ke “low level synchronization” (semphore, instruction set dll)  Cukup dengan statement (deklarasi) suatu section/fungsi adalah monitor => mengharuskan hanya ada satu proses yang berada dalam monitor (section) tsb
  • 48.
  • 49.
    Monitor (3) 49  Proses-prosesharus disinkronisasikan di dalam monitor:  Memenuhi solusi critical section.  Proses dapat menunggu di dalam monitor.  Mekanisme: terdapat variabel (condition) dimana proses dapat menguji/menunggu sebelum mengakses “critical section” var x, y: condition
  • 50.
    Monitor (4) 50  Condition:memudahkanprogrammer untuk menulis code pada monitor. Misalkan : var x: condition ;  Variabel condition hanya dapat dimanipulasi dengan operasi: wait() dan signal()  x.wait() jika dipanggil oleh suatu proses maka proses tsb. akan suspend - sampai ada proses lain yang memanggil: x. signal()  x.signal() hanya akan menjalankan (resume) 1 proses saja yang sedang menunggu (suspend) (tidak ada proses lain yang wait maka tidak berdampak apapun)
  • 51.
  • 52.
    Pemecahan Masalah CriticalSection: Peterson’s Solution  Two process solution  Assume that the LOAD and STORE instructions are atomic; that is, cannot be interrupted.  The two processes share two variables:  int turn;  Boolean flag[2]  The variable turn indicates whose turn it is to enter the critical section.  The flag array is used to indicate if a process is ready to enter the critical section. flag[i] = true implies that process Pi is ready!
  • 53.
    PROCESS SYNCHRONIZATION Here’s an exampleof a simple piece of code containing the components required in a critical section. do { while ( turn ^= i ); /* critical section */ turn = j; /* remainder section */ } while(TRUE); Two Processes Software Entry Section Critical Section Exit Section Remainder Section
  • 54.
    Algorithm for ProcessPi do { flag[i] = TRUE; turn = j; while (flag[j] && turn == j); critical section flag[i] = FALSE; remainder section } while (TRUE);
  • 55.
    Synchronization Hardware  Manysystems provide hardware support for critical section code  Uniprocessors – could disable interrupts  Currently running code would execute without preemption  Generally too inefficient on multiprocessor systems  Operating systems using this not broadly scalable  Modern machines provide special atomic hardware instructions  Atomic = non-interruptable  Either test memory word and set value  Or swap contents of two memory words
  • 56.
    Solution to Critical-sectionProblem Using Locks do { acquire lock critical section release lock remainder section } while (TRUE);
  • 57.
    TestAndSet Instruction  Definition: booleanTestAndSet (boolean *target) { boolean rv = *target; *target =TRUE; return rv: }
  • 58.
    Solution using TestAndSet Shared boolean variable lock., initialized to false.  Solution: do { while ( TestAndSet (&lock )) ; // do nothing // critical section lock = FALSE; // remainder section } while (TRUE);
  • 59.
    Swap Instruction  Definition: voidSwap (boolean *a, boolean *b) { boolean temp = *a; *a = *b; *b = temp: }
  • 60.
    Solution using Swap Shared Boolean variable lock initialized to FALSE; Each process has a local Boolean variable key  Solution: do { key =TRUE; while ( key == TRUE) Swap (&lock, &key ); // critical section lock = FALSE; // remainder section } while (TRUE);
  • 61.
    Bounded-waiting Mutual Exclusionwith TestandSet() do { waiting[i] = TRUE; key = TRUE; while (waiting[i] && key) key = TestAndSet(&lock); waiting[i] = FALSE; // critical section j = (i + 1) % n; while ((j != i) && !waiting[j]) j = (j + 1) % n; if (j == i) lock = FALSE; else waiting[j] = FALSE; // remainder section } while (TRUE);
  • 62.
    Semaphore  Sinkronisasi adalahalat bantu yang tidak memerlukan busy waiting  Semaphore S – integer variable  Two standard operations modify S: wait() and signal()  Originally called P() and V()  Less complicated  Can only be accessed via two indivisible (atomic) operations  wait (S) { while S <= 0 ; // no-op S--; }  signal (S) { S++; }
  • 63.
    Semaphore as GeneralSynchronization Tool  Counting semaphore – integer value can range over an unrestricted domain  Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement  Also known as mutex locks  Can implement a counting semaphore S as a binary semaphore  Provides mutual exclusion Semaphore mutex; // initialized to 1 do { wait (mutex); // Critical Section signal (mutex); // remainder section } while (TRUE);
  • 64.
    Semaphore Implementation  Harusmenjamin bahwa tidak ada duap roses dapat mengeksekusi wait () and signal () pada semaphore yang sama dan waktu yang sama  Thus, implementasi menjadi masalah critical section dimana kode wait dan signal ditempatkan pada critical section  Could now have busy waiting in critical section implementation  But implementation code is short  Little busy waiting if critical section rarely occupied  Note that applications may spend lots of time in critical sections and therefore this is not a good solution.
  • 65.
Semaphore Implementation with no Busy Waiting
 With each semaphore there is an associated waiting queue. Each entry in a waiting queue has two data items:
 value (of type integer)
 pointer to next record in the list
 Two operations:
 block – place the process invoking the operation on the appropriate waiting queue
 wakeup – remove one of the processes in the waiting queue and place it in the ready queue
  • 66.
Semaphore Implementation with no Busy Waiting (Cont.)
 Implementation of wait:
   wait(semaphore *S) {
      S->value--;
      if (S->value < 0) {
         add this process to S->list;
         block();
      }
   }
 Implementation of signal:
   signal(semaphore *S) {
      S->value++;
      if (S->value <= 0) {
         remove a process P from S->list;
         wakeup(P);
      }
   }
  • 67.
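 The wait()/signal() code above presumes a semaphore record holding an integer value and a queue of blocked processes; a sketch of that record, assuming the process/PCB type and the queue operations are supplied elsewhere by the kernel:
   struct process;                /* PCB type, defined elsewhere by the kernel (placeholder) */

   typedef struct {
       int value;                 /* when negative, |value| is the number of waiting processes */
       struct process *list;      /* queue of processes blocked in wait() on this semaphore */
   } semaphore;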
Deadlock and Starvation
 Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes
 Let S and Q be two semaphores initialized to 1
      P0                     P1
      wait (S);              wait (Q);
      wait (Q);              wait (S);
        ...                    ...
      signal (S);            signal (Q);
      signal (Q);            signal (S);
 Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended
 Priority Inversion – scheduling problem when a lower-priority process holds a lock needed by a higher-priority process
  • 68.
Classical Problems of Synchronization
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem
  • 69.
Bounded-Buffer Problem
 N buffers, each can hold one item
 Semaphore mutex initialized to the value 1
 Semaphore full initialized to the value 0
 Semaphore empty initialized to the value N
  • 70.
Bounded Buffer Problem (Cont.)
 The structure of the producer process
   do {
      // produce an item in nextp
      wait (empty);
      wait (mutex);
      // add the item to the buffer
      signal (mutex);
      signal (full);
   } while (TRUE);
  • 71.
Bounded Buffer Problem (Cont.)
 The structure of the consumer process
   do {
      wait (full);
      wait (mutex);
      // remove an item from buffer to nextc
      signal (mutex);
      signal (empty);
      // consume the item in nextc
   } while (TRUE);
  • 72.
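 Combining the two structures, a hedged sketch of the bounded buffer using POSIX semaphores and a Pthreads mutex (the buffer size N, the int item type, and the produce/consume steps are illustrative assumptions):
   #include <pthread.h>
   #include <semaphore.h>

   #define N 10                                          /* illustrative buffer size */

   int buffer[N];                                        /* illustrative item type: int */
   int in = 0, out = 0;

   sem_t empty;                                          /* counts empty slots, initialized to N */
   sem_t full;                                           /* counts full slots,  initialized to 0 */
   pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;    /* protects buffer, in, out */

   void *producer(void *arg) {
       int item = 0;
       while (1) {
           item++;                                       /* produce an item in nextp */
           sem_wait(&empty);                             /* wait(empty) */
           pthread_mutex_lock(&mutex);                   /* wait(mutex) */
           buffer[in] = item;                            /* add the item to the buffer */
           in = (in + 1) % N;
           pthread_mutex_unlock(&mutex);                 /* signal(mutex) */
           sem_post(&full);                              /* signal(full) */
       }
       return NULL;
   }

   void *consumer(void *arg) {
       while (1) {
           sem_wait(&full);                              /* wait(full) */
           pthread_mutex_lock(&mutex);                   /* wait(mutex) */
           int item = buffer[out];                       /* remove an item from buffer to nextc */
           out = (out + 1) % N;
           pthread_mutex_unlock(&mutex);                 /* signal(mutex) */
           sem_post(&empty);                             /* signal(empty) */
           (void)item;                                   /* consume the item in nextc */
       }
       return NULL;
   }

   int main(void) {
       pthread_t p, c;
       sem_init(&empty, 0, N);
       sem_init(&full, 0, 0);
       pthread_create(&p, NULL, producer, NULL);
       pthread_create(&c, NULL, consumer, NULL);
       pthread_join(p, NULL);                            /* never returns in this sketch */
       return 0;
   }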
Readers-Writers Problem
 A data set is shared among a number of concurrent processes
 Readers – only read the data set; they do not perform any updates
 Writers – can both read and write
 Problem – allow multiple readers to read at the same time; only one single writer can access the shared data at a time
 Shared data:
 Data set
 Semaphore mutex initialized to 1
 Semaphore wrt initialized to 1
 Integer readcount initialized to 0
  • 73.
Readers-Writers Problem (Cont.)
 The structure of a writer process
   do {
      wait (wrt);
         // writing is performed
      signal (wrt);
   } while (TRUE);
  • 74.
Readers-Writers Problem (Cont.)
 The structure of a reader process
   do {
      wait (mutex);
      readcount++;
      if (readcount == 1)
         wait (wrt);
      signal (mutex);
         // reading is performed
      wait (mutex);
      readcount--;
      if (readcount == 0)
         signal (wrt);
      signal (mutex);
   } while (TRUE);
  • 75.
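 Many thread libraries package this policy as a read-write lock; a minimal Pthreads sketch (an alternative to the slide's semaphore solution, not a transcription of it):
   #include <pthread.h>

   pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;   /* protects the shared data set */

   void reader(void) {
       pthread_rwlock_rdlock(&rw);    /* many readers may hold the lock concurrently */
       /* reading is performed */
       pthread_rwlock_unlock(&rw);
   }

   void writer(void) {
       pthread_rwlock_wrlock(&rw);    /* a writer gets exclusive access */
       /* writing is performed */
       pthread_rwlock_unlock(&rw);
   }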
Dining-Philosophers Problem
 Shared data
 Bowl of rice (data set)
 Semaphore chopstick[5] initialized to 1
  • 76.
Dining-Philosophers Problem (Cont.)
 The structure of Philosopher i:
   do {
      wait ( chopstick[i] );
      wait ( chopstick[(i + 1) % 5] );
         // eat
      signal ( chopstick[i] );
      signal ( chopstick[(i + 1) % 5] );
         // think
   } while (TRUE);
  • 77.
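 A direct translation of this structure to POSIX semaphores is sketched below (initialization is shown only in a comment); note that with every philosopher taking the left chopstick first, all five can grab their left chopstick simultaneously and deadlock:
   #include <semaphore.h>

   sem_t chopstick[5];            /* each initialized to 1, e.g. sem_init(&chopstick[i], 0, 1) */

   void philosopher(int i) {
       while (1) {
           sem_wait(&chopstick[i]);               /* pick up left chopstick  */
           sem_wait(&chopstick[(i + 1) % 5]);     /* pick up right chopstick */
           /* eat */
           sem_post(&chopstick[i]);               /* put down left  */
           sem_post(&chopstick[(i + 1) % 5]);     /* put down right */
           /* think */
       }
   }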
Problems with Semaphores
 Incorrect use of semaphore operations:
 signal (mutex) … wait (mutex)
 wait (mutex) … wait (mutex)
 Omitting wait (mutex) or signal (mutex) (or both)
  • 78.
Monitors
 A high-level abstraction that provides a convenient and effective mechanism for process synchronization
 Only one process may be active within the monitor at a time
   monitor monitor-name {
      // shared variable declarations
      procedure P1 (…) { …. }
      …
      procedure Pn (…) { …… }
      initialization code (…) { … }
   }
  • 79.
  • 80.
Condition Variables
 condition x, y;
 Two operations on a condition variable:
 x.wait() – a process that invokes the operation is suspended
 x.signal() – resumes one of the processes (if any) that invoked x.wait()
  • 81.
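 For comparison, the same wait/signal pattern with a Pthreads condition variable; unlike a monitor's condition variable, pthread_cond_wait() must be paired with an explicit mutex and the condition re-checked in a loop (the ready flag here is illustrative):
   #include <pthread.h>

   pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
   pthread_cond_t  x = PTHREAD_COND_INITIALIZER;
   int ready = 0;                      /* illustrative condition */

   void waiter(void) {
       pthread_mutex_lock(&m);
       while (!ready)                  /* x.wait(): suspend until signalled */
           pthread_cond_wait(&x, &m);  /* atomically releases m and sleeps  */
       /* proceed while holding m */
       pthread_mutex_unlock(&m);
   }

   void signaller(void) {
       pthread_mutex_lock(&m);
       ready = 1;
       pthread_cond_signal(&x);        /* x.signal(): wake one waiter, if any */
       pthread_mutex_unlock(&m);
   }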
  • 82.
Solution to Dining Philosophers
   monitor DP {
      enum { THINKING, HUNGRY, EATING } state[5];
      condition self[5];

      void pickup (int i) {
         state[i] = HUNGRY;
         test(i);
         if (state[i] != EATING)
            self[i].wait();
      }

      void putdown (int i) {
         state[i] = THINKING;
         // test left and right neighbors
         test((i + 4) % 5);
         test((i + 1) % 5);
      }
  • 83.
Solution to Dining Philosophers (cont)
      void test (int i) {
         if ( (state[(i + 4) % 5] != EATING) &&
              (state[i] == HUNGRY) &&
              (state[(i + 1) % 5] != EATING) ) {
            state[i] = EATING;
            self[i].signal();
         }
      }

      initialization_code() {
         for (int i = 0; i < 5; i++)
            state[i] = THINKING;
      }
   }
  • 84.
Solution to Dining Philosophers (cont)
 Each philosopher i invokes the operations pickup() and putdown() in the following sequence:
   DiningPhilosophers.pickup(i);
      // EAT
   DiningPhilosophers.putdown(i);
  • 85.
Monitor Implementation Using Semaphores
 Variables:
   semaphore mutex;    // (initially = 1)
   semaphore next;     // (initially = 0)
   int next_count = 0;
 Each procedure F will be replaced by
   wait(mutex);
      …
      body of F;
      …
   if (next_count > 0)
      signal(next);
   else
      signal(mutex);
 Mutual exclusion within a monitor is ensured
  • 86.
Monitor Implementation
 For each condition variable x, we have:
   semaphore x_sem;    // (initially = 0)
   int x_count = 0;
 The operation x.wait can be implemented as:
   x_count++;
   if (next_count > 0)
      signal(next);
   else
      signal(mutex);
   wait(x_sem);
   x_count--;
  • 87.
Monitor Implementation
 The operation x.signal can be implemented as:
   if (x_count > 0) {
      next_count++;
      signal(x_sem);
      wait(next);
      next_count--;
   }
  • 88.
A Monitor to Allocate Single Resource
   monitor ResourceAllocator {
      boolean busy;
      condition x;

      void acquire(int time) {
         if (busy)
            x.wait(time);
         busy = TRUE;
      }

      void release() {
         busy = FALSE;
         x.signal();
      }

      initialization_code() {
         busy = FALSE;
      }
   }
  • 89.
Synchronization Examples
 Solaris
 Windows XP
 Linux
 Pthreads
  • 90.
Solaris Synchronization
 Implements a variety of locks to support multitasking, multithreading (including real-time threads), and multiprocessing
 Uses adaptive mutexes for efficiency when protecting data from short code segments
 Uses condition variables and readers-writers locks when longer sections of code need access to data
 Uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex or a reader-writer lock
  • 91.
Windows XP Synchronization
 Uses interrupt masks to protect access to global resources on uniprocessor systems
 Uses spinlocks on multiprocessor systems
 Also provides dispatcher objects, which may act as either mutexes or semaphores
 Dispatcher objects may also provide events
 An event acts much like a condition variable
  • 92.
Linux Synchronization
 Prior to kernel Version 2.6, Linux disabled interrupts to implement short critical sections
 Version 2.6 and later is fully preemptive
 Linux provides:
 semaphores
 spin locks
  • 93.
Pthreads Synchronization
 The Pthreads API is OS-independent
 It provides:
 mutex locks
 condition variables
 Non-portable extensions include:
 read-write locks
 spin locks
  • 94.
Atomic Transactions
 System Model
 Log-based Recovery
 Checkpoints
 Concurrent Atomic Transactions
  • 95.
System Model
 Assures that operations happen as a single logical unit of work, in its entirety, or not at all
 Related to the field of database systems
 Challenge is assuring atomicity despite computer system failures
 Transaction – collection of instructions or operations that performs a single logical function
 Here we are concerned with changes to stable storage – disk
 Transaction is a series of read and write operations
 Terminated by commit (transaction successful) or abort (transaction failed) operation
 Aborted transaction must be rolled back to undo any changes it performed
  • 96.
Types of Storage Media
 Volatile storage – information stored here does not survive system crashes
 Example: main memory, cache
 Nonvolatile storage – information usually survives crashes
 Example: disk and tape
 Stable storage – information never lost
 Not actually possible, so approximated via replication or RAID to devices with independent failure modes
 Goal is to assure transaction atomicity where failures cause loss of information on volatile storage
  • 97.
Log-Based Recovery
 Record to stable storage information about all modifications made by a transaction
 Most common is write-ahead logging
 Log on stable storage; each log record describes a single transaction write operation, including:
 Transaction name
 Data item name
 Old value
 New value
 <Ti starts> written to the log when transaction Ti starts
 <Ti commits> written when Ti commits
 A log entry must reach stable storage before the operation on the data occurs
  • 98.
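 As a purely illustrative example (the transaction, data items, and values are invented here), the write-ahead log for a transaction T0 that moves 50 from A to B would contain records such as:
   <T0 starts>
   <T0, A, 1000, 950>     // data item A: old value 1000, new value 950
   <T0, B, 2000, 2050>    // data item B: old value 2000, new value 2050
   <T0 commits>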
Log-Based Recovery Algorithm
 Using the log, the system can handle any volatile memory errors
 Undo(Ti) restores the value of all data updated by Ti
 Redo(Ti) sets the values of all data in transaction Ti to the new values
 Undo(Ti) and redo(Ti) must be idempotent
 Multiple executions must have the same result as one execution
 If the system fails, restore the state of all updated data via the log
 If the log contains <Ti starts> without <Ti commits>, undo(Ti)
 If the log contains <Ti starts> and <Ti commits>, redo(Ti)
  • 99.
Checkpoints
 The log could become long, and recovery could take long
 Checkpoints shorten the log and the recovery time
 Checkpoint scheme:
 1. Output all log records currently in volatile storage to stable storage
 2. Output all modified data from volatile to stable storage
 3. Output a log record <checkpoint> to the log on stable storage
 Now recovery only includes Ti such that Ti started executing before the most recent checkpoint, and all transactions after Ti
 All other transactions are already on stable storage
  • 100.
Concurrent Transactions
 Must be equivalent to serial execution – serializability
 Could perform all transactions in a critical section
 Inefficient, too restrictive
 Concurrency-control algorithms provide serializability
  • 101.
Serializability
 Consider two data items A and B
 Consider transactions T0 and T1
 Execute T0, T1 atomically
 An execution sequence is called a schedule
 An atomically executed transaction order is called a serial schedule
 For N transactions, there are N! valid serial schedules
  • 102.
  • 103.
Nonserial Schedule
 A nonserial schedule allows overlapped execution
 The resulting execution is not necessarily incorrect
 Consider schedule S, operations Oi, Oj
 They conflict if they access the same data item, with at least one write
 If Oi, Oj are consecutive operations of different transactions and Oi and Oj don't conflict
 Then S' with the swapped order Oj, Oi is equivalent to S
 If S can become S' via swapping nonconflicting operations
 S is conflict serializable
  • 104.
Schedule 2: Concurrent Serializable Schedule
  • 105.
Locking Protocol
 Ensure serializability by associating a lock with each data item
 Follow a locking protocol for access control
 Locks:
 Shared – Ti has a shared-mode lock (S) on item Q: Ti can read Q but not write Q
 Exclusive – Ti has an exclusive-mode lock (X) on Q: Ti can read and write Q
 Require every transaction on item Q to acquire the appropriate lock
 If the lock is already held, a new request may have to wait
 Similar to the readers-writers algorithm
  • 106.
Two-phase Locking Protocol
 Generally ensures conflict serializability
 Each transaction issues lock and unlock requests in two phases
 Growing – obtaining locks
 Shrinking – releasing locks
 Does not prevent deadlock
  • 107.
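 A hedged sketch of a transaction obeying two-phase locking; lock_X() and unlock() are hypothetical lock-manager calls, and the A/B transfer is only an example:
   /* Hypothetical lock-manager interface (placeholder names). */
   void lock_X(const char *item);    /* acquire an exclusive lock on item */
   void unlock(const char *item);    /* release the lock on item */

   void transfer_50(void) {
       /* Growing phase: acquire every lock before releasing any. */
       lock_X("A");
       /* read A; A = A - 50; write A */
       lock_X("B");
       /* read B; B = B + 50; write B */

       /* Shrinking phase: after the first unlock, no new locks may be requested. */
       unlock("A");
       unlock("B");
   }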
Timestamp-based Protocols
 Select an order among transactions in advance – timestamp-ordering
 Transaction Ti is associated with timestamp TS(Ti) before Ti starts
 TS(Ti) < TS(Tj) if Ti entered the system before Tj
 TS can be generated from the system clock or as a logical counter incremented at each entry of a transaction
 Timestamps determine the serializability order
 If TS(Ti) < TS(Tj), the system must ensure the produced schedule is equivalent to a serial schedule where Ti appears before Tj
  • 108.
Timestamp-based Protocol Implementation
 Data item Q gets two timestamps:
 W-timestamp(Q) – largest timestamp of any transaction that executed write(Q) successfully
 R-timestamp(Q) – largest timestamp of a successful read(Q)
 Updated whenever read(Q) or write(Q) is executed
 The timestamp-ordering protocol assures that any conflicting read and write are executed in timestamp order
 Suppose Ti executes read(Q):
 If TS(Ti) < W-timestamp(Q), Ti needs to read a value of Q that was already overwritten
 read operation rejected and Ti rolled back
 If TS(Ti) ≥ W-timestamp(Q)
 read executed, R-timestamp(Q) set to max(R-timestamp(Q), TS(Ti))
  • 109.
Timestamp-ordering Protocol
 Suppose Ti executes write(Q):
 If TS(Ti) < R-timestamp(Q), the value of Q produced by Ti was needed previously and Ti assumed it would never be produced
 Write operation rejected, Ti rolled back
 If TS(Ti) < W-timestamp(Q), Ti is attempting to write an obsolete value of Q
 Write operation rejected and Ti rolled back
 Otherwise, the write is executed
 Any rolled back transaction Ti is assigned a new timestamp and restarted
 The algorithm ensures conflict serializability and freedom from deadlock
  • 110.
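 The read and write rules can be condensed into a small C sketch; the Transaction and Item records and the return-value convention (0 = reject and roll back, 1 = proceed) are assumptions made here for illustration:
   typedef struct { int ts; } Transaction;                /* ts = TS(Ti), assigned before Ti starts */
   typedef struct { int r_timestamp, w_timestamp; } Item; /* R-timestamp(Q) and W-timestamp(Q) */

   /* read(Q) by Ti: returns 0 if the read must be rejected and Ti rolled back. */
   int ts_read(Transaction *Ti, Item *Q) {
       if (Ti->ts < Q->w_timestamp)
           return 0;                      /* Q was already overwritten by a younger writer */
       if (Ti->ts > Q->r_timestamp)
           Q->r_timestamp = Ti->ts;       /* R-timestamp(Q) = max(R-timestamp(Q), TS(Ti)) */
       return 1;                          /* read may proceed */
   }

   /* write(Q) by Ti: returns 0 if the write must be rejected and Ti rolled back. */
   int ts_write(Transaction *Ti, Item *Q) {
       if (Ti->ts < Q->r_timestamp)
           return 0;                      /* a younger transaction already read the old value */
       if (Ti->ts < Q->w_timestamp)
           return 0;                      /* Ti would write an obsolete value of Q */
       Q->w_timestamp = Ti->ts;
       return 1;                          /* write may proceed */
   }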
Schedule Possible Under Timestamp Protocol
  • 111.