2. Acquire/Release Resource
Acquire/Release Resource is a design pattern
that prevents forgetting to acquire or to
release a resource.
In general it states that the same function
that acquires a resource must also release it.
In particular, when it comes to locking:
– It means that whenever we enter a critical
section, acquiring the corresponding lock and
releasing it is the responsibility of the CALLED
class.
3. In other words...
Not this:

  Calling:
    Lock();
    do_stuff();
    Unlock();

  Called:
    do_stuff()
    {
        access_critical_section();
    }

But this:

  Calling:
    do_stuff();

  Called:
    do_stuff()
    {
        Lock();
        access_critical_section();
        Unlock();
    }
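A minimal Java sketch of the slide above (the class and method names here are my own, not from the original): the caller just calls the method, and the called class acquires and releases the lock around the critical section in the same function.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical example: the lock lives inside the called class,
// so callers cannot forget to acquire or release it.
class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int value = 0;

    // do_stuff(): acquire and release in the SAME function
    // that enters the critical section.
    void increment() {
        lock.lock();
        try {
            value++;            // access_critical_section()
        } finally {
            lock.unlock();      // released even on exception
        }
    }

    int get() {
        lock.lock();
        try {
            return value;
        } finally {
            lock.unlock();
        }
    }
}

public class AcquireReleaseDemo {
    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        // Callers never touch the lock: they just call do_stuff().
        Runnable worker = () -> {
            for (int i = 0; i < 100_000; i++) c.increment();
        };
        Thread t1 = new Thread(worker), t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get());  // 200000
    }
}
```

The try/finally is the idiomatic Java form of the pattern: release is guaranteed on every exit path of the acquiring function.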
4. 2 Philosophies
1) Make sure the resources are locked for as long
as possible, to ensure correctness.
2) Make sure resources are locked for as little
time as possible, to ensure performance.
5. First philosophy
The first philosophy is the same idea as
transactions: a transaction spans the whole
user action.
6. Second philosophy
The second philosophy often doesn't require
locking, because the smallest memory operation
is not bit-oriented: on common architectures an
aligned, word-sized write is atomic.
In other words, if I have 2 threads that modify
an integer, there is no possibility that one of
the threads will see some bits changed and not
others (a "torn" read).
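A small Java sketch (my own class and method names) illustrating the claim: Java guarantees that reads and writes of a 32-bit int are atomic, so a reader racing two writers only ever observes one of the values actually written, never a mix of bits. (volatile here is for visibility between threads; the atomicity of the int write itself is guaranteed regardless.)

```java
// Hypothetical demo: two writers alternate between two bit patterns;
// a concurrent reader never observes a "torn" value mixing both.
public class NoTornReads {
    static volatile int shared = 0;  // volatile for cross-thread visibility

    // A value is "torn" if it mixes bits from the two written patterns.
    static boolean torn(int v) {
        return v != 0x00000000 && v != 0xFFFFFFFF;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread w1 = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) shared = 0x00000000;
        });
        Thread w2 = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) shared = 0xFFFFFFFF;
        });
        w1.start(); w2.start();

        boolean sawTorn = false;
        while (w1.isAlive() || w2.isAlive()) {
            if (torn(shared)) sawTorn = true;  // never happens for int
        }
        w1.join(); w2.join();
        System.out.println(sawTorn ? "torn read observed" : "no torn reads");
    }
}
```

Note the caveat in the Java spec: this atomicity holds for int and references, but plain (non-volatile) long and double may legally be written as two 32-bit halves.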
7. Second philosophy
So for all practical purposes that integer is
synchronized without locks, and pointers are
really just machine-word integers.
So for example you can have 2 threads adding
elements to a list without blocking but still
synchronized: each thread does all its work
independently and finally publishes its element
by assigning the pointer to the next element on
the list.
And if there is a collision (2 threads trying to
modify the same pointer), the first will succeed
and the second can detect it and retry using
CAS (compare-and-swap).
CAS is a CPU instruction present in the x86
architecture (CMPXCHG).
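The lock-free list append described above can be sketched in Java with AtomicReference.compareAndSet, which maps to the CPU's CAS instruction. The class names are my own; the retry loop is where the losing thread "detects the collision" and tries again.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical lock-free list (simplest form: push at the head):
// each thread builds its node independently, then publishes it
// with a single CAS on the head pointer.
class LockFreeList<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    void add(T value) {
        Node<T> node = new Node<>(value);  // all the work done independently
        while (true) {
            Node<T> current = head.get();
            node.next = current;           // link to the current first element
            // CAS: succeeds only if head is still `current`. If another
            // thread won the race, this fails and we detect it and retry.
            if (head.compareAndSet(current, node)) return;
        }
    }

    int size() {
        int n = 0;
        for (Node<T> p = head.get(); p != null; p = p.next) n++;
        return n;
    }
}

public class CasListDemo {
    public static void main(String[] args) throws InterruptedException {
        LockFreeList<Integer> list = new LockFreeList<>();
        Runnable adder = () -> { for (int i = 0; i < 100_000; i++) list.add(i); };
        Thread t1 = new Thread(adder), t2 = new Thread(adder);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(list.size());  // 200000: nothing lost, no locks taken
    }
}
```

Because a failed CAS simply retries, no thread ever blocks: the first thread to hit the pointer succeeds, and the second detects the collision and republishes against the new head.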
9. Performance
Thread #1 exchanges a single packet at a time
with thread #2
Result #1 - uses a custom CAS based exchange
using the same principles as
SynchronousQueue, where our class is called
CASSynchronousQueue:
30,766,538 packets in 59.999 seconds ::
500.763Kpps, 1.115Gbps 0 drops
libpcap statistics: recv=61,251,128,