6. Who uses Hazelcast?
Many customers from many different industries:
https://hazelcast.com/customers/financial-services/
7. Editions and Licensing
Open Source edition
https://github.com/hazelcast/hazelcast
Enterprise editions (https://hazelcast.com/products/)
Notable features:
• High-Density Caching
• WAN replication (geographical distribution of nodes)
• Security suite
• Web session clustering
9. Hazelcast Cluster
A cluster is made up of nodes. Each node contributes memory and CPU
resources to the cluster.
Clients simply connect to and use the cluster; they do not contribute resources.
The Hazelcast API is unified for both clients and nodes.
10. Hazelcast Sharding
Hazelcast shards are called partitions. By default, Hazelcast has
271 partitions.
Hazelcast distributes data entries into the partitions using a
hashing algorithm. Given an object key (for example, for a map)
or an object name (for example, for a topic or list):
• the key or name is serialized (converted into a byte array),
• this byte array is hashed, and
• the result of the hash is taken modulo the number of partitions.
The result of this modulo, MOD(hash result, partition count), gives
the partition in which the data will be stored.
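As a rough sketch of this calculation (illustrative only; Hazelcast's real implementation serializes the key and applies its own hash function internally):

    // Minimal sketch of the documented algorithm, not Hazelcast's actual code.
    static int partitionId(byte[] serializedKey, int partitionCount) {
        int hash = java.util.Arrays.hashCode(serializedKey); // stand-in for Hazelcast's hash
        return Math.floorMod(hash, partitionCount);          // e.g. MOD(hash, 271)
    }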
The partitions themselves are distributed equally among the
members of the cluster.
Hazelcast also creates backups of partitions and distributes
them among members for redundancy.
12. Hazelcast Data Partitioning Explained - Continued
When you start a second node on that cluster, the partitions are
distributed as shown below.
In the illustration, the partitions with black text are primary partitions, and
the partitions with blue text are replica partitions (backups).
The first node has 135 primary partitions (black), and each of these
partitions is backed up in the second node (blue).
The first node also holds the replica partitions of the second node's
primary partitions.
13. Hazelcast Data Partitioning Explained - Continued
When you add more nodes, Hazelcast moves some of the primary and
replica partitions to the new nodes one by one, making all nodes equal
and redundant.
Only the minimum number of partitions is moved when scaling out
Hazelcast.
The following is an illustration of the partition distribution in a 4-node
Hazelcast cluster (pairs: 1-3 & 2-4).
14. Hazelcast Data Partitioning – Partition Table & Repartitioning
When you start a node, a partition table is created within it.
This table stores the information about which partitions
belong to which nodes. The purpose of this table is to make
all nodes in the cluster aware of this information, making
sure that each node knows where the data is.
The oldest node in the cluster (the one that started first)
periodically sends the partition table to all nodes (the
hazelcast.partition.table.send.interval system property;
default is 15 seconds).
Repartitioning occurs:
• when a node joins the cluster, and
• when a node leaves the cluster.
15. Hazelcast Topology Options
1) EMBEDDED (ON HEAP) MODE
In this mode, nodes include both the application and the
data, i.e. your application provides CPU and memory for
the cluster.
Pros: high performance.
Cons: increased memory usage in the application JVM.
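A minimal embedded-mode startup might look like this (standard Hazelcast API; the map name and entry are illustrative):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class EmbeddedNode {
        public static void main(String[] args) {
            // This JVM becomes a cluster member, contributing its heap and CPU.
            HazelcastInstance node = Hazelcast.newHazelcastInstance();
            node.getMap("customers").put("1", "data lives in this JVM");
        }
    }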
16. Hazelcast Topology Options - Continued
2) CLIENT (OFF HEAP) MODE
In this mode, you have a cluster of server nodes that can be created and
scaled independently of your application.
Your clients communicate with these cluster nodes to reach the data on them.
Hazelcast provides different clients: native clients (Java, .NET and C++),
Memcache clients and REST clients.
Pros: different protocols can be supported; no memory pressure on the application.
Cons: performance overhead.
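A minimal client-mode connection might look like this (standard native Java client API; assumes a cluster is already running on the default port):

    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.client.config.ClientConfig;
    import com.hazelcast.core.HazelcastInstance;

    public class ClientApp {
        public static void main(String[] args) {
            // The client holds no partitions; it only talks to the server nodes.
            ClientConfig config = new ClientConfig();
            HazelcastInstance client = HazelcastClient.newHazelcastClient(config);
            Object value = client.getMap("customers").get("1");
        }
    }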
18. Hazelcast Distributed Map
Hazelcast Map (IMap) extends the interface
java.util.concurrent.ConcurrentMap and hence java.util.Map.
It is the distributed implementation of the Java map.
You can perform operations like reading from and writing to a
Hazelcast map with the well-known get and put methods.
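For example (a minimal sketch; the map name and entries are illustrative):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;

    HazelcastInstance hz = Hazelcast.newHazelcastInstance();
    IMap<String, String> map = hz.getMap("customers");
    map.put("42", "John Doe");   // distributed put
    String name = map.get("42"); // distributed get; may be served by another node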
20. Hazelcast Distributed Map – Data Resiliency
Assume that we have a cluster with 2 nodes
and 4 elements in the map.
Each node holds 2 primary and 2 backup entries.
21. Hazelcast Distributed Map – Data Resiliency - Continued
What happens when we lose the first node in
the cluster?
Backup data is activated.
22. Hazelcast Distributed Map – Data Resiliency - Continued
Backup data is promoted to primary partitions…
23. Hazelcast Distributed Map – Data Resiliency - Continued
Let's test resiliency with a simple demo…
A possible production setup for resiliency.
24. Hazelcast Distributed Map – Data Resiliency - Continued
Demo setup for resiliency.
DEMO 3 – Simple Map Resiliency Test
25. Hazelcast Distributed Map – Persistent Storage
The previous demo has limitations:
• Data is completely lost when all nodes are gone.
• A single node cannot store all cluster data
(possible OutOfMemoryError?).
Out of the box, all data in Hazelcast is stored in
memory. But it is also possible to achieve
persistent storage…
26. Hazelcast Distributed Map – Persistent Storage - Continued
Persistent storage can be achieved by using
the MapStore mechanism:
• Create a class that implements
com.hazelcast.core.MapStore<K, V>.
• Configure the Hazelcast instance to use that
MapStore implementation for the desired map.
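A minimal sketch of the first step, assuming a hypothetical CustomerDao for the actual database access (the demo's Customer class is used as the value type):

    import com.hazelcast.core.MapStore;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    public class CustomerMapStore implements MapStore<String, Customer> {
        private final CustomerDao dao = new CustomerDao(); // hypothetical DAO

        public void store(String key, Customer value)   { dao.save(key, value); }
        public void storeAll(Map<String, Customer> map) { map.forEach(this::store); }
        public void delete(String key)                  { dao.delete(key); }
        public void deleteAll(Collection<String> keys)  { keys.forEach(this::delete); }
        public Customer load(String key)                { return dao.find(key); }
        public Map<String, Customer> loadAll(Collection<String> keys) {
            Map<String, Customer> result = new HashMap<>();
            keys.forEach(key -> result.put(key, load(key)));
            return result;
        }
        public Iterable<String> loadAllKeys()           { return dao.findAllKeys(); }
    }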
28. Hazelcast Distributed Map – Persistent Storage - Continued
The previous example (demo 4) uses write-through
persistence because the write-delay-seconds property
is set to 0.
This means the entries are put to the data store
synchronously, using the non-batch methods.
We can also use write-behind persistence by setting the
write-delay-seconds property to a value greater than 0.
This way, the batch methods on the MapStore
implementation will be used.
But this mode of operation is prone to data loss
because storage is asynchronous!
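As a sketch, the write-delay-seconds property can also be set programmatically (standard Config API; the map and class names follow the demo):

    import com.hazelcast.config.Config;
    import com.hazelcast.config.MapStoreConfig;

    Config config = new Config();
    MapStoreConfig mapStoreConfig = new MapStoreConfig()
        .setEnabled(true)
        .setClassName("CustomerMapStore")
        .setWriteDelaySeconds(5); // > 0: write-behind (batched, asynchronous)
                                  // 0 (default): write-through (synchronous)
    config.getMapConfig("customers").setMapStoreConfig(mapStoreConfig);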
29. Hazelcast Distributed Map – Map Searching
We can search map entries in Hazelcast much like
we do in a database.
The Criteria API is a programming interface offered by
Hazelcast that is similar to the Java Persistence
Query Language (JPQL).
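For example, a minimal Criteria API query (using the demo's Customer map; the attribute name "email" is illustrative):

    import com.hazelcast.core.IMap;
    import com.hazelcast.query.Predicate;
    import com.hazelcast.query.Predicates;
    import java.util.Collection;

    // hz: an existing com.hazelcast.core.HazelcastInstance
    IMap<String, Customer> customers = hz.getMap("customers");
    Predicate predicate = Predicates.equal("email", "john@example.com");
    Collection<Customer> result = customers.values(predicate);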
30. Hazelcast Distributed Map – Map Searching – Continued
Predicates can be joined using the and, or
and not operators, as shown below:
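A sketch of joined predicates (the attribute names are illustrative):

    import com.hazelcast.query.Predicate;
    import com.hazelcast.query.Predicates;

    // customers: the IMap from the previous example
    // active customers aged between 18 and 65
    Predicate predicate = Predicates.and(
            Predicates.equal("active", true),
            Predicates.between("age", 18, 65));
    Collection<Customer> result = customers.values(predicate);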
The com.hazelcast.query.SqlPredicate class
builds a predicate from a SQL-like query string:
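For example, the joined predicate above expressed as a SQL-like string:

    import com.hazelcast.query.SqlPredicate;

    Collection<Customer> result =
            customers.values(new SqlPredicate("active AND age BETWEEN 18 AND 65"));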
31. Hazelcast Distributed Map – Map Searching – Continued
DEMO 5 – Simple Map Searching using Predicates
DEMO 6 – Simple Map Searching using SqlPredicate
36. Hazelcast Distributed Map – Concurrency & Locks - Continued
DEMO 7 – Demonstration of the lost update problem and a
possible solution using key locks
37. Hazelcast Distributed Lock
com.hazelcast.core.ILock is the distributed
implementation of
java.util.concurrent.locks.Lock.
This means that if you lock using an ILock, the
critical section that it guards is guaranteed
to be executed by only one thread in the
entire cluster.
Locks can lead to problems if not used
properly.
38. Hazelcast Distributed Lock - Continued
A few warnings when using locks:
Always use locks with try-finally blocks. This
ensures that the lock is released even if an exception is
thrown from the code in the critical section.
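A sketch of the recommended pattern (standard ILock usage):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.ILock;

    HazelcastInstance hz = Hazelcast.newHazelcastInstance();
    ILock lock = hz.getLock("myLock");
    lock.lock();
    try {
        // critical section: executed by only one thread cluster-wide
    } finally {
        lock.unlock(); // always released, even if an exception was thrown
    }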
39. Hazelcast Distributed Lock - Continued
If a lock is not released in the cluster, another thread
that is trying to acquire it can wait forever!
To avoid this, use tryLock with a timeout value. You
can set a high timeout (normally acquiring the lock
should not take that long) and check the return value of
tryLock, as below:
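A sketch of that pattern (the timeout value is illustrative; note that tryLock with a timeout throws InterruptedException, which must be handled or declared):

    import java.util.concurrent.TimeUnit;

    // hz: an existing com.hazelcast.core.HazelcastInstance
    ILock lock = hz.getLock("myLock");
    if (lock.tryLock(10, TimeUnit.SECONDS)) { // give up after 10 seconds
        try {
            // critical section
        } finally {
            lock.unlock();
        }
    } else {
        // the lock could not be acquired in time; handle accordingly
    }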
40. Hazelcast Distributed Lock - Continued
You can also avoid indefinitely waiting threads by
using lock with a lease time: the lock will be released
automatically when the given lease time expires.
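For example (the lease time is illustrative):

    import java.util.concurrent.TimeUnit;

    // hz: an existing com.hazelcast.core.HazelcastInstance
    ILock lock = hz.getLock("myLock");
    lock.lock(60, TimeUnit.SECONDS); // auto-released when the lease expires,
                                     // even if unlock() is never called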
Locks are fail-safe. If a member holds a lock and
some other members go down, the cluster will keep
your locks safe and available. Moreover, when a
member leaves the cluster, all the locks acquired by
that dead member will be removed so that those
locks are immediately available for live members.
41. Hazelcast Distributed Lock -
Continued
Locks are re-entrant: the same thread can lock multiple times on
the same lock. Note that for other threads to be able to acquire
this lock, the owner of the lock must call unlock as many times as
it called lock.
Locks are not automatically removed. If a lock is no longer
used, Hazelcast will not automatically garbage-collect it.
This can lead to an OutOfMemoryError, so prefer IMap locking
over ILock:
https://groups.google.com/forum/#!topic/hazelcast/bjmEWglJ4Ik
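A sketch of key-based locking on an IMap (standard IMap API; the key and setter are illustrative, following the demo's Customer):

    // hz: an existing com.hazelcast.core.HazelcastInstance
    IMap<String, Customer> customers = hz.getMap("customers");
    customers.lock("42"); // locks only the entry with key "42"
    try {
        Customer c = customers.get("42");
        c.setEmail("new@example.com"); // assumes a setter on the demo's Customer
        customers.put("42", c);
    } finally {
        customers.unlock("42");
    }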
43. Other Hazelcast Data Structures - Set
Hazelcast Set is a distributed and concurrent
implementation of java.util.Set.
Hazelcast Set does not allow duplicate elements.
Hazelcast Set does not preserve the order of elements.
Hazelcast Set is a non-partitioned data structure: all
the data that belongs to a set lives on a single
partition on one member.
Unlike java.util.HashSet, the equals method in Hazelcast Set
compares the serialized byte versions of objects.
45. Other Hazelcast Data Structures - List
Hazelcast List is similar to Hazelcast Set, but
Hazelcast List also allows duplicate elements.
Besides allowing duplicate elements, Hazelcast List
preserves the order of elements.
Hazelcast List is a non-partitioned data structure:
the values and each backup are kept on their own
single partition.
All items are copied locally and iteration occurs
locally.
47. Other Hazelcast Data Structures - Queue
Hazelcast distributed queue is an implementation of
java.util.concurrent.BlockingQueue.
FIFO ordering applies to all queue operations across the
cluster.
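For example (the queue name is illustrative; the blocking poll throws InterruptedException, which must be handled or declared):

    import com.hazelcast.core.IQueue;
    import java.util.concurrent.TimeUnit;

    // hz: an existing com.hazelcast.core.HazelcastInstance
    IQueue<String> queue = hz.getQueue("tasks");
    queue.offer("task-1");                         // producer side
    String task = queue.poll(5, TimeUnit.SECONDS); // consumer side, blocks up to 5 s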
48. Other Hazelcast Data Structures - Topic
Hazelcast Topic provides a distribution mechanism for publishing
messages that are delivered to multiple subscribers (the
publish/subscribe (pub/sub) messaging model).
Publishing and subscribing operations are cluster-wide. When a
member subscribes to a topic, it is actually registering for
messages published by any member in the cluster, including
members that join after the listener was added.
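For example (the topic name is illustrative):

    import com.hazelcast.core.ITopic;
    import com.hazelcast.core.Message;
    import com.hazelcast.core.MessageListener;

    // hz: an existing com.hazelcast.core.HazelcastInstance
    ITopic<String> topic = hz.getTopic("news");
    topic.addMessageListener(new MessageListener<String>() {
        public void onMessage(Message<String> message) {
            System.out.println("Received: " + message.getMessageObject());
        }
    });
    topic.publish("Hello cluster!"); // delivered to every subscriber, cluster-wide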
50. Other Hazelcast Data Structures
There are other Hazelcast data structures as well:
MultiMap
ReplicatedMap
AtomicLong
IdGenerator
CountDownLatch
…
Details can be found at
https://hazelcast.org/documentation/
51. Event Listeners For Hazelcast Nodes
You can register for Hazelcast entry events so you will
be notified when those events occur.
Event listeners are cluster-wide: when a listener is
registered in one member of the cluster, it is actually
registered for events that originate at any member in
the cluster.
An event is created only if you registered an event
listener.
As a rule of thumb, your event listener should not
perform heavy work in its event methods, as that
blocks the event thread for a long time. If needed,
offload the work to an ExecutorService.
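A minimal sketch of registering an entry listener, mirroring the addEntryListener call used in DEMO9 (the listener body is illustrative):

    import com.hazelcast.core.IMap;
    import com.hazelcast.map.listener.EntryAddedListener;

    // hz: an existing com.hazelcast.core.HazelcastInstance
    IMap<String, Customer> customers = hz.getMap("customers");
    // second argument true = include the entry's value in the event
    customers.addEntryListener(
            (EntryAddedListener<String, Customer>) event ->
                    System.out.println("Added: " + event.getValue()),
            true);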
52. Event Listeners For Hazelcast
Nodes - Continued
Hazelcast offers the following event listeners:
Membership Listener for cluster membership events.
Distributed Object Listener for distributed object creation and
destroy events.
Migration Listener for partition migration start and complete
events.
Partition Lost Listener for partition lost events.
Lifecycle Listener for HazelcastInstance lifecycle events.
Entry Listener for IMap and MultiMap entry events.
Item Listener for IQueue, ISet and IList item events (please refer to
the Event Registration and Configuration parts of the Set and List sections).
Message Listener for ITopic message events.
Client Listener for client connection events.
54. Hazelcast Transactions
You can use Hazelcast in a transactional context.
You create a TransactionContext to begin, commit,
and roll back a transaction.
You can obtain transaction-aware instances of
queues, maps, sets, lists and multimaps via
TransactionContext, work with them, and
commit or roll back in one shot.
In a transaction, operations are not executed
immediately; their changes are local to the
TransactionContext until committed.
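A minimal sketch (standard TransactionContext API; the map name and value are illustrative):

    import com.hazelcast.core.TransactionalMap;
    import com.hazelcast.transaction.TransactionContext;

    // hz: an existing com.hazelcast.core.HazelcastInstance
    TransactionContext context = hz.newTransactionContext();
    context.beginTransaction();
    try {
        TransactionalMap<String, Customer> map = context.getMap("customers");
        map.put("42", customer);       // local to the transaction until commit
        context.commitTransaction();   // changes become visible cluster-wide
    } catch (Exception e) {
        context.rollbackTransaction(); // discard the local changes
    }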
56. Hazelcast System Properties
Hazelcast has system properties to tune some aspects
of its behavior.
You can set them as property name and value pairs
through declarative configuration, programmatic
configuration or JVM system properties.
http://docs.hazelcast.org/docs/3.5/manual/html/systemproperties.html
Note that the defaults are usually fine. :)
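For example, the partition table send interval from slide 14 can be tuned either programmatically or as a JVM property (the value is illustrative):

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;

    Config config = new Config();
    config.setProperty("hazelcast.partition.table.send.interval", "30");
    Hazelcast.newHazelcastInstance(config);

    // or on the command line:
    // java -Dhazelcast.partition.table.send.interval=30 ...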
57. Hazelcast Monitoring
The Hazelcast open source version lacks monitoring
capabilities.
Logging probably provides the best information
available…
We can use per-node JMX utilities for monitoring,
but these do not provide cluster-wide information...
The cache feature is the best known (distributed map).
It is a cluster technology; new members can be added easily.
It supports messaging in publish/subscribe and producer/consumer patterns (topics/queues/event listeners).
It can be used for web session replication in the Enterprise edition. Native support for Tomcat and Jetty; use filters for other app servers.
Apple reportedly has a 10-node cluster.
They use Hazelcast in their middleware; the Apple Store runs on it.
They have 3000 clients.
They handled over a million requests per second during the Apple Watch launch.
DEMO1
Introduce the modules.
Hazelcast runs nicely with its default configuration... 5701 is the default port.
Bootstrapping it with Spring is even easier.
The app and the client work through the same interface, com.hazelcast.core.HazelcastInstance, which makes coding easier.
Start two or three instances of DemoappApplication. Show that they join each other and drop out as you shut them down.
Show that hazelcast-default.xml is loaded by default. Show that XML; port 5701 is the default. Show the member discovery methods.
DEMO2
hazelcastMap in java ConcurrentMap dolayısıyle IMap ı extend ettıgını goster.
Customer model ı goster. Serıalızable ın onemını sole. Runtıme da hata aldıgımızı sole.
democlient_demo2 deki CustomerService i goster.
init metodu goster... Map ı bu sekılde alıyoruz..
Map userındeki tum crud oparesyonları goster. CustomerServiceTests i run et.
DEMO3
The goal is to show that the data is preserved.
Show HazelcastNodeFactory in demoapp_demo3. Nodes are added in parallel and removed one at a time (synchronously) so that the node count never drops to 0 at any moment.
Show CustomerServiceTests....
There is a limitation here: a single instance must have enough memory to hold all of the data. It worked because we stored only a small number of elements....
DEMO4
The goal is to show that, even if the whole cluster shuts down, the data is kept persistent by using a database when needed.
Show the CustomerMapStore class. Explain its methods one by one...
We used an embedded HSQL DB as the persistent store. Show it... briefly go over the JPA parts.
Show appConfig in DemoappApplication. This is how we attach the MapStore to the map.
Show setConnectionAttemptLimit in MainConfig in democlient_demo4. With this setting, the client side keeps working without raising an error...
Run CustomerServiceTests.
DEMO5: Items can be searched with a Predicate.
Show the findCustomerByBirthDateRange method in CustomerService.
Then run CustomerServiceTests.
DEMO6: Items can be searched with a Predicate.
The same as the previous demo, only with SqlPredicate...
Show the findCustomersByEmail and findCustomersByEmail2 methods in CustomerService.
Then run CustomerServiceTests.
DEMO7: Locking over the map.
Explain the lost update problem...
Then open CustomerServiceTests.
Show updateCustomerEmailOld and updateCustomerEmailNew in CustomerService.
DEMO8: Introduction to ILock.
Show LockTest....
DEMO8_1: Lock test, client failure 1.
Beforehand, start up a node in the cluster...
Shows that the lock is released even when the client goes away: when the client is shut down, the lock is released.
DEMO8_2: Lock test, client failure 2.
Beforehand, start up a node in the cluster...
Shows that the lock is released even when the client goes away: when the client is killed, the lock is released.
First open one client.
Then open a second client.
Kill the first one and see that the second one acquires the lock.
DEMO9: Entry listener.
Show EmailService.
Show CustomersEntryListener on the client side.
In its init() method it calls customersMap.addEntryListener(this, true); that is what makes it all work.
Run CustomersEntryListenersTests.
DEMO10: Local entry listener.
The entry listener in DEMO9 runs on all clients independently of each other. Depending on the situation, this can be a problem.
To prevent it, you can do the same thing on a data node on the app side.
Show the init() of CustomersEntryListener; here we call customersMap.addLocalEntryListener(this);.
Run com.ibtech.smg.hazelcastdemo.demoapp.CustomersEntryListenersTests.