Hazelcast
Ceyhan Kasap | Software Infrastructure
What is Hazelcast?
Hazelcast is an open source in-memory data grid based on Java. It
is also the name of the company developing the product.
What is Hazelcast? - continued
Database Caching
What is Hazelcast? - continued
In Memory Computing
What is Hazelcast? - continued
In Memory Data Grid Messaging
Who uses Hazelcast?
Many customers across many different industries
https://hazelcast.com/customers/financial-services/
Editions and Licensing
 Open Source edition
 https://github.com/hazelcast/hazelcast
 Enterprise editions (https://hazelcast.com/products/)
 Notable features:
• High-Density Caching
• WAN replication (geographical distribution of nodes)
• Security suite
• Web session clustering
Editions and Licensing - Continued
Hazelcast cluster
 A cluster is made up of nodes. Each node contributes memory and CPU resources to the cluster.
 Clients just connect to and use the cluster; they do not provide resources.
 The Hazelcast API is unified for both clients and nodes.
Hazelcast Sharding
 Hazelcast shards are called Partitions. By default, Hazelcast has
271 partitions.
 Hazelcast distributes data entries into the partitions using a
hashing algorithm. Given an object key (for example, for a map)
or an object name (for example, for a topic or list):
 the key or name is serialized (converted into a byte array),
 this byte array is hashed, and
 the hash result is taken modulo the number of partitions.
 The result of this modulo operation - MOD(hash result, partition count) - gives the partition in which the data will be stored (see the sketch below).
 The partitions themselves are distributed equally among the
members of the cluster.
 Hazelcast also creates the backups of partitions and distributes
them among members for redundancy.
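A rough sketch of the key-to-partition mapping described above, assuming the Hazelcast 3.x Java API used throughout this deck. Hazelcast itself hashes the serialized key bytes with MurmurHash3; the explicit modulo below only illustrates the final MOD(hash result, partition count) step, while the PartitionService call shows how the real assignment can be queried.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.Partition;

public class PartitionDemo {
    public static void main(String[] args) {
        int partitionCount = 271;                      // default partition count
        int hash = "customer-42".hashCode();           // stand-in for the hash of the serialized key
        int illustrativeId = Math.abs(hash) % partitionCount;
        System.out.println("illustrative partition id: " + illustrativeId);

        // The actual assignment can be queried from a running member:
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        Partition partition = hz.getPartitionService().getPartition("customer-42");
        System.out.println("actual partition id: " + partition.getPartitionId()
                + ", owner: " + partition.getOwner());
        hz.shutdown();
    }
}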
Hazelcast Data Partitioning
Explained
 Assume a cluster configured with the default 271 partitions. Below is the partition layout in a single-node Hazelcast cluster.
Hazelcast Data Partitioning
Explained - Continued
 When you start a second node on that cluster, the partitions are distributed as below.
 In the figure below, the partitions with black text are primary partitions, and the partitions with blue text are replica partitions (backups).
 The first node has 135 primary partitions (black), and each of these partitions is backed up in the second node (blue).
 The first node also has the replica partitions of the second node's
primary partitions.
Hazelcast Data Partitioning
Explained - Continued
 When you add more nodes, Hazelcast moves some of the primary and replica partitions to the new nodes one by one, making all nodes equal and redundant.
 Only the minimum number of partitions is moved when scaling out Hazelcast.
 The following is an illustration of the partition distribution in a 4-node Hazelcast cluster (pairs: 1-3 & 2-4).
Hazelcast Data Partitioning –
Partition Table & Repartitioning
 When you start a node, a partition table is created within it.
This table stores the information for which partitions
belong to which nodes. The purpose of this table is to make
all nodes in the cluster aware of this information, making
sure that each node knows where the data is.
 The oldest node in the cluster (the one that started first)
periodically sends the partition table to all nodes. (
hazelcast.partition.table.send.interval system property,
default is 15 seconds)
 Repartitioning occurs when:
 a node joins the cluster, or
 a node leaves the cluster.
Hazelcast Topology Options
1) EMBEDDED (ON HEAP) MODE
 In this mode, each node hosts both the application and the data, i.e. your application provides CPU and memory for the cluster.
 pros: high performance (data access is in-process)
 cons: increased memory usage in the application JVM
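A minimal embedded-mode sketch: the application JVM itself becomes a cluster member (the "cache" map name and its entries are illustrative).

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class EmbeddedNode {
    public static void main(String[] args) {
        // Starts a member inside this JVM; further instances started with the same
        // (default) configuration discover each other and form a cluster.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        IMap<String, String> cache = hz.getMap("cache");
        cache.put("greeting", "hello from an embedded member");
        System.out.println(cache.get("greeting"));
    }
}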
Hazelcast Topology Options -
Continued
2) CLIENT (OFF HEAP) MODE
 In this mode, you have a cluster of server nodes that can be created and scaled independently of your application.
 Your clients communicate with these cluster nodes to reach the data on them.
 Hazelcast provides different clients (native clients (Java, .NET and C++), Memcache clients and REST clients).
 pros: different protocols can be supported, no memory pressure on the application
 cons: performance overhead
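A minimal client-mode sketch using the Java native client, assuming a member is already listening on 127.0.0.1:5701 (address and map name are illustrative).

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class ClientExample {
    public static void main(String[] args) {
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.getNetworkConfig().addAddress("127.0.0.1:5701");

        // The client owns no partitions; it only talks to the cluster over the network.
        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);

        IMap<String, String> cache = client.getMap("cache");
        System.out.println(cache.get("greeting"));

        client.shutdown();
    }
}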
DEMO 1 – Basics & Modules
Introduction
Hazelcast Distributed Map
 Hazelcast Map (IMap) extends the interface
java.util.concurrent.ConcurrentMap and hence java.util.Map.
 It is the distributed implementation of Java map.
 You can perform operations like reading and writing from/to a Hazelcast map with the well-known get and put methods.
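A short sketch of the basic CRUD operations on an IMap (the "customers" map and its String entries are illustrative, not the demo project's Customer model).

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class MapCrudExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> customers = hz.getMap("customers");

        customers.put("1", "Alice");              // create / update
        String name = customers.get("1");         // read
        boolean known = customers.containsKey("1");
        customers.remove("1");                    // delete

        System.out.println(name + ", known=" + known + ", size=" + customers.size());
        hz.shutdown();
    }
}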
Hazelcast Distributed Map -
Continued
DEMO 2 – Map basics & CRUD
operations
Hazelcast Distributed Map – Data
Resiliency
 Assume that we have a cluster with 2 nodes and 4 elements in the map.
 Each node holds 2 primary and 2 backup entries.
Hazelcast Distributed Map – Data
Resiliency - Continued
 What happens when we lose the first node in the cluster?
 Backup data activation
Hazelcast Distributed Map – Data
Resiliency - Continued
 Backup data is promoted to primary partitions…
Hazelcast Distributed Map – Data
Resiliency - Continued
 Let's test resiliency with a simple demo…
 Possible production setup for resiliency
Hazelcast Distributed Map – Data
Resiliency - Continued
 Demo setup for resiliency
DEMO 3 – Simple Map
Resiliency Test
Hazelcast Distributed Map –
Persistent Storage
 The previous demo has limitations:
• Data is completely lost when all nodes are gone
• All cluster data cannot be stored in a single node (possible OOM error?)
 Out of the box, all data is stored in memory in Hazelcast, but persistent storage is also possible…
Hazelcast Distributed Map –
Persistent Storage - Continued
 Persistent storage can be achieved by using the MapStore mechanism:
 Create a class that implements com.hazelcast.core.MapStore<K, V>.
 Configure the Hazelcast instance to use that MapStore implementation for the desired map (see the sketch below).
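A minimal MapStore sketch (not the demo's CustomerMapStore): an in-memory map stands in for the real database so the example compiles on its own, and the method set follows the com.hazelcast.core.MapStore contract of the 3.5-era API referenced in this deck.

import com.hazelcast.core.MapStore;

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SimpleMapStore implements MapStore<Long, String> {

    // stand-in for the real persistent store (e.g. a database table)
    private final Map<Long, String> database = new ConcurrentHashMap<>();

    @Override
    public void store(Long key, String value) { database.put(key, value); }        // write-through

    @Override
    public void storeAll(Map<Long, String> entries) { database.putAll(entries); }  // write-behind batches

    @Override
    public void delete(Long key) { database.remove(key); }

    @Override
    public void deleteAll(Collection<Long> keys) { keys.forEach(database::remove); }

    @Override
    public String load(Long key) { return database.get(key); }

    @Override
    public Map<Long, String> loadAll(Collection<Long> keys) {
        Map<Long, String> result = new HashMap<>();
        for (Long key : keys) {
            String value = database.get(key);
            if (value != null) {
                result.put(key, value);
            }
        }
        return result;
    }

    @Override
    public Iterable<Long> loadAllKeys() { return database.keySet(); }
}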
Hazelcast Distributed Map –
Persistent Storage - Continued
DEMO 4 – Simple Map
Persistency Test
Hazelcast Distributed Map –
Persistent Storage - Continued
 The previous example (demo 4) uses Write-Through persistence because the write-delay-seconds property is set to 0.
 This means entries are put into the data store synchronously, using the non-batch methods.
 We can also use Write-Behind persistence by setting the write-delay-seconds property to a value greater than 0.
 In that case the batch methods on the MapStore implementation are used.
 But this mode of operation is prone to data loss because storage is asynchronous!
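A hedged sketch of the programmatic equivalent of the write-delay-seconds setting, wiring the SimpleMapStore sketch from earlier to a map: 0 gives write-through (synchronous store()), while a value greater than 0 gives write-behind (asynchronous, batched storeAll()).

import com.hazelcast.config.Config;
import com.hazelcast.config.MapStoreConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class MapStoreWiring {
    public static void main(String[] args) {
        MapStoreConfig mapStoreConfig = new MapStoreConfig();
        mapStoreConfig.setEnabled(true);
        mapStoreConfig.setImplementation(new SimpleMapStore()); // from the sketch above
        mapStoreConfig.setWriteDelaySeconds(5);                 // > 0 => write-behind

        Config config = new Config();
        config.getMapConfig("customers").setMapStoreConfig(mapStoreConfig);

        // Members started with this config load from and write to SimpleMapStore.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        hz.shutdown();
    }
}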
Hazelcast Distributed Map – Map
Searching
 We can search map entries in Hazelcast much like we query a database.
 The Criteria API is a programming interface offered by Hazelcast that is similar to the Java Persistence Query Language (JPQL).
Hazelcast Distributed Map – Map
Searching – Continued
 Predicates can be joined using the and, or and not operators, as shown below:
 The com.hazelcast.query.SqlPredicate class converts an SQL-like query string into a predicate.
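A hedged sketch of both query styles on a hypothetical "customers" map; the Customer value class below is a stand-in defined inline so the example is self-contained, and the demo project's model may differ.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.query.Predicate;
import com.hazelcast.query.Predicates;
import com.hazelcast.query.SqlPredicate;

import java.io.Serializable;
import java.util.Collection;

public class MapSearchExample {

    public static class Customer implements Serializable {
        private final boolean active;
        private final int age;
        public Customer(boolean active, int age) { this.active = active; this.age = age; }
        public boolean isActive() { return active; }
        public int getAge() { return age; }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Customer> customers = hz.getMap("customers");
        customers.put("1", new Customer(true, 25));
        customers.put("2", new Customer(false, 40));

        // Criteria API: predicates joined with and()
        Predicate activeAdults = Predicates.and(
                Predicates.equal("active", true),
                Predicates.greaterThan("age", 18));
        Collection<Customer> byCriteria = customers.values(activeAdults);

        // The same query expressed as an SQL-like string
        Collection<Customer> bySql = customers.values(new SqlPredicate("active AND age > 18"));

        System.out.println(byCriteria.size() + " / " + bySql.size());  // 1 / 1
        hz.shutdown();
    }
}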
Hazelcast Distributed Map – Map
Searching – Continued
DEMO 5 – Simple Map
Searching using Predicates
DEMO 6 – Simple Map
Searching using SqlPredicate
Break
Hazelcast Distributed Map –
Concurrency & Locks
 Possible concurrency issues when updating map values without locking…
 A simple read-modify-write update most likely produces lost updates…
Hazelcast Distributed Map –
Concurrency & Locks - continued
Illustration of lost update problem:
Hazelcast Distributed Map –
Concurrency & Locks - continued
One possible solution: Key Locks
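One possible fix, sketched with key-level locking: concurrent updates to the same key are serialized, while other keys stay unaffected (the "counters" map is illustrative).

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class KeyLockExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Integer> counters = hz.getMap("counters");
        counters.putIfAbsent("visits", 0);

        counters.lock("visits");                     // lock only this key
        try {
            int current = counters.get("visits");    // read-modify-write is now safe
            counters.put("visits", current + 1);
        } finally {
            counters.unlock("visits");               // always release the key lock
        }
        hz.shutdown();
    }
}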
Hazelcast Distributed Map –
Concurrency & Locks - continued
DEMO 7 – Demonstration of
the lost update problem and
possible solution using key
locks
Hazelcast Distributed Lock
 com.hazelcast.core.ILock is the distributed
implementation of
java.util.concurrent.locks.Lock.
 This means that if you lock using an ILock, the critical section it guards is guaranteed to be executed by only one thread in the entire cluster.
 Locks can lead to problems if not used
properly.
Hazelcast Distributed Lock -
Continued
A few warnings when using locks:
 Always release locks in a finally block (try-finally). This ensures the lock is released even if an exception is thrown from the code in the critical section (see the sketch below).
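A sketch of the lock-and-release idiom: the unlock call sits in a finally block so the lock is released even when the critical section throws (the lock name is illustrative).

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;

public class LockWithFinally {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ILock lock = hz.getLock("demo-lock");

        lock.lock();
        try {
            // critical section: only one thread in the whole cluster runs this at a time
            System.out.println("doing protected work");
        } finally {
            lock.unlock();  // released even if the work above throws
        }
        hz.shutdown();
    }
}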
Hazelcast Distributed Lock -
Continued
 If a lock is not released in the cluster, another thread
that is trying to get the lock can wait forever!
 To avoid this, use tryLock with a timeout value. You
can set a high value (normally it should not take that
long) for tryLock. You can check the return value of
tryLock as below:
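A sketch of tryLock with a timeout: the thread gives up instead of waiting forever if the lock is never released elsewhere (the 10-second timeout is an arbitrary example value).

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;

import java.util.concurrent.TimeUnit;

public class TryLockExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ILock lock = hz.getLock("demo-lock");

        if (lock.tryLock(10, TimeUnit.SECONDS)) {    // generous but bounded wait
            try {
                System.out.println("lock acquired, doing work");
            } finally {
                lock.unlock();
            }
        } else {
            System.out.println("could not acquire the lock within 10 seconds");
        }
        hz.shutdown();
    }
}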
Hazelcast Distributed Lock -
Continued
 You can also avoid indefinitely waiting threads by
using lock with lease time: the lock will be released
in the given lease time.
 Locks are fail-safe. If a member holds a lock and
some other members go down, the cluster will keep
your locks safe and available. Moreover, when a
member leaves the cluster, all the locks acquired by
that dead member will be removed so that those
locks are immediately available for live members.
Hazelcast Distributed Lock -
Continued
 Locks are re-entrant. The same thread can lock multiple times on the same lock. Note that for other threads to be able to acquire this lock, the owner must call unlock as many times as it called lock.
 Locks are not automatically removed. If a lock is not used
anymore, Hazelcast will not automatically garbage collect the lock.
This can lead to an OutOfMemoryError. So prefer IMap for locking
instead of ILock:
https://groups.google.com/forum/#!topic/hazelcast/bjmEWglJ4Ik
Hazelcast Distributed Lock -
Continued
DEMO 8, 8_1, 8_2 – Lock
examples
Other Hazelcast Data Structures -
Set
 Hazelcast Set is a distributed and concurrent
implementation of java.util.Set.
 Hazelcast Set does not allow duplicate elements.
 Hazelcast Set does not preserve the order of elements.
 Hazelcast Set is a non-partitioned data structure: all
the data that belongs to a set will live on one single
partition in that member.
 The equals method implemented in Hazelcast Set uses
a serialized byte version of objects, as opposed to
java.util.HashSet.
Other Hazelcast Data Structures -
Set - Continued
 Usage example:
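The original slide showed a code screenshot; in its place, a minimal ISet sketch (set name and values are illustrative):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ISet;

public class SetExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ISet<String> cities = hz.getSet("cities");

        cities.add("Istanbul");
        cities.add("Ankara");
        cities.add("Istanbul");                          // duplicate, ignored

        System.out.println(cities.size());               // 2
        System.out.println(cities.contains("Ankara"));   // true
        hz.shutdown();
    }
}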
Other Hazelcast Data Structures -
List
 Hazelcast List is similar to Hazelcast Set, but
Hazelcast List also allows duplicate elements.
 Besides allowing duplicate elements, Hazelcast List
preserves the order of elements.
 Hazelcast List is a non-partitioned data structure
where values and each backup are represented by
their own single partition.
 All items are copied locally and iteration occurs locally.
Other Hazelcast Data Structures -
List - Continued
 Usage example:
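As above, a minimal IList sketch in place of the original screenshot (list name and values are illustrative):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IList;

public class ListExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IList<String> events = hz.getList("events");

        events.add("started");
        events.add("started");                 // duplicates are allowed
        events.add("stopped");

        System.out.println(events.get(0));     // insertion order is preserved
        System.out.println(events.size());     // 3
        hz.shutdown();
    }
}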
Other Hazelcast Data Structures -
Queue
 Hazelcast distributed queue is an implementation of
java.util.concurrent.BlockingQueue.
 FIFO ordering will apply to all queue operations across the
cluster.
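A minimal sketch of the distributed BlockingQueue: producers offer items, consumers poll or take them in FIFO order (queue name and items are illustrative).

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;

import java.util.concurrent.TimeUnit;

public class QueueExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IQueue<String> tasks = hz.getQueue("tasks");

        tasks.offer("task-1");
        tasks.offer("task-2");

        String first = tasks.poll(5, TimeUnit.SECONDS);  // FIFO across the cluster
        System.out.println(first);                       // task-1
        hz.shutdown();
    }
}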
Other Hazelcast Data Structures -
Topic
 Hazelcast provides Topic, a distribution mechanism for publishing messages that are delivered to multiple subscribers (the publish/subscribe (pub/sub) messaging model).
 Publishing and subscribing operations are cluster-wide. When a member subscribes to a topic, it is actually registering for messages published by any member in the cluster, including new members that join after the listener is added.
Other Hazelcast Data Structures -
Topic - Continued
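A minimal pub/sub sketch: every listener registered on the topic, on any member or client, receives the published message (topic name and message are illustrative).

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;

public class TopicExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ITopic<String> news = hz.getTopic("news");

        // MessageListener is a single-method interface, so a lambda works
        news.addMessageListener(message ->
                System.out.println("received: " + message.getMessageObject()));

        news.publish("cluster-wide hello");
        Thread.sleep(1000);   // give the asynchronous delivery a moment before shutdown
        hz.shutdown();
    }
}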
Other Hazelcast Data Structures
 There are other Hazelcast data structures as well:
 MultiMap
 ReplicatedMap
 AtomicLong
 IdGenerator
 CountdownLatch
 …
 Details can be found at
https://hazelcast.org/documentation/
Event Listeners For Hazelcast
Nodes
 You can register for Hazelcast entry events so you will
be notified when those events occur.
 Event Listeners are cluster-wide: when a listener is
registered in one member of cluster, it is actually
registered for events that originated at any member in
the cluster.
 An event is created only if you have registered an event listener.
 As a rule of thumb, your event listener should not do heavy processing in its event methods, because that blocks the event thread for a long time. If needed, offload the work to an ExecutorService (a minimal listener sketch follows).
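A minimal entry-listener sketch, assuming the 3.5-era listener interfaces: the listener fires cluster-wide for every added entry, and includeValue=true makes the value available in the event (map name and entries are illustrative).

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.listener.EntryAddedListener;

public class EntryListenerExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> customers = hz.getMap("customers");

        customers.addEntryListener(
                (EntryAddedListener<String, String>) event ->
                        System.out.println("added: " + event.getKey() + " -> " + event.getValue()),
                true);  // includeValue

        customers.put("1", "Alice");   // triggers the listener wherever it is registered
        Thread.sleep(1000);            // give the asynchronous event a moment to arrive
        hz.shutdown();
    }
}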
Event Listeners For Hazelcast
Nodes - Continued
 Hazelcast offers the following event listeners:
 Membership Listener for cluster membership events.
 Distributed Object Listener for distributed object creation and
destroy events.
 Migration Listener for partition migration start and complete
events.
 Partition Lost Listener for partition lost events.
 Lifecycle Listener for HazelcastInstance lifecycle events.
 Entry Listener for IMap and MultiMap entry events.
 Item Listener for IQueue, ISet and IList item events (please refer to the Event Registration and Configuration parts of the Set and List sections).
 Message Listener for ITopic message events.
 Client Listener for client connection events.
Event Listeners For Hazelcast
Nodes - Continued
DEMO 9 & 10 – Simple Entry
listeners example
Hazelcast Transactions
 You can use Hazelcast in a transactional context.
 You create a TransactionContext to begin, commit,
and rollback a transaction.
 You can obtain transaction-aware instances of
queues, maps, sets, lists, multimaps via
TransactionContext, work with them, and
commit/rollback in one shot.
 In a transaction, operations will not be executed
immediately. Their changes will be local to the
TransactionContext until committed.
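A hedged sketch of TransactionContext usage: changes made through the transactional proxies stay local to the context until commitTransaction(), and are discarded by rollbackTransaction() (map and queue names are illustrative).

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.TransactionalMap;
import com.hazelcast.core.TransactionalQueue;
import com.hazelcast.transaction.TransactionContext;

public class TransactionExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        TransactionContext context = hz.newTransactionContext();

        context.beginTransaction();
        try {
            TransactionalMap<String, String> customers = context.getMap("customers");
            TransactionalQueue<String> mails = context.getQueue("welcome-mails");

            customers.put("1", "Alice");
            mails.offer("welcome-mail-for-1");

            context.commitTransaction();    // both changes become visible together
        } catch (Exception e) {
            context.rollbackTransaction();  // neither change is applied
        }
        hz.shutdown();
    }
}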
Hazelcast Transactions - Continued
DEMO 11 – Simple
Transactions Example
Hazelcast System Properties
 Hazelcast has system properties to tune some aspects of its behaviour.
 You can set them as property name and value pairs through declarative configuration, programmatic configuration or as JVM system properties.
http://docs.hazelcast.org/docs/3.5/manual/html/systemproperties.html
 Please note that the defaults are usually fine.
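A short sketch of the configuration options mentioned above, using the partition table send interval as the example property (the value shown is simply the documented default).

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class SystemPropertyExample {
    public static void main(String[] args) {
        // 1) JVM system property: -Dhazelcast.partition.table.send.interval=15
        // 2) Declarative configuration: a <properties> element in hazelcast.xml
        // 3) Programmatic configuration:
        Config config = new Config();
        config.setProperty("hazelcast.partition.table.send.interval", "15");

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        hz.shutdown();
    }
}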
Hazelcast Monitoring
 The Hazelcast open source edition lacks built-in monitoring capabilities.
 Logging probably provides the best information available…
 We can use per-node JMX utilities for monitoring, but these do not provide cluster-wide information...
Hazelcast Monitoring - Continued
 The Enterprise edition includes the Management Center…
Questions
Editor's Notes

  1. The cache feature is the best known (distributed map). It is a cluster technology; new members can be added easily. Supports messaging in publish/subscribe and producer/consumer patterns (topics/queues/event listeners). Can be used for web session replication in the Enterprise edition. Native support for Tomcat and Jetty; use filters for other app servers.
  2. Apple reportedly had a 10-node cluster and used Hazelcast in its middleware; the Apple Store ran on top of it with about 3000 clients. They handled over a million requests per second during the Apple Watch launch.
  3. DEMO 1: Introduce the modules. Hazelcast works nicely with its default configuration; 5701 is the default port. Bringing it up with Spring is even easier. The app and the client use the same interface, com.hazelcast.core.HazelcastInstance, which simplifies coding. Start two or three instances of DemoappApplication; show that they join each other and drop out of the cluster as they are shut down. Show that hazelcast-default.xml is used by default; show that XML (port 5701 is the default) and the member discovery methods.
  4. DEMO 2: Show that the Hazelcast map (IMap) extends Java's ConcurrentMap. Show the Customer model and stress the importance of Serializable; mention that otherwise we get an error at runtime. Show CustomerService in democlient_demo2 and its init method (this is how we obtain the map). Show all the CRUD operations on the map. Run CustomerServiceTests.
  5. DEMO 3: The goal is to show that data survives node failures. Show HazelcastNodeFactory in demoapp_demo3; nodes are added in parallel and removed one at a time so the cluster never drops to zero members at once. Show CustomerServiceTests. This has a limitation: a single instance must have enough memory to hold all the data, which worked here only because we stored a small number of elements.
  6. DEMO 4: The goal is to show that data is kept persistently in a database even if the whole cluster shuts down. Show the CustomerMapStore class and walk through its methods one by one. We used an embedded HSQL DB as the persistent store; show it and briefly go over the JPA setup. Show appConfig in DemoappApplication: this is how the MapStore is attached to the map. Show setConnectionAttemptLimit in MainConfig in democlient_demo4; with this the client side keeps working without errors. Run CustomerServiceTests.
  7. DEMO 5: Items can be searched with predicates. Show the findCustomerByBirthDateRange method in CustomerService, then run CustomerServiceTests. DEMO 6: Same as the previous demo, but using SqlPredicate. Show the findCustomersByEmail and findCustomersByEmail2 methods in CustomerService, then run CustomerServiceTests.
  8. DEMO 7: Locking through the map. Explain the lost update problem, then open CustomerServiceTests and show updateCustomerEmailOld and updateCustomerEmailNew in CustomerService.
  9. DEMO 8: Introduction of ILock; show LockTest. DEMO 8_1: lock test with client failure 1. Start a cluster node beforehand; shows that the lock is released when the client goes away (the lock is released when the client is shut down). DEMO 8_2: lock test with client failure 2. Start a cluster node beforehand; shows that the lock is released even when the client is killed. Open one client, then a second client, kill the first one and see that the second acquires the lock.
  10. DEMO 9: Entry listener. Show EmailService and the CustomersEntryListener on the client; its init() method calls customersMap.addEntryListener(this, true), which is what makes it work. Run CustomersEntryListenersTests. DEMO 10: Local entry listener. The entry listener from DEMO 9 fires independently on every client, which can be a problem in some cases. To avoid this, you can do the same thing on the app side on the data node: show CustomersEntryListener's init(), where we call customersMap.addLocalEntryListener(this). Run com.ibtech.smg.hazelcastdemo.demoapp.CustomersEntryListenersTests.
  10. DEMO9 Entry listener EmailService ı goster Clıenttkı CustomersEntryListener ı goster. Bunun init() metodunda customersMap.addEntryListener(this, true); yapıyor. Calısmayı saglyana da bu zaten. CustomersEntryListenersTests ı calıstır. DEMO10 Lokal Entry Listener DEMO9 daki entry listener bırbırınden bagımısz tum clıentlarda calısacak. Bu problem olabılrı yerıne gore.. Bunu engelleöek ıcın App tarafında data nodeu nda aynı seyı yapabılırsın. CustomersEntryListener ın init() ını goster burad customersMap.addLocalEntryListener(this); yapıyoruz. Bunu calıstır com.ibtech.smg.hazelcastdemo.demoapp.CustomersEntryListenersTests