Hazelcast
Introduction to Hazelcast, an open source in-memory data grid solution for Java

Hazelcast Presentation Transcript

  • 1. Talip Ozturk, talip@hazelcast.com
  • 2. Agenda
      • Introduction
      • Code Samples
      • Demo
      • Internals
      • Q/A
  • 3. What is Hazelcast?
      • In-Memory Data Grid (IMDG)
      • Clustering and highly scalable data distribution solution for Java
      • Distributed Data Structures for Java
      • Distributed Hashtable (DHT) and more
  • 4. Why Hazelcast?
      • Scale your application
      • Share data across the cluster
      • Partition your data
      • Send/receive messages
      • Balance the load
      • Process in parallel on many JVMs
  • 5. Solutions in the Market
      • Oracle Coherence
      • IBM WebSphere eXtreme Scale / ObjectGrid
      • Terracotta
      • Gigaspaces
      • Gemstone
      • JBossCache/JGroups/Infinispan
  • 6. Differences
      • License / cost
      • Feature set
      • Ease of use
      • Main focus (distributed map, tuple space, cache, processing vs. data)
      • Light/heavy weight
  • 7. Introducing Hazelcast
      • Open source (Apache License)
      • Super light, simple, no dependencies
      • Distributed/partitioned implementation of map, queue, set, list, lock and executor service
      • Transactional (JCA support)
      • Topic for pub/sub messaging
      • Cluster info and membership events
      • Dynamic clustering, backup, fail-over
  • 8. Data Partitioning in a Cluster
      If you have 5 million objects in your 5-node cluster, then each node will
      carry 1 million objects and 1 million backup objects.
      (diagram: Server1 to Server5)
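The arithmetic on this slide can be sketched in plain Java. This is only an illustration of the owner-plus-one-backup idea; the modulo-based owner assignment and the `PartitionSketch` class are assumptions for the sketch, not Hazelcast's actual partitioning code.

```java
// Illustrative sketch (not Hazelcast's real partitioning code): assign each
// key an owner node by hash, place one backup on the "next" node, then count
// how many primary and backup objects each node ends up holding.
public class PartitionSketch {
    public static void main(String[] args) {
        final int nodes = 5;
        final int objects = 5_000_000;
        long[] owned = new long[nodes];
        long[] backup = new long[nodes];
        for (int key = 0; key < objects; key++) {
            int owner = key % nodes;          // simplified hash-based owner
            owned[owner]++;
            backup[(owner + 1) % nodes]++;    // one backup copy per object
        }
        for (int n = 0; n < nodes; n++) {
            System.out.println("Server" + (n + 1) + ": " + owned[n]
                + " primary, " + backup[n] + " backup objects");
        }
    }
}
```

With an even hash, each of the 5 servers ends up with 1 million primary and 1 million backup objects, matching the slide.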
  • 9. SuperClient in a Cluster
      • -Dhazelcast.super.client=true
      • As fast as any member in the cluster
      • Holds no data
      (diagram: Server1 to Server5)
  • 10. Code Samples – Cluster Interface

      import com.hazelcast.core.*;
      import java.util.Set;

      Cluster cluster = Hazelcast.getCluster();
      cluster.addMembershipListener(listener);
      Member localMember = cluster.getLocalMember();
      System.out.println(localMember.getInetAddress());
      Set<Member> setMembers = cluster.getMembers();
  • 11. Code Samples – Distributed Map

      import com.hazelcast.core.Hazelcast;
      import java.util.Map;

      Map<String, Customer> map = Hazelcast.getMap("customers");
      map.put("1", customer);
      Customer c = map.get("1");
  • 12. Code Samples – Distributed MultiMap

      import com.hazelcast.core.Hazelcast;
      import com.hazelcast.core.MultiMap;
      import java.util.Collection;

      MultiMap<String, Order> map = Hazelcast.getMultiMap("orders");
      map.put("1", new Order("iPhone", 340));
      map.put("1", new Order("MacBook", 1200));
      map.put("1", new Order("iPod", 79));
      map.put("2", new Order("iMac", 1500));
      Collection<Order> colOrders = map.get("1");
      for (Order order : colOrders) {
          // process order
      }
      boolean removed = map.remove("1", new Order("iPod", 79));
  • 13. Code Samples – Distributed Queue

      import com.hazelcast.core.Hazelcast;
      import java.util.concurrent.BlockingQueue;
      import java.util.concurrent.TimeUnit;

      BlockingQueue<Task> queue = Hazelcast.getQueue("tasks");
      queue.offer(task);
      Task t = queue.poll();
      Task t2 = queue.poll(5, TimeUnit.SECONDS);
  • 14. Code Samples – Distributed Set

      import com.hazelcast.core.Hazelcast;
      import java.util.Set;

      Set<Price> set = Hazelcast.getSet("IBM-Quote-History");
      set.add(new Price(10, time1));
      set.add(new Price(11, time2));
      set.add(new Price(13, time3));
      for (Price price : set) {
          // process price
      }
  • 15. Code Samples – Distributed Lock

      import com.hazelcast.core.Hazelcast;
      import java.util.concurrent.locks.Lock;

      Lock mylock = Hazelcast.getLock(mylockobject);
      mylock.lock();
      try {
          // do something
      } finally {
          mylock.unlock();
      }
  • 16. Code Samples – Distributed Topic

      import com.hazelcast.core.*;

      public class Sample implements MessageListener {
          public static void main(String[] args) {
              Sample sample = new Sample();
              Topic topic = Hazelcast.getTopic("default");
              topic.addMessageListener(sample);
              topic.publish("my-message-object");
          }

          public void onMessage(Object msg) {
              System.out.println("Got msg: " + msg);
          }
      }
  • 17. Code Samples – Distributed Events

      import com.hazelcast.core.IMap;
      import com.hazelcast.core.Hazelcast;
      import com.hazelcast.core.EntryListener;
      import com.hazelcast.core.EntryEvent;

      public class Sample implements EntryListener {
          public static void main(String[] args) {
              Sample sample = new Sample();
              IMap map = Hazelcast.getMap("default");
              map.addEntryListener(sample, true);
              map.addEntryListener(sample, "key");
          }

          public void entryAdded(EntryEvent event) {
              System.out.println("Added " + event.getKey() + ":" + event.getValue());
          }

          public void entryRemoved(EntryEvent event) {
              System.out.println("Removed " + event.getKey() + ":" + event.getValue());
          }

          public void entryUpdated(EntryEvent event) {
              System.out.println("Updated " + event.getKey() + ":" + event.getValue());
          }
      }
  • 18. Code Samples – Transactions

      import com.hazelcast.core.Hazelcast;
      import com.hazelcast.core.Transaction;
      import java.util.Map;
      import java.util.Queue;

      Map map = Hazelcast.getMap("default");
      Queue queue = Hazelcast.getQueue("default");
      Transaction txn = Hazelcast.getTransaction();
      txn.begin();
      try {
          Object obj = queue.poll();
          // process obj
          map.put(key, obj);
          txn.commit();
      } catch (Exception e) {
          txn.rollback();
      }
  • 19. Code Samples – Executor Service

      import com.hazelcast.core.DistributedTask;
      import com.hazelcast.core.Hazelcast;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.FutureTask;

      FutureTask<String> futureTask =
          new DistributedTask<String>(new Echo(input), member);
      ExecutorService es = Hazelcast.getExecutorService();
      es.execute(futureTask);
      String result = futureTask.get();
  • 20. Executor Service Scenario

      public int addBonus(long customerId, int extraBonus) {
          IMap<Long, Customer> mapCustomers = Hazelcast.getMap("customers");
          mapCustomers.lock(customerId);
          try {
              Customer customer = mapCustomers.get(customerId);
              int currentBonus = customer.addBonus(extraBonus);
              mapCustomers.put(customerId, customer);
              return currentBonus;
          } finally {
              mapCustomers.unlock(customerId);
          }
      }
  • 21. Send Computation over Data

      public class BonusAddTask implements Callable<Integer>, Serializable {
          private static final long serialVersionUID = 1L;
          private long customerId;
          private int extraBonus;

          public BonusAddTask() { }

          public BonusAddTask(long customerId, int extraBonus) {
              this.customerId = customerId;
              this.extraBonus = extraBonus;
          }

          public Integer call() {
              IMap<Long, Customer> mapCustomers = Hazelcast.getMap("customers");
              mapCustomers.lock(customerId);
              try {
                  Customer customer = mapCustomers.get(customerId);
                  int currentBonus = customer.addBonus(extraBonus);
                  mapCustomers.put(customerId, customer);
                  return currentBonus;
              } finally {
                  mapCustomers.unlock(customerId);
              }
          }
      }
  • 22. Send Computation over Data

      public int addBonus(long customerId, int extraBonus)
              throws InterruptedException, ExecutionException {
          ExecutorService es = Hazelcast.getExecutorService();
          FutureTask<Integer> task = new DistributedTask<Integer>(
              new BonusAddTask(customerId, extraBonus), customerId);
          es.execute(task);
          return task.get();
      }
  • 23. Configuration

      <hazelcast>
        <group>
          <name>dev</name>
          <password>dev-pass</password>
        </group>
        <network>
          <port auto-increment="true">5701</port>
          <join>
            <multicast enabled="true">
              <multicast-group>224.2.2.3</multicast-group>
              <multicast-port>54327</multicast-port>
            </multicast>
            <tcp-ip enabled="false">
              <interface>192.168.1.2-5</interface>
              <hostname>istanbul.acme</hostname>
            </tcp-ip>
          </join>
          <interfaces enabled="false">
            <interface>10.3.17.*</interface>
          </interfaces>
        </network>
        <queue name="default">
          <max-size-per-jvm>10000</max-size-per-jvm>
          <time-to-live-seconds>60</time-to-live-seconds>
        </queue>
        <map name="default">
          <backup-count>1</backup-count>
          <time-to-live-seconds>60</time-to-live-seconds>
          <max-size>10000</max-size>
          <eviction-policy>LRU</eviction-policy>
          <eviction-percentage>25</eviction-percentage>
        </map>
      </hazelcast>
  • 24. DEMO
  • 25. Internals: Threads
      • User threads (client threads)
      • ServiceThread (com.hazelcast.impl.ClusterService)
      • InThread
      • OutThread
      • MulticastThread
      • ExecutorService threads
  • 26. Internals: Cluster Membership
      • Multicast and unicast discovery
      • Every member sends heartbeats to the oldest member
      • The oldest member manages the memberships:
          – sends the member list
          – tells members to sync their data
  • 27. Internals: Serialization
      • Optimized for String, byte[], Long, Integer
      • Custom serialization with com.hazelcast.nio.DataSerializable
      • Standard Java serialization
  • 28. Internals: Serialization

      public class Address implements com.hazelcast.nio.DataSerializable {
          private String street;
          private int zipCode;
          private String city;
          private String state;

          public Address() {}
          // getters, setters ...

          public void writeData(DataOutput out) throws IOException {
              out.writeUTF(street);
              out.writeInt(zipCode);
              out.writeUTF(city);
              out.writeUTF(state);
          }

          public void readData(DataInput in) throws IOException {
              street = in.readUTF();
              zipCode = in.readInt();
              city = in.readUTF();
              state = in.readUTF();
          }
      }
  • 29. Internals: Serialization

      public class Employee implements com.hazelcast.nio.DataSerializable {
          private String firstName;
          private String lastName;
          private int age;
          private double salary;
          private Address address; // Address itself is DataSerializable

          public Employee() {}

          public void writeData(DataOutput out) throws IOException {
              out.writeUTF(firstName);
              out.writeUTF(lastName);
              out.writeInt(age);
              out.writeDouble(salary);
              address.writeData(out);
          }

          public void readData(DataInput in) throws IOException {
              firstName = in.readUTF();
              lastName = in.readUTF();
              age = in.readInt();
              salary = in.readDouble();
              address = new Address();
              address.readData(in);
          }
      }
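The writeData/readData pattern on these slides can be exercised end to end with only the JDK's stream classes, no Hazelcast on the classpath. The `RoundTrip` class and the sample field values below are illustrative.

```java
// Self-contained round trip of the DataSerializable-style pattern above,
// using only java.io.DataOutput/DataInput (no Hazelcast dependency).
import java.io.*;

public class RoundTrip {
    static class Address {
        String street; int zipCode; String city; String state;

        void writeData(DataOutput out) throws IOException {
            out.writeUTF(street); out.writeInt(zipCode);
            out.writeUTF(city); out.writeUTF(state);
        }

        void readData(DataInput in) throws IOException {
            street = in.readUTF(); zipCode = in.readInt();
            city = in.readUTF(); state = in.readUTF();
        }
    }

    public static void main(String[] args) throws IOException {
        Address a = new Address();
        a.street = "Istiklal Cd."; a.zipCode = 34430;
        a.city = "Istanbul"; a.state = "TR";

        // Serialize to a byte[]: conceptually what happens when an object
        // is converted to its binary Data form before going on the wire.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        a.writeData(new DataOutputStream(bos));
        byte[] bytes = bos.toByteArray();

        // Deserialize on the "receiving" side.
        Address b = new Address();
        b.readData(new DataInputStream(new ByteArrayInputStream(bytes)));
        System.out.println(b.city + " " + b.zipCode); // prints "Istanbul 34430"
    }
}
```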
  • 30. Internals: Serialization
      • Hazelcast doesn't work with your objects directly
      • Hazelcast works only with com.hazelcast.nio.Data
      • Data is the binary representation of your object
      • Data is a list of re-used java.nio.ByteBuffers
  • 31. Internals: ObjectPool
      • Thread-aware object pool
      • Try the thread's lock-free queue first
      • If the thread's queue is full/empty, go to the global (concurrent) queue
      • See com.hazelcast.impl.ThreadContext.ObjectPool
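A minimal sketch of such a thread-aware pool, assuming a thread-local deque backed by a shared concurrent queue as fallback. The class and method names (`ThreadAwarePool`, `obtain`, `release`) are illustrative, not Hazelcast's API.

```java
// Sketch of a thread-aware object pool: each thread first tries its own
// uncontended local queue, and falls back to a shared concurrent queue
// when the local one is empty (on take) or full (on return).
import java.util.ArrayDeque;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ThreadAwarePool<T> {
    private final int localCapacity;
    private final ConcurrentLinkedQueue<T> global = new ConcurrentLinkedQueue<>();
    private final ThreadLocal<ArrayDeque<T>> local;

    public ThreadAwarePool(int localCapacity) {
        this.localCapacity = localCapacity;
        this.local = ThreadLocal.withInitial(ArrayDeque::new);
    }

    /** Take an object: thread-local queue first, then the global queue. */
    public T obtain() {
        T obj = local.get().poll();
        return (obj != null) ? obj : global.poll(); // null: caller creates a new one
    }

    /** Return an object: keep it local unless the local queue is full. */
    public void release(T obj) {
        ArrayDeque<T> mine = local.get();
        if (mine.size() < localCapacity) {
            mine.offer(obj);
        } else {
            global.offer(obj);
        }
    }
}
```

When `obtain()` returns null the caller allocates a fresh object; the point of the thread-local tier is that the common case never touches a shared, contended structure.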

  • 32. Internals: Sockets
      • Java NIO (non-blocking mode)
      • Only 2 threads for read/write, regardless of cluster size
      • InThread for read and accept
      • OutThread for write and connect
      • A pool of java.nio.ByteBuffers is used
  • 33. Internals: Sockets
      • A com.hazelcast.nio.Packet travels over the wire
      • Packet structure: (diagram)
      • Packet objects are also re-used
      • Processed only by the ServiceThread
  • 34. Internals: Map.put(key, value)
      • com.hazelcast.nio.Data
      • com.hazelcast.impl.BaseManager.Call
      • com.hazelcast.impl.BaseManager.Request
      • com.hazelcast.nio.Packet
      • com.hazelcast.impl.BaseManager.PacketProcessor
  • 35. Internals: Map.put(key, value)
      • Convert the key and value objects to Data instances
      • The hash of the key tells us which member is the owner
      • If the owner is local, simply put the key/value
      • If remote:
          – send it to the owner member
          – read the response
      • If the remote owner dies, re-do
  • 36. Internals: Distributed Map
      • Fixed number of blocks (segments)
      • Each key falls into one of these blocks
      • Each block is owned by a member
      • Every member knows the block owners
      • blockId = hash(keyData) % BLOCK_COUNT
      • Block ownership is reassigned upon membership change
      • Blocks and keys migrate for load-balancing
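The block assignment rule above can be sketched in plain Java. The hash function and the `BLOCK_COUNT` value of 271 are assumptions for the sketch, not Hazelcast's exact implementation.

```java
// Illustrative block (segment) assignment following the slide's rule:
// blockId = hash(keyData) % BLOCK_COUNT. Hash choice and BLOCK_COUNT = 271
// are assumptions for this sketch.
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BlockAssignment {
    static final int BLOCK_COUNT = 271; // assumed; any fixed count works

    static int blockId(byte[] keyData) {
        // floorMod keeps the result in [0, BLOCK_COUNT) even when the
        // hash is negative.
        int hash = Arrays.hashCode(keyData);
        return Math.floorMod(hash, BLOCK_COUNT);
    }

    public static void main(String[] args) {
        byte[] key = "customer-42".getBytes(StandardCharsets.UTF_8);
        // The same key always lands in the same block, so every member can
        // route an operation to the block's owner without coordination.
        System.out.println("blockId = " + blockId(key));
    }
}
```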
  • 37. Internals: Distributed Queue
      • The oldest member creates blocks as needed
      • Every member knows the block owners
      • Items are added into the blocks
      • No migration happens; short-lived objects
      • Each member holds a takeBlockId and a putBlockId
      • 'Go-next' if the target is wrong or the block is full/empty
  • 38. (diagram) map.put(key, value) across two JVMs: the key and value are
      converted to Data, wrapped in a Packet, and, if the caller (JVM-1) is not
      the owner, sent over TCP/IP to JVM-2, where a PacketProcessor turns it
      into a Call (MPut) with a Request holding the Data key and value;
      whichever JVM owns the key processes the Request.
  • 39. Planned Features
      • Persistence (Load/Store) for distributed map
      • Distributed search and continuous queries
      • Distributed java.util.concurrent.{DelayQueue, Semaphore, CountDownLatch}
      • Distributed tuple space
      • Pure Java and C# clients
  • 40. Questions?
      • http://www.hazelcast.com
      • http://code.google.com/p/hazelcast/
      • hazelcast@googlegroups.com
      • Twitter: @oztalip
      • http://www.linkedin.com/in/talipozturk