Clustering your application with Hazelcast - Talip Ozturk

Today's applications are becoming more and more distributed every day, and it is well known that distributed programming is hard. With Hazelcast, though, distributed programming is easy and a lot of fun; a common reaction from Hazelcast users is "Oh my God, this cannot be that easy." Hazelcast is an open-source, highly scalable, transactional, distributed/partitioned implementation of queue, map, set, list, lock and executor services for Java. Hazelcast is for you if you want to easily: share data/state among many servers (e.g. web session sharing), cache your data (distributed cache), cluster your application, partition your in-memory data, send/receive messages among applications, distribute workload onto many servers, take advantage of parallel processing, or provide fail-safe data management.

  1. HAZELCAST: Art of Data Distribution. Open source, in-memory data grid. Talip Ozturk, @oztalip
  2. Who uses Hazelcast? ...and many more. Every second, one Hazelcast instance starts somewhere around the globe.
  3. The 9th Fastest Growing Skill
  4. Keywords: in-memory data grid, distributed (elastic) cache, NoSQL, clustering, scalability, partitioning, cloud computing
  5. Map
        import java.util.Map;
        import java.util.HashMap;

        Map map = new HashMap();
        map.put("1", "value");
        map.get("1");
  6. Concurrent Map
        import java.util.Map;
        import java.util.concurrent.*;

        Map map = new ConcurrentHashMap();
        map.put("1", "value");
        map.get("1");
  7. Distributed Map
        import java.util.Map;
        import com.hazelcast.core.Hazelcast;

        Map map = Hazelcast.getMap("mymap");
        map.put("1", "value");
        map.get("1");
  8. Why Hazelcast? To build highly available and scalable applications.
  9. Alternatives
  10. Differences: license/cost, feature set, ease of use, main focus (distributed map, tuple space, cache, processing vs. data), lightweight vs. heavyweight
  11. HAZELCAST
  12. Should be free! Apache License.
  13. Lightweight, without any dependency: a 1.7 MB jar.
  14. Introducing Hazelcast: map, queue, set, list, lock, semaphore, topic and executor service; native Java, C#, REST and Memcache interfaces; cluster info and membership events; dynamic clustering, backup, fail-over; transactional and secure. (A sketch of the core API follows below.)
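All of the primitives listed above hang off the same static entry point. A minimal sketch, assuming the Hazelcast 2.x API used throughout this deck (the structure names such as "tasks" and "news" are illustrative):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.ILock;
    import com.hazelcast.core.ITopic;
    import java.util.Map;
    import java.util.Queue;
    import java.util.concurrent.ExecutorService;

    public class CorePrimitives {
        public static void main(String[] args) {
            // Each getter lazily creates (or joins) the named distributed structure.
            Map<String, String> map = Hazelcast.getMap("mymap");
            Queue<String> queue = Hazelcast.getQueue("tasks");
            ILock lock = Hazelcast.getLock("mylock");
            ITopic<String> topic = Hazelcast.getTopic("news");
            ExecutorService executor = Hazelcast.getExecutorService();
        }
    }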
  15. Use cases: scale your application; share data across the cluster; partition your data; send/receive messages; balance the load; process in parallel on many JVMs.
  16. Demo
  17. Where is the Data?
  18. Data Partitioning in a Cluster. There is a fixed number of partitions (271 by default), and each key falls into exactly one of them: partitionId = hash(keyData) % PARTITION_COUNT. Partition ownerships are reassigned whenever the membership changes. (Diagram: nodes A, B, C. A sketch of the computation follows below.)
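A minimal sketch of that formula, not Hazelcast's internal code: the key's serialized bytes are hashed and mapped onto the fixed partition table, so every member can compute a key's partition locally, without coordination.

    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    public class PartitionSketch {
        static final int PARTITION_COUNT = 271; // Hazelcast's default

        // Illustration only: Hazelcast hashes the serialized key bytes
        // (keyData), not the object's hashCode().
        static int partitionId(byte[] keyData) {
            int hash = Arrays.hashCode(keyData);
            return (hash & Integer.MAX_VALUE) % PARTITION_COUNT; // keep it non-negative
        }

        public static void main(String[] args) {
            byte[] keyData = "customer-42".getBytes(StandardCharsets.UTF_8);
            System.out.println("partitionId = " + partitionId(keyData));
        }
    }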
  19. New Node Added (diagram: nodes A, B, C and a new node D)
  20-25. Migration (diagram sequence: partition ownership migrating from A, B, C onto the new node D, one step per slide)
  26. Migration Complete (diagram: nodes A, B, C, D; a "Crash" label leads into the next sequence)
  27. Node Crashes (diagram: nodes A, B, C, D; one node crashes)
  28-31. Restoring Backups (diagram sequence)
  32. Restoring Data from Backup (diagram)
  33. Data is Recovered from Backup (diagram)
  34-36. Backup for Recovered Data (diagram sequence: new backups created for the recovered partitions)
  37. All Safe (diagram: surviving nodes A, C, D)
  38. Node Types
  39. Topology
      Native client:
        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
      Lite member:
        -Dhazelcast.lite.member=true
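A slightly fuller native-client example; a minimal sketch assuming the Hazelcast 2.x client API, with the group name/password and the address being placeholders for your own cluster:

    import com.hazelcast.client.ClientConfig;
    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.core.HazelcastInstance;
    import java.util.Map;

    public class ClientSample {
        public static void main(String[] args) {
            ClientConfig clientConfig = new ClientConfig();
            clientConfig.getGroupConfig().setName("dev").setPassword("dev-pass"); // placeholders
            clientConfig.addAddress("10.0.0.1:5701"); // any one member is enough

            // The client holds no data itself; it forwards operations to the cluster.
            HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
            Map<String, String> map = client.getMap("mymap");
            map.put("1", "value");
        }
    }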
  40. Hazelcast Enterprise
  41. Community vs. Enterprise: Enterprise = Community + Elastic Memory + Security + Management Center
  42. Elastic Memory is OFF-HEAP storage.
      Give the JVM direct memory to work with:
        -XX:MaxDirectMemorySize=60G
      Then set hazelcast.elastic.memory.enabled to true, and size it with
      hazelcast.elastic.memory.total.size and hazelcast.elastic.memory.chunk.size.
      The default chunk size is 1 (KB); the chunk size has to be a power of 2, such as 1, 2, 4 and 8.
        <hazelcast>
          ...
          <map name="default">
            ...
            <storage-type>OFFHEAP</storage-type>
          </map>
        </hazelcast>
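The same properties can also be set programmatically; a minimal sketch (Elastic Memory is an Enterprise-only feature, the sizes here are illustrative, and the exact value formats should be checked against your version's documentation):

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class OffHeapSample {
        public static void main(String[] args) {
            // Also requires -XX:MaxDirectMemorySize=60G on the JVM command line.
            Config config = new Config();
            config.setProperty("hazelcast.elastic.memory.enabled", "true");
            config.setProperty("hazelcast.elastic.memory.total.size", "40G"); // illustrative
            config.setProperty("hazelcast.elastic.memory.chunk.size", "1K");  // power of 2

            HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
        }
    }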
  43. JAAS-based Security: credentials, cluster login module, cluster member security, native client security, authentication, authorization, permissions. (A client-side sketch follows below.)
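As one concrete point from that list, a native client can present credentials when it connects. A minimal sketch; UsernamePasswordCredentials and ClientConfig.setCredentials are my assumptions about the Enterprise security API of this era, so verify them against your version:

    import com.hazelcast.client.ClientConfig;
    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.security.UsernamePasswordCredentials;

    public class SecureClientSample {
        public static void main(String[] args) {
            ClientConfig clientConfig = new ClientConfig();
            // Assumed API: the cluster's login module authenticates these credentials.
            clientConfig.setCredentials(new UsernamePasswordCredentials("admin", "secret"));
            HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
        }
    }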
  44. Code Samples
  45. Hazelcast
      Hazelcast is thread safe:
        Map<String, Employee> employees = Hazelcast.getMap("employees");
        List<User> users = Hazelcast.getList("users");
      Many instances can run on the same JVM:
        Config config = new Config();
        HazelcastInstance h1 = Hazelcast.newHazelcastInstance(config);
        HazelcastInstance h2 = Hazelcast.newHazelcastInstance(config);
      All objects must be serializable:
        class Employee implements java.io.Serializable
        // better:
        class Employee implements com.hazelcast.nio.DataSerializable
  46. Cluster Interface
        import com.hazelcast.core.*;
        import java.util.Set;

        Cluster cluster = Hazelcast.getCluster();
        cluster.addMembershipListener(listener);
        Member localMember = cluster.getLocalMember();
        System.out.println(localMember.getInetAddress());
        Set<Member> setMembers = cluster.getMembers();
  47. Distributed Map
        import com.hazelcast.core.*;
        import java.util.concurrent.ConcurrentMap;

        Map<String, Customer> map = Hazelcast.getMap("customers");
        map.put("1", customer);
        Customer c = map.get("1");

        // ConcurrentMap methods
        map.putIfAbsent("2", new Customer("Chuck Norris"));
        map.replace("3", new Customer("Bruce Lee"));
  48. Distributed Queue
        import com.hazelcast.core.Hazelcast;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.TimeUnit;

        BlockingQueue<Task> queue = Hazelcast.getQueue("tasks");
        queue.offer(task);
        Task t = queue.poll();

        // Timed blocking operations
        queue.offer(task, 500, TimeUnit.MILLISECONDS);
        Task t = queue.poll(5, TimeUnit.SECONDS);

        // Indefinitely blocking operations
        queue.put(task);
        Task t = queue.take();
  49. Distributed Lock
        import com.hazelcast.core.Hazelcast;
        import java.util.concurrent.locks.Lock;

        Lock mylock = Hazelcast.getLock(mylockobject);
        mylock.lock();
        try {
            // do something
        } finally {
            mylock.unlock();
        }

        // Lock on Map
        IMap<String, Customer> map = Hazelcast.getMap("customers");
        map.lock("1");
        try {
            // do something
        } finally {
            map.unlock("1");
        }
  50. Distributed Topic
        import com.hazelcast.core.*;

        public class Sample implements MessageListener {
            public static void main(String[] args) {
                Sample sample = new Sample();
                ITopic<String> topic = Hazelcast.getTopic("default");
                topic.addMessageListener(sample);
                topic.publish("my-message-object");
            }

            public void onMessage(Object msg) {
                System.out.println("Got msg: " + msg);
            }
        }
  51. Distributed Events
        import com.hazelcast.core.*;

        public class Sample implements EntryListener {
            public static void main(String[] args) {
                Sample sample = new Sample();
                IMap map = Hazelcast.getMap("default");
                map.addEntryListener(sample, true);
                map.addEntryListener(sample, "key");
            }

            public void entryAdded(EntryEvent event) {
                System.out.println("Added " + event.getKey() + ":" + event.getValue());
            }

            public void entryRemoved(EntryEvent event) {
                System.out.println("Removed " + event.getKey() + ":" + event.getValue());
            }

            public void entryUpdated(EntryEvent event) {
                System.out.println("Updated " + event.getKey() + ":" + event.getValue());
            }

            public void entryEvicted(EntryEvent event) {
                // required by the EntryListener interface; no-op here
            }
        }
  52. Executor Service
  53. Hello Task
      A simple task:
        public class HelloTask implements Callable<String>, Serializable {
            @Override
            public String call() {
                Cluster cluster = Hazelcast.getCluster();
                return "Hello from " + cluster.getLocalMember();
            }
        }
      Execute it on any member:
        ExecutorService executor = Hazelcast.getExecutorService();
        Future<String> future = executor.submit(new HelloTask());
        // ...
        String result = future.get();
      Attach a callback:
        DistributedTask task = ...
        task.setExecutionCallback(new ExecutionCallback() {
            public void done(Future f) {
                // Get notified when the task is done!
            }
        });
  54. Hazelcast can execute a task...
      1. on a specific node
      2. on any available node
      3. on a collection of defined nodes
      4. on the node owning a key
        ExecutorService executor = Hazelcast.getExecutorService();

        // CASE 1: Run the task on a specific member.
        Member member = ...
        DistributedTask<String> task1 = new DistributedTask<String>(new HelloTask(), member);
        executor.execute(task1);

        // CASE 2: Run the task on the member owning the key.
        DistributedTask<String> task2 = new DistributedTask<String>(new HelloTask(), "key");
        executor.execute(task2);

        // CASE 3: Run the task on a group of members.
        Set<Member> members = ...
        MultiTask<String> task3 = new MultiTask<String>(new HelloTask(), members);
        executor.execute(task3);
  55. Executor Service Scenario
        public int removeOrder(long customerId, long orderId) {
            IMap<Long, Customer> map = Hazelcast.getMap("customers");
            map.lock(customerId);
            try {
                Customer customer = map.get(customerId);
                customer.removeOrder(orderId);
                map.put(customerId, customer);
                return customer.getOrderCount();
            } finally {
                map.unlock(customerId);
            }
        }
  56. Order Deletion Task
        public class OrderDeletionTask implements Callable<Integer>, PartitionAware, Serializable {
            private long customerId;
            private long orderId;

            public OrderDeletionTask(long customerId, long orderId) {
                this.customerId = customerId;
                this.orderId = orderId;
            }

            public Integer call() {
                IMap<Long, Customer> map = Hazelcast.getMap("customers");
                map.lock(customerId);
                try {
                    Customer customer = map.get(customerId);
                    customer.removeOrder(orderId);
                    map.put(customerId, customer);
                    return customer.getOrderCount();
                } finally {
                    map.unlock(customerId);
                }
            }

            public Object getPartitionKey() {
                return customerId;
            }
        }
  57. Send the computation over to the data
        public static int removeOrder(long customerId, long orderId) throws Exception {
            ExecutorService es = Hazelcast.getExecutorService();
            OrderDeletionTask task = new OrderDeletionTask(customerId, orderId);
            Future<Integer> future = es.submit(task);
            return future.get();
        }
  58. Persistence
  59. Persistence
        import com.hazelcast.core.MapStore;
        import com.hazelcast.core.MapLoader;

        public class MyMapStore implements MapStore, MapLoader {
            public Set loadAllKeys() {
                return readKeys();
            }
            public Object load(Object key) {
                return readFromDatabase(key);
            }
            public void store(Object key, Object value) {
                saveIntoDatabase(key, value);
            }
            public void remove(Object key) {
                removeFromDatabase(key);
            }
        }
  60. Persistence modes:
      Write-Behind: entries are stored asynchronously
      Write-Through: entries are stored synchronously
      Read-Through: if get(key) finds nothing in memory, the entry is loaded from the store
      (A wiring sketch follows below.)
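Wiring the store into a map and choosing between the modes is a configuration concern. A minimal sketch, where the write-delay value decides the mode (0 = write-through, any positive value = write-behind) and the map name "customers" and MyMapStore are carried over from the earlier slides:

    import com.hazelcast.config.Config;
    import com.hazelcast.config.MapConfig;
    import com.hazelcast.config.MapStoreConfig;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class PersistenceSample {
        public static void main(String[] args) {
            MapStoreConfig storeConfig = new MapStoreConfig();
            storeConfig.setClassName(MyMapStore.class.getName());
            storeConfig.setEnabled(true);
            // 0 makes store() calls synchronous (write-through);
            // a positive value batches them asynchronously (write-behind).
            storeConfig.setWriteDelaySeconds(5);

            MapConfig mapConfig = new MapConfig();
            mapConfig.setName("customers");
            mapConfig.setMapStoreConfig(storeConfig);

            Config config = new Config();
            config.addMapConfig(mapConfig);
            HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
        }
    }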
  61. Recap
      • Distributed implementations of: Map, Queue, Set, List, MultiMap, Lock, Executor Service, Topic, Semaphore, AtomicLong, CountDownLatch
      • Embedded, but accessible through: native Java & C# clients, REST, Memcache
      • Dynamic: clustering, add/remove nodes, backup, fail-over
  62. Q&A. Join us!
      github.com/hazelcast/
      hazelcast@googlegroups.com
      @hazelcast
      Hazelcast Hackers LinkedIn group
