1. Redis vs Hazelcast
Language: Hazelcast is written in Java, while Redis is implemented in C.
Threading: Redis executes commands on a single thread, while Hazelcast is
multithreaded and can take advantage of all available CPU cores.
Design: Hazelcast was designed from the ground up for distributed
environments; Redis was initially intended for standalone use. Clustered
Redis does not support the SELECT command, meaning it supports only a single DB
(namespace) per cluster. In contrast, Hazelcast supports an unlimited number of maps
and caches per cluster.
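To illustrate the namespace difference, a single Hazelcast cluster can serve any number of independently named maps. A minimal sketch (the map names are made up for the example, and it requires the Hazelcast library on the classpath):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class Namespaces {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Each named map is an independent distributed namespace,
        // roughly what separate Redis DBs would give you.
        IMap<String, String> users    = hz.getMap("users");
        IMap<String, String> sessions = hz.getMap("sessions");

        users.put("u1", "Alice");
        sessions.put("s1", "u1");

        hz.shutdown();
    }
}
```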
Clustering
Hazelcast
The easiest option is multicast discovery, where members use multicast UDP
transmission to get to know each other automatically without any additional
configuration.
It is also possible to specify the member addresses manually through TCP.
Providing just one working address will be enough regardless of whether your
cluster consists of three or thirty nodes.
For cloud deployments, Hazelcast supports automatic discovery of its member
instances on Amazon EC2 and Google Compute Engine.
Hazelcast also provides a Discovery Service Provider Interface (SPI), which allows
users to implement custom member discovery mechanisms and deploy Hazelcast on
any platform. The Discovery SPI also lets you build discovery mechanisms on top of
third-party software such as ZooKeeper, Eureka, Consul, or etcd.
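The first two discovery options above can be selected programmatically. A minimal sketch, assuming the Hazelcast library is on the classpath (the member addresses are placeholders):

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;

public class TcpJoin {
    public static void main(String[] args) {
        Config config = new Config();
        JoinConfig join = config.getNetworkConfig().getJoin();

        // Switch from multicast discovery to an explicit TCP member list.
        join.getMulticastConfig().setEnabled(false);
        join.getTcpIpConfig()
            .setEnabled(true)
            .addMember("10.0.0.1")   // one reachable member is enough;
            .addMember("10.0.0.2");  // the rest of the cluster is found via it

        Hazelcast.newHazelcastInstance(config);
    }
}
```

Leaving multicast enabled (the default) gives the zero-configuration behavior described above.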
Redis
A Redis client connects to a Redis cluster through a TCP connection. There is no
provision for discovering Redis servers on a multicast UDP network.
Redis does not provide an automatic discovery mechanism for any cloud provider,
which makes it difficult to use in custom cloud deployments.
Establishing a Redis cluster is not straightforward: one needs to launch a special
utility script, redis-trib.rb, while specifying all member addresses. The script is
included in the Redis distribution, but it is written in Ruby and thus requires an
additional Ruby runtime dependency. (In Redis 5.0 and later, this functionality
moved into redis-cli via the --cluster subcommands.)
2. Scaling
Adding a node to a Hazelcast cluster is very easy: launching the node with the
proper configuration does the job. Removing a node is just as simple: shut the node
down, and its data is recovered on the other nodes from backups.
In both cases, Hazelcast automatically rebalances the data across all available server
nodes without affecting the availability of the data.
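A sketch of the Hazelcast side of this, assuming the Hazelcast library is on the classpath: starting a second instance with the same configuration grows the cluster, and a graceful shutdown migrates or restores its partitions before the process exits.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class Scaling {
    public static void main(String[] args) {
        // Both members use the same (default) configuration and
        // discover each other automatically.
        HazelcastInstance first  = Hazelcast.newHazelcastInstance();
        HazelcastInstance second = Hazelcast.newHazelcastInstance();

        System.out.println(first.getCluster().getMembers().size());

        // Graceful shutdown: partitions owned by this member are migrated
        // away (or recovered from backups) before it leaves the cluster.
        second.shutdown();
        first.shutdown();
    }
}
```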
On the other hand, adding a node to a Redis cluster is less convenient: the user
has to run the utility script to manually introduce the new member and then
re-partition the cluster with the same script. Removal in Redis is similar, but the
order of operations is reversed: repartition, make the cluster “forget” the node, and
only then shut it down.
3. Distributed computing
Let’s say you have stored a hundred gigabytes worth of data in your cache and you
need to quickly find all the photos that were taken around a certain location.
Simply fetching the data from the servers to the client and then extracting the
metadata on the client side might work, but it would take quite a lot of time and
would also cause memory pressure on the client. It is better to send the extraction
logic to the servers that store the actual data and let them work in parallel, with
less data exchanged over the wire.
This can be done in both Hazelcast and Redis.
Redis supports evaluation of Lua scripts that can invoke pretty much any Redis
operation. In clustered mode, Lua scripts have a limitation: if a script is going
to use multiple keys, all those keys must belong to the same hash slot (partition).
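The hash slot of a key is CRC16(key) mod 16384, and Redis Cluster supports “hash tags”: if a key contains a {...} section, only that section is hashed, which is how related keys are forced into the same slot so a multi-key Lua script can use them together. A self-contained sketch of the slot computation:

```java
import java.nio.charset.StandardCharsets;

public class HashSlot {
    // CRC16-CCITT (XModem), the variant Redis Cluster uses.
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // Hash-tag rule: if the key has a non-empty {...} section,
    // only its contents are hashed.
    static int slot(String key) {
        int open = key.indexOf('{');
        if (open >= 0) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {
                key = key.substring(open + 1, close);
            }
        }
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        // Hash tags place related keys in the same slot, so a Lua
        // script may operate on both of them in clustered mode.
        System.out.println(slot("{user1000}.following") == slot("{user1000}.followers"));
    }
}
```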
Hazelcast has a Distributed Execution Service, which is a special implementation of
java.util.concurrent.ExecutorService that allows one to distribute computation tasks
written in Java among several physical machines. Depending on the use case, the
tasks can be routed to cluster nodes in a few different ways: randomly, based on
address, or based on key ownership.
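A minimal sketch of routing by key ownership with the Distributed Execution Service, assuming the Hazelcast library is on the classpath (the task and key names are made up for the example):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;

import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

public class ExecDemo {
    // Tasks are serialized and shipped to a member, so they must be Serializable.
    static class MetadataScan implements Callable<Integer>, Serializable {
        public Integer call() {
            // Runs on the member that owns the routed key, next to the data;
            // a real task would scan locally stored entries here.
            return 0;
        }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IExecutorService executor = hz.getExecutorService("default");

        // Route by key ownership: the task executes on the member
        // that holds the partition for "photo-123".
        Future<Integer> result = executor.submitToKeyOwner(new MetadataScan(), "photo-123");
        System.out.println(result.get());

        hz.shutdown();
    }
}
```

submit() without a key routes randomly, and submitToMember() targets a specific member address, matching the three routing styles described above.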
Hazelcast also supports efficient bulk updates of map data with an abstraction called
EntryProcessor. An EntryProcessor can be executed on all keys, on specific keys
supplied by the user, or on keys that match specific criteria.
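A minimal EntryProcessor sketch, assuming the Hazelcast library is on the classpath (the map and key names are made up for the example):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.AbstractEntryProcessor;

import java.util.Map;

public class EntryProcessorDemo {
    // Runs on the member that owns each entry, so the update happens
    // in place with no entries shipped back to the caller.
    static class Increment extends AbstractEntryProcessor<String, Integer> {
        public Object process(Map.Entry<String, Integer> entry) {
            entry.setValue(entry.getValue() + 1);
            return null;
        }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Integer> counters = hz.getMap("counters");
        counters.put("views", 10);

        // Bulk update across all keys of the map.
        counters.executeOnEntries(new Increment());

        System.out.println(counters.get("views")); // 11
        hz.shutdown();
    }
}
```

executeOnKey() and executeOnKeys() cover the specific-key cases, and executeOnEntries() with a Predicate covers the criteria-based case.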