An Engineer's Intro to Oracle Coherence
Building scalable, highly-available applications that perform well is not an easy task. These features cannot be simply “bolted” onto an existing application – they have to be architected into it. Unfortunately, the things we need to do to achieve them are often in conflict with each other, and finding the right balance is crucial. In this session we will discuss why scaling web applications is difficult and will look at some of the solutions we have come up with in the past to deal with the issues involved. We will then look at how in-memory data grids can make our jobs easier by providing a solid architectural foundation to build our applications on top of. If you are new to in-memory data grids, you are guaranteed to leave the presentation eager to learn more. However, even if you are already using one you will likely walk out with a few ideas on how to improve the performance and scalability of your applications.

  • Comment: “Hi, can Coherence*Web be used for sharing session data across heterogeneous web applications?”
Speaker Notes
  • So why can’t we use database technology to bring high-performance transaction processing to Java applications? The problem is the classic mismatch between the object and relational models, and the huge performance penalty of translating back and forth between those two representations of the data. First the object data must be loaded into mid-tier memory from several relational database tables. Then the transaction (object method) is performed. Finally the data is written back to the relational database to commit the transaction and save session state. If another transaction (method call) is performed with the same object, this same process is repeated from beginning to end. This performance problem is compounded in modern event-driven architectures, where one object method call can spawn a whole succession of others.
  • It is a development library: in Java it is JARs, in .NET it is DLLs, etc. We ship additional JARs to support Spring and Groovy. Coherence*Web HTTP session management can be used with WebLogic Server (WLS) and Oracle Application Server (OAS). A large online retailer has a unified shopping cart across multiple application servers (WebSphere, .NET), using the Coherence*Web WebInstaller, which replaces the default session replication.
  • Serialization Options: Because serialization is often the most expensive part of clustered data management, Coherence provides the following options for serializing/deserializing data (a sketch of a POF-serializable class follows this note):
    • java.io.Serializable – the simplest, but slowest option.
    • com.tangosol.io.pof.PofSerializer – the Portable Object Format (also referred to as POF) is a language-agnostic binary format. POF was designed to be incredibly efficient in both space and time and has become the recommended serialization option in Coherence.
    • java.io.Externalizable – requires developers to implement serialization manually, but can provide significant performance benefits. Compared to java.io.Serializable, this can cut serialized data size by a factor of two or more (especially helpful with distributed caches, as they generally cache data in serialized form). Most importantly, CPU usage is dramatically reduced.
    • com.tangosol.io.ExternalizableLite – very similar to java.io.Externalizable, but offers better performance and less memory usage by using a more efficient I/O stream implementation.
    • com.tangosol.run.xml.XmlBean – a default implementation of ExternalizableLite.
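  A minimal sketch (not from the original notes) of what a POF-serializable class might look like, assuming the standard com.tangosol.io.pof.PortableObject interface; the Trade class and its fields are invented for illustration, and the class would additionally need to be registered in the POF configuration file:

    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    import com.tangosol.io.pof.PortableObject;
    import java.io.IOException;

    // Hypothetical value object, used only for illustration.
    public class Trade implements PortableObject {
        private String symbol;
        private int quantity;

        public Trade() { }                     // a public no-arg constructor is required

        public Trade(String symbol, int quantity) {
            this.symbol = symbol;
            this.quantity = quantity;
        }

        // Each field is read and written by numeric POF index.
        public void readExternal(PofReader in) throws IOException {
            symbol = in.readString(0);
            quantity = in.readInt(1);
        }

        public void writeExternal(PofWriter out) throws IOException {
            out.writeString(0, symbol);
            out.writeInt(1, quantity);
        }
    }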
  • Coherence provides several cache implementations:
    • Local Cache – local on-heap caching for non-clustered caching.
    • Replicated Cache Service – perfect for small, read-heavy caches.
    • Partitioned Cache Service – true linear scalability for both read and write access. Data is automatically, dynamically and transparently partitioned across nodes. The distribution algorithm minimizes network traffic and avoids service pauses by incrementally shifting data.
    • Near Cache – provides the performance of local caching with the scalability of distributed caching. Several different near-cache strategies provide varying trade-offs between performance and synchronization guarantees.
    In-process caching provides the highest level of raw performance, since objects are managed within the local JVM. This benefit is most directly realized by the Local, Replicated, Optimistic and Near Cache implementations.
    Out-of-process (client/server) caching provides the option of using dedicated cache servers. This can be helpful when you want to partition workloads (to avoid stressing the application servers). This is accomplished by using the Partitioned cache implementation and simply disabling local storage on client nodes through a single command-line option or a one-line entry in the XML configuration (see the launch sketch after this note).
    Tiered caching (using the Near Cache functionality) enables you to couple local caches on the application server with larger, partitioned caches on the cache servers, combining the raw performance of local caching with the scalability of partitioned caching. This is useful for both dedicated cache servers and co-located caching (cache partitions stored within the application server JVMs).
    Technical details – cache types/strategies:
    • Distributed Cache – a distributed, or partitioned, cache is a clustered, fault-tolerant cache that has linear scalability. Data is partitioned among all the machines of the cluster. For fault tolerance, partitioned caches can be configured to keep each piece of data on one or more unique machines within a cluster. Distributed caches are the most commonly used caches in Coherence.
    • Replicated Cache – a clustered, fault-tolerant cache where data is fully replicated to every member in the cluster. This cache offers the fastest read performance, with linear scalability for reads but poor scalability for writes (as writes must be processed by every member in the cluster). Because data is replicated to all machines, adding servers does not increase aggregate cache capacity.
    • Optimistic Cache – a clustered cache implementation similar to the replicated cache implementation but without any concurrency control. This implementation offers higher write throughput than a replicated cache. It also allows an alternative underlying store for the cached data (for example, an MRU/MFU-based cache). However, if two cluster members are independently pruning or purging the underlying local stores, it is possible that one cluster member may have different store contents than another.
    • Near Cache – a hybrid cache; it typically fronts a distributed cache or a remote cache with a local cache. A near cache invalidates front-cache entries using a configurable invalidation strategy, and provides excellent performance and synchronization. A near cache backed by a partitioned cache offers zero-millisecond local access for repeat data access, while enabling concurrency and ensuring coherency and fail-over, effectively combining the best attributes of replicated and partitioned caches.
    • Local Cache – a cache that is local to (completely contained within) a particular cluster node. While it is not a clustered service, the Coherence local cache implementation is often used in combination with various clustered cache services.
    • Remote Cache – any out-of-process cache accessed by a Coherence*Extend client. All cache requests are sent to a Coherence proxy, where they are delegated to one of the other Coherence cache types (replicated, optimistic, partitioned).
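  A minimal launch sketch for the dedicated cache server topology described above, assuming a Coherence 3.x-era installation (coherence.jar, com.tangosol.net.DefaultCacheServer and the tangosol.coherence.distributed.localstorage system property; com.example.MyApplication is a placeholder for your own main class):

    # Storage-enabled cache server node (holds cache partitions)
    java -cp coherence.jar com.tangosol.net.DefaultCacheServer

    # Application JVM: joins the same cluster, but stores no partitions
    java -cp coherence.jar:myapp.jar -Dtangosol.coherence.distributed.localstorage=false com.example.MyApplication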
  • Data Grids are used for different purposes; these are the four most common uses:
    • Caching – Coherence was the first technology to prove reliable distributed caching, and has helped many organizations alleviate data bottleneck issues and scale out the application tier.
    • Analytics – enables applications to efficiently run queries across the entire data grid, with support for heavy query loads while improving the responsiveness of each query. Server failures do not impact the correctness of “in flight” queries and analytics.
    • Transactions – the Data Grid provides an optimal platform for joining data and business logic, and greater business agility by moving database stored procedures into the Data Grid. Coherence reliability allows not only in-memory data processing but also the ability to commit transactions in memory. Reliability is key to conducting in-memory transactions; Coherence provides absolute reliability – every transaction matters.
    • Events – the Oracle Coherence Data Grid manages processing state, guaranteeing once-and-only-once event processing, and provides scalable management of event processing.

Presentation Transcript

  • Oracle Coherence Integration with WebLogic Server & WebLogic Portal: an engineer’s introduction to in-memory data grid development
  • Agenda
    • What Is Coherence
      • Distributed Data Grid
    • How Does It Work?
    • Use Cases
      • Customer Examples
    • Q&A
  • Oracle Coherence
    • Development Toolkit
      • Pure Java 1.5+ Libraries
      • Pure .Net 1.1 and 2.0 (Client Libraries)
      • No Third-Party Dependencies
      • No Open Source Dependencies
    • Other Libraries for…
      • Database and File System Integration
      • TopLink and Hibernate
      • HTTP Session Management, Spring, …
  • Oracle Coherence
    • Provides…
      • Container-less Clustering of Java Processes
      • Data Structures to manage Data across a Cluster / Grid
      • Real-Time Event Observation – Listener Pattern
      • Materialized Views of Data
      • Parallel Queries and Aggregation – Object-based Queries
      • Parallel Data Processing
      • Parallel Grid Processing
      • RemoteException Free Distributed Computing
      • Clustered JMX
      • MAN + WAN Connectivity
      • Client + Data Grid Deployment Models
  • Distributed Data Grid
  • “A Data Grid is a system composed of multiple servers that work together to manage information and related operations - such as computations - in a distributed environment.” – Cameron Purdy, VP of Development, Oracle
  • What is a Data Grid?
    • What
      • In-Memory
      • Objects
      • Shared
    • Benefits
      • Low response time
      • High throughput
      • Predictable scalability
      • Continuous availability
      • Information reliability
  • Scalability Chasm (diagram: an ever-expanding universe of users drives data demand through web and application servers, while data supply lags behind)
    • Data Demand outpacing Data Supply
    • Rate of growth outpacing ability to cost effectively scale applications
  • Performance Problem: A Performance Bottleneck (diagram: the application works with Java objects while the database stores relational tables accessed via SQL)
    • Volume
    • Complexity
    • Frequency of Data Access
  • Oracle Coherence as Data Broker (diagram: Coherence sits between the ever-expanding universe of users, web servers and application servers and the underlying data sources, holding data as objects)
    • Oracle Coherence brokers Data Supply with Data Demand
    • Scale out Data Grid in middle tier using commodity hardware
  • Coherence Clustering
  • Coherence Clustering: Tangosol Clustered Messaging Protocol (TCMP)
    • Completely asynchronous yet ordered messaging built on UDP multicast/unicast
    • Truly peer-to-peer: equal responsibility for both producing and consuming the services of the cluster
    • Self-healing: quorum-based diagnostics
    • Linearly scalable mesh architecture
    • TCP-like features
    • Messaging throughput scales to the network infrastructure.
  • Coherence Clustering: The Cluster Service
    • Transparent, dynamic and automatic cluster membership management
    • Clustered consensus: all members in the cluster understand the topology of the entire grid at all times
    • Crowdsourced member health diagnostics
  • Coherence: Distributed data management for applications
    • Development Library
      • Pure Java 1.4.2+
      • Pure .Net 1.1 and 2.0 (client)
      • C++ client (3.4)
      • No Third-Party Dependencies
      • No Open Source Dependencies
      • Proprietary Network Stack (Peer-To-Peer model)
    • Other Libraries Support…
      • Database and File System Integration
      • TopLink and Hibernate
      • HTTP Session Management
      • WebLogic Portal Caches
      • Spring, Groovy
  • The Portable Object Format: Advanced Serialization
    • Simple Serialization Comparison
      • In XML
        • <date format=“java.util.Date”>2008-07-03</date>
        • 47 characters (possibly 94 bytes depending on encoding)
      • In Java (as a raw long)
        • 64 bits = 8 bytes
      • In Java (java.util.Date using ObjectOutputStream)
        • 46 bytes
      • In ExternalizableLite (as a raw long)
        • 8 bytes
      • In POF
        • 4F 58 1F 70 6C = 5 bytes
  • Coherence Cache Types / Strategies
    • Replicated Cache – topology: replicated; read performance: instant; write performance: fast; fault tolerance: extremely high; memory usage per JVM: DataSize; total memory usage: JVMs x DataSize; coherency: fully coherent; locking: fully transactional; typical uses: metadata
    • Optimistic Cache – topology: replicated; read performance: instant; write performance: fast; fault tolerance: extremely high; memory usage per JVM: DataSize; total memory usage: JVMs x DataSize; coherency: fully coherent; locking: none; typical uses: n/a (see Near Cache)
    • Partitioned Cache – topology: partitioned; read performance: locally cached instant, remote at network speed; write performance: extremely fast; fault tolerance: configurable, zero to extremely high; memory usage per JVM: DataSize/JVMs x Redundancy; total memory usage: Redundancy x DataSize; coherency: fully coherent; locking: fully transactional; typical uses: read-write caches
    • Near Cache backed by a partitioned cache – topology: local caches + partitioned cache; read performance: locally cached instant, remote at network speed; write performance: extremely fast; fault tolerance: configurable, zero to extremely high; memory usage per JVM: LocalCache + [DataSize / JVMs]; total memory usage: [Redundancy x DataSize] + [JVMs x LocalCache]; coherency: fully coherent; locking: fully transactional; typical uses: read-heavy caches with access affinity
    • Local Cache (not clustered) – topology: local cache; read performance: instant; write performance: instant; fault tolerance: zero; memory usage per JVM: DataSize; total memory usage: n/a; coherency: n/a; locking: fully transactional; typical uses: local data
  • Use Cases
  • Data Grid Uses
    • Caching – applications request data from the Data Grid rather than backend data sources
    • Analytics – applications ask the Data Grid questions, from simple queries to advanced scenario modeling
    • Transactions – the Data Grid acts as a transactional System of Record, hosting data and business logic
    • Events – automated processing based on events
  • Code Examples
  • Clustering Java Processes
    • Joins an existing cluster or forms a new cluster
      • Time “to join” configurable
    • The returned cluster object contains information about the cluster:
      • Cluster Name
      • Members
      • Locations
      • Processes
    • No “master” servers
    • No “server registries”
    Cluster cluster = CacheFactory.ensureCluster();
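    As a rough follow-on sketch (not on the slide), the membership information mentioned above can be inspected through the public com.tangosol.net API; assumed imports are com.tangosol.net.CacheFactory, com.tangosol.net.Cluster and com.tangosol.net.Member:

      Cluster cluster = CacheFactory.ensureCluster();

      // Cluster name and every member currently in the cluster.
      System.out.println("Cluster: " + cluster.getClusterName());
      for (Object o : cluster.getMemberSet()) {
          Member member = (Member) o;
          System.out.println(member.getAddress() + ":" + member.getPort()
                  + " role=" + member.getRoleName());
      }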
  • Leaving a Cluster
    • Leaves the current cluster
    • shutdown blocks until “data” is safe
    • Failing to call shutdown results in Coherence having to detect process death/exit and recover information from another process.
    • Death detection and recovery is automatic
    CacheFactory.shutdown();
  • Using a Cache: get, put, size & remove
    • CacheFactory resolves cache names (e.g. “mine”) to configured NamedCaches
    • NamedCache provides data-topology-agnostic access to information
    • The NamedCache interface extends several other interfaces:
      • java.util.Map, JCache, ObservableMap*, ConcurrentMap*, QueryMap*, InvocableMap* (* Coherence extensions)
    NamedCache nc = CacheFactory.getCache("mine");
    Object previous = nc.put("key", "hello world");
    Object current = nc.get("key");
    int size = nc.size();
    Object value = nc.remove("key");
  • Using a Cache: keySet, entrySet, containsKey
    • Using a NamedCache is like using a java.util.Map
    • What is the difference between a Map and a Cache data-structure?
      • Both use (key,value) pairs for entries
      • Map entries don’t expire
      • Cache entries may expire
      • Maps are typically limited by heap space
      • Caches are typically size limited (by number of entries or memory)
      • Map content is typically in-process (on heap)
    NamedCache nc = CacheFactory.getCache("mine");
    Set keys = nc.keySet();
    Set entries = nc.entrySet();
    boolean exists = nc.containsKey("key");
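    Building on the expiry point above, a small sketch (not on the slide) of per-entry expiry, assuming the put(key, value, millis) overload that NamedCache inherits from CacheMap; the key and value strings are illustrative:

      NamedCache nc = CacheFactory.getCache("mine");

      // This entry becomes eligible for eviction roughly 60 seconds after insertion.
      nc.put("session-token", "hello world", 60 * 1000L);

      // A plain put uses the cache's configured default expiry, if any.
      nc.put("key", "hello world");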
  • Observing Cache Changes: ObservableMap
    • Observe changes in real-time as they occur in a NamedCache
    • Options exist to optimize events by using Filters (including pre- and post-condition checking) and by reducing the on-the-wire payload (Lite Events)
    • Several MapListeners are provided out of the box:
      • Abstract, Multiplexing...
    NamedCache nc = CacheFactory.getCache("stocks");
    nc.addMapListener(new MapListener() {
        public void entryInserted(MapEvent mapEvent) { }
        public void entryUpdated(MapEvent mapEvent) { }
        public void entryDeleted(MapEvent mapEvent) { }
    });
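    A rough sketch (not on the slide) of the filtered, lite-event registration mentioned above, assuming the addMapListener(MapListener, Filter, boolean) overload and the MultiplexingMapListener, MapEventFilter and EqualsFilter classes from com.tangosol.util / com.tangosol.util.filter:

      NamedCache nc = CacheFactory.getCache("stocks");

      // Deliver all event types for entries whose getSymbol() equals "ORCL";
      // the final 'true' requests lite events (old/new values may be omitted).
      nc.addMapListener(
          new MultiplexingMapListener() {
              protected void onMapEvent(MapEvent evt) {
                  System.out.println("Change for key " + evt.getKey());
              }
          },
          new MapEventFilter(MapEventFilter.E_ALL, new EqualsFilter("getSymbol", "ORCL")),
          true);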
  • Querying Caches: QueryMap
    • Query NamedCache keys and entries across a cluster (Data Grid) in parallel * using Filters
    • Results may be ordered using natural ordering or custom comparators
    • Filters support almost all SQL constructs
    • Query using non-relational data representations and models
    • Create your own Filters
    • * Requires Enterprise Edition or above
    NamedCache nc = CacheFactory.getCache("people");
    Set keys = nc.keySet(new LikeFilter("getLastName", "%Stone%"));
    Set entries = nc.entrySet(new EqualsFilter("getAge", 35));
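    For larger caches, queries such as these are usually backed by indexes. A hedged sketch (not on the slide), assuming QueryMap.addIndex with a ReflectionExtractor and an AndFilter from com.tangosol.util.extractor / com.tangosol.util.filter:

      NamedCache nc = CacheFactory.getCache("people");

      // Ordered index on getAge(); queries on age no longer deserialize every entry.
      nc.addIndex(new ReflectionExtractor("getAge"), true, null);

      // Combine filters: last name LIKE '%Stone%' AND age = 35.
      Set entries = nc.entrySet(
          new AndFilter(
              new LikeFilter("getLastName", "%Stone%"),
              new EqualsFilter("getAge", 35)));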
  • Continuous Observation: Continuous Query Caches
    • ContinuousQueryCache provides real-time and in-process copy of filtered cached data
    • Use standard or your own custom Filters to limit view
    • Access to the “view” of cached information is instant
    • May be used with MapListeners to support rendering real-time local views (a.k.a. thick-client views) of Data Grid information
    NamedCache nc = CacheFactory.getCache("stocks");
    NamedCache expensiveItems = new ContinuousQueryCache(nc,
        new GreaterFilter("getPrice", 1000));
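    A short follow-on sketch (not on the slide): because the view is itself an ObservableMap, a MapListener registered on it fires as entries enter, change within, or leave the filtered view:

      NamedCache nc = CacheFactory.getCache("stocks");
      NamedCache expensiveItems = new ContinuousQueryCache(nc,
          new GreaterFilter("getPrice", 1000));

      expensiveItems.addMapListener(new MultiplexingMapListener() {
          protected void onMapEvent(MapEvent evt) {
              System.out.println(evt.getKey() + " -> " + evt.getNewValue());
          }
      });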
  • Aggregating Information: InvocableMap
    • Aggregate values in a NamedCache across a cluster (Data Grid) in parallel * using Filters
    • Aggregation constructs include: Distinct, Sum, Min, Max, Average, Having, Group By
    • Aggregate using non-relational data models
    • Create your own aggregators
    • * Requires Enterprise Edition or above
    NamedCache nc = CacheFactory.getCache("stocks");
    Double total = (Double) nc.aggregate(
        AlwaysFilter.INSTANCE,
        new DoubleSum("getQuantity"));
    Set symbols = (Set) nc.aggregate(
        new EqualsFilter("getOwner", "Larry"),
        new DistinctValues("getSymbol"));
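    The Group By construct listed above is exposed through GroupAggregator; a hedged sketch (not on the slide), assuming the com.tangosol.util.aggregator.GroupAggregator.createInstance factory, which returns a Map keyed by the grouping value:

      NamedCache nc = CacheFactory.getCache("stocks");

      // Roughly: SELECT getSymbol, SUM(getQuantity) ... GROUP BY getSymbol
      Map totalsBySymbol = (Map) nc.aggregate(
          AlwaysFilter.INSTANCE,
          GroupAggregator.createInstance("getSymbol", new DoubleSum("getQuantity")));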
  • Mutating Information: InvocableMap
    • Invoke EntryProcessors on zero or more entries in a NamedCache across a cluster (Data Grid) in parallel * (using Filters) to perform operations
    • Execution occurs where the entries are managed in the cluster, not in the thread calling invoke
    • This permits Data + Processing Affinity
    • * Requires Enterprise Edition or above
    NamedCache nc = CacheFactory.getCache("stocks");
    nc.invokeAll(
        new EqualsFilter("getSymbol", "ORCL"),
        new StockSplitProcessor());
    ...
    class StockSplitProcessor extends AbstractProcessor {
        public Object process(Entry entry) {
            Stock stock = (Stock) entry.getValue();
            stock.quantity *= 2;
            entry.setValue(stock);
            return null;
        }
    }
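    A small follow-on sketch (not on the slide) showing the return path: invoke against a single key returns whatever process() returns, while invokeAll returns a Map of key to per-entry result. It assumes the stock symbol is also used as the cache key:

      NamedCache nc = CacheFactory.getCache("stocks");

      // Single entry: result is the value returned by process() (null for StockSplitProcessor).
      Object result = nc.invoke("ORCL", new StockSplitProcessor());

      // Filtered invocation: Map of key -> per-entry result.
      Map results = nc.invokeAll(
          new EqualsFilter("getSymbol", "ORCL"),
          new StockSplitProcessor());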
  • Customer Examples
  • Amir Razmara, Director, The Gap
    • Problem:
      • Universal user profile, preferences, shopping cart and single sign-on across 4 brands
      • Shared sessions across all 4 brands leading to a need for a global session (each brand has its own cluster of servers)
    • Possible Solutions:
      • State repository with DB
      • Data Grid backed session management with Coherence*Web
    • Coherence*Web Solution:
      • Create a “Global Cache Cloud” to maintain a brand agnostic session caching layer, enabling the global session
      • Any server from any brand is able to obtain a session from the global cache cloud, enabling SSO and shared bag
      • Session durability, immune from crashes in the application tier
      • Sessions maintained during the nightly cell switch for publishing content
    *GAP - OOW2008 S299392 - Beyond Performance: Pushing Transaction
  • The Universal Experience: Sister Tabs *GAP - OOW2008 S299392 - Beyond Performance: Pushing Transaction…
  • Customer Examples
    • Telecommunications
      • Major Communications provider
        • Home Subscriber Server (HSS) part of IMS platform
        • “Enterprise Data Grid” – Unified Data Access layer across the enterprise
        • Active-Active data center replication across WAN
      • Major Communications Provider
        • “Click-to-Chat” application – web chat between customers and CSRs
    • Major Financial Services Provider
      • User Session data in Coherence, access from Java and C++
      • User Session data replicated across WAN to alternate data center
      • Mainframe MIPS cost mitigation
      • Mid-tier caching to aid migration off MS SQL Server
  • Oracle Coherence Advantage
    • Protect the Database Investment
      • Ensure DB does what it does best, limit cost of rearchitecture
    • Scale as you Grow
      • Cost effective: Start small with 3-5 servers, scale to hundreds of servers as business grows
    • Enables business continuity
      • Providing continuous data availability
  • Questions & Answers