HTTP Session Replication with Oracle Coherence, GlassFish, WebLogic
 

In this talk we will cover the integration of Coherence and Application Servers like Oracle WebLogic and Oracle GlassFish Server, and touch on the native capabilities of each server for HTTP session state management as well. The integration makes it simpler to access Coherence named caches through resource injection. It also provides an optimized integration of Coherence*Web for HTTP session state management. From a management perspective, it offers Coherence cluster configuration support through the WLS administration domain as well as Runtime monitoring support through the WebLogic console.

  • (c) Copyright 2007. Oracle Corporation
  • ActiveCache refers to the integration of Coherence with another server like WebLogic, or GlassFish. Coherence*Web is one of the main integration points, so we’ll cover that first.
  • The In-Process deployment model is provided mainly to allow for easy demonstration, evaluation and smoke-testing. In this deployment model the only nodes in the cluster/grid are the application server JVMs, and the session state is partitioned/maintained within the process space(s) of the application server. This is the default deployment model out of the box, purely for ease of evaluation, demonstration and smoke-testing. DO NOT USE IN PRODUCTION!
  • The Out-of-Process deployment model offloads the session state to a dedicated tier of JVMs that are only responsible for storing session state (i.e. cache servers). This approach is beneficial in that: 1. Session data storage is offloaded from the application server tier to the cache server tier, reducing heap usage, garbage collection times, etc. 2. It allows the two tiers to be scaled independently of one another: if more application processing power is needed, just start more application servers; if more session storage capacity is needed, just start more cache servers. The Out-of-Process topology is our default recommendation due to its flexibility.
  • The Out-of-Process with Coherence*Extend topology is similar to the Out-of-Process topology, except that the communication between the application server tier and the cache server tier is over Coherence*Extend (i.e. TCP/IP). This approach has the same benefits as the Out-of-Process topology, plus the ability to segment the deployment of application servers and cache servers. This is ideal in an environment where application servers are on a network that does not support UDP. The cache servers can be set up in a separate dedicated network, with the application servers connecting to the cluster via TCP.
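The Extend wiring described in this note can be sketched in the client-side cache configuration. For a Coherence 3.x cluster, the application server tier would point a remote-cache-scheme at the cache server tier over TCP; the scheme name, host and port below are placeholders, and the cache server tier would run a matching proxy-scheme accepting connections on that port.

```xml
<!-- Fragment of a client-side coherence-cache-config.xml (sketch; names,
     host and port are placeholders, not values from this presentation) -->
<remote-cache-scheme>
  <scheme-name>session-remote</scheme-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>cacheserver1.example.com</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>
```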
  • TraditionalHttpSessionModel and TraditionalHttpSessionCollection manage all of the HTTP session data for a particular session in a single Coherence cache entry, but manage each HTTP session attribute (in particular, its serialization and deserialization) separately. This model is suggested for applications with relatively small HTTP session objects (10KB or less) that do not have issues with object-sharing between session attributes. (Object-sharing between session attributes occurs when multiple attributes of a session have references to the same exact object, meaning that separate serialization and deserialization of those attributes will cause multiple instances of that shared object to exist when the HTTP session is later deserialized.)
  • MonolithicHttpSessionModel and MonolithicHttpSessionCollection are similar to the Traditional Model, except that they solve the shared-object issue by serializing and deserializing all attributes together in a single object stream. As a result, the Monolithic Model is often less performant than the Traditional Model.
  • SplitHttpSessionModel and SplitHttpSessionCollection manage the core HTTP session data (such as the session ID, creation time, last access time, etc.) together with all of the small session attributes in the same manner as the Traditional Model, thus ensuring high performance by keeping that block of session data small. All large attributes are split out into separate cache entries to be managed individually, thus supporting very large HTTP session objects without unduly increasing the amount of data that needs to be accessed and updated within the cluster on each request. In other words, only the large attributes that are modified within a particular request incur any network overhead for their updates, and (because it uses near caching) the Split Model generally does not incur any network overhead for accessing either the core HTTP session data or any of the session attributes. In conclusion: the Split Model is the recommended session model for most applications. The Traditional Model may be more optimal for applications that are known to have small HTTP session objects. The Monolithic Model is designed to solve a specific class of problems related to multiple session attributes that have references to the same shared object, and that must maintain that object as a shared object. Note that when using the Split Model in combination with the Out-of-Process deployment model, only the “session storage cache” is near-cached in the app server process space; the “session overflow cache”, which holds the “large” session attributes, does NOT have a near cache in front of it.
  • Lock Free: This setting does not use explicit locking; rather, an optimistic approach is used to detect and prevent concurrent updates upon completion of an HTTP request that modifies the session. When Coherence*Web detects a concurrent modification, a ConcurrentModificationException is thrown to the application, so the application must be prepared to handle this exception in an appropriate manner. Member Locking: This is accomplished by acquiring a member-level lock for an HTTP session at the beginning of a request and releasing the lock upon completion of the request. Thread Locking: This is accomplished by acquiring both a member-level and a thread-level lock for an HTTP session at the beginning of a request and releasing both locks upon completion of the request.
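In the lock-free mode described above, the application has to be ready to retry work when a ConcurrentModificationException surfaces. A minimal sketch of such a retry helper follows; Runnable stands in for servlet code that mutates the HTTP session, and the helper itself is application code, not a Coherence API.

```java
import java.util.ConcurrentModificationException;

// Sketch only: Runnable stands in for the servlet code that mutates the
// HTTP session; this retry helper is application code, not a Coherence API.
public class OptimisticRetry {

    // Run the unit of work, retrying a bounded number of times when the
    // optimistic (lock-free) mode reports a concurrent session update.
    public static boolean runWithRetry(Runnable work, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                work.run();
                return true;   // committed without a detected conflict
            } catch (ConcurrentModificationException e) {
                // another node/thread updated the session concurrently; retry
            }
        }
        return false;          // still conflicting after maxAttempts
    }
}
```

Bounding the retries matters: under sustained contention it is usually better to fail the request than to spin, which is also why the member- and thread-locking modes exist.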
  • With this configuration, all deployed applications in a container using Coherence*Web will be part of one Coherence node. This configuration will produce the smallest number of Coherence nodes in the cluster (one per web container JVM) and since the Coherence library (coherence.jar) is deployed in the container's classpath, only one copy of the Coherence classes will be loaded into the JVM, thus minimizing resource utilization. On the other hand, since all applications are using the same cluster node, all applications will be affected if one application misbehaves. Requirements for using this configuration are: * Each deployed application must use the same version of Coherence and participate in the same cluster. * Objects placed in the HTTP session must have their classes in the container's classpath.
  • Only one copy of the Coherence classes will be loaded per EAR. Since all web applications in the EAR use the same cluster node, all web applications in the EAR will be affected if one of the web applications misbehaves. EAR-scoped cluster nodes reduce the deployment effort as no changes to the application server classpath are required. This option is also ideal if you plan on deploying only one EAR to an application server. Requirements for using this configuration are: * The Coherence library (coherence.jar) must be deployed as part of the EAR file and listed as a Java module in META-INF/application.xml. * Objects placed into the HTTP session will need to have their classes deployed as a Java EAR module in a similar fashion.
  • However, since each deployed web application is its own cluster node, web applications are completely isolated from other potentially misbehaving web applications. WAR-scoped cluster nodes reduce the deployment effort as no changes to the application server classpath are required. This option is also ideal if you plan on deploying only one WAR to an application server. Requirements for using this configuration are: * The Coherence library (coherence.jar) must be deployed as part of the WAR file (usually in WEB-INF/lib). * Objects placed into the HTTP session will need to have their classes deployed as part of the WAR file (in WEB-INF/lib or WEB-INF/classes).
  • How would the features beyond Coherence*Web be used? In addition to leveraging Coherence to store WebLogic Server HTTP Sessions: TopLink Grid – caching for JPA and write-behind Manage a large Coherence cluster with WebLogic Server infrastructure (AdminServer, Node Manager) WebLogic Suite applications leveraging both WebLogic Server and Coherence
  • This is the starting point, but as you can see from the WebLogic ActiveCache being about more than just Coherence*Web, there are many more possibilities in the future.
  • Starting with Coherence 3.7 and Oracle GlassFish Server 3.1, there is a new feature of Coherence*Web called ActiveCache for GlassFish. ActiveCache for GlassFish provides Coherence*Web functionality in Web applications deployed on Oracle GlassFish Servers. In previous releases, the Coherence*Web WebInstaller was required to pre-process applications before they could use Coherence*Web for session storage. With ActiveCache for GlassFish, the WebInstaller pre-processing step is not required for GlassFish 3.1 applications.
  • On the GlassFish Server, Coherence*Web can be configured only for EAR- or WAR-scoped cluster nodes. Because of the way that GlassFish Server class loaders work, it is not possible to configure application server-scoped cluster nodes. Clustered WAR packaging means that each deployed WAR file will create a Coherence node in the cluster. If you package multiple WAR files in an EAR file, then each WAR file will create a Coherence node in the cluster.
  • You may also need to set distributed.localstorage for any application-specific caches.
  • Again, this assumes an in-process deployment model for a cluster of GlassFish servers.

HTTP Session Replication with Oracle Coherence, GlassFish, WebLogic: Presentation Transcript

  • In memory session replication with WebLogic, GlassFish and Coherence Presenter Title
  • Agenda
    • ActiveCache - Coherence*Web
    • Deployment Models
    • Session Models
    • Locking Modes
    • Cluster Node Isolation
    • Session and Session Attribute Sharing
    • ActiveCache - WebLogic Server 10.3
    • ActiveCache - Oracle GlassFish Server 3.1
  • Coherence*Web
  • Coherence*Web
    • What: Distributed HTTP Session Management
    • Span Applications: Seamlessly share sessions between applications
    • Span Heterogeneous Environments: Share sessions between WebLogic, OAS, WebSphere, JBoss
    • Handle Large Sessions: Store more information within the session
  • Coherence*Web
    • Why:
    • Decouple session management from web container
    • Handle more users without adding more application servers
    • Restart/maintain applications/containers without losing sessions
    • Handle very large sessions efficiently
    • Keep session data coherent under heavy load
  • Deployment Models
  • Deployment Models: In-Process
    • Session state maintained within app server process
    • Default model for ease of demonstration
    • DO NOT USE IN PRODUCTION
  • Deployment Models: Out-of-Process
    • Two tiers – app server and cache server
    • Session data storage is offloaded from the app server tier
    • Each tier can be scaled independently
    • Default recommendation due to flexibility
  • Deployment Models: Out-of-Process-Extend
    • Similar to the Out-of-Process deployment model
    • Communication between tiers is over Coherence*Extend (i.e. TCP/IP)
    • Ideal in environments where network does not support UDP
  • Session Models
  • Session Models: Traditional
    • Manages each session in a single Coherence cache entry…
    • … But manages each session attribute’s serialization/deserialization separately
  • Session Models: Monolithic
    • Similar to the Traditional Model…
    • … But serializes and deserializes all session attributes together in a single object stream.
    • Solves the shared object issue
  • Session Models: Split
    • Session meta-data and “small” attributes stored in one cache
    • “Large” attributes stored in a separate cache
    • Easily supports very large session objects
    • Leverages near caching for “small” attributes
    • HIGHLY RECOMMENDED
  • Locking Modes
  • Locking Mode
    • Optimistic Locking (default)
    • Allows multiple nodes in a cluster to access an HTTP session simultaneously. Concurrent updates are detected and rejected using an optimistic approach.
    • Member Locking
    • Does not allow more than one node in the cluster to access an HTTP session.
    • Thread Locking
    • Does not allow more than one thread in the cluster to access an HTTP session.
  • Cluster Node Isolation
  • Cluster Node Isolation: App Server Scoped
    • All deployed applications in each app server instance will be part of one Coherence node.
    • Will result in the smallest number of Coherence nodes (one per web container JVM).
    • Minimizes resource utilization (only one copy of the Coherence classes loaded per JVM)
  • Cluster Node Isolation: EAR Scoped
    • All deployed applications within each EAR will be part of one Coherence node.
    • Will result in the next smallest number of Coherence nodes (one per deployed EAR that uses Coherence*Web).
    • Reduces the deployment effort as no changes to the application server classpath are required.
  • Cluster Node Isolation: WAR Scoped
    • Each deployed web application will be its own Coherence node.
    • Will result in the largest number of Coherence nodes (one per deployed WAR that uses Coherence*Web).
    • Results in the largest resource utilization out of the three options (one copy of the Coherence classes loaded per deployed WAR).
  • Session and Session Attribute Scoping
  • Session and Session Attribute Scoping
    • Session Scoping
      • Coherence*Web allows session data to be shared by different Web applications deployed in the same or different Web containers.
    • Session Attribute Scoping
      • Extension of Session Scoping allowing for scoping of individual session attributes so that they are either globally visible or scoped to an individual web application.
      • Behavior is controllable via the AttributeScopeController interface. Two out of the box implementations:
      • ApplicationScopeController and GlobalScopeController
  • ©2008 Oracle Corporation ActiveCache and WebLogic
  • ActiveCache and WebLogic
    • Integration of Coherence and WebLogic Server
      • Incremental progress since launch of 11g
    • The whole is greater than the sum of its parts!
      • Coherence*Web SPI Support for HTTP Sessions
      • Dependency Injection
      • Configuration, Lifecycle and Monitoring of Coherence Clusters and Servers
    +
  • WebLogic Server Installers with Coherence
    • PS2 installer includes Coherence 3.5.3p2
      • $MW_HOME/coherence_3.5
    • PS3 installer includes Coherence 3.6.0.4
      • $MW_HOME/coherence_3.6
  • active-cache-1.0.jar
    • Shipped with WebLogic Server distribution
    • Required for advanced WebLogic Server and Coherence integration
    • Manage Coherence configuration with WLS MBeans and WLS Console
    • Dependency Injection in JEE modules
    • Manages Coherence Lifecycle in JEE modules
    • How to reference from applications:
      • EAR scope - Import into the JEE application as a shared library jar in weblogic-application.xml
      • WAR scope - Import as an optional package via META-INF/manifest.mf
      • Server scope – reference this jar from the system classpath
  • active-cache-1.0.jar (continued)
    • META-INF/manifest.mf file uses relative paths to refer to other WLS libraries, so refer to this jar in the location: $WLS_HOME/common/deployable-libraries
    • If you copy it out of that location it will not work
    Manifest-Version: 1.0
    Ant-Version: Apache Ant 1.7.1
    Created-By: R28.0.2-3-133398-1.6.0_20-20100512-1453-windows-ia32 (Oracle Corporation)
    Specification-Title: active-cache
    Specification-Version: 1.0
    Implementation-Title: active-cache
    Implementation-Version: 1.0
    Extension-Name: active-cache
    Class-Path: @BEA_HOME@/modules/features/weblogic.server.modules.coherence.integration_10.3.4.0.jar ../../../modules/features/weblogic.server.modules.coherence.integration_10.3.4.0.jar
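For the EAR-scoped option, the shared-library reference in weblogic-application.xml would look roughly like this; the library name matches the jar's Extension-Name (active-cache), and version attributes are omitted as assumptions.

```xml
<!-- Sketch of META-INF/weblogic-application.xml referencing the
     active-cache shared library (version attributes omitted) -->
<weblogic-application
    xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
  <library-ref>
    <library-name>active-cache</library-name>
  </library-ref>
</weblogic-application>
```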
  • coherence.jar
    • Shipped with Coherence distribution in $COHERENCE_HOME/lib
    • Contains core Coherence classes
    • How to reference from applications:
      • EAR scoped - reference as a shared library in weblogic-application.xml (recommended) or embed in APP-INF/lib
      • WAR scoped - embed in an application in WEB-INF/lib
      • Server scoped - can be put on the system classpath
  • coherence-web-spi.war
    • Shipped with Coherence distribution in $COHERENCE_HOME/lib
    • Contains WebLogic Server SPI implementation for HTTP Session storage in Coherence
    • Reference as a shared library in web module WEB-INF/weblogic.xml
    • Contains default cache configuration for session storage WEB-INF/classes/session-cache-config.xml (can be overridden via classpath)
    • Local storage is set to false by default
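The shared-library reference in WEB-INF/weblogic.xml can be sketched as follows; the library name coherence-web-spi is an assumption matching the WAR's name.

```xml
<!-- Sketch of WEB-INF/weblogic.xml referencing the coherence-web-spi
     shared library (library name assumed from the WAR name) -->
<weblogic-web-app
    xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <library-ref>
    <library-name>coherence-web-spi</library-name>
  </library-ref>
</weblogic-web-app>
```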
  • Node Manager and Coherence Servers
    • Coherence Servers need classpath to include $MW_HOME/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar
    • META-INF/manifest.mf contains relative references to other WebLogic Server libraries, so refer to it in the default location and do not copy it
    • If no classpath entries are specified in Startup tab, this jar and coherence.jar are added by default implicitly by Node Manager
  • ©2008 Oracle Corporation ActiveCache and GlassFish
  • ActiveCache and Oracle GlassFish Server
    • Integration of Coherence and GlassFish
      • For commercial distribution, not open source edition
    • Coherence*Web support for HTTP Sessions
    +
  • Getting Started
    • Configure your session to use Coherence*Web
      • Edit glassfish-web.xml
            • <glassfish-web-app error-url="">
            • <session-config>
            • <session-manager persistence-type="coherence-web"/>
            • </session-config>
            • </glassfish-web-app>
        • Advanced configuration options available in glassfish-web.xml
          • Defaults will handle most web applications
          • Additional <context-param> elements will override or add new configuration
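As a sketch of such an override, Coherence*Web's locking mode can be switched with a context parameter; the parameter name below is from the Coherence*Web documentation, and its placement inside glassfish-web.xml is an assumption.

```xml
<!-- Sketch: enable member locking instead of the default optimistic mode -->
<context-param>
  <param-name>coherence-session-member-locking</param-name>
  <param-value>true</param-value>
</context-param>
```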
  • Preparing your GlassFish application(s)
    • Configuration changes do require restart of Web App
    • Extract coherence-web.jar and session-cache-config.xml from coherence/lib/webInstaller.jar (web-install directory)
    • Copy session-cache-config.xml to WEB-INF/classes
    • If clustered, copy coherence.jar to appropriate location
  • Example: preparing your GlassFish Server
    • Example: clustered, in-process deployment model
    • Create/configure domain
      • Use asadmin at command line to create and start
      • Use GF console to add appropriate JVM options
        • Coherence -D JVM arguments
    • Example JVM options
        • Well known addresses (as opposed to multicast)
        • If multi-homed system that has multiple IPs, which IP to bind to
        • Tell GlassFish to use cache servers for storage, not the GlassFish JVM, e.g.
          • -Dtangosol.coherence.session.localstorage=false (ensuring use of a Coherence cache server, not GF)
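Putting those options together, a sketch using asadmin follows; the system-property names are the documented Coherence 3.x ones, while the hosts, ports and addresses are placeholders.

```
# Use well-known addresses instead of multicast (placeholder host/port)
asadmin create-jvm-options "-Dtangosol.coherence.wka=cachehost1"
asadmin create-jvm-options "-Dtangosol.coherence.wka.port=8088"
# On a multi-homed machine, pick the interface to bind to (placeholder IP)
asadmin create-jvm-options "-Dtangosol.coherence.localhost=10.0.0.5"
# Store sessions on dedicated cache servers, not in the GlassFish JVM
asadmin create-jvm-options "-Dtangosol.coherence.session.localstorage=false"
```

A restart of the instance is needed for the new JVM options to take effect.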
  • Example continued
    • Example: clustered, in-process deployment model
    • Configure load balancer (ex. Apache)
      • Configure JVM for cluster or standalone servers to use
      • Configure / Enable mod_jk.conf load balancer plug-in
        • Mount paths defined in workers.properties
      • Edit workers.properties list to accommodate cluster members
      • Enable mod_jk for GlassFish
        • -DjvmRoute option for routing to the LB
        • JVM -D option for location of workers.properties
        • In GF console, enable JK network listener for Apache
          • for AJP protocol
  • Links
    • Coherence User Guide http://coherence.oracle.com/display/COH35UG
    • Coherence*Web Session Management
    • http://coherence.oracle.com/display/COH35UG/Coherence*Web+Session+Management+Module
    • Coherence*Web and WebLogic Server
    • http://coherence.oracle.com/display/COH35UG/Coherence*Web+and+WebLogic+Server
    • Coherence*Web and WebLogic Portal
    • http://coherence.oracle.com/display/COH35UG/Coherence*Web+and+WebLogic+Portal