Xebia Knowledge Exchange (feb 2011) - Large Scale Web Development
 

Presentation Transcript

    • Large Scale Web Development: Theory and practice with Java. 2/3/2011, Michaël Figuière
    • Scalability best practices
    • Typical Web Architecture (diagram): a Load Balancer routes traffic to several Application Instances, each of which calls a set of backends (Backend A to E). Backends may be slow, fast, highly available or not.
    • Facing the network's reality
      • Some requests will be slow: server / proxy overloaded, network traffic, ...
      • Some requests won't answer: server application bugs, GC, connection rejected, ...
      • Some requests will just fail: server failure, network failure, OS and JVM pressure
    • Handling the network's reality
      • A timeout must be set and handled for every remote request. If the API doesn't offer one, ExecutorService and Future can help (sketched below).
      • Use the Circuit Breaker pattern: avoid requesting an already overloaded service.
      • Setting a deadline for your answer may be helpful: whatever happens, the answer will be sent as is within 3 seconds; if mandatory goals aren't achieved, return an error.
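A minimal sketch of the timeout idea above, assuming the remote API offers no timeout of its own: the call is submitted to an ExecutorService and the caller waits on the Future for at most a fixed deadline. The pool size, the 3-second deadline and fallbackAnswer() are illustrative, not from the slides.

```java
import java.util.concurrent.*;

public class BackendCall {

    private static final ExecutorService pool = Executors.newFixedThreadPool(10);

    // Runs a remote call whose API has no timeout support in a pool thread;
    // the caller waits at most 3 seconds for the answer.
    public static String callWithTimeout(Callable<String> remoteCall) {
        Future<String> future = pool.submit(remoteCall);
        try {
            return future.get(3, TimeUnit.SECONDS);   // deadline for the answer
        } catch (TimeoutException e) {
            future.cancel(true);                      // interrupt the slow request
            return fallbackAnswer();
        } catch (Exception e) {
            return fallbackAnswer();                  // failed request: degrade gracefully
        }
    }

    // Illustrative fallback: an error or a partial / cached answer.
    private static String fallbackAnswer() {
        return "error";
    }
}
```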
    • Requests make the load
      • Two requests instead of one doubles the load on the backend. Here, counting requests isn't about optimization, it's critical.
      • Caches must be sized with care: cache misses increase the load on backends.
    • Make requests in parallel
      • Parallel requests reduce the overall duration; this is mandatory when backends are slow.
      • A thread pool makes it easy: ExecutorService and Future do the job (sketched after the next slide).
      • The thread pool also acts as a throttle to shield the backend: no more than N concurrent requests.
    • Make requests in parallel (diagram): sequentially, D = sum of the request durations; in parallel, D = max of the request durations.
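A minimal sketch of the parallel pattern from the two slides above, matching the later sequence diagrams: two backend calls are submitted to a fixed pool and the caller waits on both futures. The pool size and the callBackendA()/callBackendB() helpers are hypothetical.

```java
import java.util.concurrent.*;

public class ParallelRequests {

    // A fixed pool also throttles the backend: at most 20 requests run concurrently.
    private static final ExecutorService pool = Executors.newFixedThreadPool(20);

    public static String handle() throws Exception {
        // Submit both backend calls at once instead of calling them sequentially.
        Future<String> a = pool.submit(new Callable<String>() {
            public String call() throws Exception {
                return callBackendA();   // hypothetical slow remote call
            }
        });
        Future<String> b = pool.submit(new Callable<String>() {
            public String call() throws Exception {
                return callBackendB();   // hypothetical slow remote call
            }
        });

        // Overall duration is now the max of the two calls, not the sum.
        return a.get() + b.get();
    }

    private static String callBackendA() { return "A"; }
    private static String callBackendB() { return "B"; }
}
```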
    • From separate thread pools to semaphores
      • When you have a thousand threads, merging thread pools can help: it mutualizes resources.
      • A semaphore can then do the throttling job: limit the concurrent users of a resource.
      • Semaphores can also be tuned live, allowing you to slow down the request stream to a dying backend.
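A minimal sketch of semaphore-based throttling, assuming a limit of 50 concurrent callers and a hypothetical BackendRequest interface; both are illustrative. Permits can be drained or released at runtime to tighten or relax the limit, which is the live tuning mentioned above.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class BackendThrottle {

    // At most 50 callers reach the backend concurrently.
    private final Semaphore permits = new Semaphore(50);

    public String call(BackendRequest request) throws InterruptedException {
        // Wait at most 1 second for a permit, then give up instead of piling up.
        if (!permits.tryAcquire(1, TimeUnit.SECONDS)) {
            throw new IllegalStateException("Backend overloaded, request rejected");
        }
        try {
            return request.execute();    // hypothetical remote call
        } finally {
            permits.release();
        }
    }

    // Hypothetical abstraction over a single remote request.
    public interface BackendRequest {
        String execute();
    }
}
```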
    • Serialized caches in the Java heap
      • Garbage Collector tuning can be time consuming, especially when the production environment is hard to simulate.
      • Serializing data structures in Java heap caches reduces pressure on the GC: GC time complexity partly depends on the number of references.
      • Don't use standard Java serialization; use Avro, Kryo, or ProtoBuf: very low CPU overhead for such a compact format.
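A minimal sketch of the idea with Kryo (one of the libraries named above): cache values are stored as byte arrays, so the GC sees one array reference instead of a whole object graph. Depending on the Kryo version you may need to register classes or relax registration requirements, and a Kryo instance is not thread-safe, so real code would pool or thread-confine it.

```java
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import java.io.ByteArrayOutputStream;

public class KryoCacheCodec {

    private final Kryo kryo = new Kryo();

    // Serialize the value before putting it in the heap cache.
    public byte[] serialize(Object value) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        Output output = new Output(bytes);
        kryo.writeClassAndObject(output, value);
        output.close();
        return bytes.toByteArray();
    }

    // Deserialize on a cache hit.
    public Object deserialize(byte[] data) {
        Input input = new Input(data);
        try {
            return kryo.readClassAndObject(input);
        } finally {
            input.close();
        }
    }
}
```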
    • Memcached instead of a Java heap cache
      • Memcached is a simple and efficient Unix daemon: only two parameters to set, memory size and listening port.
      • Several Java clients are available, all based on NIO!
    • Partitioned Memcached (diagram): each application instance uses a Memcached client that routes read/write requests to one of several Memcached instances, chosen by hashing the key.
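The slides don't name a specific client; as one example, here is a minimal sketch with spymemcached, an NIO-based Java client that partitions keys across the listed instances by hashing, as in the diagram. The hostnames, key and TTL are illustrative.

```java
import java.util.concurrent.TimeUnit;
import net.spy.memcached.AddrUtil;
import net.spy.memcached.MemcachedClient;

public class MemcachedExample {

    public static void main(String[] args) throws Exception {
        // The client hashes each key to pick one of the memcached instances.
        MemcachedClient client = new MemcachedClient(
                AddrUtil.getAddresses("memcached1:11211 memcached2:11211 memcached3:11211"));

        client.set("user:42", 3600, "serialized user data");   // TTL in seconds
        Object value = client.get("user:42");                  // null on a miss
        System.out.println(value);

        client.shutdown(10, TimeUnit.SECONDS);
    }
}
```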
    • Monitor everything
      • A JMX attribute only costs an AtomicInteger, which is nearly free: an AtomicInteger doesn't require synchronization.
      • Spring JMX offers efficient annotations: @ManagedResource, @ManagedAttribute.
      • Hyperic can do the aggregation job, but it is awful to configure and use; SpringSource promises to make it better!
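A minimal sketch of such a monitored bean using the Spring JMX annotations named above; the object name, counter and bean wiring are illustrative, and an MBean exporter (for example an AnnotationMBeanExporter or <context:mbean-export/>) still has to be configured for the attribute to show up.

```java
import java.util.concurrent.atomic.AtomicInteger;
import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedResource;
import org.springframework.stereotype.Component;

@Component
@ManagedResource(objectName = "myapp:name=BackendMonitor")
public class BackendMonitor {

    // Lock-free counter: incrementing costs a single CAS, no synchronization.
    private final AtomicInteger requestCount = new AtomicInteger();

    public void onRequest() {
        requestCount.incrementAndGet();
    }

    @ManagedAttribute(description = "Backend requests since startup")
    public int getRequestCount() {
        return requestCount.get();
    }
}
```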
    • Log with care
      • With high traffic, strange things happen: synchronization issues, connection losses, weird requests, ...
      • These strange things can't be reproduced in a development environment: the production environment's behavior can't be fully simulated.
      • Logs are the only way to track them. You'll have a lot of logs to store, but that's OK.
    • Concurrency Playground
    • What can be done with java.util.concurrent?
      • Parallel invocations, with or without dependencies between requests: ExecutorService with Future will do the job.
      • Making synchronous and asynchronous code collaborate: CountDownLatch, a custom Future implementation, ... (sketched below)
      • Blocking IO code in pooled threads mixed with NIO code: wrapping Future, CountDownLatch, NIO callbacks, ...
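A minimal sketch of the "custom Future implementation" idea: a latch-backed result holder that an asynchronous (e.g. NIO) callback completes while a synchronous thread waits on it. The class and method names are illustrative.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Bridges callback-based asynchronous code with synchronous calling code.
public class AsyncResult<T> {

    private final CountDownLatch done = new CountDownLatch(1);
    private volatile T value;
    private volatile Exception error;

    // Called from the NIO thread when the response arrives.
    public void complete(T result) {
        value = result;
        done.countDown();
    }

    // Called from the NIO thread on failure.
    public void fail(Exception e) {
        error = e;
        done.countDown();
    }

    // Called from the synchronous (e.g. servlet) thread.
    public T get(long timeout, TimeUnit unit) throws Exception {
        if (!done.await(timeout, unit)) {
            throw new TimeoutException("No answer in time");
        }
        if (error != null) {
            throw error;
        }
        return value;
    }
}
```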
    • Basic Parallel Requests (sequence diagram, shown in several build steps): the servlet thread calls executorService.submit() for two tasks, Thread A and Thread B (both from the pool) each run Callable.call(), and the servlet thread blocks on future.get() for each result.
    • Thread Pooled Requests + Memcached NIO Client (sequence diagram, shown in several build steps): the servlet thread calls invoke() on a custom ExecutorService and get() on the Memcached NIO client, the Memcached NIO thread's read callback calls submit() on that ExecutorService, Thread A and Thread B (from the pool) run call(), and the servlet thread blocks on future.get() for each result.
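A minimal sketch of mixing the two worlds shown in the diagram above: a blocking backend call runs in a pool thread while a Memcached request is served by the client's NIO thread, and the servlet thread waits on both futures. It assumes the spymemcached client from the earlier sketch (asyncGet() returns a Future) and does not reproduce the diagram's custom ExecutorService driven from the read callback; callBackend(), the pool size and the timeouts are illustrative.

```java
import java.util.concurrent.*;
import net.spy.memcached.MemcachedClient;

public class MixedRequests {

    private final ExecutorService pool = Executors.newFixedThreadPool(20);
    private final MemcachedClient memcached;   // NIO-based client, see earlier sketch

    public MixedRequests(MemcachedClient memcached) {
        this.memcached = memcached;
    }

    public String handle(final String backendQuery, String cacheKey) throws Exception {
        // The blocking backend call runs in a pool thread...
        Future<String> backend = pool.submit(new Callable<String>() {
            public String call() throws Exception {
                return callBackend(backendQuery);   // hypothetical blocking call
            }
        });
        // ...while the memcached request is handled by the client's NIO thread.
        Future<Object> cached = memcached.asyncGet(cacheKey);

        // The servlet thread then waits for both answers, each with its own timeout.
        Object fragmentFromCache = cached.get(200, TimeUnit.MILLISECONDS);  // null on a miss
        String fragmentFromBackend = backend.get(2, TimeUnit.SECONDS);
        return fragmentFromBackend + " | " + fragmentFromCache;
    }

    private String callBackend(String query) { return "result for " + query; }
}
```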
    • Questions / Answers ? blog.xebia.fr @mfiguiere