LMAX Disruptor - High Performance Inter-Thread Messaging Library

A short presentation about the fascinating Disruptor library created by LMAX, presented at the Bucharest Java User Group.

  • Speaker note: they rewrote the JDK collections with custom, friendlier ones.
  • Transcript

    • 1. LMAX Disruptor: High Performance Inter-Thread Messaging Library
    • 2. Agenda
      • Introduction
      • What is LMAX?
      • Problems they were trying to solve
      • Concurrent Programming
      • Locks
      • Queues
      • The Disruptor Pattern
      • LMAX Architecture
      • Live Demo
      • Q&A
    • 3. LMAX
      • London Multi-Asset eXchange
      • Build the fastest trading platform in the world
      • Retail platform: it allows anyone to trade without going through a broker
      • They have an order matching engine, real-time risk management, an in-memory transaction processing system, etc.
    • 4. Problems to Solve
      • Extreme transaction processing (XTP)
      • High throughput, low latency / predictable latency
      • Scale to potentially millions of users
    • 5. They tried…
      • RDBMS
      • JEE
      • SEDA
      • Actors
      … and none of the above offered the predictable low latency they were after.
    • 6. Concurrent Programming
      • Protect access to contended resources (mutual exclusion)
      • Make the results public in the right order (visibility of changes)
      • Locks (standard synchronized blocks in Java)
      • Atomic / CAS instructions, which boil down to machine instructions (see the CAS sketch after the transcript)
    • 7. Locks? No…
      • Context switch to the OS kernel for arbitration (even worse in a virtualized environment)
      • The kernel takes your thread's quantum away and might decide to do something else…
      • When your thread is scheduled to run again, it might end up on another core and will have to reload all of its execution context
    • 8. Queues? No…
      • Either full or empty, mostly empty in a well-running system, and cannot resize easily
      • 1 producer, 1 consumer… still 2 threads writing into the queue
      • High contention for the head and the tail
      • Create garbage on put and take, so the GC has a lot of cleanup to do, which impacts latency (see the queue handoff sketch after the transcript)
    • 9. The Ring Buffer – Reading and Writing (see the ring buffer sketch after the transcript)
    • 10. The Disruptor Pattern in Use
    • 11. In production…
      • 6 million TPS (3 GHz dual-socket quad-core Intel on a Dell server with 32 GB of RAM) in 2011…
      • Single-threaded, in memory
      • 20 million items on the input ring buffer
      • 4 million items on the output ring buffer
    • 12. Live Demo… (see the topology sketch after the transcript)
      • Unicast: 1P – 1C
      • Diamond: 1P – 3C
    • 13. Key Takeaways
      • Mechanical Sympathy
      • Keep the working sets in memory
      • Write clean, compact code
      • Invest in modeling your domain (SRP: one class, one thing; one method, one thing; etc.)
      • Take the right approach to concurrency
    • 14. Q & A
    • 15. Thank you!
      Resources:
      • http://martinfowler.com/articles/lmax.html
      • http://lmax-exchange.github.io/disruptor/
      • http://mechanical-sympathy.blogspot.com/
      • http://bad-concurrency.blogspot.com/
      • http://mechanitis.blogspot.ro/
      • http://www.infoq.com/presentations/LMAX
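
Slide 6 mentions atomic / CAS instructions as the lock-free alternative to synchronized blocks. Below is a minimal sketch of a CAS retry loop using java.util.concurrent.atomic; the CasSketch class and the counter it increments are illustrative, but AtomicLong.compareAndSet is the standard JDK call, which the JVM implements with a single atomic machine instruction (LOCK CMPXCHG on x86).

    import java.util.concurrent.atomic.AtomicLong;

    public class CasSketch {
        private final AtomicLong counter = new AtomicLong();

        // Classic CAS retry loop: read the current value, compute the next one,
        // and attempt the swap; if another thread got there first, retry.
        // No kernel arbitration and no context switch are needed along the way.
        public long increment() {
            long current, next;
            do {
                current = counter.get();
                next = current + 1;
            } while (!counter.compareAndSet(current, next));
            return next;
        }

        public static void main(String[] args) throws InterruptedException {
            CasSketch sketch = new CasSketch();
            Runnable work = () -> { for (int i = 0; i < 100_000; i++) sketch.increment(); };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start(); b.start();
            a.join(); b.join();
            System.out.println(sketch.counter.get()); // 200000, with no locks taken
        }
    }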
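
For contrast, here is the kind of queue-based handoff that slide 8 critiques, sketched with java.util.concurrent.ArrayBlockingQueue (the QueueHandoffSketch class name and the message counts are illustrative). Even with a single producer and a single consumer, both threads mutate the shared queue, every put boxes the payload into fresh garbage, and the bounded array cannot be resized once created.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class QueueHandoffSketch {
        public static void main(String[] args) throws InterruptedException {
            // Fixed capacity: the queue is either filling up or being drained,
            // and in a healthy system it spends most of its time nearly empty.
            BlockingQueue<Long> queue = new ArrayBlockingQueue<>(1024);

            Thread producer = new Thread(() -> {
                for (long i = 0; i < 10; i++) {
                    try {
                        queue.put(i); // boxes the long and contends on the queue's internal lock
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });

            Thread consumer = new Thread(() -> {
                for (int i = 0; i < 10; i++) {
                    try {
                        System.out.println("took " + queue.take()); // contends on the same lock
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }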
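
The reading-and-writing cycle from slide 9, sketched against the Disruptor 3.x API (the LongEvent class, the buffer size, and the message count are illustrative; the next / get / publish calls on RingBuffer and the EventHandler callback are the library's own API). Entries are preallocated once and reused, so publishing a message creates no garbage.

    import com.lmax.disruptor.EventHandler;
    import com.lmax.disruptor.RingBuffer;
    import com.lmax.disruptor.dsl.Disruptor;
    import com.lmax.disruptor.util.DaemonThreadFactory;

    public class RingBufferSketch {
        // Illustrative event type; one instance lives in each ring buffer slot.
        public static class LongEvent {
            long value;
        }

        public static void main(String[] args) throws Exception {
            int bufferSize = 1024; // must be a power of two

            Disruptor<LongEvent> disruptor =
                new Disruptor<>(LongEvent::new, bufferSize, DaemonThreadFactory.INSTANCE);

            // Reading: the handler is called for each slot once it has been published.
            EventHandler<LongEvent> handler = (event, sequence, endOfBatch) ->
                System.out.println("read " + event.value + " at sequence " + sequence);
            disruptor.handleEventsWith(handler);
            disruptor.start();

            // Writing: claim a sequence, mutate the preallocated slot, then publish it.
            RingBuffer<LongEvent> ringBuffer = disruptor.getRingBuffer();
            for (long i = 0; i < 10; i++) {
                long sequence = ringBuffer.next();      // claim the next slot
                try {
                    ringBuffer.get(sequence).value = i; // write into the reused entry
                } finally {
                    ringBuffer.publish(sequence);       // make it visible to the consumer
                }
            }

            Thread.sleep(200); // give the daemon consumer thread time to drain
            disruptor.shutdown();
        }
    }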
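
The two demo topologies from slide 12, sketched with the Disruptor DSL (the handler names journal, replicate, and businessLogic are illustrative; handleEventsWith and then are the DSL's wiring calls, assuming Disruptor 3.x). Unicast runs everything through one handler; the diamond runs two handlers in parallel and gates a third behind both.

    import com.lmax.disruptor.EventHandler;
    import com.lmax.disruptor.dsl.Disruptor;
    import com.lmax.disruptor.util.DaemonThreadFactory;

    public class TopologySketch {
        public static class LongEvent {
            long value;
        }

        // Illustrative handler that only reports what it consumed.
        static EventHandler<LongEvent> named(String name) {
            return (event, sequence, endOfBatch) ->
                System.out.println(name + " handled sequence " + sequence);
        }

        public static void main(String[] args) {
            // Unicast: 1P – 1C, a single consumer sees every event.
            Disruptor<LongEvent> unicast =
                new Disruptor<>(LongEvent::new, 1024, DaemonThreadFactory.INSTANCE);
            unicast.handleEventsWith(named("consumer"));
            unicast.start();

            // Diamond: 1P – 3C, two consumers run in parallel on every event,
            // and the third only runs after both of them have seen it.
            Disruptor<LongEvent> diamond =
                new Disruptor<>(LongEvent::new, 1024, DaemonThreadFactory.INSTANCE);
            diamond.handleEventsWith(named("journal"), named("replicate"))
                   .then(named("businessLogic"));
            diamond.start();

            // Publishing works exactly as in the ring buffer sketch above.
        }
    }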