
Challenges in Maintaining a High Performance Search Engine Written in Java


Presented by Simon Willnauer | Apache Lucene

During the last decade Apache Lucene became the de-facto standard in open source search technology. Thousands of applications, from Twitter-scale web services to computers playing Jeopardy, rely on Lucene, a rock-solid, scalable and fast information-retrieval library written entirely in Java. Maintaining and improving such a popular software library reveals tough challenges in testing, API design, data structures, concurrency and optimization. This talk presents the most demanding technical challenges the Lucene development team has solved in the past. It covers a number of areas of software development, including concurrency & parallelism, testing infrastructure, data structures, algorithms, API design with respect to garbage collection, memory efficiency and efficient resource utilization. This talk doesn't require any Apache Lucene or information-retrieval background. Knowledge of the Java programming language will certainly be helpful, although the problems and techniques presented aren't Java-specific.



  1. 1. Challenges in maintaining a high-performance search engine written in Java. Simon Willnauer, Apache Lucene Core Committer & PMC
  2. 2. Who am I? • Lucene Core Committer • Project Management Committee Chair (PMC) • Apache Member • Co-Founder BerlinBuzzwords • Co-Founder on / 2
  3. 3. • Community Portal targeting Open Source Search 3
  4. 4. Agenda • It's all about performance ... errr, community • It's Java, so it's fast? • Challenges we faced and solved in the last years • Testing, Performance, Concurrency and Resource Utilization • Questions 4
  5. 5. Let's talk about Lucene • Apache TLP since 2001 • Grandfather of projects like Mahout, Hadoop, Nutch, Tika • Used by thousands of applications worldwide • Apache 2.0 licensed • Core has zero dependencies • Developed and maintained by volunteers 5
  6. 6. Just a search engine - so what's the big deal? • True - just software! • Massive community - with big expectations (YOU?) • Mission-critical for lots of companies • End users expect instant results regardless of request complexity • New features sometimes require major changes • Our contract is trust - we need to maintain trust! 6
  7. 7. Trust & Passion • ~30 committers (~10 active, some are paid to work on Lucene) • All technical communication is public (JIRA, mailing list, IRC) • Consensus is king! - usually :) • No lead developer or architect • No stand-ups, meetings or roadmap • Up to 10k mails per month • No passion, no progress! • The Apache way: Community over Code 7
  8. 8. Community? / 0-Dependencies? (timeline graphic: yesterday, today, tomorrow) 8
  9. 9. What are folks looking for? (diagram: confident users expect API stability from Lucene 3.x; new users expect innovation and performance from Lucene 4; both lead to happy users) 9
  10. 10. Maintaining a library - it's tricky! • It hides a lot of technical details • synchronization, data structures, algorithms • Most users don't look behind the scenes • If they don't look behind the scenes, it's on us to make sure the library is as efficient as possible • A good reputation needs to be maintained • Lucene 4 promises a lot, but it has to prove itself 10
  11. 11. But let's talk about fun challenges... • Written in Java... ;) • A ton of different use-cases • Nothing is impossible - or why LowercaseFilter supports supplementary characters • There are bugs, there are always bugs! • Mission-critical software - shipped for free 11
  12. 12. We are working in Java so....• No need to know the machine & your environment• Use JDK Collections, they are fast• Short Lived Objects are Free• Concurrency is straight forward• IO is trivial• Method Calls are fast - there is a JIT, no?• Unicode is there and it works• No memory problems - there is a GC, right? 12
  13. 13. Really? No! 13
  14. 14. Mechanical Sympathy (Jackie Stewart / Martin Thompson) “The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry.” Henry Petroski 14
  15. 15. Where to focus on? (mind map) • Space/CPU utilization: CPU cache utilization, cost of a monitor / CAS, need of mutability, compute up-front? any exploitable data properties? • Concurrency: cost & need of a multiple-writers model • Environment: concrete or virtual? JVM memory allocation, can we allow stack allocation? JIT-friendliness? • Impact on GC: can we specialize data structures? amount of objects (long- & short-living), can I reuse this object in the short term? • Compression: do we need 2 bytes per character? 15
  16. 16. What we focus on... (mind map) • Space/CPU utilization: materialized data structures for the Java heap, write / commit / merge, prevent false sharing, write-once & read-only • Concurrency: single writer - multiple readers, no Java collections where scale is an issue • Environment: carefully test what the JIT likes, guarantee contiguous memory allocation, prevent invokevirtual where possible, MemoryMap | NIO, exploit FS / OS caches • Impact on GC: data structures with a constant number of objects, finite state transducers / machines • Compression: strings can share prefix & suffix, materialize strings to bytes, UTF-8 by default or custom encoding 16
  17. 17. Making things fast and stable is... an engineering effort! & takes time! Go tell it to your boss! 17
  18. 18. But is it necessary? • Yes & no - it all depends on finding the hotspots • Measure & optimize for your use-case • Data structures are not general purpose (e.g. they don't support deletes) • Follow the 80 / 20 rule • Enforce efficiency by design • Java Iterators are a good example of how not to do it! • Remember your OS is highly optimized, make use of it! 18
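A minimal sketch (not Lucene code; all names here are illustrative) of the "Java Iterators are a good example of how not to do it" point: `java.util.Iterator` allocates a fresh object per loop and boxes primitives, while a reusable cursor-style API, the pattern Lucene's enumerators follow, enforces efficiency by design with zero per-element allocation.

```java
// Sketch: a reusable primitive cursor as an alternative to java.util.Iterator.
// iterator() allocates a new object each call and next() returns boxed Integers;
// this cursor is allocated once, reused via reset(), and never boxes.
final class IntCursor {
  private final int[] values;
  private int pos = -1;

  IntCursor(int[] values) { this.values = values; }

  // Advances and reports whether a current value exists; no allocation.
  boolean advance() { return ++pos < values.length; }

  // Primitive access - no boxing.
  int value() { return values[pos]; }

  // The same instance can be rewound instead of re-allocated.
  void reset() { pos = -1; }
}

public class CursorDemo {
  public static void main(String[] args) {
    IntCursor cursor = new IntCursor(new int[] {3, 1, 4, 1, 5});
    long sum = 0;
    while (cursor.advance()) {
      sum += cursor.value();
    }
    System.out.println(sum); // 14
    cursor.reset(); // reuse, don't reallocate
  }
}
```

The design choice: the API makes the cheap path (reuse) the obvious path, instead of relying on callers or the JIT to eliminate garbage.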
  19. 19. Enough high level - concrete problems please! • Challenge: Idle is no good! • Challenge: One data structure to rule them all? • Challenge: How to test a library • Challenge: What's needed for a 20000% performance improvement 19
  20. 20. Challenge: Idle is no good • Building an index is a CPU- & IO-intensive task • Lucene is full of indexes (that's basically all it does) • Ultimate goal is to scale up with CPUs and saturate IO at the same time • Don't go crazy! • Keep your code complexity in mind • Other people might need to maintain / extend this 20
  21. 21. Here is the problem WTF? 21
  22. 22. A closer look... (diagram: inside IndexWriter, a multi-threaded DocumentsWriter feeds per-thread states, but segments are merged in memory and flushed to disk single-threaded through the Directory) Answer: the single-threaded flush gives your threads a break while it's having a drink with your slow-as-s**t IO system 22
  23. 23. Our Solution (diagram: IndexWriter / DocumentsWriter with multiple DocumentsWriterPerThread (DWPT) instances, each buffering its own documents and flushing to disk through the Directory independently) 23
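The DWPT design on this slide can be sketched as follows; this is illustrative code with assumed names, not actual Lucene internals. Each indexing thread owns a private, unsynchronized buffer, so adding a document takes no locks, and a full buffer is flushed as its own segment without stalling the other threads.

```java
// Sketch of the single-writer-per-thread idea behind DWPT:
// one private buffer per thread, lock-free on the hot path,
// independent flushes so no thread waits for a global flush.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DwptSketch {
  // One private buffer per thread - the "DWPT". Only its owner touches it.
  static final ThreadLocal<List<String>> BUFFER =
      ThreadLocal.withInitial(ArrayList::new);
  // Flushed "segments" land on a concurrent queue (stand-in for the Directory).
  static final ConcurrentLinkedQueue<List<String>> FLUSHED =
      new ConcurrentLinkedQueue<>();
  static final int FLUSH_THRESHOLD = 2; // tiny, for demonstration

  static void addDocument(String doc) {
    List<String> buf = BUFFER.get();
    buf.add(doc); // no lock: this thread is the only writer
    if (buf.size() >= FLUSH_THRESHOLD) {
      FLUSHED.add(new ArrayList<>(buf)); // flush only this thread's segment
      buf.clear(); // other threads keep indexing meanwhile
    }
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    for (int i = 0; i < 8; i++) {
      final int id = i;
      pool.submit(() -> addDocument("doc-" + id));
    }
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);
    System.out.println("flushed segments: " + FLUSHED.size());
  }
}
```

The key property mirrors the slide: the single-threaded "merge on flush" stage from the old design disappears, because no shared in-memory segment exists to merge.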
  24. 24. The Result (chart: indexing ingest rate over time with Lucene 4.0 & DWPT, indexing 7 million 4 KB Wikipedia documents, vs. 620 sec on 3.x) 24
  25. 25. Challenge: One data structure to rule them all? • Like most other systems writing data structures to disk, Lucene didn't expose them for extension • Major problem for researchers and engineers who know what they are doing • Special use-cases need special solutions • A unique-ID field is usually a 1-to-1 key-to-document mapping • Holding a posting-list pointer is wasteful • Term lookup + disk seek vs. term lookup + read • Research is active in this area (integer encoding, for instance) 25
  26. 26. 10000 ft view (diagram: IndexWriter and IndexReader sit directly on the Directory abstraction over the file system) 26
  27. 27. Introducing an extra layer (diagram: a Flex API / Codec layer now sits between IndexWriter / IndexReader and the Directory / file system) 27
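What the codec layer buys can be sketched in a few lines; the interface and class names below are hypothetical, not the real Lucene Codec API. Readers and writers depend only on an encoding interface, so the on-disk format of, say, a posting list can be swapped (delta + varint here, an in-memory FST elsewhere) without touching indexing or search code.

```java
// Sketch of a pluggable codec layer: callers program against the
// interface; the concrete on-disk encoding is an implementation detail.
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

interface PostingsCodec {
  byte[] encode(List<Integer> sortedDocIds);
  List<Integer> decode(byte[] data);
}

// One possible encoding: delta-gaps between sorted doc ids, each gap
// written as a variable-length int (7 payload bits per byte).
final class DeltaVarIntCodec implements PostingsCodec {
  @Override public byte[] encode(List<Integer> sortedDocIds) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    int prev = 0;
    for (int id : sortedDocIds) {
      int delta = id - prev; // gaps are small, so few bytes
      prev = id;
      while ((delta & ~0x7F) != 0) {
        out.write((delta & 0x7F) | 0x80); // continuation bit set
        delta >>>= 7;
      }
      out.write(delta);
    }
    return out.toByteArray();
  }

  @Override public List<Integer> decode(byte[] data) {
    List<Integer> ids = new ArrayList<>();
    int prev = 0, pos = 0;
    while (pos < data.length) {
      int delta = 0, shift = 0, b;
      do {
        b = data[pos++] & 0xFF;
        delta |= (b & 0x7F) << shift;
        shift += 7;
      } while ((b & 0x80) != 0);
      prev += delta;
      ids.add(prev);
    }
    return ids;
  }
}

public class CodecDemo {
  public static void main(String[] args) {
    PostingsCodec codec = new DeltaVarIntCodec();
    List<Integer> postings = List.of(3, 17, 512, 513, 100_000);
    byte[] encoded = codec.encode(postings);
    System.out.println(codec.decode(encoded).equals(postings)); // true
  }
}
```

This is also what makes the backwards-compatibility story on the next slide possible: an old segment just keeps its old codec until a merge rewrites it.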
  28. 28. For backwards compatibility, you know? (diagram: an index holds segments written by different available codecs - Lucene 3, Lucene 4, Lucene 5; the IndexReader reads each segment with its matching codec, the IndexWriter writes new segments with the current one, and merges rewrite old segments) 28
  29. 29. Using the right tool for the job... (benchmark chart: switching to the Memory PostingsFormat) 29
  30. 30. Using the right tool for the job... (benchmark chart: switching to the BlockTree term index) 30
  31. 31. Challenge: How to test a library• A library typically has: • lots of interfaces & abstract classes • tons of parameters • needs to handle user input gracefully• Ideally we test all combinations of Interfaces, parameters and user inputs?• Yeah - right! 31
  32. 32. What's wrong with unit tests? • Short answer: nothing! • But... • 1 run == 1000 runs? (only covers regressions?) • Boundaries are rarely reached • Waste of CPU cycles • Tests usually run against a single implementation • How to test against the full Unicode range? 32
  33. 33. An Example (code slide: the method to test, the test, and the result) 33
  34. 34. Can it fail? It can! ...after 53139 runs • Boundaries are everywhere • There is no positive value for Integer.MIN_VALUE • But how to repeat / debug? 34
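The boundary the slide alludes to can be shown in a few lines. The `bucket` method is a hypothetical stand-in for the kind of code under test, not the slide's actual example: `Math.abs` returns `Integer.MIN_VALUE` unchanged, because +2147483648 doesn't fit in an `int`, so any "normalize with abs" pattern fails on exactly one input out of 2^32, which a fixed-input unit test essentially never hits.

```java
// Sketch: the Integer.MIN_VALUE boundary that randomized tests eventually find.
public class AbsBoundary {
  // Hypothetical method under test: map a hash to a non-negative bucket.
  static int bucket(int hash, int numBuckets) {
    return Math.abs(hash) % numBuckets; // BUG: negative for Integer.MIN_VALUE
  }

  public static void main(String[] args) {
    // Math.abs has no positive result here - the int overflows back to itself.
    System.out.println(Math.abs(Integer.MIN_VALUE)); // -2147483648
    System.out.println(bucket(Integer.MIN_VALUE, 10)); // -8, outside [0, 10)
    // One correct variant clears the sign bit instead of negating:
    System.out.println((Integer.MIN_VALUE & 0x7FFFFFFF) % 10); // 0
  }
}
```

With random inputs, this surfaces roughly once every 2^32 draws, which is why a failure "after 53139 runs" is plausible only when many parameters are randomized at once, and why reproducing it needs a recorded seed.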
  35. 35. Solution: A Randomized Unit-Test Framework • Disclaimer: this stuff has been around for ages - not our invention! • Random selection of: • Interface implementations • Input parameters like # iterations, # threads, # cache sizes, intervals, ... • Random valid Unicode strings (breaking the JVM for fun and profit) • Throttling IO • Random low-level data structures • And many more... 35
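The core mechanic can be sketched briefly; this is a toy, not the framework the Lucene team built. Inputs come from a `Random` whose seed is chosen fresh each run but printed on failure, so any failing run is replayable bit-for-bit by passing that seed back in.

```java
// Sketch of seed-driven randomized testing: random inputs per run,
// reproducible on failure via the reported seed.
import java.util.Random;

public class RandomizedTestSketch {
  // Hypothetical method under test (has a known Integer.MIN_VALUE bug).
  static int bucket(int hash, int numBuckets) {
    return Math.abs(hash) % numBuckets;
  }

  public static void main(String[] args) {
    // Re-running with a previously reported seed replays the exact inputs.
    long seed = args.length > 0 ? Long.parseLong(args[0])
                                : new Random().nextLong();
    Random random = new Random(seed);
    for (int iter = 0; iter < 100_000; iter++) {
      int value = random.nextInt(); // random input instead of a fixed fixture
      int b = bucket(value, 10);
      if (b < 0 || b >= 10) {
        // The seed in the message is the whole debugging story.
        throw new AssertionError(
            "bucket out of range for value=" + value + ", seed=" + seed);
      }
    }
    System.out.println("passed (seed=" + seed + ")");
  }
}
```

A given run usually passes; across thousands of runs on many machines the boundary eventually gets drawn, which is exactly the "make sure your unit tests fail - eventually" point of the next slide.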
  36. 36. Make sure your unit tests fail - eventually! • Framework is built for Lucene • Currently factored out into a general-purpose framework • Check it out on: • Wanna help the Lucene project? • Run our tests and report the failures! 36
  37. 37. Challenge: What's needed for a 20k% performance improvement? BEER! COFFEE! FUN! 37
  38. 38. The Problem: Fuzzy Search • Retrieve all documents containing a given term within a Levenshtein distance of <= 2 • Given: a sorted dictionary of terms • Trivial solution: brute force - filter(terms, LD(2, queryTerm)) • Problem: it's damn slow! • O(t) terms examined, t = number of terms in all docs for that field. Exhaustively compares each term. We would prefer O(log₂ t) instead. • O(n²) comparison function, n = length of term. Levenshtein dynamic programming. We would prefer O(n) instead. 38
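The brute-force baseline on this slide looks like the following sketch (illustrative code, not Lucene's): the classic O(n²) dynamic-programming Levenshtein distance applied to every single term in the dictionary. It is correct, but it pays both costs the slide complains about: all t terms are examined, and each comparison is quadratic in term length.

```java
// Sketch: brute-force fuzzy matching - filter(terms, LD(maxDist, query)).
import java.util.ArrayList;
import java.util.List;

public class BruteForceFuzzy {
  // Classic two-row dynamic-programming Levenshtein distance: O(n^2).
  static int levenshtein(String a, String b) {
    int[] prev = new int[b.length() + 1];
    int[] curr = new int[b.length() + 1];
    for (int j = 0; j <= b.length(); j++) prev[j] = j;
    for (int i = 1; i <= a.length(); i++) {
      curr[0] = i;
      for (int j = 1; j <= b.length(); j++) {
        int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
        curr[j] = Math.min(Math.min(curr[j - 1] + 1,  // insertion
                                    prev[j] + 1),     // deletion
                           prev[j - 1] + cost);       // substitution
      }
      int[] tmp = prev; prev = curr; curr = tmp;
    }
    return prev[b.length()];
  }

  // Examines every term: O(t) calls to the O(n^2) distance - the slow path.
  static List<String> fuzzyMatch(List<String> terms, String query, int maxDist) {
    List<String> hits = new ArrayList<>();
    for (String term : terms) {
      if (levenshtein(term, query) <= maxDist) hits.add(term);
    }
    return hits;
  }

  public static void main(String[] args) {
    List<String> dict = List.of("dog", "dogs", "dugs", "cat", "digger");
    System.out.println(fuzzyMatch(dict, "dogs", 1)); // [dog, dogs, dugs]
  }
}
```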
  39. 39. Solution: Turn Queries into Automata • Read a crazy paper about building a Levenshtein automaton and implement it (sounds easy, right?) • Only explore subtrees that can lead to an accept state of some finite state machine • AutomatonQuery traverses the term dictionary and the state machine in parallel • Imagine the index as a state machine that recognizes terms and transduces matching documents • AutomatonQuery represents a user's search needs as an FSM • The intersection of the two emits search results 39
  40. 40. Solution: Turn Queries into Finite State Machines (diagram: example DFA for “dogs” at Levenshtein distance 1, which accepts “dugs”; from “Finite-State Queries in Lucene”, Robert Muir) 40
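The "only explore viable subtrees" idea can be sketched without a full Levenshtein automaton. Lucene builds a true DFA; the stand-in below (illustrative code, all names assumed) gets the same pruning by walking the sorted dictionary as a trie while carrying one Levenshtein DP row per node, and abandoning an entire subtree the moment no extension of the current prefix can still reach distance <= k.

```java
// Sketch: fuzzy matching by intersecting a term trie with a
// Levenshtein computation, pruning subtrees that cannot match.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class TrieFuzzy {
  static final class Node {
    final Map<Character, Node> children = new TreeMap<>(); // sorted, like a term dict
    String term; // non-null if a dictionary term ends here
  }

  static Node build(List<String> terms) {
    Node root = new Node();
    for (String t : terms) {
      Node n = root;
      for (char c : t.toCharArray())
        n = n.children.computeIfAbsent(c, k -> new Node());
      n.term = t;
    }
    return root;
  }

  static List<String> search(Node root, String query, int maxDist) {
    int[] row = new int[query.length() + 1];
    for (int j = 0; j < row.length; j++) row[j] = j; // distance from empty prefix
    List<String> hits = new ArrayList<>();
    for (Map.Entry<Character, Node> e : root.children.entrySet())
      walk(e.getValue(), e.getKey(), query, row, maxDist, hits);
    return hits;
  }

  static void walk(Node node, char c, String query, int[] prev,
                   int maxDist, List<String> hits) {
    int[] row = new int[prev.length];
    row[0] = prev[0] + 1;
    int min = row[0];
    for (int j = 1; j < row.length; j++) {
      int cost = query.charAt(j - 1) == c ? 0 : 1;
      row[j] = Math.min(Math.min(row[j - 1] + 1, prev[j] + 1),
                        prev[j - 1] + cost);
      min = Math.min(min, row[j]);
    }
    if (node.term != null && row[row.length - 1] <= maxDist)
      hits.add(node.term); // accept state reached for a whole term
    if (min <= maxDist)    // else prune: no descendant can ever match
      for (Map.Entry<Character, Node> e : node.children.entrySet())
        walk(e.getValue(), e.getKey(), query, row, maxDist, hits);
  }

  public static void main(String[] args) {
    Node root = build(List.of("dog", "dogs", "dugs", "cat", "digger"));
    System.out.println(search(root, "dogs", 1)); // [dog, dogs, dugs]
  }
}
```

The pruning check (`min <= maxDist`) is the sketch's analogue of "only explore subtrees that can lead to an accept state": whole branches of the dictionary are skipped without ever computing a full distance.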
  41. 41. Turns out to be a massive improvement! In Lucene 3 this is about 0.1 - 0.2 QPS 41
  42. 42. Berlin Buzzwords 2012 • Conference on High Scalability, NoSQL and Search • 600+ Attendees, 50 Sessions, Trainings etc. 42
  43. 43. Who are we? Who builds Lucene? • There is no Who! • There is no such thing as “the creator of Lucene” • It's a true team effort • That and only that makes Lucene what it is today! 43
  44. 44. Questions anybody? ? 44