Faceted search is a powerful technique that lets users easily navigate search results. It can also be used to build rich user interfaces that give an analyst quick insights into the document space. In this session I will introduce the Facets module, how to use it, and its under-the-hood details, as well as optimizations and best practices. I will also describe advanced faceted search capabilities with Lucene Facets.
See conference video - http://www.lucidimagination.com/devzone/events/conferences/ApacheLuceneEurocon2011
This talk describes how you can practically apply some of Lucene 4's new features (such as flexible indexing, scoring improvements, column-stride fields) to improve your search application.
The talk will give a brief description of these new features and some example use cases you can try yourself with the new features now available in Lucene 4. We'll cover how you can configure Solr to:
Set up the schema to use Pulsing or Memory codec for a primary key field
Not use a separate spellcheck index, controlling character-level swaps from the query processor
Sort with a different locale
Use per-field similarity configurations, such as a non-vector-space scoring algorithm
Presented by Shai Erera, Researcher, IBM
Lucene's arsenal has recently expanded to include two new modules: Index Sorting and Replication. Index sorting lets you keep an index consistently sorted based on some criterion (e.g. modification date). This allows for efficient search early-termination and better index compression. Index replication lets you replicate a search index to achieve high availability and fault tolerance, as well as take hot index backups. In this talk we will introduce these modules and discuss implementation and design details as well as best practices.
Presented by Adrien Grand, Software Engineer, Elasticsearch
Although people usually come to Lucene and related solutions in order to make data searchable, they often realize that it can do much more for them. Indeed, its ability to handle high loads of complex queries makes Lucene a perfect fit for analytics applications and, for some use cases, even a credible replacement for a primary data store. It is important to understand the design decisions behind Lucene in order to better understand the problems it can solve and the problems it cannot solve. This talk will explain the design decisions behind Lucene, give insights into how Lucene stores data on disk and how it differs from traditional databases, and finally highlight recent and future changes in Lucene index file formats.
Presented by Fotolog. Lucene is a powerful, high-performance, full-featured text search engine library written entirely in Java, providing a technology suitable for applications of all sizes that require full-text search in heterogeneous environments.
In this presentation, Frank Mash shows you how you can use Lucene with MySQL to offer powerful searching capabilities to your stakeholders. The presentation will cover installation, usage, and optimization of Lucene, and how to interface a Ruby on Rails application with Lucene using a custom Java server. This session is highly recommended for those looking to add cross-platform, database-independent full-text search capability to their application.
This talk moves beyond the standard introduction to Elasticsearch and focuses on how Elasticsearch tries to fulfill its near-realtime contract. Specifically, I’ll show how Elasticsearch manages to be incredibly fast while handling huge amounts of data. After a quick introduction, we will walk through several search features and how the user can get the most out of Elasticsearch. This talk will go under the hood, exploring features like search, aggregations, highlighting, (non-)use of probabilistic data structures, and more.
Zoe Slattery's slides from PHPNW08:
The ability to store large quantities of local data means that many applications require some form of text search and retrieval facility. From the point of view of the application developer there are a number of choices to make; the first is whether to use a complete packaged solution or one of the available information retrieval (IR) libraries to build a custom IR solution. In this talk I’ll look at the options for PHP programmers who choose to embed IR facilities within their applications.
For Java programmers there is clearly a good range of options for text retrieval libraries, but options for PHP programmers are more limited. At first sight, for a PHP programmer wishing to embed indexing and search facilities in their application, the choice seems obvious - the PHP implementation of Lucene (Zend Search Lucene). There is no requirement to support another language, the code is PHP and therefore easy for PHP programmers to work with, and the license is commercially friendly. However, whilst ease of integration and support are key factors in choice of technology, performance can also be important; the performance of the PHP implementation of Lucene is poor compared to the Java implementation.
In this talk I’ll explain the differences in performance between the PHP implementation of Lucene and the Java implementation and examine the other options available to PHP programmers for whom performance is a critical factor.
Search is everywhere, and therefore so is Apache Lucene. While it provides amazing out-of-the-box defaults, there are enough weird projects out there to require custom search scoring and ranking. In this talk, I’ll walk through how to use Lucene to implement your own scoring and search ranking. We’ll see how you can achieve amazing power (and responsibility) over your search results, explore the flexibility of Lucene’s data structures, and weigh the pros and cons of custom Lucene scoring versus other methods of improving search relevancy.
High Performance JSON Search and Relational Faceted Browsing with Lucene - lucenerevolution
Presented by Renaud Delbru, Co-Founder, SindiceTech
In this presentation, we will discuss how Lucene and Solr can be used for very efficient search of tree-shaped schemaless documents, e.g. JSON or XML, and can then be made to address both graph and relational data search. We will discuss the capabilities of SIREn, a Lucene/Solr plugin we have developed to deal with huge collections of tree-shaped schemaless documents, and how SIREn is built using Lucene extensibility capabilities (Analysis, Codec, Flexible Query Parser). We will compare it with Lucene's BlockJoin Query API in nested, schemaless, data-intensive scenarios. We will then go through use cases that show how relational or graph data can be turned into JSON documents using Hadoop and Pig, and how this can be used in conjunction with SIREn to create relational faceting systems with unprecedented performance. Take-away lessons from this session will be an awareness of using Lucene/Solr and Hadoop for relational and graph data search, and that it is now possible to have relational faceted browsers with sub-second response times on commodity hardware.
Apache Lucene's next major release, 4.0, will introduce lots of flexibility into indexing, but also fundamental changes to the well-known APIs: It features a new and consistent, 4-dimensional iteration API on top of a low-level, pluggable codec API giving applications full control over the postings data.
Munching & crunching - Lucene index post-processing - abial
Lucene EuroCon 10 presentation on index post-processing (splitting, merging, sorting, pruning), tiered search, bitwise search, and a few slides on MapReduce indexing models (I ran out of time to show them, but they are there...)
Portable Lucene Index Format & Applications - Andrzej Bialecki - lucenerevolution
See conference video - http://www.lucidimagination.com/devzone/events/conferences/ApacheLuceneEurocon2011
This talk will present a design and implementation of a flexible, version-independent serialization format for Lucene indexes and its applications in index upgrades / downgrades, in distributed document analysis, in distributed indexing, and in integration with external indexing pipelines. This format enables submitting pre-analyzed documents to Lucene/Solr, and transferring parts of indexes between nodes in a distributed setup.
Got data? Let's make it searchable! This presentation will demonstrate getting documents into Solr quickly, will provide some tips in adjusting Solr's schema to match your needs better, and finally will discuss how to showcase your data in a flexible search user interface. We'll see how to rapidly leverage faceting, highlighting, spell checking, and debugging. Even after all that, there will be enough time left to outline the next steps in developing your search application and taking it to production.
Our talk covers the migration of the Twitter architecture from primarily Ruby on Rails (RoR) to a JVM-based SOA system with emphasis on high performance, scalability, and resilience to failure. General lessons include the advantages of asynchronous, real-time architectures over synchronous, process / thread-oriented systems, as well as caching and data store patterns.
Adapting Ajax-Solr to Compare Different Sets of Documents - Joan Codina - lucenerevolution
See conference video - http://www.lucidimagination.com/devzone/events/conferences/ApacheLuceneEurocon2011
One of the main features of Solr is faceted search. Facets are the top terms present in the results of a query. But facets do not indicate the most statistically relevant terms of a query, that is, the terms that appear more often in the documents selected by the query than in the rest of the collection. A critical factor in making such statistical insights broadly useful is to make them visual, i.e., using charts and graphs that display these quantitative relationships. We will present how to adapt Ajax-Solr to find the most prominent terms of a query compared to the full set or just another query. We will present an example of how this can be used to find current topics in the news and extract that information into visually communicative charts and graphs.
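As a concrete illustration of "statistically relevant" terms, one common choice is Dunning's log-likelihood (G²) statistic, which compares a term's frequency in the query's result set against its frequency in the rest of the collection. Below is a minimal Java sketch, assuming the four counts have already been obtained (e.g. from two facet queries); the class and method names are illustrative, not from the talk.

```java
// Hypothetical sketch: Dunning's log-likelihood (G^2) as one way to score how
// much more prominent a term is in a result set than in the whole collection.
public final class TermProminence {

    /**
     * @param a term frequency in the result set
     * @param b term frequency in the rest of the collection
     * @param c total term count of the result set
     * @param d total term count of the rest of the collection
     */
    public static double logLikelihood(long a, long b, long c, long d) {
        double e1 = (double) c * (a + b) / (c + d); // expected freq in result set
        double e2 = (double) d * (a + b) / (c + d); // expected freq elsewhere
        double g2 = 0.0;
        if (a > 0) g2 += a * Math.log(a / e1);
        if (b > 0) g2 += b * Math.log(b / e2);
        return 2.0 * g2;
    }

    public static void main(String[] args) {
        // A term occurring 50 times in a 10k-token result set but only 60 times
        // in the remaining 1M tokens scores far above chance.
        System.out.println(logLikelihood(50, 60, 10_000, 1_000_000));
    }
}
```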
Real-time analysis starts with transforming raw data into structured records. Typically this is done with bespoke business logic written for each use case. Joey Echeverria presents a configuration-based, reusable library for streaming ETL that can be embedded in real-time stream-processing systems and demonstrates its real-world use cases with Apache Kafka and Apache Hadoop.
Netflix Global Search - Lucene Revolution - ivan provalov
This talk covers the challenges of supporting autocomplete (instant) search in different languages. Search configuration in Solr, scoring, tokenization, custom components, and testing issues are discussed.
Big & Personal: the data and the models behind Netflix recommendations by Xa... - BigMine
Since the Netflix $1 million Prize, announced in 2006, our company has been known for having personalization at the core of our product. Even at that point in time, the dataset that we released was considered “large”, and we stirred innovation in the (Big) Data Mining research field. Our current product offering is now focused around instant video streaming, and our data is now many orders of magnitude larger. Not only do we have many more users in many more countries, but we also receive many more streams of data. Besides the ratings, we now also use information such as what our members play, browse, or search.
In this talk I will discuss the different approaches we follow to deal with these large streams of data in order to extract information for personalizing our service. I will describe some of the machine learning models used, as well as the architectures that allow us to combine complex offline batch processes with real-time data streams.
A presentation on the Netflix Cloud Architecture and NetflixOSS open source. For the All Things Open 2015 conference in Raleigh 2015/10/19. #ATO2015 #NetflixOSS
Building production Spark Streaming applications - Joey Echeverria
Designing, implementing, and testing an Apache Spark Streaming application is necessary to deploy to production but is not sufficient for long term management and monitoring. Simply learning the Spark Streaming APIs only gets you part of the way there. In this talk, I’ll be focusing on everything that happens after you’ve implemented your application in the context of a real-time alerting system for IT operational data.
AWS re:Invent 2016: Searching Inside Video at Petabyte Scale Using Spot (WIN307) - Amazon Web Services
Video is a "last-mile problem" for search technology. Unlike webpages, documents, and email, content in videos has traditionally been impossible to search. Recent advances in automated speech and text recognition, however, let businesses and universities search inside video assets as easily as inside textual content. In this session, you'll learn how Panopto is using AWS to solve the video-search problem at scale, while saving over 50% in operating costs by taking advantage of Spot instances. We discuss the cross-platform architecture that combines Windows and Linux to provide cost-effective video processing and search indexing. We also dive deep into scaling Spot elastically based on user demand, handling fallback situations when instances are revoked, and using the Spot bidding process to optimize cost structure. Finally, we discuss future plans to reduce operating costs even further through Spot fleets and grid processing.
Streaming ETL for Data Lakes using Amazon Kinesis Firehose - May 2017 AWS Onl... - Amazon Web Services
Learning Objectives:
- Understand key requirements for collecting, preparing, and loading streaming data into data lakes
- Get an overview of transmitting data using Amazon Kinesis Firehose
- Learn how to perform data transformations with Amazon Kinesis Firehose
Data lakes enable your employees across the organization to access and analyze massive amounts of unstructured and structured data from disparate data sources, many of which generate data continuously and rapidly. Making this data available in a timely fashion for analysis requires a streaming solution that can durably and cost-effectively ingest this data into your data lake. Amazon Kinesis Firehose is a fully managed service that makes it easy to prepare and load streaming data into AWS. In this tech talk, we will provide an overview of Amazon Kinesis Firehose and dive deep into how you can use the service to collect, transform, batch, compress, and load real-time streaming data into your Amazon S3 data lakes.
In this presentation, you will get a look under the covers of Amazon Redshift, a fast, fully-managed, petabyte-scale data warehouse service for less than $1,000 per TB per year. Learn how Amazon Redshift uses columnar technology, optimized hardware, and massively parallel processing to deliver fast query performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. We'll also walk through techniques for optimizing performance, and you'll hear from a specific customer about their use case, taking advantage of fast performance on enormous datasets while leveraging economies of scale on the AWS platform.
Speakers:
Ian Meyers, AWS Solutions Architect
Toby Moore, Chief Technology Officer, Space Ape
NoSQL Couchbase Lite & Big Data HPCC Systems - Fujio Turner
Mobile devices are becoming a major new source of data. Managing data on mobile devices has become easier with the NoSQL Couchbase Lite mobile database. Making sense of that data, analyzing it, and scaling to exabytes has also become easier with the LexisNexis big data platform, HPCC Systems.
Strata Conference + Hadoop World NY 2016: Lessons learned building a scalable... - Sumeet Singh
Building a real-time monitoring service that handles millions of custom events per second while satisfying complex rules, varied throughput requirements, and numerous dimensions simultaneously is a complex endeavor. Sumeet Singh and Mridul Jain explain how Yahoo approached these challenges with Apache Storm Trident, Kafka, HBase, and OpenTSDB and discuss the lessons learned along the way.
Sumeet and Mridul explain scaling patterns backed by real scenarios and data to help attendees develop their own architectures and strategies for dealing with the scale challenges that come with real-time big data systems. They also explore the tradeoffs made in catering to a diverse set of daily users and the associated usability challenges that motivated Yahoo to build a self-serve, easy-to-use platform that requires minimal programming experience. Sumeet and Mridul then discuss event-level tracking for debugging and troubleshooting problems that our users may encounter at this scale. Over the course of their talk, they also address building infrastructure and operational intelligence with anomaly detection, alert correlation, and trend analysis based on the monitoring platform.
Data warehousing is a critical component for analysing and extracting actionable insights from your data. Amazon Redshift allows you to deploy a scalable data warehouse in a matter of minutes and start analysing your data right away using your existing business intelligence tools.
Amazon Redshift is a managed service that gives you a ready-to-use data warehouse: you worry about loading and using your data, while the infrastructure details (servers, replication, backups) are managed by AWS.
AWS re:Invent 2016 | GAM301 | How EA Leveraged Amazon Redshift and AWS Partner... - Amazon Web Services
In November 2015, Capital Games launched a mobile game accompanying a major feature film release. The back end of the game is hosted in AWS and uses big data services like Amazon Kinesis, Amazon EC2, Amazon S3, Amazon Redshift, and AWS Data Pipeline. Capital Games will describe some of their challenges on their initial setup and usage of Amazon Redshift and Amazon EMR. They will then go over their engagement with AWS Partner 47lining and talk about specific best practices regarding solution architecture, data transformation pipelines, and system maintenance using AWS big data services. Attendees of this session should expect a candid view of the process to implementing a big data solution. From problem statement identification to visualizing data, with an in-depth look at the technical challenges and hurdles along the way.
Text Classification Powered by Apache Mahout and Lucene - lucenerevolution
Presented by Isabel Drost-Fromm, Software Developer, Apache Software Foundation/Nokia Gate 5 GmbH at Lucene/Solr Revolution 2013 Dublin
Text classification automates the task of filing documents into pre-defined categories based on a set of example documents. The first step in automating classification is to transform the documents into feature vectors. Though this step is highly domain specific, Apache Mahout provides a lot of easy-to-use tooling to help you get started, most of which relies heavily on Apache Lucene for analysis, tokenisation and filtering. This session shows how to use faceting to quickly get an understanding of the fields in your documents. It will walk you through the steps necessary to convert your text documents into feature vectors that Mahout classifiers can use, including a few anecdotes on drafting domain-specific features.
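To make the Lucene-backed analysis step concrete, here is a minimal sketch of tokenizing raw text with a Lucene 4.x Analyzer - the kind of token stream a vectorizer would consume. The field name "body" and the choice of StandardAnalyzer are illustrative assumptions, not Mahout's actual pipeline.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public final class Tokenize {
    // Turn raw text into the token list a vectorizer would consume.
    public static List<String> tokens(Analyzer analyzer, String text) throws IOException {
        List<String> result = new ArrayList<>();
        try (TokenStream ts = analyzer.tokenStream("body", new StringReader(text))) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();                       // mandatory before incrementToken()
            while (ts.incrementToken()) {
                result.add(term.toString());  // analyzed: lowercased, stopword-filtered
            }
            ts.end();
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_45);
        System.out.println(tokens(analyzer, "The Quick Brown Fox"));
    }
}
```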
Presented by Markus Klose, Search + Big Data Consultant SHI Elektronische Medien GmbH at Lucene/Solr Revolution 2013 Dublin
Kibana4Solr is search-driven, scalable, browser-based and extremely user friendly (also for non-technical users). Logs are everywhere: any device, system or human can potentially produce a huge amount of information saved in logs. The amount of available logs and their semi-structured nature make meaningful processing in real time quite a difficult task, so valuable business insights stored in logs might not be found. Kibana4Solr is a search-driven approach to handle that challenge. It offers a user-friendly, browser-based dashboard which can be easily customized to particular needs. In the session Kibana4Solr will be introduced, some light will be shed on its architectural features, some ideas will be given in terms of possible business use cases, and finally a live demo of Kibana4Solr will be shown.
Building Client-side Search Applications with Solr - lucenerevolution
Presented by Daniel Beach, Search Application Developer, OpenSource Connections
Solr is a powerful search engine, but creating a custom user interface can be daunting. In this fast paced session I will present an overview of how to implement a client-side search application using Solr. Using open-source frameworks like SpyGlass (to be released in September) can be a powerful way to jumpstart your development by giving you out-of-the box results views with support for faceting, autocomplete, and detail views. During this talk I will also demonstrate how we have built and deployed lightweight applications that are able to be performant under large user loads, with minimal server resources.
Integrate Solr with real-time stream processing applications - lucenerevolution
Presented by Timothy Potter, Founder, Text Centrix
Storm is a real-time distributed computation system used to process massive streams of data. Many organizations are turning to technologies like Storm to complement batch-oriented big data technologies, such as Hadoop, to deliver time-sensitive analytics at scale. This talk introduces an emerging architectural pattern of integrating Solr and Storm to process big data in real time. There are a number of natural integration points between Solr and Storm, such as populating a Solr index or supplying data to Storm using Solr’s real-time get support. In this session, Timothy will cover the basic concepts of Storm, such as spouts and bolts. He’ll then provide examples of how to integrate Solr into Storm to perform large-scale indexing in near real time, as in the sketch below. In addition, we'll see how to embed Solr in a Storm bolt to match incoming tuples against pre-configured queries, commonly known as percolator. Attendees will come away from this presentation with a good introduction to stream processing technologies and several real-world use cases of how to integrate Solr with Storm.
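As a rough illustration of that indexing integration point, here is a minimal sketch of a Storm bolt (Storm 0.9-era backtype.storm API) that pushes tuples into Solr via SolrJ. The Solr URL and the tuple field names are illustrative assumptions, not the speaker's actual code.

```java
import java.util.Map;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

// Sketch of a Storm bolt that indexes incoming tuples into Solr.
// Field names ("id", "text") and the Solr URL are illustrative assumptions.
public class SolrIndexBolt extends BaseRichBolt {
    private transient HttpSolrServer solr;
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        this.solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
    }

    @Override
    public void execute(Tuple tuple) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", tuple.getStringByField("id"));
        doc.addField("text", tuple.getStringByField("text"));
        try {
            solr.add(doc);           // rely on autoCommit/softCommit for NRT visibility
            collector.ack(tuple);
        } catch (SolrServerException | java.io.IOException e) {
            collector.fail(tuple);   // let the spout replay the tuple
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // terminal bolt: no output streams
    }
}
```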
Configure your Solr cluster to handle hundreds of millions of documents without even noticing, handle queries in milliseconds, and use Near Real Time indexing and searching with document versioning. Scale your cluster both horizontally and vertically by using shards and replicas. In this session you'll learn how to make your indexing process blazing fast and make your queries efficient even with large amounts of data in your collections. You'll also see how to optimize your queries to leverage caches as much as your deployment allows and how to observe your cluster with the Solr administration panel, JMX, and third-party tools. Finally, learn how to make changes to already deployed collections: split their shards and alter their schema by using the Solr API, as sketched below.
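For instance, shard splitting on a deployed collection goes through the documented Collections API. Below is a minimal Java sketch issuing the SPLITSHARD call over plain HTTP; the host, collection, and shard names are illustrative.

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: trigger a shard split on a live collection via the Collections API.
// Host, collection, and shard names are illustrative.
public class SplitShardExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8983/solr/admin/collections"
                + "?action=SPLITSHARD&collection=logs&shard=shard1&wt=json");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) != -1; ) {
                System.out.write(buf, 0, n);   // print the JSON status response
            }
        }
        System.out.println();
    }
}
```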
Presented by Rafal Kuć, Consultant and Software Engineer, Sematext Group, Inc.
Even though Solr can run without causing any trouble for long periods of time, it is very important to monitor and understand what is happening in your cluster. In this session you will learn how to use various tools to monitor how Solr is behaving at a high level, but also at the Lucene, JVM, and operating system levels. You'll see how to react to what you see and how to make changes to configuration, index structure and shard layout using the Solr API. We will also discuss different performance metrics to which you ought to pay extra attention. Finally, you'll learn what to do when things go awry - we will share a few examples of troubleshooting and then dissect what was wrong and what had to be done to make things work again.
Implementing a Custom Search Syntax using Solr, Lucene, and Parboiled - lucenerevolution
In a recent project with the United States Patent and Trademark Office, Opensource Connections was asked to prototype the next generation of patent search - using Solr and Lucene. An important aspect of this project was the implementation of BRS, a specialized search syntax used by patent examiners during the examination process. In this fast-paced session we will relate our experiences and describe how we used a combination of Parboiled (a Parsing Expression Grammar [PEG] parser), Lucene Queries and SpanQueries, and an extension of Solr's QParserPlugin to build BRS search functionality in Solr. First we will characterize the patent search problem and then define the BRS syntax itself. We will then introduce the Parboiled parser and discuss various considerations that one must make when designing a syntax parser. Following this we will describe the methodology used to implement the search functionality in Lucene/Solr. Finally, we will include an overview of our syntactic and semantic testing strategies. The audience will leave this session with an understanding of how Solr, Lucene, and Parboiled may be used to implement their own custom search parser.
Many of us tend to hate or simply ignore logs, and rightfully so: they’re typically hard to find, difficult to handle, and are cryptic to the human eye. But can we make logs more valuable and more usable if we index them in Solr, so we can search and run real-time statistics on them? Indeed we can, and in this session you’ll learn how to make that happen. In the first part of the session we’ll explain why centralized logging is important, what valuable information one can extract from logs, and we’ll introduce the leading tools from the logging ecosystems everyone should be aware of - from syslog and log4j to LogStash and Flume. In the second part we’ll teach you how to use these tools in tandem with Solr. We’ll show how to use Solr in a SolrCloud setup to index large volumes of logs continuously and efficiently. Then, we'll look at how to scale the Solr cluster as your data volume grows. Finally, we'll see how you can parse your unstructured logs and convert them to nicely structured Solr documents suitable for analytical queries.
Real-time Inverted Search in the Cloud Using Lucene and Storm - lucenerevolution
Building real-time notification systems is often limited to basic filtering and pattern matching against incoming records. Allowing users to query incoming documents using Solr's full range of capabilities is much more powerful. In our environment we needed a way to allow for tens of thousands of such query subscriptions, meaning we needed to find a way to distribute the query processing in the cloud. By creating in-memory Lucene indices from our Solr configuration, we were able to parallelize our queries across our cluster. To achieve this distribution, we wrapped the processing in a Storm topology to provide a flexible way to scale and manage our infrastructure. This presentation will describe our experiences creating this distributed, real-time inverted search notification framework.
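One way to implement this pattern with plain Lucene is the MemoryIndex class: index one incoming document in memory and run every stored subscription query against it. Below is a minimal sketch (Lucene 4.x APIs); the field name and example queries are illustrative, not the presenters' actual framework.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

// Sketch of the "inverted search" idea: index ONE incoming document in an
// in-memory index, then run all stored subscription queries against it.
public class PercolateExample {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_45);
        String[] subscriptions = { "text:lucene", "text:\"real time\"", "text:storm" };

        // One MemoryIndex per incoming document (cheap to build and discard).
        MemoryIndex doc = new MemoryIndex();
        doc.addField("text", "Real-time search with Lucene and Storm", analyzer);

        QueryParser parser = new QueryParser(Version.LUCENE_45, "text", analyzer);
        for (String s : subscriptions) {
            Query q = parser.parse(s);
            float score = doc.search(q);     // > 0 means the stored query matched
            if (score > 0f) {
                System.out.println("notify subscriber of: " + s);
            }
        }
    }
}
```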
Solr's Admin UI - Where does the data come from? - lucenerevolution
Like many web applications in the past, the Solr Admin UI up until 4.0 was entirely server based. It used separate code on the server to generate its dashboards, overviews and statistics. All that code had to be maintained and still... you weren't really able to use that kind of data for the things you needed it for. It was wrapped into HTML, most of the time difficult to extract, and it changed structure from time to time without announcement. After a short look back, we're going to look into the current state of the Solr Admin UI - a client-side application, running completely in your browser. We'll see how it works, where it gets its data from and how you can get the very same data and wire it into your own custom applications, dashboards and/or monitoring systems.
Steve will show how and why to use Solr’s new Schemaless Mode, under which document indexing can be performed with no up-front schema configuration. Solr uses content clues to choose among a predefined set of field types and then automatically add previously unseen fields to the schema.
Text Classification with Lucene/Solr, Apache Hadoop and LibSVM - lucenerevolution
In this session we will show how to build a text classifier using Apache Lucene/Solr together with the libSVM library. We classify our corpus of job offers into a number of predefined categories; each indexed document (a job offer) then belongs to zero, one or more categories. Known machine learning techniques for text classification include the naïve Bayes model, logistic regression, neural networks, and support vector machines (SVM). We use Lucene/Solr to construct the feature vectors. Then we use the libSVM library, known as the reference implementation of the SVM model, to classify the documents. We construct as many one-vs-all SVM classifiers as there are classes in our setting, then using the Hadoop MapReduce framework we reconcile the results of our classifiers. The end result is a scalable multi-class classifier. Finally we outline how the classifier is used to enrich basic Solr keyword search.
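To sketch the feature-construction step: one indexed document's term vector can be turned into a libSVM input line ("label featureId:value ..."). The sketch below assumes the field was indexed with term vectors and that a term-to-feature-id mapping already exists; the names and the raw-TF weighting are illustrative choices, not necessarily the presenters'.

```java
import java.util.Map;
import java.util.TreeMap;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

// Sketch: turn one indexed document's term vector into a libSVM input line.
// Assumes the "body" field was indexed with term vectors (Lucene 4.x APIs).
public class VectorizeDoc {
    public static String toLibSvmLine(IndexReader reader, int docId, int label,
                                      Map<String, Integer> termIds) throws Exception {
        Terms vector = reader.getTermVector(docId, "body");
        TreeMap<Integer, Long> features = new TreeMap<>();   // libSVM wants sorted ids
        TermsEnum terms = vector.iterator(null);             // Lucene 4.x signature
        for (BytesRef term = terms.next(); term != null; term = terms.next()) {
            Integer id = termIds.get(term.utf8ToString());
            if (id != null) {
                features.put(id, terms.totalTermFreq());     // raw TF as feature value
            }
        }
        StringBuilder line = new StringBuilder().append(label);
        for (Map.Entry<Integer, Long> e : features.entrySet()) {
            line.append(' ').append(e.getKey()).append(':').append(e.getValue());
        }
        return line.toString();
    }
}
```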
As part of their work with large media monitoring companies, Flax has developed a technique for applying tens of thousands of stored Lucene queries to a document in under a second. We'll talk about how we built intelligent filters to reduce the number of actual queries applied and how we extended Lucene to extract the exact hit positions of matches, the challenges of implementation, and how it can be used, including applications that monitor hundreds of thousands of news stories every day.
Spellchecking in Trovit: Implementing a Contextual Multi-language Spellchecke... - lucenerevolution
Presented by Xavier Sanchez Loro, Ph.D, Trovit Search SL
This session aims to explain the implementation and use case for spellchecking in the Trovit search engine. Trovit is a classified-ads search engine supporting several different sites, one for each country and vertical. Our search engine supports multiple indexes in multiple languages, each with several million indexed ads. Those indexes are segmented into several different sites depending on the type of ads (homes, cars, rentals, products, jobs and deals). We have developed a multi-language spellchecking system using Solr and Lucene in order to help our users better find the desired ads and avoid the dreaded 0 results as much as possible. As such, our goal is not pure orthographic correction, but also suggestion of correct searches for a certain site.
Presented by Mark Miller, Software Developer, Cloudera
Apache Lucene/Solr committer Mark Miller talks about how Solr has been integrated into the Hadoop ecosystem to provide full text search at "Big Data" scale. This talk will give an overview of how Cloudera has tackled integrating Solr into the Hadoop ecosystem and highlights some of the design decisions and future plans. Learn how Solr is getting 'cozy' with Hadoop, which contributions are going to what project, and how you can take advantage of these integrations to use Solr efficiently at "Big Data" scale. Learn how you can run Solr directly on HDFS, build indexes with Map/Reduce, load Solr via Flume in 'Near Realtime' and much more.
A Novel Methodology for Handling Document Level Security in Search Based Appl... - lucenerevolution
Presented by Rajini Maski, Senior Software Engineer, Happiest Minds Technologies
An important problem with document search in any content management system (CMS) is the handling of permission-based search requests for each user. In this session, we present an algorithm and framework that allows the search engine to plainly index both public and privileged documents without any early-binding overhead, thus enforcing document-level security policies only at the time of search. With our late-binding approach for ACLs (access control lists) and some custom components, we have achieved a reduction in search-time overhead. We will also discuss the order of complexity and execution time for the search overhead.
How Lucene Powers the LinkedIn Segmentation and Targeting Platform - lucenerevolution
Presented by Hien Luu, Technical Lead, LinkedIn
Rajasekaran Rangaswamy, LinkedIn
For internet companies, marketing campaigns play an important role in acquiring new customers, retaining and engaging existing customers, and promoting new products. The LinkedIn segmentation and targeting platform helps marketing teams to easily and quickly create member segments based on member attributes using nested predicate expressions ranging from simple to complex. Once segments are created, then those qualified members are targeted with marketing campaigns.
Lucene is a key piece of technology in this platform. This session will cover how we leverage Hadoop to efficiently build Lucene indexes for a large and growing member attribute data set of 225 million members, and how Lucene is used to create segments based on complex nested predicate expressions. This presentation will also share some of the lessons we learned and challenges we encountered from using Lucene to search over large data sets.
Presented by Stefan Pohl, Senior Research Engineer, HERE, a Nokia Business
Besides the quality of results, the time that it takes from the submission of a query to the display of results is of utmost importance to user satisfaction. Within search engine implementations such as Apache Lucene, significant development efforts are hence directed towards reducing query latency. In this session, I will explain reasons for high query latencies and describe general approaches and recent developments within Lucene to counter them.To make the presented material relevant to a wider audience, I will focus on the actual query processing, as this is at the core of every query and search use-case.
9. Introduction
Timeline (2008-2014):
2008 - Twitter acquires Summize (MySQL-based RT search engine)
2010 - Modified Lucene (Earlybird) ships and replaces MySQL indexes
2011 - New Earlybird features: image/video search; index compression; efficient relevance search in time-sorted index
2014 - Tweet archive search on SSD with vanilla Lucene; new RT posting list format that supports arbitrary document lengths, but keeps performance optimizations for tweets
19. Search Architecture - RT index (Earlybird)
• Modified Lucene index implementation optimized for realtime search
• IndexWriter buffer is searchable (no need to flush to allow searching)
• In-memory
• Hash-partitioned, static layout
31. Search Architecture - Archive index
• Two tiers: in-memory and on SSD
• In-memory index: queried first
• SSD index: much bigger index with more tweets, lower max. QPS, limited by SSD IOPS; only needs to be queried if the in-memory index did not yield enough results
38. Inverted Index 101
Table with 6 documents:
1. The old night keeper keeps the keep in the town
2. In the big old house in the big old gown.
3. The house in the town had the big old keep
4. Where the old night keeper never did sleep.
5. The night keeper keeps the keep in the night
6. And keeps in the dark and sleeps in the light.
Example from: Justin Zobel, Alistair Moffat, "Inverted files for text search engines", ACM Computing Surveys (CSUR), v.38 n.2, p.6-es, 2006
39. Inverted Index 101
Table with 6 documents (as above), with the resulting dictionary and posting lists:

term    freq  posting list
and     1     <6>
big     2     <2> <3>
dark    1     <6>
did     1     <4>
gown    1     <2>
had     1     <3>
house   2     <2> <3>
in      5     <1> <2> <3> <5> <6>
keep    3     <1> <3> <5>
keeper  3     <1> <4> <5>
keeps   3     <1> <5> <6>
light   1     <6>
never   1     <4>
night   3     <1> <4> <5>
old     4     <1> <2> <3> <4>
sleep   1     <4>
sleeps  1     <6>
the     6     <1> <2> <3> <4> <5> <6>
town    2     <1> <3>
where   1     <4>
40. Inverted Index 101
Query: keeper
The dictionary entry for "keeper" (freq 3) points to the posting list <1> <4> <5>, i.e. the query matches documents 1, 4 and 5.
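A toy Java version of the structure above makes the mechanics concrete: build the dictionary and posting lists for the six example documents, then answer the query. This is an illustration of the concept, not Lucene's implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeSet;

// Toy inverted index over the six example documents; answers "keeper".
public class InvertedIndex101 {
    public static void main(String[] args) {
        String[] docs = {
            "the old night keeper keeps the keep in the town",
            "in the big old house in the big old gown",
            "the house in the town had the big old keep",
            "where the old night keeper never did sleep",
            "the night keeper keeps the keep in the night",
            "and keeps in the dark and sleeps in the light",
        };
        Map<String, TreeSet<Integer>> postings = new LinkedHashMap<>();
        for (int docId = 1; docId <= docs.length; docId++) {
            for (String term : docs[docId - 1].split(" ")) {
                postings.computeIfAbsent(term, t -> new TreeSet<>()).add(docId);
            }
        }
        // Prints "keeper -> [1, 4, 5]", matching the table above.
        System.out.println("keeper -> " + postings.get("keeper"));
    }
}
```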
44. Posting list encoding
Doc IDs to encode: 5, 15, 9000, 9002, 100000, 100090
Delta encoding: 5 10 8985 2 90998 90
VInt compression: 5 is encoded as 00000101; values 0 <= delta <= 127 need one byte
45. Posting list encoding
Doc IDs to encode: 5, 15, 9000, 9002, 100000, 100090
Delta encoding: 5 10 8985 2 90998 90
VInt compression: 8985 is encoded as 11000110 00011001; values 128 <= delta <= 16384 need two bytes
46. Posting list encoding
Doc IDs to encode: 5, 15, 9000, 9002, 100000, 100090
Delta encoding: 5 10 8985 2 90998 90
VInt compression: 11000110 00011001 (delta 8985); the first bit indicates whether the next byte belongs to the same value
47. Posting list encoding
Doc IDs to encode: 5, 15, 9000, 9002, 100000, 100090
Delta encoding: 5 10 8985 2 90998 90
VInt compression: 11000110 00011001 (delta 8985)
• Variable number of bytes: a VInt-encoded posting cannot be written as a primitive Java type; therefore it cannot be written atomically
48. Posting list encoding
Doc IDs to encode: 5, 15, 9000, 9002, 100000, 100090
Delta encoding: 5 10 8985 2 90998 90 (read direction: old to new)
• Each posting depends on the previous one; decoding is only possible in old-to-new direction
• With recency ranking (new-to-old) no early termination is possible
49. Posting list encoding
• By default Lucene uses a combination of delta encoding and VInt compression
• VInts are expensive to decode
• Problem 1: How to traverse posting lists backwards?
• Problem 2: How to write a posting atomically?
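A minimal Java sketch of the delta + VInt scheme just described (illustrative, not Lucene's actual code; this sketch writes the low 7 bits first, while the slides show the bytes high-bits-first):

```java
import java.io.ByteArrayOutputStream;

// Sketch of delta encoding + VInt compression for the example doc IDs.
public class VIntDemo {
    static void writeVInt(ByteArrayOutputStream out, int value) {
        while ((value & ~0x7F) != 0) {          // more than 7 bits left?
            out.write((value & 0x7F) | 0x80);   // low 7 bits, continuation bit set
            value >>>= 7;
        }
        out.write(value);                       // final byte, high bit clear
    }

    public static void main(String[] args) {
        int[] docIds = {5, 15, 9000, 9002, 100000, 100090};
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int previous = 0;
        for (int docId : docIds) {
            writeVInt(out, docId - previous);   // deltas: 5 10 8985 2 90998 90
            previous = docId;
        }
        // 9 bytes here vs. 24 for six raw ints, but each value depends on the
        // previous one, so the list can only be decoded front-to-back (old-to-new).
        System.out.println(out.size() + " bytes");
    }
}
```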
52. Posting list encoding in Earlybird v1
One int (32 bits) per posting: docID (24 bits, max. 16.7M) + textPosition (8 bits, max. 255)
• Tweet text can only have 140 chars
53. Posting list encoding in Earlybird v1
Doc IDs to encode: 5, 15, 9000, 9002, 100000, 100090
Earlybird encoding (absolute doc IDs, no deltas): 5 15 9000 9002 100000 100090 (read direction: new to old)
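A sketch of the v1 packing in Java: one 32-bit int per posting, absolute docID in the upper 24 bits, text position in the lower 8. The exact bit layout here is an assumption for illustration, not Twitter's code.

```java
// Sketch of the v1 posting layout: docID in the upper 24 bits, position in the
// lower 8 bits of a single int.
public class PostingV1 {
    static int encode(int docId, int textPosition) {
        assert docId < (1 << 24) && textPosition < (1 << 8);
        return (docId << 8) | textPosition;
    }

    static int docId(int posting)    { return posting >>> 8; }
    static int position(int posting) { return posting & 0xFF; }

    public static void main(String[] args) {
        int posting = encode(9000, 17);
        // Absolute docIDs mean any posting can be decoded on its own, so the list
        // can be read new-to-old, and the int can be written atomically in Java.
        System.out.println(docId(posting) + " @ " + position(posting));
    }
}
```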
54. Early query termination
Doc IDs to encode: 5, 15, 9000, 9002, 100000, 100090
Earlybird encoding: 5 15 9000 9002 100000 100090 (read direction: new to old)
E.g. if 3 results are requested: here we can terminate after reading 3 postings
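In code, early termination is then just a bounded scan from the newest end of the list; a minimal illustrative sketch:

```java
// Sketch: with absolute docIDs readable newest-first, a searcher stops as soon
// as it has collected the requested number of results.
public class EarlyTermination {
    public static void main(String[] args) {
        // Postings in read direction (new to old), as in the slide.
        int[] postings = {100090, 100000, 9002, 9000, 15, 5};
        int requested = 3, collected = 0;
        for (int docId : postings) {
            System.out.println("hit: " + docId);
            if (++collected == requested) break;   // terminate after 3 postings
        }
    }
}
```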
55. Inverted index components
• Posting list storage: ?
• Dictionary: parallel arrays, including a pointer to the most recently indexed posting for each term
57. Posting lists storage - Objectives
• Store many single-linked lists of different lengths space-efficiently
• The number of Java objects should be independent of the number of lists or number of items in the lists
• Every item should be a possible entry point into the lists for iterators, i.e. items should not be dependent on other items (e.g. no delta encoding)
• Append and read possible by multiple threads in a lock-free fashion (single append thread, multiple reader threads)
• Traversal in backwards order
60. Memory management
Four int[] pools
• For simplicity we can forget about the blocks for now and think of the pools as continuous, unbounded int[] arrays
• Small total number of Java objects (each 32K block is one object)
62. Adding and appending to a list
Four pools with slice sizes 2^11, 2^7, 2^4, 2^1 (diagram legend: available / allocated / current list)
63. Adding and appending to a list
Store the first two postings in the initial (2^1) slice
64. Adding and appending to a list
When the first slice is full, allocate another one in the second pool
65. Adding and appending to a list
Allocate a slice on each level as the list grows
66. Adding and appending to a list
On the uppermost level one list can own multiple slices
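A heavily simplified sketch of this growth scheme, assuming a single list and pretend-unbounded pools; the real Earlybird allocator (blocks, back-pointers, concurrency) is more involved.

```java
// Simplified sketch of the four-pool slice allocator described above.
// Each list starts in the 2^1 pool and grows through the 2^4, 2^7, and 2^11
// pools; on the last level it may own many slices.
public class SlicePools {
    static final int[] SLICE_SIZES = {1 << 1, 1 << 4, 1 << 7, 1 << 11};

    final int[][] pools = new int[4][1 << 20];   // pretend-unbounded int[] per pool
    final int[] used = new int[4];               // next free offset in each pool

    int level = -1;      // current pool of the (single) demo list
    int offset = -1;     // start of the current slice
    int remaining = 0;   // free ints left in the current slice

    void append(int posting) {
        if (remaining == 0) {                    // current slice full:
            if (level < 3) level++;              // move up one pool (stay on last)
            offset = used[level];
            used[level] += SLICE_SIZES[level];   // allocate a new slice there
            remaining = SLICE_SIZES[level];
            // a real implementation would also write a back-pointer here
        }
        pools[level][offset + SLICE_SIZES[level] - remaining] = posting;
        remaining--;
    }

    public static void main(String[] args) {
        SlicePools list = new SlicePools();
        for (int i = 0; i < 40; i++) list.append(i);  // uses 2 + 16 + 22 ints
        System.out.println("ended in pool " + list.level);
    }
}
```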
67. Posting list format v1
One int (32 bits) per posting: docID (24 bits, max. 16.7M) + textPosition (8 bits, max. 255)
• Tweet text can only have 140 chars
68. Addressing items
• Use 32 bit (int) pointers to address any item in any list unambiguously:
int (32 bits) = poolIndex (2 bits, 0-3) + sliceIndex (19-29 bits, depends on pool) + offset in slice (1-11 bits, depends on pool)
• Nice symmetry: postings and address pointers both fit into a 32 bit int
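A sketch of packing and unpacking such a pointer, using the 2-bit pool index and, for the largest pool, an assumed 11-bit offset; the exact field order is illustrative.

```java
// Sketch of the 32-bit item pointer: 2 bits select the pool; the remaining 30
// bits split into slice index and offset depending on that pool's slice size.
public class ItemPointer {
    static int encode(int poolIndex, int sliceIndex, int offsetBits, int offset) {
        return (poolIndex << 30) | (sliceIndex << offsetBits) | offset;
    }

    static int poolIndex(int pointer) { return pointer >>> 30; }

    static int sliceIndex(int pointer, int offsetBits) {
        return (pointer << 2) >>> (2 + offsetBits);   // drop pool bits, then offset
    }

    static int offset(int pointer, int offsetBits) {
        return pointer & ((1 << offsetBits) - 1);
    }

    public static void main(String[] args) {
        int p = encode(3, 12345, 11, 678);            // pool 3 uses 2^11 slices
        // Prints "3 12345 678"
        System.out.println(poolIndex(p) + " " + sliceIndex(p, 11) + " " + offset(p, 11));
    }
}
```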
70. Linking the slices
Four pools with slice sizes 2^11, 2^7, 2^4, 2^1 (available / allocated / current list)
Dictionary: parallel arrays with a pointer to the last posting indexed for each term
71. Posting list encoding - Summary
• ints can be written atomically in Java
• Backwards traversal easy on absolute docIDs (not deltas)
• Every posting is a possible entry point for a searcher
• Skipping can be done without additional data structures as binary search, though there are better approaches (skip lists)
• Repeating docIDs if a term occurs multiple times in the same document only works for small docs
• Max. segment size: 2^24 = 16.7M tweets
72. New posting list encoding
• Objectives:
  • 32 bit positions and variable-length payloads
  • Store term frequency (TF) instead of repeating docIDs
• Keep:
  • Concurrency model
  • Space-efficiency for short documents
  • Performance
77. New posting list encoding
Two parallel streams:
stream 1: ... | DocID, termFreq | DocID, termFreq | DocID, termFreq
stream 2: Position, Payload | Position, Payload, Position | ... | Position, Payload
78. New posting list encoding
Two parallel streams (as above): DocID/termFreq pairs and Position/Payload entries
• Store TF instead of repeating the same DocID
• Store DocID/TF pairs separately from position/payloads
• Find a way to synchronously decode the two streams without storing a pointer for each posting (expensive)
79. New posting list encoding
Two parallel streams (as above); each posting in the DocID/termFreq stream has a fixed length (32 bits)
• Store TF instead of repeating the same DocID
• Store DocID/TF pairs separately from position/payloads
• Find a way to synchronously decode the two streams without storing a pointer for each posting (expensive)
80. New posting list encoding
• Idea: Use an embedded skip list as periodical "synchronization points"
• Keeps memory overhead for pointers low and improves search performance
81. New posting list encoding
Four pools with slice sizes 2^11, 2^7, 2^4, 2^1 (available / allocated / current list)
82. New posting list encoding
Slice header contains:
• Back-pointer to previous slice (as before)
• Skip list
• Slice id
83. New posting list encoding
v1 layout: int (32 bits) = docID (24 bits, max. 16.7M) + textPosition (8 bits, max. 255)
• Observation: Most tweets don't need all 8 bits for text position
• Idea: Use the position "inlining" approach for short documents, but support Lucene's 32-bit positions and variable length payloads
84. New posting list encoding
int (32 bits) = docID (24 bits, max. 16.7M) + textPosition or termFreq (7 bits, max. 127) + flag (1 bit: 0=textPosition, 1=termFreq)
As a storage optimization, the text position is stored with the docID if:
o termFreq == 1 (term occurs only once in the doc) AND
o textPosition <= 127 AND
o the posting has no payload AND
o the posting is not at a skip point of the docID posting list (see later).
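A sketch of this v2 posting layout with the flag bit; the exact placement of the flag within the low byte is an assumption for illustration, not Twitter's code.

```java
// Sketch of the v2 posting int: 24-bit docID, a 7-bit value, and a 1-bit flag
// saying whether the value is a text position (0) or a term frequency (1).
public class PostingV2 {
    static int encode(int docId, int value, boolean isTermFreq) {
        assert docId < (1 << 24) && value < (1 << 7);
        return (docId << 8) | (value << 1) | (isTermFreq ? 1 : 0);
    }

    static int docId(int posting)          { return posting >>> 8; }
    static int value(int posting)          { return (posting >>> 1) & 0x7F; }
    static boolean isTermFreq(int posting) { return (posting & 1) != 0; }

    public static void main(String[] args) {
        // Single occurrence at position 42: the position is inlined, no extra lookup.
        int single = encode(9000, 42, false);
        // Term occurs 3 times: store TF here; positions/payloads live in the
        // separate stream, found via the skip-list synchronization points.
        int multi = encode(9002, 3, true);
        System.out.println(docId(single) + " pos=" + value(single));
        System.out.println(docId(multi) + " tf=" + value(multi));
    }
}
```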
85. New posting list encoding - Summary
• Support for 32 bit positions and arbitrary length payloads, stored in a separate data structure
• Performance and space consumption very similar to the previous encoding for tweet search
• Skip lists used for speed and as synchronization points
• For short documents positions can still be inlined