Introduction to Solr
NFJS - Boston, September 2011
Presented by Erik Hatcher (erik.firstname.lastname@example.org)
Lucid Imagination - http://www.lucidimagination.com
About me...
• Co-author, "Lucene in Action" (and "Java Development with Ant" / "Ant in Action" once upon a time)
• "Apache guy" - Lucene/Solr committer; member of Lucene PMC, member of Apache Software Foundation
• Co-founder, evangelist, trainer, coder @ Lucid Imagination
About Lucid Imagination...
• Lucid Imagination provides commercial-grade support, training, high-level consulting, and value-added software for Lucene and Solr.
• We make Lucene ‘enterprise-ready’ by offering:
  • Free, certified distributions and downloads.
  • Support, training, and consulting.
  • LucidWorks Enterprise, a commercial search platform built on top of Solr.
Abstract
Apache Solr serves search requests at enterprises and the largest companies around the world. Built on top of the top-notch Apache Lucene library, Solr makes integrating indexing and searching into your applications straightforward. Solr provides faceted navigation, spell checking, highlighting, clustering, grouping, and other search features. Solr also scales query volume with replication, and collection size with distributed capabilities. Solr can index rich documents such as PDF, Word, HTML, and other file types.
What is Solr?
• An open source search server
• Indexes content sources, processes query requests, returns search results
• Uses Lucene as the "engine", but adds full enterprise search server features and capabilities
• A web-based application that processes HTTP requests and returns HTTP responses
• Initially started in 2004 and developed by CNET as an in-house project to add search capability for the company website
• Donated to the ASF in 2006
Who uses Solr?
And many, many, many more...!
Which Solr version?
• There’s more than one answer!
• The current, released, stable version is 3.3 (soon to be 3.4)
• The development release is referred to as “trunk”
  • This is where the new, less tested work goes on
  • Also referred to as 4.0
• LucidWorks Enterprise is built on a trunk snapshot + additional features
What is Lucene?
• An open source search library (not an application)
• 100% Java
• Continuously improved and tuned for more than 10 years
• Compact, portable index representation
• Programmable text analyzers, spell checking and highlighting
• Not, itself, a crawler or a text extraction tool
Inverted Index
• Lucene stores input data in what is known as an inverted index
• In an inverted index, each indexed term points to a list of documents that contain the term
• Similar to the index provided at the end of a book
• In this case, "inverted" simply means that the list of terms points to documents
• It is much faster to find a term in an index than to scan all the documents
Inverted Index Example
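The idea on the previous slide can be sketched in a few lines of plain Java. This is a simplified illustration (whitespace tokenization only, no analyzers or scoring), not Lucene's actual data structures; the document texts are invented for the example:

```java
import java.util.*;

public class InvertedIndexDemo {
    // Build term -> sorted set of ids of documents containing the term
    static Map<String, SortedSet<Integer>> build(String[] docs) {
        Map<String, SortedSet<Integer>> index = new TreeMap<>();
        for (int docId = 0; docId < docs.length; docId++) {
            for (String term : docs[docId].toLowerCase().split("\\s+")) {
                index.computeIfAbsent(term, t -> new TreeSet<>()).add(docId);
            }
        }
        return index;
    }

    public static void main(String[] args) {
        String[] docs = {
            "Solr is a search server",    // doc 0
            "Lucene is a search library", // doc 1
            "Solr uses Lucene"            // doc 2
        };
        Map<String, SortedSet<Integer>> index = build(docs);
        // a term lookup is one map access - no scan over the documents
        System.out.println("lucene -> " + index.get("lucene")); // [1, 2]
        System.out.println("solr -> " + index.get("solr"));     // [0, 2]
    }
}
```

Looking up "lucene" touches only the posting list for that term, which is why an inverted index beats scanning every document.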
Ingestion
• API / Solr XML, JSON, and javabin/SolrJ
• CSV
• Relational databases
• File system
• Web crawl (using Nutch, or others)
• Others - XML feeds (e.g. RSS/Atom), e-mail
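The javabin/SolrJ route looks like this - a sketch assuming the SolrJ 3.x-era API, the solr-solrj jar on the classpath, a Solr server running at localhost:8983, and the example field names (id, title, text) used elsewhere in this deck:

```java
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class IndexExample {
    public static void main(String[] args) throws Exception {
        SolrServer solrServer = new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "solrj1");            // unique key
        doc.addField("title", "SolrJ example");
        doc.addField("text", "Adding a document over javabin with SolrJ.");

        solrServer.add(doc);   // send the document to Solr
        solrServer.commit();   // make it visible to searches
    }
}
```

The javabin format is more compact than posting Solr XML, but the effect is the same as the XML `<add>` shown on the next slide.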
Solr indexing options
Solr XML
POST to /update
<add>
  <doc>
    <field name="id">rawxml1</field>
    <field name="content_type">text/xml</field>
    <field name="category">index example</field>
    <field name="title">Simple Example</field>
    <field name="filename">addExample.xml</field>
    <field name="text">A very simple example of adding a document to the index.</field>
  </doc>
</add>
CSV indexing
• http://localhost:8983/solr/update/csv
• Files can be sent over HTTP:
  • curl http://localhost:8983/solr/update/csv --data-binary @data.csv -H 'Content-type:text/plain; charset=utf-8'
• or streamed from the file system:
  • curl 'http://localhost:8983/solr/update/csv?stream.file=exampledocs/data.csv&stream.contentType=text/plain;charset=utf-8'
Rich documents
• Solr uses Tika for extraction. Tika is a toolkit for detecting and extracting metadata and structured text content from various document formats using existing parser libraries.
• Tika identifies MIME types and then uses the appropriate parser to extract text.
• The ExtractingRequestHandler uses Tika to identify types and extract text, and then indexes the extracted text.
• The ExtractingRequestHandler is sometimes called "Solr Cell", which stands for Content Extraction Library.
• File formats include MS Office, Adobe PDF, XML, HTML, MPEG and many more.
Solr Cell parameters
• The literal parameter is very important.
  • A way to add other fields, not extracted by Tika, to documents.
  • &literal.id=12345
  • &literal.category=sports
• Using curl to index a file on the file system:
  • curl 'http://localhost:8983/solr/update/extract?literal.id=doc1&commit=true' -F email@example.com
• Streaming a file from the file system:
  • curl "http://localhost:8983/solr/update/extract?stream.file=/some/path/news.doc&stream.contentType=application/msword&literal.id=12345"
Streaming remote docs
• Streaming a file from a URL:
  • curl 'http://localhost:8983/solr/update/extract?literal.id=123&stream.url=http://www.solr.com/content/file.pdf' -H 'Content-type:application/pdf'
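The extraction handler can also be driven from SolrJ. A sketch assuming the 3.x-era SolrJ API, a running Solr at localhost:8983, and the same example path and literal.id used in the curl commands above:

```java
import java.io.File;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class ExtractExample {
    public static void main(String[] args) throws Exception {
        SolrServer solrServer = new CommonsHttpSolrServer("http://localhost:8983/solr");

        ContentStreamUpdateRequest req =
            new ContentStreamUpdateRequest("/update/extract");
        req.addFile(new File("/some/path/news.doc")); // document for Tika to parse
        req.setParam("literal.id", "12345");          // supply the unique key
        req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);

        solrServer.request(req); // upload, extract, index, and commit
    }
}
```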
DataImportHandler
• An "in-process" module that can be used to index data directly from relational databases and other data sources
• Configuration driven
• A tool that can aggregate data from multiple database tables, or even multiple data sources, to be indexed as a single Solr document
• Provides powerful and customizable data transformation tools
• Can do full import or delta import
• Pluggable to allow indexing of any type of data source
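A minimal sketch of the configuration that drives it (a data-config.xml with a hypothetical JDBC URL, table, and column names - adapt to your schema):

```xml
<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/products"
              user="db_user" password="db_pass"/>
  <document>
    <!-- each row returned by the query becomes one Solr document -->
    <entity name="item" query="SELECT id, name, price FROM item">
      <field column="id" name="id"/>
      <field column="name" name="name"/>
      <field column="price" name="price"/>
    </entity>
  </document>
</dataConfig>
```

Nested `<entity>` elements are how rows from multiple tables get aggregated into a single document.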
Other commands
• <commit/> and <optimize/>
• <delete>...</delete>
  • <id>Q-36</id>
  • <query>category:electronics</query>
• To update a document, simply add a document with the same unique key
Configuring Solr
• schema.xml
  • defines field types, fields, and the unique key
• solrconfig.xml
  • Lucene settings
  • request handler, component, and plugin definitions and customizations
Searching Basics
• http://localhost:8983/solr/select?q=*:*
  • q - main query
  • rows - maximum number of "hits" to return
  • start - zero-based hit starting point
  • fl - comma-separated field list
    • * for all stored fields, score for computed Lucene score
Other Common Search Parameters
• sort - specify sort criteria either by field(s) or function(s), in ascending or descending order
• fq - filter queries, multiple values supported
• wt - writer type - format of the Solr response
• debugQuery - adds debugging info to the response
Filtering results
• Use fq to filter results in addition to the main query constraints
• fq results are independently cached in Solr's filterCache
• filter queries do not contribute to ranking scores
• Commonly used for filtering on facets
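These parameters map directly onto SolrJ. A sketch assuming the 3.x-era API, a running Solr at localhost:8983, and hypothetical field names (name, price, inStock; category:electronics is the same example used with delete-by-query earlier):

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FilterQueryExample {
    public static void main(String[] args) throws Exception {
        SolrServer solrServer = new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrQuery query = new SolrQuery("ipod");      // q - scored main query
        query.addFilterQuery("category:electronics"); // fq - cached, not scored
        query.addFilterQuery("inStock:true");         // multiple fq values supported
        query.setStart(0);                            // zero-based offset
        query.setRows(10);                            // page size
        query.setFields("name", "price", "score");    // fl

        QueryResponse response = solrServer.query(query);
        System.out.println("hits: " + response.getResults().getNumFound());
    }
}
```

Because each fq is cached separately in the filterCache, repeating a facet filter across queries is cheap.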
Integration
• It's just HTTP
  • and CSV, JSON, XML, etc. on the requests and responses
• Any language or environment can work with Solr easily
• Many libraries/layers exist on top
Ruby indexing example
SolrJ searching example

SolrServer solrServer = new CommonsHttpSolrServer(
    "http://localhost:8983/solr");
SolrQuery query = new SolrQuery();
query.setQuery(userQuery);
query.setFacet(true);
query.setFacetMinCount(1);
query.addFacetField("category");
QueryResponse queryResponse = solrServer.query(query);
Devilish Details
• analysis: tokenization and token filtering
• query parsing
• relevancy tuning
• performance and scalability
Data.gov CSV catalog

Header row (field names):
URL,Title,Agency,Subagency,Category,Date Released,Date Updated,Time Period,Frequency,Description,Data.gov Data Category Type,Specialized Data Category Designation,Keywords,Citation,Agency Program Page,Agency Data Series Page,Unit of Analysis,Granularity,Geographic Coverage,Collection Mode,Data Collection Instrument,Data Dictionary/Variable List,Applicable Agency Information Quality Guideline Designation,Data Quality Certification,Privacy and Confidentiality,Technical Documentation,Additional Metadata,FGDC Compliance (Geospatial Only),Statistical Methodology,Sampling,Estimation,Weighting,Disclosure Avoidance,Questionnaire Design,Series Breaks,Non-response Adjustment,Seasonal Adjustment,Statistical Characteristics,Feeds Access Point,Feeds File Size,XML Access Point,XML File Size,CSV/TXT Access Point,CSV/TXT File Size,XLS Access Point,XLS File Size,KML/KMZ Access Point,KML File Size,ESRI Access Point,ESRI File Size,Map Access Point,Data Extraction Access Point,Widget Access Point

Example record (first fields):
"http://www.data.gov/details/4","Next Generation Radar (NEXRAD) Locations","Department of Commerce","National Oceanic and Atmospheric Administration","Geography and Environment","1991","Irregular as needed","1991 to present","Between 4 and 10 minutes","This geospatial rendering of weather radar sites gives access to an historical archive of Terminal Doppler Weather Radar data and is used primarily for research purposes. The archived data includes base data and derived products of the National Weather Service (NWS) Weather Surveillance Radar 88 Doppler (WSR-88D) next generation (NEXRAD) weather radar. Weather radar detects the three meteorological base data quantities: reflectivity, mean radial velocity, and spectrum width. From these quantities, computer processing generates numerous meteorological analysis products for forecasts, archiving and dissemination. There are 159 operational NEXRAD radar systems deployed throughout the United States and at selected overseas locations. At the Radar Operations Center (ROC) in Norman OK, personnel from the NWS, Air Force, Navy, and FAA use this distributed weather radar system to collect the data needed to warn of impending severe weather and possible flash floods; support air traffic safety and assist in the management of air traffic flow control; facilitate resource protection at military bases; and optimize the management of water, agriculture, forest, and snow removal. This data set is jointly owned by the National Oceanic and Atmospheric Administration, Federal Aviation Administration, and Department of Defense.","Raw Data Catalog",...