PostgreSQL and Sphinx pgcon 2013


Talk given at PgCon 2013 by Emanuel Calvo



  1. PostgreSQL and Sphinx
     PgCon 2013, Emanuel Calvo
  2. About me:
     ● Operational DBA at PalominoDB.
       ○ MySQL, MariaDB and PostgreSQL databases.
     ● Spanish Press Contact.
     ● Check out my LinkedIn profile.
  3. Credits
     ● Thanks to:
       ○ Andrew Atanasoff
       ○ Vlad Fedorkov
       ○ All the PalominoDB people that helped out!
  4. Palomino - Service Offerings
     ● Monthly Support:
       ○ Being renamed to Palomino DBA as a Service.
       ○ Eliminating 10-hour monthly clients.
       ○ Discounts are based on spend per month (0-80, 81-160, 161+ hours).
       ○ We will be penalizing excessive paging financially.
       ○ Quarterly onsite day from a Palomino executive, DBA and PM for clients using 80 hours or more per month.
       ○ Clients using 80-160 hours get 2 New Relic licenses; 160 hours plus get 4.
     ● Adding annual support contracts:
       ○ Consultation as needed.
       ○ Emergency pages allowed.
       ○ Small bucket of DBA hours (8, 16 or 24).
     For more information, please go to: Spreadsheet
  5. Agenda
     ● What are we looking for?
     ● FTS native Postgres support
     ● Sphinx
  6. News
  7. Goals of FTS
     ● Add complex searches using synonyms, specific operators or spellings.
       ○ Improving performance while sacrificing accuracy.
     ● Reduce IO and CPU utilization.
       ○ Text consumes a lot of IO for reads and CPU for operations.
     ● FTS can be handled:
       ○ Externally, using tools like Sphinx or Solr.
       ○ Internally, via native FTS support.
     ● Order words by relevance.
     ● Language sensitive.
     ● Faster than regular expressions or LIKE operands.
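As a minimal sketch of what the native side of these goals looks like in Postgres (the table, column and search terms here are hypothetical, not from the talk):

```sql
-- Hypothetical table; to_tsvector/to_tsquery are the native FTS primitives.
CREATE TABLE docs (id serial PRIMARY KEY, body text);
CREATE INDEX docs_fts_idx ON docs
    USING gin (to_tsvector('english', body));

-- Language-aware match, ordered by relevance.
SELECT id,
       ts_rank(to_tsvector('english', body),
               to_tsquery('english', 'search & engine')) AS rank
FROM docs
WHERE to_tsvector('english', body) @@ to_tsquery('english', 'search & engine')
ORDER BY rank DESC;
```

The GIN index is what makes this faster than LIKE or regular expressions, at the cost of some accuracy (stemming folds word forms together).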
  8. What is the purpose of the talk?
     ● The only thing we want to show is that there are complementary tools for specific needs in full text searching.
     ● Native FTS is a great and powerful feature. Also, there was great work presented at the last PgEU 2012 about GIN index and FTS improvements.
     ● The flexibility and power provided by Postgres in terms of SQL is well known. For some projects (especially science and GIS), this combination is critical.
     ● You can also use both, because they are not mutually exclusive.
  9. So... why Sphinx?
     (if we already have native support for FTS)
  10. Cases of use
     ● Generally there is no need to integrate FTS searches into transactions.
     ● Isolating text searches into boxes outside the main database is optimal if we use it for general purposes.
     ● If you want to mix FTS searches with other data, you may use the native support.
     ● If you massively query for text searches, you may want to use a dedicated text search engine, which is easier to scale out and split data.
     ● Native support is like an RT index. Sphinx allows you to have async indexing, which is faster for writes.
  11. Meh, still want native support!
     ● Store the text externally, index on the database.
       ○ Requires superuser.
     ● Store the whole document in the database, in an object; index on Sphinx/Solr.
     ● Don't index everything.
       ○ Solr/Sphinx are not databases; index only what you want to search. Smaller indexes are faster and easier to maintain.
       ○ But you can have big indexes; just be aware of the --enable-id64 option if you compile.
     ● ts_stats
       ○ Can help you check your FTS configuration.
     ● You can parse URLs, mails and whatever else using the ts_debug function, for non-intensive operations.
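For reference, the two helpers mentioned above look like this (note the catalog function is actually spelled ts_stat, singular; the docs table is hypothetical):

```sql
-- Inspect how the parser breaks a string into tokens (word, url, email...).
SELECT alias, token
FROM ts_debug('english', 'Mail me at user@example.com or visit example.com');

-- ts_stat reports the most frequent lexemes in a tsvector expression,
-- useful for sanity-checking an FTS configuration.
SELECT word, ndoc, nentry
FROM ts_stat('SELECT to_tsvector(''english'', body) FROM docs')
ORDER BY nentry DESC
LIMIT 10;
```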
  12. If not, let's Sphinx!
  13. Sphinx features
     ● Standalone daemon written in C++.
     ● Highly scalable.
       ○ Known installations consist of 50+ boxes, 20+ billions of documents.
     ● Extended search for text and non-full-text data.
       ○ Optimized for faceted search.
       ○ Snippets generation based on language settings.
     ● Very fast.
       ○ Keeps attributes in memory.
         ■ See Percona benchmarks for details.
     ● Receives data from PostgreSQL.
       ○ Dedicated PostgreSQL datasource type.
       ○ Able to feed incremental indexes.
  14. Key features - Sphinx
     ● Scalability & failover searches
     ● Extended FT language
     ● Faceted search support
     ● GEO-search support
     ● Integration and pluggable architecture
     ● Dedicated PostgreSQL source, UDF support
     ● Morphology & stemming
     ● Both batch & real-time indexing are available
     ● Parallel snippets generation
  15. What's new in Sphinx (2.1.1-beta)
     ● Added AOT (new morphology library, lemmatizer) support.
       ○ Russian only for now; English coming soon; small 10-20% indexing impact; it's all about search quality (much, much better "stemming").
     ● Added JSON support.
       ○ Limited support (limited subset of JSON) for now; JSON sits in a column; you're able to do things like WHERE jsoncol.key=123 or ORDER BY or GROUP BY.
     ● Added subselect syntax that reorders result sets: SELECT * FROM (SELECT ... ORDER BY cond1 LIMIT X) ORDER BY cond2 LIMIT Y.
     ● Added bigram indexing, and quicker phrase searching with bigrams (bigram_index, bigram_freq_words directives).
       ○ Improves the worst cases for social mining.
     ● Added HA support (ha_strategy, agent_mirror directives).
     ● Added a few new geofunctions (POLY2D, GEOPOLY2D, CONTAINS).
     ● Added GROUP_CONCAT().
     ● Added OPTIMIZE INDEX rtindex, rt_merge_iops, rt_merge_maxiosize directives.
     ● Added TRUNCATE RTINDEX statement.
  16. How to feed Sphinx?
     ● MySQL
     ● Postgres
     ● MSSQL
     ● ODBC
     ● XML pipe
  17. Configuration
     ● Where to look for data?
     ● How to process it?
     ● Where to store the index?
  18. Sections
     ● source
       ○ Where do I get the data?
     ● index
       ○ How do I process the data?
     ● indexer
       ○ Resources I'll use to index.
       ○ mem_limit, max_iops
     ● searchd
       ○ How do I provide the data?
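A minimal sphinx.conf sketch tying the four sections together (host, credentials, table and paths are illustrative placeholders, not from the talk):

```
source src1
{
    type      = pgsql
    sql_host  = localhost
    sql_user  = postgres
    sql_db    = test
    sql_query = SELECT id, title, content FROM documents
}

index test1
{
    source = src1
    path   = /usr/local/sphinx/data/test1
}

indexer
{
    mem_limit = 128M
}

searchd
{
    listen   = 9312           # native API
    listen   = 9306:mysql41   # SphinxQL over the MySQL protocol
    log      = /usr/local/sphinx/log/searchd.log
    pid_file = /usr/local/sphinx/log/searchd.pid
}
```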
  19. Full text extensions
     • And, Or: hello | world, hello & world
     • Not: hello -world
     • Per-field search: @title hello @body world
     • Field combination: @(title, body) hello world
     • Search within first N: @body[50] hello
     • Phrase search: "hello world"
     • Per-field weights
     • Proximity search: "hello world"~10
     • Distance support: hello NEAR/10 world
     • Quorum matching: "the world is a wonderful place"/3
     • Exact form modifier: "raining =cats and =dogs"
     • Strict order
     • Sentence / Zone / Paragraph
     • Custom documents weighting & ranking
  20. Connection
     ● API to searchd
       ○ PHP, Ruby, Python, etc.
     ● SphinxSE
       ○ Storage engine for MySQL.
     ● SphinxQL
       ○ Using the MySQL client.
       ○ Using a client library.
       ○ Requires a different port from the search daemon.

     mysql> SELECT * FROM sphinx_index
         -> WHERE MATCH('I love Sphinx') LIMIT 10;
     ...
     10 rows in set (0.05 sec)
  21. SphinxQL
     ● WITHIN GROUP ORDER BY
     ● OPTION support for fine tuning
     ● Weights, matches and query time control
     ● SHOW META query information
     ● CALL SNIPPETS lets you create snippets
     ● CALL KEYWORDS for statistics
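A quick sketch of those statements over the SphinxQL port (the index name test1 and the sample text are hypothetical):

```
mysql> SELECT * FROM test1 WHERE MATCH('sphinx');
mysql> SHOW META;             -- total matches, per-keyword hits, query time

mysql> CALL SNIPPETS('I love Sphinx search', 'test1', 'sphinx');
mysql> CALL KEYWORDS('hello world', 'test1');
```

SHOW META must run in the same session, right after the SELECT it describes.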
  22. Extended Syntax
     mysql> SELECT …, YEAR(ts) AS yr
         -> FROM sphinx_index
         -> WHERE MATCH('I love Sphinx')
         -> GROUP BY yr
         -> WITHIN GROUP ORDER BY rating DESC
         -> ORDER BY yr DESC
         -> LIMIT 5
         -> OPTION field_weights=(title=100, content=1);
     +---------+--------+------------+------------+------+----------+--------+
     | id      | weight | channel_id | ts         | yr   | @groupby | @count |
     +---------+--------+------------+------------+------+----------+--------+
     | 7637682 | 101652 | 358842     | 1112905663 | 2005 | 2005     |     14 |
     | 6598265 | 101612 | 454928     | 1102858275 | 2004 | 2004     |     27 |
     | 7139960 |   1642 | 403287     | 1070220903 | 2003 | 2003     |      8 |
     | 5340114 |   1612 | 537694     | 1020213442 | 2002 | 2002     |      1 |
     | 5744405 |   1588 | 507895     |  995415111 | 2001 | 2001     |      1 |
     +---------+--------+------------+------------+------+----------+--------+
  23. Other features
     ● GEO distance
     ● Range
       ○ Price ranges (items, offers)
       ○ Date range (blog posts and news articles)
       ○ Ratings, review points
       ○ INTERVAL(field, x0, x1, …, xN)

     SELECT
         INTERVAL(item_price, 0, 20, 50, 90) AS range,
         @count
     FROM my_sphinx_products
     GROUP BY range
     ORDER BY range ASC;
  24. Misspell service
     ● Provides correct search phrase
       ○ "Did you mean" service
     ● Allows replacing the user's search on the fly
       ○ If we're sure it's a typo
         ■ "ophone", "uphone", etc.
       ○ Saves time and makes the website look smart
     ● Based on your actual database
       ○ Effective if you DO have correct words in the index
  25. Autocompletion service
     ● Suggest search queries as the user types
       ○ Show most popular queries
       ○ Promote searches that lead to desired pages
       ○ Might include misspelling correction
  26. Related search
     ● Improving visitor experience
       ○ Providing easier access to useful pages
       ○ Keeping the customer on the website
       ○ Increasing sales and the server's load average
     ● Based on document similarity
       ○ Different for shopping items and texts
       ○ Ends up in data mining
  27. Sphinx - Postgres compilation
     [root@ip-10-55-83-238 ~]# yum install gcc-c++.noarch
     [root@ip-10-55-83-238 sphinx-2.0.6-release]# ./configure --prefix=/usr/local/sphinx --without-mysql --with-pgsql --with-pgsql-includes=/usr/local/pgsql/include/ --with-pgsql-libs=/usr/local/pgsql/lib --enable-id64
     [root@ip-10-48-139-161 sphinx-2.1.1-beta]# make install -j2
     [root@ip-10-48-139-161 sphinx]# cat /etc/
     [root@ip-10-48-139-161 sphinx]# ldconfig
     [root@ip-10-55-83-238 sphinx]# /opt/pg/bin/psql -Upostgres -hmaster test < etc/example-pg.sql
     NOTE: works for Postgres-XC also
  28. Sphinx - Basic index host
     [Diagram: the application sends queries to the Sphinx daemon/API (listen = 9312, listen = 9306:mysql41), which returns results from its indexes; the indexer builds those indexes from Postgres, optionally pulling additional data.]
  29. Sphinx - Daemon
     ● For speed
       ○ To offload the main database
       ○ To make particular queries faster
       ○ Actually, most search-related ones
     ● For failover
       ○ It happens to the best of us!
     ● For extended functionality
       ○ Morphology & stemming
       ○ Autocomplete, "do you mean" and "similar items"
  30. Sphinx - Daemon
     [root@ip-10-48-139-161 sphinx]# bin/searchd --config etc/postgres.conf
     Sphinx 2.1.1-id64-beta (rel21-r3701)
     Copyright (c) 2001-2013, Andrew Aksyonoff
     Copyright (c) 2008-2013, Sphinx Technologies Inc (...)

     using config file etc/postgres.conf...
     listening on all interfaces, port=9312
     listening on all interfaces, port=9306
     precaching index test1
     precaching index src1_base
     precaching index src1_incr
     precached 3 indexes in 0.002 sec
  31. Sphinx - Indexer
     [root@ip-10-48-139-161 sphinx]# bin/indexer --all --config etc/postgres.conf
     using config file etc/postgres.conf...
     indexing index test1...
     collected 4 docs, 0.0 MB
     sorted 0.0 Mhits, 100.0% done
     total 4 docs, 193 bytes
     total 0.012 sec, 15186 bytes/sec, 314.73 docs/sec
     indexing index src1_base...
     collected 4 docs, 0.0 MB
     sorted 0.0 Mhits, 100.0% done
     total 4 docs, 193 bytes
     total 0.010 sec, 17975 bytes/sec, 372.54 docs/sec
     indexing index src1_incr...
     collected 4 docs, 0.0 MB
     sorted 0.0 Mhits, 100.0% done
     total 4 docs, 193 bytes
     total 0.033 sec, 5708 bytes/sec, 118.31 docs/sec
     total 9 reads, 0.000 sec, 0.1 kb/call avg, 0.0 msec/call avg
     total 30 writes, 0.000 sec, 0.1 kb/call avg, 0.0 msec/call avg
  32. Data source flow (from DBs)
     - Connection to the database is established;
     - pre-query is executed to perform any necessary initial setup, such as setting per-connection encoding with MySQL;
     - main query is executed and the rows it returns are indexed;
     - post-query is executed to perform any necessary cleanup;
     - connection to the database is closed;
     - indexer does the sorting phase (to be pedantic, index-type specific post-processing);
     - connection to the database is established again;
     - post-index query is executed to perform any necessary final cleanup;
     - connection to the database is closed again.
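In sphinx.conf terms, the steps above map onto these source directives (the queries themselves are illustrative placeholders):

```
source src1
{
    type = pgsql
    # ... sql_host / sql_user / sql_db ...

    sql_query_pre        = SET search_path TO public
    sql_query            = SELECT id, title, content FROM documents
    sql_query_post       = SELECT 1
    # runs after indexing completes; $maxid expands to the max document id seen
    sql_query_post_index = UPDATE sphinx_counter SET max_id = $maxid
}
```

Note that sql_query_post runs right after the main query, before sorting, while sql_query_post_index runs only once the index is successfully built.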
  33. Range query for the main query
     sql_query_range = SELECT MIN(id), MAX(id) FROM documents
     sql_range_step  = 1000
     sql_query       = SELECT * FROM documents WHERE id >= $start AND id <= $end

     ● Avoids huge result set transfers across the network, or undesirable seqscans on the database.
     ● You can avoid the MIN and MAX aggregate functions by using a table with those values, updated by a trigger or by an async process.
  34. Delta indexing
     ● Don't update the whole data, just the new records (main+delta).
       ○ You need to create your own "checkpoint" table. Update it once your indexing is finished.
     ● Consider merging big indexes instead.
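A sketch of the main+delta scheme with a checkpoint table (sph_counter, table and column names are the conventional example, adapt to taste):

```
source src_main
{
    type = pgsql
    # ... connection settings ...

    # advance the checkpoint, then index everything up to it
    sql_query_pre = UPDATE sph_counter SET max_doc_id = \
        (SELECT MAX(id) FROM documents) WHERE counter_id = 1
    sql_query     = SELECT id, title, content FROM documents \
        WHERE id <= (SELECT max_doc_id FROM sph_counter WHERE counter_id = 1)
}

source src_delta : src_main
{
    # override the inherited pre-query so the checkpoint is NOT advanced here
    sql_query_pre = SELECT 1
    sql_query     = SELECT id, title, content FROM documents \
        WHERE id > (SELECT max_doc_id FROM sph_counter WHERE counter_id = 1)
}
```

Reindex only the delta frequently; periodically fold it back with `indexer --merge main delta --rotate`.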
  35. Distributed searches
     ● The key idea is to horizontally partition (HP) the searched data across search nodes and then process it in parallel.
     ● The partitioning should be done manually:
       ○ set up several instances of the Sphinx programs (indexer and searchd) on different servers;
       ○ make the instances index (and search) different parts of the data;
       ○ configure a special distributed index on some of the searchd instances;
       ○ and query this index.
     ● Queries are managed by agents (pointers to networked indexes).
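The steps above boil down to a distributed index definition like this (host and index names are hypothetical):

```
# dist1 fans a query out to one local chunk and one remote chunk,
# searches them in parallel, and merges the results.
index dist1
{
    type  = distributed
    local = chunk1                 # served by this searchd
    agent = box2:9312:chunk2       # served by a remote searchd
}
```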
  36. Distributed searches
     [Diagram: the application queries several searchd agents, each serving a different index part built from Postgres.]
  37. Agent mirrors
     ● Allow failover searches.

     # sharding index over 4 servers total
     # in just 2 chunks but with 2 failover mirrors for each chunk
     # box1, box2 carry chunk1 as local
     # box3, box4 carry chunk2 as local

     # config on box1, box2
     agent = box3:9312|box4:9312:chunk2

     # config on box3, box4
     agent = box1:9312|box2:9312:chunk1
  38. Mirror example
     index src1_base : test1
     {
         source = src1_base
         path   = /usr/local/sphinx/data/scr1_base
         type   = distributed
         local  = test1

         # remote agent
         # multiple remote agents may be specified
         # syntax for TCP connections is hostname:port:index1[,index2[,...]]
         # syntax for local UNIX connections is /path/to/socket:index1[,index2[,...]]
         agent = sphinx2:9312:test1
         agent = sphinx1:9312:test1
     }
  39. Agent mirrors
     [Diagram: three searchd agent mirrors, each serving a copy of the index.]
     ha_strategy = random, roundrobin and adaptive
  40. Foreign Data Wrapper?
     ● A data wrapper would allow retrieving data directly from any Sphinx service, from within the database.
     ● To date no project has started, but hopefully soon.
  41. Thanks!
     Contact us! We are hiring *remotely* o/ !