This document discusses using Thompson sampling for search query recommendation. It introduces the multi-armed bandit problem and shows how Thompson sampling can be applied to solve it. The key aspects covered are:
1) Thompson sampling frames query recommendation as a multi-armed bandit problem, balancing exploration of new queries against exploitation of known popular ones.
2) It models each query's success probability with a Beta distribution and, at each step, samples from these distributions to decide which query to recommend next.
3) An experiment on real search-log data evaluated Thompson sampling for query recommendation across different numbers of candidate queries, showing that it quickly identifies the most popular ones.
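The loop described above can be sketched as follows. This is a minimal illustration, not the document's actual implementation: the candidate queries, their true click rates, and the `ThompsonSampler` class are hypothetical, and success is simulated rather than read from real search logs.

```python
import random

class ThompsonSampler:
    """Thompson sampling over candidate queries, one Beta posterior per query."""

    def __init__(self, queries):
        # Start every query with a uniform Beta(1, 1) prior.
        self.params = {q: [1, 1] for q in queries}

    def select(self):
        # Sample from each query's Beta posterior and recommend the
        # query whose sampled success probability is highest.
        return max(self.params, key=lambda q: random.betavariate(*self.params[q]))

    def update(self, query, success):
        # A success increments alpha; a failure increments beta.
        self.params[query][0 if success else 1] += 1

# Hypothetical simulation: the true click rates are unknown to the sampler.
random.seed(0)
true_rates = {"weather": 0.6, "news": 0.3, "maps": 0.1}
sampler = ThompsonSampler(true_rates)
for _ in range(2000):
    q = sampler.select()
    sampler.update(q, random.random() < true_rates[q])

# alpha + beta per query counts how often each query was recommended.
pulls = {q: sum(ab) for q, ab in sampler.params.items()}
```

Because each query is chosen in proportion to the posterior probability that it is the best one, the recommendation counts concentrate on the most popular query as feedback accumulates, which mirrors the experiment's finding.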