1) The document discusses methods for setting up similarity-driven virtual screening using various molecular similarity metrics and descriptor spaces. 2) It finds that traditional rules of thumb, such as only accepting hits with a Tanimoto similarity above 0.85, can be inaccurate, and it recommends calibrating similarity cutoffs specifically for each target, query, and chemical space. 3) Tversky similarity with an alpha of 0.7-0.9, which penalizes candidate molecules lacking features of the query more heavily than extra features in the candidate, is found to often give excellent results. The overall recommendation is to test multiple metric and descriptor combinations and calibrate cutoffs for each individual virtual screening project.
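As a minimal sketch of the two metrics mentioned above, the following computes Tanimoto and Tversky similarity on molecules represented as plain feature sets. This set-based formulation is an illustrative assumption; production screening pipelines typically operate on bit-vector fingerprints (e.g. via a cheminformatics toolkit such as RDKit), but the formulas are the same.

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity: |A ∩ B| / |A ∪ B|."""
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter)


def tversky(query: set, candidate: set, alpha: float = 0.8, beta: float = 0.2) -> float:
    """Tversky similarity of a candidate to a query.

    alpha weights features present in the query but missing from the
    candidate; beta weights extra features of the candidate. With
    alpha in the 0.7-0.9 range (and beta correspondingly small),
    candidates that lack query features are penalized most heavily.
    """
    inter = len(query & candidate)
    only_query = len(query - candidate)      # query features the candidate misses
    only_candidate = len(candidate - query)  # extra candidate features
    return inter / (inter + alpha * only_query + beta * only_candidate)
```

Note that with alpha = beta = 1 the Tversky index reduces exactly to Tanimoto, which makes it a convenient single knob to calibrate per project.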