1. Lessons from ChEMBL
Willem P. van Hoorn, Senior Solutions Consultant
Willem.vanhoorn@accelrys.com
2. Contents
"Those who cannot remember the past are condemned to repeat it."
3. 'Nasties': things you would not like to see in your hits
- Specifically: reactive/labile chemical groups
  - Is the compound still on the plate?
  - Is the activity due to (selective) non-covalent binding?
- Some overlap with frequent hitters/aggregators: peroxides, aldehydes, etc.
- Not 'structural alerts':
  - off-target toxicity
  - compounds that become toxic after metabolic activation
  - hERG binders, anilines, etc.
4. This is not a new concept
- If you are a chemist you know many of these; if you have worked in pharma you know more of them.
- Pharma companies probably all have their own in-house lists of 'forbidden/risky/ugly' structures.
- There are some publications, but no definitive public list.
- The result: reinvention of the wheel and wasted effort.
5. ChEMBL as a teacher
- "the most comprehensive ever seen in a public database" (Wikipedia)
- "…cover a significant fraction of the SAR and discovery of modern drugs" (ChEMBL website)
- This must be a good source to learn what works: experienced scientists who cared enough about their compounds to measure the activity and submit the results to peer-reviewed journals.
6. ChEMBL as a teacher
To learn, we also need to know what not to do. Compound vendor catalogues:
- fewer constraints on reactivity/stability
- a drive for diversity
- more customers than just pharma
They should therefore be enriched in nasties compared to ChEMBL.
7. Lesson 1: what do reagents have in common that ChEMBL compounds don't?
- ChEMBL release 7: dump all compounds, keep the largest fragment → 597,255 unique canonical SMILES
- Vendor reagents (Pipeline Pilot example catalogues: Maybridge + Asinex) → 186,967 unique compounds
- Build a Bayesian 'reagent-like' model: vendor compounds as "good" vs. ChEMBL as "baseline"
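The "good vs. baseline" Bayesian model above can be sketched in plain Python as a Laplacian-corrected naive Bayes scorer over fingerprint features, in the spirit of Pipeline Pilot's Bayesian learner. This is a minimal sketch, not the slide's actual implementation: the function names are illustrative, and the fingerprints are assumed to be precomputed sets of feature IDs (e.g. ECFP bits).

```python
import math
from collections import Counter

def laplacian_weights(good_fps, baseline_fps):
    """Per-feature Bayesian weights with Laplacian correction.

    good_fps / baseline_fps: lists of feature-ID sets (e.g. ECFP bits);
    here 'good' = vendor reagents, baseline = ChEMBL compounds.
    The correction shrinks rarely seen features towards the prior
    so that a feature seen once cannot dominate the score.
    """
    n_good, n_total = len(good_fps), len(good_fps) + len(baseline_fps)
    p_good = n_good / n_total                    # prior P(good)
    good = Counter(f for fp in good_fps for f in fp)
    total = Counter(f for fp in good_fps for f in fp)
    total.update(f for fp in baseline_fps for f in fp)
    return {f: math.log((good.get(f, 0) + 1) / ((n + 1 / p_good) * p_good))
            for f, n in total.items()}

def reagent_score(fp, weights):
    """Sum the weights of the features present in one compound."""
    return sum(weights.get(f, 0.0) for f in fp)
```

A positive weight marks a feature over-represented among reagents; summing weights per compound gives the "reagent-like" score ranked in the following slides.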
8. Reagent-like model
- Training/test split: random 80% / 20%
- Excellent separation between ChEMBL and reagent compounds
- [Plots: leave-one-out enrichment; test-set enrichment]
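"Enrichment" here can be read as the hit rate among the top-scoring fraction divided by the overall hit rate. A minimal sketch of that calculation (the function name and the 1% default cutoff are illustrative assumptions):

```python
def enrichment_factor(scored, top_frac=0.01):
    """scored: list of (model_score, is_reagent) pairs.

    Rank by score, then compare the reagent rate in the top
    fraction with the reagent rate across the whole set.
    """
    ranked = sorted(scored, key=lambda t: t[0], reverse=True)
    n_top = max(1, int(len(ranked) * top_frac))
    top_rate = sum(y for _, y in ranked[:n_top]) / n_top
    base_rate = sum(y for _, y in ranked) / len(ranked)
    return top_rate / base_rate
```

An enrichment factor well above 1 in the held-out 20% is what "excellent separation" amounts to in practice.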
10. A look at high- and low-scoring compounds
- Colour atoms by their contribution to the Bayesian score
- Red: high contribution (reagent-like); blue: low contribution (not reagent-like)
- Colour gradient shown over a set of molecules
14. High- and low-scoring reagent features
- High-scoring features: e.g. one feature seen 1,029 times, of which 1,024 in the reagent set
- Many variations of the peptide bond and other polypeptide features: 635 out of 639 occurrences in the reagent set
15. Conclusions from lesson 1
- Learning the difference was too easy: small organics vs. large polypeptides
- Both sets contain many series; the model learns the common core instead of the (nasty) decorations
- Diversity metric, compounds per Murcko framework: ChEMBL ~6.7, reagents ~9.0
- Number of frameworks / frameworks in common: ~81k / ~6k
- I need to resit this class
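The compounds-per-framework metric is straightforward once each compound has been reduced to its Murcko scaffold (e.g. with a cheminformatics toolkit such as RDKit's MurckoScaffold module; that step is assumed, not shown). A sketch over precomputed scaffold strings:

```python
def framework_stats(scaffolds_a, scaffolds_b):
    """Each input: one Murcko-scaffold string per compound.

    Returns (compounds per framework in set A, same for set B,
    number of frameworks the two sets have in common).
    """
    frames_a, frames_b = set(scaffolds_a), set(scaffolds_b)
    return (len(scaffolds_a) / len(frames_a),
            len(scaffolds_b) / len(frames_b),
            len(frames_a & frames_b))
```

A higher compounds-per-framework ratio means more analogues per core, which is exactly the series redundancy that lets the model learn templates instead of decorations.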
16. Lesson 2: rebalancing the training set
- Restrict to organic small molecules: AlogP < 6, MW < 600, organic-compound filter
- Bayesian model on ECFP_2 (smaller features than ECFP_6, less likely to capture a whole core)
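The restriction step might look like the following filter over precomputed descriptors. This is a sketch under stated assumptions: the property-dictionary shape and the element list are mine, and AlogP/MW values would come from a cheminformatics toolkit rather than being computed here.

```python
# Elements allowed by a simple 'organic compound' filter (an assumption,
# not the slide's exact definition).
ORGANIC_ELEMENTS = {"C", "H", "N", "O", "S", "P", "F", "Cl", "Br", "I"}

def is_small_organic(props, max_alogp=6.0, max_mw=600.0):
    """props: dict with precomputed 'AlogP', 'MW' and an 'elements' set."""
    organic = "C" in props["elements"] and props["elements"] <= ORGANIC_ELEMENTS
    return organic and props["AlogP"] < max_alogp and props["MW"] < max_mw
```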
21. Conclusions from lesson 2
- Less learning of "series by template", but it still happens: you don't need to capture a whole ring to capture a sugar, steroid, etc.
- Some of the expected nasty features were found, but many were not
- A better training set is needed: series should be represented similarly in both the clean and nasty sets, so that the difference is not the template
- Many ChEMBL compounds are odd
- I have still not learned the lesson
22. Lesson 3: learning from (big) pharma
What I should have started with. From ChEMBL, take all compounds with an IC50 or Ki expressed in nM, measured against a human target, with a journal reference (journal, volume, year, page):
- 569,569 activities
- 223,896 compounds
- 14,383 references
30. Creating balanced training/test sets
- Affiliation classes: Pharma, Other, Academic
- Keep the 602 targets for which measured activities are available for all three affiliations
- Same target, same pharmacophore, some me-too work: less series learning
31. Bayesian model
- Trained on data published up to and including 2005
- Descriptors: ECFP_6 + Ro5 physical properties
- Categorical model: Pharma / Academic / Other
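A categorical Bayesian model can be read as one feature-weight table per class, with prediction by the highest total score. A minimal sketch (names are illustrative; the per-class weights would be learned from the training data in the same Laplacian-corrected way as in lesson 1):

```python
def predict_affiliation(fp, class_weights):
    """fp: set of fingerprint features for one compound.

    class_weights: e.g. {'Pharma': {feature: weight, ...},
                         'Academic': {...}, 'Other': {...}}.
    Returns the affiliation whose summed feature weights are highest.
    """
    return max(class_weights,
               key=lambda c: sum(class_weights[c].get(f, 0.0) for f in fp))
```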
34. What makes a compound 'Pharma'?
- Aromatic rings, aromatic rings, aromatic rings. IP?
- The absence of decorations means those are not distinctive features.
- [Features annotated with: number of times observed / how many times in academic / pharma]
35. What makes a compound 'Academic'?
- Aliphatic, single rings, bold use of F and other decorations, etc.
- Maybe not nasty, but not very drug-like.
- [Features annotated with: number of times observed / how many times in academic / pharma]
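The per-feature counts annotated on these slides (times observed, split by affiliation) can be tabulated directly from the fingerprints. A sketch with a hypothetical input shape:

```python
from collections import Counter

def feature_provenance(fps_by_class):
    """fps_by_class: {'Pharma': [feature sets...], 'Academic': [...], ...}.

    Returns {feature: {class: count}} — how often each feature occurs
    in compounds from each affiliation, the raw numbers behind
    'observed N times / M of them in pharma'.
    """
    per_class = {c: Counter(f for fp in fps for f in fp)
                 for c, fps in fps_by_class.items()}
    features = set().union(*per_class.values())
    return {f: {c: per_class[c][f] for c in per_class} for f in features}
```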
38. Conclusions
- Set out to learn a nasty model, ended up with a (non-)drug-like model
- Pharma is 'a bit' underrepresented: only ~10% of the MDDR is in ChEMBL (Dave Rogers); ChEMBL could/should include the patent literature
- Over the years (big) pharma has delivered the goods and learned what does (and does not) work in a structure. Some of this knowledge can be extracted from ChEMBL. Ignore it at your peril.