The document summarizes the creation of three new domain-specific test collections for evaluating expert search systems in the domains of information retrieval, the semantic web, and computational linguistics. Experts, documents, and topics were drawn from workshop program committees and from publications in the relevant conferences and journals. The collections were then benchmarked with state-of-the-art expert search approaches, showing that term extraction methods outperformed language modeling on these domain-centered collections. Future work is discussed to expand the collections and to incorporate additional evidence such as citations.
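To make the comparison concrete, below is a minimal sketch of the document-centric language-modeling style of expert ranking that such benchmarks typically include as a baseline: each document is scored against the query with a smoothed unigram language model, and the scores are aggregated over the documents associated with each candidate. The corpus, author associations, and smoothing parameter here are all hypothetical toy values for illustration, not data or methods taken from the collections described above.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus and candidate associations (e.g. authorship).
# All names and texts are illustrative only.
documents = {
    "d1": "semantic web ontology reasoning and linked data",
    "d2": "expert search with language models over enterprise documents",
    "d3": "term extraction for expertise profiling in computational linguistics",
}
doc_authors = {"d1": ["alice"], "d2": ["bob", "alice"], "d3": ["bob"]}

# Term statistics for Jelinek-Mercer smoothing against the whole collection.
doc_terms = {d: Counter(text.split()) for d, text in documents.items()}
collection = Counter()
for counts in doc_terms.values():
    collection.update(counts)
collection_len = sum(collection.values())


def p_query_given_doc(query: str, doc: str, lam: float = 0.5) -> float:
    """Query likelihood P(q|d) under a smoothed unigram language model."""
    counts = doc_terms[doc]
    doc_len = sum(counts.values())
    prob = 1.0
    for term in query.split():
        p_doc = counts[term] / doc_len if doc_len else 0.0
        p_col = collection[term] / collection_len
        prob *= lam * p_doc + (1 - lam) * p_col
    return prob


def rank_experts(query: str):
    """Document-centric expert ranking: sum document scores per candidate,
    assuming a uniform document-candidate association strength."""
    scores = defaultdict(float)
    for doc in documents:
        s = p_query_given_doc(query, doc)
        for author in doc_authors[doc]:
            scores[author] += s
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


print(rank_experts("expert search language models"))
```

Term-extraction-based approaches, which the summary reports as performing better on these domain-centered collections, would instead build candidate profiles from extracted key terms rather than aggregating full-document query likelihoods; the sketch above only illustrates the language-modeling baseline being compared against.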