An OWA-Based Multi-Criteria System For Assigning Reviewers
Jennifer Nguyen1, Germán Sánchez-Hernández2, Núria Agell2, Xari Rovira2, Cecilio
Angulo1
1 GREC Research Group
Industrial Engineering School, Universitat Politècnica de Catalunya
Barcelona, Spain
{jennifer.nguyen, cecilio.angulo}@upc.edu
2 GREC Research Group
ESADE Business School, Ramon Llull University
Barcelona, Spain
{german.sanchez, nuria.agell, xari.rovira}@esade.edu
Assigning papers to reviewers is a large and difficult task for conference chairs and sci-
entific committees. Conferences can receive excessively high numbers of paper submissions,
sometimes thousands of papers, with only hundreds of reviewers available. Often these as-
signments must be made within days of the submission deadline, creating a heavy burden on
the conference chairs. Recommender systems are therefore helpful in simplifying this task.
Much research has been conducted regarding this topic in terms of applicability and
methodology [1, 2]. In this paper, we focus our attention on aggregating multiple factors to
determine reviewer expertise to analyze the match with conference manuscripts. Specifically,
we elaborate on the variables used to compute reviewer expertise. Although some prior
research focused on keyword terms from publications and supporting information gathered
through human intervention [3], we propose a more realistic approach to gathering expertise
information, using an automated method to acquire reviewer profiles from publicly available
information.
Reviewers' expertise in different domains is imprecise, and multiple factors may contribute
to the profile of a reviewer [4]. Therefore, we propose a fuzzy approach to analyzing
reviewer expertise. Specifically, the methodology is based on fuzzy OWA (Ordered
Weighted Average) aggregation functions, which summarize information coming from different
sources. General constraints of the RAP (Reviewer Assignment Problem) have been incor-
porated: (i) conflicts of interest between the reviewer and authors must be avoided, (ii) each
paper must have a minimum number of reviewers, and (iii) each reviewer's load cannot
exceed a certain number of papers.
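An OWA operator applies its weights to the rank positions of the sorted inputs rather than to specific criteria, which is what lets a single weight vector express aggregation attitudes between "and-like" and "or-like". A minimal sketch of the operator (the weights and scores below are illustrative, not those used in the study):

```python
def owa(values, weights):
    """Ordered Weighted Average: weights apply to the rank positions
    of the inputs, not to the inputs themselves."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9  # weights must sum to 1
    ordered = sorted(values, reverse=True)  # descending order
    return sum(w * v for w, v in zip(weights, ordered))

# Three expertise scores for one reviewer-paper pair, aggregated with
# "or-like" weights that emphasize the strongest evidence (result ~0.71).
score = owa([0.4, 0.9, 0.6], [0.5, 0.3, 0.2])
```

With weights [1, 0, 0] the same operator returns the maximum; with [0, 0, 1] the minimum; uniform weights give the arithmetic mean.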
To demonstrate the applicability of the proposed methodology, an illustrative example has
been carried out. The dataset comes from the 17th International Conference of the Catalan
Association for Artificial Intelligence (CCIA 2014), held in Barcelona, Spain. The data
consist of 40 papers and 74 technical committee members. Each paper has a list of keywords
chosen by its authors.
We created a profile for each CCIA technical committee member by combing through pop-
ular global and local researcher websites such as Aminer, Portal Recerca (the research portal
of Catalonia), and KTIA (a community of people interested in artificial intelligence). Each
reviewer was identified by name and organizational affiliation. All available information
was collected for each reviewer and translated to English.
In order to fulfill the conflict-of-interest constraint, we scraped dblp's website for biblio-
graphic information, obtaining each reviewer's publication titles and co-authors. Next, we
collected bibliographic information from tdx (a catalog of doctoral theses presented at Cata-
lan universities) on reviewers' own theses and the theses they supervised, in order to identify
supervisors and former students.
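The co-authorship and supervision data gathered above can drive a simple conflict-of-interest check: a reviewer conflicts with a paper if any of its authors appears among the reviewer's known relations. A sketch under assumed data structures (the dictionary fields are our naming, not the paper's):

```python
def has_conflict(reviewer, paper):
    """Flag a conflict if any paper author is the reviewer, a co-author,
    a thesis supervisor, or a former student of the reviewer."""
    related = (
        {reviewer["name"]}
        | set(reviewer["coauthors"])    # from dblp
        | set(reviewer["supervisors"])  # from tdx
        | set(reviewer["students"])     # from tdx
    )
    return any(author in related for author in paper["authors"])
```

In practice name matching would need normalization (accents, initials, affiliations); exact string equality is used here only to keep the sketch short.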
We compared the publicly available variables that we obtained with the variables proposed
in previous research on state-of-the-art RAP recommenders. Missing variables were added
if they could be determined from the available information. Finally, 12 variables were used
to determine a reviewer's expertise: PhD Thesis Year, PhD Supervised, Books, Book Chap-
ters, Activity, Research Areas, Publications, Citations, H-index, Sociability, Diversity, and
Professional Age.
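One plausible way to combine the 12 variables, assuming each has been normalized to [0, 1] beforehand, is to apply the OWA aggregation directly to the variable values. The field names and the uniform weights below are our illustration, not the paper's exact configuration:

```python
VARIABLES = [
    "phd_thesis_year", "phd_supervised", "books", "book_chapters",
    "activity", "research_areas", "publications", "citations",
    "h_index", "sociability", "diversity", "professional_age",
]

def expertise_score(profile, weights):
    """Aggregate a reviewer profile into one expertise score with an
    OWA operator. Each variable is assumed pre-scaled to [0, 1]."""
    values = [profile[v] for v in VARIABLES]
    ordered = sorted(values, reverse=True)  # weights apply to ranks
    return sum(w * v for w, v in zip(weights, ordered))

# Uniform weights reduce OWA to a plain mean of the 12 variables.
uniform = [1 / 12] * 12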
To evaluate our model, we first compared our results to the assignments made by the
conference chairs, then to a random assignment, and finally to a gold standard. This gold
standard was determined by several external experts. First, the experts assigned each
reviewer to the topics of the CCIA call for papers; there were 20 topics. They then read the
keywords and abstract of each submitted paper and assigned the relevant CCIA topics to it.
This gave us a gold standard for all possible reviewer-paper combinations.
Three main measures of performance are used: precision, recall, and coverage. The
results of our study show a promising way of utilizing publicly available information to create
expertise profiles and applying fuzzy logic to conference assignment problems.
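Treating both the system's output and the gold standard as sets of (reviewer, paper) pairs, precision and recall follow the usual set definitions; reading coverage as the fraction of papers that received at least one gold-standard-relevant reviewer is our assumption:

```python
def evaluate(assigned, relevant, papers):
    """assigned, relevant: sets of (reviewer, paper) pairs;
    papers: the set of all submitted papers."""
    true_positives = assigned & relevant
    precision = len(true_positives) / len(assigned) if assigned else 0.0
    recall = len(true_positives) / len(relevant) if relevant else 0.0
    # Papers that got at least one relevant reviewer.
    covered = {paper for (_, paper) in true_positives}
    coverage = len(covered) / len(papers) if papers else 0.0
    return precision, recall, coverage
```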
References
[1] Wang, Fan, Ning Shi, and Ben Chen: "A comprehensive survey of the reviewer assign-
ment problem." International Journal of Information Technology & Decision Making. 9
(4) (2010) 645–668.
[2] Karimzadehgan, Maryam, ChengXiang Zhai, and Geneva Belford: "Multi-aspect ex-
pertise matching for review assignment." Proceedings of the 17th ACM Conference on
Information and Knowledge Management. (2008) 1113–1122.
[3] Charlin, Laurent, and Richard S. Zemel: "The Toronto paper matching system: an auto-
mated paper-reviewer assignment system." International Conference on Machine Learn-
ing (ICML). (2013).
[4] Tayal, Devendra Kumar, et al.: "New method for solving reviewer assignment problem
using type-2 fuzzy sets and fuzzy functions." Applied Intelligence. 40 (1) (2014) 54–73.