Strategyproof Peer Selection
Since the beginning of civilization, societies have been selecting small groups from within. Athenian society, for example, selected a random subset of citizens to participate in the Boule, the council of citizens which ran daily affairs in Athens. Peer review, evaluation, and selection have been the foundational process by which scientific funding bodies select the best subset of proposals for funding [1], and peer evaluation is becoming increasingly popular and necessary to scale grading in MOOCs [2, 3]. In these settings, however, we do not wish to select an arbitrary subset of size k, but the “best k”, and we therefore need a procedure in which the candidates are rated according to the opinions of the group. In peer selection problems we are not seeking an external, “independent” agent to make choices; rather, we desire a crowdsourced approach in which the participants themselves make the selection.

Mechanisms for peer selection and their properties have received considerable attention within economics, political science, and computer science [1, 4–9]. Our motivation comes from the US National Science Foundation’s (NSF) recent “mechanism design pilot,” which attempted to spread the review load amongst all the submitters of proposals [1, 10]. The program uses “reviewers assigned from among the set of PIs whose proposals are being reviewed.” A reviewer’s own proposal is “supplemented with ‘bonus points’ depending upon the degree to which his or her ranking agrees with the consensus ranking” ([11], page 46). This mechanism is not strategyproof: reviewers are incentivized to guess what others are thinking rather than provide their honest feedback (an idea supported by [12]). Removing the bonus may be worse, as reviewers would then be able to increase the chance of their own proposal being accepted by rating other proposals lower [13]. In either case, reviewers can benefit from reporting something other than their truthful values. When agents have an incentive to misrepresent their truthful reports, the effect on the results of the aggregation or selection mechanism can be problematic. Indeed, in a comprehensive evaluation of the peer review process, Wenneras and Wold [14] wrote in Nature that “...the development of peer-review systems with some built-in resistance to the weakness of human nature is therefore of high priority.”

We propose a novel strategyproof (or impartial) mechanism, under which agents can never gain by being insincere. There are many reasons to prefer a mechanism which is strategyproof. First, the mechanism does not favor “sophisticated” agents who have the expertise to behave strategically. Second, agents with partial or no knowledge about the rankings of other agents are not disadvantaged. Third, the normative properties of a mechanism typically assume sincere behavior; if agents act strategically, we may lose some of these desirable properties. Fourth, it is easier to persuade people to use a strategyproof mechanism than one which can be (easily) manipulated. Note that while strategyproofness does not handle all biases agents may have, it eliminates an obvious “weakness in human nature.”

To achieve strategyproofness we could use a lottery (as in the Athenian democracy). However, this method does not select based on merit. A different option is to use a mechanism based on a voting rule. However, following Gibbard and Satterthwaite [15, 16], any “reasonable” mechanism based on voting will not be strategyproof unless it is a dictatorship.
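To make the manipulation incentive concrete, consider a toy version of the simplest possible rule: select the top k proposals by average peer score. The sketch below (hypothetical agents and scores; this is neither the NSF’s rule nor the mechanism we propose) shows a reviewer entering the selected set by down-rating its closest competitor, exactly the incentive noted above [13].

```python
import copy

def select_top_k(scores, k):
    """scores[i][j] is agent i's score for agent j's proposal (None on the
    diagonal). Select the k proposals with the highest average received score."""
    n = len(scores)
    avg = {j: sum(scores[i][j] for i in range(n) if i != j) / (n - 1)
           for j in range(n)}
    return sorted(avg, key=lambda j: (-avg[j], j))[:k]

# Hypothetical scores for four agents, selecting k = 2. Truthfully, agent 1
# (average 5.67) narrowly misses the cut behind agents 3 and 0.
truthful = [
    [None, 6, 2, 9],
    [6, None, 2, 9],
    [6, 5, None, 9],
    [6, 6, 2, None],
]
print(select_top_k(truthful, 2))        # [3, 0]: agent 1 is left out

# Agent 1 down-rates agent 0, its closest competitor, and gets selected.
manipulated = copy.deepcopy(truthful)
manipulated[1][0] = 0
print(select_top_k(manipulated, 2))     # [3, 1]: manipulation pays off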
Another option is to use a mechanism like the PageRank algorithm, which uses Markov chains to compute a ranking of the agents [17]. However, such mechanisms are also not strategyproof.
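The failure is easy to exhibit in a small sketch (again entirely hypothetical, and not a mechanism from the literature on this problem): agents endorse one another, the mechanism ranks agents by the stationary distribution of the induced Markov chain (plain PageRank with damping), and an agent enters the top 2 by redirecting its endorsement toward an agent who endorses it back, instead of toward the agent it truthfully considers best.

```python
def pagerank(endorsements, n, d=0.85, iters=200):
    """endorsements[i] lists the agents that agent i endorses. Rank agents by
    the stationary distribution of the damped random walk over endorsements."""
    pi = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1 - d) / n] * n
        for i, out in enumerate(endorsements):
            for j in out:                      # i splits its mass over its endorsees
                nxt[j] += d * pi[i] / len(out)
        pi = nxt
    return pi

def top_k(pi, k):
    return sorted(range(len(pi)), key=lambda j: -pi[j])[:k]

# Truthful endorsements: 0 -> 1, 1 -> 2, 2 -> 0, 3 -> 0. Agent 2 honestly
# considers agent 0 the best, and ends up ranked third.
truthful = [[1], [2], [0], [0]]
print(top_k(pagerank(truthful, 4), 2))     # [0, 1]: agent 2 is not selected

# Agent 2 misreports, endorsing agent 1 (who endorses agent 2 back) instead.
manipulated = [[1], [2], [1], [0]]
print(top_k(pagerank(manipulated, 4), 2))  # [1, 2]: agent 2 is now selected
```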