Relative feedback increases disparities in effort and performance in crowdsourcing contests: evidence from a quasi-experiment on Topcoder
By Milena Tsvetkova, Sebastian Mueller, Oana Vuculescu, Haylee Ham, and Rinat Sergeev
Abstract
“Rankings and leaderboards are often used in crowdsourcing contests and online communities to motivate individual contributions but feedback based on social comparison can also have negative effects. Here, we study the unequal effects of such feedback on individual effort and performance for individuals of different ability. We hypothesize that the effects of social comparison differ for top performers and bottom performers in a way that the inequality between the two increases. We use a quasi-experimental design to test our predictions with data from Topcoder, a large online crowdsourcing platform that publishes computer programming contests. We find that in contests where the submitted code is evaluated against others’ submissions, rather than using an absolute scale, top performers increase their effort while bottom performers decrease it. As a result, relative scoring leads to better outcomes for those at the top but lower engagement for bottom performers. Our findings expose an important but overlooked drawback from using gamified competitions, rankings, and relative evaluations, with potential implications for crowdsourcing markets, online learning environments, online communities, and organizations in general.”
Reference
Tsvetkova, M., Mueller, S., Vuculescu, O., Ham, H., & Sergeev, R. (2022, August 15). Relative feedback increases disparities in effort and performance in crowdsourcing contests: Evidence from a quasi-experiment on Topcoder. Retrieved September 21, 2022, from http://eprints.lse.ac.uk/115983/
Keywords
Online communities, learning environment, organizations, competitions, research