Vanderbilt Law Review

First Page

1633

Abstract

In recent years, legal scholars have come to rely on Amazon's Mechanical Turk ("MTurk") platform to recruit participants for surveys and experiments. Despite MTurk's popularity, there is no generally accepted methodology for its use in legal scholarship, and many questions remain about the validity of data gathered from this source. In particular, little is known about how compensation structure affects the performance of respondents recruited using MTurk.

This Essay fills both of these gaps. We develop an experiment and test the effect of various compensation structures on performance along two dimensions: effort and attention. We find that both the level and the structure of the compensation scheme have substantial effects on the performance of MTurk workers, and that these effects differ across question types. We then propose a series of best practices for scholars to follow in conducting research using MTurk. Adoption of these guidelines will improve both the transparency and the robustness of research conducted using this platform.
