Hi everyone! We're a nonprofit that promotes open access to scholarly research, and we use Mechanical Turk to check the quality of the results in Unpaywall. Results have been good in general, but there are some common cases where most workers give the wrong answer, and explaining them would take more instruction than anyone is willing to read for a 10- or 15-cent HIT (understandably). The level of accuracy we're looking for would take a couple of hours of upfront training, and I'm looking for feedback on this plan to set it up so it pays fairly and people can tell what's going on:

1. Create a qualification that requires a detailed test, accompanied by about 10-20 pages of examples and explanation. There are enough tricky cases that it will probably take more than one attempt to pass.

2. Create a batch of do-nothing HITs (click a button, get paid) that require a certain score on the qualification. Not a perfect score, but high enough that you have to try. These would pay maybe $15-$20 and could only be completed once.

3. Require a higher, near-perfect score on the qualification for the real batches we run later on.

4. Mention the do-nothing screener batches in the description so it doesn't look like we're paying 5 cents an hour.

The idea is that if you spend time reading up and try the test a couple of times and it's not clicking, hey, at least you got paid to take it.

Does that sound like it would work? Too complicated? Any other suggestions? Thanks!
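In case it helps to see it concretely, here's a rough sketch of how the two-tier score gate described above (a passable score for the screener batch, a near-perfect score for the real batches) could be expressed for the MTurk API. The qualification type ID, the 70/95 thresholds, and the reward amount are all placeholders, and uploading the test and answer-key XML is omitted; with boto3 you'd pass these dicts to the real `create_hit` call.

```python
# Placeholder for the ID returned by create_qualification_type()
# after uploading the test + answer key (hypothetical value).
QUAL_TYPE_ID = "3EXAMPLEQUALTYPEID"


def score_requirement(qual_type_id: str, min_score: int) -> dict:
    """A QualificationRequirement dict in the shape MTurk's CreateHIT expects."""
    return {
        "QualificationTypeId": qual_type_id,
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [min_score],
        # Workers below the threshold can't preview or accept the HIT.
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    }


# Screener batch: do-nothing HIT, passable score, one assignment each.
screener_hit = {
    "Title": "Screener: thanks for studying the guide (one per worker)",
    "Reward": "15.00",
    "MaxAssignments": 1,
    "QualificationRequirements": [score_requirement(QUAL_TYPE_ID, 70)],
}

# Real batches: same qualification, but require a near-perfect score.
real_hit_requirements = [score_requirement(QUAL_TYPE_ID, 95)]

# With boto3, each batch would then be created roughly like:
#   mturk = boto3.client("mturk", region_name="us-east-1")
#   mturk.create_hit(AssignmentDurationInSeconds=600,
#                    LifetimeInSeconds=86400,
#                    Description="...", Question="...", **screener_hit)
```

Limiting each screener HIT to one completion per worker takes a little extra bookkeeping (e.g. granting a second "already screened" qualification once they've done it), since `MaxAssignments` only caps assignments per HIT.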