Basics of how to be a good requester

“Treat your workers with respect and dignity. Workers are not numbers and statistics. Workers are not lab rats. Workers are people and should be treated with respect.” – turker ‘T’, a Turkopticon moderator

There are many basics to being a good requester and getting good results. Several additional sources of opinions on how requesters can effectively use MTurk, including specifics of HIT creation, are collected here: Links to other resources on AMT and online research ethics.

Clearly identify yourself

This ideally should include: the full name/s of the researcher/s responsible for the HIT’s project; the university/organization/s they’re affiliated with and its state/country; their department name, lab, project group, etc.; and any direct contact information you’re willing to provide. The more of this information is clearly provided, and in more places, the better: requester display name, HIT description, HIT content text visible in preview, and survey consent/intro page (in order of increasing amount of information that would be appropriate there).

Workers generally are more willing to take a chance on a requester they’re not familiar with (particularly one who hasn’t yet been reviewed by any workers on Turkopticon) if they know it is an academic requester, because it is a sign of legitimacy, and because the university ‘chain of command’ and IRB oversight are among the few means of recourse workers have if something goes wrong on MTurk. Amazon takes a very hands-off approach to issues workers may have with unfair requesters.

Turkers who want to know (for the above reasons) can often figure out much of this information for an academic requester who doesn’t provide it, but this takes time and effort that could be better spent on other things if the requester would provide it.

For example, when a large batch of HITs was posted by a new requester with no Turkopticon reviews and whose only visible identification was their first-name-only requester display name, some turkers hesitated, trying to decide if it was too risky to do more than a few. When a turker was able to identify the requester’s full name and affiliation with a major university, the turkers felt more confident to do a larger quantity of those HITs.

As another example, an academic research project on improving spam filtering lacked obvious indications of its identity/legitimacy, leading some turkers to worry that it had been posted by spammers trying to use MTurk to improve their own spam’s ability to bypass filters. Until they became aware of the academic nature of the HIT, concerned workers avoided doing it, and posted negative reviews and discussion comments.

Always use a consent/intro page or paragraphs

It seems many universities currently exempt online surveys from many or all IRB requirements, or at least exempt online studies from certain departments when they don’t cover sensitive topics. Even if your university doesn’t force you to, it’s always a good idea to use a consent/intro page at the beginning of a survey, and/or paragraphs in the HIT content text for non-survey tasks. Such a page or section should:

  • clearly identify yourself;

  • clearly state the pay to be expected (and make sure this statement of pay matches what the HIT is currently posted for; some consent pages accidentally state a higher pay than the HIT’s actual current MTurk pay), and how soon approval can be expected;

  • clearly state any possible bonuses and/or follow-up studies workers may qualify for, and how soon their issuance can be expected;

  • state the number of minutes you expect it to take a worker to complete the study;

  • state any reasons for which you plan to automatically reject submissions;

  • state a title for the study, and as much description of it as you reasonably can without compromising it;

  • and provide an email address for contacting the IRB, since turkers live in many places and may not be able to afford non-local phone calls.

Provide reasonable time estimates and limits

Clearly state up-front a fair expectation of how long it will take someone who’s not already familiar with your survey or task to thoroughly read and answer everything in it. Err on the side of overestimation, to avoid the disappointment/frustration that sets in if a task takes longer than estimated; that situation can encourage some workers to rush through the rest to limit the decline in their effective pay rate, while others will return the survey in protest, losing all the compensation. Displaying an accurate progress bar as workers move through the survey helps them know when they’re nearing the end.

Set the ‘Time Allotted’ limit for your HITs to an amount of time much longer than the expected amount of time needed to complete the survey or task. Workers like to have leeway in case your time expectation was underestimated, and to have time available if needed to deal with interruptions that occasionally come up, like ISP/browser malfunctions, restroom breaks, phone calls, visitors or family members needing attention, and such.

Approve work as soon as possible

Some requesters compare MTurk approval times to the interval between paychecks at a traditional job, and argue that workers therefore shouldn’t complain about waiting for payment. But with a traditional job, a worker knows they’ll get paid for the time they’ve reported working, even if the pay arrives days or weeks after they did that work, and they know for sure how long that wait will be. Even if the employer fires the worker in the meantime, the employer is still legally obligated to pay what the worker earned. With MTurk approval times, a worker is waiting to find out whether they’ll get paid at all for the work they’ve done, and if so, whether it will be at the end of the auto-approval time (which the worker may not know) or at some point sooner.

One of the configuration options when requesters create a HIT group is the ‘Auto-Approve’ (AA) time: the amount of time from the point a worker submits a completed HIT to the point at which the MTurk system will automatically move the HIT into ‘Approved’ status, if the requester hasn’t manually approved or rejected it before then. This setting defaults to 30 days, which is also the maximum allowed. Workers generally regard 30 days as a very long time to potentially have to wait, and it makes your HITs much less desirable to workers who know how to check the AA time setting, unless they find out from other workers that you have a history of approving much earlier than the AA time.

Set your AA time as short as reasonably possible given the time you’ll need to check the work; 30 days is very seldom necessary or appropriate, 7 days should generally be more than sufficient, and less than that is better still. Many requesters approve work in under 3 days, some in under 24 hours.
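
For requesters who script HIT creation, the AA setting lives on the HIT itself. Below is a minimal sketch, assuming Python with the boto3 SDK; the title, description, reward, and other HIT parameters are placeholders, not recommendations:

```python
def days_to_seconds(days):
    """MTurk's AutoApprovalDelayInSeconds expects seconds, not days."""
    return days * 24 * 60 * 60

def post_hit(question_xml, auto_approve_days=3):
    """Create a HIT with a short auto-approval delay instead of the
    30-day default. Requires configured AWS credentials; question_xml
    is your ExternalQuestion or HTMLQuestion XML."""
    import boto3  # assumed installed and configured
    client = boto3.client('mturk')
    return client.create_hit(
        Title='Academic survey (~10 minutes)',          # placeholder
        Description='A short survey for a university research project.',
        Reward='1.50',
        MaxAssignments=100,
        LifetimeInSeconds=days_to_seconds(7),            # how long the HIT stays posted
        AssignmentDurationInSeconds=3 * 60 * 60,         # generous 'Time Allotted'
        AutoApprovalDelayInSeconds=days_to_seconds(auto_approve_days),
        Question=question_xml,
    )
```

The key line is `AutoApprovalDelayInSeconds`; leaving it unset gets you the 30-day default workers dislike.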

Communicate with workers promptly and politely

Check the email account associated with your MTurk requester account frequently. Respond to messages from workers as quickly as possible, preferably in less than 24 hours.

Some current IRB guidelines for MTurk use have suggested that responding within 7 days is considered good promptness. Most workers would find that to be unacceptably slow in many situations. Even 24 hours usually wouldn’t be anywhere near fast enough to get clarification on a partially-finished HIT before it’s long since expired.

Understand that when a worker goes out of their way to take the unpaid time and effort to send you a message through MTurk, and reveal their MTurk-associated name and email address to you in the process, it’s usually going to be for a good/important reason.

Some requesters are concerned about keeping up with the volume of email they might receive from workers. In cases where a worker contacts you to let you know about a problem they were concerned might cause their work to be rejected, a prompt approval may be sufficient reply. The more a requester proactively follows the other guidelines discussed herein (including providing reasonable time limits, approving work as soon as possible, being clear about bonuses, avoiding duplicates/retakes, avoiding completion code malfunctions, and avoiding other causes of unfair rejections such as unclear instructions), the fewer reasons workers will have to email you. Providing a comments box at the end of your survey will also allow workers to share feedback with you without expecting a reply.

Don’t ignore messages; it seems at least half of requesters currently don’t respond to most workers’ messages at all. And when you do respond, don’t be one of the occasional requesters whom workers have complained about for being unnecessarily harsh, insulting, condescending, or rude.

Also note that the worker’s MTurk worker ID# is automatically included in all messages you receive through MTurk, labeled as ‘Customer ID:’. So unless a worker emailed you directly instead of using MTurk’s ‘Contact Requester’ feature, you shouldn’t need to ask for their ID again to address what they contacted you about.

Forums

Workers share information, establish norms, and build community through platforms like (in alphabetical order): CloudMeBaby [1], mTurk Boards [2], mTurk Forum [3], mTurk Grind [4], mTurk Wiki Forum [5], Reddit’s /r/mturk [6] and /r/HITsWorthTurkingFor [7], Turker Nation [8], and Turkopticon [9].

These forums generally welcome requesters to communicate with workers about their HITs, responding to questions, suggestions, and complaints. They may have specific rules that they ask requesters to follow to participate there.

Some welcome researchers into select areas, while closing off other spaces so workers may speak freely. Follow all rules on the forums in which you choose to participate.

Don’t violate workers’ trust and the MTurk Terms of Service

Don’t require workers to provide personally identifying information to complete your HITs; common problems include asking for email addresses (requesters can use MTurk to send messages to workers without having the workers’ email addresses), exact birthdates (year alone, or month and year, should be sufficient), or real names.

Don’t require workers to register on sites that require this kind of personal information to complete your HITs. If a requester has a project that requires workers to register on a special site the requester set up just for the HIT, let workers use their MTurk worker ID# or a username of their choice as the unique login identifier, instead of unnecessarily expecting an email address be provided for this purpose.

Many workers also object to HITs that require the use of Facebook accounts, which are intended to be quite personally identifiable.

The MTurk messaging system can be inefficient when contacting large numbers of workers through the GUI; rather than requiring workers to provide their email addresses in your HITs to try to get around this, you should set up and familiarize yourself with how to send bulk messages to workers using one of the many open-source HIT management tools [10] available for requesters to access the Requester API [11]. There has been at least one incident where a requester carelessly exposed hundreds of turkers’ email addresses that the requester had collected. Use of the MTurk messaging system avoids this risk.
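
As a minimal sketch of bulk messaging through the API (assuming Python and boto3; the NotifyWorkers operation caps the number of worker IDs per call, 100 at the time of writing, so batching is needed):

```python
def batches(worker_ids, size=100):
    """NotifyWorkers accepts only a limited number of worker IDs per
    call (100 at the time of writing), so split the list into chunks."""
    return [worker_ids[i:i + size] for i in range(0, len(worker_ids), size)]

def notify_all_workers(worker_ids, subject, message):
    """Send the same message to many workers without ever collecting
    their email addresses. Requires configured AWS credentials."""
    import boto3  # assumed installed and configured
    client = boto3.client('mturk')
    for batch in batches(worker_ids):
        client.notify_workers(Subject=subject, MessageText=message,
                              WorkerIds=batch)
```

Because the message goes through MTurk’s own system, no email address ever touches your machines, so there is nothing to accidentally expose.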

Don’t require workers to download software programs or apps to complete your HITs (this includes Java programs and plugins such as Inquisit). This can be a major security risk for workers, particularly if the program comes from an unofficial source set up just for the HIT. It became known in 2014 that an academic researcher had performed a study on MTurk intended to see how low a pay level would still convince workers to download and install a program that pretends to be malware, so many workers who are aware of this study are now even more hesitant to go along with download-requiring HITs, even from seemingly legitimate requesters.

If you don’t follow the Terms of Service, particularly in the aforementioned ways that pose potential threats directly to workers, some workers will give your requester account negative Turkopticon reviews with flags for ToS violations, and report your HITs to Amazon. (Amazon instructs workers to report ToS violations.)

Be clear about bonuses

If a bonus is offered, state as clearly as possible what the potential amount will be (or range of possible amounts and expected mean) and how to earn it, and how soon workers should expect it to be paid. Pay in as timely a manner as possible.

When requesters send out bonuses, the only information workers receive about the bonus is an email from MTurk containing the requester’s display name (does not include the unique requester ID#), a ‘HIT’ ID that is meaningless to workers (representing the worker’s unique HIT assignment, not the HIT group), and whatever comment message the requester chooses to provide. Many provide no comment at all (resulting in a message that says “No comment provided by Requester.”), or a minimally-informative comment, leaving the worker to try to figure out what the bonus was from.

Due to this limited information, workers sometimes have to ask other workers if anyone remembers what a bonus they received might have been from, and how it was determined. Workers are always glad to receive bonuses at all, but ideally your comment should clearly state the title of the HIT (and the topic of the study if this wasn’t stated in the HIT title; some just say generic things like “Take a quick survey”), state the date the worker completed the relevant HIT/s or the range of dates the HIT/s was available, and briefly re-explain how the bonus was earned/calculated.

If doing a random bonus lottery/drawing/sweepstakes, be aware that some workers are skeptical about these, since it would be easy for a requester to never award one to anyone, and the workers would never know. This concern can’t be entirely averted, but it helps if you clearly state the number of participants that you plan to recruit in this pool, the number and dollar amount of bonuses you will be awarding, and as specifically as possible when you plan to be awarding the bonuses (and stick to it). Requesters could even consider sending a small bonus (even as little as $0.01, but more would be nice of course) to everyone else when the big bonus/es are awarded, as both a small consolation prize and a notification that the lottery has concluded, so workers know that at least the requester didn’t just forget about it.
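
One way to make such a drawing easier to run honestly is to script it, so winner selection and the consolation payouts happen in one pass. A small sketch (Python; the function name and the pairing of winners with a consolation list are illustrative, following the suggestion above):

```python
import random

def run_bonus_drawing(worker_ids, n_winners, seed=None):
    """Pick lottery winners and return both lists, so every participant
    can be sent either the prize or a small consolation bonus (which
    also serves as notice that the drawing has concluded)."""
    rng = random.Random(seed)  # pass a seed if you want a reproducible draw
    winners = set(rng.sample(worker_ids, n_winners))
    consolation = [w for w in worker_ids if w not in winners]
    return sorted(winners), consolation
```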

Avoid duplicates/retakes in fair ways

Please don’t block workers through MTurk just to prevent retakes! Being blocked by requesters can put a worker’s MTurk account at risk of being suspended (banned from all future work on MTurk), based on some rather murky factors that are not presented clearly to workers or requesters. Blocking should generally only be a last resort against an occasional worker who submits such terrible work that they’re clearly not trying, but even that situation can be remedied without blocking in some cases by increasing your HITs’ qualification requirements, or by using custom quals that you can either assign, revoke, or change the scores on for repeatedly-unsatisfactory workers.
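
To illustrate the custom-qualification alternative to blocking: assuming Python and boto3, you can gate your HITs on a score you control, and lower a repeatedly-unsatisfactory worker’s score instead of blocking them. The scoring rule below is a made-up example:

```python
def accuracy_score(satisfactory, total):
    """Hypothetical scoring rule: percentage of a worker's submissions
    to your HITs that were satisfactory (new workers start at 100)."""
    return round(100 * satisfactory / total) if total else 100

def set_score(qual_type_id, worker_id, score):
    """Assign or update a worker's score on your custom qualification;
    calling this again simply overwrites the old value. Your HITs can
    then require, say, a score of 50 or above."""
    import boto3  # assumed installed; requires configured AWS credentials
    boto3.client('mturk').associate_qualification_with_worker(
        QualificationTypeId=qual_type_id,
        WorkerId=worker_id,
        IntegerValue=score,
        SendNotification=False,
    )
```

Unlike a block, a low qualification score carries no risk of getting the worker’s account suspended, and it can be raised again later.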

If you only ever post your survey in one HIT group, and just increase the amount of assignments available in it as needed, you can simply configure MTurk to only allow each worker to accept the HIT once.

If your survey will be posted more than once, preferably keep the rounds several weeks or months apart, so workers don’t have to keep trying to figure out, over and over, whether they’ve done it before as it keeps popping back up. If you don’t want retakes, say so up-front, and use one of the several free online retake-prevention methods/services or self-hosted open-source HIT management tools to ensure this. Options include:

  • Providing a list of worker IDs who’ve taken previous postings of the survey, and telling workers to search it for their ID before accepting – either directly in the preview page of the HIT, or in a document hosted elsewhere that is linked from the preview page. This is the simplest method, but less reliable than the other options below; and since worker IDs can be connected [12] to identifying information in some workers’ Amazon.com profiles, please try to ensure it is posted in a way that won’t be indexed [13] by search engines such as Google.

  • Using functionality within Qualtrics (at no additional cost if you’re already using Qualtrics): tips [14] and more tips (pdf) [15]

  • Detecting the Worker ID and using Javascript [16] to compare it to a predefined list of previous takers

  • Using the Unique Turker service [17], created by a researcher at Cornell University (see also tips [18], more tips [19] )

  • Using the Turkitron service [20], created by researchers at Georgia Institute of Technology and Victoria University of Wellington

  • Using the TurkCheck service [21], created by a researcher at Georgetown University (see also tips [22])

  • Using TurkGate [23], TurkGateManager [24], and other open-source HIT management tools for requesters [10]
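
If you host the survey yourself, the first and third options above can be combined server-side in a few lines. A minimal sketch (Python; MTurk appends `workerId`, along with `assignmentId` and `hitId`, to ExternalQuestion URLs):

```python
from urllib.parse import urlparse, parse_qs

def worker_id_from_url(url):
    """Pull the workerId parameter MTurk appends to an ExternalQuestion URL."""
    params = parse_qs(urlparse(url).query)
    return params.get('workerId', [None])[0]

def has_taken_before(worker_id, previous_ids):
    """Membership test against your list of past takers, tolerant of the
    stray whitespace and case differences that copy/paste introduces."""
    normalized = {w.strip().upper() for w in previous_ids}
    return worker_id.strip().upper() in normalized
```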

Workers who are concerned about potential rejections for unintended retakes may avoid working on HITs that warn about rejecting duplicates and don’t provide one of those ways for workers to immediately verify their status. If a worker feels they have to resort to contacting a requester to ask if they’re allowed to take a HIT, the HIT will likely no longer be available by the time (if ever) the requester replies.

If you do repost, use the same requester account, and the same or very similar HIT title, HIT description, and preview content when reposting, if at all possible. You can indicate in the description and preview text when it was previously posted (e.g. “If you took this study in March 2014 or June 2014, please don’t try to retake it.”). In addition to situations where several weeks or months have passed between postings, it may occasionally be necessary to take down and repost a HIT to make changes to settings such as its pay or qualifications, if you unintentionally didn’t use optimal settings the first time.

If your motivation for frequently reposting the same survey in new HIT groups was to cause your HIT to pop back up at the top of the ‘HIT Creation Date (newest first)’ list repeatedly, please consider that if you simply pay a fair rate for a well-structured survey, word will be spread for you quickly on worker discussion forums to bring your HIT to the attention of more workers; you may even want to post about it yourself on some forums. Note that every time you repost the same study as a new HIT group, it also means any direct links to your previous HIT posting that workers may have already shared will no longer work.

Compensate for qualifier/screener surveys

Some requesters want to determine if workers fit specific criteria/demographics for their survey, without revealing those criteria in advance to potentially bias the answers. Some requesters handle this by expecting workers to take a qualifier/screener survey HIT that pays $0.00, or telling them to return the main survey HIT unpaid if they don’t match the initial screener’s criteria.

Turkers sometimes consider this acceptable for a qualifier of just a few questions, but generally consider the following approach fairer:

Post a qualifier survey for a small but appropriate fee for the time needed to complete it. Pay that fee to everyone who completes the qualifier survey. For people who fit the criteria you’re looking for, either immediately redirect them to the full survey and pay them a bonus appropriate for the additional time needed to complete the full survey (both the amount of the bonus and the time the full survey will take should be clearly stated up-front); or else assign a custom qualification to the workers who fit the criteria, and tell them to take the full survey in another HIT that requires that custom qualification.
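
That flow can be sketched in a few lines. Assuming Python and boto3; the function name, bonus amount, reason text, and return values are illustrative, and `client` is a boto3 MTurk client (or, for testing, any stub with the same methods):

```python
def handle_screener(client, worker_id, assignment_id, qualifies,
                    bonus_amount='1.00', qual_type_id=None):
    """Pay everyone for the screener; then either bonus qualifying
    workers who continued into the full survey, or grant the custom
    qualification gating a separate full-survey HIT."""
    client.approve_assignment(AssignmentId=assignment_id)  # screener pay for all
    if not qualifies:
        return 'screener-only'
    if qual_type_id is not None:
        # Route B: grant the qual that the full-survey HIT requires.
        client.associate_qualification_with_worker(
            QualificationTypeId=qual_type_id, WorkerId=worker_id,
            IntegerValue=1, SendNotification=True)
        return 'qualified'
    # Route A: worker was redirected into the full survey; pay the
    # clearly-stated bonus on top of the screener fee.
    client.send_bonus(
        WorkerId=worker_id, AssignmentId=assignment_id,
        BonusAmount=bonus_amount,
        Reason='Bonus for completing the full survey, as stated up-front.')
    return 'bonused'
```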

Avoid completion code malfunctions

Make sure your survey will actually provide the promised completion code to workers who complete it; this is a problem turkers encounter quite frequently. When the code is provided, clearly state it on a separate line by itself rather than buried in the midst of a paragraph of more text, and ideally in a different color, larger font size, and/or bold formatting.

Besides using a static code or generating a random code, another option is to provide a box for workers to type in their own completion code they make up, and tell them to type in the code they chose in the HIT to submit it. If you use randomly-generated codes, make sure they are being accurately recorded in your database; there have been several situations where requesters wrongly rejected large numbers of workers for ‘incorrect completion codes’ due to a mistake like that.
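
Two habits help here, sketched below (Python): record every generated code before it is shown to the worker, and compare submitted codes leniently, since many ‘wrong code’ rejections turn out to be whitespace or case slips:

```python
import secrets
import string

def make_code(length=8):
    """Generate a random completion code from unambiguous characters
    (no 0/O or 1/I look-alikes, to reduce retyping errors). Store the
    result in your database before displaying it to the worker."""
    alphabet = ''.join(c for c in string.ascii_uppercase + string.digits
                       if c not in 'O0I1')
    return ''.join(secrets.choice(alphabet) for _ in range(length))

def codes_match(submitted, recorded):
    """Lenient comparison: ignore surrounding whitespace and case."""
    return submitted.strip().upper() == recorded.strip().upper()
```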

Avoid other causes of unfair rejections

Rejections leave workers with a mark counting against them on their ‘permanent record’ at MTurk that may take them below a qualification threshold necessary for certain other HITs.

Make sure your instructions are written very clearly and comprehensively, particularly for batch HIT groups; workers often run into ‘edge cases’ the requester didn’t consider/cover in the instructions, and have to either take a risk and guess how to handle it, return the HIT with no compensation, or contact the requester about it and hope the requester responds before the HIT expires.

If using Attention Check questions (ACs), make sure the ‘correct’ answers are accurate and not vague/ambiguous; and try not to reject based on missing just one, as there are multiple potential downsides to doing so. Another option to consider: some requesters pay a certain base pay amount for everyone who completes their survey, and promise a bonus for each of the ACs a worker answers correctly. And if you set your qual requirements high enough, you might not need to rely on ACs at all to get good data (see “Reputation as a Sufficient Condition for High Data Quality on MTurk” [25]).

You can choose not to use some workers’ data without rejecting their work on MTurk, when appropriate. Using ‘majority rules’ (plurality) to evaluate the data you receive from batches is fine for internal analysis, but be very hesitant to actually reject workers’ HITs based on ‘majority rules’ results. Many of the better workers try to avoid ‘majority rules’ HITs, since they will often catch something that less attentive/experienced/knowledgeable workers miss, yet be rejected for being in the minority.

If you do reject workers unfairly, know how to undo it (see the official MTurk documentation [26]). You only have 30 days from the time the HITs were submitted to reverse their rejections. Workers would prefer to have it done as soon as possible, though, not anywhere near that 30-day limit; some requesters have reportedly taken so long to read and respond to a worker’s message about a rejection that the 30-day limit ran out before they tried to address it. Trying to make up for a rejection by issuing a bonus, without reversing the rejection, still leaves the worker with a mark counting against their ‘permanent record’ at MTurk that may take them below a qualification threshold necessary for certain other HITs.
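
In the Requester API, a reversal is simply an approval call with an override flag. A minimal sketch (assuming Python and boto3; `client` is a boto3 MTurk client or a test stub, and the feedback text is an example):

```python
def reverse_rejection(client, assignment_id):
    """Approve a previously rejected assignment. This only works within
    MTurk's ~30-day window after submission, so do it promptly."""
    client.approve_assignment(
        AssignmentId=assignment_id,
        RequesterFeedback='Apologies - the rejection was our mistake; '
                          'it has been reversed.',
        OverrideRejection=True,
    )
```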

It may be fair to reject some work in some situations; most turkers work conscientiously (demonstrated in studies such as “The promise of Mechanical Turk: How online labor markets can help theorists run behavioral experiments” (pdf) [27] and “Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self-Administered Surveys” (pdf) [28] ), but there are some who don’t. Before deciding a rejection is justified, just be sure you’ve considered the above factors (completion code malfunctions, ways you could help workers avoid accidental retakes, clarity of instructions, accuracy of ACs, and preferably not basing the decision solely on ‘majority rules’), and that your HITs weren’t malfunctioning in some other way.

References

[1] http://cloudmebaby.com/

[2] http://mturk.boards.net/

[3] http://mturkforum.com/

[4] http://www.mturkgrind.com/

[5] http://mturkwiki.net/forum

[6] http://www.reddit.com/r/mturk/

[7] http://www.reddit.com/r/HITsWorthTurkingFor

[8] http://www.turkernation.com/

[9] http://turkopticon.ucsd.edu

[10] http://www.mturkgrind.com/threads/25492-Mechanical-Turk-Software-Beyond-Scripts?p=264884&viewfull=1#post264884

[11] http://docs.aws.amazon.com/AWSMechTurk/latest/AWSMturkAPI/ApiReference_NotifyWorkersOperation.html

[12] http://crowdresearch.org/blog/?p=5177

[13] https://support.google.com/webmasters/answer/93710?hl=en

[14] http://thebehaviorallab.wordpress.com/2012/10/08/excluding-mturk-workers-from-surveys-in-qualtrics-and-elsewhere/

[15] http://experimentalturk.files.wordpress.com/2012/02/screening-amt-workers-on-qualtrics-5-2.pdf

[16] http://turktools.net/use/check.html

[17] http://uniqueturker.myleott.com/

[18] http://www.tylerjohnburleigh.com/?p=321

[19] http://www.tylerjohnburleigh.com/?p=496

[20] http://turkitron.com/

[21] http://www.perceptionstudies.com/turkcheck/createlink.php

[22] http://faculty.georgetown.edu/sjb247/tutorials/turkcheck/

[23] http://gideongoldin.github.io/TurkGate/

[24] http://pedmiston.org/turkgatemanager/

[25] http://experimentalturk.wordpress.com/2013/12/11/reputation-as-a-sufficient-condition-for-high-data-quality-on-mturk/

[26] http://docs.aws.amazon.com/AWSMechTurk/latest/RequesterUI/ReversingRejectedAssignment.html

[27] http://www.people.fas.harvard.edu/~drand/rand_jtb_2011.pdf

[28] http://www.michelemargolis.com/uploads/2/0/2/0/20207607/screener.pdf
