Guidelines for Academic Requesters

Version 1.1 (10/2/2014)

The Dynamo guidelines were published in 2014, and Turkopticon is working to update them so they remain fair and in step with current standards. Have feedback on the guidelines, or want to get involved in making them up to date, robust, and effective? Email Dynamo@turkopticon.net to talk to an organizer.

Contents

Information for Requesters
Information for Workers

Guidelines

Clearly identify yourself to give workers a sense that you are accountable and responsible

Your HIT should include a consent or intro page with the following information:

  • the full name/s of the researcher/s responsible for the HIT’s project;

  • the university/organization/s they’re affiliated with and its state/country;

  • their department name, lab, project group, etc;

  • a direct line of communication, including an email address for reaching the researchers and for contacting the IRB (phone calls may cost Turkers money)

Also, convey as much information as you can in your:

  • requester display name

  • HIT description

  • HIT preview

Why? Workers are generally more willing to take a chance on a requester they’re not familiar with (particularly one who hasn’t yet been reviewed by any workers on Turkopticon) when the requester clearly identifies themselves. Academic requesters seem legitimate by virtue of their position. Also, academic requesters are part of a university ‘chain of command’ with IRB oversight and a means of redressing worker grievances should something go wrong.

What about my privacy? Turkers who want to know (for the above reasons) can often figure out much of this information about an academic requester who doesn’t provide it; however, this takes workers’ time and effort, and burns their goodwill.

Example: When a large batch of HITs was posted by a new requester with no Turkopticon reviews, whose only visible identification was a first-name-only requester display name, some Turkers hesitated, trying to decide whether it was too risky to do more than a few. Once a Turker identified the requester’s full name and affiliation with a major university, Turkers felt confident enough to do a larger number of those HITs.

Example: Researchers working on spam algorithms did not identify themselves in HITs. Turkers grew concerned that the HITs were coming from spammers trying to bypass filters. Turkers avoided doing the HITs and posted negative reviews and discussion comments.

Provide reasonable time estimates

State up front how long the task is likely to take for a careful person unfamiliar with the task. Know that task experts always underestimate how long it takes for novices to complete a task [Hinds 1999]. Err on the side of overestimation to avoid disappointment and frustration.

Why? Turkers calculate estimated earnings based on time estimates, and their target earnings inform their choice of HITs. If a HIT takes longer than estimated, Turkers may speed through it to keep it to the requester-provided estimate, hurting quality and damaging requester reputation.

Approve work as soon as possible

Set your auto-approval time as short as reasonably possible; seven days is generally sufficient. Many requesters approve work in less than 3 days, and some in less than 24 hours. Many workers rely on MTurk to pay bills and manage their cash flow, so timely pay makes a big difference in their lives.
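For illustration only, a minimal sketch of setting a short auto-approval window and approving submitted work promptly, assuming you script MTurk with the boto3 client in Python; the endpoint, HIT parameters, question file, and feedback text are placeholders, not part of these guidelines.

    import boto3

    # Placeholder endpoint; use the sandbox endpoint while testing.
    mturk = boto3.client('mturk', region_name='us-east-1',
                         endpoint_url='https://mturk-requester.us-east-1.amazonaws.com')

    # Create the HIT with a short auto-approval delay (3 days here) so workers
    # get paid even if you forget to review submissions promptly.
    hit = mturk.create_hit(
        Title='Example survey (placeholder)',
        Description='Placeholder description',
        Reward='1.50',
        MaxAssignments=9,
        LifetimeInSeconds=7 * 24 * 3600,
        AssignmentDurationInSeconds=30 * 60,
        AutoApprovalDelayInSeconds=3 * 24 * 3600,   # auto-approve after 3 days
        Question=open('question.xml').read(),       # placeholder question form
    )

    # Approve submitted assignments as soon as you have checked them,
    # rather than waiting for the auto-approval deadline.
    submitted = mturk.list_assignments_for_hit(
        HITId=hit['HIT']['HITId'], AssignmentStatuses=['Submitted'])
    for a in submitted['Assignments']:
        mturk.approve_assignment(AssignmentId=a['AssignmentId'],
                                 RequesterFeedback='Thank you!')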

Maintain worker privacy

Don’t require workers to provide personally identifying information to complete your HITs. This includes:

  • email address

  • birth dates

  • real names

  • Facebook logins

Don’t require workers to register on external sites that collect this kind of personal information, or to log in with Facebook, in order to complete your HITs.

If you don’t follow the Terms of Service [3], particularly in the ways listed above that pose direct threats to workers, some workers will give your requester account negative Turkopticon reviews with flags for ToS violations and report your HITs to Amazon.

Abide by AMT Terms of Service

When you established a requester account with Amazon Mechanical Turk, you agreed to abide by Amazon’s MTurk Terms of Service (ToS). To conform with these guidelines, academic researchers using AMT should provide their IRB with a copy of the ToS as part of their application for IRB approval.

The MTurk Terms of Service include some protections for Turker privacy and systems. See a list of prohibited uses of Amazon Mechanical Turk in the MTurk ‘General Policies’ FAQ page [3] or in the ‘General Policies’ section of the MTurk Requesters FAQ page [4]. Note that requiring users to download software is against AMT’s Terms of Service. Some workers are willing to download software, but others will refuse as it can be a security risk to their systems.

Ensure conditions for rejecting work are clear and fair

Rejections leave workers with a mark counting against them on their ‘permanent record’ at MTurk that may take them below a qualification threshold necessary for certain other HITs. Before deciding a rejection is justified, be sure you’ve considered several factors:

  • State any reasons for which you plan to automatically reject submissions.

  • Test your instructions and attention checks with compensated workers to ensure they are not ambiguous or unclear.

  • Make sure your survey will actually provide the promised completion code to workers who complete it, and that the code is correctly saved in your database. Learn how to do completion codes well (see the sketch after the example below).

  • Keep lines of communication with workers open through email and forums. Workers run into ‘edge cases’, particularly in large batch HITs.

  • Don’t reject workers solely on the basis of majority agreement, even if you use majority voting internally for your analyses.

  • Reject work only as a last resort. Know how to undo a rejection [5] before you do. After thirty days, a rejection can never be reversed. Don’t be in a hurry to pull the trigger.

Example: There have been several situations in which requesters wrongly rejected large numbers of workers for ‘incorrect completion codes’. The requesters were randomly generating the codes, but the codes were not being correctly stored in their databases for matching.
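A minimal sketch of one way to handle completion codes so this failure mode cannot happen: generate the code server-side, store it before it is ever shown to the worker, and check submissions against the stored values. This assumes a Python survey backend; the function names and flat-file store are hypothetical (use your survey database in practice).

    import secrets

    CODE_FILE = 'completion_codes.txt'   # hypothetical store; replace with your survey database

    def issue_completion_code(worker_id):
        """Generate a code, persist it FIRST, then show it to the worker."""
        code = secrets.token_hex(4).upper()          # e.g. '9F3A1C7B'
        with open(CODE_FILE, 'a') as f:
            f.write(f'{worker_id}\t{code}\n')        # stored before it is ever displayed
        return code

    def code_is_valid(worker_id, submitted_code):
        """Accept a submission only if that exact code was stored for that worker."""
        wanted = submitted_code.strip().upper()
        with open(CODE_FILE) as f:
            return any(line.strip() == f'{worker_id}\t{wanted}' for line in f)

Even with a check like this, treat near-misses (typos, extra spaces) generously and err on the side of paying the worker.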

Do not block workers to avoid duplicate subjects

Blocks should be used only for workers acting in bad faith, because blocks can result in workers being suspended by Amazon. Suspensions of this type are equivalent to a permanent ban in most cases; this simple mistake can cost livelihoods.

Say up front if you do not want duplicates. However, recognize that workers cannot easily remember whether they participated in your survey several months ago. There are several tools requesters can use when setting up their HITs to make this easier, rather than expecting workers to keep your records. Learn how to avoid duplicate subjects (retakes) fairly (a sketch follows below).
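One fair approach is to exclude past participants at the platform level instead of rejecting retakes after the fact. A minimal sketch, assuming the boto3 MTurk client in Python; the qualification name and worker IDs are placeholders.

    import boto3

    mturk = boto3.client('mturk', region_name='us-east-1')

    # Create a "has already participated" qualification once per study (placeholder name).
    qual = mturk.create_qualification_type(
        Name='Already took Example Study (placeholder)',
        Description='Assigned to workers who completed an earlier wave of this study.',
        QualificationTypeStatus='Active')
    qual_id = qual['QualificationType']['QualificationTypeId']

    # Assign it to everyone who took part in earlier waves (placeholder worker IDs).
    for worker_id in ['AWORKERID1', 'AWORKERID2']:
        mturk.associate_qualification_with_worker(
            QualificationTypeId=qual_id, WorkerId=worker_id,
            IntegerValue=1, SendNotification=False)

    # When posting the new HIT, require that this qualification does NOT exist,
    # so past participants never see the HIT and cannot accidentally retake it.
    exclusion = {
        'QualificationTypeId': qual_id,
        'Comparator': 'DoesNotExist',
        'ActionsGuarded': 'DiscoverPreviewAndAccept',
    }
    # ...then pass [exclusion] as QualificationRequirements to create_hit().

This keeps the burden of remembering prior participation on the requester’s records, where it belongs, rather than on workers’ memories.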

Maintain a responsive line of communication with Turkers

Check the email account associated with your MTurk requester account frequently. Respond to messages from workers as quickly as possible, preferably in less than 24 hours. Visit worker forums to seek advice and find knowledgeable Turkers to vet your HIT.

Pay Turkers fairly. They are a workforce, not a volunteer study population

Crowdsourcing workers are a labor force. Many depend on crowdsourcing as a critical source of income. Crowdsourcing workers are legally considered contractors and therefore are not protected by minimum wage laws. When requesters pay a fair wage and treat workers like people, both sides benefit.

Pay (at least) community norms of minimum Turking wage

Many workers consider $0.10 a minute the minimum to be considered ethical, though many studies pay more, and there are strong arguments for doing so. Tasks paying less than $0.10 a minute are likely to tap into a highly vulnerable work pool, and such pay constitutes coercion.

Since Turkers work independently, they are responsible for their own computers, electricity, taxes, health care, etc. Different workers consider fair pay to be anywhere from $6 an hour to $22 an hour.

If your task takes longer than you predicted, you can send workers bonuses to bring the wages up to ethical levels after the fact. In July 2014, a requester did this unexpectedly for workers who took one of their surveys, basing their target pay rate on Washington state’s $9.32/hr minimum wage.
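A minimal sketch of topping pay up after the fact, assuming the boto3 MTurk client in Python; the target rate, original reward, and per-assignment times are placeholders drawn from your own records, and send_bonus pays out of your requester account like any other payment.

    import boto3

    mturk = boto3.client('mturk', region_name='us-east-1')

    TARGET_RATE = 9.32 / 60      # dollars per minute (placeholder: WA minimum wage)
    ORIGINAL_REWARD = 1.00       # what the HIT actually paid (placeholder)

    # (assignment_id, worker_id, minutes actually worked) -- placeholders from your records
    completed = [('3EXAMPLEASSIGNMENT', 'AEXAMPLEWORKER', 18.0)]

    for assignment_id, worker_id, minutes in completed:
        owed = round(minutes * TARGET_RATE - ORIGINAL_REWARD, 2)
        if owed > 0:
            mturk.send_bonus(
                WorkerId=worker_id,
                AssignmentId=assignment_id,
                BonusAmount=f'{owed:.2f}',           # the API expects a string dollar amount
                Reason='Bonus to bring pay up to our target hourly rate. Thank you!',
                UniqueRequestToken=f'topup-{assignment_id}')  # guards against double payment on retries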

Clearly communicate possible bonuses

Explain what the potential bonus amount will be, how to earn it, and how soon workers should expect it to be paid. Pay bonuses promptly.

Compensate for qualifier/screener surveys

If you are using qualifier surveys, compensate all those who correctly complete the survey.

Do not experiment with forum relationships for research

Forums only work because of delicate relationships of trust and mutual aid among participants. Sociological experiments such as breaching experiments can sow discord and destroy relationships. Positivist research that attempts to control and measure a forum’s effects can confuse workers, create anxieties in the community, and drain community energy as members try to make sense of the unusual intervention. To learn how a forum works, talk to its administrators about your project, your goals, and a plan for creating mutually beneficial research with workers.

Example: One academic experiment simulated requesters with varying ratings in Turkopticon to measure the effects of ratings on worker behavior and outcomes. Turkers found some of the requesters and smelled something fishy, but did not know whether it was a scam, academic research, vandalism, or something else; through what amounted to at least 50 hours of sleuthing over two days, Turkers across reddit and turkopticon-discuss hypothesized that it was a research project. The researcher wanted to make positivist knowledge claims about ratings, workers, and the economics of Turking, but neither he nor the IRB understood that:

  • simulating fabricated requesters and reviews broke the fragile trust that makes Turkopticon ratings meaningful to workers

  • worker harm includes not only unpaid wages on AMT, but also the time workers spent anxiously trying to track down these mysterious apparitions

Epigraph

“Turking is work, even if it is for science, and academic researchers shouldn’t assume that people are happy to do it for fun. They should pay and respect people’s time.” – Dr. Lilly Irani (of Turkopticon), Department of Communication, University of California at San Diego

“What we need to do is teach requesters about the human side of Mturk. Mturk encourages anybody that uses Mturk to think of us as little computing units, not as people.” – Project2501 (a Turker)

“Dehumanization is the result of an unjust order that engenders violence in the oppressors, which in turn dehumanizes the oppressed. Because it is a distortion of being more fully human, sooner or later being less human leads the oppressed to struggle against those who made them so. In order for this struggle to have meaning, the oppressed must not, in seeking to regain their humanity (which is a way to create it), become in turn oppressors of the oppressors, but rather restorers of the humanity of both. This, then, is the great humanistic and historical task of the oppressed: to liberate themselves and their oppressors as well.” – Freire’s Pedagogy of the Oppressed

“Turkers are people, the work they do might feel like magic at times but at the end of the day we can’t forget that they’re human beings just like you and me.” – William Kyle Hamilton

References

[1] http://guidelines.wearedynamo.org/

[2] http://wearedynamo.org/Guidelines_for_Academic_Requesters.pdf

[3] https://www.mturk.com/mturk/help?helpPage=policies

[4] https://requester.mturk.com/help/faq#restrictions_use_mturk

[5] http://mechanicalturk.typepad.com/blog/2013/01/reverse-rejected-assignments-in-the-requester-user-interface.html

[6] http://www.wearedynamo.org/

[7] https://www.mturk.com/mturk/searchbar?selectedSearchType=hitgroups&requesterId=A2XJMS2J2FMVXK

 
