How often have you taken part in a study and thought, "they are forcing me to sound this way"? How often have you been given a limited set of answer choices when none aligned with your views? What becomes of that data? Was the study designed to give the requesters a voice on a topic they had already made up their minds about?
We have all seen articles claiming Mechanical Turk gives bad data, is full of bots, or is unreliable. While those claims aren't entirely false in my view, they also say a lot about how poorly the platform is understood and how little many researchers know about the tools it offers to help them build better studies.
They have done all the research, complete with analysis, charts, and graphs, but did they ask questions that could only be answered to their liking? Is MTurk really that flawed, or should we look instead at how studies are set up and how human participants are verified? You can't blame your tools when you don't know how to repair your car; likewise, MTurk can't be blamed when your study is poorly designed.
Many factors can affect research data, from recruiting the right participants at the outset, to building a data quality strategy, to demonstrating those same standards to your audience through proper self-evaluation and revision. Professional values are essential: only through honesty, objectivity, respect, responsibility, integrity, and impartiality can you show a strong work ethic, accountability, and a high standard of quality.
Researchers who refuse to learn how to use their tools properly are not that distant, ethically speaking, from participants who answer inattentively. Both are responsible for the bad data they produce and for what it means for future studies, and for the honest workers and requesters whose livelihoods depend on this kind of research.
And for those who simply read such studies: always remember to think for yourself, or others will do the thinking for you.