I wrote my master's thesis at Aalto University in Finland, where I worked for eight months as a research assistant in the Strategic Usability group. During this time, I worked on an EU-funded project that aimed to build a crowdsourced translation system. When a task is crowdsourced, it is distributed to many so-called "workers", each of whom contributes to the final result. In our system, as in many other crowdsourcing applications, workers are paid for each task they complete. I helped design the first version of the system, which was piloted by translating content from Europe's largest refugee information portal into the languages most commonly spoken by refugees.
Crowdsourcing was chosen because it has the potential to reduce the cost and time required for translations. The challenge lies in assuring that quality is high enough for publication, since crowdsourcing often produces low-quality results. To address this, I started with a literature study in which I identified and analyzed the main concepts underpinning crowdsourcing research. These include workflows that can improve the quality of crowdsourced output, as well as design approaches that increase the motivation of crowd workers.
I then conducted expert interviews with professionals from the translation industry, which gave me valuable insights into how that industry works. Not only was I able to better define the information that needs to be tracked about translators and translation jobs, but I also learned about Computer-Aided Translation (CAT) software. Several of its features, such as suggestions for new tasks based on previously completed translations, are very useful for a massively collaborative application like crowdsourcing.
I designed a crowdsourcing workflow that takes the possibility of low-quality results into account. The workflow incorporates iterative and parallel tasks, so mistakes made by crowd workers are filtered out before they become too expensive to correct. I created paper prototypes to test the feasibility of this workflow with real users.
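The parallel-then-filter idea can be sketched as a small loop. This is only an illustration of the general shape of such a workflow, not the project's actual implementation; the function names (`translate`, `select_best`, `find_errors`) are hypothetical stand-ins for the crowd steps.

```python
def crowdsource_translation(sentence, translate, select_best, find_errors,
                            n_workers=3, max_rounds=2):
    """Run a sentence through parallel translation, selection, and
    error checking; retry while reviewers still flag errors."""
    best = None
    for _ in range(max_rounds):
        # Parallel step: several workers translate independently.
        candidates = [translate(sentence) for _ in range(n_workers)]
        # Selection step: other workers pick the best candidate.
        best = select_best(candidates)
        # Verification step: reviewers flag remaining mistakes.
        if not find_errors(best):
            return best
    return best  # best effort after max_rounds
```

Catching a bad translation at the selection or verification step is cheap; catching it after publication is not, which is why the checks sit between rounds rather than at the very end.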
After evaluating two iterations of paper prototypes, I created digital prototypes in Framer. The video on the left shows the basic translation interface, whose features were inspired by Computer-Aided Translation (CAT) software: it generates suggestions for the current translation task based on a database of previously completed translations. These suggestions can cover full sentences or certain important terms.
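A translation-memory lookup of this kind can be approximated with fuzzy string matching. The sketch below is my own illustration of the concept, assuming a similarity threshold and an in-memory list of past translations; the example sentences and the `suggest` function are hypothetical, not taken from the system.

```python
import difflib

# Hypothetical memory of previously completed (source, target) pairs.
MEMORY = [
    ("Where is the registration office?", "Missä on rekisteröintitoimisto?"),
    ("You need a residence permit.", "Tarvitset oleskeluluvan."),
]

def suggest(source, memory=MEMORY, threshold=0.6):
    """Return stored translations whose source sentence is similar
    enough to the current one, best matches first."""
    scored = []
    for src, tgt in memory:
        ratio = difflib.SequenceMatcher(None, source.lower(), src.lower()).ratio()
        if ratio >= threshold:
            scored.append((ratio, src, tgt))
    return [(src, tgt) for _, src, tgt in sorted(scored, reverse=True)]
```

Production CAT tools use far more sophisticated matching, but the basic contract is the same: given the sentence a worker is translating, surface similar past translations ranked by similarity.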
In crowdsourcing, it is common to have multiple users perform the same task. This produces multiple results, from which the best one must be selected. The interface on the left lets other crowd workers take part in this selection. I chose a sorting interface because the users of the paper prototypes preferred it over other selection methods.
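Each worker's sort produces an ordering of the candidates, and the orderings then have to be combined. One simple way to do that, shown here purely as an illustration (the thesis may have used a different aggregation rule), is a Borda count:

```python
from collections import defaultdict

def borda(rankings):
    """Combine several workers' orderings of the same candidates into
    one consensus ordering. Each ranking lists candidates best-first."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for pos, candidate in enumerate(ranking):
            scores[candidate] += n - pos  # best gets n points, worst gets 1
    return sorted(scores, key=scores.get, reverse=True)
```

With three workers ranking candidates A, B, and C, a candidate placed first by two workers and second by the third wins even though no single worker's ordering is taken as authoritative.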
Translation is difficult. To prevent errors from reaching the final text, I created this error identification interface, a digital version of what users did with the paper prototypes. Users can mark errors at the word level or leave a comment about the entire sentence.
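When several reviewers mark errors on the same sentence, their word-level flags can be combined by simple voting. This is a minimal sketch of that idea, assuming each review is a set of word indices; the function and the vote threshold are illustrative, not the system's actual logic.

```python
from collections import Counter

def flagged_word_indices(reviews, min_votes=2):
    """Return indices of words that at least `min_votes` reviewers
    flagged as errors. Each review is a set of word indices."""
    votes = Counter(i for review in reviews for i in review)
    return sorted(i for i, v in votes.items() if v >= min_votes)
```

Requiring agreement between reviewers filters out spurious flags from a single careless worker, which matters in a paid crowd where not every contribution is diligent.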