Everyone counts? Design considerations in online citizen science

Abstract: 

Effective classification of large datasets is a ubiquitous challenge across multiple knowledge domains. One solution gaining in popularity is to perform distributed data analysis via online citizen science platforms, such as the Zooniverse. The resulting growth in project numbers is increasing the need to improve understanding of the volunteer experience, as the sustainability of citizen science depends on our ability to design for engagement and usability. Here, we examine volunteer interaction with 63 projects, representing the most comprehensive collection of online citizen science project data gathered to date. This analysis demonstrates how subtle project design changes can influence many facets of volunteer interaction, including when and how much volunteers interact and, importantly, who participates. Our findings highlight the tension between designing for social good and broad community engagement, versus optimizing for scientific and analytical efficiency.

Published: 

17 January 2019

1 Context

During the last decade, an increasing number of research teams have deployed online citizen science projects to aid with data analysis [Brabham, 2008]. Typically, these projects invite volunteers to complete a classification task associated with a single element of data, such as an image, graph or video clip, with multiple volunteers examining each separate data point. The growth of this mode of distributed data analysis is being driven by the increased availability of datasets in many research disciplines, coupled with the widespread adoption of web-connected computer and mobile technology. In addition to being motivated by the need to produce annotated datasets for research purposes, project owners are frequently passionate about the potential of citizen science to engage the public in authentic research. This raises a range of interesting challenges and questions for those studying, designing and implementing citizen science projects, such as how to effectively satisfy the dual aims of citizen science: scientific efficiency and social inclusivity. Here, we use a large dataset of more than 60 online citizen science projects, representing the most comprehensive collection of online citizen science project data gathered to date, to study how project design affects volunteer behaviour, and ultimately the success of the project.

We focus on a well-established platform for online citizen science, the Zooniverse. The Zooniverse is the largest and most popular citizen science platform for data interpretation [Woodcock et al., 2017] (Figure 1 and Figure 2). It has provided a space for volunteers to contribute to more than 100 distinct research projects across multiple domains, including astronomy, ecology, medicine and the humanities. Although diverse in subject matter, Zooniverse projects are unified by a common theme of asking volunteers to perform simple tasks such as image classification and text transcription, and the aggregation of these projects onto one platform confers a unique opportunity to examine volunteer behaviour across projects.

Citizen science involves the collaboration of the general public with professional scientists to conduct research. The principal benefit of applying this method is that it enables research that would not otherwise be possible; although computer-based analysis can address many research questions, it is yet to surpass human ability in a number of areas, including recognition of the often complex patterns that occur within data [Cooper et al., 2010; Kawrykow et al., 2012]. Other potential benefits to online citizen science based research include a reduction in data processing time and cost [Sauermann and Franzoni, 2015], and the engagement of a more diverse crowd that may include typically underrepresented skills or demographic features [Jeppesen and Lakhani, 2010; Pimm et al., 2014]. Online citizen science projects are an effective form of public engagement, providing volunteers with an opportunity to learn more about science [Masters et al., 2016] and with an easily accessible means to make an authentic contribution to a research project, which can improve public engagement with, and advocacy of, scientific material [Forrester et al., 2017; Straub, 2016].

Despite the relative infancy of online citizen science, it has already contributed many noteworthy discoveries across diverse research domains, including the identification of new types of galaxies [Cardamone et al., 2009], models of large carnivore coexistence in Africa [Swanson et al., 2016], elucidation of protein structures relevant to HIV transmission [Cooper et al., 2010; Khatib et al., 2011], classification of cancer pathology [Candido dos Reis et al., 2015] and mouse retinal connectome maps [Kim et al., 2014]. The growing success of citizen science, in conjunction with significant reductions in the barriers to developing online citizen science projects, has led to an exponential growth in the number and diversity of projects. This is creating both an opportunity, and a need, to further study and understand the volunteer experience [Cox et al., 2015; Sauermann and Franzoni, 2015], as the sustainability of citizen science is dependent on our ability to design for volunteer engagement while optimising for scientific efficiency.

There has already been much work examining project design in the Zooniverse, though typically by researchers studying a small number of projects. For example, Jackson et al. investigated the effect of novelty on user motivation, showing that for one project (Higgs Hunters) a message informing volunteers when they were the first to see a particular data object increased participation [Jackson, Crowston et al., 2016]. Lee et al. found that messages which refer to learning, contribution and social engagement were more effective than direct appeals to altruism [Lee et al., 2017]. Segal et al. used active messaging about ‘helpfulness’ in the longest running Zooniverse project, Galaxy Zoo, to further increase engagement [Lintott et al., 2008; Segal et al., 2016]. Sprinks et al. recently found that participants prefer task designs with greater variety and autonomy, although fulfilling this preference did not improve performance [Sprinks et al., 2017]. Together, these studies reveal that user motivations, as discerned by studies of behaviour, are predictably complex. The effect of participation on volunteers may be assumed to be similarly complicated, though work by Masters et al. has shown that volunteers self-report learning both about the scientific topics with which they are engaged and about the process of science itself [Masters et al., 2016]. This work also shows that such learning about scientific topics is correlated with further engagement in the project.

None of these studies has taken advantage of the large number of projects now available on the Zooniverse platform; completing such a survey would allow us to identify aspects of project design which have significant effects. Because Zooniverse projects use a common codebase, they share many similar features (e.g. the flow from the landing page to the classification interface, the presence of discussion forums etc.); differences between projects are therefore largely due to fundamental design choices, such as the difficulty of the task set, the choice of dataset and the amount of data available. In this study, we employ the Zooniverse online citizen science platform as a ‘web observatory ecosystem’ to study volunteer behaviour across n = 63 projects. This work extends preliminary analyses previously presented as a poster at The Web Conference 2018 [Spiers et al., 2018] by quantifying and comparing additional project measures, utilizing the similar number of astronomy and ecology projects to assess the academic domain specificity of our observations, and closely examining unique findings associated with an individual Zooniverse project, Supernova Hunters. The analyses presented here provide quantitative evidence and insight relevant to researchers designing and developing online citizen science projects, and are informative to researchers studying the crowd-based production of knowledge.

2 Methods

To conduct the data analysis performed in this study, we first assembled the most comprehensive collection of Zooniverse classification data to date, including data from n = 63 Zooniverse projects (Figure 1, Table 6). Although all hosted on the same platform, these projects differ in domain (Figure 2), launch date (Figure 1) and task, amongst other variables such as level of social media engagement and press attention. An overview of project characteristics is provided in Table 6.


Cumulative Zooniverse project count.

Figure 1: Accumulation of projects on the Zooniverse. The launch of the n = 63 projects included in this study are indicated by vertical lines. The line colour indicates the project domain (Ecology = green, Astronomy = blue, Transcription = yellow, Other = Grey, Biomedical = red). The black line shows the cumulative number of projects launched on the Zooniverse platform.


Data for the n = 63 projects were obtained from Zooniverse databases. Projects were selected for inclusion in this analysis based on data availability. Project characteristics, including domain and launch date, are summarized in Table 6. Projects that have been rebuilt and relaunched were treated as separate projects for the purpose of this analysis (see Notes from Nature, Milky Way Project and Chicago Wildlife Watch in Table 6). Zooniverse projects can be highly variable in duration, from weeks to years; therefore, for ease of comparison across projects, a consistent observation window of 100 days post-launch was applied across all analyses. For logged-in, registered volunteers it is possible to describe volunteer-specific classification activity.
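As a minimal sketch of this windowing and the per-volunteer activity measures used throughout, assuming a hypothetical classification export with ‘project’, ‘user_id’ and ‘created_at’ columns (the real Zooniverse database schema may differ, and the study used recorded launch dates rather than inferring them):

```python
import pandas as pd

def window_100_days(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only classifications made within 100 days of each project's launch."""
    df = df.copy()
    df["created_at"] = pd.to_datetime(df["created_at"])
    # Proxy each project's launch date by its earliest classification;
    # the study itself used launch dates from the Zooniverse databases.
    launch = df.groupby("project")["created_at"].transform("min")
    return df[df["created_at"] <= launch + pd.Timedelta(days=100)]

def per_volunteer_counts(df: pd.DataFrame) -> pd.Series:
    """Number of classifications per registered volunteer, per project."""
    registered = df[df["user_id"].notna()]  # logged-in volunteers only
    return registered.groupby(["project", "user_id"]).size()
```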

Throughout this paper, the term ‘classification’ is used to denote a single unit of analysis on a project by a volunteer, such as the tagging of an image or a video, whereas the term ‘subject’ refers to a single data object such as an image, video or graph. Different classification types can vary significantly in the amount of effort they demand. For a detailed glossary of Zooniverse terms, see Simpson et al. [Simpson, Page and De Roure, 2014]. For further information about the Zooniverse platform, please see https://www.zooniverse.org.


Number of Zooniverse projects by academic domain.

Figure 2: The Zooniverse platform supports a similar number of ecology and astronomy projects. Of the n = 63 projects included in this study: Ecology n = 26, Astronomy n = 22, Transcription n = 7, Other n = 5, Biomedical n = 3.


To examine volunteer demographic features, data were extracted from Google Analytics (GA) (https://analytics.google.com/) for five astronomy and five ecology projects. To improve comparability between the projects examined and create a more uniform sample, we analysed the five most recently launched projects built using the project builder (https://www.zooniverse.org/lab) for both the astronomy (Gravity Spy, Milky Way (2016), Radio Meteor Zoo, Supernova Hunters and Poppin’ Galaxy) and ecology (Chicago Wildlife Watch (2016), Arizona Batwatch, Mapping Change, Camera CATalogue, Snapshot Wisconsin) domains, from our cohort of n = 63 projects. Several variables were extracted from GA for each project’s classify page for the first quarter of 2017 (January 1st 2017 to March 31st 2017; data obtained May 9th 2017). These included the number of ‘Page views’ (‘Page views is the total number of pages viewed. Repeated views of a single page are counted’), subset by the secondary variables of age and sex.

The breadth and depth of the data collected by GA, in addition to it being a free and easily accessible service, has led to GA becoming an accepted research tool and one of the most frequently used methods to measure website performance [Clark, Nicholas and Jamali, 2014]. GA has been successfully used to examine user behaviour [Crutzen, Roosjen and Poelman, 2013; Song et al., 2018], website effectiveness [Plaza, 2011; Turner, 2010] and web traffic [Plaza, 2009]. However, it does have a number of constraints. For example, although GA uses multiple sources (third-party DoubleClick cookies, Android advertising ID and iOS identifier for advertisers [Google, 2018]) to extract user demographic information (age, gender and interests), these data may not be available for all users. In the analyses presented here, demographic data were available for 49.83% of total users for the variable of age, and for 53.41% of total users for the variable of gender. A further limitation is that occasionally the demographic data reported by GA may reflect a sample of website visitors, and hence may not be representative of overall site composition. Although the data presented in this study represent a sample of website visitors, for each page examined this sample was large (representing hundreds to thousands of page views), therefore the impact of missing demographic information from individual users or mis-sampled data should be minimal. The possibility of ‘referrer spam’ and ‘fake traffic’ poses further limitations to the accuracy of reports from GA; however, these issues are less likely to influence non-commercial sites such as the Zooniverse.

3 Results

3.1 How heterogeneous is classification and volunteer activity across projects and academic domain?

A large amount of heterogeneity is found between the n = 63 Zooniverse projects for the total number of classifications received within the first 100 days post-launch (Figure 3a, Table 6). Notably, a difference of three orders of magnitude is observed between the project with the most classifications (Space Warps; total classifications n = 8,011,730) and the project with the fewest (Microplants; total classifications n = 8,577) (Table 6). This suggests that the inclusion of a citizen science project on a successful citizen science platform such as the Zooniverse does not by itself guarantee high levels of engagement, as measured by the number of classifications, and that some projects are far more successful at attracting classifications than others. Although initial inspection revealed a large difference between the number of classifications (within the first 100 days post-launch) for the n = 26 ecology projects (median = 224,054; interquartile range (IQR) = 78,928–758,447) compared to the n = 22 astronomy projects (median = 666,099; IQR = 150,642–1,178,327) (Table 1), this difference was not significant (P-value = 0.07, Mann-Whitney-Wilcoxon test), indicating that other variables likely play an important role in determining project metrics such as classification count.
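This domain comparison can be sketched as follows, assuming two hypothetical arrays holding the 100-day classification totals for the ecology and astronomy projects (placeholders, not the study’s actual data):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_domains(ecology_totals, astronomy_totals):
    """Two-sided Mann-Whitney-Wilcoxon test on per-project 100-day totals."""
    for name, totals in [("ecology", ecology_totals), ("astronomy", astronomy_totals)]:
        q1, med, q3 = np.percentile(totals, [25, 50, 75])
        print(f"{name}: median = {med:,.0f}, IQR = {q1:,.0f}-{q3:,.0f}")
    u, p = mannwhitneyu(ecology_totals, astronomy_totals, alternative="two-sided")
    return u, p
```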

The absolute number of classifications received by each project may be influenced by factors such as how popular the project is or how much publicity it has received. Therefore, to examine whether volunteers experienced projects differently once contributing, we next analysed the median number of classifications made by registered volunteers within 100 days post-project launch for each of the 63 projects (Figure 3b, Table 6). Again, a large amount of heterogeneity was observed between projects, with a broad spectrum from the project with the highest median number of classifications per registered volunteer (Pulsar Hunters; n = 100) to the projects with the lowest (Orchid Observers, Mapping Change and Decoding the Civil War; n = 3), and this difference was highly statistically significant in each case (P-value < 2.2e-16, Mann-Whitney-Wilcoxon test). This indicates that once volunteers are participating in a project, a variety of engagement levels are observed. However, no significant difference was found (P-value = 0.47, Mann-Whitney-Wilcoxon test) when comparing the median number of classifications per registered volunteer within the first 100 days post-launch for the n = 26 ecology projects (median = 15; IQR = 9–32) to the n = 22 astronomy projects (median = 17; IQR = 11–26).

a) Classifications per Zooniverse project within 100 days post-launch.

b) Median classifications per registered volunteer per project within 100 days post-launch.

c) Unique users per Zooniverse project within 100 days post-launch.
Figure 3: Project classification and volunteer activity is highly variable. Heterogeneity is observed among Zooniverse projects during the first 100 days post-launch for a) the total number of classifications received, b) the median number of classifications per volunteer and c) the number of unique volunteers contributing. Colours indicate domain; Ecology = green, Astronomy = blue, Transcription = yellow, Other = Grey, Biomedical = red.



a) Number of classifications a day on Asteroid Zoo.

b) Number of classifications a day on Supernova Hunters.

Figure 4: Supernova Hunters has a distinctive classification curve. A typical Zooniverse project has a classification curve displaying a peak of activity after launch that rapidly declines (a); however, there are exceptions to this observation, the most striking of which is the classification curve of the Supernova Hunters project (b). Volunteers return regularly to this project upon the weekly release of new project data. This figure has been adapted from a poster presented at the World Wide Web Conference 2018 [Spiers et al., 2018]. Redistributed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.



Table 1: Classifications during the first 100 days post-launch, by domain. Median, first and third quartile of number of classifications received by project subset by academic domain, for the first 100 days post-launch.

The majority of Zooniverse projects show similar temporal trends in classification activity — most projects possess a classification curve characterized by a high level of activity upon project launch (due to the sending of an e-newsletter to the Zooniverse volunteer community) that rapidly declines (Figure 4a), with intermittent spikes of activity, which can be the result of further project promotion, press coverage or the release of new data, amongst other factors. Of the 63 projects assessed, the Supernova Hunters project showed a striking exception to this trend and instead had a classification curve displaying a recurring peak of activity each week (Figure 4b). This pattern of classification activity has arisen from Supernova Hunters’ regular release of new project data and the concurrent e-newsletter notification sent to project volunteers [Wright et al., 2017]. Notably, volunteer activity on the Supernova Hunters project has begun to precede the sending of e-newsletter notifications, suggesting that volunteers anticipate the release of new project data, and are therefore deeply engaged.
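A classification curve of this kind can be sketched as a daily count series; the snippet below assumes a hypothetical pandas Series of one project’s classification timestamps and, again, proxies the launch date by the first classification:

```python
import pandas as pd

def classification_curve(timestamps: pd.Series) -> pd.Series:
    """Daily classification counts for a project's first 100 days post-launch."""
    ts = pd.to_datetime(timestamps).sort_values()
    launch = ts.iloc[0]  # proxy: the study used recorded launch dates
    window = ts[ts <= launch + pd.Timedelta(days=100)]
    return window.dt.floor("D").value_counts().sort_index()
```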


Table 2: Unique volunteers during the first 100 days post-launch, by domain. Median, first and third quartile of number of unique registered volunteers by project subset by academic domain, for the first 100 days post-launch.

We next examined the number of unique volunteers contributing to each project within the first 100 days post-launch, and its link to domain (Figure 3c). As with the number of classifications, the number of unique volunteers contributing to each project is highly heterogeneous (Table 2) and does not show any clear domain specificity; no significant difference (P-value = 0.07, Mann-Whitney-Wilcoxon test) is observed between astronomy (median = 3863; IQR = 1760–5906) and ecology (median = 1975; IQR = 1289–3273) projects. As expected, the number of volunteers contributing to a project within the first 100 days post-launch is positively correlated with the number of classifications received by that project (R = 0.64, P-value = 2.30e-08). However, this correlation is not perfect, suggesting varying levels of volunteer contribution depending on the project. Rather than being stochastic, these differing volunteer contributions to individual projects are likely due to project-specific variables. The heterogeneous contributions of volunteers to specific projects are also illustrated by the variable median number of classifications per project per day (Figure 5).
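The correlation reported above (R = 0.64) could be reproduced along the following lines, assuming parallel arrays of per-project volunteer counts and classification totals, and assuming a Pearson correlation (the coefficient type is not stated in the text):

```python
from scipy.stats import pearsonr

def volunteers_vs_classifications(n_volunteers, n_classifications):
    """Correlation between per-project unique volunteers and total classifications."""
    r, p = pearsonr(n_volunteers, n_classifications)
    return r, p
```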


Median classifications a day for the first 100 days following project launch, per project.

Figure 5: Zooniverse projects show heterogeneity in the number of classifications received a day. Projects are ordered by median classifications a day for the first 100 days following launch.


3.2 How skewed are volunteer contributions in Zooniverse projects?

Prior work has identified skewed volunteer contribution distributions in similar settings such as Wikipedia and open source software development [Franke and von Hippel, 2003; Ortega, Gonzalez-Barahona and Robles, 2008; Panciera, Halfaker and Terveen, 2009; Wilkinson, 2008] as well as Zooniverse projects [Cox et al., 2015; Sauermann and Franzoni, 2015]. We sought to extend previous analyses of Zooniverse projects by examining volunteer classification contribution inequality across a larger number of projects, which also enables assessment of our findings for domain-specific trends. Applying an approach frequently used to examine income inequality [Gastwirth, 1972], we plotted the Lorenz curve for the distribution of volunteers’ total classifications for each project (Figure 6a), and calculated corresponding Gini coefficients (Table 3).
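A minimal sketch of this computation, applied to a hypothetical array of one project’s per-volunteer classification counts:

```python
import numpy as np

def lorenz_curve(counts):
    """Cumulative share of classifications vs. cumulative share of volunteers."""
    c = np.sort(np.asarray(counts, dtype=float))       # ascending contributions
    cum = np.insert(np.cumsum(c), 0, 0.0) / c.sum()    # cumulative classification share
    vols = np.linspace(0.0, 1.0, len(c) + 1)           # cumulative volunteer share
    return vols, cum

def gini(counts):
    """Gini coefficient (0 = perfect equality); equals twice the area between
    the 45-degree line and the Lorenz curve."""
    c = np.sort(np.asarray(counts, dtype=float))
    n = len(c)
    ranks = np.arange(1, n + 1)
    return (2.0 * np.sum(ranks * c)) / (n * c.sum()) - (n + 1.0) / n
```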


Table 3: Project Gini coefficients. Volunteer classification contribution inequality was assessed by calculating Gini coefficients for each project.


a) Lorenz curves for all Zooniverse projects.

b, c) Volunteer classification contributions in Microscopy Masters.

d, e) Volunteer classification contributions in Supernova Hunters.
Figure 6: All Zooniverse projects display unequal volunteer classification contribution. a) Lorenz curves were plotted for all n = 63 projects to describe the inequality in number of classifications per registered volunteer, for the first 100 days post-launch. The plot shows the cumulative number of classifications versus the cumulative number of volunteers, with increased curvature of the Lorenz curve indicating stronger inequality in volunteer contribution. The black 45° line corresponds to total equality, which in this case would represent all users contributing equal numbers of classifications. Although all projects displayed volunteer classification contribution inequality, a large amount of variation was observed between the project displaying the lowest degree of inequality, Microscopy Masters (b, c), and the project displaying the highest, Supernova Hunters (d, e). Each plot is coloured by domain; Ecology = green, Astronomy = blue, Transcription = yellow, Other = grey, Biomedical = red. This figure has been adapted from a poster presented at the World Wide Web Conference 2018 [Spiers et al., 2018]. Redistributed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.


Across all n = 63 projects a large area is observed between the Lorenz curve and the 45° line (perfect equality). This indicates that, across all projects, a large fraction of classifications are provided by a relatively small number of volunteers. However, the degree of contribution inequality varied between Zooniverse projects — the Gini coefficients ranged from 0.54 for Microscopy Masters (Figure 6b, c) to 0.94 for Supernova Hunters (Figure 6d, e). Examining the mean Gini coefficient for each domain revealed similar values for ecology (0.80), astronomy (0.82), other (0.80) and transcription (0.81) projects, suggesting similar patterns of volunteer contribution across these domains.

Biomedical projects displayed a notably different average Gini coefficient of 0.67. This was significantly lower than the Gini coefficient observed for astronomy projects (P-value = 0.046, Mann-Whitney-Wilcoxon test), indicating a more unequal distribution of volunteer effort in astronomy projects and suggesting that astronomy projects may attract a contingent of more prolific classifiers than biomedical projects. Notably, a comparably low Gini coefficient (0.72) has been reported elsewhere for the online biomedical citizen science project Mark2Cure [Tsueng et al., 2016]. This raises an interesting line of enquiry for future analyses — is there something about biomedical projects that disincentivises return volunteers, hindering the development of prolific classifiers, or are biomedical projects more successful in attracting a large number of casual contributors? Could this be related to the perceived difficulty, or importance, of biomedical tasks? Further understanding of patterns in volunteer contribution equality across projects may provide insight regarding how to better design for the many or for the few, dependent on the particular project task.

3.3 Are Astronomy and Ecology projects associated with different volunteer demographics?

We next sought to describe basic demographic features of volunteers contributing to projects from differing domains. To perform these analyses, data were extracted from GA (https://analytics.google.com/) for the classification pages of five astronomy and five ecology projects (for a full description of approach, see the Methods section).

First, we examined the number of times the classification page was viewed for each project, subset by age of visitor, for both astronomy (Figure 7a) and ecology projects (Figure 7b). No consistent trend was observed across the five astronomy projects examined (Figure 7a): for example, Poppin’ Galaxy was more popular amongst the youngest age group (18–24) whereas Supernova Hunters was more popular amongst the oldest age group (65+), and other projects showed no clear age bias in classification page views.

We see more uniformity in percentage page views by age group for the five ecology projects examined (Figure 7b). Although there is no striking overall trend, the distribution appears bimodal for the majority of projects examined, with peaks in percentage page views for 25–34 year olds and 55–64 year olds. The consistency of these peaks between the projects examined, particularly for the 55–64 year olds, indicates that ecology projects are more popular within these age groups. Such observations could be used to inform project promotion strategies.

Comparing the average page view counts per age group for the five ecology projects to the five astronomy projects revealed a greater proportion of page views for astronomy projects relative to ecology projects for 18–24 year olds (Fisher’s Exact Test, odds ratio = 1.89, P-value = 8.86E-62) and 65+ year olds (Fisher’s Exact Test, odds ratio = 2.89, P-value = 4.16E-134). A smaller proportion of page views in astronomy projects relative to ecology projects was observed for 25–34 year olds (Fisher’s Exact Test, odds ratio = 0.77, P-value = 5.67E-13) and 45–54 year olds (Fisher’s Exact Test, odds ratio = 0.88, P-value = 5.11E-03), with a striking under-enrichment for 55–64 year olds (Fisher’s Exact Test, odds ratio = 0.39, P-value = 4.77E-156) (Table 4). These findings reflect the observations in Figure 7a and Figure 7b.
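Each of these comparisons can be framed as a 2×2 contingency table of page views inside versus outside a given age group, for astronomy versus ecology projects; a sketch with hypothetical counts:

```python
from scipy.stats import fisher_exact

def age_group_enrichment(astro_in, astro_rest, eco_in, eco_rest):
    """Two-sided Fisher's exact test on a 2x2 page-view table.

    An odds ratio > 1 indicates the age group is over-represented amongst
    astronomy page views relative to ecology page views.
    """
    odds_ratio, p = fisher_exact([[astro_in, astro_rest],
                                  [eco_in, eco_rest]])
    return odds_ratio, p
```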


Table 4: Page views by age group for ecology compared to astronomy projects. Average page views for ecology compared to astronomy projects, subset by the demographic feature of age. Data were extracted from GA (see Methods section).

Next, we compared classification page views subset by sex for the five astronomy and five ecology projects, with data extracted from GA (see Methods section). A clear male bias can be observed in percentage classification page views across the five astronomy projects examined (Figure 7c), whereas ecology projects see more equality in percentage classification page views between the sexes (Figure 7c). Comparing the average page view counts by sex for the five ecology projects to the five astronomy projects revealed a significantly greater proportion of males in astronomy projects compared to ecology projects (Fisher’s Exact Test, odds ratio = 3.92, P-value = 0), and a smaller proportion of females (Fisher’s Exact Test, odds ratio = 0.25, P-value = 0) (Table 5).


Table 5: Page views by sex for ecology compared to astronomy projects. Average page views for ecology compared to astronomy projects, subset by the demographic feature of Sex. Data were extracted from GA (see Methods section).


a) Page views by age for five astronomy projects.

b) Page views by age for five ecology projects.

c) Page views by sex for five astronomy and five ecology projects.
Figure 7: Domain-specific demographic features are observed for Zooniverse projects. Examining classification page views for astronomy and ecology projects, subset by age (a, b) and sex (c) revealed greater uniformity in page views by age group for ecology projects (b) compared to astronomy projects (a), and a clear male bias across astronomy projects (c). This figure has been adapted from a poster presented at the World Wide Web Conference 2018 [Spiers et al., 2018]. Redistributed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.


4 Discussion

To ensure the sustainability of distributed data analysis via online citizen science, we must further understand the volunteer experience. In the analysis presented here we utilize a well-established online citizen science platform, the Zooniverse, as a ‘web observatory ecosystem’ to examine multiple project measures from the most comprehensive collection of online citizen science project data gathered to date.

Key findings include the observation of a high level of heterogeneity, and little domain specificity, in the absolute number of classifications, the number of unique volunteers and the median number of classifications per volunteer per project in the 100 days following project launch. These observations indicate that variables beyond the host citizen science platform and academic domain likely play an important role in project success. Such variables may include task difficulty, level of project promotion, quality of researcher engagement with volunteers or even the inclusion of ‘charismatic’ features or species (consider penguins vs. wildebeest). Quantifying these variables and relating them to project measures represents a worthwhile future direction for this work. Although highly variable in the total number of classifications received, the classification curves of individual projects were broadly comparable, with all but one project, Supernova Hunters, displaying a characteristic peak of activity upon launch followed by a rapid decline in classification activity. Supporting previous studies, when examining volunteer classification contribution inequality we found that a large fraction of classifications are provided by a relatively small number of volunteers across all projects. Finally, demographic analysis of astronomy versus ecology projects indicated that projects appeal to a broad age range regardless of academic domain. In contrast, examining gender differences revealed a clear male bias amongst astronomy projects, whereas ecology projects showed greater variability in their gender balance.

In addition to these central findings, we identified a number of exceptional features associated with a single project: Supernova Hunters. Beyond possessing a notably unusual classification curve displaying weekly peaks of activity, reflecting the scheduled release of new data concurrent with the sending of a newsletter to project volunteers, the Supernova Hunters project also displayed the highest level of classification contribution inequality amongst its volunteers. In the context of other findings here, it is also worth noting that the Supernova Hunters project displayed both an age and a gender bias, with volunteers primarily male and over the age of 65.

Our observation of frequent, weekly spikes of classification activity concurrent with the release of small sets of data in the Supernova Hunters project indicates that a scarcity of data is associated with sustained volunteer engagement with a project. For the Supernova Hunters project, this model of activity has arisen organically due to the incremental production of subject data; the pattern of volunteer engagement observed for this project is therefore serendipitous rather than deliberately designed. Other researchers planning citizen science projects may wish to consider intentionally adopting a similar approach of incremental data release to encourage volunteer engagement. For example, a project with a large, pre-existing dataset could partition its subjects for gradual release, generating an artificial scarcity to encourage more frequent volunteer interaction, as sketched below.
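A hypothetical sketch of this partitioning, splitting a full subject set into fixed-size batches for scheduled weekly release (the function name and batch size are illustrative, not part of any Zooniverse API):

```python
from typing import Iterable, List

def weekly_batches(subject_ids: Iterable[str], batch_size: int) -> List[List[str]]:
    """Partition a pre-existing subject set into batches for weekly release."""
    ids = list(subject_ids)
    return [ids[i:i + batch_size] for i in range(0, len(ids), batch_size)]

# Example: release 500 subjects per week from a 10,000-subject dataset.
# batches = weekly_batches(all_subject_ids, batch_size=500)
```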

Adopting a model of generating artificial data scarcity within a project design may encourage increased volunteer interaction by creating a heightened sense of competition to interact with a limited dataset. A similar device of generating artificial scarcity is frequently applied in game or game-like design to drive further engagement [Deterding et al., 2011; Seaborn and Fels, 2015]. The motivation of competition is particularly relevant to a project such as Supernova Hunters, where there is the legitimate possibility of being the first person to discover a supernova in the relatively small amount of new data released each week. It is understandable that engaged volunteers would be highly motivated to return to a project upon the release of new data for the opportunity to be the first to discover a supernova, or, as observed by Jackson et al., volunteers may simply be motivated by being the first to see novel data [Jackson, Crowston et al., 2016].

However, our analyses indicate that perceived scarcity has effects beyond motivating volunteers to interact with a project more frequently. Supernova Hunters was identified as the most unequal project for volunteer classification contributions; a small cohort of highly dedicated volunteers therefore submits the majority of classifications in this project. The scarcity of data in this project, the consequent competition to classify, and the rapid processing of project subjects have limited the volunteer community of the Supernova Hunters project to a smaller group of highly dedicated individuals who are willing and able to return to the project upon data release.

There are a number of advantages to cultivating a community of dedicated and experienced volunteers who consistently return to a project upon data release, as this would enable quicker, and potentially more accurate, data processing. However, doing so may generate unexpected effects — our analyses indicate that this may be associated with a less diverse community of volunteers. We found the Supernova Hunters community to be demographically biased towards men of retirement age. It is possible that individuals who are unable to offer a regular, weekly time commitment during the working day have been excluded from contributing to this project. Additionally, the increased competition to classify, or the higher science capital perceived to be required to contribute, may make this project more appealing to older males, or, conversely, less appealing to other demographics. As shall be discussed, the impact of group diversity on project success remains to be fully delineated, particularly in online environments such as the Zooniverse where, although the majority of contributions are made independently and individually, there remains significant opportunity for group interaction and discovery through online fora [Boyajian et al., 2016]. Group diversity is commonly defined as differences in attributes between individuals resulting in the perception that others are different from oneself [van Knippenberg, De Dreu and Homan, 2004]. Although positive, negative and no relationships have been found between a group’s diversity and its performance, the literature is in agreement that there are two primary mechanisms through which diversity can impact group performance: the ‘informational’ or ‘decision-making’ perspective, and the ‘social categorization’ perspective [Chen, Ren and Riedl, 2010; van Knippenberg, De Dreu and Homan, 2004; van Knippenberg and Schippers, 2007; Williams and O’Reilly III, 1998].

As increased project participation has been found to develop the scientific knowledge of volunteers [Masters et al., 2016], cultivating a small, highly dedicated community of volunteers through gamification may benefit a more challenging project that requires a higher level of skill, such as Foldit [Cooper et al., 2010]. Although we do not examine the impact of sustained volunteer engagement on data quality in the Supernova Hunters project in the analyses presented here, this represents an interesting direction for future research. Further, sustained engagement may result in changed modes of volunteer participation over time, generating a higher proportion of experienced volunteers able to contribute to important community functions such as helping to onboard newcomers, answering questions and moderating discussions [Jackson, Østerlund et al., 2015; Mugar et al., 2014; Østerlund et al., 2014].

However, as we have seen here, cultivating a small, highly dedicated volunteer community can be associated with less diversity. As described in the social categorization perspective on the impact of group diversity on performance, smaller and more homogeneous groups may outperform heterogeneous groups because people categorize themselves and others into social groups based on differences in social attributes, and as a result have a more positive experience when working with others they consider similar [Chen, Ren and Riedl, 2010]. Therefore, a more homogeneous community has the potential to benefit a project. However, there are circumstances in which a less diverse community may have a negative impact on project success. For example, a cross-section of volunteers representative of the broader population may be essential for the validity of results in a health-related data collection project. Further, as stated in the ‘informational’ or ‘decision-making’ perspective on the impact of diversity on group performance, heterogeneous groups can outperform homogeneous groups due to their broader range of skills, knowledge and opinions. Beyond research objectives, a lack of group diversity may also curtail the potential of a citizen science project to achieve other aims, such as fostering scientific education within typically underserved communities. The effect of group diversity on the success of a citizen science project will, however, be influenced by many factors, including the type and complexity of the task involved, task interdependency and the group type and size [Horwitz and Horwitz, 2007; van Knippenberg and Schippers, 2007]. Further, these variables will have differing importance depending on whether a project is online or offline [Martins, Gilson and Maynard, 2004].

It should also be considered whether ‘designing for exclusivity’ is acceptable within the broader ethos of citizen science, and human-computer interaction more generally. Frequently, citizen science projects have the twin goals of contributing to scientific productivity and to ‘social good’, e.g. encouraging learning about and participation in science [Woodcock et al., 2017]. Those designing and implementing online citizen science projects should consider whether it is acceptable to the practice of citizen science to cultivate scenarios that intentionally restrict the opportunity of the full volunteer community to access a project, or whether they should be implementing inclusive design approaches and ‘designing for all’. Beyond the goal of encouraging diversity, researchers should consider the extent to which they can and should facilitate the accessibility of their project to underserved online communities, which may include people with low ICT skills, the elderly or individuals with reading difficulties, through the implementation of relevant inclusive design approaches such as universal usability [Shneiderman, 2000; Vanderheiden, 2000], user-sensitive inclusive design [Newell, 2011], designing for accessibility [Keates, 2006] and ability-based design [Wobbrock et al., 2011]; these factors should be examined closely in future work.

When considering the consequences of implementing design changes modelled on the observations made here, project owners must be cognisant of the dual aims of citizen science: to perform authentic research in the most efficient manner possible (scientific efficiency) and to allow a broad community to engage with real research (social inclusivity). The relationship between these two aims is nuanced. There may be instances where a project is more scientifically efficient if it is more exclusive; the scientific efficiency of a citizen science project may therefore occasionally directly conflict with the aim of social inclusivity. Although designing projects for efficiency via exclusivity is not ideal, it is not clear whether the alternative, reducing scientific efficiency in the name of inclusivity, is preferable. For example, in the Supernova Hunters project, social inclusivity could have been enhanced by providing greater opportunity to classify, through increasing the number of classifications collected per image beyond the number required. However, this would not only represent a wasteful application of volunteer time and enthusiasm, but may also undermine one of the most common motivations of volunteers — to make an authentic contribution [Raddick et al., 2010].

In conclusion, we present here quantitative evidence demonstrating how subtle changes to online citizen science project design can influence many facets of volunteer interaction, including who participates, when and how much. Through analysing the most comprehensive collection of citizen science project data gathered to date, we observe sustained volunteer engagement emerging from the incremental release of small amounts of data in a single Zooniverse project, Supernova Hunters. However, this increased interaction was observed in conjunction with high levels of classification contribution inequality and a demographically biased community of volunteers. This model of incremental data release has cultivated a scenario in which a large number of classifications are provided by a small community of volunteers, contrary to the typical Zooniverse project, in which a small number of classifications are provided by a large community. These observations illustrate the tension that can exist between designing a citizen science project for scientific efficiency versus designing for social inclusivity, a tension that stems from the liminal position of citizen science between research and public engagement.


Table 6: Project characteristics. This table details multiple measurements associated with the 63 projects examined in this study, with all variables calculated for the 100 days post-launch of each individual project. ‘N_all_classifications’: total number of classifications; ‘N_all_volunteers’: total number of unique, registered volunteers; ‘N_classifications_100_days’: number of classifications made within 100 days post-project launch; ‘Classifications_not_logged_in’: number of classifications made by volunteers who were not logged into the Zooniverse during the 100 days post-project launch; ‘Classifications_registered_users’: number of classifications made by logged-in, registered volunteers within 100 days post-project launch; ‘N_unique_users’: number of unique volunteers who classified during the 100 days post-project launch; ‘Least_classifications_single_user’: fewest classifications made by a single registered volunteer during the 100 days post-project launch; ‘Most_classifications_single_user’: most classifications made by a single registered volunteer during the 100 days post-project launch; ‘Mean_classifications_per_user’: mean classifications made by registered volunteers during the 100 days post-project launch; ‘SD_classifications_per_user’: standard deviation of the classifications made by registered volunteers during the 100 days post-project launch; ‘Mode_classifications’: mode of classifications made by registered volunteers during the 100 days post-project launch; ‘Median_classifications’: median classifications made by registered volunteers during the 100 days post-project launch.

References

Boyajian, T. S., LaCourse, D. M., Rappaport, S. A., Fabrycky, D., Fischer, D. A., Gandolfi, D., Kennedy, G. M., Korhonen, H., Liu, M. C., Moor, A., Olah, K., Vida, K., Wyatt, M. C., Best, W. M. J., Brewer, J., Ciesla, F., Csák, B., Deeg, H. J., Dupuy, T. J., Handler, G., Heng, K., Howell, S. B., Ishikawa, S. T., Kovács, J., Kozakis, T., Kriskovics, L., Lehtinen, J., Lintott, C., Lynn, S., Nespral, D., Nikbakhsh, S., Schawinski, K., Schmitt, J. R., Smith, A. M., Szabo, G., Szabo, R., Viuho, J., Wang, J., Weiksnar, A., Bosch, M., Connors, J. L., Goodman, S., Green, G., Hoekstra, A. J., Jebson, T., Jek, K. J., Omohundro, M. R., Schwengeler, H. M. and Szewczyk, A. (2016). ‘Planet Hunters IX. KIC 8462852 — where’s the flux?’ Monthly Notices of the Royal Astronomical Society 457 (4), pp. 3988–4004. https://doi.org/10.1093/mnras/stw218.

Brabham, D. C. (2008). ‘Crowdsourcing as a model for problem solving’. Convergence: The International Journal of Research into New Media Technologies 14 (1), pp. 75–90. https://doi.org/10.1177/1354856507084420.

Candido dos Reis, F. J., Lynn, S., Ali, H. R., Eccles, D., Hanby, A., Provenzano, E., Caldas, C., Howat, W. J., McDuffus, L.-A., Liu, B., Daley, F., Coulson, P., Vyas, R. J., Harris, L. M., Owens, J. M., Carton, A. F. M., McQuillan, J. P., Paterson, A. M., Hirji, Z., Christie, S. K., Holmes, A. R., Schmidt, M. K., Garcia-Closas, M., Easton, D. F., Bolla, M. K., Wang, Q., Benitez, J., Milne, R. L., Mannermaa, A., Couch, F., Devilee, P., Tollenaar, R. A. E. M., Seynaeve, C., Cox, A., Cross, S. S., Blows, F. M., Sanders, J., Groot, R. de, Figueroa, J., Sherman, M., Hooning, M., Brenner, H., Holleczek, B., Stegmaier, C., Lintott, C. and Pharoah, P. D. P. (2015). ‘Crowdsourcing the general public for large scale molecular pathology studies in cancer’. EBioMedicine 2 (7), pp. 681–689. https://doi.org/10.1016/j.ebiom.2015.05.009.

Cardamone, C., Schawinski, K., Sarzi, M., Bamford, S. P., Bennert, N., Urry, C. M., Lintott, C., Keel, W. C., Parejko, J., Nichol, R. C., Thomas, D., Andreescu, D., Murray, P., Raddick, M. J., Slosar, A., Szalay, A. and VandenBerg, J. (2009). ‘Galaxy zoo green peas: discovery of a class of compact extremely star-forming galaxies’. Monthly Notices of the Royal Astronomical Society 399 (3), pp. 1191–1205. https://doi.org/10.1111/j.1365-2966.2009.15383.x.

Chen, J., Ren, Y. and Riedl, J. (2010). ‘The effects of diversity on group productivity and member withdrawal in online volunteer groups’. In: Proceedings of the 28th international conference on Human factors in computing systems — CHI ’10. ACM Press. https://doi.org/10.1145/1753326.1753447.

Clark, D. J., Nicholas, D. and Jamali, H. R. (2014). ‘Evaluating information seeking and use in the changing virtual world: the emerging role of Google Analytics’. Learned Publishing 27 (3), pp. 185–194. https://doi.org/10.1087/20140304.

Cooper, S., Khatib, F., Treuille, A., Barbero, J., Lee, J., Beenen, M., Leaver-Fay, A., Baker, D., Popović, Z. and Players, F. (2010). ‘Predicting protein structures with a multiplayer online game’. Nature 466 (7307), pp. 756–760. https://doi.org/10.1038/nature09304.

Cox, J., Oh, E. Y., Simmons, B., Lintott, C. L., Masters, K. L., Greenhill, A., Graham, G. and Holmes, K. (2015). ‘Defining and Measuring Success in Online Citizen Science: A Case Study of Zooniverse Projects’. Computing in Science & Engineering 17 (4), pp. 28–41. https://doi.org/10.1109/MCSE.2015.65.

Crutzen, R., Roosjen, J. L. and Poelman, J. (2013). ‘Using Google Analytics as a process evaluation method for Internet-delivered interventions: an example on sexual health’. Health Promotion International 28 (1), pp. 36–42. https://doi.org/10.1093/heapro/das008.

Deterding, S., Dixon, D., Khaled, R. and Nacke, L. (2011). ‘From game design elements to gamefulness: defining “gamification”’. In: Proceedings of the 15th International Academic MindTrek Conference on Envisioning Future Media Environments — MindTrek ’11 (Tampere, Finland). New York, U.S.A.: ACM Press, pp. 9–15. https://doi.org/10.1145/2181037.2181040.

Forrester, T. D., Baker, M., Costello, R., Kays, R., Parsons, A. W. and McShea, W. J. (2017). ‘Creating advocates for mammal conservation through citizen science’. Biological Conservation 208, pp. 98–105. https://doi.org/10.1016/j.biocon.2016.06.025.

Franke, N. and von Hippel, E. (2003). ‘Satisfying heterogeneous user needs via innovation toolkits: the case of Apache security software’. Research Policy 32 (7), pp. 1199–1215. https://doi.org/10.1016/s0048-7333(03)00049-0.

Gastwirth, J. L. (1972). ‘The estimation of the Lorenz curve and Gini index’. The Review of Economics and Statistics 54 (3), p. 306. https://doi.org/10.2307/1937992.

Google (2018). About demographics and interests. URL: https://support.google.com/analytics/answer/2799357?hl=en (visited on 2018).

Horwitz, S. K. and Horwitz, I. B. (2007). ‘The effects of team diversity on team outcomes: a meta-analytic review of team demography’. Journal of Management 33 (6), pp. 987–1015. https://doi.org/10.1177/0149206307308587.

Jackson, C. B., Crowston, K., Mugar, G. and Østerlund, C. (2016). ‘“Guess what! You’re the first to see this event”’. In: Proceedings of the 19th International Conference on Supporting Group Work — GROUP ’16 (Sanibel Island, FL, U.S.A.). ACM, pp. 171–179. https://doi.org/10.1145/2957276.2957284.

Jackson, C. B., Østerlund, C., Mugar, G., Hassman, K. D. and Crowston, K. (2015). ‘Motivations for sustained participation in crowdsourcing: case studies of citizen science on the role of talk’. In: 2015 48th Hawaii International Conference on System Sciences. IEEE, pp. 1624–1634. https://doi.org/10.1109/hicss.2015.196.

Jeppesen, L. B. and Lakhani, K. R. (2010). ‘Marginality and problem-solving effectiveness in broadcast search’. Organization Science 21 (5), pp. 1016–1033. https://doi.org/10.1287/orsc.1090.0491.

Kawrykow, A., Roumanis, G., Kam, A., Kwak, D., Leung, C., Wu, C., Zarour, E., Sarmenta, L., Blanchette, M. and Waldispühl, J. (2012). ‘Phylo: a citizen science approach for improving multiple sequence alignment’. PLoS ONE 7 (3), e31362. https://doi.org/10.1371/journal.pone.0031362.

Keates, S. (2006). Designing for accessibility: a business guide to countering design exclusion. Boca Raton, FL, U.S.A.: CRC Press. https://doi.org/10.1201/9781482269543.

Khatib, F., DiMaio, F., Cooper, S., Kazmierczyk, M., Gilski, M., Krzywda, S., Zabranska, H., Pichova, I., Thompson, J., Popović, Z., Jaskolski, M. and Baker, D. (2011). ‘Crystal structure of a monomeric retroviral protease solved by protein folding game players’. Nature Structural & Molecular Biology 18 (10), pp. 1175–1177. https://doi.org/10.1038/nsmb.2119.

Kim, J. S., Greene, M. J., Zlateski, A., Lee, K., Richardson, M., Turaga, S. C., Purcaro, M., Balkam, M., Robinson, A., Behabadi, B. F., Campos, M., Denk, W. and Seung, H. S. (2014). ‘Space-time wiring specificity supports direction selectivity in the retina’. Nature 509 (7500), pp. 331–336. https://doi.org/10.1038/nature13240.

Lee, T. K., Crowston, K., Østerlund, C. and Miller, G. (2017). ‘Recruiting messages matter: message strategies to attract citizen scientists’. In: Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing — CSCW ’17 Companion (Portland, OR, U.S.A.). ACM Press, pp. 227–230. https://doi.org/10.1145/3022198.3026335.

Lintott, C. J., Schawinski, K., Slosar, A., Land, K., Bamford, S., Thomas, D., Raddick, M. J., Nichol, R. C., Szalay, A., Andreescu, D., Murray, P. and Vandenberg, J. (2008). ‘Galaxy Zoo: morphologies derived from visual inspection of galaxies from the Sloan Digital Sky Survey’. Monthly Notices of the Royal Astronomical Society 389 (3), pp. 1179–1189. https://doi.org/10.1111/j.1365-2966.2008.13689.x.

Martins, L. L., Gilson, L. L. and Maynard, M. T. (2004). ‘Virtual teams: what do we know and where do we go from here?’ Journal of Management 30 (6), pp. 805–835. https://doi.org/10.1016/j.jm.2004.05.002.

Masters, K., Oh, E. Y., Cox, J., Simmons, B., Lintott, C., Graham, G., Greenhill, A. and Holmes, K. (2016). ‘Science learning via participation in online citizen science’. JCOM 15 (3), A07. URL: https://jcom.sissa.it/archive/15/03/JCOM_1503_2016_A07.

Mugar, G., Østerlund, C., DeVries Hassman, K., Crowston, K. and Jackson, C. B. (2014). ‘Planet Hunters and Seafloor Explorers: legitimate peripheral participation through practice proxies in online citizen science’. In: Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing — CSCW ’14 (Baltimore, MD, U.S.A.). ACM Press. https://doi.org/10.1145/2531602.2531721.

Newell, A. F. (2011). ‘Design and the digital divide: insights from 40 years in computer support for older and disabled people’. Synthesis Lectures on Assistive, Rehabilitative and Health-Preserving Technologies 1 (1), pp. 1–195. https://doi.org/10.2200/s00369ed1v01y201106arh001.

Ortega, F., Gonzalez-Barahona, J. M. and Robles, G. (2008). ‘On the inequality of contributions to Wikipedia’. In: Proceedings of the 41st Annual Hawaii International Conference on System Sciences (HICSS 2008). IEEE. https://doi.org/10.1109/hicss.2008.333.

Østerlund, C. S., Mugar, G., Jackson, C., Hassman, K. D. and Crowston, K. (2014). ‘Socializing the crowd: learning to talk in citizen science’. Academy of Management Proceedings 2014 (1), p. 16799. https://doi.org/10.5465/ambpp.2014.16799abstract.

Panciera, K., Halfaker, A. and Terveen, L. (2009). ‘Wikipedians are born, not made: a study of power editors on Wikipedia’. In: Proceedings of the ACM 2009 international conference on Supporting group work — GROUP ’09. ACM Press, pp. 51–60. https://doi.org/10.1145/1531674.1531682.

Pimm, S. L., Jenkins, C. N., Abell, R., Brooks, T. M., Gittleman, J. L., Joppa, L. N., Raven, P. H., Roberts, C. M. and Sexton, J. O. (2014). ‘The biodiversity of species and their rates of extinction, distribution and protection’. Science 344 (6187), pp. 1246752–1246752. https://doi.org/10.1126/science.1246752.

Plaza, B. (2009). ‘Monitoring web traffic source effectiveness with Google Analytics: an experiment with time series’. Aslib Proceedings 61 (5), pp. 474–482. https://doi.org/10.1108/00012530910989625.

— (2011). ‘Google Analytics for measuring website performance’. Tourism Management 32 (3), pp. 477–481. https://doi.org/10.1016/j.tourman.2010.03.015.

Raddick, M. J., Bracey, G., Gay, P. L., Lintott, C. J., Murray, P., Schawinski, K., Szalay, A. S. and Vandenberg, J. (2010). ‘Galaxy Zoo: Exploring the Motivations of Citizen Science Volunteers’. Astronomy Education Review 9 (1), 010103, pp. 1–18. https://doi.org/10.3847/AER2009036. arXiv: 0909.2925.

Sauermann, H. and Franzoni, C. (2015). ‘Crowd science user contribution patterns and their implications’. Proceedings of the National Academy of Sciences 112 (3), pp. 679–684. https://doi.org/10.1073/pnas.1408907112.

Seaborn, K. and Fels, D. I. (2015). ‘Gamification in theory and action: a survey’. International Journal of Human-Computer Studies 74, pp. 14–31. https://doi.org/10.1016/j.ijhcs.2014.09.006.

Segal, A., Gal, Y., Kamar, E., Horvitz, E., Bowyer, A. and Miller, G. (2016). ‘Intervention strategies for increasing engagement in crowdsourcing: platform, predictions and experiments’. In: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence — IJCAI’16. New York, NY, U.S.A.: AAAI Press, pp. 3861–3867. URL: http://dl.acm.org/citation.cfm?id=3061053.3061159.

Shneiderman, B. (2000). ‘Universal usability’. Communications of the ACM 43 (5), pp. 84–91. https://doi.org/10.1145/332833.332843.

Simpson, R., Page, K. R. and De Roure, D. (2014). ‘Zooniverse: observing the World’s Largest Citizen Science Platform’. In: Proceedings of the 23rd International Conference on World Wide Web — WWW ’14 Companion (Seoul, Korea, 7th–11th April 2014). ACM Press, pp. 1049–1054. https://doi.org/10.1145/2567948.2579215.

Song, M. J., Ward, J., Choi, F., Nikoo, M., Frank, A., Shams, F., Tabi, K., Vigo, D. and Krausz, M. (2018). ‘A process evaluation of a web-based mental health portal (WalkAlong) using Google Analytics’. JMIR Mental Health 5 (3), e50. https://doi.org/10.2196/mental.8594.

Spiers, H., Swanson, A., Fortson, L., Simmons, B. D., Trouille, L., Blickhan, S. and Lintott, C. (2018). ‘Patterns of volunteer behaviour across online citizen science’. In: Companion Proceedings of The Web Conference 2018 — WWW ’18 (Lyon, France). https://doi.org/10.1145/3184558.3186945.

Sprinks, J., Wardlaw, J., Houghton, R., Bamford, S. and Morley, J. (2017). ‘Task workflow design and its impact on performance and volunteers’ subjective preference in Virtual Citizen Science’. International Journal of Human-Computer Studies 104, pp. 50–63. https://doi.org/10.1016/j.ijhcs.2017.03.003.

Straub, M. C. P. (2016). ‘Giving citizen scientists a chance: a study of volunteer-led scientific discovery’. Citizen Science: Theory and Practice 1 (1), p. 5. https://doi.org/10.5334/cstp.40.

Swanson, A., Arnold, T., Kosmala, M., Forester, J. and Packer, C. (2016). ‘In the absence of a “landscape of fear”: how lions, hyenas and cheetahs coexist’. Ecology and Evolution 6 (23), pp. 8534–8545. https://doi.org/10.1002/ece3.2569.

Tsueng, G., Nanis, S. M., Fouquier, J., Good, B. M. and Su, A. I. (2016). ‘Citizen science for mining the biomedical literature’. Citizen Science: Theory and Practice 1 (2), p. 14. https://doi.org/10.5334/cstp.56.

Turner, S. J. (2010). ‘Website statistics 2.0: using Google Analytics to measure library website effectiveness’. Technical Services Quarterly 27 (3), pp. 261–278. https://doi.org/10.1080/07317131003765910.

van Knippenberg, D., De Dreu, C. K. W. and Homan, A. C. (2004). ‘Work group diversity and group performance: an integrative model and research agenda’. Journal of Applied Psychology 89 (6), pp. 1008–1022. https://doi.org/10.1037/0021-9010.89.6.1008.

van Knippenberg, D. and Schippers, M. C. (2007). ‘Work group diversity’. Annual Review of Psychology 58 (1), pp. 515–541. https://doi.org/10.1146/annurev.psych.58.110405.085546.

Vanderheiden, G. (2000). ‘Fundamental principles and priority setting for universal usability’. In: Proceedings on the 2000 conference on Universal Usability — CUU ’00. ACM Press. https://doi.org/10.1145/355460.355469.

Wilkinson, D. M. (2008). ‘Strong regularities in online peer production’. In: Proceedings of the 9th ACM conference on Electronic commerce — EC ’08 (Chicago, IL, U.S.A.). ACM Press, pp. 302–309. https://doi.org/10.1145/1386790.1386837.

Williams, K. Y. and O’Reilly III, C. A. (1998). ‘Demography and diversity in organizations: a review of 40 years of research’. Research in Organizational Behavior 20, pp. 77–140.

Wobbrock, J. O., Kane, S. K., Gajos, K. Z., Harada, S. and Froehlich, J. (2011). ‘Ability-based design: concept, principles and examples’. ACM Transactions on Accessible Computing 3 (3), pp. 1–27. https://doi.org/10.1145/1952383.1952384.

Woodcock, J., Greenhill, A., Holmes, K., Graham, G., Cox, J., Oh, E. Y. and Masters, K. (2017). ‘Crowdsourcing citizen science: exploring the tensions between paid professionals and users’. Journal of Peer Production. URL: http://peerproduction.net/issues/issue-10-peer-production-and-work/peer-reviewed-papers/crowdsourcing-citizen-science-exploring-the-tensions-between-paid-professionals-and-users/.

Wright, D. E., Lintott, C. J., Smartt, S. J., Smith, K. W., Fortson, L., Trouille, L., Allen, C. R., Beck, M., Bouslog, M. C., Boyer, A., Chambers, K. C., Flewelling, H., Granger, W., Magnier, E. A., McMaster, A., Miller, G. R. M., O’Donnell, J. E., Simmons, B., Spiers, H., Tonry, J. L., Veldthuis, M., Wainscoat, R. J., Waters, C., Willman, M., Wolfenbarger, Z. and Young, D. R. (2017). ‘A transient search using combined human and machine classifications’. Monthly Notices of the Royal Astronomical Society 472 (2), pp. 1315–1323. https://doi.org/10.1093/mnras/stx1812.

Authors

Helen Spiers is the Biomedical Research Lead for The Zooniverse citizen science platform and a Postdoctoral Research Associate based in the Department of Astrophysics at the University of Oxford. Prior to starting her current position, she obtained a PhD in developmental epigenetics from King’s College London. E-mail: helen.spiers@physics.ox.ac.uk.

Alexandra Swanson is a behavioral and community ecologist interested in finding innovative technological solutions to conservation issues. As part of her PhD on mechanisms of coexistence among carnivores, she established a large-scale camera trap survey and partnered with the Zooniverse to build a citizen science platform to engage volunteers in classifying the millions of photographs produced. She formerly led the ecology initiatives at the Zooniverse and is now an AAAS Science and Technology Policy Fellow working on wildlife conservation policy issues. E-mail: ali@zooniverse.org.

Lucy Fortson is a Professor of Physics at the University of Minnesota. She is an observational astrophysicist studying galaxies and a cofounder of the Zooniverse citizen science platform. In collaboration with the University of Oxford and the Adler Planetarium, Fortson leads the Zooniverse team developing human-computation algorithms to maximize the utility of the citizen science method of data processing. These methods are critical to extracting knowledge from the large amounts of data produced across disciplines from astronomy to zoology. E-mail: lfortson@umn.edu.

Brooke Simmons is a lecturer in Physics at Lancaster University. She is Deputy Project Scientist for Galaxy Zoo and leads the Zooniverse’s humanitarian and disaster relief arm. Dr Simmons’ research uses citizen science for pattern detection in multiple disciplines, including astrophysics. E-mail: b.simmons@lancaster.ac.uk.

Laura Trouille is Vice President of Citizen Science at the Adler Planetarium in Chicago, co-Principal Investigator for the Zooniverse, and a Research Associate at Northwestern University. While earning her PhD in astrophysics, she also earned the Center for the Integration of Research, Teaching and Learning’s Delta certificate for STEM education research. The Adler-Zooniverse group collaborates closely with the Zooniverse web developers and team of researchers at the University of Oxford and the University of Minnesota. E-mail: trouille@zooniverse.org.

Samantha Blickhan is the Humanities Lead for the Zooniverse and an IMLS Postdoctoral Fellow at the Adler Planetarium in Chicago, IL. She received her PhD in musicology from Royal Holloway, University of London. E-mail: samantha@zooniverse.org.

Chris Lintott is a professor of astrophysics at the University of Oxford, where he is also a research fellow at New College. He is the principal investigator of both the Zooniverse and Galaxy Zoo, and his work includes investigations of galaxy evolution, planet populations and the possibilities of human-machine collaboration. E-mail: chris.lintott@physics.ox.ac.uk.

How to cite

Spiers, H., Swanson, A., Fortson, L., Simmons, B. D., Trouille, L., Blickhan, S. and Lintott, C. (2019). ‘Everyone counts? Design considerations in online citizen science’. JCOM 18 (01), A04. https://doi.org/10.22323/2.18010204.

Endnotes

1. The low number of classifications received by the Microplants project is likely because this project was promoted primarily in a museum setting rather than to the broader Zooniverse community.