Keywords: Crowdsourcing; Pragmatic Quality; Hedonic Quality; Complex-Problem; Software Development
Critics of the wisdom of crowds suggest that collective wisdom may only be useful for simple problems and may be difficult to apply to complex problems such as software development. As problem solving through crowdsourcing becomes increasingly common, it is essential to determine whether the wisdom of crowds can be applied to solve complex problems.
There are two alternative streams of research on the legitimacy of the crowd's/customer's complex problem-solving abilities. One stream suggests that crowds are mostly novices and lack the domain expertise needed to participate in and solve complex problems such as product innovation and development [33, 43, 5]. The other stream argues that "innovation is being democratized" [51], meaning that crowds/customers of products and services know their requirements, are able to contribute toward the development of a product, and can solve complex problems [9, 32, 51].
Research on Complex Problem Solving (CPS) has revealed a wide variety of thoughts and insights about the characteristics and operationalization of complex problems [17]. The research community is still debating which definition should be widely accepted, what is "complex" in CPS, and how to evaluate the complexity of problems [42]. In group environments, CPS faces challenges such as coordination of tasks, lack of domain expertise among community members, lack of motivation, and sustainability of the community [31]. Such difficulties with CPS have scarcely been examined in the crowdsourcing domain. Although the crowdsourcing business model supports creativity and problem solving [32], the use of crowdsourcing for software development differs from general crowdsourcing [55]. These research gaps suggest that research on complex problem solving in crowdsourcing environments is valuable in addressing the question of whether the wisdom of crowds produces quality solutions for complex problems such as software development.
Lanier argues that crowd wisdom is inadequate for solving a creative or innovative problem; collective wisdom is useful only when a problem is adequately defined, its solution is simple, and the collective output is aggregated through quality control that depends to a high degree on individuals [33]. Other researchers suggest that crowdsourcing can be used to solve complex problems [8, 20, 30]. It should be noted that software development is itself a complex and creative activity: the production of a tangible product (software) may require processes such as requirements analysis, design, coding, and testing [55].
To address the veracity of the two alternative claims about crowdsourcing, we propose the following specific research question: Does software developed through the crowdsourcing business model provide the same or better Perceived Quality (PQ) than software developed by professionals?
A popular use of crowdsourcing is to perform micro tasks (routine tasks) that are easy for humans to perform but difficult for machines [16]. Micro tasks are repetitive tasks that can be executed in minutes, e.g., identifying a person in a photo, verifying a phone number, or writing reviews. In these types of problems, the solution is known and the objective is clear. Although organizations also use the crowdsourcing model to solve complex problems such as software development, current research at the intersection of crowdsourcing and complex problem solving lacks a systematic experimental investigation [34]. Leicht et al. [35] performed a structured literature review of top IS and software engineering journals and conferences to assess the current state of crowdsourced software development research and concluded that this research is still in a nascent phase. They reported that almost 60% of the research in crowdsourced software development takes a systems perspective, about 40% concerns crowdsourcing applications in software development, and only one paper dealt with user perspectives.
Although the crowdsourcing business model supports creativity and problem solving [32], the use of crowdsourcing for software development differs from general crowdsourcing [55]. According to Wu et al. [55], software crowdsourcing needs to support the rigorous engineering disciplines of software development; stimulate creativity in software development tasks through the wisdom of the crowd; address the psychological issues of crowdsourcing such as competition, openness, sharing, collaboration, and learning; address the financial aspects and recognition for various stakeholders; ensure the quality of the software product; and address liability issues in case of failure. A key feature of software crowdsourcing is that it is a contest-based crowdsourcing model. In a contest-based crowdsourcing model, a problem owner who faces an innovation-related problem posts this problem to a large independent crowd and then provides a reward to the agent who produces the best solution [50]. While competitions promote creativity and support quality software development, they may also reduce massive collaboration [55]. A contest-based crowdsourcing model also promotes the min-maxing nature of game playing by different people with different roles [50].
This research simplifies and adapts crowdsourcing to a software development context, in particular a website development project. Understanding and managing website structures is a complex task [11]. Like any other software development effort, website development can involve requirements analysis, design, and implementation, which makes it a complex, challenging, and creative process [55].
Hassenzahl and Tractinsky define user experience as a "consequence of a user's internal state (predispositions, expectations, needs, motivation, mood, etc.), the characteristics of the designed system (complexity, purpose, usability, functionality, etc.) and the environment within which the interaction occurs (organizational/social setting, meaningfulness of the activity)" [22]. According to Alben, user experience encompasses "all the aspects of how people use an interactive product (feeling, understanding, sensations) fit to the context" [1]. According to Forlizzi and Battarbee, "emotion is at the heart of any human experience and an essential component of user-product interactions and user experience" [18]. In fact, "UX is a momentary, primarily evaluative feeling (good-bad) while interacting with a product or service" [23]. Various definitions and concepts of UX have been proposed, but the common theme across all of them is that UX is an outcome of interactions between a user and a product, in the form of the user's perceptions and emotions.
Although researchers share these two premises, they have used two different concepts to define user experience. One group suggests uncovering the objective within the subjective and has developed model-based approaches (a reductionist approach). The other group argues that UX is deeply subjective in nature, that this subjectivity should be inherent to the concept of UX, and has accordingly developed a framework of thought (a phenomenological approach).
Hassenzahl presented a hedonic/pragmatic model of user experience. This model suggests that users first perceive product features, such as content, presentation, functionality, and interaction style, and from them form a personal version of the apparent product character (pragmatic attributes and hedonic attributes) [21]. This apparent product character leads to consequences, such as a product's appeal (good-bad), its emotional consequences (satisfaction and pleasure), and its behavioral consequences (increased usage). The consequences are not always the same and may be moderated by specific usage situations.
Pragmatic quality refers to a product's perceived ability to support the fulfillment of functions or intended tasks. Hassenzahl refers to these functions or tasks as "do goals" or instrumental goals (the software performing its intended tasks) [23]. Pragmatic quality thus focuses on the utility and usability of products for their intended tasks. Hedonic quality refers to individual psychological well-being and is mostly associated with pleasure. According to Hassenzahl [23], hedonic quality refers to a product's perceived ability to achieve the "be goals," such as being "competent" in relation to others. Hassenzahl emphasized that good UX stems from the fulfillment of the human needs for autonomy, competency, stimulation (self-oriented), relatedness, and popularity (other-oriented) through interacting with a product or service [23].
In summary, we propose a conceptual model adapted from Hassenzahl [21] to address the research question in this paper (see Figure 1). A conceptual model is a graphical lens for communicating the specification of things, events, or processes [52]. Drawing on previous theoretical studies, this study proposes that the development approach (by crowdsourcing or by IT professionals) has an impact on perceived quality, moderated by the complexity of the problem. Perceived quality of the product is measured in terms of pragmatic quality, hedonic quality stimulation, and hedonic quality identification.
We use a two-phase process to investigate the research question. The first phase involved the development of software by the crowd and by professionals. In the second phase, we evaluated the software developed through the crowdsourcing business model and by professionals on three key perceived quality dimensions (pragmatic quality, hedonic quality stimulation, and hedonic quality identification) to compare the quality of software developed by crowds and by professionals.
i. Independent Variable: development approach
ii. Dependent Variables: pragmatic quality, hedonic quality stimulation, and hedonic quality identification
After the software was developed, students at UNO who were not part of the crowd that developed it were asked to participate in a survey designed to measure the perceived quality of the systems. We used existing measures to evaluate pragmatic quality, hedonic quality stimulation, and hedonic quality identification, namely the survey questionnaire developed by Hassenzahl [24]. The instrument is composed of 7-point Likert-scale items designed specifically to measure the perceived quality of a software product.
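As an illustration of how such Likert-scale instruments are typically scored, the sketch below averages a respondent's item ratings into one score per construct. The item counts and ratings here are hypothetical assumptions for illustration, not the actual layout of the Hassenzahl [24] instrument.

```python
from statistics import mean

# Hypothetical respondent: each construct (pragmatic quality, hedonic
# quality stimulation/identification) has several 7-point Likert items;
# the respondent's construct score is the mean of its items.
# Item counts below are illustrative assumptions only.
responses = {
    "PQL":  [5, 6, 4, 5, 5],  # pragmatic quality items
    "HQSL": [4, 5, 3],        # hedonic quality stimulation items
    "HQIL": [6, 5, 5],        # hedonic quality identification items
}

# One score per construct, on the same 1-7 scale as the items
scores = {construct: mean(items) for construct, items in responses.items()}
print(scores)
```

Averaging items this way is what produces the fractional per-respondent scores (e.g. minima such as 1.2222 or 1.3333) summarized in Table 1.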
Table 1 summarizes the descriptive statistics, while Table 2 and Table 3 show the results of the multivariate and univariate analyses.
A total of 66 students from UNO participated in the survey. The participants were undergraduate or graduate students from different departments at UNO. Table 1 shows the average rating for hedonic quality identification (HQIL) as 5.2 and for hedonic quality stimulation (HQSL) as 4.6 in group 2, higher than the corresponding group 1 averages of 4.3 and 4.0.
Table 1. Descriptive statistics

| | Group | N | Mean | Std. Deviation | Std. Error | 95% CI Lower Bound | 95% CI Upper Bound | Minimum | Maximum |
|---|---|---|---|---|---|---|---|---|---|
| HQIL | 1 | 66 | 4.2778 | 1.3491 | .1661 | 3.9461 | 4.6094 | 1.2222 | 6.8889 |
| | 2 | 66 | 5.2037 | .7530 | .0927 | 5.0186 | 5.3888 | 3.2222 | 6.6667 |
| | Total | 132 | 4.7407 | 1.1834 | .1030 | 4.5370 | 4.9445 | 1.2222 | 6.8889 |
| PQL | 1 | 66 | 4.9515 | 1.0949 | .1348 | 4.6824 | 5.2207 | 2.0000 | 7.0000 |
| | 2 | 66 | 5.2212 | .7885 | .0971 | 5.0274 | 5.4150 | 3.0000 | 6.8000 |
| | Total | 132 | 5.0864 | .9600 | .0836 | 4.9211 | 5.2517 | 2.0000 | 7.0000 |
| HQSL | 1 | 66 | 4.0354 | 1.2758 | .1570 | 3.7217 | 4.3490 | 1.3333 | 7.0000 |
| | 2 | 66 | 4.6465 | 1.0984 | .1352 | 4.3765 | 4.9165 | 1.3333 | 6.6667 |
| | Total | 132 | 4.3409 | 1.2249 | .1066 | 4.1300 | 4.5518 | 1.3333 | 7.0000 |
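The standard errors and confidence intervals in Table 1 follow the usual summary-statistic formulas. As a sketch, the HQIL group 1 row can be reproduced from its mean, standard deviation, and N; the t critical value below is a looked-up constant, and small rounding differences against the table remain.

```python
import math

# Reproduce the HQIL (group 1) row of Table 1 from summary statistics:
# SE = SD / sqrt(N), and the 95% CI is mean +/- t * SE.
# t_crit ~ 1.9971 is the two-sided 95% critical value of Student's t
# with N - 1 = 65 degrees of freedom (looked up, not computed here).
n, m, sd = 66, 4.2778, 1.3491
t_crit = 1.9971

se = sd / math.sqrt(n)        # standard error of the mean
lower = m - t_crit * se       # 95% CI lower bound
upper = m + t_crit * se       # 95% CI upper bound

print(f"SE={se:.4f}, 95% CI=({lower:.4f}, {upper:.4f})")
# agrees with Table 1 (SE=.1661, CI 3.9461-4.6094) up to rounding
```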
Table 2. Multivariate tests

| Effect | Test | Value | F | Hypothesis df | Error df | Sig. |
|---|---|---|---|---|---|---|
| Intercept | Pillai's Trace | 0.977 | 1840.310a | 3.000 | 128.000 | .000 |
| | Wilks' Lambda | 0.023 | 1840.310a | 3.000 | 128.000 | .000 |
| | Hotelling's Trace | 43.132 | 1840.310a | 3.000 | 128.000 | .000 |
| | Roy's Largest Root | 43.132 | 1840.310a | 3.000 | 128.000 | .000 |
| Cat | Pillai's Trace | 0.157 | 7.960a | 3.000 | 128.000 | .000 |
| | Wilks' Lambda | 0.843 | 7.960a | 3.000 | 128.000 | .000 |
| | Hotelling's Trace | 0.187 | 7.960a | 3.000 | 128.000 | .000 |
| | Roy's Largest Root | 0.187 | 7.960a | 3.000 | 128.000 | .000 |

a. Exact statistic
b. Design: Intercept + Cat
Table 3. Tests of between-subjects effects (univariate ANOVA)

| Source | Dependent Variable | Type III Sum of Squares | df | Mean Square | F | Sig. |
|---|---|---|---|---|---|---|
| Corrected Model | HQIL | 28.29a | 1 | 28.29 | 23.705 | .000 |
| | PQL | 2.4b | 1 | 2.4 | 2.637 | .107 |
| | HQSL | 12.32c | 1 | 12.32 | 8.697 | .004 |
| Cat | HQIL | 28.29 | 1 | 28.29 | 23.705 | .000 |
| | PQL | 2.4 | 1 | 2.4 | 2.637 | .107 |
| | HQSL | 12.32 | 1 | 12.32 | 8.697 | .004 |
| Error | HQIL | 155.16 | 130 | 1.19 | | |
| | PQL | 118.34 | 130 | 0.91 | | |
| | HQSL | 184.22 | 130 | 1.42 | | |
| Total | HQSL | 2,683.89 | 132 | | | |
| Corrected Total | HQIL | 183.45 | 131 | | | |
| | PQL | 120.74 | 131 | | | |
| | HQSL | 196.55 | 131 | | | |

a. R Squared = .154 (Adjusted R Squared = .148)
b. R Squared = .020 (Adjusted R Squared = .012)
c. R Squared = .063 (Adjusted R Squared = .055)
To compare the perceived quality of the website developed through the crowdsourcing model against the one developed by professionals, we conducted a multivariate analysis of variance (MANOVA), since there were three dependent variables: HQIL, HQSL, and PQL. The alternative hypothesis in our study is that the development approach (crowdsourcing versus professional software development) has an effect on all three dependent variables.
Referring to Table 2, we see that the p-value is very close to zero and less than the significance level (alpha = 0.05); therefore, the development approach (crowdsourcing versus professional software development) has an effect on the three dependent variables HQIL, HQSL, and PQL.
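For a one-way MANOVA with two groups, Pillai's trace converts to an exact F statistic, which is where the F value and degrees of freedom in Table 2 come from. A sketch of that conversion, using the rounded values from Table 2:

```python
# Two-group one-way MANOVA: Pillai's trace V yields an exact F via
#   F = (V / (1 - V)) * (df_error / df_hypothesis),
# with df_hypothesis = p (number of DVs) and df_error = N - g - p + 1.
# V is the rounded value reported in Table 2 for the Cat effect.
V = 0.157             # Pillai's trace, development-approach (Cat) effect
N, g, p = 132, 2, 3   # observations, groups, dependent variables

df_hyp = p                # 3, as in Table 2
df_err = N - g - p + 1    # 128, as in Table 2

F = (V / (1 - V)) * (df_err / df_hyp)
print(f"F({df_hyp}, {df_err}) = {F:.2f}")
# close to the 7.960 in Table 2; the small gap is rounding of V
```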
The MANOVA procedure also provides univariate ANOVA results for the three dependent variables. Table 3 shows that the p-values for HQSL and HQIL are close to zero, suggesting that the development approach has an effect on HQSL and HQIL. For PQL, the p-value is 0.107, greater than the significance level, which suggests that the development approach has no significant effect on PQL.
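As a consistency check, the univariate F for HQIL in Table 3 can be rebuilt from the group summaries in Table 1 (equal group sizes assumed, values rounded as published):

```python
# One-way ANOVA for HQIL from the Table 1 group summaries:
# between-group SS from group means, within-group SS from group SDs.
n = 66                       # per-group sample size
m1, sd1 = 4.2778, 1.3491     # group 1 mean / standard deviation
m2, sd2 = 5.2037, 0.7530     # group 2 mean / standard deviation

grand = (m1 + m2) / 2        # grand mean (equal group sizes)
ss_between = n * ((m1 - grand) ** 2 + (m2 - grand) ** 2)
ss_within = (n - 1) * (sd1 ** 2 + sd2 ** 2)

df_between, df_within = 1, 2 * n - 2   # 1 and 130, as in Table 3
F = (ss_between / df_between) / (ss_within / df_within)
print(f"SS_between={ss_between:.2f}, SS_within={ss_within:.2f}, F={F:.2f}")
# matches Table 3 up to rounding: 28.29, 155.16, F = 23.705 (p < .001)
```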
Although various scholars have studied the crowdsourcing phenomenon through the lens of software development, more research is warranted from the user perspective [35]. To the best of our knowledge, this study is the only experimental study on crowdsourcing and complex problem solving.
First, to our knowledge the user experience model has not previously been applied in the IS discipline, and in particular not in the crowdsourcing domain. We go beyond existing studies of crowdsourced software development by offering a deeper understanding of perceived quality, not only in terms of the utility and usability of the software but also in terms of general human needs. Existing studies of crowdsourced software development have mostly examined the phenomenon through a couple of crowdsourcing organizations such as TopCoder and InnoCentive [34]. In this study we answer the call of researchers who have emphasized the need for more detailed study of crowdsourcing and complex problems, and of crowdsourced software development [33, 34].
Second, a systematic literature survey of top IS conferences and journals reveals that theoretical research motivating the design of crowdsourcing-related artifacts is least common, and that there is still very little research on traditionally popular topics such as adoption and complex problem solving in the crowdsourcing context. The conceptual model provided in this study offers a solid starting point for continuing crowdsourcing research by extending our knowledge of traditional organizational work arrangements and of crowdsourcing-based models for solving complex problems. Based on the results of our experiment, we can argue that at least the instrumental goals can be achieved by the crowdsourcing-based model; i.e., crowdsourced software is able to fulfill the "do goals."
- Alben L. Defining the Criteria for Effective Interaction Design. Interactions. 1996;3(3):11-15.
- Baskerville R L, Myers M D. Fashion Waves in Information Systems Research and Practice. MISQ. 2009;33(4):647-662.
- Baumoel U, Georgi S, Ickler H. and Jung R. Design of New Business Models for Service Integrators by Creating Information-Driven Value Webs Based on Customers' Collective Intelligence. Proceedings of the 42nd HICSS. 2009:1-10.
- Blohm I, Riedl C, Leimeister J M, Krcmar H. Idea Evaluation Mechanism for Collective Intelligence in Open Innovation Communities: Do Traders Outperform Raters?. Proceedings of the 2011 ICIS. 2011.
- Bidault F, Cummings T. Innovating Through Alliances: Expectations and Limitations. R&D Management. 1994;24(2):33-45.
- Boehm B W. Software Engineering Economics. Englewood Cliffs (NJ): Prentice-hall. 1981.
- Bonabeau E. Decisions 2.0: The Power of Collective Intelligence. MIT Sloan Management Review. 2009;50(2):45-52.
- Brabham D C. Moving the crowd at Threadless: Motivations for Participation in a Crowdsourcing Application. Information, Communication & Society. 2010;13(8):1122-1145.
- Brabham D C. Crowd Sourcing the Public Participation Process for Planning Projects. Planning Theory. 2009;8(3):242-262.
- Brandel M. Crowdsourcing: Are you ready to ask the world for answers? Computerworld. 2008;42(10):24-26.
- Coda F, Ghezzi C, Vigna G and Garzotto F. Towards a Software Engineering Approach to Web Site Development. In Software Specification and Design. Proceedings. Ninth International Workshop on IEEE. 1998:8-17.
- Colomo-Palacios R, Tovar-Caro E, García-Crespo Á and Gómez-Berbís J M. Identifying Technical Competences of IT Professionals. In Professional Advancements and Management Trends in the IT Sector. 2012;1:1-14.
- Datta R. Collective Intelligence: Tapping into the Wisdom of Crowds. KM Review. 2008;11(3):3.
- Davis J and Lin H. Web 3.0 and Crowdservicing. In Proceedings of the 2011 AMCIS. 2011.
- Doan A, Ramakrishnan R and Halevy A Y. Crowdsourcing Systems on the World-Wide Web. CACM. 2011;54(4):86-96.
- Erickson L B, Petrick I, Trauth E M. Hanging with the Right Crowd: Matching Crowdsourcing Need to Crowd Characteristics. In: Proceedings of the Eighteenth Americas Conference on Information Systems. 2012.
- Fischer A, Greiff S and Funke J. The Process of Solving Complex Problems. Journal of Problem Solving. 2011;4(1):19-42.
- Forlizzi J, Battarbee K. Understanding Experience in Interactive Systems. In Proceedings of the 5th conference on Designing interactive systems: processes, practices, methods, and techniques ACM. 2004: 261-268.
- Füller J, Mühlbacher H, Matzler K and Jawecki G. Consumer Empowerment Through Internet-Based Co-Creation. JMIS. 2009;26(3):71-102.
- Guinan E, Boudreau K J and Lakhani K R. Experiments in Open Innovation at Harvard Medical School. MIT Sloan Management Review. 2013;54(3):45-52.
- Hassenzahl M. The thing and I: understanding the relationship between user and product. In Funology Springer Netherlands. 2003:31-42.
- Hassenzahl M, Tractinsky N. User Experience-a Research Agenda. Behavior & information technology. 2006;25(2):91-97.
- Hassenzahl M. User experience (UX): Towards an Experiential Perspective on Product Quality. In Proceedings of the 20th International Conference of the Association Francophone d'Interaction Homme-Machine ACM. 2008.
- Hassenzahl M, Eckoldt K, Diefenbach S, Laschke M, Len E and Kim J. Designing Moments of Meaning and Pleasure. Experience Design and Happiness. International Journal of Design. 2013;7(3).
- Haythornthwaite C. Crowds and Communities: Light and Heavyweight Models of Peer Production. In Proceedings of the 42nd HICSS. 2009.
- Howe J. The Rise of Crowdsourcing. Wired. 2006;14(6).
- Howe J. Crowdsourcing, Why the Power of the Crowd is Driving the Future of Business. NY: Crown Business. 2008.
- Huysman M and Wulf V. IT to Support Knowledge Sharing in Communities, towards a Social Capital Analysis. JIT. 2006;21(1):40-51.
- Jarvenpaa S L, Dickson G W and DeSanctis G. Methodological Issues in Experimental IS Research: Experiences and Recommendations. MISQ. 1985;9(2):141-156.
- Jeppesen L B, Lakhani K R. Marginality and Problem-Solving Effectiveness in Broadcast Search. Organization Science. 2010;21(5):1016-1033.
- Kittur A, Smus B, Khamkar S and Kraut R E. CrowdForge: Crowdsourcing Complex Work. In Proceedings of 2011 ACM Symposium on User Interface Software and Technology. 2011:43-52.
- Kittur A. Crowdsourcing, Collaboration and Creativity. ACM Crossroads. 2010;17(2):22-26.
- Lanier J. You are Not a Gadget. NY: Random House Digital. 2010.
- Lakhani K R, Boudreau K J, Loh P R, et al. Prize-Based Contests can Provide Solutions to Computational Biology Problems. Nature biotechnology. 2013;31(2):108-111.
- Leicht N, Durward D, Blohm I and Leimeister J M. Crowdsourcing in Software Development: A State-of-the-Art Analysis. 28th Bled eConference. 2015.
- Maier N R. Problem Solving and Creativity in Individuals and Groups. CA: Brooks/Cole Publishing Co. 1970.
- Mao K, Capra L, Harman M and Jia Y. A Survey of the Use of Crowdsourcing in Software Engineering. RN. 2015.
- Olsson T. User Expectations and Experiences of Mobile Augmented Reality Services. Tampereen teknillinen yliopisto. 2012.
- Oppelaar E R, Hennipman E J, van der Veer G C. Experience Design for Dummies. In Proceedings of 15th European Conference on Cognitive Ergonomics: the Ergonomics of Cool interaction. 2008.
- Pedersen J, Kocsis D, Tripathi A, et al. Conceptual Foundations of Crowdsourcing: A Review of IS Research. In Proceedings of the 46th HICSS. 2013.
- Poetz M K and Schreier M. The Value of Crowdsourcing: Can Users Really Compete With Professionals in Generating New Product Ideas? Journal of Product Innovation Management. 2012;29(2):245-256.
- Quesada J, Kintsch W and Gomez E. Complex Problem-Solving: a Field in Search of a Definition? Theoretical Issues in Ergonomics Science. 2005;6(1):5-33.
- Schrader S and Gopfert J. Structuring Manufacturer-Supplier Interaction in New Product Development Teams: an Empirical Analysis. Elsevier Science. 1998.
- Senge P M. The Fifth Discipline: The Art and Practice of the Learning Organization. New York: Doubleday. 1990.
- Siau K, Tan X and Sheng H. Important Characteristics of Software Development Team Members: An Empirical Investigation Using Repertory Grid. Information Systems Journal. 2010;20(6):563-580.
- Sproull N L. Handbook of Research Methods: A Guide For Practitioners and Students in the Social Sciences. Scarecrow press. 2002.
- Sternberg R J and Frensch P A. Complex Problem Solving: Principles and Mechanisms. Mahwah, NJ: Lawrence Erlbaum Associates, Inc. 1991.
- Ren J. Who's More Creative, Experts or the Crowd? In Proceedings of the 2011 AMCIS. 2011.
- Surowiecki J. The Wisdom of the Crowds. NY: Anchor Books. 2005.
- Terwiesch C and Xu Y. Innovation Contests, Open Innovation and Multiagent Problem Solving. Management science. 2008;54(9):1529-1543.
- Von Hippel E A. Open Source Projects as Horizontal Innovation Networks-By and for Users. MIT Sloan School of Management. 2002.
- Wand Y, Storey V C and Weber R. An Ontological Analysis of the Relationship Construct in Conceptual Modeling. ACM Transactions on Database Systems (TODS). 1999;24(4):494-528.
- Whelan E. Exploring Knowledge Exchange in Electronic Networks of Practice. JIT. 2007;22(1):5-12.
- Wiggins A and Crowston K. From Conservation to Crowdsourcing: A Typology of Citizen Science. In Proceedings of the 44th HICSS. 2011:1-10.
- Wu W, Tsai W T and Li W. Creative Software Crowdsourcing: From Components and Algorithm Development to Project Concept Formations. International Journal of Creative Computing. 2013;1(1):57-91.