Many are excited for the coming space missions, especially those involving Mars. Some look forward to the chance of discovering other living organisms in our galaxy, while others long to see the day humans set foot on Mars. Whatever the draw, the fact remains that, as we seek to travel farther into space, missions will continue to increase in distance, size, complexity, and cost. From designing the spacecraft to the moment astronauts return home, the ability of all personnel to cooperate and properly perform tasks in a team-centered environment will be critical to the success of any space mission. To gather evidence for a behaviorally informed systems design model, our exploratory study focused on two common design delegation approaches, Requirements Allocation (RA) and Value-Driven Design (VDD), and analyzed whether the framing of instructions influenced an individual’s effort while completing a task. In the RA approach, instructions communicate specific requirements about the desired outcome of a project, while the VDD approach communicates certain values about the desired outcome (Table 1). We focused on these two approaches because their relative efficacy is the subject of ongoing debate. Some researchers believe that approaches communicating clear standards, such as RA, guarantee the coordination of multiple tasks but acknowledge that they fall short in other areas, such as enabling knowledge sharing.1 Others argue that comparable results could be obtained without the associated costs by focusing on values-driven approaches such as VDD.
Psychological studies have revealed that the framing of statements and questions can affect individuals’ perceptions and decisions.2,3 Additionally, researchers have demonstrated that the framing of incentives (positive or negative) can cause participants’ effort to differ significantly.4 This led us to question whether the difference in the framing of instructions between the RA and VDD approaches could influence individuals’ effort during a decision-making task. Research in economics and psychology on approaches analogous to RA has shown that individuals (e.g., employees) exert less effort and perform worse when someone in a higher position (e.g., a boss) communicates minimum requirements for their tasks.5,6 This could be because external control (i.e., having specific parameters to meet when performing a task) lowers individuals’ intrinsic motivation.7 Some researchers suggest, however, that these negative effects are reduced when the external control is perceived as legitimate or necessary.8 Furthermore, evidence shows that individuals’ performance tends to decline as they approach their goal, perhaps because both intrinsic and extrinsic motivation fade.5,9 Based on this prior research in the social sciences, we hypothesized that the RA approach would produce a higher rate of effort disutility than the VDD approach. Disutility of effort occurs at the point when an individual perceives their effort to be less beneficial or even counterproductive. We therefore predicted that participants given instructions with specific requirements to meet (RA) would experience less beneficial, if not adverse, effects on their effort than participants given value-driven instructions (VDD).
The 109 Texas A&M University undergraduate students who took part in the experiment came to the lab to play a computer game called “Manned Mission: Mars.” Participants were randomly assigned to one of the following two conditions: requirements allocation (55 participants) or value-driven design (54 participants).
At the beginning of the game, participants were informed that their job was to configure several different spacecraft systems for a Mars mission. Each of the four levels of the game (Figures 1–4) represented a different spacecraft system as an assortment of components (the colored blocks). The participants’ objective was to rearrange these components to increase the robustness of the system, which served as their score and was displayed as a percentage. The first level of the game started with three component colors: red, blue, and yellow (Figure 1). The on-screen directions informed participants that the robustness of the system increased when certain colored components were placed in specific regions of the system: red toward the top left, blue near the center, and yellow toward the bottom right. While rearranging the components, participants could click the “Analyze” button to see how the robustness score changed. They were allowed to analyze the system as many times as they wished before submitting it and moving on to the next level.
As seen in Figure 1, the first level contained nine square components. The second level replaced four of these square components with two rectangular ones (Figure 2). The third level introduced two new colors, green and orange (Figure 3). These new colors were mixtures of two existing colors (e.g., blue and yellow forming green; red and yellow forming orange), with the two constituent colors indicated in the corners of the component. For scoring purposes, each mixed component counted as both of the colors from which it was made, which introduced the challenge of finding its optimal position. The fourth level, seen in Figure 4, added a purple component composed of red and blue, bringing the total to six colors: red, blue, yellow, green, orange, and purple. In addition, this level increased the number of components from seven in the third level to nineteen.
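The scoring rules above can be illustrated with a minimal sketch. The actual scoring function of “Manned Mission: Mars” was not published, so the grid size, target positions, distance-based falloff, and the averaging rule for mixed colors are all assumptions chosen to match the description: pure colors score best near their target region, and mixed colors count as both of their constituent colors.

```python
import math

GRID = 5  # assumed square grid size (the real layouts varied)
TARGETS = {
    "red": (0.0, 0.0),                      # top left
    "blue": ((GRID - 1) / 2, (GRID - 1) / 2),  # center
    "yellow": (GRID - 1.0, GRID - 1.0),     # bottom right
}
MIXES = {
    "green": ("blue", "yellow"),
    "orange": ("red", "yellow"),
    "purple": ("red", "blue"),
}

def component_score(color, row, col):
    """Score in [0, 1]: 1 at the color's target region, falling off with distance.
    Mixed colors average the scores of their two constituent colors."""
    colors = MIXES.get(color, (color,))
    max_dist = math.dist((0, 0), (GRID - 1, GRID - 1))
    scores = [1 - math.dist(TARGETS[c], (row, col)) / max_dist for c in colors]
    return sum(scores) / len(scores)

def robustness(layout):
    """layout: list of (color, row, col) placements. Returns a percentage."""
    if not layout:
        return 0.0
    return 100 * sum(component_score(c, r, k) for c, r, k in layout) / len(layout)
```

Under these assumptions, a red block at the top-left corner contributes a perfect score, while a green block is pulled toward a compromise between the blue and yellow target regions, which is exactly the placement challenge the mixed components introduced.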
Throughout the game, the computer recorded participants’ effort on each level, measured as the amount of time spent on the level, the number of components moved, and the number of times participants analyzed the system by clicking the “Analyze” button. The only difference between the two conditions was the framing of the instructions before each level. In the RA condition, the instructions framed the task as meeting a specified robustness threshold (Figure 5), which varied across levels: in each level, participants were randomly assigned one of five thresholds (60%, 70%, 80%, 90%, or 100%). In the VDD condition, the instructions framed the task as optimizing the robustness score (Figure 6).
Our preliminary results showed that, on average, participants in the VDD condition attained higher robustness scores than participants in the RA condition in every level of the game, with significantly higher scores in the third level (Figure 7). Because the robustness threshold varied within the RA condition, we also assessed robustness scores by the specific threshold provided. However, this amounted to a comparison among six groups [i.e., VDD, RA(60), RA(70), RA(80), RA(90), and RA(100)], leaving each RA group with a small sample size relative to the VDD condition and making statistical comparisons less reliable. We therefore looked for patterns in the data. The most common pattern was that participants in the RA conditions exerted more effort (i.e., time spent on the level, components moved, number of times the system was analyzed) than those in the VDD condition but finished with lower robustness scores, suggesting that in these cases the RA conditions were working less efficiently than the VDD condition. This pattern was present in all four levels of the game, though more prevalent in the earlier levels, and appeared in seven of the comparisons.
Three other patterns each occurred three times across the comparisons: 1) The RA conditions spent less time but moved more components and analyzed the system more often than the VDD condition, and finished with a higher robustness score. In these cases the RA conditions were exerting more effort (moving more components and clicking “Analyze” more often) within a shorter period of time. This pattern appeared in the first and third levels. 2) The RA conditions exerted more effort on all three measures (time spent on the level, components moved, number of times the system was analyzed) than the VDD condition and finished with a higher robustness score. This pattern appeared in the second and fourth levels. 3) In contrast to the first two patterns, the RA conditions exerted less effort on all three measures and finished with lower robustness scores than the VDD condition. This pattern appeared in three levels: the second, third, and fourth.
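The patterns above amount to labeling each RA-versus-VDD comparison by the direction of the effort difference and the direction of the score difference. A rough sketch of that labeling follows; the majority-vote rule over the three effort measures and the field names are assumptions for illustration, since the published analysis was done by inspection, and the example numbers are invented.

```python
EFFORT_KEYS = ("time", "moves", "analyzes")

def classify(ra, vdd):
    """ra, vdd: dicts of group means for one comparison.
    Returns (effort_direction, score_direction) for RA relative to VDD,
    using a simple majority vote across the three effort measures."""
    more = sum(1 for k in EFFORT_KEYS if ra[k] > vdd[k])
    effort = "more effort" if more > len(EFFORT_KEYS) / 2 else "less effort"
    score = "higher score" if ra["score"] > vdd["score"] else "lower score"
    return effort, score

# Invented group means matching the most common pattern:
# RA exerts more effort on all measures but ends with a lower score.
ra_means = {"time": 120, "moves": 40, "analyzes": 12, "score": 72}
vdd_means = {"time": 90, "moves": 30, "analyzes": 8, "score": 80}
```

A scheme like this makes the four reported patterns the four cells of an effort-direction by score-direction table, which is one way the informal pattern search could be made systematic in a follow-up study.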
As exploratory research, this study helps identify the research designs and data collection methods best suited to evaluating design delegation approaches in future studies. Our study sheds light on the possibility that the framing of instructions affects individuals’ effort during a task. Furthermore, our preliminary results suggest that individuals working under the RA approach experience greater rates of effort disutility than those under the VDD approach, as we hypothesized, but further research must be conducted to confirm this. As noted earlier, a limitation of the methodology is the small sample sizes that result from splitting the RA condition by the provided robustness threshold. We plan to address this in a future study to obtain a more reliable comparison between the approaches. Additionally, in future studies we plan to collect measurements within each level, allowing us to see how participants’ performance and effort fluctuate throughout a task and yielding a better understanding of effort in this context. Furthermore, we will narrow the participant pool to engineering students and practicing engineers in later studies.
The results of our study apply broadly to the workplace and are increasingly relevant as the nature of work changes. Common workplace jobs have become more team-oriented, with projects growing in complexity and relying heavily on the cooperation and communication of many, often specialized, individuals. To complete such projects, agencies, businesses, governments, and organizations must delegate tasks to individuals as effectively as possible, which requires a clear understanding of how the framing of a task affects individuals’ effort and attention. Space exploration is a prime example: attention to detail and sustained effort can be vital given the complex and dangerous nature of space missions. From designing the spacecraft on Earth to the actual mission in space, the delegation of tasks and the way they are communicated are an important part of a mission that should not be overlooked.
Data collection was supported by the National Science Foundation (award #1563379) to Richard Malak, Heather Lench, and Rachel Smallman. I would like to thank my advisor, Dr. Heather C. Lench, for her guidance throughout the course of this research as well as her informative feedback. I also thank Yidou Wan for answering my many questions and for the large amount of time he has sacrificed for me. Thanks also go to Masden Stribling, from the DATA (Dissertation, Article, and Thesis Assistance) program, for her assistance in the writing process. Additionally, I would like to thank Andrea Mendes and Annabelle Aymond for their wonderful feedback. Furthermore, I would like to extend my gratitude to my fellow research assistants in Dr. Heather Lench’s Emotion Science lab as well as our lab manager, Zari Haggenmiller, for data collection and their support. I am grateful for the support of my friends and family throughout this process. Finally, I would like to thank my girlfriend, Aira Martin, for her patience, constant encouragement, and continuous feedback throughout this entire project.
Cameron McCann '19
Cameron McCann is a junior psychology major from Bryan, Texas. Cameron participated in the 2017–2018 class of the Undergraduate Research Scholars where he completed his thesis, which culminated in this article, under the guidance of Dr. Heather Lench. After graduation, Cameron plans to attend graduate school pursuing a Master’s in psychology, where he hopes to further identify his research interests.
1. Bertoni, Marco, Alessandro Bertoni, and Ola Isaksson. “Evoke: A value-driven concept selection method for early system design.” Journal of Systems Science and Systems Engineering 27, no. 1 (2018): 46–77. https://doi.org/10.1007/s11518-016-5324-2
2. Levin, Irwin P., and Gary J. Gaeth. “How consumers are affected by the framing of attribute information before and after consuming the product.” Journal of Consumer Research 15, no. 3 (1988): 374–378. https://doi.org/10.1086/209174
3. Tversky, Amos, and Daniel Kahneman. “The framing of decisions and the psychology of choice.” Science 211, no. 4481 (1981): 453–458. https://doi.org/10.1126/science.7455683
4. Goldsmith, Kelly, and Ravi Dhar. “Negativity bias and task motivation: Testing the effectiveness of positively versus negatively framed incentives.” Journal of Experimental Psychology: Applied 19, no. 4 (2013): 358. https://doi.org/10.1037/a0034415
5. Kajackaite, Agne, and Peter Werner. “The incentive effects of performance requirements – A real effort experiment.” Journal of Economic Psychology 49 (2015): 84–94. https://doi.org/10.1016/j.joep.2015.03.007
6. Falk, Armin, and Michael Kosfeld. “The Hidden Costs of Control.” The American Economic Review 96, no. 5 (2006): 1611–1630.
7. Fisher, Cynthia D. “The Effects of Personal Control, Competence, and Extrinsic Reward Systems on Intrinsic Motivation.” Organizational Behavior and Human Performance 21, no. 3 (1978): 273–288. https://doi.org/10.1016/0030-5073(78)90054-5
8. Schnedler, Wendelin, and Radovan Vadovic. “Legitimacy of control.” Journal of Economics & Management Strategy 20, no. 4 (2011): 985–1009. https://doi.org/10.1111/j.1530-9134.2011.00315.x
9. Goerg, Sebastian, and Sebastian Kube. “Goals (Th)at Work: Goals, Monetary Incentives, and Workers’ Performance.” MPI Collective Goods Preprint, No. 2012/19 (2012). https://doi.org/10.2139/ssrn.2159663