Participant withdrawal as a function of hedonic value of task and time of semester.
Undergraduates participating in experiments late in the semester
generally perform more poorly on demanding tasks and withdraw more often
than those participating early. To investigate effects of task
aversiveness, some participants were instructed to choose brief cartoon
reinforcement with a long time-out while others were instructed to
choose longer cartoon reinforcement with a short time-out. Three times
as many students withdrew under the unfavorable schedule but withdrawal
rates were significantly higher at the end of the semester under both
conditions. The Zimbardo Time Perspective Inventory (Zimbardo &
Boyd, 1999) marginally predicted task persistence. Numerous
end-of-semester obligations appear to promote withdrawals independently
of task aversiveness.
Key words: obedience, time-of-semester effects, Zimbardo Time Perspective Inventory, behavioral persistence, instructional control, establishing operations, informed consent
John A. Bellone and Douglas J. Navarick
The Psychological Record, Vol. 62, No. 3 (Summer 2012), ISSN 0033-2933
Undergraduate students who serve as research participants tend to
show poorer performance on frustrating or boring tasks at the end of an
academic term than at the beginning, a trend known as the
time-of-semester effect (Casa de Calvo & Reich, 2007; Harber,
Zimbardo, & Boyd, 2003; Navarick & Bellone, 2010). In addition,
participants at the end of the semester are more likely to withdraw from
a subjectively aversive experiment than are participants at the
beginning if the social underpinnings of obedience are weakened through
instructions, for example, by stating that most participants withdraw
(Navarick & Bellone, 2010).
Time-of-semester effects could produce self-selection bias in a researcher's sample and become an uncontrolled source of variation in performance on vulnerable tasks (Harber et al., 2003; Roman, Moskowitz, Stein, & Eisenberg, 1995). Especially at risk would seem to be experiments on self-control that give participants a choice between an immediate, small reinforcer and a delayed, larger reinforcer. If waiting for the larger reinforcer were subjectively aversive, participants would be more likely to make impulsive choices in sessions held at the end of the semester than at the beginning.
Paralleling this decline in performance and persistence are changes in questionnaire measures of dispositions, which are operationally defined in terms of patterns of behavior that are stable over time and consistent across situations, for example, time perspective (Harber et al., 2003), sensation-seeking, and impulsivity (Zelenski, Rusting, & Larsen, 2003). Time-of-semester effects are most often interpreted in terms of such dispositional factors (Navarick & Bellone, 2010).
Alternatively, performance could be attributed to external, situational factors, which would imply that as the semester progresses it is the situation that changes rather than the personality traits of the students. For example, Casa de Calvo and Reich (2007) argued that the declines they observed in the persistence and quality of their participants' performance were likely related to the increasing number of academic obligations that the students faced toward the end of the semester, which resulted in depletion of the "self-regulatory resources" (p. 355) that the students needed to persist and maintain attention to the accuracy of their work.
Dispositional and situational factors could also interact. For example, impulsivity has been treated both as a personality trait (Zelenski et al., 2003) and as a behavioral choice that is dependent upon the parameter values of the alternatives (Navarick & Fantino, 1976). Toward the end of the semester, impulsivity as a disposition could become more relevant to a choice between withdrawing and staying if academic pressures increased and made immediate escape a stronger prospective reinforcer.
The roles of dispositional and situational factors in participants' choice to withdraw were investigated using a type of procedure that was designed to study obedience to subjectively aversive instructions (Navarick, 2009; Navarick & Bellone, 2010). Previously, participants chose between favorable and unfavorable schedules of reinforcement, in which the reinforcer was a cartoon video. In the obedience phase, they were instructed to choose the unfavorable schedule (brief reinforcement, long time-out) over the favorable schedule (long reinforcement, brief time-out). During the debriefing, most participants rated the unfavorable schedule as "unpleasant" (hedonically negative) and the favorable schedule as "pleasant" (hedonically positive; Navarick & Bellone, 2010).
Dispositional factors were assessed by administering questions from the Zimbardo Time Perspective Inventory (ZTPI; Zimbardo & Boyd, 1999) to measure the constructs of future and present time perspective. Individuals with a predominantly future time perspective often think about and act in accordance with temporally distant goals, whereas those with a predominantly present time perspective typically give more attention to immediate concerns and feelings. Time perspective correlates with delay discounting, the tendency to devalue future outcomes (Teuscher & Mitchell, 2011).
Harber et al. (2003) found that these constructs were correlated with a time-of-semester effect for date of signups and with participants' reliability in meeting a series of deadlines that they had agreed to meet. Participants whose questionnaire responses emphasized the future orientation generally signed up earlier and showed greater reliability than did participants whose questionnaire responses emphasized the present orientation. Thus, the ZTPI had predictive value apart from any issues that one may raise regarding the explanatory value of the underlying concepts. If a similar pattern of behavior held in the current study, participants with questionnaire scores emphasizing a future orientation should sign up earlier and be less likely to withdraw than participants with scores emphasizing a present orientation.
Situational factors were studied by instructing the participants to choose either the unfavorable schedule (brief reinforcement, long time-out) or the favorable schedule (long reinforcement, brief time-out). The previous experiments (Navarick, 2009; Navarick & Bellone, 2010) assumed that withdrawals resulted from the discomfort produced by the unfavorable schedule combined with instructions that made the consequences of withdrawing less aversive than continuing. The instructions theoretically acted as an establishing operation, a procedure that alters the reinforcing or punishing effects of a consequence (Michael, 1993). However, it is possible that the instructions alone were sufficient to produce withdrawals. If the level of discomfort during the experiment was also a factor, then withdrawal rates should be higher when participants are instructed to choose the unfavorable schedule than when they are instructed to choose the favorable schedule.
This instructional variable is also relevant to the hypothesis of Casa de Calvo and Reich (2007), that the reduction in students' persistence at the end of the semester results from their reduced tolerance for demanding tasks. If the time-of-semester effect for withdrawals is based on escape from an increasingly aversive task, then the increase in withdrawals across the semester should occur under the unfavorable schedule but not under the favorable one.
Participants
A total of 70 introductory psychology students (35 from each of the fall and spring semesters of the same academic year) were recruited for the current experiment. An additional 13 participants' data were excluded for not meeting the baseline criterion (described below). Participants received 1 hr of credit that partially fulfilled a requirement for their introductory psychology course. Students independently received one-half hr of credit for completing a prescreen that was available to all students, regardless of the research studies for which they signed up.
Using a Web-based system, timeslots were scheduled from Weeks 4 to 15 of the fall semester and Weeks 3 to 16 of the spring semester. The study was entitled "Cartoon Viewing," and participants were given the opportunity to sign up for these slots no more than 1 week prior to the session date. The objective was to enhance the separation of the samples at different times of the semester in terms of scores on the ZTPI (Harber et al., 2003), with a preponderance of present-oriented scorers participating at the end. Without this restriction, students could have signed up early in the semester to reserve a timeslot many weeks later, a form of planning that would be typical of future-oriented scorers.
Setting and Apparatus
Participants were read an informed consent statement explaining that they could leave the experiment at any time and still keep the credit they received for showing up. The experimenter, a male graduate student in the fall and a female undergraduate student in the spring, then led them to an adjacent room where they were seated in front of a response console and video monitor. The console had two disks arranged horizontally, which were illuminated before each trial as a signal for participants to press one of them. A desk bell was located on top of the console, just under eye level, which participants were instructed to ring if at any time they decided to leave the experiment.
In the fall, participants could select 1 of 26 cartoons before starting each half of the session. In the spring, the number of cartoons was reduced to 5, based on the most frequent selections during previous semesters. A trial began with the illumination of both disks. Pressing one disk caused the cartoon to play immediately for 5 s followed by 25 s of time-out (unfavorable schedule). Pressing the other disk caused the cartoon to play immediately for 25 s followed by 5 s of time-out (favorable schedule). When the cartoon was stopped, the screen was darkened. On the next trial, the cartoon resumed from the point at which it was stopped. The disk that produced the favorable schedule was located on the side of the participant's nondominant hand (which was determined by observing the hand with which the participant signed the informed consent statement) so that a predominance of choices on that side would more plausibly be attributable to schedule preference than to position preference. The room was kept dark when these trials were conducted.
The session consisted of two parts. The first part was a baseline phase that presented 20 free-choice trials to determine whether the participant would choose the favorable schedule more often than the unfavorable schedule. Failure to choose the favorable schedule on more than half the trials resulted in the exclusion of the participant's data. The second part was the obedience phase, where 40 trials were presented for which the participant was instructed to choose only one of the disks. During the fall semester, participants were instructed to press only the disk that produced the unfavorable schedule whereas during the spring semester, they were instructed to press only the disk that produced the favorable schedule. All participants were told that Part 1 would last approximately 15 min and Part 2 approximately 30 min. To increase participants' sensitivity to the unequal amounts of reinforcement, they were additionally informed that the disk they pressed could affect how long the cartoon played.
Prior to the initiation of the first free-choice trial of the baseline phase, four preexposure trials were presented with both disks lit, and the participants were asked to press the disks in the order left-right-left-right. The experimenter then pointed to the bell and stated, "If you decide to leave before the experiment is over, you can tap on the bell there and I will come in and end the experiment." After a cartoon was selected by the participant and cued up, the experimenter turned off the light, closed the door, and initiated the first preexposure trial from the adjoining room.
Upon completion of the 20th free-choice trial, the experimenter returned and read additional instructions. Participants were again instructed to press the disks in the order left-right-left-right but then only to press the disk on one side (determined by whether it was the favorable or unfavorable condition). They were reminded that they could ring the bell at any time to end the experiment and were then given implicit approval to quit: "If you decide to leave, it's not a problem for us because you already finished Part 1 and we can still use the results, even if you don't finish Part 2." Additionally, they were informed that quitting was the norm: "Most of our participants do leave before the experiment is over." After the experimenter gave the participant the option to switch to a different cartoon, the experimenter closed the door and initiated the obedience phase.
Upon conclusion of the session, the initial step of the debriefing was to give participants a questionnaire on which they circled the number representing how pleasant or unpleasant each schedule of reinforcement was in Part 2. The scale ranged from very pleasant (+5) to neutral (0) to very unpleasant (-5).
Confound between the instructional manipulation and the experimenter/semester. For purposes of the correlational analysis, the only condition that was studied during the fall semester was the one in which participants were instructed to choose the unfavorable schedule, the schedule that was already found to produce wide individual differences with approximately 50% of participants withdrawing (Navarick, 2009), and a time-of-semester effect for withdrawals (Navarick & Bellone, 2010). Previously, it was found that withdrawal rates differed consistently (but not significantly) in replicated conditions across semesters with different experimenters (male in the fall, female in the spring, as in the present experiment; Navarick, 2009). Therefore, holding the semester and experimenter constant rather than distributing the participants across two semesters with different experimenters seemed a prudent step to reduce variability and increase the chances of finding significant correlations in the condition where they were most likely to occur. However, this approach also created a confound between the instructional variable and the experimenter/semester combination.
In a condition similar to the present one with the unfavorable schedule (Navarick, 2009, p. 162, Group EAN), the proportion of participants who quit was approximately .10 higher in the fall with a male experimenter
than in the spring with a female experimenter. However, each proportion fell well within the 95% confidence interval of the other proportion. Therefore, there is an empirical basis for assuming that, in the present experiment, if each proportion fell outside the other's 95% confidence interval, the difference between proportions was more likely due to the instructional manipulation than to the semester or gender of the experimenter.
In assessing the possibility of an artifact, considerations of effect size would also be relevant. The previous difference of .10 between withdrawal rates in the fall and spring that was possibly attributable to the confounds provides a basis for comparison with the effect sizes to be described.
Analysis of semester obligations. An analysis was conducted to determine whether student obligations rose over the course of the semester as assumed by Casa de Calvo and Reich (2007). The analysis was based on 10 introductory psychology course outlines that were available (out of 12 sections offered) for the spring 2011 semester. It was assumed that these 10 outlines would be representative of other semesters as there were no changes to the academic calendar or relevant university policies in the period since the study was conducted (especially the campus policy that all final exams had to be given during Week 17). Only obligations that were listed with scheduled dates were included: exams, quizzes, papers, and other miscellaneous assignments. Unscheduled quizzes and in-class assignments could occur throughout the semester.
The weeks were divided into four quarters: 1-4, 5-8, 9-12, and 13-17. Week 17 (finals week) was added to the fourth quarter because no final exams were permitted during Week 16. The total number of scheduled obligations for each quarter was calculated and analyzed as a percentage of the total obligations for the entire semester.
ZTPI questionnaire. Participation in this experiment was restricted to students who filled out a series of questionnaires offered to them simultaneously for one-half-hr credit through the Web-based research participation management system. There were seven questionnaires arranged in successive sections, each section associated with a different study. The questionnaires were not visibly linked to their associated studies, which made it less likely that the questionnaires would influence performance during the sessions. The ZTPI was the last questionnaire that participants encountered on the list.
All 56 questions of Zimbardo and Boyd's (1999) inventory were presented. Of these questions, 13 measured the future time-perspective construct and 15 measured the present-hedonistic time-perspective construct. The ZTPI classifies participants according to the construct that produces the higher score so that a participant could be said to have a predominantly present or future time perspective. Accordingly, correlations were calculated using these difference scores as predictor variables. In addition, the scores on the present-hedonistic scale and the future scale were assessed separately as predictor variables. Dependent variables were: quit vs. stayed; the number of trials on which a key was pressed before quitting; the total number of trials on which a key was pressed, regardless of whether participants quit (ranging from 0 to 40); and the week of the semester when participants signed up for the experiment.
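The difference-score computation described above can be sketched as follows. The item responses below are hypothetical 1-5 Likert values, and the subscale-mean scoring is an assumption based on the description here, not Zimbardo and Boyd's published scoring key.

```python
# Sketch of the ZTPI difference-score predictor described in the text.
# Item counts (13 future, 15 present-hedonistic) come from the text;
# the responses themselves are hypothetical.

def subscale_mean(responses):
    """Mean response across a subscale's items."""
    return sum(responses) / len(responses)

def time_perspective_score(future_items, present_items):
    """Positive value -> present-hedonistic emphasis; negative -> future emphasis."""
    return subscale_mean(present_items) - subscale_mean(future_items)

# Hypothetical participant with a moderately future-oriented pattern
future = [4, 5, 4, 4, 5, 4, 3, 4, 5, 4, 4, 5, 4]          # 13 items
present = [2, 3, 2, 3, 2, 2, 3, 2, 3, 2, 2, 3, 2, 3, 2]   # 15 items
score = time_perspective_score(future, present)  # negative for this profile
```

A participant with a negative score would be classified as predominantly future-oriented, matching the classification rule described above.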
Overall Withdrawal Rates
The 95% confidence intervals (CI) of proportions were determined using the calculator provided at the following Web site: http://www.dimensionresearch.com/resources/calculators/conf_prop.html, which subsequently became unavailable. The same conclusions regarding significance are obtainable using the CI calculator (with no continuity correction) available at the VassarStats Web site (http://vassarstats.net/prop1.html). Two proportions were considered to be significantly different if each proportion fell outside the 95% CI of the other proportion.
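Because the original online calculator is no longer available, the decision rule can be reproduced with a normal-approximation (Wald) interval with no continuity correction, which matches the intervals reported below to within rounding. This is a minimal sketch under that assumption, not the exact algorithm of either calculator.

```python
import math

Z95 = 1.96  # two-sided 95% critical value of the standard normal

def wald_ci(successes, n):
    """Normal-approximation 95% CI for a proportion (no continuity correction)."""
    p = successes / n
    margin = Z95 * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)

def significantly_different(k1, n1, k2, n2):
    """Decision rule used in the text: each proportion must fall
    outside the other's 95% CI."""
    p1, lo1, hi1 = wald_ci(k1, n1)
    p2, lo2, hi2 = wald_ci(k2, n2)
    return (p2 < lo1 or p2 > hi1) and (p1 < lo2 or p1 > hi2)
```

For example, `wald_ci(20, 35)` yields approximately [.41, .74], and `significantly_different(20, 35, 7, 35)` returns `True`, consistent with the comparison of the unfavorable and favorable conditions reported below.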
For participants who were instructed to choose the unfavorable schedule, the proportion who withdrew was .57 (20/35), 95% CI [.41-.73], which significantly exceeded the proportion of .20 (7/35), 95% CI [.07-.33] for participants who were instructed to choose the favorable schedule. In the unfavorable condition participants who quit stayed an average of 15 trials (n = 20, SD = 11) whereas in the favorable condition participants who quit stayed an average of 22 trials (n = 7, SD = 8). This difference was not significant (t = 1.54, p = .128, d = .73, r = .34).
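As a check, the reported t statistic for trials stayed can be recovered from the summary statistics alone with a pooled-variance independent-samples t. This sketch assumes equal variances were pooled; whether the authors used pooled or Welch variances is not stated.

```python
import math

def pooled_t_from_stats(m1, sd1, n1, m2, sd2, n2):
    """Independent-samples t (pooled variance) from summary statistics."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (m2 - m1) / se

# Trials stayed before quitting: unfavorable (M = 15, SD = 11, n = 20)
# vs. favorable (M = 22, SD = 8, n = 7) condition
t = pooled_t_from_stats(15, 11, 20, 22, 8, 7)  # approximately 1.54
```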
Figure 1 presents the withdrawal rates and confidence intervals during the early and late periods of the semester, for the condition in which participants were required to choose the unfavorable schedule and for the condition in which participants were required to choose the favorable schedule. The fall semester, which consisted of Weeks 4-15, was split in the middle so that the first 5 weeks (4-8) were considered early and the last 5 weeks (11-15) were considered late. Week 9 was Thanksgiving recess, during which no sessions were conducted. Week 10 was the remaining week in the middle and was excluded from the analysis to equate the number of weeks for the early and late periods. The withdrawal proportion for the early sign-ups was .39 (5/13), 95% CI [.12-.65], whereas the withdrawal proportion for the late sign-ups was .67 (12/18), 95% CI [.45-.89]. The withdrawal rates were significantly different because each proportion fell outside the CI of the other proportion.
The spring semester consisted of Weeks 3-16 and was also split in the middle to differentiate between the early period (first 6 weeks: 3-8) and the late period (last 6 weeks: 11-16). Week 10 was spring recess with no sessions, and Week 9 in the middle was excluded from the analysis to equate the number of weeks for early and late periods. The withdrawal proportion for participants who signed up early was .07 (1/14), 95% CI [.00-.21], whereas the withdrawal proportion for participants who signed up late was .32 (6/19), 95% CI [.11-.53]. This difference was significant.
Although withdrawal rates were markedly lower when participants were instructed to choose the favorable schedule rather than the unfavorable schedule, the time-of-semester effects for the two conditions were virtually identical when measured as the difference between the withdrawal proportions for the early and late periods. For the unfavorable condition, the time-of-semester effect was .67 - .39 = .28, and for the favorable condition it was .32 - .07 = .25. Measured in this straightforward way, the time-of-semester effect appears to be independent of the schedule effect.
[FIGURE 1 OMITTED]
[FIGURE 2 OMITTED]
Figure 2 shows the week-to-week progression in the frequency of withdrawals for the favorable and unfavorable schedule conditions. Each data point represents the sum of the withdrawals from the beginning of the semester through the week that is indicated. For the unfavorable condition, the withdrawal rates started to accelerate at Week 8, whereas for the favorable condition the withdrawal rates started to accelerate at Week 12. However, from Weeks 12 to 16, the two curves appear to be similarly negatively accelerated and parallel.
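The cumulative curves in Figure 2 are running sums of the weekly withdrawal counts. A minimal sketch of that construction follows; the per-week counts below are hypothetical, since the text reports only the cumulative totals.

```python
from itertools import accumulate

# Hypothetical weekly withdrawal counts (one entry per session week);
# the actual per-week counts are not reported in the text.
weekly = [0, 0, 1, 0, 0, 1, 0, 0, 1, 2, 0, 1, 0, 1]

# Each value is the total number of withdrawals from the beginning of
# the semester through that week, as plotted in Figure 2.
cumulative = list(accumulate(weekly))
```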
Distribution of Semester Obligations
Figure 3 shows that the percentage of scheduled obligations increased in three stages across the semester: the initial quarter, the midterm period (second and third quarters), and the final quarter. The frequencies of obligations in the four quarters were significantly different on a chi-square test (χ² = 10.671, df = 3, p = .014). The final quarter contained almost 3 times as many obligations as the first quarter (39% vs. 14%).
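The chi-square test above compares the observed obligation counts per quarter against a uniform expectation. A sketch of that computation follows; the counts used here are illustrative, since the text reports only percentages, so the resulting statistic does not match the reported value.

```python
def chi_square_uniform(observed):
    """Chi-square goodness-of-fit statistic against a uniform distribution."""
    expected = sum(observed) / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

# Hypothetical quarter counts matching the reported 14% ... 39% shape
counts = [14, 22, 25, 39]  # illustrative, not the paper's actual totals
stat = chi_square_uniform(counts)
# Compare stat to the chi-square critical value 7.815 for df = 3, alpha = .05
```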
[FIGURE 3 OMITTED]
[FIGURE 4 OMITTED]
Figure 4 presents a histogram of participants' ratings of the pleasantness and unpleasantness of each schedule in the favorable and unfavorable instructional conditions. It can be seen that in both conditions there was very little overlap in the distributions.
In the unfavorable condition, the mean rating of the unfavorable schedule was -3.31 (SD = 1.64) and the mean rating of the favorable schedule was 2.89 (SD = 1.59). These means were significantly different with a large effect size (paired t(34) = 16.23, p < .001, d = 3.84, r = .89). A significant difference and large effect size were also found in the favorable condition (paired t(34) = 10.82, p < .001, d = 2.65, r = .80), where the mean rating was -2.29 (SD = 1.87) for the unfavorable schedule and 2.71 (SD = 1.90) for the favorable schedule.
Ratings of each schedule were additionally compared between instructional conditions. When participants were instructed to choose the unfavorable schedule, the mean rating of this schedule was significantly more negative (-3.31, SD = 1.64) than it was when participants were instructed to choose the favorable schedule (-2.29, SD = 1.87), t(68) = 2.45, p = .017, d = .58, r = .28. However, the mean ratings of the favorable schedule did not differ significantly between instructional conditions (2.89, SD = 1.59 vs. 2.71, SD = 1.90, respectively).
The prescreen consisting of the ZTPI and six other questionnaires from unrelated studies took participants an average of 21.14 min (SD = 9.62) to complete. If this entire time had been spent only on the 56 questions of the ZTPI, participants would have averaged just 23 s per question.
Despite the haste with which the questions were answered, there was evidence of internal consistency in the answers given by participants during the fall semester. A substantial and significant negative correlation occurred between scores on the present-hedonistic scale and scores on the future scale (r = -.488, p < .01, two-tailed). However, during the spring semester the correlation was only slightly negative (-.129) and was not significant. As discussed under Procedure, the focus of the correlational analysis was on the fall, when all of the participants were instructed to choose the unfavorable schedule, the one expected to maximize individual differences.
Scores on the future subscale of the ZTPI were subtracted from scores on the present-hedonistic subscale to obtain a value for the time perspective construct that each participant emphasized. A positive value represents a present-hedonistic emphasis, whereas a negative value represents a future emphasis. In the unfavorable condition, the mean score for those who quit (-.10, SD = .96, n = 20) did not differ significantly from those who did not quit (.45, SD = 1.12, n = 15), t(33) = 1.58, p = .124, d = .53, r = .26. Additionally, in the favorable condition, the mean score for those who quit (-.34, SD = .53, n = 7) did not differ significantly from the mean score for those who did not quit (-.28, SD = .85, n = 28), t(33) = .182, p = .857, d = .09, r = .04.
The difference scores (present minus future) were also analyzed in relation to the week of the semester in which participants signed up for the experiment. Here, too, there was no significant difference between those who signed up early and those who signed up later.
A significant and surprising correlation did emerge in the unfavorable schedule condition in which scores on the future scale correlated negatively with the number of trials on which participants pressed a key (regardless of whether they quit), r = -.357, p < .05, two-tailed, n = 35. Correspondingly, scores on the present-hedonistic scale correlated positively with the number of trials served (r = .213) but the correlation was not significant. When the dependent variable was simply quit versus stayed, the correlations were consistent with those for number of trials served, but they, too, were not significant (future: r = .239, present: r = -.219). Although the one significant correlation may well have occurred by chance due to performing multiple correlations, its credibility is enhanced by its consistency with the three related correlations.
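A Pearson correlation of the kind reported here can be computed as sketched below. The scores are hypothetical, since individual-level data are not published; the sketch only illustrates the negative relation between future-scale scores and trials served.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical pattern: higher future-scale scores, fewer trials served
future_scores = [2.1, 2.8, 3.0, 3.4, 3.9, 4.2]
trials_served = [40, 38, 30, 25, 18, 12]  # possible range 0-40
r = pearson_r(future_scores, trials_served)  # negative, as reported
```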
Discussion
For the semester as a whole, participants withdrew at a much higher rate when they were instructed to choose the unfavorable schedule than when they were instructed to choose the favorable schedule. Because the participants rated the unfavorable schedule mostly as unpleasant and the favorable schedule mostly as pleasant, the discomfort produced by the experimental task is implicated as a factor in withdrawals.
Whether or not participants withdraw depends on the social context in which the session is conducted. The social context was established here by the background instructions that made the experiment's demand characteristics ambiguous (stating that leaving would not interfere with the research) and by misrepresenting the normative behavior of peers (stating that most participants withdrew). Navarick (2009) found that both components of these instructions were necessary to produce withdrawal rates in the range found in the present experiment. Deleting just the normative component sharply reduced withdrawals and deleting both components completely eliminated withdrawals.
The present results complement the previous ones by showing that these instructions create the potential for withdrawals but do not directly evoke them. Instead, they apparently act as an establishing operation (Michael, 1993) that makes the prospect of withdrawal less punishing. The participant can be seen as making a choice between the prospect of being punished by pressing the key and the prospect of being punished by tapping the bell to signal withdrawal. Withdrawals can be increased either by making the task more punishing (present study) or by making the prospect of withdrawing less punishing (Navarick, 2009). The same choice process has been proposed as the basis for a model of how subordinates come to withdraw from their roles as a way of defying orders from a malevolent authority to commit harmful acts (Navarick, 2012; cf. Equation 2 in that model and its application to the Lomazy massacre).
Withdrawal rates increased significantly across the semester in both the favorable and unfavorable schedule conditions. This finding replicates and extends the time-of-semester effect previously found in the unfavorable schedule condition (Navarick & Bellone, 2010). Moreover, the changes in withdrawal rates were virtually identical in the two conditions, despite the large difference in overall withdrawal rates. The implication is that the time-of-semester effect for withdrawals is largely independent of the participant's subjective reaction to the task.
Casa de Calvo and Reich (2007) proposed that students' persistence on demanding tasks tends to decline at the end of the semester due to an accumulation of obligations, such as multiple papers and final exams, that deplete the students' resources for coping. The researchers' assumption about academic workload was supported by the finding in the present experiment that the percentage of total scheduled obligations in introductory psychology classes increased significantly across quarters of the semester (Figure 3).
Casa de Calvo and Reich (2007) measured persistence by how much time the participants chose to devote to solving anagram problems at different levels of difficulty. The consequences of giving up on a difficult anagram problem are markedly different from the consequences of withdrawing from an entire experiment. However, giving up on a frustrating problem does have some resemblance to quitting an experiment that the participants rate as unpleasant; it clearly has more in common with that situation than with quitting an experiment that participants rate as pleasant. Therefore, if the depletion hypothesis were to be extended to withdrawals in a straightforward manner (i.e., with no additional assumptions), then the time-of-semester effect should occur with the unfavorable schedule but not with the favorable one. The current data argue against this extension as well as against a broader interpretation of the time-of-semester effect in terms of escape, as discussed below.
Two-Factor Hypothesis of Participant Withdrawal
Overall, withdrawal rates seem to have resulted from two factors: emotional and pragmatic. Most withdrawals throughout the semester occurred in the unfavorable schedule group, and apparently for emotional reasons--to escape the discomfort that was reflected in the participants' strongly negative hedonic ratings of the required schedule. In addition, there was evidence that the participants' level of discomfort increased with the number of times that they were exposed to the unfavorable schedule, an indication of aversive conditioning that could have provided the basis for escape. The mean rating of the unfavorable schedule in this group was significantly more negative than it was in the favorable schedule group (-3.31 vs. -2.29, respectively), the latter group having been required to produce the unfavorable schedule only twice during the four preexposure trials.
In the favorable schedule condition, there was no comparable aversive conditioning. Yet the withdrawal rate at the end of the semester increased by virtually the same amount (.25) as in the unfavorable schedule condition (.28). A second factor thus appears to have become influential, accounting for approximately one quarter of the withdrawals in both conditions.
As the end of the semester approached and the density of academic obligations increased (Figure 3), some students may have withdrawn because they could use the freed-up time for more useful purposes than watching a cartoon. They withdrew for pragmatic reasons. For them, withdrawal offered positive reinforcement for alternative behavior (making progress on assignments) rather than the negative reinforcement of escaping an increasingly aversive situation.
Predictive Value of the ZTPI
The ZTPI difference scores (present minus future) did not differ significantly between participants who withdrew and those who did not in either the unfavorable or the favorable schedule condition. These scores also did not differ significantly between those who participated early and those who participated late in the semester in either condition. However, a marginal but potentially important negative correlation appeared in the unfavorable schedule condition between scores on the future subscale and the number of trials on which participants pressed a key (regardless of whether they withdrew). This variable, ranging from 0 to 40, can be viewed as a measure of behavioral persistence.
Although the correlation may have occurred by chance (despite being consistent with three related correlations), it can plausibly be interpreted in terms of the distinction made above between the emotional and pragmatic factors that induce participants to leave an experiment. Future-oriented participants are characterized as tending to work persistently toward temporally distant goals (Harber et al., 2003). For some of these participants, freeing up time to work toward such goals could be advantageous if working on the present task is unnecessary and lacks reinforcing value.
The ZTPI has been found to be useful in a variety of clinical applications (Vranesh, Madrid, Bautista, Ching, & Hicks, 1999; Zimbardo & Boyd, 2008). However, Worrell and Mello (2007) have suggested that the ZTPI be tested further for its validity and reliability since low intercorrelations with other constructs have been reported and there is an insufficient number of validation studies. The present study highlights a methodological factor that needs to be considered when assessing the ZTPI: The test situation should include demand characteristics (Navarick, 2007; Orne, 1962) that encourage participants to take all the time they need to read, comprehend, and think about the questions.
In the present case, participants took the test online at their own convenience yet devoted less than 20 s to each question. During that brief time, the participants would not only have to comprehend the question but also retrieve memories that were necessary to apply it to themselves. Although some questions could probably be answered quickly, others would clearly require more reflection, such as the following question from the present-hedonistic scale: "It is more important for me to enjoy life's journey than to focus only on the destination." A controlled test environment that allocates more than sufficient time to take the ZTPI and that encourages reflection before answering could enhance the value of the test as a predictor of behavioral persistence.
Situational factors had robust effects on participants' exercise of their informed-consent right to withdraw. Withdrawal rates ranged from .07 during the first half of the semester among participants required to choose the favorable schedule to .67 during the second half of the semester among participants required to choose the unfavorable schedule. Previously, specific lines of instructions related to demand characteristics and normative behavior were shown to produce withdrawal rates across a similar range (Navarick, 2009).
In contrast, the dispositional construct of time perspective had marginal predictive value when the ZTPI was administered under the relatively uncontrolled conditions of a Web-based survey. Further refinements in test administration may enhance the predictive value of this inventory as well as others that may correlate with behavioral persistence. Questionnaires that measure dispositional constructs complement behavioral methods in that they provide a means of making predictions in situations that produce wide individual differences in performance.
References
CASA DE CALVO, M. P., & REICH, D. A. (2007). Spontaneous correction in the behavioral confirmation process: The role of naturally-occurring variations in self-regulatory resources. Basic and Applied Social Psychology, 29, 351-364. doi:10.1080/01973530701665132
HARBER, K. D., ZIMBARDO, P. G., & BOYD, J. N. (2003). Participant self-selection biases as a function of individual differences in time perspective. Basic and Applied Social Psychology, 25, 255-264. doi:10.1207/S15324834BASP2503_08
MICHAEL, J. (1993). Establishing operations. The Behavior Analyst, 16, 191-206.
NAVARICK, D. J. (2007). Attenuation and enhancement of compliance with experimental demand characteristics. The Psychological Record, 57, 501-515.
NAVARICK, D. J. (2009). Reviving the Milgram obedience paradigm in the era of informed consent. The Psychological Record, 59, 155-170.
NAVARICK, D. J. (2012). Historical psychology and the Milgram paradigm: Tests of an experimentally derived model of defiance using accounts of massacres by Nazi Reserve Police Battalion 101. The Psychological Record, 62, 133-154.
NAVARICK, D. J., & BELLONE, J. A. (2010). Time of semester as a factor in participants' obedience to instructions to perform an aversive task. The Psychological Record, 60, 101-114.
NAVARICK, D. J., & FANTINO, E. (1976). Self-control and general models of choice. Journal of Experimental Psychology: Animal Behavior Processes, 2, 75-87. doi:10.1037/0097-7403.2.1.75
ORNE, M. T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776-783. doi:10.1037/h0043424
ROMAN, R. J., MOSKOWITZ, G. B., STEIN, M. I., & EISENBERG, R. E. (1995). Individual differences in experiment participation: Structure, autonomy, and the time of the semester. Journal of Personality, 63, 113-138. doi:10.1111/j.1467-6494.1995.tb00804.x
TEUSCHER, U., & MITCHELL, S. H. (2011). Relation between time perspective and delay discounting: A literature review. The Psychological Record, 61, 613-632.
VRANESH, J. G., MADRID, G., BAUTISTA, J., CHING, P., & HICKS, R. A. (1999). Time perspective and sleep problems. Perceptual and Motor Skills, 88, 23-24.
WORRELL, F. C., & MELLO, Z. R. (2007). The reliability and validity of Zimbardo Time Perspective Inventory scores in academically talented adolescents. Educational and Psychological Measurement, 67, 487-503. doi:10.1177/0013164406296985
ZELENSKI, J. M., RUSTING, C. L., & LARSEN, R. J. (2003). Consistency in the time of experiment participation and personality correlates: A methodological note. Personality and Individual Differences, 34, 547-558. doi:10.1016/S0191-8869(01)00218-5
ZIMBARDO, P. G., & BOYD, J. N. (1999). Putting time in perspective: A valid, reliable individual-differences metric. Journal of Personality and Social Psychology, 77, 1271-1288. doi:10.1037/0022-3514.77.6.1271
ZIMBARDO, P. G., & BOYD, J. N. (2008). The time paradox. New York, NY: Free Press.
John A. Bellone, Douglas J. Navarick, and Raquel Mendoza
California State University, Fullerton
Correspondence concerning this article should be addressed to John A. Bellone, Department of Psychology, Loma Linda University, Loma Linda, CA 92350. E-mail: email@example.com