Using postfeedback delays to improve retention of computer-based instruction.
Abstract: Self-pacing, although often seen as one of the primary benefits of computer-based instruction (CBI), can also result in an important problem, namely, computer-based racing. Computer-based racing occurs when learners respond so quickly within CBI that mistakes are made, even on well-known material. This study compared traditional CBI with two forms of CBI designed to reduce computer-based racing: incentives/disincentives and postfeedback delays. All three formats were evaluated in terms of both performance and satisfaction using a between-group repeated measures design with pretest and posttest. Dependent measures included posttest scores and satisfaction questionnaire ratings. Posttest scores favored the use of postfeedback delays to improve learning over incentives/disincentives and control conditions. Postfeedback delays negatively affected satisfaction in comparison to the control condition, although no satisfaction differences were found between incentives/disincentives and postfeedback delays.

Key words: computer-based instruction, computer-based racing, postfeedback delays, monetary incentives, training
Authors: Johnson, Douglas A.
Dickinson, Alyce M.
Pub Date: 06/22/2012
Publication: The Psychological Record, Volume 62, Issue 3 (Summer 2012). Copyright 2012 The Psychological Record. ISSN: 0033-2933
Computer-based instruction (CBI) continues to grow as a training solution for business and education (Hannafin & Foshay, 2008; Mayfield, Glenn, & Vollmer, 2008; Rivera-Nivar & Pomales-Garcia, 2010). CBI can improve learning over traditional instruction, reduce instructional time and costs, accommodate learners in geographically diverse locations, improve retention, and standardize content delivery (Kruse & Keil, 2000; Kulik, 1994; Schultz & Schultz, 2006). Given the importance of CBI for both business and education, it is appropriate that researchers continue to analyze the variables that contribute to the success of computer-based instructional solutions (Johnson & Rubin, 2011).

CBI's success in improving performance may in part be the result of its ability to enforce active and meaningful responding. It has been repeatedly shown that frequently requiring learners to make an overt response during instruction can improve learning (Bodemer, Ploetzner, Feuerlein, & Spada, 2004; Eckerman et al., 2002; Miller & Malott, 1997, 2006). However, just because learners are actively responding with CBI does not necessarily mean that they are demonstrating understanding of the material (Markle, 1990). For example, simply clicking a "next" button to advance an instructional slide would be an active, but not very meaningful, response. A CBI program should be more than an "electronic page-turner," in which learners simply advance the content (Kruse & Keil, 2000). Fortunately, CBI can be designed to enforce demonstrative interactions, in which learners overtly demonstrate their understanding of the material throughout the instructional process. Classroom unison responding can approximate this, although it is difficult to enforce the responding of all learners in a group format. One-on-one tutors can enforce demonstrative interactions, but the associated time and costs often make such a solution impractical. Enforced, demonstrative interactions that are practical on a large scale may be CBI's most important and unique contribution.

Another frequently cited benefit of CBI is the ease of implementing learner self-pacing (Henry, 1995; Kruse & Keil, 2000; Milheim & Martin, 1991). It may be beneficial to distinguish between two types of CBI self-pacing: overall course pacing and within-unit pacing. Overall course pacing relates to the deadlines by which learners are expected to complete various CBI tutorial units and other assignments. Within-unit pacing involves how much time a learner spends studying the material within a given CBI unit. As desirable as self-pacing may sound in theory, in practice it often has detrimental effects, with both CBI and other types of instruction. Learners are often found to be poor managers of their own time (Steinberg, 1977). As other activities compete for a learner's time, procrastination frequently occurs with respect to the assigned instructional material, resulting in lower rates of completion in self-paced courses (Fox, 2004; Michael, 2004). When procrastination occurs with self-imposed deadlines for overall course pacing, the addition of externally set and frequent deadlines has been recommended to correct this problem (Fox, 2004; Hirsch, 1996; Michael, 2004). However, self-pacing within instructional units has often been considered beneficial (Heinich, Molenda, & Russell, 1993).

Unfortunately, within-unit self-pacing can also be problematic for CBI. In contrast to the procrastination problems seen with overall course self-pacing, within-unit self-pacing can produce responding that is too rapid. Learners begin making "let's-get-it-over-with" sloppy responses and therefore learn less (Markle, 1990). For example, Brown (2001) studied the performance of employees during an online training course. Many employees moved too quickly through the training and consequently had the worst test scores. The author postulated that the fast pace was motivated by an attempt to return to their other work obligations. The phenomenon of fast-paced responding that results in mistakes, even on well-known material, has been termed racing (Crosbie & Kelly, 1993, 1994; Kelly & Crosbie, 1997; Munson & Crosbie, 1998). It has been hypothesized that computer-based racing occurs so that learners can more quickly obtain the conditioned reinforcers of unit completion and "being done." Faster responding produces the subsequent material faster and brings the individual closer to escaping the current learning situation and moving on to more reinforcing situations. When learners begin blindly guessing at answers in order to finish more quickly, they are no longer demonstrating their understanding of the material. As such, computer-based racing undermines one of the most unique and important contributions of CBI.

There are several possible methods for reducing the likelihood of computer-based racing, such as partial reduction of learner control. The use of postfeedback delays partially reduces learner control and has been shown to improve learner performance (Crosbie & Kelly, 1993, 1994; Kelly & Crosbie, 1997). Postfeedback delays involve an enforced delay by the computer after the provision of feedback. When using CBI with postfeedback delays, the learner proceeds through the material at his or her own pace until a request for a demonstrative interaction is encountered. The learner either selects or constructs an answer (depending on the question format) and submits his or her answer. The computer provides some form of feedback. Immediately after the feedback, the computer enforces a delay for a predetermined time, thus preventing the learner from immediately proceeding to subsequent material. When the time period elapses, control is returned to the learner, allowing him or her to proceed to subsequent material when ready. Postfeedback delays appear to work in part by allowing further exposure and time to study instructional material (Crosbie & Kelly, 1994).
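To make this sequence concrete, the following Python sketch models a single instructional frame with an enforced postfeedback delay. It is an illustration only; the console prompts, sample frame, answer key, and delay value are invented stand-ins rather than the procedures used in the cited studies.

import time

# Hypothetical console sketch of one CBI frame with a postfeedback delay.
# The frame text, answer key, and 5-second delay value are illustrative only.
def present_frame(prompt: str, correct_answer: str, delay_seconds: float = 5.0) -> bool:
    response = input(f"{prompt}\nYour answer: ")          # learner constructs a response
    print(f"Correct answer: {correct_answer}")            # feedback is shown immediately
    time.sleep(delay_seconds)                             # enforced postfeedback delay; the learner cannot advance yet
    input("Press Enter to proceed to the next frame...")  # control returns to the learner
    return response.strip().lower() == correct_answer.lower()

if __name__ == "__main__":
    present_frame("A stimulus that increases behavior when presented is a ____.",
                  "reinforcer")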

Another way of decreasing computer-based racing is by implementing contingent monetary incentives and disincentives. Munson and Crosbie (1998) investigated this possibility by paying participants a 5-cent incentive for every correct answer and reducing the amount by 5 cents for every incorrect answer. Munson and Crosbie found that contingent incentives and disincentives improved performance in comparison to conditions in which payment was independent of performance. Participants within this arrangement retain complete control over self-pacing. However, computer-based racing response patterns are punished and appropriate pacing decisions are reinforced. Although financial incentives would be well-suited to business settings, the incentives and disincentives could easily take the form of points or other similar tokens of achievement in educational settings.
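As a rough illustration of this contingency (not code from Munson and Crosbie), the sketch below computes net earnings when each correct answer adds 5 cents and each incorrect answer subtracts 5 cents, which makes every error cost 10 cents relative to a correct response. The function name and sample figures are hypothetical.

# Illustrative earnings calculation for the contingent incentive/disincentive
# arrangement described by Munson and Crosbie (1998): +5 cents per correct
# answer, -5 cents per incorrect answer.
def contingent_earnings(num_correct: int, num_incorrect: int,
                        cents_per_answer: int = 5) -> int:
    """Return net earnings in cents for one block of questions."""
    return cents_per_answer * num_correct - cents_per_answer * num_incorrect

# Example: 40 correct and 10 incorrect answers yield 150 cents; each error
# costs 10 cents relative to answering the same question correctly.
print(contingent_earnings(40, 10))  # -> 150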

While other possibilities exist for reducing computer-based racing, only postfeedback delays and contingent incentives/disincentives have been empirically studied for the purpose of reducing such racing. However, these two methods have yet to be directly compared to one another to assess their relative impact on performance. Both methods have a number of advantages and disadvantages. Postfeedback delays require very little monitoring of learner interactions to be effective, therefore reducing supervisory demands. However, by their very nature, programmed delays automatically increase instructional time. Furthermore, learners often dislike being artificially slowed in their progress.

Munson and Crosbie (1998) reported that contingent monetary incentives and disincentives did not negatively impact satisfaction. However, there is an important disadvantage: Someone must evaluate the performance of learners, leading to the cost of increased supervision. The linguistic capabilities of computers are too limited to evaluate many complex learner responses, necessitating judgments by human evaluators (Chase, 1985; Pear & Martin, 2004) and thus further adding to the labor cost of this method. Finally, if monetary incentives are used, this can potentially add more to the financial costs associated with this method.

While the disadvantages associated with the aforementioned methods are problematic, each may be worth the costs if they improve learning within a CBI format. The question becomes, Which method is more effective, in terms of both performance and satisfaction? This study sought to address this question by comparing the effects of postfeedback delays and contingent incentives/disincentives with each other and with normal CBI conditions.

Method

Participants and Setting

Sixty-one university students with little to no prior knowledge of the instructional materials were recruited via in-class announcements and on-campus flyers. Informed consent was appropriately obtained from all participants prior to experimental sessions. Sessions were conducted in a university laboratory containing four desktop computers, keyboards, mice, chairs, and tables. The computers were partitioned from one another by cubicle walls so that adjacent participants could not view each other's screens. Cubicle walls also prevented experimenters from observing the screens of participants while instructional units were being completed.

Instructional Material

A computer program created in Macromedia Flash and developed by the first author presented 27 units, which consisted of Sets 1-16 and 18-28 of the material from Holland and Skinner's (1961) The Analysis of Behavior. Sets 17 and 29 are review sets, which were used as the basis for the pre- and posttests. For 16 of the experimental units, paper-based supplements titled "Exhibits" accompanied the CBI program. These exhibits were based on the exhibits used in the Holland and Skinner textbook.

Every instructional slide of the unit contained brief and incomplete statements requiring the learner to supply a response. Participants constructed responses for each question by typing the response on the keyboard and then clicking a "Submit Your Answer" button. After clicking this button, participants could no longer alter their answer. Feedback consisted of the correct answer being displayed immediately after clicking "Submit Your Answer." Participants then scored the correctness of their response by typing either "C" or "I" and clicking a "Submit scoring" button to help ensure that they were attending to the feedback. When participants completed all of the instructional slides, an "End Unit" button was displayed on the screen. Figure 1 displays sample screenshots from the program.
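The following is a minimal console sketch of this frame-by-frame flow, offered only as an analogue of the actual Flash program; the sample statements, answers, and function names are invented, and the self-scoring step mirrors the "C"/"I" entry used to confirm attention to feedback.

# Minimal sketch of the unit flow described above, using a console stand-in
# for the Flash interface; frame text and answers are placeholders.
FRAMES = [
    ("Behavior that operates on the environment is called ____ behavior.", "operant"),
    ("A stimulus change that follows a response is a(n) ____.", "consequence"),
]

def run_unit(frames):
    self_scores = []
    for prompt, answer in frames:
        typed = input(f"{prompt}\nType your answer and press Enter to submit: ")
        print(f"Feedback: the correct answer is {answer}")
        score = ""
        while score not in ("C", "I"):                      # learner self-scores the response
            score = input('Score your answer ("C" or "I"): ').strip().upper()
        self_scores.append((typed, answer, score))
    print("End Unit")                                       # analogous to the "End Unit" button
    return self_scores

if __name__ == "__main__":
    run_unit(FRAMES)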

[FIGURE 1 OMITTED]

Pretest and Posttest Measures

All participants completed a 51-question paper-based pretest in order to screen out potential participants who already knew the material and to obtain a covariate measure. Participants were informed that they could earn $5.00 if they scored 65% or better on the pretest; however, any participant who did so would have been immediately excluded from further participation. Participants were not told in advance that scoring 65% or better would exclude them from the study. No participants were excluded on the basis of this criterion. Questions were drawn from review Sets 17 and 29 of the Holland and Skinner (1961) text and covered material taught in nonreview Sets 1-16 and 18-22. The posttest was identical to the pretest.

Experimental Design

A between-group repeated measures design with pretest and posttest was used to assess differences in learning. The participants were randomly assigned to one of three groups: postfeedback delay, incentives/disincentives, or control. Twenty-one participants were randomly assigned to the postfeedback delay condition, whereas 20 participants were randomly assigned to each of the incentives/disincentives and control conditions. To assess differences in satisfaction, each participant was exposed to all three instructional methods after the posttest, before satisfaction data were collected.

Independent Variables

Postfeedback delay. Participants were paid 5 cents for each question they completed, regardless of accuracy. Paying by question (rather than by time) helped to more directly tie effort to payment as well as increase the similarity to the incentives/disincentives condition (see the next section). After participants clicked the "Submit scoring" button, there was a 5-s delay during which the material could not be advanced. The incomplete statement, the feedback, and the participant's response remained visible on the screen. In addition, a horizontal red bar that progressively decreased in size appeared at the bottom of the screen during the delay (see Figure 1 for an example). When the red bar disappeared, a button appeared along with the text "Proceed to next question." Clicking the button allowed the participant to advance to the next question. Other than the imposed delay, the presentation of material was learner-paced.
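The sketch below approximates this condition's timing logic in a console setting. The bar width, refresh rate, and prompt wording are assumptions for illustration rather than details of the original Flash program.

import sys
import time

# Rough console analogue of the 5-s postfeedback delay with a progressively
# shrinking bar; the bar width and refresh rate are arbitrary choices.
def postfeedback_delay(delay_seconds: float = 5.0, bar_width: int = 40) -> None:
    steps = bar_width
    for remaining in range(steps, 0, -1):
        bar = "#" * remaining                       # bar shrinks as the delay elapses
        sys.stdout.write(f"\r{bar:<{bar_width}}")
        sys.stdout.flush()
        time.sleep(delay_seconds / steps)
    sys.stdout.write("\r" + " " * bar_width + "\r")
    input("Proceed to next question (press Enter): ")  # control returns to the learner

if __name__ == "__main__":
    postfeedback_delay()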

Incentives/disincentives. Participants were paid 5 cents for each question they answered correctly and lost 5 cents for each question they answered incorrectly (i.e., answering a phrase incorrectly would result in the participant receiving 10 cents less than if the question had been answered correctly). Participants did not lose more money than they earned for each unit (i.e., they did not owe the experimenter any money if performance was poor), and pay was calculated based on the sum total of correct and incorrect answers within each unit after completion. Immediately after clicking the "Submit scoring" button, the "Proceed to next question" button appeared. Progression through each unit was entirely learner-paced.
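A short sketch of the per-unit pay settlement just described, assuming the 5-cent rate and the zero floor; the function name and sample counts are illustrative.

# Sketch of the per-unit pay settlement described above: 5 cents gained per
# correct answer, 5 cents lost per incorrect answer, with unit earnings
# floored at zero so participants never owe money.
def unit_pay_cents(num_correct: int, num_incorrect: int, rate_cents: int = 5) -> int:
    net = rate_cents * (num_correct - num_incorrect)
    return max(net, 0)  # participants could not lose more than they earned in a unit

# Example: 10 correct and 25 incorrect answers settle at 0 cents, not -75.
print(unit_pay_cents(10, 25))  # -> 0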

Control. Participants were paid 5 cents for each question they completed, regardless of accuracy. Immediately after clicking the "Submit scoring" button, the "Proceed to next question" button appeared. Progression through each unit was entirely learner-paced. This was meant to be analogous to a typical workplace setting, where employees are paid for the time they spend completing computer-based training. Although employees are typically paid by the hour, not by the question, it is necessary to pay in this fashion to keep this condition as similar as possible to the other two experimental conditions outside of the independent variable. Furthermore, the more questions a unit contained, the longer it took to complete. As such, there was a relation between the amount of time on the job and the amount of payment, similar to hourly pay in the workplace (although not perfectly analogous).

Experimental Procedures

Each experimental session took less than 1 hr to complete. Previous research using a similar task has shown that fatigue often occurs when three or more units are completed (Crosbie & Kelly, 1994); therefore, participants only completed two units per session and never completed more than one session per day. Sessions began with an experimenter seating the participant in front of a computer that was already set to the appropriate unit. The experimenter then told the participant whether or not an exhibit accompanied the unit and asked the participant to click "Begin Unit" when he or she was ready. When the participant clicked "Begin Unit," the experimenter left the view of the participant and recorded the start time. When the unit was complete, the computer informed the participant to let the experimenter know that he or she was finished. The computer automatically recorded the start and end time.

After administration of the posttest (after the completion of Computer Unit 21), participants were exposed to each of the three conditions using an alternating-treatments design in order to obtain satisfaction measures. Satisfaction ratings often do not differ across experimental conditions unless participants are exposed to all conditions, thus enabling them to make meaningful comparisons with respect to their relative satisfaction with each condition (Bucklin & Dickinson, 2001; Dickinson & Gillette, 1993). As with prior sessions, participants completed two units per session. The final six units consisted of Sets 23-28 from the Holland and Skinner (1961) text. Two units were presented under the postfeedback delay condition, two under the incentives/disincentives condition, and two under the control condition. The order in which participants were exposed to these conditions was randomly determined for each participant. Before the participant pressed the "Begin Unit" button, the experimenter handed him or her a description of the experimental condition under which that unit would be completed.

Dependent Measures

The following measures were used to assess differences between CBI formats and were based on the first 21 units: percentage correct per instructional unit, minutes spent completing each unit, and accuracy of self-scoring. The density of rewards was calculated as an approximate hourly rate for each participant, based on total earnings divided by total training time. Other dependent measures included the number correct on the pretest and the number correct on the posttest. After completion of all 27 units, satisfaction survey ratings were collected.
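For clarity, the reward-density computation amounts to the following; the sample figures are illustrative, not the study's data.

# Approximate hourly rate as described above: total earnings divided by total
# training time. Variable names and sample values are hypothetical.
def hourly_rate_dollars(total_earnings_cents: int, total_minutes: float) -> float:
    return (total_earnings_cents / 100) / (total_minutes / 60)

# Example: 4,500 cents earned over 7 hours (420 minutes) is about $6.43/hr.
print(round(hourly_rate_dollars(4500, 420), 2))  # -> 6.43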

Given the limitations of computers in evaluating complex answers, human evaluators were needed to score participant responses. For example, the correct answer to a question may be "response." If a participant typed "behavior" or "reponse," the computer would score these as incorrect, even though such alternative terms or spellings may be considered acceptable by a human evaluator. Thus, the data collected by the computer were printed out and given to the experimenters so that participant answers could be scored for accuracy.

Interobserver Agreement

Interobserver agreement was collected on participant responses and the time spent completing each unit. Two experimenters independently scored the accuracy of participant responses made during learning, with every question being marked as correct or incorrect for 100% of the units. Time measures were scored by comparing the computer records with the experimenter records for start and completion times for 30% of the units. This was done to ensure the computer was recording time accurately. Any duration that differed by less than 1 min was marked as an agreement. Interobserver agreement on both measures was calculated by dividing the number of agreements by the number of agreements plus disagreements (point-by-point agreement) and then multiplying by 100. Interobserver agreement on participant responses averaged 97.2% and never fell below 86.7% for experimental sessions. Interobserver agreement on time durations was 100% for all units assessed.
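The point-by-point agreement calculation can be expressed as follows; the sample counts are invented for illustration.

# Point-by-point interobserver agreement as described above:
# agreements / (agreements + disagreements) * 100.
def percent_agreement(agreements: int, disagreements: int) -> float:
    return 100 * agreements / (agreements + disagreements)

# Example: 87 agreements and 3 disagreements give 96.7% agreement.
print(round(percent_agreement(87, 3), 1))  # -> 96.7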

Results

Figure 2 displays the adjusted means for the percentage correct for the posttest. An analysis of covariance revealed that the obtained differences were statistically significant (F = 5.90, p = 0.005, η² = 0.17). Fisher's protected LSD pairwise comparisons were calculated to discover the source of the differences. The differences between the incentives/disincentives and control conditions were not significant at the .05 level. The differences between postfeedback delays and the other two conditions were significant at the .05 level.

[FIGURE 2 OMITTED]

The satisfaction ratings of the participants also differed on a 9-point scale (1 = not at all satisfied, 9 = extremely satisfied), with postfeedback delay being rated as 5.4, incentives/disincentives being rated as 5.1, and control being rated as 8.3. As a result of attrition after the completion of the posttest, two participants did not provide satisfaction data because they were not sufficiently exposed to all three experimental conditions.

The approximate hourly rates for the participants (based on total earnings and total time for the first 21 units) were $5.29 (postfeedback delay), $1.88 (incentives/disincentives), and $6.33 (control). The inclusion of the 5-s postfeedback delays did add an additional 1 hr and 14 min to the mean total time that participants spent learning the material. However, if one considers only the time periods in which learners had control over the instructional pace (total time minus the 5-s delays), the mean total times were approximately the same in the postfeedback delay condition (7 hr 14 min), incentives/disincentives condition (7 hr 7 min), and control condition (7 hr 5 min).

Discussion

As indicated by Figure 2, the postfeedback delay condition was most effective in improving performance on posttest measures. However, as indicated by satisfaction ratings, participants disliked being artificially slowed by such delays as compared with the control condition. Satisfaction with postfeedback delays did not differ from the incentives/ disincentives condition. Overall, the control condition was the most preferred condition. This is not surprising, given that this condition allowed participants to earn money unhindered by delays in their progress and without regard to accuracy. Furthermore, the control condition resulted in the highest rate of earnings.

Although the control condition was the most favored condition, the performance data argue against its use in training situations where learning outcomes are of primary importance. In fact, one should not expect that the most effective learning solution will be the most popular one. As others have pointed out, even though the outcomes of learning (i.e., fluent performance) might be enjoyable, the learning process itself is often stressful, especially when one must learn a large amount of material in a short time (Gilbert, 1995; Lindsley, 1992; Michael, 2004).

Similar to previous studies using 10-s postfeedback delays (Crosbie & Kelly, 1993, 1994; Kelly & Crosbie, 1997), the current study found 5-s postfeedback delays to be effective at improving the retention of instructional material. However, the current study conflicts with the previous study on the use of contingent monetary incentives and disincentives to reduce computer-based racing. Munson and Crosbie (1998) found that contingent incentives and disincentives improved performance over noncontingent incentives without negatively impacting satisfaction. The current study found roughly the opposite, with contingent incentives and disincentives failing to improve performance and also reducing satisfaction. Given that the same instructional material and monetary arrangements were used in both studies, it is worthwhile to note possible explanations for this discrepancy.

One possible explanation relates to differences in research design. Munson and Crosbie (1998) used an alternating-treatments design in which participants were exposed to 10 to 15 sessions of each condition (incentives/disincentives and control), with the order of presentation randomly determined for each participant. In the present study, participants were only exposed to one experimental condition prior to collection of performance measures. It is possible that contrast effects were present in Munson and Crosbie, with participants responding to the rapidly alternating conditions differently than they would have if the conditions were presented in isolation (Komaki & Goltz, 2001). Given that it is improbable that instructional arrangements would be rapidly alternated in an applied setting, the current study may be more representative of applied performance than the study conducted by Munson and Crosbie.

Although contrast effects may explain differences in posttest outcomes, they are unlikely to explain differences in satisfaction data. In both Munson and Crosbie (1998) and the current study, participants were exposed to multiple experimental conditions in an alternating fashion prior to the collection of satisfaction measures. One possible explanation for the satisfaction differences may relate to sampling error due to the fact that there were only three research participants in Munson and Crosbie. To illustrate why this is problematic, it may be useful to look at the satisfaction ratings in the current study. Although the majority of participants rated incentives/disincentives as less satisfactory than the control condition, there were still six individuals who rated these conditions as equal. Furthermore, seven participants assigned only a 1-point difference between these conditions. If one speculates that this study's sample is representative of the population as a whole, one could then assume that 22% (13 out of 59 surveyed participants) of the population would rate virtually no difference between incentives/disincentives and control conditions. A sample of three people (as in Munson and Crosbie) is highly susceptible to sampling error, and it is quite plausible that all three participants came from this unrepresentative 22% minority. Sampling error might not only explain the satisfaction discrepancies but also account for the performance discrepancies. Ultimately, when dealing with an intervention that is likely to generate large variability in responding across participants, it is important to use adequate sample sizes and to use caution when interpreting studies utilizing small samples.

Based on the present study and previous studies (Crosbie & Kelly, 1993, 1994; Kelly & Crosbie, 1997), there is evidence that postfeedback delays of 5 and 10 s can improve performance in a computer-based instructional format. However, these differing postfeedback delay durations have yet to be directly compared to see which are most effective at improving performance. A delay that is too short is unlikely to foster remediation or rehearsal, whereas a delay that is too long is going to unnecessarily frustrate learners and increase instructional time. What is unknown is what duration is optimal for balancing learning and satisfaction. Future research should address this question by directly comparing postfeedback delays of different durations.

Although the present study did not support the use of this particular arrangement of monetary incentives for improving performance, this does not mean monetary incentives in general are ineffective. There are a number of studies demonstrating the effectiveness of monetary incentives for improving performance, even small incentives (Bucklin & Dickinson, 2001). For example, Johnson, Dickinson, and Huitema (2008) found that incentives as small as $.006 for each completed unit resulted in large improvements in performance. Future research should investigate under what arrangements incentives and disincentives are effective with CBI (both monetary and nonmonetary incentives).

Future research should also examine other methods for reducing computer-based racing. Two potential examples include branching formats and the incorporation of mastery learning. In branching formats, supplemental material is automatically added to instruction following mistakes on the part of the learner. This is contrasted with linear formats, where all learners are exposed to the same amount of material regardless of accuracy during learning. With mastery learning, learners must achieve some predetermined performance criteria during an instructional section before being allowed to proceed to subsequent instructional material. Mastery learning should be of special concern to organizations in which employee mistakes cannot be tolerated, making it critical for employees to demonstrate understanding on every aspect of training (for example, consider many safety applications). Computer-based training affords an opportunity for errors and repeated practice without the serious consequences of on-the-job mistakes or costly commitment of repeated face-to-face training.
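As a purely speculative sketch (neither mechanism was implemented in the present study), the following illustrates how branching remediation and a mastery criterion might be combined in a CBI unit; the 80% criterion, attempt cap, question pool, and simulated learner are all assumptions for illustration.

import random

# Speculative sketch combining branching (remediation after each error) and
# mastery learning (repeat a section until a criterion is met). All parameters
# and the simulated learner are invented for illustration.
def run_section(questions, remediation, mastery_criterion=0.8, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        correct = 0
        for prompt, answer in questions:
            response = answer if random.random() < 0.75 else "???"  # simulated learner
            if response == answer:
                correct += 1
            else:
                # branching: supplemental material is inserted after a mistake
                print(f"Remediation: {remediation.get(prompt, 'review this frame')}")
        if correct / len(questions) >= mastery_criterion:
            print(f"Mastery reached on attempt {attempt}; proceed to the next section.")
            return True
        print("Criterion not met; the section repeats.")
    return False

if __name__ == "__main__":
    qs = [("Reinforcement ____ the future frequency of behavior.", "increases"),
          ("Extinction involves withholding the ____.", "reinforcer")]
    run_section(qs, {qs[0][0]: "See the frames on reinforcement."})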

Ultimately, the main contribution of this study is that it shows that postfeedback delays even as brief as 5 s can improve learning in a computerized instructional format similar to those implemented in organizational settings, even if not for reasons of self-pacing. As suggested earlier, there is much to be done to investigate self-pacing and other factors that influence the effectiveness of CBI. CBI, like other forms of instruction, needs to be more than just cutting edge hardware and software. It can often be fashionable to adopt technology for the sake of having technology. However, if one wishes to promote real and lasting change, such technology requires a careful analysis of both behavior and instructional content in order to produce learning that will add value to organizations and educational institutions. Given the increasing presence of such instruction in both business and educational settings, it is critical that experts on the learning process continue to contribute to and guide these growing technologies.

The authors would like to thank John Crosbie and Janet Emmendorfer of AME-Learning for their technical and financial assistance with this study.

Correspondence concerning this article should be addressed to Douglas A. Johnson, P.O. Box 20415, Kalamazoo, MI 49019. E-mail: djohnson@operant-tech.com

References

BODEMER, D., PLOETZNER, R., FEUERLEIN, I., & SPADA, H. (2004). The active integration of information during learning with dynamic and interactive visualisations. Learning and Instruction, 14, 325-341. doi:10.1016/j.learninstruc.2004.06.006

BROWN, K. G. (2001). Using computers to deliver training: Which employees learn and why? Personnel Psychology, 54, 271-296. doi:10.1111/j.1744-6570.2001.tb00093.x

BUCKLIN, B. R., & DICKINSON, A. M. (2001). Individual monetary incentives: A review of different types of arrangements between performance and pay. Journal of Organizational Behavior Management, 21(3), 45-137. doi:10.1300/J075v21n03_03

CHASE, P. N. (1985). Designing courseware: Prompts from behavioral instruction. The Behavior Analyst, 8, 65-76.

CROSBIE, J., & KELLY, G. (1993). A computer-based Personalized System of Instruction course in applied behavior analysis. Behavior Research Methods, Instruments, & Computers, 25, 366-370. doi:10.3758/BF03204527

CROSBIE, J., & KELLY, G. (1994). Effects of imposed postfeedback delays in programmed instruction. Journal of Applied Behavior Analysis, 27, 483-491. doi:10.1901/jaba.1994.27-483

DICKINSON, A. M., & GILLETTE, K. L. (1993). A comparison of the effects of two individual monetary incentive systems on productivity: Piece rate pay versus base pay plus incentives. Journal of Organizational Behavior Management, 14(1), 3-82. doi:10.1300/J075v14n01_02

ECKERMAN, D. A., LUNDEEN, C. A., STEELE, A., FERCHO, H. L., AMMERMAN, T. A., & ANGER, W. K. (2002). Interactive training versus reading to teach respiratory protection. Journal of Occupational Health Psychology, 7, 313-323. doi:10.1037/1076-8998.7.4.313

FOX, E. J. (2004). The Personalized System of Instruction: A flexible and effective approach to mastery learning. In D. J. Moran & R. W. Malott (Eds.), Evidence-based educational methods (pp. 201-221). San Diego, CA: Elsevier Academic Press.

GILBERT, T. F. (1996). Human competence: Engineering worthy performance. Washington, DC: The International Society for Performance Improvement.

HANNAFIN, R. D., & FOSHAY, W. R. (2008). Computer-based instruction's (CBI) rediscovered role in K-12: An evaluation case study of one high school's use of CBI to improve pass rates on high-stakes tests. Educational Technology Research and Development, 56, 147-160. doi:10.1007/s11423-006-9007-4

HEINICH, R., MOLENDA, M., & RUSSELL, J. D. (1993). Instructional media and the new technologies of instruction (4th ed.). New York, NY: Macmillan Publishing Company.

HENRY, M. J. (1995). Remedial math students' navigation patterns through hypermedia software. Computers in Human Behavior, 11, 481-493. doi:10.1016/0747-5632(95)80012-W

HIRSCH, E. D., JR. (1996). The schools we need and why we don't have them. New York, NY: Doubleday.

HOLLAND, J. G., & SKINNER, B. F. (1961). The analysis of behavior. New York, NY: McGraw-Hill.

JOHNSON, D. A., DICKINSON, A. M., & HUITEMA, B. E. (2008). The effects of objective feedback on performance when individuals receive fixed and individual incentive pay. Performance Improvement Quarterly, 20, 53-74. doi:10.1002/piq.20003

JOHNSON, D. A., & RUBIN, S. (2011). Effectiveness of interactive computer-based instruction: A review of studies published 1995-2007. Journal of Organizational Behavior Management, 31, 55-94. doi:10.1080/01608061.2010.541821

KELLY, G., & CROSBIE, J. (1997). Immediate and delayed effects of imposed postfeedback delays in computerized programmed instruction. The Psychological Record, 47, 687-698.

KOMAKI, J. L., & GOLTZ, S. M. (2001). Within-group research designs: Going beyond program evaluation questions. In C. M. Johnson, W. K. Redmon, & T. C. Mawhinney (Eds.), Handbook of organizational performance: Behavior analysis and management (pp. 81-137). Binghamton, NY: The Haworth Press, Inc.

KRUSE, K., & KEIL, J. (2000). Technology-based training: The art and science of design, development, and delivery. San Francisco, CA: Jossey-Bass/Pfeiffer.

KULIK, J. A. (1994). Meta-analysis studies of findings on computer-based instruction. In E. L. Baker & H. F. O'Neil, Jr. (Eds.), Technology assessment in education and training (pp. 9-33). Hillsdale, NJ: Lawrence Erlbaum Associates.

LINDSLEY, O. R. (1992). Why aren't effective teaching tools widely adopted? Journal of Applied Behavior Analysis, 25, 21-26. doi:10.1901/jaba.1992.25-21

MARKLE, S. M. (1990). Designs for instructional designers. Champaign, IL: Stipes Publishing Company.

MAYFIELD, K. H., GLENN, I. M., & VOLLMER, T. R. (2008). Teaching spelling through prompting and review procedures using computer-based instruction. Journal of Behavioral Education, 17, 303-312. doi:10.1007/s10864-008-9069-y

MICHAEL, J. L. (2004). Concepts and principles of behavior analysis (2nd ed.). Kalamazoo, MI: Association for Behavior Analysis.

MILHEIM, W. D., & MARTIN, B. L. (1991). Theoretical bases for the use of learner control: Three different perspectives. Journal of Computer-Based Instruction, 18, 99-105.

MILLER, M. L., & MALOTT, R. W. (1997). The importance of overt responding in programmed instruction even with added incentives for learning. Journal of Behavioral Education, 7, 497-503. doi:10.1023/A:1022811503326

MILLER, M. L., & MALOTT, R. W. (2006). Programmed instruction: Construction responding, discrimination responding, and highlighted keywords. Journal of Behavioral Education, 15, 111-119. doi:10.1007/s10864-006-9010-1

MUNSON, K. J., & CROSBIE, J. (1998). Effects of response cost in computerized programmed instruction. The Psychological Record, 48, 233-250.

PEAR, J. J., & MARTIN, T. L. (2004). Making the most of PSI with computer technology. In D. J. Moran & R. W. Malott (Eds.), Evidence-based educational methods (pp. 223-243). San Diego, CA: Elsevier Academic Press.

RIVERA-NIVAR, M., & POMALES-GARCIA, C. (2010). E-training: Can young and older users be accommodated with the same interface? Computers & Education, 55, 949-960. doi:10.1016/j.compedu.2010.04.006

SCHULTZ, D., & SCHULTZ, S. E. (2006). Psychology and work today: An introduction to industrial and organizational psychology (9th ed.). Upper Saddle River, NJ: Pearson Education, Inc.

STEINBERG, E. R. (1977). Review of student control in computer-assisted instruction. Journal of Computer-Based Instruction, 3, 84-90.

Douglas A. Johnson and Alyce M. Dickinson

Western Michigan University