Humans and monkeys exert metacognitive control based on learning difficulty in a perceptual categorization task.
Recently, Redford (2010) found that monkeys seemed to exert
metacognitive control in a category-learning paradigm. Specifically,
they selected more trials to view as the difficulty of the
category-learning task increased. However, category-learning difficulty
was determined by manipulating the family resemblance across the
to-be-learned exemplars. Although this effectively influenced the
learning difficulty, difficulty was confounded with novelty. For
instance, a weak family resemblance made category learning difficult,
but also increased the amount of perceptual change from trial to trial.
The current research rules out novelty in favor of difficulty by
manipulating the number of dots involved in the dot distortions while
controlling the amount of perceptual change.
Key words: metacognition, rhesus macaques, categorization, dot distortions
Author: Joshua S. Redford
Publication: The Psychological Record, Vol. 60, Issue 4 (Fall 2010). ISSN 0033-2933. Copyright 2010 The Psychological Record.
Metacognition--defined as thinking about thinking--involves a
monitoring component and a control component (Nelson & Narens, 1990). The
monitoring component is responsible for assessing the mind's basic
mental processes (Dunlosky & Nelson, 1992; Koriat & Bjork, 2005;
Koriat & Ma'ayan, 2005; Serra & Dunlosky, 2005; Thiede,
Anderson, & Therriault, 2003). Metacognitive monitoring provides
information related to how difficult information will be to learn and
how well information has been learned. Metacognitive control uses this
information to control study. For example, students who quit studying
because they decide that the information is already well learned are
exerting metacognitive control. In this case, metacognitive monitoring
provided information regarding the state of learning (i.e., the
information was well learned), and metacognitive control enacted the
appropriate action (i.e., to terminate study). Effective metacognitive
monitoring and control have been shown to be critical for learning.
Recently, Redford (2010) provided evidence that rhesus macaques (Macaca mulatta, hereafter referred to as monkeys) and humans both exert metacognitive control when learning dot distortion categories. This task used categories at three different difficulty levels. Humans and monkeys had control over how many dot distortions of the to-be-tested category to view before transitioning to test. Redford found that the humans and two of the three monkeys chose to view more exemplars when the categories were more difficult to learn. This behavior could not be attributed to a learned connection between behavioral responses and consequences--termed associative learning. Responses interpreted as "metacognitive" in earlier work (e.g., Kornell, Son, & Terrace, 2007) were also paired with potentially rewarding consequences (e.g., no penalty for an error, food pellet, correct answer, easier subsequent trial). In the present work, consequences (rewards, punishments) of preparation occurred during the test phase, so they were far removed from the actual (potentially metacognitive) behavior. With associative learning ruled out, it becomes more likely that increased study trial views reflect an attempt to compensate for the increased difficulty of the study materials.
One limitation of Redford's (2010) research was that the paradigm confounded difficulty with novelty. To increase difficulty, the visual similarity (i.e., the family resemblance) among the exemplars was reduced, as previous research has shown this to be an effective manipulation (Homa & Cultice, 1984; Posner, Goldsmith, & Welton, 1967). However, weakening the family resemblance increased the dissimilarity across exemplars, which produced greater trial-to-trial changes in visual appearance, or perceptual changes. Therefore, humans and monkeys may have chosen to view more dot distortions because the novelty was greater when learning the more difficult categories. In other words, the more the shapes changed from trial to trial, the more shapes participants were willing to view. Likewise, the shapes changed little from trial to trial in the low-difficulty condition, which may have encouraged participants to move on more quickly to the more stimulating test phase. Evidence favoring this possibility comes from reports of a novelty preference in humans starting in infancy (e.g., Thompson, Petrill, DeFries, Plomin, & Fulker, 1994) and in monkeys (Golub & Germann, 1998). Novelty preference has been explored in human infants using Fantz's (1958) preference method. In the traditional paradigm, infants are presented with two stimuli, and they look longer at the stimulus that is more interesting. For instance, infants will look longer at attractive adult faces than less attractive faces (e.g., Samuels & Ewy, 1985). This research has found an infant preference for any stimulus that is novel when that stimulus is presented alongside a familiar stimulus (e.g., Quinn & Eimas, 1986; Rieth & Sireteanu, 1994).
This research used the paradigm introduced by Redford (2010) but equated novelty across the difficulty levels. The degree of trial-to-trial changes was due to the strength of the family resemblance among the exemplars. By keeping family resemblance at a fixed level, novelty was controlled. Instead, difficulty was manipulated by changing the number of dots involved in the dot distortions (hereafter referred to as dot count). An increase in dot count increased difficulty, as each pattern required that more details be learned. For example, participants had to encode how 5 angles were arranged with 5-dot distortions, but had to encode the arrangement of 9 angles with 9-dot distortions. Figures 1A, 1B, and 1C show a set of 5-dot, 7-dot, and 9-dot distortions, respectively. The categorization task was first presented to humans to ensure that a metacognitive pattern still emerged.
[FIGURE 1 OMITTED]
In this experiment, human participants studied categories that varied in dot count. Dot count influenced the difficulty level, but the visual novelty was controlled across the difficulty levels. Therefore, if participants monitored the difficulty of the materials and increased their study trial views as the difficulty level required, they were exerting metacognitive control. On the other hand, if participants viewed approximately the same number of trials regardless of difficulty level, they were guided by the novelty of the stimuli, as all dot distortions were equated for trial-to-trial changes.
Participants. Forty-nine students from the University at Buffalo participated in this experiment in partial fulfillment of their introductory psychology course requirements. All participants were treated in accordance with APA ethical standards.
Design. This experiment was a single-factor design with dot count (5, 7, 9) serving as the within-participant independent variable.
Materials. The dot distortion categories were created with a method described in Smith and Minda (2002) and originally developed by Posner et al. (1967). This method began with nine randomly positioned dots within a central 30 x 30 area of a 50 x 50 grid. These nine original dots represent the prototype. From this prototype, dot distortions were generated by moving the dots different distances from their original location. Specifically, dot distortions were built from prototypes by probabilistically moving each of the nine dots into one of five areas that covered a 20 x 20 grid of pixels that surrounded it. For Area 1, the dot kept its original position. For Area 2, the dot was moved to one of the 8 pixel positions in the shell immediately around its original position. For Area 3, the dot was moved to one of the 16 pixel positions in the 2nd shell of pixels around it. For Area 4, the dot was moved into one of the 75 pixel positions in the 3rd, 4th, and half of the 5th pixel shell around it. For Area 5, the dot was moved into one of the remaining 300 pixel positions in the surrounding 20 x 20 pixel grid (i.e., to the 5th, 6th, 7th, 8th, 9th, or 10th shell of pixels around the dot's original position). Dot distortion level was controlled by adjusting the probabilities of dot movement to different areas. Study trials were generated by a Level 5 dot distortion algorithm. For Level 5 dot distortions, the probabilities that the (5, 7, or 9) dots would move to each of the five areas were .200, .300, .400, .050, and .050. These probabilities mean that dot displacement will be moderate. Fifty percent of the time, the dot will remain within the first two areas (the eight pixels immediately surrounding the dot). Forty percent of the time, the dot will move to the third area. Ten percent of the time, the dot will move to the outer fourth or fifth area. These dot distortions form the category for the prototype from which they are derived. 
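The generation method described above can be sketched in Python. This is a hedged reconstruction, not the original Turbo Pascal program: all names are ours, and Area 4 (which covers the 3rd, 4th, and half of the 5th pixel shell) is approximated by sampling whole shells 3 through 5.

```python
import random

# Level 5 distortion: probabilities that a dot lands in Areas 1-5
AREA_PROBS = [0.200, 0.300, 0.400, 0.050, 0.050]

# Chebyshev "shells" of pixels covered by each area (shell k holds 8k pixels).
# Area 4's half-shell is approximated here by allowing whole shells 3-5.
AREA_SHELLS = [(0, 0), (1, 1), (2, 2), (3, 5), (5, 10)]

def random_offset_in_shell(k):
    """Pick a uniformly random pixel offset at Chebyshev distance k."""
    if k == 0:
        return (0, 0)
    while True:  # rejection sampling within the (2k+1) x (2k+1) square
        dx = random.randint(-k, k)
        dy = random.randint(-k, k)
        if max(abs(dx), abs(dy)) == k:
            return (dx, dy)

def make_prototype(n_dots=9):
    """Randomly position dots in the central 30 x 30 area of a 50 x 50 grid."""
    return [(random.randint(10, 39), random.randint(10, 39))
            for _ in range(n_dots)]

def distort(prototype):
    """Move each prototype dot into a probabilistically chosen area."""
    exemplar = []
    for (x, y) in prototype:
        area = random.choices(range(5), weights=AREA_PROBS)[0]
        lo, hi = AREA_SHELLS[area]
        dx, dy = random_offset_in_shell(random.randint(lo, hi))
        exemplar.append((x + dx, y + dy))
    return exemplar
```

Because each dot moves at most 10 pixels in any direction, every exemplar stays within the surrounding 20 x 20 pixel grid described above.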
After the dots are repositioned in accordance with the relevant probabilities, the dot distortion is magnified threefold to occupy a 150 x 150 pixel space on the screen. As exemplars were generated at a distortion level 5, the exemplars shared an intermediate degree of family resemblance. The low-difficulty, intermediate-difficulty, and high-difficulty conditions consisted of 5-dot, 7-dot, and 9-dot distortions, respectively. The test phase consisted of 22 trials comprising the prototype, five distortion level 5 trials, five distortion level 7 trials, and 11 distortion level 7 trials of a randomly generated prototype (hereafter referred to as random dot distortions). Therefore, each test phase contained 11 distortions that belonged to the studied category and 11 distortions that did not belong to the studied category. These trial types were randomly ordered anew for each test phase. Dot count was applied to dot distortions during the study phase and test phase. So, if the study phase involved 5-dot distortions, the test phase involved 5-dot distortions.
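The test-phase composition described above can be sketched as follows (a minimal Python sketch; the trial-type labels are illustrative):

```python
import random

def test_phase_trials():
    """Build one randomly ordered 22-trial test phase: 11 trials from the
    studied category (the prototype, five Level 5 distortions, and five
    Level 7 distortions) plus 11 Level 7 distortions of a randomly
    generated prototype (random dot distortions)."""
    trials = (["prototype"]
              + ["level5_member"] * 5
              + ["level7_member"] * 5
              + ["random_nonmember"] * 11)
    random.shuffle(trials)  # trial order is randomized anew per test phase
    return trials
```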
Procedure. Participants were seated in one of three different experiment rooms and read instructions. These instructions described the task's requirements and encouraged participants to move on to the test phase when they felt prepared.
As shown in Figure 2, each study trial displayed a red dot distortion at the top middle of the screen, a cursor in the center of the screen, and a filled red circle at the bottom middle of the screen. On every trial, the dot distortion was presented alone onscreen for 2.5 s, followed by the appearance of the cursor and red circle. Once the cursor and circle appeared, participants had the option of moving the cursor either to the dot distortion or to the circle. A quarter-second pause followed all choices. Moving the cursor to the dot distortion gave participants another dot distortion to view. Moving the cursor to the circle transitioned the participants to the test phase.
[FIGURE 2 OMITTED]
Each study phase involved a new to-be-learned category consisting of Level 5 dot distortions. The study phases (5-dot, 7-dot, and 9-dot distortions) were randomized without replacement until participants completed each difficulty level. After all three difficulty levels were finished, the iteration repeated with a new randomly ordered set of three difficulty levels. This randomization-without-replacement process repeated until the end of the experiment. The overall number of studied categories was dependent on how quickly participants went through the study and test phases in the time allotted.
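The randomization-without-replacement schedule can be sketched in a few lines (a minimal sketch; the function name is ours):

```python
import random

def difficulty_schedule(n_sets):
    """Shuffle the three difficulty levels (5-, 7-, and 9-dot conditions)
    anew for each completed set of three study phases."""
    schedule = []
    for _ in range(n_sets):
        block = [5, 7, 9]
        random.shuffle(block)  # randomization without replacement
        schedule.extend(block)
    return schedule
```

Under this scheme a repeated difficulty level can occur only at the boundary between two sets (with probability 1/3 there), so same-difficulty transitions settle near 1/9 of all transitions, consistent with the roughly 11% reported for both experiments.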
During the test phase, participants decided whether new dot distortions belonged to the studied category. Each test trial displayed the to-be-categorized dot distortion at the top middle of the screen, a cursor in the center of the screen, and a large "N" at the bottom middle of the screen. To accept a dot distortion as a category member, the participant moved the cursor to the dot distortion at the top middle of the screen (the same cursor movement performed during the study phase to view additional dot distortions). To reject a dot distortion as a category member, the participant moved the cursor to the large "N" at the bottom middle of the screen (the same cursor movement performed during the study phase to enter the test phase). Correct responses received a beep to indicate that the participant was correct, and a point was added to the participant's score. Incorrect responses received a 10-s buzzing sound, and 2 points were subtracted from the participant's score. After completion of the final test phase in a cycle (22 trials), a participant began another cycle with a new category to learn. Overall, 11% of the transitions to a new study phase involved the same difficulty level (e.g., 5-5). The percentage of these types of transitions is lower because they could occur only when a new, randomized set of three cycles began. Forty-five percent of the transitions involved an increase in difficulty (e.g., 5-7), and 44% of the transitions involved a decrease in difficulty (e.g., 7-5). The experiment was programmed to run for about 45 min.
To confirm that study phase trial-to-trial changes were equivalent, the average logarithmic distances were calculated across each study phase condition of this experiment. This method has been used successfully to calculate distances among dot distortions in earlier work (e.g., Smith & Minda, 2002). Basically, this procedure compares how different each distortion is from the prototype. For each dot of each distortion, the distance or amount of displacement from the original prototype dot location is measured. Once all of these values are summed, the natural logarithm is calculated using the ln function in Turbo Pascal. The distances from the prototypes were nearly identical: 1.08, 1.09, and 1.09 for the 5-, 7-, and 9-dot distortions, respectively. This analysis confirmed that trial-to-trial variability was controlled.
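A literal Python rendering of this measure as described (sum each dot's displacement from its prototype dot, then take the natural log, the ln call in the original Turbo Pascal program) might look like the following; treating displacement as Euclidean distance is our assumption.

```python
import math

def log_distance(exemplar, prototype):
    """Sum each dot's displacement from its prototype location, then take
    the natural logarithm. Euclidean displacement is assumed here."""
    total = sum(math.hypot(x - px, y - py)
                for (x, y), (px, py) in zip(exemplar, prototype))
    return math.log(total)
```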
Study phase difficulty influenced the number of trials that participants chose to view prior to test, F(2, 824) = 5.01, p < .05, MSE = 86.96, η² = .012 (see Table 1, rows 1-3). Follow-up tests found that participants chose to view more trials in the high-difficulty condition and the intermediate-difficulty condition than in the low-difficulty condition. However, no difference was found between the intermediate- and high-difficulty conditions. These data suggest that humans exerted metacognitive control. In line with most of the previous research exploring metacognitive control, participants chose more trials when learning high-difficulty material--in this case, the categories comprising more dots.
The equality between the intermediate- and high-difficulty conditions was puzzling. Several previous studies have shown that distortion level is the main factor determining category-learning difficulty (Homa & Cultice, 1984; Posner et al., 1967), but no research was found that offered evidence that dot count influences category learning. To interpret these data, participants' perception of these distortions was measured.
Sixteen students from the University at Buffalo rated the similarity of 600 dot distortion pairs generated from the same prototype that varied in dot count and distortion level. This paradigm was modeled after that used in Smith, Redford, Gent, and Washburn (2005). Specifically, distortion levels included 0 (the prototype), 1, 3, 5, and 7, and dot count included 5-dot, 7-dot, and 9-dot distortions. Prototypes for each category were chosen at random. Participants used a scale that ranged from 1 (no difference) to 6 (large difference) and received no feedback on their responses. Analyses of the data revealed a main effect of dot count, F(2, 9597) = 19.81, p < .05, MSE = 3.33, η² = .004. Most relevant is that follow-up tests found that participants rated 5-dot distortions as more similar across distortion levels than they did 7- and 9-dot distortions. Moreover, participants rated 7- and 9-dot distortions as equally dissimilar. Put differently, participants perceived 5-dot distortions as sharing the greatest similarity. They did not perceive 9-dot distortions as being more dissimilar than 7-dot distortions. Because similarity across exemplars is related strongly to perceived learning difficulty (Homa & Cultice, 1984), these data provided converging support for the metacognitive control hypothesis. Presumably, participants used perceived difficulty to guide their study view behavior, producing the equal number of 7- and 9-dot study trials, but fewer 5-dot study trials. In contrast, had participants been guided only by novelty, they would have chosen to view an equal number of study trials across conditions, as the amount of trial-to-trial variability was equated.
Another important issue in metacognition is whether metacognitive control will translate into an improved test performance. In other words, will an increase in study trial views produce increased accuracy on the subsequent test? In this case, study phase difficulty had a marginal influence on test performance, F(2, 824) = 2.46, p < .10, MSE = 165.72, η² = .006. Follow-up tests found that only the 5-dot and 9-dot test performances were different (see Table 2, rows 1-3). In this case, participants' apparent efforts to exert metacognitive control were moderately effective in improving test performance. Because this paradigm provided potential evidence of metacognitive control by human participants, it was presented to monkeys in the next experiment.
In the previous experiment, participants exerted metacognitive control by choosing to view the same number of study trials in the 7- and 9-dot conditions, but fewer study trials in the 5-dot condition. Although monkeys did not engage in the similarity rating task described above, previous research suggests that they would be just as likely to view 7- and 9-dot distortions as equally dissimilar (Smith et al., 2008). Therefore, the same predictions are made here. If monkeys are exerting metacognitive control, they will choose to view more 7- and 9-dot distortions than 5-dot distortions. If monkeys are guided by a novelty preference, then they should view an equal number of study trials across conditions.
Participants. Two monkeys--Murph (14 years old, male) and Gale (24 years old, male)--participated in this experiment. Murph and Gale were the same monkeys that seemingly exerted metacognitive control in Redford (2010), so they came to this task with substantial experience. The monkeys were housed at the Language Research Center of Georgia State University. They were singly housed in rooms that offered constant visual and auditory access to other monkeys. They also were periodically group-housed with compatible conspecifics in outdoor-indoor housing units. Both monkeys were maintained on a healthy diet including fresh fruits, vegetables, and monkey chow daily, independent of their computer test schedule. The monkeys were not restricted in food intake for the purposes of testing. Each monkey was tested using the Language Research Center's Computerized Test System (LRC-CTS; described in Rumbaugh, Richardson, Washburn, Savage-Rumbaugh, & Hopkins, 1989; Washburn & Rumbaugh, 1992) that consists of a PC computer, a digital joystick, a color monitor, and a pellet dispenser. Monkeys manipulated the joystick through the mesh of their home cages, producing isomorphic movements of an onscreen cursor. Rewarded responses resulted in the delivery of a 94-mg fruit-flavored chow pellet (Bioserve, Frenchtown, NJ) using a Gerbrands 5120 dispenser interfaced to the computer through a relay box and output board (PIO12 and ERA01; Keithley Instruments, Cleveland, OH). Each test session was started by the facility's research coordinator or a research technician. Once initiated, monkey tests were autonomous and required no human monitoring. Murph and Gale have participated in dozens of studies associated with cognitive psychology and animal learning, including ones pertaining to spatial and working memory, numerical cognition, judgment and decision making, psychomotor control, discrimination learning, planning, concept learning, categorization, and metacognition. 
All experiments were conducted after obtaining IACUC approval.
Design and Materials. This experiment's design was identical to that of Experiment 1, and the materials were generated as described in that experiment.
Procedure. Three modifications were made to the procedures used in Experiment 1. First, instead of verbal instructions, monkeys learned the task's requirements through experience across multiple sessions. Second, rewarded responses yielded food pellets in addition to a beep sound, and errors resulted in a longer buzzing sound (20 s instead of 10 s). Last, the probability of earning pellets during the study phase slowly decreased as additional dot distortions were viewed. For the first trial of each study phase, monkeys were guaranteed a pellet for choosing to view a second dot distortion. Thereafter, for each additional selected dot distortion that produced a pellet, the probability of receiving another pellet decreased by 2% until pellet earnings were at chance levels (50%). Pellet dispersal remained at chance levels for the rest of the study phase. This declining reward rate encouraged monkeys to view variable numbers of study trials. As these probability decrements were applied equally across study conditions, the rate of pellet dispersal, per se, did not have any additional influence on a particular study phase. In other words, any differences in study trial views would not be due to differential reinforcement, as each study condition provided the same probability of reward.
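The declining reward schedule can be sketched in Python. This is a minimal sketch with our own function names; tying the decrement to pellets actually earned (rather than trials viewed) follows the wording above, "for each additional selected dot distortion that produced a pellet."

```python
import random

def pellet_probability(pellets_earned):
    """Reward probability for the next chosen study trial. The first choice
    is guaranteed (p = 1.0); each pellet earned thereafter lowers the
    probability by 2%, floored at chance (.50) for the rest of the phase."""
    return max(0.50, 1.00 - 0.02 * pellets_earned)

def run_study_phase(n_choices, rng=random.random):
    """Simulate pellet earnings across a monkey's chosen study-trial views."""
    pellets = 0
    for _ in range(n_choices):
        if rng() < pellet_probability(pellets):
            pellets += 1
    return pellets
```

Because the same schedule applies in every condition, the expected reward for choosing another study trial is identical across the 5-, 7-, and 9-dot conditions, which is what lets differential study-trial views be read as metacognitive rather than reinforcement-driven.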
The randomization procedure produced a similar distribution of transitions. For Murph, the study phase difficulty remained the same on 11% of the transitions, increased in difficulty on 43% of the transitions, and decreased in difficulty on 46% of the transitions. For Gale, the study phase difficulty remained the same on 9% of the transitions, increased in difficulty on 45% of the transitions, and decreased in difficulty on 45% of the transitions.
Analyses of the monkey study data differed from the analyses of the human participant data. Instead of performing analyses on whole sessions, each monkey's study phases were partitioned into completed sets of the three difficulty levels (low, intermediate, and high). The set-to-set variability provided the error variance to perform tests of significance. Study phase difficulty influenced the number of trials that Murph viewed prior to test, F(2, 266) = 3.22, p < .05, MSE = 39.13, η² = .024 (see Table 1, rows 4-6). Follow-up tests found that Murph viewed more 9-dot distortions than 5-dot distortions but did not view significantly more 7-dot distortions than 5-dot distortions; and he did not view significantly more 9-dot distortions than 7-dot distortions. Previous research has shown that monkeys display less perceptual sensitivity to dot distortions than humans. Specifically, Smith and his colleagues (2008) presented humans and monkeys with dot distortion pairs. The dot distortions varied in their level of similarity from identical to extremely different (i.e., each distortion was generated from a different prototype, thereby sharing little to no similarity). They found that the monkeys were less accurate at classifying dot distortions as being Same (i.e., when they were identical) or Different. By contrast, the humans had a sharper perceptual acuity for sameness and accurately classified more of both Same and Different trials. So Murph--who was also one of the monkeys that participated in Smith et al. (2008)--may have failed to view more 7-dot distortions than 5-dot distortions because of this weaker ability to perceive small differences among dot distortions. Overall, the data suggest that Murph was exerting metacognitive control as he chose to view more study trials in the most difficult study condition relative to the least difficult study condition.
Had his behavior been driven by a novelty preference, he would have viewed an equal number of study trials across conditions. Study phase difficulty also influenced Murph's test performance, F(2, 414) = 3.16, p < .05, MSE = 81.35, η² = .018. Follow-up tests found that only the 5-dot and 9-dot test performances were different (see Table 2, rows 4-6).
Study phase difficulty influenced the number of trials that Gale viewed prior to test, F(2, 244) = 6.43, p < .05, MSE = 77.74, η² = .050 (see Table 1, rows 7-9). Follow-up tests found that Gale viewed more 7-dot distortions than 5-dot distortions and more 9-dot distortions than 5-dot distortions, but Gale viewed the same number of 7-dot distortions as 9-dot distortions, which mirrored the human data. Thus, the data indicate that Gale exerted metacognitive control in this task and was not guided by a novelty preference. Unlike Murph, study phase difficulty did not influence Gale's test performance, F(2, 372) < 1.00 (see Table 2, rows 7-9). Although accurate metacognitive control may improve categorization performance for both monkeys and humans, the present paradigm did not provide clear evidence of this benefit. Future research should explore ways to use metacognitive control to improve categorization performance.
The monkeys and the human participants viewed more study trials as the (perceived) difficulty during study phases increased. Previous research confounded difficulty and novelty. This research disentangled difficulty from novelty and provided additional support for metacognitive control in both humans and monkeys.
The present research makes a strong contribution to a field already rich in theoretical progress. Past research has provided evidence of metacognitive monitoring by monkeys across numerous tasks, including sparse-dense perceptual discrimination tasks (e.g., Smith, Shields, Schull, & Washburn, 1997), memory probe tasks (e.g., Smith, Shields, Allendoerfer, & Washburn, 1998), match-to-sample tasks (e.g., Hampton, 2001), and numerosity judgment tasks (e.g., Beran, Smith, Redford, & Washburn, 2006). Subsequent research was able to disentangle metacognitive monitoring from associative learning by deferring feedback (Smith, Beran, Redford, & Washburn, 2006).
Research continued to build on this knowledge by also demonstrating that monkeys could "comment" on their confidence in their responses. Son and Kornell (2005) and Shields, Smith, Guttmannova, and Washburn (2005) did this by designing a betting paradigm. High-confidence bets resulted in larger rewards and larger penalties for correct and incorrect responses, respectively. Low-confidence bets resulted in smaller rewards and smaller penalties for correct and incorrect responses, respectively. The main finding of these studies was that monkeys made bets that were consistent with their accuracy. That is, they made large bets more often when their response was correct and small bets more often when their response was incorrect. Kornell et al. (2007) used this paradigm in a later study to demonstrate that monkeys can accurately identify their confidence levels across different tasks. Related to this, Smith and his colleagues (2006) demonstrated that monkeys can effectively use the uncertainty response in cases where the tasks (and response options) are constantly changing. These last two studies reveal that these aspects of metacognition are not linked to particular tasks.
Recently, Redford (2010) extended metacognition research by demonstrating that monkeys apparently exert metacognitive control. The present research addresses the primary limitation of Redford's study by discounting novelty as a potential explanation. In that study, difficulty was confounded with novelty (i.e., the dot distortions varied more from trial to trial in the more difficult conditions). In the present study, trial-to-trial variability was equated across conditions; yet, difficulty still influenced study trial views for both humans and monkeys.
This research confirms that monkeys are capable of greater metacognitive sophistication--they can regulate their learning. In other words, monkeys can act upon their environment to improve their level of learning. Murph and Gale seemed to use difficulty (as indexed by perceived similarity) to exert metacognitive control. They were not influenced by novelty or associative learning. However, this area remains largely unexplored, and future research needs to continue investigations into metacognitive control by nonhuman primates. The monkeys' behavior in this paradigm indicates metacognitive control, but the paradigm leaves at least two questions unresolved. First, can accurate metacognitive control produce a corresponding increase in test performance? Metacognitive control had little influence on test performance in this paradigm. As an independent variable, study trial views served well for gauging metacognitive control but seemed ill-suited for assessing whether metacognitive control had an influence on test performance. By replacing the independent variable, this paradigm may provide evidence of this relationship. For instance, metacognitive control may improve test performance if the monkeys are allowed a choice about which categories to restudy rather than choosing the number of category exemplars to view. Another possibility is allowing the monkeys to restudy a category partway into the test phase. Metacognitive control would be demonstrated (and potentially improve test performance) if the monkeys chose to restudy categories only in cases where they were doing poorly on the test items.
Another unanswered question is whether metacognitive control in this task was intimately linked to the difficulty of the study materials, or whether it could also be influenced by the test's difficulty. In other words, would monkeys view more study trials in preparation for a more difficult test? One potential method for addressing this issue would be to keep the study phase difficulty constant, but manipulate the test difficulty. Information regarding the test's difficulty could be cued by the color of the stimulus (e.g., red dot distortions preceding a difficult test and green dot distortions preceding an easy test). The present data suggest that monkeys are capable of exerting metacognitive control based on the difficulty of the study materials, which is akin to the student who puts more effort into studying notes for a difficult course than he or she does studying notes for an easy course. If monkeys behave differently at study based on the impending test's difficulty, this would be akin to a student studying harder for an essay exam than for a multiple-choice exam.
Another area of interest is whether monkeys can exert metacognitive control in other domains. So, is their metacognitive behavior tied to categorization, or could they use metacognitive control adaptively in other tasks (e.g., sequence learning) without direct reinforcement for those particular responses? Regardless of which aspects of metacognition are explored in future research with monkeys, a new bar has been set by this and other recent research. At a minimum, metacognitive behaviors need to be independent of rewards (or equivalent in their rewards to nonmetacognitive behavioral responses). Hopefully, research will continue to push the boundaries of what we know about metacognitive abilities in other species.
BERAN, M. J., SMITH, J. D., REDFORD, J. S., & WASHBURN, D. A. (2006). Rhesus macaques (Macaca mulatta) monitor uncertainty during numerosity judgments. Journal of Experimental Psychology: Animal Behavior Processes, 32, 111-119.
DUNLOSKY, J., & NELSON, T. O. (1992). Importance of the kind of cue for judgments of learning (JOL) and the delayed-JOL effect. Memory & Cognition, 20, 374-380.
FANTZ, R. (1958). Pattern vision in young infants. The Psychological Record, 8, 43-47.
GOLUB, M. S., & GERMANN, S. L. (1998). Perinatal bupivacaine and infant behavior in rhesus monkeys. Neurotoxicology and Teratology, 20, 29-41.
HAMPTON, R. R. (2001). Rhesus monkeys know when they remember. Proceedings of the National Academy of Sciences of the United States of America, 98, 5359-5362.
HOMA, D., & CULTICE, J. C. (1984). Role of feedback, category size, and stimulus distortion on the acquisition and utilization of ill-defined categories. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 83-94.
KORIAT, A., & BJORK, R. A. (2005). Illusions of competence in monitoring one's knowledge during study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 187-194.
KORIAT, A., & MA'AYAN, H. (2005). The effects of encoding fluency and retrieval fluency on judgments of learning. Journal of Memory and Language, 52, 478-492.
KORNELL, N., SON, L. K., & TERRACE, H. S. (2007). Transfer of metacognitive skills and hint seeking in monkeys. Psychological Science, 18, 64-71.
NELSON, T. O., & NARENS, L. (1990). Metamemory: A theoretical framework and new findings. In G. H. Bower (Ed.), The psychology of learning and motivation (pp. 125-141). New York: Academic Press.
POSNER, M. I., GOLDSMITH, R., & WELTON, K. E., JR. (1967). Perceived distance and the classification of distorted patterns. Journal of Experimental Psychology, 73, 28-38.
QUINN, P., & EIMAS, P. (1986). Pattern-line effects and units of visual processing in infants. Infant Behavior and Development, 9, 57-70.
REDFORD, J. S. (2010). Evidence of metacognitive control by humans and monkeys in a perceptual categorization task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36, 248-254.
RIETH, C., & SIRETEANU, R. (1994). Texture segmentation and visual search based on orientation contrast: An infant study with the familiarization/novelty preference method. Infant Behavior and Development, 17, 359-369.
RUMBAUGH, D. M., RICHARDSON, W. K., WASHBURN, D. A., SAVAGE-RUMBAUGH, E. S., & HOPKINS, W. D. (1989). Rhesus monkeys (Macaca mulatta), video tasks, and implications for stimulus-response spatial contiguity. Journal of Comparative Psychology, 103, 32-38.
SAMUELS, C. A., & EWY, R. (1985). Aesthetic perception of faces during infancy. British Journal of Developmental Psychology, 3, 221-228.
SERRA, M. J., & DUNLOSKY, J. (2005). Does retrieval fluency contribute to the underconfidence-with-practice effect? Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 1258-1266.
SHIELDS, W. E., SMITH, J. D., GUTTMANNOVA, K., & WASHBURN, D. A. (2005). Confidence judgments by humans and rhesus monkeys. Journal of General Psychology, 132, 165-186.
SMITH, J. D., BERAN, M. J., REDFORD, J. S., & WASHBURN, D. A. (2006). Dissociating uncertainty responses and reinforcement signals in the comparative study of uncertainty monitoring. Journal of Experimental Psychology: General, 135, 282-297.
SMITH, J. D., & MINDA, J. P. (2002). Distinguishing prototype-based and exemplar-based processes in dot-pattern category learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 800-811.
SMITH, J. D., REDFORD, J. S., GENT, L. C., & WASHBURN, D. A. (2005). Visual search and the collapse of categorization. Journal of Experimental Psychology: General, 134, 443-460.
SMITH, J. D., REDFORD, J. S., HAAS, S. M., COUTINHO, M. V. C., & COUCHMAN, J. J. (2008). The comparative psychology of same-different judgments by humans (Homo sapiens) and monkeys (Macaca mulatta). Journal of Experimental Psychology: Animal Behavior Processes, 34, 361-374.
SMITH, J. D., SHIELDS, W. E., ALLENDOERFER, K. R., & WASHBURN, D. A. (1998). Memory monitoring by animals and humans. Journal of Experimental Psychology: General, 127, 227-250.
SMITH, J. D., SHIELDS, W. E., SCHULL, J., & WASHBURN, D. A. (1997). The uncertain response in humans and animals. Cognition, 62, 75-97.
SON, L. K., & KORNELL, N. (2005). Metaconfidence judgments in rhesus macaques: Explicit versus implicit mechanisms. In H. S. Terrace & J. Metcalfe (Eds.), The missing link in cognition: Origins of self-reflective consciousness (pp. 296-320). New York: Oxford University Press.
THIEDE, K. W. (1999). The importance of monitoring and self-regulation during multitrial learning. Psychonomic Bulletin & Review, 6, 662-667.
THIEDE, K. W., ANDERSON, M. C. M., & THERRIAULT, D. (2003). Accuracy of metacognitive monitoring affects learning of texts. Journal of Educational Psychology, 95, 66-73.
THOMPSON, L. A., & PETRILL, S. A. (1994). Longitudinal predictions of school-age cognitive abilities from infant novelty preference. In J. C. Defries, R. Plomin, & D. W. Plomin (Eds.), Nature and nurture during middle childhood (pp. 77-85). Malden, MA: Blackwell Publishing.
WASHBURN, D. A., & RUMBAUGH, D. M. (1992). Testing primates with joystick-based automated apparatus: Lessons from the Language Research Center's Computerized Test System. Behavior Research Methods, Instruments & Computers, 24, 157-164.
Preparation of this article was supported by Grant HD-38051 from the National Institute of Child Health and Human Development and by Grant BCS-0634662 from the National Science Foundation. The opinions expressed are those of the author and do not represent the views of either funding body.
The author would like to thank Keith Thiede for his help in the preparation of this article.
Correspondence concerning this article should be addressed to Joshua S. Redford, Center for School Improvement and Policy Studies, Boise State University, 1910 University Dr., Boise, ID 83725 (e-mail: JoshRedford@boisestate.edu).
Joshua S. Redford
University at Buffalo, The State University of New York
Table 1
Mean Study Trial Views by Experiment Group

Experiment condition        Mean study trials viewed by condition *
Humans
  5-dot distortions         11.85 (1.08)
  7-dot distortions         13.81 (1.19)
  9-dot distortions         14.87 (1.42)
Murph
  5-dot distortions          8.29 (0.51)
  7-dot distortions          9.62 (0.73)
  9-dot distortions         10.18 (0.80)
Gale
  5-dot distortions         11.74 (0.85)
  7-dot distortions         15.36 (1.23)
  9-dot distortions         15.09 (0.95)

Note. * Standard errors of the means presented in parentheses.
Table 2
Test Performance by Experiment Group

Experiment condition        Percentage correct *
Humans
  5-dot distortions         75.5 (0.97)
  7-dot distortions         77.4 (1.08)
  9-dot distortions         77.8 (1.04)
Murph
  5-dot distortions         83.2 (0.83)
  7-dot distortions         83.0 (0.80)
  9-dot distortions         85.6 (0.71)
Gale
  5-dot distortions         72.3 (1.13)
  7-dot distortions         73.6 (1.27)
  9-dot distortions         73.4 (1.09)

Note. * Standard errors of the means presented in parentheses.