The School Counseling Program Implementation Survey: Initial Instrument Development and Exploratory Factor Analysis
Clemens, Elysia V.
Carey, John C.
Harrington, Karen M.
Publication: Professional School Counseling, American School Counselor Association (ISSN 1096-2409)
Issue: December 2010, Volume 14, Issue 2
This article details the initial development of the School
Counseling Program Implementation Survey and psychometric results
including reliability and factor structure. An exploratory factor
analysis revealed a three-factor model that accounted for 54% of the
variance of the intercorrelation matrix and a two-factor model that
accounted for 47% of the variance. Cronbach's alpha reliability
estimates for subscales ranged from .79 to .87. Subscales of the
instrument may be used to assess ASCA National Model program
implementation, programmatic orientation, school counseling services,
and school counselors' use of computer software.
The organization of school counseling activities within schools is a matter of professional concern (Gysbers, 2004). Leading scholars consistently maintain that organizing school counseling activities as a program within schools (as opposed to a set of ancillary support services) results in both enhanced student outcomes and greater legitimacy (e.g., Dahir, 2001; Gysbers, 1990; Gysbers & Henderson, 2006; Johnson & Johnson, 2003; Lapan, 2001). More recently, the American School Counselor Association (ASCA, 2005) has developed a national model for the organization of school counseling programs. ASCA has embarked on an ambitious initiative to encourage and facilitate the development and implementation of model programs nationwide in the hopes that such programs will enhance the effectiveness and centrality of school counseling within schools.
In spite of the perceived importance of and the resources devoted to organizing school counseling as a program, relatively little research has been conducted to investigate either the student outcomes or the political advantages associated with a strong programmatic structure (Brown & Trusty, 2005; Gysbers, 2004). Such research has been hampered, in part, by a lack of instruments to measure variability in programmatic characteristics and outcomes (Whiston, 2002; Whiston & Aricak, 2008). Without quantitative, empirically-based instruments measuring program characteristics, comparisons across studies are difficult.
Program-level studies have typically used instruments constructed for a specific research project. Lapan and colleagues used an aspect of the Missouri Department of Elementary and Secondary Education's school district evaluation process to determine, based on teacher report, the presence of school counseling activities that are consistent with four components of Missouri's model for comprehensive guidance programs (Lapan, Gysbers, & Sun, 1997; Lapan, Gysbers, & Petroski, 2001). Sink and Stroh (2003) developed a telephone survey, which served in part to classify Washington state schools into categories based on comprehensive school counseling program (CSCP) implementation. The Utah Department of Education has conducted a series of studies evaluating the impact of CSCPs within the state, utilizing an instrument that was developed and has been subsequently modified for the purposes of evaluating programs in Utah (Nelson, Fox, Haslam, & Gardner, 2007). Although each of these measures has been used by research teams multiple times within a given state, making comparisons among studies across states is difficult because CSCP implementation was measured differently.
Recently, Whiston and Aricak (2008) and Scarborough (2005) developed quantitative, empirically-based scales related to program characteristics. The School Counseling Program Evaluation Survey (Whiston & Aricak, 2008) is based on the National Standards for School Counseling Programs (Campbell & Dahir, 1997) and measures student outcomes. This instrument is a student-report measure and should help researchers gather information about program outcomes that are anchored to professional standards and comparable across studies. Scarborough (2005) developed the School Counselor Activity Rating Scale (SCARS) to assist school counselors in gathering process data related to how they spend time and how they would prefer to spend their time. Scarborough indicated that the SCARS could be used by school counselors to assess the amount of time they are spending on interventions and activities associated with implementing a CSCP.
Sink and Spencer (2005, 2007) have contributed to the literature in a related area of instrument development through examining the reliability and validity of student and teacher versions of the My Class Inventory as a measure of classroom climate in elementary schools. Sink and Spencer (2007) incorporated items in the teacher version of the My Class Inventory that measure the extent to which elementary school teachers believe school counselors had a beneficial impact on the climate in their classroom. The researchers suggested that both the teacher and student versions can be useful measures of the impact of school counseling programs and activities on important dimensions of classroom climate that are known to promote learning.
Empirically-based instruments that measure variability in school counseling program characteristics are still needed. Much of the research evaluating school counseling programs has been conducted with instruments developed to measure characteristics associated with Gysberian-type comprehensive school counseling programs rather than the characteristics of the newer ASCA National Model programs (e.g., Lapan et al., 1997, 2001; Nelson, Gardner, & Fox, 1998). Although the Gysberian-type model was used in large part as a foundation for developing the ASCA National Model, the standard of practice for program implementation has become the ASCA National Model (Dahir, Burnham, & Stone, 2009; Sink, Akos, Turnbull, & Mvududu, 2008). For research to progress, empirically based approaches to measuring variability among programs based on ASCA National Model-related characteristics are important.
The School Counseling Program Implementation Survey (SCPIS) was developed as a measure of the extent to which a school counseling program has implemented an ASCA National Model program, based upon the existence of essential characteristics of such programs. The purpose of this article is to present the initial steps in the development of the SCPIS and the results of a study investigating its reliability and factor structure. It was hypothesized that this measure would reflect a single construct: ASCA National Model program implementation. The authors, however, recognize that program implementation may be multi-faceted and as such, an exploratory factor analysis was used to uncover the constructs that underlie the items.
Initial Scale Development
The SCPIS was originally developed by Eisner and Carey (2005) at the Center for School Counseling Outcome Research at the University of Massachusetts Amherst to facilitate research on ASCA National Model programs and to enable school counselors to identify which aspects of an ASCA National Model school counseling program were either in place or missing at their schools. Twenty-five items were developed from an extensive review of the literature that identified characteristics of the ASCA National Model and related comprehensive developmental school counseling programs (e.g., ASCA, 2003, 2004; Gysbers, 1990). An effort was made to write each of these items to reflect concrete, observable program characteristics (e.g., "The program has a set of clear measurable student learning objectives and goals."). These items were reviewed independently by five experienced, district-level school counseling directors familiar with the ASCA National Model and with comprehensive developmental guidance programs. Reviewers were asked to indicate whether each item reflected an important characteristic of an ASCA National Model program and if any important characteristics were missing. They were also asked to identify any potential problems with the wording of the items and to suggest alternative wordings. Based on this feedback, all items were maintained; however, several were rewritten for greater clarity.
These 23 items were assembled into a scale with the following directions: "Please rate each statement below in terms of the degree to which it is currently implemented in your school's school counseling program. Circle your response using the following rating scale: 1 = Not Present; 2 = Development in Progress; 3 = Partly Implemented; 4 = Fully Implemented." Sixty school counselors, who were participants in a state school counseling association conference session on the ASCA National Model, agreed to fill out the survey in reference to their programs. Internal consistency reliability analyses were conducted on these data, and five items with low correlations to the total scale were dropped, resulting in a total of 20 items. The Cronbach's alpha internal consistency reliability estimate for the remaining items was .81. The next step in developing the SCPIS was to explore the factor structure.
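Internal consistency estimates like those reported above can be computed directly from an item-response matrix. The sketch below is illustrative only (it is not the study's data or code); it computes Cronbach's alpha for a respondents-by-items matrix of ratings:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

With perfectly redundant items alpha reaches 1.0; items weakly correlated with the rest of the scale pull alpha down, which is the logic behind dropping the five low-correlation items described above.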
Exploratory Factor Analysis
An initial exploratory factor analysis of the 20 items on the School Counseling Program Implementation Survey was conducted using the principal axis factor method and oblique rotation. This data analytic method was selected because it is not reliant on multivariate normal data and it allows factors to correlate (Costello & Osborne, 2005). Allowing the factors to correlate is important because behaviors are likely to be connected (Costello & Osborne, 2005).
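The extraction method named above can be sketched in a few lines of linear algebra. The following minimal implementation is an illustration under stated assumptions, not the authors' analysis; it iterates communality estimates on the reduced correlation matrix and returns unrotated loadings (the oblique rotation step, e.g., oblimin, is omitted):

```python
import numpy as np

def principal_axis_factoring(R: np.ndarray, n_factors: int,
                             n_iter: int = 100) -> np.ndarray:
    """Iterated principal axis factoring on a correlation matrix R.

    Returns the unrotated (items x factors) loading matrix.  An oblique
    rotation would normally be applied to these loadings afterwards.
    """
    R = np.asarray(R, dtype=float)
    # Initial communality estimates: squared multiple correlations.
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        reduced = R.copy()
        np.fill_diagonal(reduced, h2)            # reduced correlation matrix
        vals, vecs = np.linalg.eigh(reduced)     # eigenvalues in ascending order
        top = np.argsort(vals)[::-1][:n_factors]
        loadings = vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))
        h2_new = (loadings ** 2).sum(axis=1)     # updated communalities
        if np.allclose(h2, h2_new, atol=1e-8):
            break
        h2 = h2_new
    return loadings
```

Because the diagonal holds communalities rather than ones, the method analyzes only shared variance, which is why it tolerates the non-normal item distributions noted later in this section.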
Sample. The instrument was administered as part of two larger studies: a multi-state study assessing outcomes related to principal-school counselor relationships (Clemens, Milsom, & Cashwell, 2009), and a state-wide evaluation conducted by the Center for School Counseling Outcome Research at the University of Massachusetts Amherst. The sampling strategy and response rates are described separately. The statistics describing the participants reflect a collapsing of the two samples into one data set for analysis. Redundancy in the data (i.e., a participant's response being captured by both samples) is unlikely because the two studies sampled different states.
Data collection one. Participants were selected using cluster sampling. Specifically, 23 school districts were randomly selected from three southeastern states. All school counselors listed on individual school building Web sites as being employed in each of these districts were invited to participate in the study. A total of 637 school counselors were invited by e-mail to participate in the study; however, 57 of the e-mails were undeliverable or flagged as spam. As such, the true sampling frame was 580 school counselors. The response rate was 34.66% (n = 201).
Data collection two. The second data set was gathered as part of a state-wide evaluation project that occurred in a midwestern state. An e-mail invitation from the state's school counseling association was sent to a member (the director of the program when identifiable) of each public high school's counseling program, requesting participation in an online survey to examine if more fully implemented counseling programs and school counselor activities are associated with stronger positive student outcomes.
Two reminder e-mails were sent to counselors if they did not respond to the initial e-mail invitation. A total of 322 school counselors were invited to participate in the survey; 136 school counselors completed the entire survey, yielding a response rate of 42.24%.
Combined sample. The combined sample used in subsequent analyses consisted of 341 school counselors. Participants reported working at the elementary school level (n = 86, 25.22%), middle/junior high level (n = 52, 15.25%), high school level (n = 190, 55.72%), K-12 setting (n = 5, 1.47%), and other (n = 8, 2.35%). Respondents who endorsed "other" for level indicated that they either worked at a K-8 school or a K-2 primary school.
Demographic information beyond the level at which respondents practice was available only for the first data collection (n = 201). The respondents from data collection one were primarily female (84.58%, n = 170). The majority of respondents described themselves as Caucasian (83.08%, n = 167); 13.93% (n = 28) described themselves as African American/Black. One respondent (0.50%) endorsed each of the following categories: Asian American/Pacific Islander, Hispanic/Latino/a, and Multiethnic/Multiracial. Participants ranged in age from 23 years to 73 years (M = 42.76, SD = 11.64). One hundred fifty-five (77.11%) of the respondents described the geographic location of their school as urban or suburban, whereas 39 (19.40%) endorsed the rural description of school location.
The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was .89, and Bartlett's test of sphericity was significant (p < .001). The KMO and Bartlett's test statistics indicated that the data were suitable for factor analytic procedures (Tinsley & Tinsley, 1987).
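Both screening statistics can be computed from the item correlation matrix. The sketch below is a hedged illustration (not the study's code): KMO compares observed correlations to partial (anti-image) correlations, and Bartlett's test converts the determinant of the correlation matrix into a chi-square statistic:

```python
import numpy as np

def kmo(R: np.ndarray) -> float:
    """Kaiser-Meyer-Olkin measure of sampling adequacy for correlation matrix R."""
    inv = np.linalg.inv(R)
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale                 # partial (anti-image) correlations
    np.fill_diagonal(partial, 0.0)
    off = R - np.eye(len(R))               # zero-diagonal copy of R
    r2, p2 = (off ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + p2)

def bartlett_sphericity(R: np.ndarray, n: int):
    """Bartlett's test of sphericity: chi-square statistic and df for sample size n."""
    p = len(R)
    chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df
```

A KMO near .9, as reported above, indicates that partial correlations are small relative to zero-order correlations, so the items share enough common variance to factor.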
Item analysis revealed some departure from normality. The factor analysis method used in the current study, principal axis factoring, is not reliant on normal data (Costello & Osborne, 2005). The range of responses for all items was 1 to 4. Mean scores ranged from 2.44 to 3.38. Nineteen of the 20 items were negatively skewed (skewness range = -1.32 to .05, SE = .132). Kurtosis statistics ranged from -1.45 to .78, with a standard error of .26. Complete descriptive statistics for the items are presented in Table 1.
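The skewness and kurtosis statistics reported above are standard moment estimators; note that the reported standard errors are consistent with the usual large-sample approximations sqrt(6/n) ≈ .133 and sqrt(24/n) ≈ .265 for n = 341. A small illustrative helper (not the study's code):

```python
def skew_kurtosis(xs):
    """Moment-based sample skewness and excess kurtosis (both 0 for a normal curve)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n   # third central moment
    m4 = sum((x - mean) ** 4 for x in xs) / n   # fourth central moment
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0
```

Negative skew, as seen for 19 of the 20 items, means responses pile up at the high (Fully Implemented) end of the 1-4 scale.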
Decision to Retain Factors
The decision to retain factors was initially guided by visually inspecting the scree plot, considering eigenvalues, and balancing parsimony and plausibility. Visual inspection of the scree plot suggested that two factors lay to the left of the scree (i.e., appeared to be real); however, four factors had eigenvalues greater than one. The break in the scree plot may have been difficult to interpret because multiple data points clustered near the bend (Costello & Osborne, 2005). Although researchers have indicated that eigenvalues may not be the most accurate indicator of the number of factors (Costello & Osborne, 2005; Velicer & Jackson, 1990), methodologists have suggested that underfactoring is more problematic than overfactoring (Wood, Tataryn, & Gorsuch, 1996); thus, there was a need to arrive at a factor solution that balances plausibility and parsimony (Fabrigar, Wegener, MacCallum, & Strahan, 1999). Methodologists (e.g., Costello & Osborne, 2005; Fabrigar et al., 1999) have indicated that when the number of factors to retain is unclear, conducting a series of analyses is appropriate. Thus, a four-factor model was evaluated first, followed by more parsimonious models, to determine which model might best explain the data compared to a less complex model. Models were evaluated based upon the amount of variance explained and interpretability (Fabrigar et al., 1999), and upon cleanest structure, defined as the fewest cross-loading items and item loadings above .32 (Costello & Osborne, 2005). Two items were dropped from all models because the communalities for those items were consistently less than .30. Therefore, each model was initially run with 18 items.
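The eigenvalue-based retention heuristics described above (the eigenvalues-greater-than-one rule and cumulative variance explained) can be sketched as follows; this is an illustrative helper, not the authors' analysis script:

```python
import numpy as np

def eigen_screen(R: np.ndarray):
    """Descending eigenvalues of a correlation matrix, the Kaiser count
    (eigenvalues > 1), and the cumulative proportion of variance explained."""
    vals = np.sort(np.linalg.eigvalsh(R))[::-1]
    kaiser = int((vals > 1.0).sum())
    cum_var = np.cumsum(vals) / len(R)      # trace of R equals the item count
    return vals, kaiser, cum_var
```

Plotting `vals` against factor number reproduces the scree plot; the ambiguity described above arises when several eigenvalues cluster near the bend.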
A four-factor solution explained 55% of the variance of the intercorrelation matrix. Eigenvalues for the first four factors were 6.70, 2.00, 1.28, and 1.05. The four-factor model did not prove to be a viable solution because only two items loaded on the fourth factor, and one of those items cross-loaded on the third factor (loadings of -.38 on factor 3 and .37 on factor 4). Fabrigar et al. (1999) indicated that a minimum of three variables with loadings of .5 or greater is necessary for a stable factor. Thus, retaining a fourth factor was not supported.
Three-factor model (18 items). The three-factor solution accounted for 53% of the variance of the intercorrelation matrix. Factor one had an eigenvalue of 6.35 and accounted for 35% of the variance; factor two had an eigenvalue of 1.86 and accounted for an additional 10% of the variance; and factor three had an eigenvalue of 1.27 and explained an additional 7% of the variance of the intercorrelation matrix. One item, "The school counseling program has the resources to allow counselors to complete appropriate professional development activities," cross-loaded on two factors and was placed on factor three because it loaded most heavily on that factor and fit conceptually. The first factor, programmatic orientation, included seven items with loadings that ranged from .33 to .78. The items imply a level of intentionality that occurs only when school counselors are administering a proactive program rather than delivering a set of reactive services. Sample items from the programmatic orientation factor include "The program operates from a plan for closing the achievement gap for minority and lower income students" and "Needs assessments are completed regularly and guide program planning." The Cronbach's alpha reliability coefficient was .79. The second factor was difficult to label: three items were closely related conceptually, as they all began with the sentence stem "School counselors use computer software to..." and loaded heavily on the factor (.69 to .88), but the item "The program ensures that all students have academic plans that include testing, individual advisement, long-term planning, and placement" appeared unrelated conceptually, and its loading was much lower (.32). The Cronbach's alpha reliability coefficient for this factor was .78. The third factor, school counseling services, consisted of seven items with factor loadings that ranged from .33 to .70, and the Cronbach's alpha reliability coefficient was .81.
The label was selected because the items relate to how school counseling services are provided within the program, such as "School counselors spend at least 80% of their time in activities that directly benefit students" and "Services are organized so that all students are well served and have access to them." Correlations among these factors were moderate (r = .35 between factors 1 and 2, r = .45 between factors 1 and 3, r = .36 between factors 2 and 3).
Three-factor model (17 items). A second three-factor model was explored after dropping the item that was problematic on factor two. All extraction communalities were in the moderate to high range (.44 to .78). The analysis was re-run with the remaining 17 items. The data were appropriate for factor analysis: the Kaiser-Meyer-Olkin measure of sampling adequacy was .89, and Bartlett's test was significant (p < .001). The model explained 54% of the variance; the eigenvalues were 6.01, 1.82, and 1.27; and an interpretable factor structure emerged (see Table 2). The same seven items loaded on the first factor, programmatic orientation, which explained 36% of the variance (Cronbach's alpha = .79, M = 18.64, SD = 5.07). The same seven items loaded on the third factor, school counseling services, which explained 7% of the variance (Cronbach's alpha = .81, M = 20.93, SD = 4.65). The difference between the three-factor models is that the second factor consisted only of the three items that loaded substantially on that factor and fit together conceptually (.68 to .88). The second factor was labeled school counselors' use of computer software; it explained 11% of the variance of the intercorrelation matrix (M = 8.75, SD = 2.84), and its Cronbach's alpha reliability estimate was .83, a modest increase in reliability from the previous three-factor model. Correlations among these factors were moderate (r = .27 between factors 1 and 2, r = .45 between factors 1 and 3, r = .28 between factors 2 and 3).
Two-factor model (17 items). Item number seven also proved to be problematic in the two-factor solution, loading equally on both factors (.31 and -.31). Therefore, the item was dropped from the two-factor model as well, and the data were re-analyzed with 17 items.
Extraction communalities ranged from .32 to .77. A two-factor solution accounted for 47% of the variance of the intercorrelation matrix. The eigenvalues were 6.01 and 1.81. Fourteen of the items loaded on one factor, with a Cronbach's alpha reliability estimate of .87 (see Table 3). This factor was labeled ASCA National Model program implementation (M = 39.56, SD = 8.84). The three remaining items loaded most heavily on factor two; these are the same items that comprised factor two in the interpretable three-factor model, so they were labeled school counselors' use of computer software (Cronbach's alpha = .83, M = 8.75, SD = 2.84). Factor 1 explained 36% of the variance, and factor 2 explained an additional 11% of the variance of the intercorrelation matrix. The correlation between the factors was moderate (r = .40).
The results of the exploratory factor analysis revealed that the instrument has a more complex factor structure than hypothesized. The three- and two-factor models utilizing 17 items appear to be the most appropriate. The four-factor model was problematic because items cross-loaded on multiple factors and the fourth factor consisted of only two items. Fabrigar et al. (1999) indicated that three items with moderate loadings (.5) are the minimum for a factor to be stable; the four-factor model did not meet this criterion. Although the three- and two-factor models have limitations, both meet Fabrigar et al.'s criteria for potentially stable subscales and warrant further investigation.
Advantages of the three-factor model are that it allows researchers to capture more precise aspects of ASCA National Model program implementation and that it explains more variance of the intercorrelation matrix than the two-factor model. The three-factor model separates the programmatic orientation aspect of implementing a program (factor one) from service delivery (factor three). Factor two comprises items focused on school counselors' use of software to manage student data and the use of data for school improvement. The reliability results indicate that the internal consistency of these factors is solid for research purposes, with Cronbach's alphas ranging from .79 to .83, using Heppner, Wampold, and Kivlighan's (2008) standards for research. All factors meet Fabrigar et al.'s (1999) criterion for a stable subscale: a minimum of three items that load at .5 or greater.
Alternatively, the two-factor model allows researchers to assess program implementation holistically with the first factor or subscale. Whiston (2002) emphasized the importance of considering outcomes of comprehensive school counseling programs; the two-factor model gives researchers a single implementation score that can be related to such programmatic outcomes.
The two-factor model offers a cleaner structure than the three-factor model because no items cross-load above .32. Like the three-factor model, its factors are interpretable. The first factor has particularly good face validity because it assesses what the instrument as a whole appears to measure. The second factor in the two-factor model is identical to the second factor in the three-factor model. The reliability estimates (Cronbach's alphas of .83 and .87) are appropriate for research.
Limitations of the two-factor model compared to the three-factor model are its communalities and the amount of variance explained. Some communalities were low (i.e., < .40), suggesting the possibility of a more complex factor structure (Costello & Osborne, 2005). The two-factor model explained 47% of the total variance, which is 7% less than the more complex three-factor model. As such, some of the richness of the data is lost by reducing to a two-factor solution.
Although the amount of variance explained by either the three- or the two-factor solution is a substantial psychometric limitation (54% and 47%, respectively), considering this limitation in the context of other measures related to school counseling programs is important. Much of the research on school counseling programs to date has been conducted using instruments that were designed by the researchers for a specific research study (e.g., Sink & Stroh, 2003; Sink et al., 2008) or are connected to a particular state's method of evaluation (e.g., Lapan, Gysbers, & Petroski, 2001; Nelson et al., 2007). The dimensionality of such instruments is inconsistently reported in the literature. The psychometrics of the SCPIS, however, can be compared to the School Counselor Activity Rating Scale (SCARS), as both are school counselor-report measures and the author of the SCARS also used an exploratory factor analysis as part of evaluating the instrument. Scarborough (2005) reported accounting for 47% of the variance in 40 items with a four-factor solution. The amount of variance explained by the factor structure of the SCARS is comparable to that of the SCPIS. Thus, despite the relatively small amount of variance explained by the factor solutions presented, the initial steps in instrument development indicate preliminary evidence of psychometric suitability.
USING THE SCPIS IN RESEARCH
The SCPIS is presented in Appendix A. Depending on the model employed, three or two scores may be calculated from this instrument. Under the three-factor model, researchers may calculate programmatic orientation by summing items 1, 3, 4, 5, 9, 10, and 14. School counselors' use of computer software consists of totaling items 15, 16, and 17. School counseling services is the sum of items 2, 11, 12, 13, 18, 19, and 20. Higher scores indicate more fully implemented aspects of an ASCA National Model program. Researchers who choose to employ the three-factor model may be able to differentiate between schools that have the proactive, programmatic aspects of implementing an ASCA National Model program in place but whose actual services may not fully reflect the model, or vice versa. The school counselors' use of computer software subscale may be used to measure the degree to which school counselors are using computer software and data to inform their programs.
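The scoring rules above can be expressed as a small helper. The item keys mirror the three-factor model described in this section; the function itself is an illustration, not part of the published instrument:

```python
# Item keys taken from the three-factor model; the helper is illustrative.
SUBSCALES = {
    "programmatic_orientation": [1, 3, 4, 5, 9, 10, 14],
    "computer_software_use": [15, 16, 17],
    "school_counseling_services": [2, 11, 12, 13, 18, 19, 20],
}

def score_scpis(responses: dict) -> dict:
    """responses maps item number (1-20) to a 1-4 rating; returns subscale sums."""
    return {name: sum(responses[item] for item in items)
            for name, items in SUBSCALES.items()}
```

Because every item is rated 1 to 4, the seven-item subscales range from 7 to 28 and the three-item software subscale ranges from 3 to 12.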
Under the two-factor model, the same three items associated with the school counselors' use of computer software subscale are summed, and the remaining 14 items comprise the ASCA National Model program implementation subscale. That subscale allows researchers to assess school counselors' perceptions of how fully their program reflects the characteristics of an ASCA National Model program.
DIRECTIONS FOR FUTURE RESEARCH
The SCPIS can prove to be a valuable tool in research on the relationships between school counseling program characteristics and programmatic outcomes. Further evaluation of the SCPIS is important because a validated common instrument, unlike one-off researcher-constructed surveys, would allow the results of studies to be compared across contexts.
Subsequent confirmatory factor analyses are needed to confirm the factor structure and to determine whether the three- or two-factor model is preferable. Further, researchers may consider whether the structure is constant across contexts (e.g., is the same for school counseling programs in urban, suburban, and rural settings) and respondents (e.g., principal-report and school counselor-report). Similarly, subsequent validity studies and subscale norms are needed to establish the meaning of the scaled scores. For example, knowing the minimum score of the implementation subscale that reflects an adequate level of ASCA National Model implementation would be helpful. Additionally, researchers could consider developing a related measure that assesses school counselors' perceptions related to ASCA National Model program implementation.
Use of the SCPIS with other quantitative, empirically-based measures of program outcomes would be beneficial in studying the relationship between school counseling program characteristics and the program's desired outcomes. For example, a study using the SCPIS and the My Class Inventory (Sink & Spencer, 2005, 2007) could examine whether implementation of the ASCA National Model resulted in increases in positive classroom climates in elementary schools. Similarly, a study using the SCPIS and the standardized ASCA National Standards instrument developed by Whiston and Aricak (2008) could examine whether implementation of the ASCA National Model was related to student growth on dimensions identified by the profession as being most salient.
IMPLICATIONS FOR PRACTICE
Although the primary purpose of developing SCPIS was to offer researchers a means of gathering program-related data that could be compared across studies, opportunities also exist for the SCPIS to be used in practice. For example, the measure may be used by individual school counselors or by districts to evaluate programs. School counselors might use the SCPIS as a means of auditing their program and identifying opportunities for further program development.
For example, the total ASCA National Model program implementation score can serve as a baseline measure to assess progress toward comprehensive school counseling program implementation over time. Similarly, school district leaders might request that all school counseling departments complete the SCPIS to assess how well the district as a whole is implementing programs consistent with the ASCA National Model.
The school counselors' use of computer software subscale allows researchers and school districts to gain insight into how school counselors are using technology and data to support their programs. A strength of this subscale is that the items do not specify the type of software (e.g., Excel), so researchers, administrators, and district leaders can use it regardless of the software packages available in a particular school. Knowledge of school counselors' use of computer software may be important because it could help identify opportunities for professional development and improved data management.
Researchers have engaged in a series of pilot studies and a larger study to develop the School Counseling Program Implementation Survey. In doing so, the items have been refined, internal consistency demonstrated, and factor structure explored. The larger study does not offer much insight into the demographic composition of the sample; therefore, ascertaining representativeness of the sample is difficult. Data are not yet available to evaluate aspects of instrument reliability beyond internal consistency (e.g., test-retest). Although steps were taken during the initial item development to ensure good face validity (e.g., expert review of items), other aspects of validity, such as divergent and convergent validity, have not been assessed. Similarly, the factor structure has not been confirmed and two models are presented in the current manuscript for further exploration. Thus, the authors have demonstrated preliminary support for utilizing the SCPIS.
As the profession of school counseling continues to work toward answering important accountability questions, researchers must be able to measure school counseling programmatic characteristics with confidence. The goal of developing the SCPIS was to offer researchers a means of measuring characteristics of school counseling programs. The initial evaluation of the psychometric properties of the SCPIS revealed that this measure has solid internal consistency and an interpretable factor structure. Although the findings presented in this manuscript are promising, researchers should continue to use the measure with caution.
American School Counselor Association (2004). The ASCA national model workbook. Alexandria, VA: Author.
American School Counselor Association (2005). The ASCA national model: A framework for school counseling programs (2nd ed.). Alexandria, VA: Author.
Brown, D., & Trusty, J. (2005). School counselors, comprehensive school counseling programs, and academic achievement: Are school counselors promising more than they can deliver? Professional School Counseling, 9, 1-8.
Campbell, C. A., & Dahir, C. A. (1997). The national standards for school counseling programs. Alexandria, VA: American School Counselor Association.
Clemens, E.V., Milsom, A., & Cashwell, C. S. (2009). Using leader-member exchange theory to examine principal-school counselor relationships, school counselors' roles, job satisfaction, and turnover intentions. Professional School Counseling, 13, 75-86.
Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research, & Evaluation, 10, 1-9.
Dahir, C. A. (2001). The national standards for school counseling programs: Development and implementation. Professional School Counseling, 4, 320-327.
Dahir, C. A., Burnham, J. J., & Stone, C. (2009). Listen to the voices: School counselors and comprehensive school counseling programs. Professional School Counseling, 12, 182-192.
Eisner, D., & Carey, J. (2005). School counseling program implementation survey. Unpublished assessment instrument.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272-299.
Gysbers, N. C. (1990). Comprehensive guidance programs that work. Ann Arbor, MI: ERIC/CAPS.
Gysbers, N. C. (2004). Comprehensive guidance and counseling programs: The evolution of accountability. Professional School Counseling, 8, 1-14.
Gysbers, N. C., & Henderson, P. (2006). Developing and managing your school guidance and counseling program (4th ed.). Alexandria, VA: American Counseling Association.
Heppner, P. P., Wampold, B. E., & Kivlighan, D. M. (2008). Research design in counseling. Belmont, CA: Thomson Brooks/Cole.
Johnson, S. K., & Johnson, C. D. (2003). Results-based guidance: A systems approach to student support programs. Professional School Counseling, 6, 289-298.
Kirchner, G., & Setchfield, M. (2005). School counselors' and school principals' perceptions of the school counselor's role. Education, 126, 10-16.
Lapan, R.T. (2001). Results-based comprehensive guidance and counseling program: A framework for planning and evaluation. Professional School Counseling, 4, 289-299.
Lapan, R. T., Gysbers, N. C., & Petroski, G. F. (2001). Helping seventh graders be safe and successful: A statewide study of the impact of comprehensive guidance and counseling programs. Journal of Counseling & Development, 79, 320-330.
Lapan, R. T., Gysbers, N. C., & Sun, Y. (1997). The impact of more fully implemented guidance programs on the school experiences of high school students: A statewide evaluation study. Journal of Counseling & Development, 75, 292-302.
Nelson, D. E., Fox, D. G., Haslam, M., & Gardner, J. (2007). An evaluation of Utah's comprehensive guidance program: The fourth major study of Utah's thirteen-year program. Salt Lake City, UT: Utah State Office of Education.
Nelson, D. E., Gardner, J., & Fox, D. G. (1998). An evaluation of the comprehensive guidance program in the Utah public schools. Salt Lake City, UT: Utah State Office of Education.
Scarborough, J. L. (2005). The School Counselor Activity Rating Scale: An instrument for gathering process data. Professional School Counseling, 8, 274-283.
Scarborough, J. L.,& Culbreth, J. R. (2008). Examining discrepancies between actual and preferred practice of school counselors. Journal of Counseling & Development, 86, 446-459.
Sink, C. A., Akos, P., Turnbull, R. J., & Mvududu, N. (2008). An investigation of comprehensive school counseling programs and academic achievement in Washington State middle schools. Professional School Counseling, 12, 43-53.
Sink, C. A., & Spencer, L. R. (2005). My Class Inventory - Short Form as an accountability tool for elementary school counselors to measure classroom climate. Professional School Counseling, 9, 37-48.
Sink, C. A., & Spencer, L. R. (2007). Teacher version of the My Class Inventory--Short Form: An accountability tool for elementary school counselors. Professional School Counseling, 11, 129-139.
Sink, C. A., & Stroh, H. R. (2003). Raising achievement test scores of early elementary school students through comprehensive school counseling programs. Professional School Counseling, 6, 352-364.
Tinsley, H. E. A., & Tinsley, D. J. (1987). Uses of factor analysis in counseling psychology research. Journal of Counseling Psychology, 34, 414-424.
Velicer, W. F., & Jackson, D. N. (1990). Component analysis versus common factor analysis: Some further observations. Multivariate Behavioral Research, 25, 97-114.
Whiston, S. C. (2002). Response to the past, present, and future of school counseling: Raising some issues. Professional School Counseling, 5, 148-157.
Whiston, S. C., & Aricak, T. (2008). Development and initial investigation of the School Counseling Program Evaluation Scale. Professional School Counseling, 11, 253-261.
Wood, J. M., Tataryn, D. J., & Gorsuch, R. L. (1996). Effects of under- and overextraction on principal axis factor analysis with varimax rotation. Psychological Methods, 1, 345-365.
Elysia V. Clemens, Ph.D. is an assistant professor at the University of Northern Colorado, School of Applied Psychology and Counselor Education. E-mail: email@example.com
John C. Carey, Ph.D. is a professor and Karen M. Harrington is a senior research fellow at the National Center for School Counseling Outcome Research, University of Massachusetts, Amherst.
APPENDIX

School Counseling Program Implementation Survey

Please rate each statement below in terms of the degree to which it is currently implemented in your school's school counseling program. Circle your response using the following rating scale: 1 = Not Present; 2 = Development in Progress; 3 = Partly Implemented; 4 = Fully Implemented.

1. A written mission statement exists and is used as a foundation by all counselors.
2. Services are organized so that all students are well served and have access to them.
3. The program operates from a plan for closing the achievement gap for minority and lower income students.
4. The program has a set of clear measurable student learning objectives and goals are established for academics, social/personal skills, and career development.
5. Needs assessments are completed regularly and guide program planning.
6. All students receive classroom guidance lessons designed to promote academic, social/personal, and career development.
7. The program ensures that all students have academic plans that include testing, individual advisement, long-term planning, and placement.
8. The program has an effective referral and follow-up system for handling student crises.
9. School counselors use student performance data to decide how to meet student needs.
10. School counselors analyze student data by ethnicity, gender, and socioeconomic level to identify interventions to close achievement gaps.
11. School counselor job descriptions match actual duties.
12. School counselors spend at least 80% of their time in activities that directly benefit students.
13. The school counseling program includes interventions designed to improve the school's ability to educate all students to high standards.
14. An annual review is conducted to get information for improving next year's programs.
15. School counselors use computer software to access student data.
16. School counselors use computer software to analyze student data.
17. School counselors use computer software to use data for school improvement.
18. The school counseling program has the resources to allow counselors to complete appropriate professional development activities.
19. School counseling priorities are represented on curriculum and education committees.
20. School counselors communicate with parents to coordinate student achievement and gain feedback for program improvement.
Table 1. Item-Level Descriptive Statistics

| Item | Range  | Mean | SD   | Skewness (SE = 0.132) | Kurtosis (SE = 0.263) |
|------|--------|------|------|-----------------------|-----------------------|
| 1    | 1 to 4 | 2.76 | 1.22 | -0.37                 | -1.45                 |
| 2    | 1 to 4 | 3.36 | 0.78 | -1.11                 | -0.66                 |
| 3    | 1 to 4 | 2.73 | 1.10 | -0.36                 | -1.06                 |
| 4    | 1 to 4 | 2.79 | 1.01 | -0.34                 | -1.01                 |
| 5    | 1 to 4 | 2.45 | 1.08 | -0.01                 | -1.27                 |
| 6    | 1 to 4 | 3.17 | 1.01 | -1.08                 | -0.01                 |
| 7    | 1 to 4 | 3.10 | 1.03 | -0.88                 | -0.43                 |
| 8    | 1 to 4 | 3.38 | 0.86 | -1.28                 | 0.78                  |
| 9    | 1 to 4 | 2.98 | 0.99 | -0.68                 | -0.58                 |
| 10   | 1 to 4 | 2.44 | 1.13 | 0.05                  | -1.38                 |
| 11   | 1 to 4 | 2.65 | 1.01 | -0.29                 | -1.01                 |
| 12   | 1 to 4 | 3.11 | 0.98 | -0.87                 | -0.30                 |
| 13   | 1 to 4 | 3.08 | 0.89 | -0.75                 | -0.16                 |
| 14   | 1 to 4 | 2.50 | 1.12 | -0.02                 | -1.36                 |
| 15   | 1 to 4 | 3.34 | 1.03 | -1.32                 | 0.33                  |
| 16   | 1 to 4 | 2.69 | 1.15 | -0.33                 | -1.33                 |
| 17   | 1 to 4 | 2.72 | 1.10 | -0.35                 | -1.92                 |
| 18   | 1 to 4 | 3.02 | 1.05 | -0.75                 | -0.67                 |
| 19   | 1 to 4 | 2.67 | 1.13 | -0.28                 | -1.30                 |

Table 2. Three-Factor Model With 17 Items: Factor Loadings for Principal Axis Factoring With Direct Oblimin Rotation of the School Counseling Program Implementation Survey

| Item | Programmatic Orientation | Use of Computer Software | School Counseling Services |
|------|--------------------------|--------------------------|----------------------------|
| 1    | 0.33                     | 0.04                     | 0.22                       |
| 2    | 0.19                     | 0.07                     | 0.49                       |
| 3    | 0.66                     | -0.04                    | -0.01                      |
| 4    | 0.51                     | -0.19                    | 0.17                       |
| 5    | 0.49                     | 0.02                     | 0.15                       |
| 6    | IE                       | IE                       | IE                         |
| 7    | IE                       | IE                       | IE                         |
| 8    | IE                       | IE                       | IE                         |
| 9    | 0.54                     | 0.28                     | 0.04                       |
| 10   | 0.78                     | 0.14                     | -0.16                      |
| 11   | 0.03                     | -0.05                    | 0.66                       |
| 12   | -0.08                    | -0.22                    | 0.71                       |
| 13   | 0.37                     | 0.03                     | 0.47                       |
| 14   | 0.34                     | 0.03                     | 0.30                       |
| 15   | -0.15                    | 0.68                     | 0.11                       |
| 16   | 0.08                     | 0.88                     | -0.07                      |
| 17   | 0.18                     | 0.75                     | 0.01                       |
| 18   | 0.04                     | 0.29                     | 0.39                       |
| 19   | 0.07                     | 0.24                     | 0.46                       |
| 20   | 0.24                     | 0.20                     | 0.34                       |

Note. IE = item eliminated. Items were placed on the factor on which they loaded most heavily. Item 13 cross-loads on factors 1 and 3; it is placed on factor 3 because it loads more heavily on that factor.

Table 3. Two-Factor Model With 17 Items: Factor Loadings for Principal Axis Factoring With Direct Oblimin Rotation of the School Counseling Program Implementation Survey

| Item | ASCA National Model Program Implementation | School Counselors' Use of Computer Software |
|------|--------------------------------------------|---------------------------------------------|
| 1    | 0.50                                       | 0.02                                        |
| 2    | 0.60                                       | 0.05                                        |
| 3    | 0.59                                       | -0.06                                       |
| 4    | 0.62                                       | -0.22                                       |
| 5    | 0.59                                       | -0.01                                       |
| 6    | IE                                         | IE                                          |
| 7    | IE                                         | IE                                          |
| 8    | IE                                         | IE                                          |
| 9    | 0.53                                       | 0.25                                        |
| 10   | 0.56                                       | 0.10                                        |
| 11   | 0.59                                       | -0.04                                       |
| 12   | 0.51                                       | -0.01                                       |
| 13   | 0.75                                       | 0.01                                        |
| 14   | 0.59                                       | 0.01                                        |
| 15   | -0.04                                      | 0.67                                        |
| 16   | 0.02                                       | 0.87                                        |
| 17   | 0.17                                       | 0.74                                        |
| 18   | 0.37                                       | 0.28                                        |
| 19   | 0.46                                       | 0.23                                        |
| 20   | 0.52                                       | 0.20                                        |

Note. IE = item eliminated. Items were placed on the factor on which they loaded most heavily.