Instrumentation and psychometric property reporting in current health education literature.
Abstract: Survey research methodologies are frequently employed in health education. Advancements in health education research and practice are attributed to researchers' ability to measure social variables effectively. An analysis of content published in four health education journals over two years was conducted using a 26-point rubric. Of articles meeting inclusion criteria, 41.4% reported psychometric properties of data collected with survey instruments. Instrument characteristics and survey administration procedures were inconsistently reported. Publishing detailed information about instrumentation and the psychometric properties of data collected with survey instruments enables researchers to determine whether instrument scales are useful and can consistently measure social phenomena across study samples. Psychometric property testing should be conducted and reported each time a survey instrument is administered.
Author: Smith, Matthew Lee
Publication: American Journal of Health Studies, 24(1), Winter 2009. ISSN 1090-0500.
INTRODUCTION

Researching social phenomena helps health educators understand the complexity of human behavior and health, which transcends merely identifying and defining health problems (McDermott, 2000). These efforts guide the development and implementation of health education programs and enable health educators to evaluate their efficacy and effectiveness (Merrill, Lindsay, Shields, & Stoddard, 2007). As such, leaders in health education have called for original, unique, and high-quality research to propel and distinguish the profession (Glover, 2004; McDermott, 2000; Torabi, 2004).

Health education, like any profession, is partially defined and advanced by the literature it publishes (Simons-Morton, 2007). Through published literature, techniques are shared and results are reported to give readers the most current information about emerging issues relevant to the field. To measure the sophistication and rigor associated with health education research, it is important to assess the content published in journal articles and determine how authors obtained the information reported in these publications. In-depth examination of published literature is one method to assess the status, maturation, and direction of a profession (Merrill et al., 2007).

Advancements in social science research are largely attributed to reliable and valid techniques for measuring social variables (Ary, Jacobs, & Razavieh, 1996). Survey research is one method used to determine community needs and inform tailored health education programs (Merrill et al., 2007). Self-report survey instruments are tools to measure the degree of, changes in, and contributors to health status and health behavior. Survey instruments are commonly used in health education to collect information that explains, and can even predict, health outcomes. A 10-year analysis of representative articles published in health education journals found that cross-sectional quantitative designs are the most common type of research article in the health education literature (Merrill et al., 2007). The increasing proportion of published survey research articles underscores the need for well-constructed survey instruments.

Various definitions, descriptions, and techniques are available concerning instrument scale reliability testing and construct validity (Anastasi, 1969; Cronbach, 1970; Cronbach & Meehl, 1955; Ghiselli, 1964; Windsor, Baranowski, Clark, & Cutter, 1994). Reliability refers to the internal consistency of instrument scale items based on scores reported by study participants (Ghiselli, 1964). Validity refers to the extent to which an instrument scale measures what it is intended to measure (Cronbach & Meehl, 1955). Reporting psychometric properties for data collected with instruments allows researchers to evaluate how well the scales measure intended latent constructs (scales should be grounded in established theoretical frameworks; Babbie, 1989; McGrath, 1979) and to decide whether the scales are appropriate for use in future research.
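
To make the internal-consistency concept concrete, the following is a minimal sketch, in Python with pandas (an assumption; the article prescribes no software), of the most commonly reported reliability statistic, Cronbach's alpha. The scale items and responses are hypothetical.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to a five-item Likert-type scale (rows = respondents).
responses = pd.DataFrame({
    "item1": [4, 5, 3, 4, 2, 5],
    "item2": [4, 4, 3, 5, 2, 4],
    "item3": [3, 5, 2, 4, 1, 5],
    "item4": [4, 4, 3, 4, 2, 5],
    "item5": [5, 5, 3, 4, 2, 4],
})
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")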

Repeated administration of a survey instrument can document whether data collected from heterogeneous samples remain psychometrically consistent. Although an instrument may exhibit sound psychometric properties in its original version, reliability and validity must be reestablished following each administration of the instrument (Chen, Wang, Yang, & Liou, 2003). This is especially true if the instrument or scale has been modified. Reporting the psychometric properties of data collected with instrument scales permits researchers to appropriately alter the length and content of survey instruments so they may be more precise and efficient (Birnbaum et al., 2002; Chen et al., 2003; Lottes & Adkins, 2003).
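
In practical terms, reestablishing reliability means recomputing the statistic on each new sample. A brief continuation of the hypothetical sketch above, reusing the cronbach_alpha helper on a second administration:

# Hypothetical second administration of the same (or a modified) scale.
wave2 = pd.DataFrame({
    "item1": [3, 4, 5, 2, 4, 4],
    "item2": [3, 5, 4, 2, 4, 5],
    "item3": [2, 4, 5, 1, 3, 4],
    "item4": [3, 4, 4, 2, 4, 4],
    "item5": [3, 5, 5, 2, 4, 5],
})
# The earlier alpha estimate does not carry over to a new population or a
# modified instrument; it is recomputed and reported for the new sample.
print(f"Second administration alpha = {cronbach_alpha(wave2):.2f}")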

The reliability and validity of data collected with survey instruments are fundamental to research results (Patrick & Beery, 1991). Performing tests to assess the reliability and validity of data collected with survey instruments allows researchers to determine whether instrument scales are useful and can consistently measure social phenomena across samples (Laukkanen, Halonen, Aivio, Viinamaki, & Lehtonen, 2000). To maintain data integrity, eliminate bias, and obtain valid and reliable results, survey instruments must be carefully constructed (O'Rourke, 2001). Psychometric property testing provides relatively objective criteria for determining the value of survey instruments (McDowell & Newell, 1996). Unfortunately, methods of establishing reliability and validity for data collected with survey instruments are complex and labor-intensive (Chen et al., 2003; Fleiss, 1986; Long, 1983; Nunnally & Bernstein, 1994), and thus should be shared among the research community through publication (Birnbaum et al., 2002).

Currently, no journal in the field of health education requires authors to report characteristics of the survey instruments used to collect primary data or the psychometric properties of data collected with those instruments. To date, the extent to which survey instruments are shared, or their associated psychometric properties published, is unknown. This void indicates that the field lacks a cohesive inventory or repository of information about survey instruments. The aims of this article are to 1) identify the use of survey instruments and associated psychometric property reporting in currently published health education literature, 2) emphasize the importance of and need for well-constructed instruments, and 3) recommend extensive psychometric property testing and reporting in published literature.

METHODS

An analysis of content related to the use of survey instruments and associated psychometric property reporting was performed on articles published in four health education journals, assessing each article's use of survey instruments and its psychometric property reporting practices. All 2006 and 2007 issues of Health Education and Behavior (HEB), Health Education Journal (HEJ), Health Education Research (HER), and the International Electronic Journal of Health Education (IEJHE) were analyzed. These journals were selected because their submission guidelines require that articles contain solely health education-related content. None of the selected journals mandates the inclusion of survey instrument logistics or psychometric property reporting in its author submission guidelines.

INCLUSION CRITERIA

Selected journal articles contained primary or secondary data collected using a survey instrument. Articles were included only if quantitative data were collected using survey research methodology. Omitted journal articles included qualitative research articles, book reviews, editorials, commentaries, conceptual articles, presidential addresses, award papers, perspectives, systematic/literature reviews, policy reviews, non-empirical research, instructions for authors, reference indices, errata, and articles using data collection methods other than a survey instrument. Articles using survey instruments that yielded data inappropriate for calculating psychometric properties were omitted as well. For example, survey instruments administered in the form of checklists or rubrics were not included in the current study.

CONTENT RUBRIC

All articles were evaluated using a 26-point content rubric created by the author to assess instrument-related information published in health education literature. The rationale for the content rubric items was based on recommended survey instrument development and design methods (Dillman, 2007). The introduction, methods, results, and discussion sections of journal articles were reviewed for explicitly stated content concerning instrument design, administration, and psychometric property reporting. Information from articles meeting inclusion criteria was recorded in the content rubric. To eliminate bias, only explicit statements made by article authors were evaluated; no inferences were made about the content. Only criteria related to instrument development, instrument administration, instrument characteristics, and reported psychometric properties for data collected with survey instruments were included in this study.

FINDINGS

SAMPLE

Review of all 2006 and 2007 issues of HEB, HEJ, HER, and IEJHE yielded 403 articles for potential inclusion (128, 68, 169, and 38, respectively). After applying the inclusion criteria, 192 articles were omitted (69, 39, 73, and 11, respectively). The remaining 211 articles were examined for content concerning instrumentation and psychometric property reporting (59, 29, 96, and 27, respectively). Further review excluded an additional 8 articles for not utilizing survey instruments for data collection or for using instruments not appropriate for psychometric property calculation or reporting (3, 3, 2, and 0, respectively). For example, a study that administered a survey instrument to program administrators in the form of a checklist to determine common characteristics of similar programs nationwide was eliminated. The remaining 203 articles formed the final sample (Table 1).

JOURNAL ARTICLE CHARACTERISTICS

Among all published articles, quantitative research methodology was most common (41.9%), followed by qualitative methodology (15.1%) and mixed-methods methodology (9.9%). Table 2 provides details about the types of articles published in the selected journals in 2006 and 2007. Within the final sample, 124 articles used cross-sectional study designs (61.1%), 36 used repeated cross-sectional designs (17.7%), 34 used pre-test/post-test designs (16.7%), and 9 used longitudinal study designs (4.4%). Of the included articles, 146 (71.9%) reported findings from original research studies, and 56 (27.6%) reported findings from intervention studies and program evaluations.

INSTRUMENTATION IN HEALTH EDUCATION LITERATURE

Among the final sample (n = 203), 83.3% reported that the instruments used to collect data from participants were newly created by the study authors, modified versions of existing instruments, or combinations of existing instrument scales assembled into a single instrument. The remaining 16.7% reported using unaltered existing survey instruments. Table 3 provides detailed information about the basis for instrument creation reported by authors. Of the articles meeting the predetermined inclusion criteria, less than half reported using a theoretical framework in their study design (43.3%). The most commonly reported theoretical frameworks were the Theory of Planned Behavior (20.5%), Social Cognitive Theory (19.3%), Transtheoretical Model (12.5%), Social Learning Theory (10.2%), Health Belief Model (9.1%), and Social-Ecological Model (9.1%).

Paper-and-pencil instruments were most frequently used to collect data from study participants (68.0%). Table 3 provides information about data collection instrument formats. Nearly half of the included articles (47.8%) did not report the number of survey items included in their instruments. Among those that did, instrument length ranged from 5 to 108 items. Only 10 articles (4.9%) reported instrument length in pages; of those, instruments ranged from 2 to 28 pages. Of the 33 articles (16.3%) that reported the time needed for participants to complete the instrument, completion time ranged from 4 to 120 minutes. Articles most frequently reported using Likert-type (34.4%), ordinal (12.2%), and yes/no (10.5%) item response types, but over 18% did not specify the type of responses participants were expected to provide. Table 3 lists the types of instrument item responses used to collect data from participants.

PSYCHOMETRIC PROPERTY REPORTING IN HEALTH EDUCATION LITERATURE

Overall, 119 (58.6%) articles did not report psychometric properties associated with the instrument used in their respective studies. Of the articles that met the inclusion criteria, 10 (4.9%) reported having conducted an exploratory factor analysis (EFA), 7 (3.4%) a principal components analysis (PCA), 3 (1.5%) a confirmatory factor analysis (CFA), and 2 (1.0%) both EFA and CFA. Of the 169 articles that reported using author-created instruments, 16 (9.5%) performed EFA, PCA, or CFA, and 71 (42.0%) reported psychometric properties associated with the instrument used in their respective studies. Only 38.3% of articles that used unaltered existing instruments reported psychometric properties associated with the instrument used. Of the articles reporting psychometric properties, whether for data collected in their respective studies or for data from previous administrations of the instrument used, 89.3% reported Cronbach's alpha reliability coefficients. Table 4 provides details about the reported psychometric properties associated with instruments used in the reviewed studies.

A total of 14 articles (6.9%) reported psychometric properties associated with a previous administration of the instrument, or components of the instrument, used in their respective studies. Of the 169 articles that reported using created instruments, 10 (5.9%) did so, as did 4 (11.8%) of the 34 articles that reported using unaltered existing survey instruments. Only 5 (2.5%) published articles meeting inclusion criteria included a version of the instrument or scale used in the reported study.

DISCUSSION

Research is a critical element of health education's recognition as a credible field of study (Torabi, 2004). Study designs, whether qualitative or quantitative, must employ sound methodology to generate accurate findings with practical implications, which may then be translated for use by practitioners and academicians (Merrill et al., 2007; Torabi, 2004). Information reported in research-based publications should explicitly detail study methods and procedures so that readers can comprehend the content, determine the study's relevance to their own practice, and assess the feasibility and efficacy of replicating the study methodology.

Findings from this review support previous research reporting an increasing trend toward cross-sectional quantitative research designs in the health education literature (Merrill et al., 2007). Health education researchers frequently use survey instruments to ascertain information from participants concerning attitudes, perceptions, beliefs, and behaviors associated with contemporary health issues. The proper development and administration of survey instruments to collect primary data is so critical to the discipline that it is included among the competencies required of entry-level health educators (Bartee, Grandjean, & Bieber, 2004; National Commission for Health Education Credentialing, 2004). Specifically, as identified by the National Commission for Health Education Credentialing (2004), an entry-level health educator should possess the skill to "employ or develop appropriate data-gathering instruments." In addition, graduate-level competencies encompass the ability of health educators to "develop valid and reliable data collection instruments" (National Commission for Health Education Credentialing, 2004).

In the current study, reporting of survey instrument logistics and psychometric property testing was inconsistent and infrequent. Less than half of the articles meeting inclusion criteria reported the number of items in their instruments, the page length of instruments, the types of items included, or the time participants needed to complete the instruments. Many articles reported the number or types of items for a particular scale within the instrument, but most did not report these logistics for the instrument as a whole. Inconsistent reporting of survey instrument characteristics hinders readers' understanding of the true composition of the instrument used and its administration process, and it limits readers' ability to replicate research studies and their corresponding findings.

Among the articles reviewed for this study, 83.3% used instruments that were created by the authors of the respective studies, modified versions of existing instruments, or combinations of existing instrument scales assembled into a single instrument, yet only 5% reported findings from factor analyses. Without reported factor loadings for data collected with survey instruments, researchers remain uncertain about the construct validity of items intended to measure theoretical constructs. Only 42% of articles reported psychometric properties for the instrument used in the current study, and only 6.9% reported psychometric properties associated with the instrument's administration in previous studies. Although researchers are encouraged to tailor studies and study materials to the targeted research sample, altering existing survey instruments requires a complex regimen of testing to reestablish and/or confirm the validity of the data collected (Chen et al., 2003). Psychometric property testing results should be reported in published articles for each administration of a survey instrument to establish the stability of the data and to promote the reliability and generalizability of findings relative to previous and/or future studies. The dearth of psychometric property reporting in published health education literature leaves researchers unable to determine the potential utility of existing instruments for their own use.
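
As an illustration of what factor-loading reporting makes visible, below is a minimal sketch of an exploratory factor analysis on simulated responses. Python with scikit-learn is an assumption (the reviewed studies do not specify software), and the six-item, two-construct instrument is hypothetical.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(seed=0)
n_respondents = 300  # hypothetical sample size

# Simulate a six-item instrument built around two latent constructs:
# items 1-3 are written to measure construct A, items 4-6 construct B.
construct_a = rng.normal(size=n_respondents)
construct_b = rng.normal(size=n_respondents)
items = np.column_stack([construct_a] * 3 + [construct_b] * 3)
items += rng.normal(scale=0.5, size=items.shape)  # item-level noise

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
loadings = fa.components_.T  # rows = items, columns = factors

# Items loading strongly (e.g., above 0.40) on their intended factor and
# weakly on the other support the scale's construct validity.
for i, row in enumerate(loadings, start=1):
    print(f"item {i}: " + "  ".join(f"{value:+.2f}" for value in row))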

Reporting survey instrument characteristics and psychometric properties in health education publications should be required and viewed as a courtesy to readers. Including survey instrument logistics, instrument design processes, and psychometric property testing in published articles enables readers and the greater research community to assess the quality of the survey instruments used to generate research findings and to replicate studies if desired. Without this information, readers are left with only partial instructions for locating appropriate research tools, interpreting study findings, replicating studies, or generalizing findings beyond the current administration of the survey instrument. Inconsistent and incomplete reporting of instrument characteristics, development procedures, and testing procedures threatens to compromise the integrity and advancement of health education as a discipline.

IMPLICATIONS FOR PRACTICE

Page and word limits imposed by journals and journal editors force authors to strike a delicate balance between brevity and including the information most relevant to the study and its implications for future research, and may deter them from providing instrument-related detail. This author recommends that researchers not omit information about the survey instruments used or the methods employed to validate them. Rather, authors should provide as much instrument-related descriptive information as possible, including instrument design procedures, instrument administration, and results from both previous and current psychometric property testing. Table 5 presents a recommended checklist for authors to use prior to manuscript submission.

The author further recommends that journals and journal editors modify manuscript submission guidelines to require psychometric property reporting. Required reporting of psychometrics in the health education literature will contribute to more uniform reporting practices and to publishing only high-quality survey research studies (i.e., those using instruments shown to yield valid and reliable data). The credibility, integrity, and direction of the health education field depend upon the literature it publishes.

LIMITATIONS

This initial investigation into the use of survey instruments and associated psychometric property reporting presents findings from a content analysis of articles published in four health education journals over a two-year period. The short time frame from which this sample of articles was drawn limits its ability to be considered a comprehensive representation of the health education literature. Future studies should extend the number of years and journals analyzed to allow researchers to more accurately identify trends in psychometric property reporting in published literature.

REFERENCES

Anastasi, A. (1969). Psychological testing (3rd ed.). London: Collier-Macmillan Limited.

Ary, D., Jacobs, L., & Razavieh, A. (1996). Introduction to research in education (5th ed.). Fort Worth, TX: Holt, Rinehart and Winston.

Babbie, E. (1989). The practice of social research. Belmont, CA: Wadsworth.

Bartee, R. T., Grandjean, B. D., & Bieber, S. L. (2004). Confirming the reliability of a theory-based questionnaire. American Journal of Health Studies, 19(3), 175-180.

Birnbaum, A. S., Lytle, L. A., Murray, D. M., Story, M., Perry, C. L., & Boutelle, K. N. (2002). Survey development for assessing correlates of young adolescents' eating. American Journal of Health Behavior, 26(4), 284-295.

Chen, M.-Y., Wang, E. K., Yang, R.-J., & Liou, Y.-M. (2003). Adolescent Health Promotion Scale: Development and psychometric testing. Public Health Nursing, 20(2), 104-110.

Cronbach, L. J. (1970). Essentials of psychological testing (3rd ed.). New York, NY: Harper & Row.

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.

Dillman, D. A. (Ed.). (2007). Mail and internet surveys: The tailored design method (2nd ed.). Hoboken, NJ: Wiley.

Fleiss, J. L. (1986). The design and analysis of clinical experiments. New York, NY: Wiley & Sons.

Ghiselli, E. E. (1964). Theory of psychological measurement. New York, NY: McGraw-Hill.

Glover, E. D. (2004). A new health education paradigm: Uncommon thoughts about common matters. American Journal of Health Education, 35, 260-271.

Laukkanen, E., Halonen, P., Aivio, A., Viinamaki, H., & Lehtonen, J. (2000). Construct validity of the Offer Self-Image Questionnaire in Finnish 13-year-old adolescents: Differences in the self-images of boys and girls. Nordic Journal of Psychiatry, 54(6), 431-435.

Long, J. S. (1983). Confirmatory factor analysis. Newbury Park, CA: Sage Publications.

Lottes, I. L., & Adkins, C. W. (2003). The construction and psychometric properties of an instrument measuring support for sexual rights. Journal of Sex Research, 40(3), 286-295.

McDermott, R. J. (2000). Health education research: Evolution or revolution (or maybe both)? Journal of Health Education, 31, 264-271.

McDowell, I., & Newell, C. (1996). Measuring health: A guide to rating scales and questionnaires (Vol. 3). New York, NY: Oxford University Press.

McGrath, J. (1979). Toward a theory of method for research on organizations. In R. T. Mowday & R. M. Steers (Eds.), Research in organizations: Issues and controversies (pp. 4-21). Santa Monica, CA: Goodyear Publishing.

Merrill, R. M., Lindsay, C. A., Shields, E. C., & Stoddard, J. (2007). Have the focus and sophistication of research in health education changed? Health Education & Behavior, 34(1), 10-25.

National Commission for Health Education Credentialing. (2004). About NCHEC: Responsibilities & competencies [Electronic version]. Retrieved May 10, 2008, from http://www.nchec.org/aboutnchec/rc.htm

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.

O'Rourke, T. (2001). Techniques to improve questionnaire format. American Journal of Health Studies, 17, 36-38.

Patrick, D. L., & Beery, W. L. (1991). Measurement issues: Reliability and validity. American Journal of Health Promotion, 5, 305-310.

Simons-Morton, B. (2007). Defined by publication: A commentary on health education and health promotion publication trends. Health Education & Behavior, 34(1), 26-30.

Torabi, M. R. (2004). Optimistically looking ahead: Closing remarks by incoming president of the academy. American Journal of Health Behavior, 28(6), 569-571.

Windsor, R., Baranowski, T., Clark, N., & Cutter, G. (1994). Evaluation of health promotion, health education, and disease prevention programs (2nd ed.). Mountain View, CA: Mayfield Publishing Company.

Matthew Lee Smith, PhD, MPH, CHES, is affiliated with the School of Rural Public Health, Health Science Center, Texas A&M University. Please address all correspondence to Matthew Lee Smith, PhD, MPH, CHES, School of Rural Public Health, Health Science Center, Texas A&M University, 1266 TAMU, College Station, TX 77843. Tel: 979-845-5788. Email: matlsmit@tamu.edu.
Table 1. Frequency of 203 Reviewed Studies Meeting
Inclusion Criteria for Review of Psychometric Content

                           Articles   Percent
                Overall    Meeting    Articles
        Year    Articles   Criteria   Included

HEB

        2006       65         25       38.5%
        2007       63         31       49.2%
        Total     128         56       43.8%
HEJ

        2006       34         13       38.2%
        2007       34         13       38.2%
        Total      68         26       38.2%
HER

        2006       87         44       50.6%
        2007       82         50       61.0%
        Total     169         94       55.6%
IEJHE

        2006       19         14       73.7%
        2007       19         13       68.4%
        Total      38         27       71.1%

Table 2. Article Types in 2006-2007 Health Education
Literature (n = 403)

Article Type                              Frequency (%)

Quantitative-based Articles               169 (41.9%)
Qualitative-based Articles                 61 (15.1%)
Mixed-Methods-based Articles               40 (9.9%)
Systematic/Literature Reviews              24 (6.0%)
Secondary Data Analysis-based Articles     21 (5.2%)
Conceptual Articles                        21 (5.2%)
Commentaries                               15 (3.7%)
Book Reviews                               11 (2.7%)
Editorials                                 10 (2.5%)
Acknowledgements                            4 (1.0%)
Erratum                                     2 (0.5%)
Awards                                      2 (0.5%)
Miscellaneous                              23 (5.7%)

Table 3. Instrument-Related Characteristics
(n = 203)

Basis for Instrument Creation    Frequency (%)

No Explicit Justification        52 (25.6%)
Unaltered Existing Instrument    44 (21.7%)
Modified Existing Instrument     42 (20.7%)
Program Objectives               24 (11.8%)
Qualitative Methods              16 (7.9%)
Literature                       15 (7.4%)
Previous Research Findings       13 (6.4%)
Theory                           11 (5.4%)
Pilot Study                      3 (1.5%)

Methods of Data Collection       Frequency (%)

Paper-and-Pencil                 149 (73.4%)
Internet                          20 (9.9%)
Telephone                         19 (9.4%)
Mail                              17 (8.4%)
Interviewer-Administered          14 (6.9%)

Instrument Item Response Type    Frequency (%)

Likert-Type                      101 (49.8%)
Ordinal                           36 (17.7%)
Yes/No                            31 (15.3%)
Open-Ended                        21 (10.3%)
Continuous                        18 (8.9%)
True/False                        11 (5.4%)
Categorical                       11 (5.4%)
Multiple-Choice                   8 (3.9%)
Dichotomous                       7 (3.4%)
Close-Ended                       6 (3.0%)
Checklist Response                5 (2.5%)
Rank-Order Response               2 (1.0%)
No Item Information Provided     37 (18.2%)

Table 4. Psychometric Property Type Reporting (n = 203)

Psychometric Property                      Frequency (%)

Cronbach's Alpha Reliability Coefficient    74 (36.5%)
Test-Retest Reliability Coefficient          8 (3.9%)
Kappa Coefficient                            2 (1.0%)
Temporal Stability Coefficient               1 (0.5%)
Generalizability Coefficient                 1 (0.5%)
Kuder-Richardson-20                          1 (0.5%)
Separation Index                             1 (0.5%)
No Psychometric Properties Reported         119 (58.6%)

Table 5. Instrument Characteristics and
Psychometric Property Reporting Checklist

--   Data collection method
--   Survey research design
--   Theoretical framework
--   Instrument name
--   History of instrument development
--   Number of instrument items
--   Number of instrument pages
--   Minutes needed for participants
       to complete instrument
--   Types of instrument item responses
--   Study response rate
--   Instrument completion rate
--   Factor analysis
--   Current reliability coefficients
       for data collected with scales
--   Previous reliability coefficients
       for data collected with scales
--   Instrument included in article
       (or relevant contact information
       provided)