Improving the Annual Conference of the Society for Social Work and Research.
Article Type: Conference news
Subject: Social case work (Conferences, meetings and seminars)
Author: Howard, Matthew O.
Pub Date: 03/01/2010
Publication: Name: Social Work Research Publisher: National Association of Social Workers Audience: Academic; Trade Format: Magazine/Journal Subject: Sociology and social work Copyright: COPYRIGHT 2010 National Association of Social Workers ISSN: 1070-5309
Issue: Date: March, 2010 Source Volume: 34 Source Issue: 1
Geographic: Geographic Scope: United States Geographic Code: 1USA United States
Accession Number: 220468224
Last week, I received an e-mail from the Society for Social Work and Research (SSWR) with a link to the preliminary agenda of the 14th Annual SSWR Conference in San Francisco. It is strange to think that only 60% of SSWR members had e-mail when the organization was founded in 1994 (Williams et al., 2008). It is even harder to imagine social work without the annual SSWR conference, an extraordinary professional development opportunity by any measure. When this editorial appears in the spring, planning for the 15th annual conference will be under way. Thus, it may be timely to offer a few recommendations for improving the conference, based on my experience as program chair of the 11th annual conference and on more recent developments.


Abstract Reviewer Selection

The SSWR Board should reconsider the current reviewer selection process. In 2007, if memory serves me correctly, approximately 200 reviewers were recruited to rate an average of 18 abstracts each. Reviewers were commonly drawn from the ranks of assistant professors (many of them newly minted) and even doctoral students. A large set of reviewers was recruited so that no one reviewer was burdened with rating an excessive number of abstracts. Reviewers were unpaid and acknowledged only in a listing in the conference brochure and SSWR News.

The abstract reviewer selection process for the 14th annual SSWR conference did not rely on doctoral students for reviews, and the majority of reviews were completed by professors (30.5%, N = 87) and associate professors (28.4%, N = 81). That said, the largest single group of reviewers consisted of assistant professors (36.5%, N = 104), with the remainder (4.6%, N = 13) consisting of other apparently well-qualified, research-involved individuals.

Although the abstract reviewer selection process has improved in recent years, nearly 40% of abstracts continue to be rated by some of the most inexperienced (and nontenured) researchers in the profession. It might make more sense, scientifically, to recruit a smaller set of abstract reviewers (perhaps limited to the 160 to 170 full and associate professors) who are indisputably drawn from among the most accomplished researchers in social work. These reviewers could be paid modestly for their work, much as the National Institutes of Health pays its grant reviewers in symbolic recognition of their service.

Selection to the conference abstract review committee should represent a significant professional achievement and be recognized as such with a plaque, dinner, and specific ceremony at the conference. Criteria for service on the abstract review committee should be established by the SSWR Board and approved by the membership, and should include parameters for such service (for example, length of appointment, number of times one may serve, and so forth). A committee of approximately 160 to 170 abstract reviewers could rate all conference abstracts within one month and might be willing to review a larger number of abstracts than previously if the experience were perceived as more professionally rewarding. This arrangement might increase the rigor, reliability, and validity of the conference abstract selection process.

Conference Abstract Rating Process

The current abstract rating process is flawed for methodological reasons. First, decisions about the relative merits of abstracts (that is, abstract ratings) are made by different reviewers with very different rating propensities. A very critical reviewer may rate all of her or his abstracts relatively poorly, with variation in ratings occurring within the "very poor to poor" range. Conversely, other raters invariably find virtues in the abstracts they review and rate them highly, with the variation in their ratings tending to occur within the "very good to excellent" range. These rater biases are largely independent of abstract quality but are currently not controlled for in the annual conference abstract selection process.

Current reliance on three reviewers for ratings of each abstract might be thought to attenuate the effects of rater biases, in part because reviewers with low and high rating propensities would be randomly distributed across abstracts. However, many abstracts are rated by one, two, or even three reviewers with either low or high rating propensities and thus receive scores that are partly, and perhaps largely, independent of their quality. The current abstract rating process includes a post hoc review of each abstract for reviewer rating outliers by the conference chair, but it is unclear how often and under what circumstances this review is performed and what measures are taken when abstracts receive highly discordant ratings. SSWR should explore what might be done to make the abstract review process less vulnerable to reviewer rating biases.
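To make the problem concrete, one common remedy is to standardize each reviewer's scores before combining them, so that a reviewer's overall leniency and spread no longer drive an abstract's fate. The following is a minimal sketch in Python; the reviewer names, abstract identifiers, and ratings are invented purely for illustration, not drawn from SSWR data.

```python
from statistics import mean, pstdev

# Hypothetical raw ratings on a 1-5 scale: reviewer -> {abstract_id: score}.
ratings = {
    "harsh_reviewer":    {"A1": 1, "A2": 2, "A3": 1, "A4": 2},
    "lenient_reviewer":  {"A1": 4, "A2": 5, "A3": 4, "A4": 5},
    "moderate_reviewer": {"A1": 2, "A2": 4, "A3": 3, "A4": 3},
}

def standardize(scores):
    """Convert one reviewer's raw scores to z-scores, removing that
    reviewer's overall leniency (mean) and spread (standard deviation)."""
    mu = mean(scores.values())
    sigma = pstdev(scores.values()) or 1.0  # guard against a constant rater
    return {a: (s - mu) / sigma for a, s in scores.items()}

# Standardize within each reviewer, then average across reviewers per abstract.
z = {r: standardize(s) for r, s in ratings.items()}
abstracts = {a for s in ratings.values() for a in s}
adjusted = {a: mean(z[r][a] for r in z if a in z[r]) for a in sorted(abstracts)}

for a in sorted(adjusted, key=adjusted.get, reverse=True):
    print(a, round(adjusted[a], 2))
```

After this adjustment, the harsh and lenient reviewers contribute identically to each abstract's rank, because only their relative preferences, not their absolute score levels, survive the standardization.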

Absence of Reliability and Validity Data for Abstract Rating Process

It is, admittedly, difficult to assess the adequacy of the current abstract review process, but it is possible that it is only of marginal reliability and validity. Although it is unfortunate that SSWR has not undertaken reliability and validity assessments of the abstract review process, it is understandable given the young age of the organization. However, it is important that SSWR conduct or contract for such studies now and provide these data to the SSWR membership. Studies of other professional conference abstract rating protocols frequently reveal low interrater reliabilities; potential bias in the ratings of abstracts, with implications for final conference content; and low correspondence between abstract ratings and ratings of actual conference presentations (Bhandari, Templeman, & Tornetta, 2004; Montgomery, Graham, Evans, & Fahey, 2002; Poolman et al., 2007; Rowe et al., 2006; Smith, Nixon, Bueschen, Venable, & Henry, 2002; van Mastrigt & Downie, 1994; Vilstrup & Sorensen, 1998). Furthermore, several studies suggest that the quality of reporting in conference abstracts is relatively low (for example, Dundar, Dodd, Williamson, Dickson, & Walley, 2006).

The SSWR Board should examine the reliability and validity of the abstract review process and continue such assessments until such time as the conference review process is demonstrably reliable and valid. This process may entail testing a variety of abstract review strategies until an optimal process is identified. For validity assessment, it is important that each presentation at the conference be rated for its quality. Furthermore, content analyses and other formal assessments of the final conference proceedings should be conducted. These analyses could examine the quality of reporting in accepted and rejected abstracts and the overall content of the final program for notable gaps in program topical coverage. In addition, accepted and rejected abstracts could be followed over time to determine the proportions of each that are eventually published (Valderrama-Zurian et al., 2009). Given that publication is one of the most important scientific outcomes, it would be useful to know whether the percentage of conference abstracts eventually published has changed over the past decade.
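As a concrete starting point for such reliability work, even a simple chance-corrected agreement statistic would be informative. The sketch below computes Cohen's kappa for two raters from entirely hypothetical accept/reject decisions; a fuller SSWR assessment would likely use an intraclass correlation across all reviewers and the full rating scale.

```python
from collections import Counter

# Hypothetical accept/reject decisions by two reviewers on the same abstracts.
rater1 = ["accept", "accept", "reject", "accept", "reject", "reject", "accept", "reject"]
rater2 = ["accept", "reject", "reject", "accept", "reject", "accept", "accept", "reject"]

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    ca, cb = Counter(a), Counter(b)
    labels = set(a) | set(b)
    p_e = sum(ca[l] * cb[l] for l in labels) / n**2    # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

print(round(cohens_kappa(rater1, rater2), 3))
```

In this toy example the two raters agree on six of eight decisions, but because half that agreement is expected by chance alone, kappa is substantially lower than the raw 75% agreement rate, which is precisely why raw agreement overstates the reliability of a review process.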

Other Conference-related Issues

Although key results from hundreds of research studies are presented at each annual SSWR conference, rarely do media reports mention conference findings. It is not clear that conference proceedings have exerted any impact on social policy measures or political affairs. The SSWR Board should consider permanently locating the conference in Washington, DC, in close proximity to policymakers, lobbyists, and legislators and undertake a significant annual marketing effort in conjunction with the conference. The findings of particularly significant studies could be collated, attractively packaged, and forwarded to key policymakers in advance of the conference and meetings convened with these people during the conference. Compelling press releases could be developed and disseminated and, at a minimum, one might hope for annual coverage in the Washington Post and some national news outlets. Now that a significant body of rigorous social work research is being produced, the next step is to ensure that these reports have maximal policy influence.

In a related vein, it would be ideal if conference plenary speakers represented the pinnacle of public accomplishment and came from the ranks of people like former presidential contenders Al Gore or John Kerry and perhaps even former presidents like Bill Clinton or Jimmy Carter. Plenary speakers of great prominence would do much to bring attention to the SSWR conference and the profession. In a moment of reverie, one can envision national coverage of Bill Clinton (or Barack Obama if we are being grandiose) praising social work, in concert with intense marketing and lobbying efforts conveying important social work research findings to key policymakers. SSWR can find the resources to make this vision a reality, and I am pleased to learn that the SSWR Board has established a new Fund Development Committee.

Nonconference-related Issues

There are many strategies for highlighting social work research other than the annual conference that SSWR should consider. The SSWR Board has, understandably and with few exceptions (most notably the recent establishment of the Journal of the Society for Social Work and Research), largely confined itself to conference management activities. A more encompassing vision of SSWR's mission could entail many additional projects. For example, SSWR could seek book contracts with high-profile publishers sympathetic to social work, such as Oxford University Press, and publish series of books that highlight ongoing social work research relevant to public policy and clinical practice. SSWR might also convene practice guideline development committees to make broadly available concrete policy and practice recommendations in key social work practice areas. These products could enhance utilization and appreciation of social work research and improve contemporary policy and practice in important and highly visible areas.

One can also envision SSWR becoming a funder of research. If, over time, a substantial SSWR endowment were established, it would be possible for SSWR to award 20 to 30 pilot grants annually.


In the near term, much can be done to improve the annual SSWR conference. Research could rapidly provide more data about the types of papers presented at the SSWR annual conference, their quality, and the extent to which they are later published and cited. Enhanced marketing, media, and lobbying efforts on behalf of the SSWR annual conference and social work research generally could bring additional significant positive attention to social work research. It may be time to expand the SSWR vision to include a concerted and ongoing fundraising effort and activities that promote social work research, in addition to the annual conference.


Bhandari, M., Templeman, D., & Tornetta, P. (2004). Interrater reliability in grading abstracts for the Orthopaedic Trauma Association. Clinical Orthopaedics and Related Research, 423, 217-221.

Dundar, Y., Dodd, S., Williamson, P. R., Dickson, R., & Walley, T. (2006). Case study of the comparison of data from conference abstracts and full-text articles in health technology assessment of rapidly evolving technologies: Does it make a difference? International Journal of Technology Assessment in Health Care, 22, 288-294.

Montgomery, A.A., Graham, A., Evans, P. H., & Fahey, T. (2002). Inter-rater agreement in the scoring of abstracts submitted to a primary care research conference. BMC Health Services Research, 2, 8. Retrieved from

Poolman, R. W., Keijser, L. C., de Waal Malefijt, M. C., Blankevoort, L., Farrokhyar, F., & Bhandari, M. (2007). Reviewer agreement in scoring 419 abstracts for scientific orthopedics meetings. Acta Orthopaedica, 78, 278-284.

Rowe, B. H., Strome, T. L., Spooner, C., Blitz, S., Grafstein, E., & Worster, A. (2006). Review agreement trends from four years of electronic submissions of conference abstracts. BMC Medical Research Methodology, 6, 14. Retrieved from http://www.biomedcentral.com/1471-2288/6/14

Smith, J., Nixon, R., Bueschen, A. J., Venable, D. D., & Henry, H. H. (2002). Impact of blinded versus unblinded abstract review on scientific program content. Journal of Urology, 168, 2123-2125.

Valderrama-Zurian, J. C., Bolanos-Pizarro, M., Buena-Canigral, F. J., Alvarez, F. J., Ontalba-Riperez, J. A., & Aleixandre-Benavent, R. (2009). An analysis of abstracts presented to the College on Problems of Drug Dependence meeting and subsequent publication in peer-reviewed journals. Substance Abuse Treatment, Prevention, and Policy, 4, 19. Retrieved from http://

van Mastrigt, R., & Downie, J. W. (1994). Statistical evaluation of the function of the 1992 International Continence Society Scientific Committee. Neurourology and Urodynamics, 13, 323-331.

Vilstrup, H., & Sorensen, H.T. (1998). A comparative study of scientific evaluation of abstracts submitted to the 1995 European Association for the Study of the Liver Copenhagen meeting. Danish Medical Bulletin, 45, 317-319.

Williams, J. B., Tripodi, T., Rubin, A., Hooyman, N., Allen-Meares, P., Padgett, D. K., & Fortune, A. E. (2008). A historical account of the Society for Social Work and Research: Presidential perspectives on advances in research infrastructure. Social Work Research, 32, 208-219.

Matthew O. Howard, PhD, is Frank A. Daniels Distinguished Professor, School of Social Work, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599; e-mail: Grateful acknowledgment to the SSWR Board for its informative reaction to an earlier draft of this editorial and its openness to discussing the ideas proposed herein.