Program Evaluation and Archaeology: Considerations and Future Projections

A few days ago, Ph.D. candidate Kate Ellinger (SUNY-Binghamton) put forth some thoughts on the need for evaluation procedures for our public archaeology programs. You can find her post here on the blog for the SUNY-Binghamton MAPA program, and I also recommend this earlier post on public archaeology evaluation needs. They’re great and you should check them out ASAP.

Building on that post, I’d like to offer some thoughts on the field of Program Evaluation and the impact it may have on the future of Southeastern Archaeology and anthropological practice. Although we all throw around the term ‘evaluate’ as a synonym for analyze, consider, investigate, etc., the concept of evaluation has a very specific meaning in the social sciences. Program evaluation is a distinct discipline in its own right, with professional societies, academic and doctoral programs, thousands of practitioners, and prescribed methodologies and techniques.

In 2013 and 2014 I worked for an evaluation team at the University of Kentucky through the Human Development Institute (HDI), a research and advocacy center for vulnerable populations. Through this position I learned about the field of Program Evaluation outside of its connections and implications for anthropology and archaeology.

Since that experience, I have spent time considering the potential connections between my professional worlds in both Program Evaluation and Southeastern Archaeology. Cultural anthropologists have already begun to consider this field in both applied and theoretical applications (Copeland-Carson 2005; Crain and Tashima 2005), but formal evaluations have not had much impact on archaeology thus far.

I believe, however, that this pattern is going to change within the next decade or so, and we will see formal evaluations become a routine part of some (perhaps all…?) archaeological research.

My evidence for this prediction:

  • Federal archaeological funding is becoming scarcer. We have to justify our work by the broader impacts it will have. How do we demonstrate that impact? Evaluation.
  • There have been very public and political attacks on the relevance and significance of archaeological research. How do we sell the significance of our work to political and public audiences? Evaluation.
  • All federal (and most state) education and healthcare grants require extensive external program evaluations of research and practice. How do they justify the ways that they spend their public monies? Evaluation.
  • The ethics of archaeological practice have been questioned and critiqued by many groups, especially indigenous communities. How do we demonstrate effective engagement with all stakeholders? Evaluation.
  • Harassment and discrimination have recently been very publicly exposed in archaeology and other field sciences. How do we expose those private practices and demonstrate that new procedures can prevent these actions? Evaluation.

It is my experience that many archaeologists are relatively unaware of the formal discipline of Program Evaluation. I’d like to shed a little light on the scope and significance of evaluation and offer some thoughts on where formal evaluations may become an integral part of our future research agendas.

What Is Program Evaluation?

The simplest way to describe evaluation is as the research of research. Program evaluators investigate how effective and appropriate studies and practices are at effecting change.

Program Evaluation is a relatively new, yet fast-growing field of social analysis. Although many professionals conduct internal evaluations as part of their overall projects or assignments, there are many full-time, professional evaluators who spend their careers conducting external evaluations for clients. The primary professional society for evaluation is the American Evaluation Association (AEA), and it has thousands of active members.

Billions of dollars are spent every year to evaluate the impact of research and practice. The education and medical fields are the most invested in the industry, but evaluations are conducted in almost every field out there. For example, there are many topical interest groups within the AEA, and they intersect with some of the themes we work with every day in anthropology and archaeology. A selection that would be interesting to SEAC Underground readers includes Advocacy and Policy Change, Assessment in Higher Education, Environmental Program Evaluation, Indigenous Peoples in Evaluation, and International and Cross-Cultural Evaluations.

What do Program Evaluators Do?

In practice, Program Evaluation looks a lot like sociological or anthropological investigation. The work generally combines quantitative and qualitative approaches to see how effective a study or practice has been. To the annoyance of many anthropologists, however, there has often been an over-reliance on quantitative results, with less emphasis on the qualitative (cue the impassioned rants from ethnographers).

The evaluator toolbox includes things like surveys, interviews, peer focus groups, social network maps, audience response systems, interactive storyboards, etc. In fact, as an evaluator I learned how to use survey platforms like Qualtrics, which we later used for the SEAC Sexual Harassment Survey (Meyers et al. 2015).
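For readers curious about what the quantitative side of that toolbox looks like, here is a minimal sketch of the kind of first-pass summary an evaluator might run on exported survey responses. The file name and column names are hypothetical (they are not from the actual SEAC survey), and the sketch assumes Likert-style answers coded 1 to 5.

import pandas as pd

# Hypothetical CSV export from a survey platform such as Qualtrics:
# one row per respondent, Likert items coded 1-5.
responses = pd.read_csv("survey_export.csv")

# Assumed column names, invented for illustration.
likert_items = ["q1_program_useful", "q2_would_recommend"]

# Basic descriptive summary of the closed-ended items.
print(responses[likert_items].agg(["count", "mean", "median"]))

# Cross-tabulate an outcome item against a respondent characteristic,
# a common first step before any qualitative follow-up.
print(pd.crosstab(responses["career_stage"],
                  responses["q2_would_recommend"],
                  normalize="index"))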

Evaluators are also really interested in how their data are presented and delivered. They love infographics and they hate pie charts. Seriously… they put forth countless papers and posts on this topic. After my time working in this field, I think many practicing archaeologists have a lot to learn about data presentation.
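To make the pie-chart complaint concrete, here is a tiny example with invented categories and numbers that shows the usual alternative evaluators recommend: a sorted bar chart, which is much easier to compare by eye than pie wedges.

import matplotlib.pyplot as plt

# Hypothetical outreach-attendance shares, invented for illustration.
categories = ["Site tours", "School visits", "Lab open house", "Lecture series"]
shares = [0.41, 0.27, 0.19, 0.13]

# Sort ascending so the longest bar sits at the top of the chart.
values, labels = zip(*sorted(zip(shares, categories)))

fig, ax = plt.subplots()
ax.barh(labels, values)
ax.set_xlabel("Share of total attendance")
ax.set_title("Outreach attendance by program type (hypothetical data)")
plt.tight_layout()
plt.show()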

At their core, program evaluations are intended to assess effectiveness. Some evaluations lead to policy changes, but not all.

Program evaluators work in a variety of sectors. Some are academics or work in a university setting. Others are employed at research or evaluation firms. A large segment of evaluators are independent contractors who bid for projects or contract their skills to other organizations. Content experts are often hired to conduct small evaluations of other studies even when they are not primarily evaluators.

Implications for Archaeological Funding and Research

Public archaeology is clearly the most visible place where Program Evaluation will intersect with archaeological practice. The Florida Public Archaeology Network (FPAN), for example, recently advertised a post-doctoral position in program assessment (a.k.a. evaluation), and the posting appeared on the AEA website. Awards and grants for public archaeology – including the SEAC Public Outreach Grant – want to see evaluation and assessment built into the research design.

But this need will likely move beyond the public archaeology subfield, and we should be prepared for new requirements when they materialize.

Federal agencies (like the NSF) are requiring formal evaluations of results for grants in more fields every year, and archaeology will undoubtedly need to offer such evaluations in increasing numbers. Many agencies specifically ask for external evaluations, so researchers have to seek out content AND program evaluation experts to assess the impact and effectiveness of the research.

Evaluations are clearly beneficial in our work for assessing impact. They are a great thing. They may also – in the near future – become a required thing.

WORKS CITED

Copeland-Carson, Jacqueline (2005). “Theory-Building” Evaluation Anthropology. Annals of Anthropological Practice 24(1):7-16.

Crain, Cathleen E., and Nathaniel Tashima (2005). Anthropology and Evaluation: Lessons from the Field. Annals of Anthropological Practice 24(1):41-48.

Meyers, Maureen, Tony Boudreaux, Stephen Carmody, Victoria Dekle, Elizabeth Horton, and Alice Wright (2015). Preliminary Results of the SEAC Sexual Harassment Survey. Horizon and Tradition: The Newsletter of the Southeastern Archaeological Conference 57(1):19-35.

 

Victoria Dekle is a co-founder of SEAC Underground and a Ph.D. candidate at the University of Kentucky. She can be reached at vdekle@gmail.com.


The generation ‘wars’ are everywhere right now

How many out there have read the new issue of the digital SEAC newsletter? The generation gaps discussed in the interview with the New South owners are very interesting, and they’ve led me to wonder what others think about this distinction. Like any good archaeologist, I’m fascinated by the temporal dimension of social action!

What do you think?

*  Is the ‘millennial’ archaeological experience in the Southeast drastically different from the ‘baby boomer’ experience?

*  If so, is this a good or bad thing?

*  What does such a gap mean about the projects we pursue in school?  Or, what about our decisions on whether or not to pursue a secondary degree?

*  Are we putting too much stock in these generational distinctions?

*  (… and for good measure…)  Are the gender discrepancies experienced by earlier generations still an issue for the upcoming millennial set?

Gen X – you want to weigh in on this?

Kudos to Meg and the SEAC Underground Project

I’m writing this small post during the lunch break on the second full day of the 2012 SEAC conference in Baton Rouge. The conference leaves little time for blogging, but I want to make sure that Meg is commended for her professional and poignant discussion comments in yesterday’s Plenary Session. Thanks for promoting the SEAC Underground project so eloquently yesterday, Meg. Great job!