Program Evaluation and Archaeology: Considerations and Future Projections

A few days ago, Ph.D. candidate Kate Ellinger (SUNY-Binghamton) put forth some thoughts on the need for evaluation procedures of our public archaeology programs. You can find her post here on the blog for the SUNY-Binghamton MAPA program and I also recommend this earlier post on public archaeology evaluation needs. They’re great and you should check them out ASAP.

Building on that post, I’d like to offer some thoughts on the field of Program Evaluation and the impact it may have on the future of Southeastern Archaeology and anthropological practice. Although we all throw around the term ‘evaluate’ as a synonym for analyze, consider, investigate, etc., the concept of evaluation has a very specific meaning in the social sciences. Program evaluation is a distinct discipline in its own right, with professional societies, academic and doctoral programs, thousands of practitioners, and prescribed methodologies and techniques.

In 2013 and 2014 I worked for an evaluation team at the University of Kentucky through the Human Development Institute (HDI), a research and advocacy center for vulnerable populations. Through this position I learned about the field of Program Evaluation outside of its connections and implications for anthropology and archaeology.

Since that experience, I have spent time considering the potential connections between my professional worlds in both Program Evaluation and Southeastern Archaeology. Cultural anthropologists have already begun to consider this field in both applied and theoretical applications (Copeland-Carson 2005; Crain and Tashima 2005), but formal evaluations have not had much impact on archaeology thus far.

I believe, however, that this pattern is going to change within the next decade or so and we will see formal evaluations become a routine part of some (perhaps all…?) archaeological research.

My evidence for this prediction:

  • Federal archaeological funding is becoming scarcer. We have to justify our work by the broader impacts it will have. How do we demonstrate that impact? Evaluation.
  • There have been very public and political attacks on the relevance and significance of archaeological research. How do we sell the significance of our work to political and public audiences? Evaluation.
  • All federal (and most state) education and healthcare grants require extensive external program evaluations of research and practice. How do they justify the ways that they spend their public monies? Evaluation.
  • The ethics of archaeological practice have been questioned and critiqued by many groups, especially indigenous communities. How do we demonstrate effective engagement with all stakeholders? Evaluation.
  • Harassment and discrimination have recently been very publicly exposed in archaeology and other field sciences. How do we expose those private practices and demonstrate that new procedures can prevent these actions? Evaluation.

It is my experience that many archaeologists are relatively unaware of the formal discipline of Program Evaluation. I’d like to shed a little light on the scope and significance of evaluation and offer some thoughts on where formal evaluations may become an integral part of our future research agendas.

What Is Program Evaluation?

The simplest way to describe evaluation is the research of research. Program evaluators investigate how effective and appropriate studies and practices are at effecting change.

Program Evaluation is a relatively new yet fast-growing field of social analysis. Although many professionals conduct internal evaluations as part of their overall projects or assignments, there are many full-time, professional evaluators who spend their careers conducting external evaluations for clients. The primary professional society for evaluation is the American Evaluation Association (AEA), and it has thousands of active members.

Billions of dollars are spent every year to evaluate the impact of research and practice. The education and medical fields are the most invested in the industry, but evaluations are conducted in almost every field out there. For example, the AEA hosts many topical interest groups, and they intersect with some of the themes we work with every day in anthropology and archaeology. A selection that would be interesting to SEAC Underground readers includes Advocacy and Policy Change, Assessment in Higher Education, Environmental Program Evaluation, Indigenous Peoples in Evaluation, and International and Cross-Cultural Evaluations.

What do Program Evaluators Do?

In practice, Program Evaluation looks a lot like sociological or anthropological investigations. The work generally combines quantitative and qualitative approaches to see how effective a study or practice has been. To the annoyance of many anthropologists, however, there has often been an over-reliance on the quantitative results with less emphasis on the qualitative results (cue the impassioned rants from ethnographers).

The evaluator toolbox includes things like surveys, interviews, focus groups, social network maps, audience response systems, interactive storyboards, etc. In fact, I learned how to use survey programs like Qualtrics as an evaluator, which we later used for the SEAC Sexual Harassment Survey (Meyers et al. 2015).
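As a toy illustration of the quantitative side of that toolbox, here is a minimal sketch of tallying Likert-scale survey responses, the kind of summary an evaluator might produce from a survey platform export. The data, scale, and agreement threshold here are invented for illustration, not drawn from any actual SEAC instrument:

```python
from collections import Counter

# Hypothetical Likert-scale responses (1 = strongly disagree ... 5 = strongly agree)
responses = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4]

counts = Counter(responses)                # frequency of each rating
mean_score = sum(responses) / len(responses)
# Share of respondents who agreed or strongly agreed (rated 4 or 5)
agreement_rate = sum(1 for r in responses if r >= 4) / len(responses)

print(f"Mean score: {mean_score:.1f}")       # Mean score: 3.7
print(f"Agreement rate: {agreement_rate:.0%}")  # Agreement rate: 70%
```

In practice these numbers would be paired with coded interview or open-response data, since the quantitative summary alone misses the "why" behind each rating.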

Evaluators are also really interested in how their data are presented and delivered. They love infographics and they hate pie charts. Seriously… they put forth countless papers and posts on this topic. Actually, after my time working in this field, I think many practicing archaeologists have a lot to learn about data presentation.

At their core, program evaluations are intended to assess effectiveness. Some evaluations lead to policy changes, but not all.

Program evaluators work in a variety of sectors. Some are academics or work in a university setting. Others are employed at research or evaluation firms. A large segment of evaluators are independent contractors who bid on projects or contract their skills to other organizations. Content experts are often hired to conduct small evaluations of other studies even when they are not primarily evaluators.

Implications for Archaeological Funding and Research

Public archaeology is clearly the most visible place where Program Evaluation will intersect with archaeological practice. The Florida Public Archaeology Network (FPAN), for example, recently advertised for a post-doctoral position in program assessment (a.k.a. evaluation). This posting was advertised on the AEA website. Awards and grants for public archaeology – including the SEAC Public Outreach Grant – wish to see evaluation and assessment built into the research design.

But this need will likely move beyond the public archaeology subfield, and we should be prepared for new requirements if they materialize.

Federal agencies (like the NSF) are requiring formal evaluations of results for grants in more fields every year, and archaeology will undoubtedly need to offer such evaluations in increasing numbers. Many agencies specifically ask for external evaluations, thus researchers have to seek out content AND program evaluation experts to assess the impact and effectiveness of the research.

Evaluations are clearly beneficial in our work for assessing impact. They are a great thing. They may also – in the near future – become a required thing.


Copeland-Carson, Jacqueline (2005). “Theory-Building” Evaluation Anthropology. Annals of Anthropological Practice 24(1):7-16.

Crain, Cathleen E., and Nathaniel Tashima (2005). Anthropology and Evaluation: Lessons from the Field. Annals of Anthropological Practice 24(1):41-48.

Meyers, Maureen, Tony Boudreaux, Stephen Carmody, Victoria Dekle, Elizabeth Horton, and Alice Wright (2015). Preliminary Results of the SEAC Sexual Harassment Survey. Horizon and Tradition: The Newsletter of the Southeastern Archaeological Conference 57(1):19-35.


Victoria Dekle is a co-founder of SEAC Underground and a Ph.D. candidate at the University of Kentucky. She can be reached at

6 comments on “Program Evaluation and Archaeology: Considerations and Future Projections”

  1. dover1952 says:

    Two comments:

    1) “Ufimtsev became interested in describing the reflection of lasers while working in Moscow. He gained permission to do work on it after being advised that work was useless and would curtail his advancement. Because the work was considered of no military or economic value, Ufimtsev was allowed to publish his work internationally.” (Wikipedia quote)

    2) Boolean Algebra. Considered theoretically interesting in mathematics circles at one time—but of no possible practical use in the real world.

    Would things have turned out differently if they had been subjected to a systematic, rigorous evaluation process when the work was first conducted and completed? You tell me.

    I somehow doubt it.

  2. victoriagd says:

    Thanks for your comment! I definitely understand where you are coming from on this statement. I agree that science and other forms of inquiry cannot always be conducted with the only concern being the outcomes. Yet funding and striving for results with impact does structure the nature of research in our capitalist world, for better or worse. It’s inherent in the peer-review process, funding procedures, academic tenure, and other structuring principles. I’m not sure how to work around this issue without introducing an entirely new economic and political structure.

    Evaluation helps us understand how and where in the process of research or practice the work can be improved or changed to have better results or impact. To me, it’s a part of that spirit of inquiry. It provides reflection on what we are doing in our work, whatever it may be. It has value in the same ways as the examples you offer above.

    I also think that archaeology has the added complexity of being a destructive form of inquiry that impacts a variety of people. There are often ethical concerns behind the nature of our work. Therefore, it does merit ongoing investigation on how and where our work is influencing those around us.

    What do you think, Dover1952 or others?

  3. dover1952 says:

    I think evaluation of programs in archaeology is a great idea, Victoria—as long as it has a specified purpose—and that purpose ends in something useful or wise.

    At the Y-12 nuclear weapons plant here in Oak Ridge, Tennessee, the U.S. Department of Energy (DOE) has been designing a huge new facility for storing the excess uranium cores used in nuclear warheads. The Y-12 Plant is the national “Fort Knox” for all of our fissile uranium bomb materials. Well, they did several years of work on the project and spent a fortune on the design of a huge and very complex storage building. When all of that money and effort had been expended near the end of the originally scheduled design phase, some really smart person (like you or me) just happened to take a close look at some engineering drawings and noticed that the expensive building was designed too small for all of the equipment that was going to be housed in it. The building redesign cost was estimated at $500,000,000. You can read all about it here:

    Normally, on most other projects, DOE has evaluation processes in place that are designed to catch this sort of thing early rather than late, but someone was obviously not minding the store on this project.

    The kind of rigorous evaluation programs you have in mind do have the potential to prevent this sort of thing before it gets out of control. However, I think the programs need to be structured on the front end to achieve stated objectives and endpoint goals on the back end. Therefore, the thing you would need to do first is to identify and define all of the various purposes, objectives, and goals you would want assorted evaluation programs to achieve and then structure them carefully to do just that. Perhaps I am just stating the obvious here, but even the obvious did not occur to DOE and its contractors at Y-12.

    I am glad you are pursuing this issue in American archaeology. Do you think evaluation programs could be developed to address and improve the quality of work performed by CRM contractors?

  4. dover1952 says:

    P.S. I also meant to say “uranium processing facility” rather than storage facility.

  5. […] Following last week’s post, you may sense that I am on the fence about “evaluation.” I am passionate about public archaeology and heritage work, and I believe we should strive to gather more information about how our actions affect other stakeholders. (Side note: you can learn more about how Program Evaluation professionals do this in a reply to my post by Victoria Dekle.) […]
