Program Evaluation and Archaeology: Considerations and Future Projections

A few days ago, Ph.D. candidate Kate Ellinger (SUNY-Binghamton) put forth some thoughts on the need for evaluation procedures for our public archaeology programs. You can find her post here on the blog for the SUNY-Binghamton MAPA program, and I also recommend this earlier post on public archaeology evaluation needs. They’re great and you should check them out ASAP.

Building on that post, I’d like to offer some thoughts on the field of Program Evaluation and the impact it may have on the future of Southeastern Archaeology and anthropological practice. Although we all throw around the term ‘evaluate’ as a synonym for analyze, consider, investigate, etc., the concept of evaluation has a very specific meaning in the social sciences. Program evaluation is a distinct discipline in its own right, with professional societies, academic and doctoral programs, thousands of practitioners, and prescribed methodologies and techniques.

In 2013 and 2014 I worked for an evaluation team at the University of Kentucky through the Human Development Institute (HDI), a research and advocacy center for vulnerable populations. Through this position I learned about the field of Program Evaluation outside of its connections and implications for anthropology and archaeology.

Since that experience, I have spent time considering the potential connections between my professional worlds in both Program Evaluation and Southeastern Archaeology. Cultural anthropologists have already begun to engage with this field in both applied and theoretical contexts (Copeland-Carson 2005; Crain and Tashima 2005), but formal evaluations have not had much impact on archaeology thus far.

I believe, however, that this pattern is going to change within the next decade or so and we will see formal evaluations become a routine part of some (perhaps all…?) archaeological research.

My evidence for this prediction:

  • Federal archaeological funding is becoming scarcer. We have to justify our work by the broader impacts it will have. How do we demonstrate that impact? Evaluation.
  • There have been very public and political attacks on the relevance and significance of archaeological research. How do we sell the significance of our work to political and public audiences? Evaluation.
  • All federal (and most state) education and healthcare grants require extensive external program evaluations of research and practice. How do they justify the ways that they spend their public monies? Evaluation.
  • The ethics of archaeological practice have been questioned and critiqued by many groups, especially indigenous communities. How do we demonstrate effective engagement with all stakeholders? Evaluation.
  • Harassment and discrimination have recently been very publicly exposed in archaeology and other field sciences. How do we expose those private practices and demonstrate that new procedures can prevent them? Evaluation.

It is my experience that many archaeologists are relatively unaware of the formal discipline of Program Evaluation. I’d like to shed a little light on the scope and significance of evaluation and offer some thoughts on where formal evaluations may become an integral part of our future research agendas.

What Is Program Evaluation?

The simplest way to describe evaluation is as the research of research. Program evaluators investigate how effective and appropriate studies and practices are at effecting change.

Program Evaluation is a relatively new yet fast-growing field of social analysis. Although many professionals conduct internal evaluations as part of their overall projects or assignments, there are many full-time, professional evaluators who spend their careers conducting external evaluations for clients. The primary professional society for evaluation is the American Evaluation Association (AEA), and it has thousands of active members.

Billions of dollars are spent every year to evaluate the impact of research and practice. The education and medical fields are the most invested in the industry, but evaluations are conducted in almost every field out there. For example, there are many topical interest groups within the AEA, and they intersect with some of the themes we work with every day in anthropology and archaeology. A selection that would interest SEAC Underground readers includes Advocacy and Policy Change, Assessment in Higher Education, Environmental Program Evaluation, Indigenous Peoples in Evaluation, and International and Cross-Cultural Evaluations.

What do Program Evaluators Do?

In practice, Program Evaluation looks a lot like sociological or anthropological investigation. The work generally combines quantitative and qualitative approaches to see how effective a study or practice has been. To the annoyance of many anthropologists, however, there has often been an over-reliance on the quantitative results, with less emphasis on the qualitative (cue the impassioned rants from ethnographers).

The evaluator toolbox includes things like surveys, interviews, peer focus groups, social network maps, audience response systems, interactive storyboards, etc. In fact, I learned how to work with survey programs like Qualtrics as an evaluator, skills we later used for the SEAC Sexual Harassment Survey (Meyers et al. 2015).
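
To give a flavor of what that toolbox work looks like in practice, here is a minimal Python sketch that tallies Likert-scale answers from a survey export. The file name and question column are hypothetical stand-ins, not the actual SEAC survey data:

```python
import csv
from collections import Counter

def tally_likert(path, column):
    """Count how often each Likert option was chosen for one question."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            answer = row.get(column, "").strip()
            if answer:  # skip blank responses
                counts[answer] += 1
    return counts

if __name__ == "__main__":
    # "responses.csv" and "Q1_effectiveness" are placeholder names for
    # the kind of CSV a survey tool like Qualtrics can export.
    counts = tally_likert("responses.csv", "Q1_effectiveness")
    total = sum(counts.values())
    for option, n in counts.most_common():
        print(f"{option}: {n} ({n / total:.0%})")
```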

Evaluators are also really interested in how their data are presented and delivered. They love infographics and they hate pie charts. Seriously… they produce countless papers and posts on this topic. After my time working in this field, I think many practicing archaeologists have a lot to learn about data presentation.
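
To illustrate the pie chart allergy with a concrete (made-up) example: the usual advice is to swap the pie for a sorted bar chart, since readers compare lengths along a common axis far more accurately than wedge angles. A quick matplotlib sketch:

```python
import matplotlib.pyplot as plt

# Made-up evaluation results, used only to illustrate the chart choice.
labels = ["Very effective", "Effective", "Neutral", "Ineffective"]
counts = [42, 31, 18, 9]

# A horizontal bar chart, sorted largest to smallest, in place of a pie.
fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(labels[::-1], counts[::-1])  # reverse so the largest bar sits on top
ax.set_xlabel("Number of respondents")
ax.set_title("Rated program effectiveness (hypothetical data)")
fig.tight_layout()
plt.show()
```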

At their core, program evaluations are intended to assess effectiveness. Some evaluations lead to policy changes, but not all.

Program evaluators work in a variety of sectors. Some are academics or work in a university setting. Others are employed at research or evaluation firms. A large segment of evaluators are independent contractors who bid for projects or contract their skills to other organizations. Content experts are often hired to conduct small evaluations of other studies even when they are not primarily evaluators.

Implications for Archaeological Funding and Research

Public archaeology is clearly the most visible place where Program Evaluation will intersect with archaeological practice. The Florida Public Archaeology Network (FPAN), for example, recently advertised for a post-doctoral position in program assessment (a.k.a. evaluation). This posting was advertised on the AEA website. Awards and grants for public archaeology – including the SEAC Public Outreach Grant – want to see evaluation and assessment built into the research design.

But this need will likely move beyond the public archaeology subfield, and we should be prepared for new requirements if they materialize.

Federal agencies (like the NSF) are requiring formal evaluations of results for grants in more fields every year, and archaeology will undoubtedly need to offer such evaluations in increasing numbers. Many agencies specifically ask for external evaluations, so researchers have to seek out both content experts AND program evaluation experts to assess the impact and effectiveness of the research.

Evaluations are clearly beneficial in our work for assessing impact. They are a great thing. They may also – in the near future – become a required thing.

WORKS CITED

Copeland-Carson, Jacqueline (2005). “Theory-Building” Evaluation Anthropology. Annals of Anthropological Practice 24(1):7-16.

Crain, Cathleen E., and Nathaniel Tashima (2005). Anthropology and Evaluation: Lessons from the Field. Annals of Anthropological Practice 24(1):41-48.

Meyers, Maureen, Tony Boudreaux, Stephen Carmody, Victoria Dekle, Elizabeth Horton, and Alice Wright (2015). Preliminary Results of the SEAC Sexual Harassment Survey. Horizon and Tradition: The Newsletter of the Southeastern Archaeological Conference 57(1):19-35.

 

Victoria Dekle is a co-founder of SEAC Underground and a Ph.D. candidate at the University of Kentucky. She can be reached at vdekle@gmail.com.


Taking Care of Data

A couple of things have had me thinking recently about data management in archaeology.

You might have seen The Atlantic’s recent article on the digital collection, curation, and analysis of archaeological data. The article emphasizes the massive size of the datasets now being collected, particularly with digital methods, and it highlights a few points that will be familiar to archaeologists: we work and think at various scales, many of us are invested in new technological approaches to data, and often whatever documentation we can produce and preserve is all that will remain when the original record is destroyed by the process of our research (or by war, terrorism, or climate change). The article cites projects with data points that are apparently in the billions because of digital techniques—but of course our datasets can become unwieldy even with traditional methods once you take into account decades of research at a site or investigate questions across broad geographic areas. This article speaks to both the research potential of massive datasets and the logistical challenges they can pose at all levels.

I thought of this article during a meeting of a class in “Responsible Conduct of Research and Scholarship.” The class is a new department requirement related to federal research funding, and so the inclusion of data management is no surprise if you consider the increasing attention paid to this component of NSF grant proposals. In the first session we touched on ways to plan for data management early on in a research project, whether that means selecting stable file formats or anonymizing informant information (for those anthropologists who work with the living). This can be especially challenging as a graduate student; many of us are planning the first project that we will execute independently and carry from its earliest stages through to the end. What steps do we need to take to anticipate the management of data that we have yet to collect, and which will likely end up taking a different form than we expect when we first formulate the project?
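
To make one of those early steps concrete: a common approach to anonymizing informant information is to replace names with stable pseudonymous codes before the data circulate any further. A minimal Python sketch; the file and column names are hypothetical, and the salted hash is just one option:

```python
import csv
import hashlib

SALT = "project-specific-secret"  # store separately; keep out of shared files

def pseudonym(name: str) -> str:
    """Derive a stable, non-reversible code from an informant's name."""
    digest = hashlib.sha256((SALT + name).encode("utf-8")).hexdigest()
    return "P" + digest[:8]

# Rewrite a hypothetical interview log with the real names dropped.
with open("interviews.csv", newline="") as src, \
        open("interviews_anon.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    fields = [f for f in reader.fieldnames if f != "informant_name"]
    writer = csv.DictWriter(dst, fieldnames=["informant_id"] + fields)
    writer.writeheader()
    for row in reader:
        row["informant_id"] = pseudonym(row.pop("informant_name"))
        writer.writerow(row)
```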


Very thoroughly supervised excavation at Weeden Island, FL, Dec 2015

So the third thing that brings me to this topic is my own research. Having finished my fieldwork in December, I am now committing most of my time to lab-based sorting and analysis, along with organizing field notes, photographs, and databases—and at the same time writing proposals, revising my 3-year life plan every other week, and otherwise trying to stay in touch with the big picture. Trying to balance these drastically different conceptual and practical scales really makes it clear how much effort can go into managing all the details of a project and the data it generates. Doing that well is critical for transitioning smoothly to analyzing and synthesizing those results, and then to making them available in a form that could be useful to others.

If I were starting all over tomorrow, I can think of (at least) a few things I would do differently with regard to record keeping and planning for database management. I think some big challenges for graduate students directing research are accurately estimating the scale and volume of data that will result, and developing systems of organization that will continue to make sense if sampling strategies evolve over different phases of the project (see the sketch below). Most of my work in this area has depended not on formal training but on observing the practices of other projects, remembering things that were difficult when I’ve worked with other datasets, and spending hours fiddling around with my tables in Access.
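
To show the kind of organization I mean: one approach is to give every excavated context an immutable provenience ID and record the sampling phase explicitly, so a mid-project change in strategy only ever adds rows rather than forcing renumbering. The sketch below uses Python's built-in sqlite3 purely as a stand-in for Access, and the table and field names are my own invention:

```python
import sqlite3

conn = sqlite3.connect("project.db")
conn.executescript("""
-- Every excavated context gets one immutable ID; downstream records
-- reference it rather than grid coordinates or bag numbers directly.
CREATE TABLE IF NOT EXISTS provenience (
    prov_id  TEXT PRIMARY KEY,   -- e.g. 'UNIT03-LVL02'
    unit     TEXT NOT NULL,
    level    INTEGER,
    phase    TEXT NOT NULL,      -- sampling phase: 'survey', 'testing', ...
    notes    TEXT
);

-- Artifacts hang off provenience, so a new sampling strategy adds
-- rows (with a new phase value) without touching earlier records.
CREATE TABLE IF NOT EXISTS artifact (
    artifact_id INTEGER PRIMARY KEY,
    prov_id     TEXT NOT NULL REFERENCES provenience(prov_id),
    material    TEXT,
    quantity    INTEGER DEFAULT 1
);
""")
conn.commit()
conn.close()
```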


Boxes of excavated material on their way to becoming data

I have been thinking a lot about revision in writing lately, and perhaps there are some relevant comparisons and contrasts between writing and building databases. A first draft of a written work very often needs to be “re-envisioned” to be improved, perhaps through reworking its structure and reconsidering what information it is meant to convey. Many writers benefit from the feedback of readers as they move through revisions of written work; is this true for “data work” too? I know that each time I have had reason to share some portion of my preliminary dissertation data, it has forced me to refine the organization a bit, to check that my coding and conventions are accessible to another person, and to otherwise revise my database. But the structure of a database can be difficult or impossible to change once a project is really underway, in part because strategies for data collection are usually conceived along with plans for data management.
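
One lightweight way to build that feedback into routine “data work” is to validate an extract against a documented controlled vocabulary before sharing it. A minimal sketch, with a hypothetical codebook and file name:

```python
import csv

# A hypothetical controlled vocabulary; in practice this would live in a
# documented codebook shared with collaborators.
VALID_MATERIALS = {"ceramic", "lithic", "shell", "bone", "historic"}

def check_codes(path, column="material"):
    """List any rows whose codes fall outside the controlled vocabulary."""
    problems = []
    with open(path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f), start=2):  # row 1 = header
            code = row.get(column, "").strip().lower()
            if code and code not in VALID_MATERIALS:
                problems.append(f"row {i}: unrecognized {column} '{code}'")
    return problems

if __name__ == "__main__":
    for msg in check_codes("artifact_export.csv"):
        print(msg)
```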


Data collection teamwork at Weeden Island, FL, Dec 2015

As for making my data accessible after I finish my current work, I expect to include many appendices in my dissertation, but also to archive materials digitally with The Digital Archaeological Record (tDAR). I initially looked into tDAR’s terms and requirements simply to fulfill an obligation—but doing so prompted me to think about how archived data really get used. I haven’t personally undertaken any serious work with the archaeological data that have recently been archived digitally and made accessible (e.g. the Digital Index of North American Archaeology), although I have made use of other relevant types of data available online, like NOAA’s coastal LiDAR. Finding ways to seek out and incorporate more of these available data resources is a future goal of mine.
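
Whatever the repository, one habit that should ease eventual deposit is writing a small machine-readable metadata file alongside each dataset as it is packaged. The sketch below shows the general idea only; it is not tDAR's actual submission format, and the field names are placeholders:

```python
import json
from datetime import date
from pathlib import Path

def write_sidecar(data_file, description, creator):
    """Save minimal reuse metadata next to a data file as JSON."""
    meta = {
        "file": data_file,
        "description": description,
        "creator": creator,
        "date_packaged": date.today().isoformat(),
        "format": Path(data_file).suffix.lstrip("."),
        # Placeholders for details a repository would ask for at deposit:
        "spatial_coverage": "TODO",
        "temporal_coverage": "TODO",
    }
    sidecar = Path(data_file).with_suffix(".meta.json")
    sidecar.write_text(json.dumps(meta, indent=2))

# Hypothetical usage:
write_sidecar("midden_artifact_counts.csv",
              "Artifact counts by provenience (example data)",
              "Dissertation project")
```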

Archaeologists are always thinking about long time scales, the durability of materials, and the transmission of knowledge. Even so, there can be some disconnect when it comes to maintaining our own records in a way that will be readily accessible and understandable for future researchers. Graduate students out there, is this something you’re being trained in before delving into your research? What experiences do you have working with more novel forms of data collection, management, or archiving? Looking beyond the data you collect yourself, what ways have you found to work with the data that’s already available in digital archives?
