How does PERG conduct an evaluation?

Each evaluation begins with the formulation of an evaluation design. The Director and a Senior Research Associate initiate this stage, working with program leaders to clarify the program's goals and to define what specific evidence of success would look like for that particular program. This information is used to develop an Evaluation Matrix that is unique to the program being evaluated.

Depending on the project's size, one or more PERG evaluators carry out the data collection activities identified in the Evaluation Matrix. Evaluators then analyze the various types of data collected and distill this information into an evaluation report designed to help program leaders with their implementation and to address program difficulties identified during the evaluation. Evaluators provide frequent, informal, formative feedback in meetings with program staff, along with periodic formative evaluation reports timed to the program staff's schedule and other needs. At the conclusion of a program, evaluators provide summative evaluation reports. Findings can also be delivered in presentations to project staff and stakeholders.

What is the PERG Evaluation Matrix?

George Hein and Brenda Engel designed the original PERG matrix for PERG's first evaluation. Its purpose was to match what is to be evaluated with appropriate information-gathering means. Program goals, activities, and outcomes are considered in relation to the evaluation activities and data necessary to perform the evaluation.

Beyond its immediate usefulness for that project, the matrix turned out to be a highly adaptable tool that could be applied to future programmatic planning tasks. A special benefit of the PERG matrix is that it promotes collaboration between evaluators and program staff. Together, evaluators and program staff periodically review the implementation goals, activities, and outcomes and the evaluation activities and data that document and explain the program.

Where does PERG work?

PERG works in formal and informal educational settings. Formal settings include K-12 schools and higher education institutions; informal settings include museums, adult education programs, community centers, and other places where people gather for leisure and learning. For example, PERG has worked through the years in school districts, individual schools and universities, and in a range of local and national museums and community centers across the country. In all of these settings, PERG evaluators treat the program as an integral component of its environment, collecting data broadly in order to understand the program in context.

PERG views all educational settings as communities of people created through involvement in a program. In a school-based program, evaluators may interview parents by telephone, meet with teachers, interview principals or board members, and observe students at work. In a museum, evaluators may interview docents and other museum staff, observe visitors at an exhibit, and ask visitors to reflect on their experience.

What is PERG's concept of the purpose of evaluation?

PERG evaluations are designed to be useful to the project stakeholders. PERG evaluators collaborate with the project staff and community to generate questions to guide the evaluation and identify the evidence needed to address these questions. This process may result in both qualitative and quantitative analysis.

Clients often call PERG evaluators "Critical Friends." Through formative evaluation meetings, evaluators help participants and staff reflect on information they have gathered about what is happening in the project--what is working and what is not--for the purpose of addressing project needs. Evaluators provide periodic reports that are clear, detailed, and reflective of the project and designed to help project leaders make decisions that will enable them to satisfy the project's intentions and identify its effectiveness.

What is unique about PERG's evaluation approach?

Brenda Engel, one of PERG's founders, compared PERG's evaluation approach to the experience of listening to music: In judging the quality of a piece of music, the critic can't reasonably begin with an analysis of notes, themes, time, key, and so on... Rather, she must undergo total physical immersion in the music itself: listening, feeling, responding to, being in it. It is only then that she is able to apprehend meaning and can begin to make distinctions and analyze the components with a better sense of how they relate to the whole.

For evaluators, a broad understanding of the context in which the program exists provides the framework within which meaningful assessment of the implementation of the program can take place. An initial immersion in the whole provides the necessary context for the eventual understanding of the parts. Over the course of the evaluation, this broad understanding is constantly tested and revised as evaluators look at data collected in the field.

Knowing and being able to describe the context in which the project is occurring is the consequence of having extensive direct contact with the project and its environment. Such understanding results from observing a range of events and activities, collecting data from each group of participants, and studying documents relating to the overall project implementation. Evaluators examine all the collected data and analyze it in relation to the project's intentions, discussing the interplay between the environment and the project as a component of their evaluation reporting.

What kinds of programs does PERG have the capacity to evaluate?

PERG has conducted evaluations for a very broad range of programs in professional development, science and mathematics systemic reform, curriculum development, curriculum implementation and dissemination, arts partnerships, museum exhibit development, museum visitor studies, and arts infusion. The range of PERG's recent evaluation projects gives an excellent view of the organization's capacity to meet the needs of both large and small programs in many different content areas, such as the arts, mathematics, and science.

Formal Education:
Some projects PERG evaluates are large-scale, multi-year systemic reform or curriculum development and implementation programs:

  • Several NSF-funded, local systemic science/math reform initiatives at the state and local levels;
  • A national middle school mathematics project intended to increase achievement in mathematics among rural and urban youth;
  • An initiative to increase public awareness and interest in space science and create partnerships with schools and communities through a mission-based approach;
  • A mathematics, science, and technology statewide systemic initiative in Massachusetts;
  • A K-12 science curriculum implementation and dissemination program for Massachusetts districts statewide;
  • A K-8 science education reform initiative providing leadership and assistance to regional sites serving districts nationwide;
  • A national, standards-based, middle school science curriculum development project; and
  • NSF-funded reform projects committed to creating regional centers for science and mathematics reform throughout the country.

Informal Education:
In addition, PERG conducts visitor studies and evaluates projects and programs in a diverse range of informal settings:

  • Several interactive museum exhibits focusing on visitor learning;
  • A teacher-centered professional development program focusing on developing democratic classrooms;
  • A school-based reform effort in eight districts using New Standards Assessments to improve teaching and learning at the school level;
  • Several partnership projects between art institutes or museums and public schools;
  • A bilingual elementary science professional development program;
  • An informal educational video dissemination project looking at issues surrounding implementation; and
  • A traveling museum exhibit that invites visitors to explore ideas and information about the universe.

How do PERG's evaluation activities benefit clients?

The PERG evaluation staff facilitates reflection and understanding among program constituents by bringing their extensive program and evaluation experience, knowledge, and skills to clarify ongoing issues concerning program implementation. Through frequent formal and informal conversations, presentations, and reports, evaluators convey their emergent understanding to program staff. This information, in turn, helps decision-makers address their implementation issues and challenges. Additionally, PERG's client organizations find the PERG evaluation reports useful in communicating information about the program and its effectiveness to a broader audience.

How did PERG begin?

The work of PERG began in 1976, when Lesley University faculty members George Hein and Brenda Engel created an evaluation proposal for the Massachusetts Cultural Education Collaborative (CEC). George Hein had been teaching a Lesley University course on assessing children's learning and evaluating programs using "informal" methods. Brenda Engel was experimenting with a new type of institutional evaluation in her consulting work.

The proposed evaluation involved examining the educational work of more than 30 arts organizations and museum programs coordinated by the CEC, a group of state-sponsored cultural institutions with a mission to assist racial integration of Boston schools. One of George Hein's students--who also happened to direct the Neighborhood Arts Center, a CEC member--had urged him to take on the project and put his teaching in informal evaluation methods to the test.

The evaluation of those thirty-plus programs over the next two years was the beginning of a steady stream of program evaluation and research that has made PERG a national leader in qualitative evaluation.

How has PERG evolved over time?

During its first decade, PERG was sustained with multiple small projects and a modest staff, with a primary focus on performing qualitative evaluation work. In the subsequent years, PERG expanded significantly, taking on numerous projects, some large in scale, and formalizing its organizational structure without compromising the practices that have made its work so valuable to clients. PERG also developed capacity in quantitative methods under the leadership of Frank Davis, Former Senior Research Associate and former Director of the Doctoral Program in Educational Studies at Lesley University. The domain of PERG's work now extends across the United States: current projects are located in more than thirteen states, and the group has a presence at national and international meetings and conferences. While PERG is continually evolving in response to the needs of its clients and the educational community, the organization retains at its core the ideas and practices that went into its making: sustained relationships, collaborative design, and reflective evaluation work in the service of educational program improvement.

How is PERG organized?

Debra Smith, who was associated with the organization at its inception, is now PERG's Director. PERG is composed of Senior Research Associates, Research Associates, Research Assistants, and administrative staff. Several graduate students have also worked for PERG as Research Assistants and Fellows. Please refer to the PERG staff page for more information about individual staff members.
