
Glossary of Terms

The terms are listed in alphabetical order.


Key to using the Glossary:
The phrase “In OBPE” introduces a definition that is geared towards the exact technical use of the term within the Shaping Outcomes OBPE methodology. The phrase “Sometimes in other fields” introduces an explanation of how the term might be seen in other contexts. 

Activity
In OBPE, an “activity” is anything that program staff do which is not directly addressed to program participants. Activities usually have to do with the management of the program. If the target audience is students, then a teacher teaching students is a service, but hiring that teacher is an activity. Evaluation itself is an activity, because it is not something done to or for the participants—even though they will benefit from the fact that the program is being evaluated.

Sometimes in other fields, “activity” and “service” are used interchangeably, to include every action that is undertaken by program staff, partners or stakeholders. See also Services.

Affective outcome
See Attitudes.

Applied to
In OBPE, in order to have outcomes information, data is collected from some or all of the target audience (or participants). The “applied to” section of a Logic Model spells out exactly from whom or about whom the data will be collected.

Assumptions
In OBPE, assumptions are information about an audience, program partners, or other aspects of a program that will affect the choice of the best solution to address audience needs. Some assumptions come from published data or research (such as the percentage of low-income households in a geographic area, or findings that one-on-one tutoring has often resulted in higher academic scores). Other assumptions come from a program staff’s general professional knowledge, from their history of experiences with the audience, or from knowing about similar programs done elsewhere.

Attitudes (affective outcome)
In OBPE, one type of outcome is called “affective” and concerns people’s “attitudes.” Any program that aims to change participants’ feelings has an attitudinal or affective outcome. Examples of these attitudes include confidence, satisfaction, positive feelings towards an institution—and even anxiety; a program may aim to reduce fear or anxiety. An affective outcome is most easily measured by a survey or interview, as it is difficult to identify measures of internal feelings without actually asking participants directly. Affective outcomes should not be combined or confused with other types: someone’s confidence about public speaking is different from his or her ability to effectively speak in public. Most fields use attitudes/affective in this way.

Audience
In OBPE, the audience for a program includes everyone who has the identified need and who might benefit from the identified solution. It is sometimes called the “target” audience, but is usually different from the program “participants.” Participants may be a small sub-set of the audience. For example, a reading program may address (present a “solution” for) literacy skills suitable for any fourth grade children, but a particular program may enlist only some fourth-graders (“participants”).

Audience members are stakeholders in a program, as they certainly care what happens to the program. One way to distinguish between other stakeholders and audience members is that specific outcomes will be identified for, and indicator information collected from, participants (audience), but not for or from other stakeholders.

Audience considerations
Audience considerations are characteristics of audience members which will affect how a program is designed. (Characteristics which are the deficits or gaps that a program is designed to remedy are called “needs”). These are usually very specific to a particular location and audience.  For example, you may wish to provide a theatre program to address children’s public-speaking skill deficit (a need); your audience members may not have transportation available for an afterschool or weekend program (a consideration). A well-designed program will think broadly and creatively about these considerations, because they are important in making sure a program actually delivers the services it promises. 


Baseline data
Initial information on program participants or other program aspects collected prior to participants receiving services or program intervention. Baseline data are used later for comparing measures that determine changes in your participants, program, or environment.  It is important to identify necessary baseline data so that it can be collected at the start; if this window of opportunity is missed, it cannot be re-created.  This information is sometimes referred to as “pre-test” data.

Behavior (behavioral outcome)
In OBPE, one type of outcome is called “behavioral.” A program that teaches or encourages people to do something usually intends to result in behavioral outcomes. For example, a program on stars and planets at an observatory may be so engaging that participants thereafter watch the skies on a regular basis. “Doing” something is not the same as “knowing” something. Participants may be able to accurately describe the difference between a star and a planet (knowledge) and to locate Venus in the evening sky (skill), but then seldom if ever try to locate Venus in the evening sky (behavior).

See also Skills.


Cognitive outcome
See Knowledge outcome.  

Condition
Conditions are circumstances that apply to or affect your audience or your program.  Important conditions represent needs (or wants or deficits) in your audience, which your program is designed to change. For example, children from low-income families may not have many books in their homes, which may affect their reading scores. “Being low income” is also a condition (also called “status”), although your program may not seek to change that.

Other conditions will influence how you design your program.  Handicapped individuals may not have convenient transportation, so you may design your program to include providing suitable transportation. 

In OBPE,  the specific conditions that your program is designed to address are usually called “needs” (or wants or deficits).  Your outcome is a change in these conditions.  The audience conditions that affect how you design your chosen solution (how you plan your program) are called “audience considerations.”

See also Audience considerations, Deficit, Need, Status, Want.

Confidentiality
Since an evaluation may entail exchanging or gathering private information about individuals, it’s important to use a written form that assures evaluation participants that information they give will not be disclosed or associated with them by name. Telling individuals that their information is confidential gives your promise to ensure their privacy. There are two particularly important times to consider how you will ensure privacy. First, when you select indicators, consider how much private information those indicators involve: do you really need a family’s exact income, or can you use a choice of ranges (for example, $15,000-30,000 per year)? Second, if you phrase an outcome as an “improvement,” you usually cannot use totally anonymous surveys or other instruments, because you will need to match each person’s answers before and after. To meet this need, an evaluator can, with the participant’s permission, collect non-anonymous data which is then kept confidential.

See also Sensitive.
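As a small illustration of limiting how much private information an indicator requires, here is a minimal sketch of recording an income range rather than an exact figure. The bracket boundaries and function name are hypothetical, not part of the tutorial:

```python
# Sketch: store an income range instead of the exact salary, so the
# data collected is less sensitive. Bracket boundaries are hypothetical.
def income_range(income):
    if income < 15000:
        return "under $15,000"
    elif income < 30000:
        return "$15,000-30,000"
    else:
        return "over $30,000"

# The evaluator records only the range, not the exact figure.
print(income_range(22500))   # prints "$15,000-30,000"
```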


Data
Specific information or facts which are collected to show how the program works and its effects. Data are usually well-defined and concrete. There is a wide range of data that can be associated with a program: for example, data about inputs and outputs show program costs and time (such as budgets and records of staff time devoted to a program). Data about participants (for example, age, date of entry into a program, or reading level) should be chosen only if they are relevant to showing need or outcomes.

In OBPE, data applied to evaluation is usually stated as an “indicator” which comes from a “data source.” For example, an indicator of reading skill may be a test; the data found in the “data source” are test scores.

Data can be analyzed in several ways, but a simple and commonly used method in OBPE is “number and percent” of participants who achieve a given outcome.

See also Data Source.

Data interval
In OBPE, the “timing” or “data interval” entry in the Logic Model specifies when data will be collected for a particular indicator. For short-term outcomes, the timing is usually right at the conclusion of a program. For medium- or long-term outcomes, program designers need to state exactly when or after what interval (“at the end of high school” or “after one year”) data will be collected. Having timing information right in the logic model ensures that outcomes data collection will be part of the planning of the program. 

Data source
The data source names the tool that provides measurable data about the indicator selected for an outcome. For example, in a program to reduce bullying, an indicator might be a reduction of bullying incidents at recess. Where would information be found about incidents? This is the data source, which in this case might come from a daily log, or from interviews of recess monitors. Sometimes in other fields, data sources can also include written surveys and structured observation.

See also Indicator.

Deficit
In OBPE, a deficit is a need of the audience, a condition that is targeted for a program: there is a difference--a negative difference--between the state of the audience and an ideal or desired state. For example, some students at an elementary school may not be reading at grade level: they have a deficit in reading ability. If members of the defined program audience lack some skill, or knowledge, or behavior, then they have a “deficit” in that area, which the program aims to remedy.
See also Need.


Evaluation
In most social fields, evaluation refers to systematic methods for collecting, analyzing, and using information to answer basic questions about a program. It helps to identify effective and ineffective services, practices, and approaches. There are several types of evaluation, including evaluation of theory, of implementation (or process), of outcomes, of impact, and of cost.

In OBPE, the word “evaluation” is specifically used in the context of participant-changing outcomes. Outcomes evaluation can be used in most situations where a program has a defined beginning and end and broad agreement exists among program sponsors about its desired results.  But outcomes evaluation may not be suitable for pilot programs where the focus is on finding some workable techniques, or for ongoing operations which have no defined “end” point.

Experiment
In general terms, the word “experiment” can simply designate a new idea about how to do something. In formal social science or medical research, “experiment” has a very specific meaning, in which a theory (hypothesis) is tested by very carefully exposing one group to a “treatment” (particular services) and comparing results to a group that does not receive the treatment. Because the purpose of OBPE is not to test theories, it does not use an “experiment” format, and often does not include any comparison groups.


Focus group
A focus group is a way of collecting in-depth feelings, perceptions, and observations from program participants.  It can be an especially valuable data source when program outcomes are too complex to formulate in a fixed questionnaire. Focus groups usually involve 7-10 people; having several focus groups on the same topic helps determine if some themes are truly pervasive among participants. Because focus group participants are not randomly selected (in contrast to some surveys), and affect each other (in contrast to interviews), it is usually not appropriate to assume all audience members feel the same way as focus group participants.

Formal research
See Research.

Formative evaluation
Formative evaluations focus on collecting data on program operations: is this working? This type of evaluation is a type of “process evaluation” in the early stages of a program, showing whether changes or modifications are needed for the program to work properly. Formative evaluations are used to provide feedback to staff about the program components that are working and those that need to be changed.

In an OBPE logic model, this kind of evaluation is not usually specified, but it generally focuses on the “activities,” “services” and “outputs” areas, answering questions such as Are the products producing the effects expected? Are enough people enrolling?

Foundation
Foundation is used in the tutorial and elsewhere as a general term to include funding sources other than an organization’s (a library or museum’s) own yearly operating funds. Many foundations have their own missions and need to demonstrate that they are serving those missions effectively and responsibly. The results-oriented data that OBPE provides are especially valuable to foundations.

Front-end reports
A type of report which focuses on the initial stage of implementation, rather than the outcomes, of a program. Front-end reports document that appropriate partners, inputs, and activities are available in order to deliver services. They are especially important for very new programs, where some elements of implementation are not yet finalized. For example, the Shaping Outcomes project interviewed decision makers in libraries and museums to see what they would want in an online course for their staff or students.


Gap
A gap is a type of “deficit”—it is the difference between what an audience needs and what it currently has. Programs are designed to remedy gaps in the target audience.

See also Condition and Deficit.

Goal
Generally, the goal of an OBPE-oriented program is termed an “outcome” for a particular audience. In other fields, this may be called the “goal” of the program, or the terms “goal” and “objective” and “outcome” may be used interchangeably. If your program partners are from other fields, be sure you agree on the terminology for “what” you want to achieve.

Government Performance and Results Act
In 1993, the federal government passed the Government Performance and Results Act as part of the “reinventing government” movement. This pressed government agencies to be accountable not only for the amount of money disbursed and obvious “outputs” (e.g. number of people served), but for the resulting outcomes. Simultaneously, many foundations (private sources of funding, such as the United Way or Kellogg Foundation) also changed funding guidelines and oversight to stress results, not merely activities. Currently, the name of the Government Accountability Office (GAO, formerly the General Accounting Office) reflects an ongoing focus on outcomes as a measure of programs’ effectiveness.


Immediate outcomes
See Short-term outcomes.

Impact
In general, “impact” means a general effect of a program. It often is used broadly, describing the ultimate goals of the program, such as to alleviate poverty or to help children attend college. Programs are conceived to have an impact, but because the ultimate goal is often very large, long-range, and broadly-stated, it is often difficult or impossible either to measure it or to understand exactly how one program influences it.  It is usually not useful to specify program outcomes in terms of these larger “impacts,” but they can appear in stakeholder questions. Impact is one area where a program’s goals meet the mission of funding organizations:  one foundation may desire to “improve children’s lives” and one program may address that in a small way with specific outcomes.

Implementation evaluation
See Formative evaluation.

Indicator
An indicator is a specific, observable, and measurable characteristic, action, or condition that demonstrates whether a desired change has happened. Remember that outcomes measure changes in attitudes, skills, knowledge, behavior, status or life condition. To measure outcomes accurately, indicators must be concrete, well-defined, and observable. Remember that we’ve recommended starting indicators with “the number and percentage of” and then identifying the target audience and then explaining what to look for in a measurable way that doesn’t require interpretation. An outcome can be measured by more than one indicator.

It is useful to distinguish between “indicators” and “data sources.” An indicator is usually the “what” you want to know. A data source is “where” or “by what means” you will know that “what.” For example, in a reading program, you choose “reading scores” to indicate change in the participants. You state your indicator as “X number and Y percent of participants achieve 4th grade reading level”—that is, what will indicate a successful outcome. The data source--“special test administered at the end of the program”-- is where the scores will be generated. For a bird-watching program, correct identification of birds will indicate success, so you state your indicator as “the number and percent of participants who correctly identify five species.” Your data sources for this indicator might be “testing children with flashcards” or “observation by leader.”

See also Data Source.

Inputs
Inputs refer to the resources that will be used to produce a program. Time and money are the most fundamental resources, and are usually used to acquire the needed physical and personnel inputs: rented or donated space, staff time, the work of volunteers, contractors or consultants; books, equipment and materials of various kinds.

A Logic Model does not need to be detailed in its listing of needed inputs—for example, it may say, “equipment to produce CD-ROMs” rather than “scanner, computer, CD burner, blank CDs and cases, printing supplies.” The goal of a logic model is to provide in a concise document the logical relations of program elements, so all partners, including funders, understand what is needed. A separate budget will propose and then document detailed spending.

Intermediate outcomes
See Medium-term outcomes.
Interval
See Data interval.


Knowledge (cognitive outcome)
In OBPE, one type of outcome is a specific “knowledge” or “cognitive” outcome. A program may be designed to teach people information; the audience “need” is for that information. For example, people may have limited knowledge of healthy activities. The most common measure of knowledge is a test of some type, even a short one on exiting an exhibit. Sometimes people are asked on a survey if they “believe” they know more about a topic (after some program activity), but this is, at best, an indirect measure of knowledge.

It is important in choosing or writing data collection materials to distinguish between knowing something (for example, what a Dewey Decimal number is—a knowledge or cognitive outcome), knowing how to do something (for example, how to find a book on a shelf), and doing something (checking out and reading books, a behavioral outcome).

See also Attitudes, Behavior.


Life condition
Members of an audience may have a status or a condition of their lives, which a program aims to change or serve. For example, not being a high school graduate is a status that a program can help to change; having low vision is a life condition a library program will want to accommodate.

Logic Model
In OBPE, a Logic Model is a text or diagram which describes the logically-related parts of a program, showing the links between program objectives, program activities, expected program outcomes, and how those outcomes will be evaluated. In other words, it is a picture or description of a program that shows what it is supposed to accomplish.

In broader evaluation situations, “logic model” can have two different interpretations. Some agencies or organizations consider a “logic model” to be only a graphical model:  a representation of a program’s elements, from inputs, to activities, to outcomes. The goal is to have a one-page visual presentation of a program as a whole. Other agencies emphasize the logic and use a “logic model” to present the underlying theory of why the particular services are expected to have the desired outcomes.


See Inputs.

Measurable terms
Clear and concrete language which specifies what you plan to do and how you plan to do it. You can help to make program elements measurable by describing inputs, stating time periods for activities and the number of participants (outputs) to whom services are provided.

In particular, outcomes need to be measured with concrete indicators. Some aspects of a program may be important to participants, partners, and stakeholders, but be difficult to measure (“civility” or “an inquiring mind,” for example). Program designers attempt to specify measurable items which give some, though probably not complete, information about desired outcomes. To make your terms measurable, remember to use verbs that describe observable outer action (such as “demonstrate,” “choose,” and “express”) rather than those that describe inner action (such as “think,” “believe” and “feel”).

Medium-term outcomes
Refers to results or outcomes of a program that do not occur during or immediately after a program, but which occur or can be observed only after a period of time. Such outcomes contrast with “short-term outcomes.” It is harder to say that a particular program has influenced medium-term outcomes, because other events and activities may have intervened between the end of services and the measurement.

If a program identifies medium-term outcomes that occur after the end of the program, then the report at the end of “service delivery” (for example, a workshop or series of meetings) may be only preliminary. Program planners may be very interested in medium-term outcomes--does an after-school mentor program for eighth graders result in higher high school graduation rates?—but they require tracking participants for some time (usually at least a year), and so need more staff time, administrative processes, and funding.

The line between medium-term and long-term is less clear but usually long-term outcomes occur at least a year after the program services.

See also Short-term outcomes.

Mission, mission statement
Organizations often write a mission statement to help focus their activities and state their priorities. In libraries and museums, the mission states the overall purpose of the organization, typically identifies key audiences and purposes, and often broadly describes the methods by which the organization will achieve its mission. In OBPE, Boards are primary stakeholders who decide on supporting programs or canceling them by whether the programs fall within the institution’s mission. Boards also decide among programs, in budget crunches, by their value to the mission.


Need
Need is a general term that includes wants, deficits, and conditions to be remedied or changed: your program aims to address the needs of your audience. Knowledge of audience needs can come from formal research, from your own professional judgment and experience, and from information gathered by partners.

See also Audience considerations, Deficits, and Status.

Number and percentage (generally written as # and %)
In OBPE plans, targets for indicators are usually expressed in terms of how many (“number” or # of) program participants and what portion (percentage or %) of them show evidence of a particular indicator. These show the quantity and degree to which a program is successful. For example, one program may have 20 participants, and another, 100. The target percentages can be the same (for example, 50%) while the numbers would differ (10 and 50, respectively). Funders often want to know how many people are being served successfully, but also, how many were successful out of all who participated.

See also Indicator, Target.
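The “# and %” arithmetic above can be sketched in a few lines of code. The participant results below are hypothetical example data, not from any program in the tutorial:

```python
# Sketch: computing "# and %" for an indicator target.
# Each entry is True if that (hypothetical) participant achieved the outcome.
results = [True, False, True, True, False, True, False, True, True, False]

number = sum(results)                  # how many achieved the indicator
percent = 100 * number / len(results)  # portion of all participants

print(f"{number} of {len(results)} participants ({percent:.0f}%) achieved the outcome")
```

This reports both figures funders ask for: how many people succeeded, and how many succeeded out of all who participated.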


Outcome
In OBPE, an outcome is a specific benefit that occurs to participants of a program. It is generally phrased in terms of the changes in knowledge, skills, attitudes, behavior, condition or status that are expected to occur in the participants as a result of implementing the program. To keep your program focused on these changes, be sure to write objectives starting with a word describing the target audience followed by a verb highlighting the expected change. For example: students develop the habit of library usage. Outcomes may be short-term, medium-term, or long-term.

See also Indicators.

Outcomes based planning and evaluation (OBPE)
A systematic way to plan a program and to determine if it has achieved its goals. The process of planning an outcomes-based program and a logic model (an evaluation plan) helps institutions establish clear program benefits (outcomes), develop ways to measure those program benefits (indicators), identify specific individuals or groups for which the program benefits are intended (target audience), and design program services to reach that audience and achieve the desired results.

Many social services and other agencies and foundations use the term “outcomes based evaluation” to express this idea. The wording “OBPE” (or Outcomes-based Planning and Evaluation) reflects a belief that outcomes-based evaluation, to work effectively, needs to include program planning focused on outcomes.

See also Evaluation and Outcome.

Output
A measure of the volume of a program’s actions, such as products created or delivered, number of people served, or activities and services carried out. Outputs are generally measured quantitatively.

Examples: 145 internships were completed; 3700 pictures were digitized; 15 courses were offered; 37,000 visitors attended an exhibit; a web site received 10,000 hits.  Outputs generally measure things, not people, and so are not a measure of program outcomes, which are changes in people. However, it is usually necessary to have a sufficient quantity of outputs in order to achieve outcomes: if no one attends workshops, no one will learn the skills taught there. 


Participant
An individual receiving or participating in services provided by a program. Participants are usually members of identified audiences, and may also be termed a “target” audience. That is, the audience targeted to be helped by the services of a program may be “all 4th graders,” but the program cannot reach all fourth graders, so the program is tried on a smaller group of available participants. If the program is successful, your work may be replicated (repeated in other locations) to reach more of the target audience.

Program designers should be consistent in using the terms “audience,” “target audience” and “participants.”  All should agree that “audience” equals “target audience” OR that “target audience” equals “participants.”

Partner
In OBPE, if more than one organization is involved in producing a program, the additional organizations are termed “partners.” (One of the partners might be a senior or central partner, with the biggest role in managing the program.) Partners are also stakeholders, because partners always have an interest in the outcomes of a program. Partners are usually identified by formal partnership agreements.

See also Stakeholder.

Philanthropy
A broad term that includes many aspects of service to humans or to causes such as the environment. Philanthropy generally means providing money or services so that charitable activities can be carried out or resources provided. For non-profit organizations, sources of funds come in part from private giving (which means philanthropy) from either individuals or foundations. (Other, non-philanthropic sources include some commercial activities [such as gift shops], government grants or dedicated taxes.)

See also Foundations.   

Plan
A systematic written guide showing how to conduct something. An evaluation plan describes all of the necessary steps to gather data to demonstrate outcomes. An implementation plan describes how a program will be conducted.

In other contexts, a plan can be used generally for any organized activity. 

Post-test
A test or measurement taken after a service or program takes place, which has been paired with a “pre-test.” (Tests that are not paired are usually just called “tests.”) Post-test data can be compared with pre-test results to show evidence of the effects or changes as a result of the service or intervention being evaluated. Sometimes called “exit data” if collected immediately after an individual leaves a program or exhibit.

Pre-test
A test or measurement taken before a service or program takes place. It is compared with the results of a post-test to show evidence of the effects or changes as a result of the service or intervention being evaluated. Information from pre-tests is often called “baseline data.”

If an outcome is expressed in terms of an “improvement” in participants, it is often desirable to gather before-and-after data. Some of this data can come from “organizational records”--for example if you design a program for a group of students already identified as having reading skill deficits.

See also Post-test.
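Matching each person’s before-and-after answers can be sketched as follows. The participant IDs and scores are hypothetical; the point is that pairing requires a per-person identifier, which is why fully anonymous instruments cannot measure “improvement”:

```python
# Sketch: pairing pre-test and post-test scores by a participant ID
# (hypothetical data), then counting who improved.
pre  = {"p01": 62, "p02": 70, "p03": 55}   # baseline ("pre-test") scores
post = {"p01": 75, "p02": 68, "p03": 71}   # scores after the program

improved = [pid for pid in pre if post[pid] > pre[pid]]
number = len(improved)
percent = 100 * number / len(pre)

print(f"{number} of {len(pre)} participants ({percent:.0f}%) improved")
```

The IDs let the evaluator pair each person’s scores while keeping names out of the data set entirely.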

Program
A program or project is a set of services delivered to an audience, supported by managerial activities and supplied by inputs, which is designed to achieve specified outcomes.

For OBPE, a program usually has to have a defined beginning and an endpoint (for purposes of evaluation). One purpose of outcomes evaluation is to determine if the program’s results justify its being repeated or continued. This may differ from libraries’ or museums’ “programming,” which can refer to a broad range of processes and activities that might not have specific user-based outcomes envisioned or evaluated.

See also Project.

Project
The term “project” is sometimes used in place of “program.” While it means much the same thing: something done with an audience to achieve some outcomes, “project” emphasizes that it is a discrete item: something with a beginning and an end. In libraries and museums, “programs” sometimes have a wide range of meanings.

See also Program.

Purpose (of a program)
The purpose of a program is the reason it is done, which is usually to achieve some outcomes on the part of some participants.  In our tutorial, we identify a purpose statement as a short written answer to the questions: we do what? for whom? for what outcome?


Random sample   
In OBPE, if a data source does not involve all participants, a “random sample” of participants may be selected. For example, if a town has been blanketed with public service messages, not all town residents will be surveyed to see what they think about the issue. If a sub-set of participants is needed, the best way to select them is through “random sampling.”  In random sampling, every audience member has an equal chance of being selected (like picking names out of a hat). There are specific techniques used to select randomly, as discussed in most statistics textbooks or the sources recommended in Resources for module D.

If you rely only on participants or audience members who comment on their own initiative, those people may not be “typical” of the general audience.
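The “names out of a hat” selection described above can be sketched with Python’s standard random module. The roster below is hypothetical:

```python
# Sketch: selecting a random sample of participants. random.sample
# gives every roster member an equal chance of selection, without
# replacement -- the "picking names out of a hat" technique.
import random

roster = ["Ana", "Ben", "Carla", "Dev", "Elena", "Farid", "Grace", "Hugo"]

sample = random.sample(roster, k=3)   # 3 distinct participants chosen at random
print(sample)
```

Because every member has an equal chance of selection, the sample avoids the self-selection bias of relying only on people who comment on their own initiative.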

Research (also termed formal research)
In formal research, researchers intend to test a theory (hypothesis) by carefully examining all of the factors that they believe influence a situation, controlling some of them, and changing some of them in order to measure and analyze their effects.  Formal research aims more at creating new knowledge than exploring whether a particular known program is working as desired. Formal research also involves several steps that are usually not present in evaluation, such as control groups with random selection and random assignment. 

OBPE does not make “theory-testing” claims for the data it generates. OBPE does not require that programs exclude all potential “interfering” influences from their programs, but works on a more real-life basis. It provides a thorough documentation of what can be known about a particular program. This shows what the program accomplishes, and also may be of interest to those planning similar programs. 

Resources
See Inputs.


Sample
A subset of a larger group. Participants, or the “target” audience, often include some, but not all, members of the audience identified as having a need. Samples of participants can be random (selected by chance, such as every sixth individual on a waiting list) or non-random (selected purposefully, such as all graduating degree candidates in a Museum Studies program).

Samples can also be used in the “applied to” section of a Logic Model (when the number of participants is very large, for instance in West Dakota Rx, accessible through the Cases tab). Outcomes data may not be gathered from every participant, but only from randomly or non-randomly selected respondents. 

See also Random Sample.

Sensitive
Sensitive can be used in several senses. We mean “requiring careful handling.” Programs may involve sensitive issues and therefore must be presented with awareness of audience reactions. Outcomes data may include sensitive information (that is, data that participants would not want revealed, such as annual salary), so it must be kept confidential. All program staff members must protect sensitive information in the way that the program design has set up, and in the way participants expect.

In a different sense, we say that data instruments should be sensitive enough (that is, powerful enough and with the right scale) to provide the right kind of detail. For example, a survey that asks whether people “like” or “dislike” something is less sensitive than a question that asks if people “dislike complete, dislike somewhat, feel neutral, like somewhat, like completely” something. 

See also Confidentiality.

Services
In OBPE, “services” refers to program actions that directly involve the participating audience members: in short, they comprise the direct “what we do” part of the program purpose statement: we hold workshops, we teach skills, we put on a play to demonstrate ethical dilemmas.

See also Activities.

Short-term outcomes
The changes in program participants’ knowledge, attitudes, skills, and behavior that occur immediately after, or even during, the course of a program. This is the easiest type of outcome to define and measure, because you usually have access to program participants, and little other than your program has yet happened that could have caused the results. It contrasts with medium-term (or intermediate) and long-term outcomes.

Other terms for short-term outcomes are “proximal outcomes” or “immediate” outcomes, although many immediate outcomes also are long-lasting (such as a participant signing up for a library card).

See also Medium-term outcomes and Long-term outcomes.

Skills
In OBPE, one type of outcome is a “behavioral” or “skills” outcome. A skill can be understood as a specific ability to perform a type of activity. It is useful to contrast “cognitive” or “knowledge” outcomes with “behavioral” or “skill” outcomes: cognitive outcomes mean knowing things; behavioral outcomes mean being able to do something. Observation is the most direct measure of a skill outcome.

For example, knowing that anthropologists carefully handle dirt from “digs” is a cognitive outcome; being able to appropriately sift, record, and handle dirt and artifacts from a real or simulated site is a skill outcome. 

See also Behavior.

Solution
In OBPE, a solution is what program designers use to address the identified needs of the audience. In the Logic Model, the solution can be a general statement of the approach the program will take, summarized in a single sentence in the “we do what” part of the program purpose statement. The outcomes demonstrate that the solution has provided a remedy for the audience’s deficit. 

Standard
In formal terms, a standard is a widely accepted, concretely measurable level of knowledge or skill. In OBPE, standards may be used to identify audience needs or deficits: members of an audience may be perceived or documented as not achieving a desired standard. Standards may also be part of an indicator statement: # and % of participants who meet a certain standard.

Some programs may use their own “standards” drawn from their own experience. Program designers should be clear whether they are using local or generally-accepted standards, and identify the data sources which will provide the information.

Stakeholders
People or institutions who care about the process or outcomes of a program. Some stakeholders may contribute inputs (funding, staff time, or facilities); others may be formal partners, helping to deliver the program. Audience members obviously care about the program and are thus stakeholders. Other stakeholders may include peer institutions or professionals who may want to duplicate the program: one museum watching another museum’s program, or public librarians looking for good public library program ideas. Finally, community members, or relatives of defined audiences, may also have an interest.

In OBPE, stakeholders are identified at the beginning of a program so that the outcomes data that is gathered can be used in reports to answer stakeholder questions and concerns.

Situation
The situation of a program is its context in a specific town or state. Although the target audience may be early adolescent boys, programs designed to appeal to them will vary depending on whether they are offered in New York City or Tucson, Arizona, and on who the stakeholders are. “Situation” is not a specific technical term in OBPE. 

See also Assumptions, Condition, Mission, and Need.

Status
A condition that describes an audience member before and also after a program, sometimes termed a “life condition.” A “status” can often be changed (such as becoming a high school graduate). The status of audience members will affect program design (especially socio-economic status), making status an “audience consideration.” Alternatively, a status may represent an audience need, and your program may be designed to change that status in participants, such as helping them become fluent English speakers or college graduates. 

Status often is a long-term outcome, so it may be difficult to track participants over enough time to determine status changes. 

See also Audience Considerations, Life Condition, and Outcome.

Summative evaluation
A type of evaluation that assesses the results or outcomes of a program. In OBPE, outcomes evaluation summarizes what a program has accomplished. This type of evaluation is concerned with a program’s overall effectiveness, and is generally conducted at the conclusion of a program.

See also Formative Evaluation.

Survey
A type of data source which asks people to respond to specific questions, usually with very specific answers (checklist or “pick one”). Everybody who takes the survey answers the same questions, asked in the same way, and with the same set of possible responses. Surveys can ask about feelings, satisfaction, and overall impressions.


Target
In OBPE, a target is the desired quantity of an outcome, usually expressed as a percentage of those from whom data is collected who display that indicator. For example, for a knowledge outcome such as knowing the areas of the Dewey Decimal System, the target might be that 75% (or more) of the participants in a workshop (the “applied to”) can name the ten major divisions correctly on a test (the “data source”) given at the end of the workshop (the timing or “data interval”).

Targets come from designers’ knowledge of previous programs and from audience considerations. Targets for very new programs may not be very accurate, but the program can still be worthwhile as it explores how effective the “solution” is for the audience’s needs. 

One way of thinking about targets is to consider what would make the program worth all of its time, money, and effort. Some programs can be worthwhile even with relatively low targets. For example, if only 25% of children in a summer reading program maintain their reading levels when usually all of them decline while school is out, the program as a whole may be “worth it.”

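To make the target arithmetic concrete, here is a small sketch using the Dewey Decimal example above; the test results are invented for illustration:

```python
# Invented workshop results: True = participant named all ten
# Dewey Decimal divisions correctly on the end-of-workshop test.
results = [True] * 18 + [False] * 6   # 24 participants, 18 succeeded

target = 0.75                          # target: 75% or more
percent_meeting = sum(results) / len(results)

print(f"{percent_meeting:.0%}")        # 75%
print(percent_meeting >= target)       # True -- target met
```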
Target audience
See Audience.

Timing
See Data Interval.


Wants
In OBPE, “wants” are considered a kind of “need.” Both refer to something that an audience is assumed to lack.

See also Deficit and Condition.

 Institute of Museum and Library Services

Copyright © 2006 Shaping Outcomes
