Modules | Module A: overview | Module B: plan | Module C: build | Module D: evaluate | Module E: report |
Other Resources | Orientation | Logic Model | Cases | Glossary | Credits | Enhanced version |
[Start Screen D-1 of 16/Module D>Introduction (1)]
You should now know how to frame a program (Module B: Plan) and build a plan for activities and services (Module C: Build) to fulfill your purpose.
In this module, you will learn how to evaluate the outcomes of a program by answering the following questions:
[Graphic of rolled-up tape measure.]
[End Screen D-1 of 16]
[Start Screen D-2 of 16/Module D>Logic model (2)]
If you can’t measure success, how will you know you’ve achieved it? How do you know the outcomes of your program?
In our graphic of the Logic Model, notice that the bar labeled Evaluation underlines the activities and services you perform throughout.
Evaluation, the foundation of a program, means the planners are continuously asking: will this work to produce worthwhile outcomes?
This module develops a vocabulary and system you can use for evaluation throughout the program.
[Graphic of man and woman looking at plans]
[End Screen D-2 of 16]
[Start Screen D-3 of 16/Module D>Outcomes (3)]
Evaluating a program by its outcomes means you define success by the measurable changes in program participants brought about by participating in your program. Remember that you may write more than one outcome (so be sure each outcome contains only one concept to measure).
Measurable: That is, you can test for the change or observe it. If you made a movie of success, the camera would focus on people, not on the mechanisms or processes used to create the hoped-for results.
Changes in participants: Remember we’ve defined an outcome as a change in a target audience’s skills, attitudes, knowledge, behaviors, status, or life condition brought about by experiencing a program.
Define success: Does the outcome represent a benefit for the target audience? Do key stakeholders accept the outcome as valid for your program? Finally, is the outcome realistic and attainable?
Participating in your program: Is it sensible to claim your program services influenced the outcome?
How do we measure outcomes?
[Graphic of young girl with the text "Skills: Girl Scouts can identify local birds by sight and name."]
[Graphic of father and daughter with the text "Attitudes: Girl Scouts no longer think science is boring."]
[Graphic of person with binoculars with the text "Knowledge: Girl Scouts know what local birds eat and what predators they face."]
[Graphic of student with the text "Behavior: Children read for pleasure over three hours per week."]
[Graphic of woman at computer with the text "Status: Students use educational materials on library computers to get GED and improved salary and job prospects."]
[Graphic of group of people with the text "Conditions: West Dakota residents stop smoking after using improved access to reliable, understandable medical information."]
[Start COACH text]
State the outcome you want to produce in simple, concrete, active terms: something that you will be able to measure and report without challenge. Notice in the poor statements of outcomes below that a skeptic could very well say, “Really? How are you going to prove that?” Yes, a skeptic might also say that of the more specific statements on the right, but wouldn’t it be easier to come up with specific indicators to support those revised statements?
Poor Outcome Statements
Better Outcome Statements
[End of COACH text]
[End Screen D-3 of 16]
[Start Screen D-4 of 16/Module D>Outcomes (4)]
Remember, outcomes can be observed in the short term, medium term, and long term. Start with an achievable, measurable short-term outcome, then ask “What happens next?” and “What happens after that?” In this way you make explicit your assumption that the program leads to long-term outcomes or goals. Examine the examples below.
Short-term outcomes: changes in skills, attitudes and knowledge
Girl Scout can identify 5 birds by name in an exhibit and on a field trip.
Medium-term outcomes: changes in behavior and decision making
Girl Scout chooses bird-watching activity for family vacation.
Long-term outcomes: changes in status or life conditions
Girl Scout becomes biology major in college.
Plan to evaluate short-term outcomes, but check whether you have the time and money to evaluate any medium- or long-term outcomes. For example, see the Denver Firefighters Museum (PDF) case, which teaches fire safety to children with the long-term goal of reducing the number of children who become fire victims. The outcome it measures, however, is medium-term: “children demonstrate knowledge of fire safety,” tested in a teacher-administered survey 4 weeks after the museum visit to see if children can identify the safest way out of their house in case of an emergency.
[End Screen D-4 of 16]
[Start Screen D-4_M of 16/Module D>Outcomes>Museum example (4-M)]
The MoNA program provides intensive training and other art museum services for Skagit County elementary school teachers in order to enhance teaching with art and related concepts such as visual thinking. Which of the following statements are outcomes for the MoNA Program?
[Clicking on a graphic of hand prints labeled "Teachers attend 5-day summer workshop at MoNA and attend 2 training days on exhibits" presents following text: No, this is what the program does for the target audience (a service) in order to produce a desired outcome.]
[Clicking on a graphic of art labeled "Teachers know how to lead an effective discussion with students about art, develop a curriculum unit for an exhibition, and teach a related hands-on art making lesson" presents following text: Yes, this is the change that will occur in the teachers if the program is successful.]
[Clicking on a graphic of docent and group labeled "Docents are trained to lead school visits" presents following text: No, this is an activity the program providers must carry out to ready services that lead to outcomes.]
[Clicking on a graphic of art labeled "Teachers incorporate Museum visits and links to exhibitions into their ongoing classroom teaching" presents following text: Yes, this is the change in behavior that will occur in the teachers if the program is successful.]
[Clicking on a graphic of “to do” list "Teachers currently feel uncomfortable going to MoNA and don’t have time to include art in their classrooms" presents following text: No, this is an assumption about the audience that helps define the solution provided by the program.]
[End Screen D-4_M of 16]
[Start Screen D-4_L of 16/Module D>Outcomes>Library example (4-L)]
The Riverton Library Memoirs Program offers a year-long program in which a facilitator helps participants improve their autobiographical essays through critique, discussions with published writers, and public performance. Which of the following statements are outcomes for the Riverton Memoirs program?
[Clicking on a graphic of man and woman with computer labeled "Participants critique the original and revised copies of other participants’ work." presents following text: No, this is what the participants do in the workshop to bring about the change for the desired outcome.]
[Clicking on a graphic of glasses on paper labeled "Participants write in a way that is judged as improved according to blind expert review of original and revised essays." presents following text: Yes, this is the change in skills that will occur if the program is successful.]
[Clicking on a graphic of woman labeled "An expert on creative writing is hired to evaluate original and revised essays (without knowing the names of writers or the order of composition)" presents following text: No, this is an activity the program director performs to set up evaluation of writing outcomes.]
[Clicking on a graphic of shaking hands "Participants demonstrate that they feel they belong to a community of writers" presents following text: Yes, this is the outcome (change in attitude) that will occur if the program is successful.]
[Clicking on a graphic of a checklist "Riverton residents respond to a needs assessment questionnaire describing the type of workshop they would like." presents following text: No, the questionnaire provides feedback on the number interested and the kind of workshop they want. Program planners make assumptions about the audience that helps define the solution provided by the program.]
[End Screen D-4_L of 16]
[Start Screen D-5 of 16/Module D>Outcomes (5)]
Let’s look at how complex programs measure outcomes:
[Graphic of Magnifying glass]
[End Screen D-5 of 16]
[Start Screen D-6 of 16/Module D>Indicators (6)]
To make an outcome measurable, ask the question: “What change in your participants’ behavior, attitudes, skills, knowledge, status or condition would indicate success?” That change is an indicator of your outcome. You can have more than one indicator per outcome.
You’ll ensure your indicator is measurable by writing it in the following form:
The # and % of (participants) who (demonstrate what specified change?)
Why start with participants? <popup answer: To show that you’re observing a member of the target audience for change.>
Why # and %? <popup answer: To let you decide what level of impact counts as success: everyone? Half? 80%? (We’ll talk about setting “targets” for success later in this module.)>
How specific a change? <popup answer: Make sure you use a verb that can be observed. Give enough specifics (how many or under what conditions or in what timeframe) that scoring doesn’t depend on guesswork or interpretation.>
Example:
Outcome: Girl Scouts know about birds in their area.
Indicator: The # and % of Girl Scouts who can identify at least five local birds on a field trip.
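The “# and % of participants who…” form maps directly onto a simple tally: count who met the indicator, then divide by the total. The sketch below uses invented names and scores (not from the course) to show the arithmetic for the bird-identification indicator:

```python
# Hypothetical field-trip results: birds each Girl Scout identified correctly.
# All names and numbers are invented for illustration.
results = {"Ana": 6, "Bea": 5, "Cai": 3, "Dee": 7, "Eve": 4}

THRESHOLD = 5  # indicator: identifies at least five local birds

# Who met the indicator, and what # and % does that represent?
met = [name for name, birds in results.items() if birds >= THRESHOLD]
number = len(met)
percent = 100 * number / len(results)

print(f"{number} of {len(results)} scouts ({percent:.0f}%) met the indicator")
```

Reporting both the number and the percent, as the form requires, lets readers judge the result against whatever target you later set.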
[Start COACH text]
Watch out for fuzzy verbs (“know, do, feel, act, think”) and try to replace them with observable or measurable verbs:
changes in knowledge: lists, describes, compares, teaches, gives examples, identifies
changes in attitudes: answers a survey, reports, chooses
changes in behavior: builds, joins, pays for, votes, devotes effort to
changes in skills: builds, can prepare, qualifies, demonstrates, passes exam
[End COACH text]
[End Screen D-6 of 16]
[Start Screen D-7 of 16/Module D>Indicators (7)]
Indicators should be measurable and specific enough so that all can agree whether the behavior has occurred.
As you develop an indicator, be sure it meets the following criteria:
It’s time for you to look at an example in some detail.
[End Screen D-7 of 16]
[Start Screen D-7_M of 16/Module D>Indicators>Museum example (7)]
The MoNA program provides intensive training and other art museum services for Skagit County elementary school teachers in order to enhance teaching with art and related concepts such as visual thinking. Immediate goals include strengthened teacher skills for leading discussions about art, using museum visits and online resources to enhance teaching, and integrating art into classroom activities, for the ultimate purpose of helping students develop critical thinking skills.
The first program outcome is: Teachers see the Museum as an effective resource for instructional support. Which of the following statements are good indicators for the success of this outcome?
Possible Indicators of Success
[Clicking "Proposal 1: Funding for the second year of the program increases by 10% ($2,000)" presents the following text: No, increased funding is nice, but it is more likely to be related to state budgets or sudden increases in costs. More important, it doesn’t show a change in the target audience.]
[Clicking "Proposal 2: # and % of teachers who rate the Museum as effective or highly effective resource for instructional support" presents the following text: Yes, the outcome measures an attitude, and a survey is a good way to assess attitudes.]
[Clicking "Proposal 3: # and % of teachers who complete MoNA’s 3-day summer institute for teachers" presents the following text: No, this is an output of a program service (3-day summer institute). An indicator shows a change in the participant as a result of program services.]
[Clicking "Proposal 4: # and % of teachers who bring their classes to visit the Museum at least once in the year following their training" presents the following text: Yes, voluntary actions (such as arranging visits) are a good demonstration of an attitude.]
[Start DIG DEEPER text]
For more in-depth information about the MoNA program, click on Cases and choose MoNA Links.
[End DIG DEEPER text]
[End Screen D-7_M of 16]
[Start Screen D-7_L of 16/Module D>Indicators>Library example (7-L)]
The Riverton Memoirs Program provides support for adults writing autobiographical essays, with a moderator directing bi-monthly meetings for peer review and discussions with authors.
The first program outcome is: Participants show improvement in their writing. Which of the following statements are good indicators for the success of this outcome?
Possible Indicators of Success
[Clicking "Proposal 1: Funding for the second year of the program increases." presents the following text: No. Increased funding is nice, but it is not a direct indicator of improved writing, the change in skill that the outcome aims to produce. More important, it doesn’t show a change in the target audience.]
[Clicking "Proposal 2: 85% of workshop participants have two revised essays judged better than their original drafts in an independent blind testing" presents the following text: Yes. This test indicates whether there has been improvement in writing by the independent standards of an expert, though the 2 sets of essays (original and revised) have been chosen by the participant.]
[Clicking "Proposal 3: 60% of participants sign up for a writing workshop within three months of the end of the Memoirs program." presents the following text: No. This doesn’t provide evidence of improved writing, although it is a great indicator for another outcome of the program—demonstrating they feel part of a community of writers.]
[Clicking "Proposal 4: 90% of participants who signed up for the workshop attend 80% of the sessions and produce work included in the published anthology of participants’ work." presents the following text: No. This is an output that describes active participation and completion of assignments, but it doesn’t describe a change in the participants, so it isn’t an indicator of success.]
[Start DIG DEEPER text]
For more in-depth information about the Riverton program, click on Cases and choose Riverton Library Memoirs.
[End DIG DEEPER text]
[End Screen D-7_L of 16]
[Start Screen D-8 of 16/Module D>Data sources (8)]
Consider a cluster of questions when deciding how to measure an indicator:
[End Screen D-8 of 16]
[Start Screen D-9 of 16/Module D>Data source (9)]
Data sources are tools, documents, and locations for information that can be designed to show what happened in your target audience.
[Graphic of woman and child]
Anecdotes:
[Graphic of form]
Surveys or feedback forms:
[Graphic of hand]
Observation or assessments:
[Graphic of office supplies]
Participant projects:
[Graphic of graph]
Other organizations’ records or test information:
[Start COACH text]
You’ll learn a lot by doing your Logic Model. Remember that you not only have the instructional modules but also the examples (available in the Cases section) to give you ideas and show you how to apply them. When you turn in your Logic Model to your instructor, you will get feedback and suggestions. Remember too your evaluation should be logical and honest, but will not be held to the standards of publishable research. For in-depth guidance on different forms of evaluation, see the references for Module D by clicking the Resources tab at the top of the screen.
[End COACH text]
[End Screen D-9 of 16]
[Start Screen D-10 of 16/Module D>Data source (10)]
Your common sense and knowledge of the situation will suggest what data to use. The kinds of data sources explained here are the most common types. Remember that OBPE programs do not require formal research, although some programs may wish to involve a consultant in the evaluation. (References to further information can be found by clicking the Resources tab at the top of the screen.)
No kind of data source is better than another. The data source chosen should depend on what is being evaluated.
[Graphic of pencil with statement "To indicate that children are developing a habit of reading: “# and % of Springfield students in the summer library reading program who spend at least an hour per day on independent reading for fun."]
[Clicking on anecdotes presents the following text: Possible measure. If enough participants discuss their reading, you can get a good estimate of how much recreational reading is going on.
Clicking on survey presents the following text: Good measure. Parents and children are the best source of information about behavior that takes place at home.
Clicking on participant project presents the following text: Bad measure. This outcome is concerned with “normal” behavior, not specific projects.
Clicking on organizational records presents the following text: Limited measure. Circulation records can show how many books are checked out. They might not be read…but it is something of an indication of behavior.]
[Graphic of pencil with statement "To indicate that West Dakota residents use public library databases as a preferred source of information: # and % of WD residents who say that they are likely or very likely to use the public library databases as one of their first 3 sources of health information. "]
[Clicking on anecdotes presents the following text: Limited measure. People who volunteer stories sometimes are not representative of the larger audience.
Clicking on survey presents the following text: Good measure. A mail survey of randomly chosen citizens can validly measure general audience knowledge.
Clicking on participant projects presents the following text: Bad measure. Most residents will be reached indirectly, through information providers, and so won’t have any project.
Clicking on organizational records presents the following text: Bad measure. Most citizens will access this information privately, even if they do so in a library.]
[Graphic of pencil with statement "To indicate that Girl Scouts learn bird-identification skills: “# and % of Girl Scouts who correctly identify five birds common to the area on a field trip."]
[Clicking on anecdotes presents the following text: Limited measure. Enthusiastic Girl Scouts are good, but this won’t measure the quiet ones.
Clicking on survey presents the following text: Weak measure. Some Girl Scouts might over-estimate their ability to identify birds.
Clicking on participant projects presents the following text: Good measure, if the workshop ends with girls taking photographs and correctly identifying the birds. A very direct indicator.
Clicking on organizational records presents the following text: Good measure, if the Girl Scout badge process includes a test of their identification skills. This will be an easy source of information.]
[Graphic of pencil with statement "To indicate that 4th-8th grade teacher-participants demonstrate the ability to teach biodiversity with inquiry-based methods: “# and % of teachers who implement a completed science curriculum unit in the classroom.” (See Peabody Museum case.)"]
[Clicking on anecdotes presents the following text: Limited measure. Enthusiastic teachers will report, but this won’t measure the quiet ones.
Clicking on survey presents the following text: Weak measure. Some teachers might overestimate their achievements.
Clicking on participant projects presents the following text: Good measure, if the projects are observed and rated independently.
Clicking on organizational records presents the following text: Weak measure. Attendance records will indicate teachers who complete their summer fellowship studies, but this doesn’t indicate increased skill in actually teaching.]
[Start DIG DEEPER text]
Matrix of common needs analysis and evaluation data sources
[Table containing the headings: tool, purpose, requirements, advantages, and disadvantages.]
Tool: Questionnaires and Surveys
Tool: Knowledge Assessments
Tool: Performance Assessments
Tool: Structured Observation
Tool: Focus Groups
Tool: Telephone Interviews
Based on National Leadership Grant tutorial by the Institute of Museum and Library Services found at http://www.imls.gov/Project_Planning/index.asp
Source: Falletta, Salvatore, and Combs, Wendy. Info-line: Evaluating Technical Training: A Functional Approach. September 1997, pp. 12-15. Alexandria, VA: ASTD (www.ASTD.org). (Used with permission.)
[End DIG DEEPER text]
[End Screen D-10 of 16]
[Start Screen D-11 of 16/Module D>Data sources (11)]
If your indicators are specific, they will usually suggest the data source to measure them as well as the people in the target audience they should be applied to.
What people or program data will the indicator be applied to? Consider the following issues.
Examine the visuals below to explore issues related to each program. For background on programs, click Cases.
Some or all of the target audience?
Remember that your target audience is usually stated in general characteristics and you may only involve part of that audience in your program as participants.
[Photo of teacher and child]
Illinois State Museum Changes
Because of high numbers involved, only randomly selected visitors and students will be interviewed after their visit to the Changes exhibit at the Illinois State Museum. For example, the interviewers might talk only to every fifth person who crosses a line at the exhibit. Click Cases and select Illinois State Museum Changes to review the case.
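The “every fifth person” approach described for the Illinois State Museum is systematic sampling. This minimal sketch, using an invented ordered visitor list, shows how such a sample could be drawn; real interviewers would of course count visitors in real time:

```python
# Hypothetical ordered stream of 100 exhibit visitors (names invented).
visitors = [f"visitor_{i}" for i in range(1, 101)]

INTERVAL = 5  # interview every fifth person who crosses the line

# Take the 5th, 10th, 15th, ... visitor from the ordered list.
sample = visitors[INTERVAL - 1::INTERVAL]

print(f"Interviewed {len(sample)} of {len(visitors)} visitors")
```

Systematic sampling is cheap to administer at an exhibit entrance, and, as long as visitor order is unrelated to the outcome being measured, it behaves much like a random sample.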
[Photos of couple]
West Dakota Library Rx
All WD residents are part of the target audience, but they cannot all be surveyed. A random sample of residents is assumed to be representative of the whole group.
Some or all of the participants?
[Photo of girl writing]
Springfield Library Summer Reading Program
All the participants are students in an urban library district. Should reading tests be monitored for all participants? At-risk students only? Those who complete a specific number of activities?
People other than the participants?
[Photo of child reading]
Springfield Library Summer Reading Program
Are parents likely to be better sources of information about their children (such as number of hours spent reading and attitude toward reading) than the children themselves?
Remember: Collecting data costs time and money. Collect only enough information to figure out if your program is successful, so be specific and concrete. Consider the difference in costs between collecting information about “children” and “children who need after-school tutoring.” Or do you mean “children who participate in at least five tutoring sessions”? If you expect 100 children to meet this criterion, should you say instead “a random sample of children who participate in at least five tutoring sessions”?
[End Screen D-11 of 16]
[Start Screen D-11_M of 16/Module D>Data sources>Museum example (11-M)]
For the following outcomes and indicators for the MoNA program, try choosing data sources and how they should be applied.
Outcome 1: Teachers use a curriculum unit linking an exhibition to their classroom teaching.
Indicator: # and % of teachers who develop at least one curriculum unit linking an exhibition to their classroom teaching during their training year.
Applied To [Clicking on a question mark presents the following text: Teachers who participate in MoNA Link]
Data Source [Clicking on a question mark presents the following text: Teacher portfolios]
Indicator: # and % of teachers who teach at least two lessons in their classroom based on a Museum exhibition during their training year
Applied To [Clicking on a question mark presents the following text: Teachers who participate in MoNA Link]
Data Source [Clicking on a question mark presents the following text: Teacher reports, observations and documentation by Museum art educator]
Outcome 2: Teachers will see the Museum as an effective resource for instructional support.
Indicator: # and % of teachers who rate the Museum as effective or highly effective resource for instructional support
Applied To [Clicking on a question mark presents the following text: Teachers who participate in MoNA Link]
Data Source [Clicking on a question mark presents the following text: Participant survey with four points (Not effective, somewhat effective, effective, highly effective)]
Indicator: # and % of teachers who bring their classes to visit the Museum at least once in the year following their training
Applied To [Clicking on a question mark presents the following text: Teachers who participate in MoNA Link]
Data Source [Clicking on a question mark presents the following text: Group visit log]
[End Screen D-11_M of 16]
[Start Screen D-11_L of 16/Module D>Data sources>Library example (11-L)]
For the following outcomes and indicators for the Memoirs program, try choosing data sources and how they should be applied.
Outcome 1: Participants show improvement in their writing.
Indicator: # and % of participants who revise at least two pieces, commenting on what they tried to improve in each revision. AND
Applied To [Clicking on a question mark presents the following text: All workshop participants who have at least two sets of essays (original, revised and revision comments)]
Data Source [Clicking on a question mark presents the following text: Expert evaluation of participants’ work]
Indicator: # and % of participants whose revised pieces (two before-and-after versions) are judged better than the originals in a blind (no writer or dates given) overall grading by a creative writing specialist AND
Applied To [Clicking on a question mark presents the following text: All workshop participants who have at least two sets of essays (original, revised and revision comments)]
Data Source [Clicking on a question mark presents the following text: Expert evaluation of participants’ work]
Indicator: # and % of participants whose revised pieces are judged better than the originals (for two sets) by a creative writing specialist when judged by the writer’s goals in the revision.
Applied To [Clicking on a question mark presents the following text: All workshop participants who have at least two sets of essays (original, revised and revision comments)]
Data Source [Clicking on a question mark presents the following text: Expert evaluation of participants’ work]
Outcome 2: Participants demonstrate they feel themselves to be part of a community of writers.
Indicator: # and % of participants who can name three ways they feel a part of the community of writers.
Applied To [Clicking on a question mark presents the following text: Program participants who attend 80% of sessions and have work published in the group anthology.]
Data Source [Clicking on a question mark presents the following text: Exit survey]
Indicator: # and % of participants who act as part of a community of writers after the program (produce writing, continue library group or join another, attend readings, read memoirs, read regularly about authors’ concerns)
Applied To [Clicking on a question mark presents the following text: Program participants who attend 80% of sessions and have work published in the group anthology.]
Data Source [Clicking on a question mark presents the following text: Phone interview with checklist of behaviors]
[End Screen D-11_L of 16]
[Start Screen D-12 of 16/Module D>Data intervals (12)]
“Data Intervals” is a term describing when and how often you will collect data. You may prefer to just think of this as “timing” or “time plans.”
Specific intervals: When designing data collection intervals, think about what is reasonable given staff and budget inputs, in order to measure your outcomes.
Outcome information can be collected at specific intervals to show changes over time (for example, quarterly, every 6 months, once a semester) or at the start of program and end of program.
Increases: When increases in skill, behavior or knowledge are the outcome, collect data at program start and end to give the longest time for the change to occur as a result of your program. Data collected about outcomes early in the program is often called “baseline data.”
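When the indicator is an increase, the evaluation reduces to a per-participant comparison of the baseline measurement against the end-of-program measurement. A minimal sketch with invented participant IDs and scores:

```python
# Hypothetical scores on the same measure, collected at program start
# (baseline data) and at program end. IDs and values are invented.
baseline = {"P1": 40, "P2": 55, "P3": 62}
end      = {"P1": 58, "P2": 54, "P3": 75}

# Count participants whose end score exceeds their baseline score.
improved = [p for p in baseline if end[p] > baseline[p]]
number = len(improved)
percent = 100 * number / len(baseline)

print(f"{number} of {len(baseline)} participants ({percent:.0f}%) improved")
```

Note that without the baseline collection at program start, there would be nothing to compare the end-of-program scores against, which is why the timing decision cannot be postponed.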
[Start COACH text]
Know what you want to do and when you’ll do it. Easy to plan but hard to fix after you’ve missed the chance. You don’t want to discover late in the program that you needed information from an earlier time period.
[End COACH text]
[End Screen D-12 of 16]
[Start Screen D-12_M of 16/Module D>Data interval>Museum example (12-M)]
The MoNA program had several outcomes. Think what you consider to be appropriate timing for each indicator listed.
Outcome 1: Teachers use a curriculum unit linking an exhibition to their classroom teaching.
Indicator: # and % of teachers who develop at least one curriculum unit linking an exhibition to their classroom teaching during their training year.
Applied To/Data Source: Teacher portfolios from teacher participants in MoNA Link.
Data Interval [Clicking on a question mark presents the following text: End of 2nd and 3rd year of MoNA Link]
Indicator: # and % of teachers who teach at least two lessons in their classroom based on a Museum exhibition during their training year.
Applied To/Data Source: Teacher reports from teacher participants in MoNA Link. Observations of teacher participants and documentation by Museum art educator
Data Interval [Clicking on a question mark presents the following text: Throughout 2nd and 3rd year of MoNA Link]
Outcome 2: Teachers will see the Museum as an effective resource for instructional support.
Indicator: # and % of teachers who rate the Museum as effective or highly effective resource for instructional support
Applied To/Data Source: Participant survey for participant teachers
Data Interval [Clicking on a question mark presents the following text: Beginning of summer institute and end of training year]
Indicator: # and % of teachers who bring their classes to visit the Museum at least once in the year following their training
Applied To/Data Source: Group visit log/ participant teachers
Data Interval [Clicking on a question mark presents the following text: June 2006, June 2007 (after IMLS funding)]
[End Screen D-12_M of 16]
[Start Screen D-12_L of 16/Module D>Data interval>Library example (12-L)]
The Riverton Memoirs program had several outcomes. Think what you consider to be appropriate timing for each indicator listed. Click the question-mark icon for an answer.
Outcome 1: Participants show improvement in their writing.
Indicator: # and % of participants who revise at least two pieces, commenting on what they tried to improve in each revision. AND
Applied To/Data Source: Expert evaluation of all workshop participants who have at least two sets of essays (original, revised and revision comments)
Data Interval [Clicking on a question mark presents the following text: 8 months into the program (so that participants should have several sets to choose from and time for mid-course corrections to be introduced)]
Indicator: # and % of participants whose revised pieces (two before-and-after versions) are judged better than the originals in a blind (no writer or dates given) holistic grading by a creative writing specialist AND
Applied To/Data Source: Expert evaluation of all workshop participants who have at least two sets of essays (original, revised and revision comments)
Data Interval [Clicking on a question mark presents the following text: 8 months into the program (so that participants should have several sets to choose from and time for mid-course corrections to be introduced)]
Indicator: # and % of participants whose revised pieces are judged better than the originals (for two sets) by a creative writing specialist when judged by the writer’s goals in the revision.
Applied To/Data Source: Expert evaluation of all workshop participants who have at least two sets of essays (original, revised and revision comments)
Data Interval [Clicking on a question mark presents the following text: 8 months into the program (so that participants should have several sets to choose from and time for mid-course corrections to be introduced)]
Outcome 2: Participants demonstrate they feel themselves to be part of a community of writers.
Indicator: # and % of participants who can name three ways they feel a part of the community of writers.
Applied To/Data Source: Exit survey of program participants
Data Interval [Clicking on a question mark presents the following text: End of the program]
Indicator: # and % of participants who act as part of a community of writers after the program (produce writing, continue library writers’ group, etc.)
Applied To/Data Source: Phone interview of program participants with checklist of behaviors
Data Interval [Clicking on a question mark presents the following text: Three months after the end of the program]
[End Screen D-12_L of 16]
[Start Screen D-13 of 16/Module D>Targets (13)]
Targets attach a number to the program’s goals and state expectations for the successful performance of outcomes. Identifying targets helps translate general goals (What should we do?) into specifics (What can we do?).
Another way of thinking about a target is: What would success look like? If half of your participants learned a new skill, would that be worth all of the inputs, activities, and services? What would make you happy? Seeing even a few students gain confidence? Every parent attending a program learning at least something new?
Unfortunately, no formula can determine targets. As you set targets, get the input and agreement of important stakeholders, based on their expectations. Setting numeric targets can be difficult, and targets may require revision as you implement your program. Remember to state targets as a number and/or percent. Base estimates on:
[Start COACH text]
Remember that OBPE isn’t as rigorous as research, which looks for “statistically significant” differences. You may be dealing with small numbers of participants, so it’s hard to prove anything statistically. Instead, think of a “target” as a way to quantify success: your program was worth your time and effort if <target> # and <target> % of participants achieve a particular level of results.
[End COACH text]
[End Screen D-13 of 16]
[Start Screen D-13_M of 16/Module D>Targets>Museum example (13-M)]
From what you know about the MoNA Links program, what would you choose as targets? Then, click the question-mark icon to see what those involved with the program chose as targets.
Outcome 1: Teachers use a curriculum unit linking an exhibition to their classroom teaching.
Indicator: # and % of teachers who develop at least one curriculum unit linking an exhibition to their classroom teaching during their training year.
Applied To/Data Source & Interval:
Teacher portfolios from teacher participants in MoNA Link.
End of 2nd and 3rd year of MoNA Link
Targets [Clicking on a question mark presents the following text: 100% (40 teachers)]
Indicator: # and % of teachers who teach at least two lessons in their classroom based on a Museum exhibition during their training year.
Applied To/Data Source & Interval: Teacher reports from teacher participants in MoNA Link.
Observations of teacher participants and documentation by Museum art educator
Throughout 2nd and 3rd year of MoNA Link
Targets [Clicking on a question mark presents the following text: 100% (40 teachers)]
Outcome 2: Teachers will see the Museum as an effective resource for instructional support.
Indicator: # and % of teachers who rate the Museum as an effective or highly effective resource for instructional support
Applied To/Data Source & Interval: Participant survey for participant teachers
Beginning of Summer Institute; end of training year
Targets [Clicking on a question mark presents the following text: 90% (36 teachers)]
Indicator: # and % of teachers who bring their classes to visit the Museum at least once in the year following their training
Applied To/Data Source & Interval: Group visit log/ participant teachers
June 2006, June 2007 (after IMLS funding)
Targets [Clicking on a question mark presents the following text: 75% (30 teachers)]
[Start of Coach]
Notice that the Museum of Northwest Art has set very high targets (100%) for the two indicators of Outcome 1. That’s bold and unusual. Before you set a target of 100%, be sure you’ve reviewed the difficulty of the task and the commitment of the participants, and be sure to explain your reasoning to stakeholders and get their approval. You set the targets, not your funder or partners. But you want to have a realistic chance of meeting the targets you set.
[End of Coach]
[End Screen D-13_M of 16]
[Start Screen D-13_L of 16/Module D>Targets>Library example (13-L)]
From what you know about the Riverton Memoirs program, what would you choose as targets? Then, click the question-mark icon to see what those involved with the program chose as targets.
Outcome 1: Participants show improvement in their writing.
Indicator: # and % of participants who revise two pieces, commenting on what they tried to improve in each revision. AND
Applied To/Data Source & Interval: Expert evaluation of all workshop participants who have at least two sets of essays (original, revised and revision comments), 8 months into the program
Target [Clicking on a question mark presents the following text: 90% for those having 2 sets of essays (original, revision, writer’s revision goals)]
Indicator: # and % of participants whose revised pieces (two before-and-after versions) are judged better than the originals in a blind (no writer or dates given) holistic grading by a creative writing specialist AND
Applied To/Data Source & Interval: Expert evaluation of all workshop participants who have at least two sets of essays (original, revised and revision comments), 8 months into the program
Target [Clicking on a question mark presents the following text: 85% for expert evaluation of writers’ work with independent criteria]
Indicator: # and % of participants whose revised pieces are judged better than the originals (for two sets) by a creative writing specialist when judged by the writer’s goals in the revision.
Applied To/Data Source & Interval: Expert evaluation of all workshop participants who have at least two sets of essays (original, revised and revision comments), 8 months into the program
Target [Clicking on a question mark presents the following text: 95% for expert evaluation according to writers’ revision intent]
Outcome 2: Participants demonstrate they feel themselves to be part of a community of writers.
Indicator: # and % of participants who can name three ways they feel a part of the community of writers.
Applied To/Data Source & Interval: Exit survey of program participants, end of program
Target [Clicking on a question mark presents the following text: 80% on questionnaire]
Indicator: # and % of participants who act as part of a community of writers after the program (produce writing, continue library writers’ group, etc.)
Applied To/Data Source & Interval: Phone interview of program participants with checklist of behaviors, 3 months after the program ends
Target [Clicking on a question mark presents the following text: 60% on post-group writerly activity]
[Start COACH text]
Be mindful of the number of participants you expect in your program when you set your targets. Why? If you have only 10 participants and one does not show the indicated change, you are left with 9 successful participants (90%); if two fall short, you’re at 80%, and so on. It makes no sense, then, to set a target of 85%. Set your targets at intervals that fit the number of your participants.
[End COACH text]
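The coach’s arithmetic can be sketched in a few lines of Python (an illustrative aside, not part of the course materials; the function name is our own). With N participants, each of whom either meets an indicator or does not, the only achievable success rates are multiples of 100/N, so the number of participants fixes how finely you can set a target:

```python
def achievable_percentages(n_participants):
    """Success percentages actually achievable when each of
    n_participants either meets an indicator or does not."""
    return [round(100 * k / n_participants, 1)
            for k in range(n_participants + 1)]

# With 10 participants, results can only land on multiples of 10%,
# so a target of 85% can never be hit exactly.
print(achievable_percentages(10))

# With 20 participants the steps shrink to 5%, and 85% becomes achievable.
print(achievable_percentages(20))
```

This is why the coach advises targets “at intervals that fit the number of your participants”: for a 10-person workshop, 80% or 90% are meaningful targets, while 85% is not.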
[End Screen D-13_L of 16]
[Start Screen D-14 of 16/Module D>Check your understanding (14)]
The following examples give you a chance to work on stating measurable outcomes. You’ll be asked to choose the better statement of either outcomes or indicators.
Remember:
Outcomes should be stated as changes, in the people participating, of their behaviors, attitudes, skills, and knowledge (BASK), as well as their status and life conditions.
Indicators are measures that indicate whether the change has taken place.
[End Screen D-14 of 16]
[Start Screen D-14_M of 16/Module D>Check understanding>Museum example (14-M)]
Review the materials for the Children’s Museum MAPS Exhibit.
For the two items below, click on the star icon indicating the better statement of the outcome. (Remember: “participants” are children in grades 3-5 and their families)
Outcome Statement 1
Participants learn about maps from the exhibit. [Clicking on the statement presents the following text: Too broad to be useful. What is the change in skills that shows learning?]
Participants demonstrate improved knowledge of map-making. [Clicking on the statement presents the following text: Yes, the change is in knowledge about map-making.]
Outcome Statement 2
Participants have positive family learning experiences in the exhibit. [Clicking on the statement presents the following text: Yes, the positive experience has to do with being in a family AND being in the exhibit.]
Families have fun together at the exhibit. [Clicking on the statement presents the following text: This doesn’t state a change clearly brought about by visiting the exhibit.]
Indicator 1- Outcome: Participants demonstrate knowledge of world geography.
# & % of participants who can list 3 or more points from the exhibit’s main messages related to world geography. [Clicking on the statement presents the following text: Yes, this shows they understand and remember important points from the exhibit.]
# & % of participants who can name three continents, three oceans and three lakes. [Clicking on the statement presents the following text: This shows knowledge but it’s not clearly related to the exhibit.]
Indicator 2- Outcome: Geographic educators use exhibit materials for teaching geographical concepts.
# & % of geographic educators who report use of at least one element of website materials for teaching geographical concepts, administered in a web-based survey to a randomly selected group of members in geography educators associations. [Clicking on the statement presents the following text: Yes, this indicator fairly surveys a selection from the target audience and asks them to report use of the materials. What do you think a reasonable target would be for this indicator?]
# & % of requests (written and at the website) for the unit of study available from the MAPS exhibit. [Clicking on the statement presents the following text: Requests alone don’t indicate actual use of the materials. This is a useful additional indicator, but is not sufficient alone.]
[End Screen D-14_M of 16]
[Start Screen D-14_L of 16/Module D>Check understanding>Library example (14-L)]
Review the materials for the Whitney Library.
For the two items below, click on the star icon indicating the better statement of the outcome.
Outcome Statement 1
Students know about the availability of InfoPods. [Clicking on the statement presents the following text: Too broad. The library needs to know whether students not only know about but also USE InfoPods in class projects.]
Students create information-rich products. [Clicking on the statement presents the following text: Yes, though this doesn’t seem to mention the library, the InfoPod reference services are designed to help students create projects that use information resources adeptly.]
Outcome Statement 2
Students create technologically adept products for their projects. [Clicking on the statement presents the following text: Yes, although this doesn’t seem to mention the library, the InfoPod computer consulting services are designed to help students create better projects.]
Computer consultants report an increase in the numbers of times they are asked for assistance. [Clicking on the statement presents the following text: No. This is staff-oriented, not audience-oriented. Student outcomes are the goal.]
For each outcome listed below, click on the star icon indicating which is the better stated indicator of that particular outcome.
Indicator 1- Outcome: Students work effectively in groups.
# & % of librarians who observe students working in groups in the InfoPod area. [Clicking on the statement presents the following text: No. An indicator should describe a change in participants. While students may appear to be working productively, a librarian can’t really observe just what they are doing or how effective it is. Starting your indicator with the participant will help you avoid this error.]
# & % of students in InfoPod groups who score 3 or higher (scale 1-5) on the group participation assessment rubric. [Clicking on the statement presents the following text: Yes. An existing measure, such as a teacher’s grade of group effectiveness, can target an outcome precisely.]
Indicator 2 – Outcome: Students create information-rich products
# & % of InfoPod students whose group projects include high-quality information resources, as demonstrated by bibliographies from InfoPod-using student projects achieving 3 or higher on an information-quality rubric. [Clicking on the statement presents the following text: Yes, this indicator gauges mastery by using the instructor’s assessment. Notice that for this informal check of effectiveness, a comparison to non-InfoPod users is not necessary. What constitutes success is that InfoPods help students master a skill. What do you think a reasonable target would be for this indicator?]
# of footnotes & % of pages with footnotes in papers of InfoPod users compared to the same in papers of non-InfoPod users. [Clicking on the statement presents the following text: Not really. This indicator does include a comparison of InfoPod users and non-users, which is nice but not mandatory. But the indicator is a count, not an assessment of whether the students have mastered a skill.]
[End Screen D-14_L of 16]
[Start Screen D-15 of 16/Module D>Applying understanding]
You have reached the end of the instruction for this module. It’s time to apply your understanding.
Working with your own program, complete Section V of the Logic Model worksheet. (Submitting your work on time, even if it is very rough, will get you helpful instructor feedback in time for you to make revisions before you submit your final draft.) Get help by reviewing the filled-in forms in the examples, accessed by clicking Cases, or use the blank form, keyed to the pages of the Shaping Outcomes instructional modules.
Follow your instructor’s directions for submitting your Logic Model worksheet for comments. Check whether additional work has been assigned.
[End Screen D-15 of 16]
[Start Screen D-16 of 16/Module D>Resources]
Diamond, Judy (1999) Practical Evaluation Guide: Tools for Museums and Other Informal Educational Settings. Walnut Creek: Alta Mira Press.
Includes data collection strategies for assessing museum learning. In addition, the guide provides clear instructions, tools, and approaches for understanding how well programs and exhibits communicate their main messages to museum audiences.
Durrance, Joan C. and Karen E. Fisher-Pettigrew (2004). Outcome toolkit version 2.0, [website of Information Behavior in Everyday Contexts]. Retrieved August 18, 2005, from http://ibec.ischool.washington.edu/ibecCat.aspx?subCat=Outcome%20Toolkit&cat=Tools%20and%20Resources
This toolkit is a simple, flexible, and effective methodology for evaluating outcomes, targeting libraries and community-focused services. It includes worksheets and examples, and the method is being piloted by a group of public libraries, large and small, for a variety of typical library programs.
Hernon, P. & Dugan, R.E. (2002). Action plan for outcomes assessment in your library. Chicago, IL, American Library Association.
The book provides data collection tools for measuring learning and research outcomes, linking outcomes to user satisfaction, as well as case studies from actual outcomes assessment programs in libraries.
LibQUAL survey of library-user satisfaction, see http://www.libqual.org
A source for standardized surveys that can serve as models or can be budgeted as part of your evaluation plan.
National Survey of Student Engagement, see http://nsse.iub.edu/index.cfm
A source for standardized surveys that can serve as models or can be budgeted as part of your evaluation plan.
Nichols, Susan K., compiler. (1999). Visitor surveys: a user's manual. Washington, D.C.: American Association of Museums, Technical Information Service.
Part of the AAM Professional Practice Series, this volume provides a step-by-step guide to survey development by Randi Korn and Laurie Sowd, along with reprints of selected readings about museum evaluation.
Schecter, E.I. (2005). Internet resources for higher education outcomes assessment, [webpage of North Carolina State University]. Retrieved August 9, 2005, from http://www2.acs.ncsu.edu/UPA/assmt/resource.htm
Schecter has compiled this metasite with information on assessment in higher education programs. Major subdivisions include discussion lists, forums, archives, articles and lists of links. Much of the assessment information can be applied to other programs and institution types.
StatSoft, Inc. (2005). The electronic statistics textbook. Retrieved August 12, 2005, from http://www.statsoft.com/textbook/stathome.html
This online guide to statistics contains a glossary, interactive “Statistical Advisor,” distribution tables and elementary concepts to help explain targets, percentages and measurement in evaluation. Although much of the material is highly advanced, the layperson will likely find some of the information useful.
W.K. Kellogg Foundation. (2005). Evaluation overview. Retrieved August 12, 2005, from http://www.wkkf.org/Programming/Overview.aspx?CID=281
The W.K. Kellogg Foundation is a 75-year-old philanthropic organization. This website provides evaluation approaches, questions, planning, budgeting, and management tools for communicating crucial information to stakeholders and program audiences. It is an overview of how to develop and capture information within an organization.
American Evaluation Association http://www.eval.org/
The American Evaluation Association is an international professional association of evaluators devoted to the application and exploration of program evaluation, personnel evaluation, technology, and many other forms of evaluation. Topic Interest Groups (TIGs) include evaluation of arts and culture, and non-profits.
Association of Science and Technology Centers http://www.astc.org/resource/visitors/index.htm
The ASTC web site maintains a section on visitor studies including readings, abstracts of recent studies, and a listing of ASTC evaluation publications.
Committee on Audience Research and Evaluation (CARE). http://www.care-aam.org/
A Standing Professional Committee of the American Association of Museums (AAM), CARE’s website contains information for those interested in audience research and evaluation in museums, zoos, aquariums, historic sites and other visitor institutions. The site has publications, presentations from “Visitor Studies 101” sessions at AAM, professional standards guidelines, and links to evaluation resources.
University of Wisconsin-Extension Cooperative. http://www.uwex.edu/ces/pdande/index.html
A site of the Program Development and Evaluation Unit of UW-Extension with a wealth of evaluation materials and resources. Included is their list of evaluation publications, “Quick Tips” (such as “The Basics of Good Evaluation Reporting”), and program development materials.
Texas State Library and Archives Commission. Outcome Measures, http://www.tsl.state.tx.us/outcomes/
This site, maintained by the state commission, offers resources (annotated links to online tools, publications, and websites, plus bibliographies of print publications), contributed examples (outcomes measurement materials developed and contributed by library staff, identified by sample type or phase, by topic, and by library type), a forum (threaded discussion with peers on topics related to outcome measurement in libraries), and responses to frequently asked questions.
Visitor Studies Association. http://www.visitorstudies.org/
International organization devoted to enhancing and understanding visitor experiences in museums, zoos, nature centers, visitor centers, historic sites, parks and other informal learning settings.
[End Screen D-16 of 16]
[End of Module D]