Program Evaluation: Define Questions and Methods

Steps five through eight will help you define the questions your evaluation should address and develop the methods you'll use to conduct your evaluation (learn more about the other steps in general program evaluations).

Step 5: Decisions and Questions. In this section you will use the program logic model tables to: help clarify the reasons why you want to undertake an evaluation; identify higher-level, more general questions the evaluation must answer and prioritize them; and select specific, researchable questions the evaluation must answer. The evaluation must ask questions whose answers will satisfy the evaluation objectives. This is a very important step: if the answers to the questions do not satisfy the objectives, the evaluation resources will be wasted.
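As a rough sketch only (the class and field names below are assumptions for illustration, not part of any FEMP tool), the Step 5 linkage between evaluation type, objectives, general questions, priorities, and specific researchable questions can be kept in a small data structure so that prioritization decisions stay traceable:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: keep each general question linked to its priority
# and to the specific researchable questions derived from it.

@dataclass
class GeneralQuestion:
    text: str
    priority: str                       # "High", "Medium", or "Low"
    specific_questions: list = field(default_factory=list)

@dataclass
class EvaluationPlan:
    evaluation_type: str                # e.g. "Needs/Market Assessment"
    objectives: list
    questions: list

    def questions_by_priority(self, priority):
        """Return the general questions at the given priority level."""
        return [q for q in self.questions if q.priority == priority]

plan = EvaluationPlan(
    evaluation_type="Needs/Market Assessment",
    objectives=["Identify which federal agencies most need FEMP assistance"],
    questions=[
        GeneralQuestion(
            "What is preventing federal agencies from giving energy "
            "efficiency a higher priority?", "High",
            ["What are the market and agency barriers to action?"]),
        GeneralQuestion(
            "Which agencies most in need have not accepted assistance?",
            "Medium"),
    ],
)

# High-priority questions get resources first; the rest can be scheduled
# into later years of a multi-year evaluation strategy.
for q in plan.questions_by_priority("High"):
    print(q.text)
```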
To develop relevant questions, it is useful to start with a table that arranges the following from left to right:

- What type of evaluation is needed to develop the needed information?
- What kind of information is needed to inform decisions about the program?
- What are the evaluation objectives? The objectives should be appropriate to the decisions under consideration and the information needed.

Specify the higher-level general questions that need to be answered to satisfy the evaluation objectives. Prioritize the general questions: determine what information is most needed and when. These prioritized general questions begin the process of specifying the more detailed questions that will supply the required information. Some questions may go unanswered if it is decided that the more important questions require all of the current evaluation resources; a multi-year evaluation strategy is helpful here because it can schedule coverage of all the general questions over a period of time.

Select specific, researchable questions that can be asked to answer the general questions. The most important questions should receive enough resources to develop defensible answers. Ensure that questions about outcomes include both direct and indirect outcomes. Pose all of the questions that you think are relevant; you will screen them later.

Evaluation Type, Evaluation Objectives, General Evaluation Questions, Priorities, and Specific Research Questions (the priority levels shown are for illustrative purposes only):

Needs/Market Assessment
Objectives: Identify how FEMP can accelerate the efficiency with which federal agencies use energy. Identify which federal agencies most need FEMP assistance. Create a baseline for future evaluations.
General evaluation questions:
- Q1 (High): What is preventing federal agencies from giving energy efficiency improvement a higher priority for annual funding?
- Q2 (High): What do federal agencies need that FEMP can provide to increase the number of efficiency upgrades they implement?
- Q3 (Medium): What are the federal agencies most in need of FEMP assistance that have not accepted it?
- Q4 (High): What energy efficiency measures have been installed and/or what is the level of energy use prior to participation in FEMP's program? (Collected prior to participation and evaluation.)
Specific researchable questions: What are the market and agency barriers to adopting better energy and water management technologies? How are FEMP actions directly and indirectly meeting specific customer needs or lowering barriers to action? Which customers is FEMP serving, and which need its services most? What is the energy use per square foot of an office building prior to participation?

Process Evaluation
Objectives: Assess the adequacy of program funding relative to objectives. Determine if funding is being used as intended. Determine if populations that can benefit from the program are being served well. Identify opportunities to improve the effectiveness of activities and outputs.
General evaluation questions:
- Q1 (High): What level of overall investment in energy efficiency are we leveraging with our spending?
- Q2 (High): Do the federal agencies perceive that we are helping them meet their energy-efficiency upgrade goals?
- Q3 (High): How can we make our services more productive for federal agencies?
Specific researchable questions: How much does FEMP spend, and on what activities? What is the total investment in energy and water projects? Are FEMP partnerships leveraging funds and capabilities? What is the quality of FEMP services and products? What can FEMP do to improve its services and its service delivery, generally and specifically? Is FEMP reaching the right customers, and are they satisfied?

Outcome Evaluation
Objectives: Quantify the achievements of program outputs and outcomes against the planned time frame. Assess whether further outcomes are possible and how to achieve them.
General evaluation questions:
- Q1 (High): Have overall energy savings by the federal government increased from year to year?
- Q2 (High): How many quads of energy savings are in the pipeline?
- Q3 (Medium): Is progress toward energy-efficiency upgrade goals, by agency, satisfactory? Can agencies meet these goals?
- Q4 (Low): Are there any actions possible, by agency, that have not been undertaken?
Specific researchable questions: Is FEMP making progress, as measured by FY2…, the Procurement Challenge, or savings identified in audits, demonstrations, and projects in the pipeline? Is the federal government, by agency, on track to meet its goals? What agency actions/projects (retrofit, procurement) are possible or in the pipeline, demonstrating that those goals are being met?

Impact Evaluation
Objectives: Assess the net effect of the program's activities, i.e., the portion of measured outcomes attributable to the program rather than to external influences.
General evaluation questions:
- Q1 (High): How much were federal agencies achieving before each of FEMP's initiatives?
- Q2 (High): How much of the measured outcome can the program claim?
- Q3 (Medium): Which FEMP initiatives helped more than others?
- Q4 (Low): What would have caused federal agencies to invest in energy efficiency upgrades had FEMP's programs not existed?
Specific researchable questions: To what degree did a FEMP program cause specific measured benefits? Which FEMP tools helped more than others to create the benefits? What external factors would have caused agencies to create savings without FEMP?

Cost-Benefit Analysis
Objectives: Determine program cost-effectiveness. Determine the cost-effectiveness of individual outcomes, outputs, or goals, where possible.
General evaluation questions:
- Q1 (High): Are the benefits from FEMP actions greater than the total of FEMP and customer costs?
Specific researchable questions: What are the energy savings and emissions reductions attributable to FEMP initiatives, as determined by an impact evaluation? What are the savings to the taxpayer as a result of FEMP initiatives? What are FEMP's costs associated with the benefits that will be quantified?

Step 6: Develop Research Design.
The evaluation research design is the research strategy that permits defensible findings to be deduced from the evaluation data. It consists of:

- The questions and indicators for which data will be collected.
- An inventory of existing data and identification of data gaps.
- The method and timing by which the data will be collected.
- The populations from which the data will be collected.
- The choices of research accuracy, sampling precision and confidence level, and degree of defensibility for the results.
- The method of analysis used to produce the evaluation results.
- The method of reasoning from the results to answers to the questions.

Developing the research design entails creating the logical scheme for deducing useful answers from the collected data. The design must specify how the answers can be developed from each of the above evaluation activities, which in turn requires considering the alternatives available for each activity. The next sections discuss these alternatives, their uses, and their resource requirements.

Select Design Type. The research design can vary from simply tabulating the findings of a customer satisfaction survey or a count of outcomes to inferring the net outcome of the program from the results of an experiment. Methods such as tabulating descriptive measurements and finding the statistical significance of a relationship between variables are not usually thought of as research designs, but the process of going from the results of these analytical procedures to answers to evaluation questions involves logic and therefore constitutes a research design. These methods are not discussed further here because the logical process involved in using them to answer questions is relatively straightforward. They are mentioned to stress that the program manager should understand how the evaluation will derive answers to the evaluation questions from the data collected.
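One of the design choices above, sampling precision and confidence level, translates directly into a required sample size. As a hedged illustration (not a method prescribed by this guide), a common formula for estimating a proportion from a survey is n = z²·p(1−p)/e², with a finite-population correction when the population of sites or agencies is small:

```python
import math

# Two-sided z values for common confidence levels (standard normal).
Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def required_sample_size(precision, confidence, population=None, p=0.5):
    """Sample size to estimate a proportion within +/- `precision`
    at the given confidence level. p = 0.5 is the conservative default
    (it maximizes p * (1 - p), and thus the required n)."""
    z = Z_SCORES[confidence]
    n = (z ** 2) * p * (1 - p) / precision ** 2
    if population is not None:
        # Finite-population correction for small populations.
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# e.g. +/- 5 percentage points at 95% confidence, across 500 facilities
# (the facility count is a made-up example value)
print(required_sample_size(0.05, 0.95))                  # 385
print(required_sample_size(0.05, 0.95, population=500))  # 218
```

Tightening the precision or raising the confidence level grows the sample, and therefore the data-collection budget, quickly; this is where the "degree of defensibility" trade-off shows up in concrete resource terms.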
The rest of this discussion describes the special type of research design required for impact evaluations, known as an experimental design. If you need to determine the proportion of a measured outcome that can be attributed to the program instead of to external influences (i.e., the program's net effect), you need this type of design. The design should be able to forecast what actions participants would have taken (outcomes) had your program not existed. The difference between what participants would have done and what they actually did is the amount of the observed outcome that you can attribute to your program. Evaluation research designs that allow you to make such claims of effect are called "experimental" or "quasi-experimental" designs. Experimental and quasi-experimental designs are data-collection and analysis strategies that use deductive reasoning to estimate whether a program's outcomes can be attributed to its activities and outputs or whether they were likely to have occurred anyway. True experimental designs, especially those using randomly assigned participant and control groups with before-after measurement, are the "gold standard" of evaluation research; however, they are rarely used in energy program evaluations because they require random assignment of the target population to participant and control (non-participant) groups. This is not possible with programs whose success depends upon voluntary participation.
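The "difference between what participants would have done and what they actually did" can be sketched numerically. The example below is an illustrative difference-in-differences calculation, one common quasi-experimental approach, not a method this guide prescribes; all the numbers are hypothetical. The comparison group's before-to-after change stands in for what participants would have done without the program:

```python
from statistics import mean

def net_program_effect(part_before, part_after, comp_before, comp_after):
    """Difference-in-differences estimate of the average outcome change
    attributable to the program (quasi-experimental sketch)."""
    participant_change = mean(part_after) - mean(part_before)
    # The comparison group's change approximates the counterfactual:
    # what participants would likely have done anyway.
    comparison_change = mean(comp_after) - mean(comp_before)
    return participant_change - comparison_change

# Hypothetical energy-use intensities (kBtu/sq ft), before and after.
part_before, part_after = [100, 110, 95], [80, 88, 78]   # fell ~20
comp_before, comp_after = [102, 108, 97], [97, 103, 92]  # fell ~5 anyway
effect = net_program_effect(part_before, part_after, comp_before, comp_after)
print(round(effect, 1))  # -14.7: net reduction attributable to the program
```

The participant group improved by about 20 units, but the comparison group improved by about 5 on its own, so only the roughly 15-unit difference is claimed as program effect. The defensibility of the claim rests entirely on how well the comparison group mimics the participants, which is exactly the weakness voluntary participation introduces.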