To address this question, we conducted an evaluability assessment using information gathered from document reviews and site visits. The documents we reviewed provided information on the principles of program intake. We supplemented that with information from site visits to Project SEARCH sites in and around Cincinnati, Ohio, and Orlando, Florida, and assessed how the documented guidelines are implemented in practice. During the site visits, we met not only with Project SEARCH staff but also with staff from other local and state agencies that collaborate with the program, including schools, school districts, vocational rehabilitation (VR) agencies, and developmental disability agencies. When meeting with staff from other agencies, we also discussed the potential for those agencies to collect and share data on the youth they serve that could be used for an impact evaluation.
Recommendations
Based on information gathered from the document reviews and site visits conducted for this evaluability assessment, we propose two leading evaluation designs: one under the existing setting, in which we take Project SEARCH sites, students, and other partners as given; and another under a demonstration setting, in which the evaluation itself helps determine the setting within which these parties interact.
- Existing setting design. Under the existing setting scenario, we propose a matched comparison group design in which eligible youth from areas not served by Project SEARCH are matched with similar individuals from areas served by the program.
- Demonstration design. Under a demonstration scenario, we propose a randomized experimental evaluation in which school districts or local education agencies randomly assign youth enrolled in the demonstration either to a treatment group that would have the opportunity to apply for Project SEARCH services or to a control group that would have the opportunity to receive usual services from the state VR agency.
For practical reasons, we recommend pursuing the existing setting design first. We believe this design would meet the standards of rigor necessary for the findings to credibly inform policy and, importantly, would be by far the most feasible and least expensive to implement. Although the demonstration design would use what program evaluators consider the gold standard for impact evaluation, a randomized controlled trial, it would also require significantly more resources and a longer time frame to implement. Implementing either of the leading evaluation designs would require collaboration with Project SEARCH and other entities.