Over the past 35 years, JSI has been dedicated to continuously improving its work. From the very first family planning project in 1978 to current projects in Liberia, Pakistan, the United States, and beyond, evaluation and accountability have been integral parts of JSI’s successful partnerships to produce impactful programs and initiatives.
While each project is required to compile internal assessments, three years ago I spearheaded JSI’s first U.S. Health Services department-wide evaluation, our Quality Improvement Initiative. By examining a spectrum of projects that vary by contract type, location, and subject, we have drawn a range of big-picture conclusions that JSI can apply across its projects.
At first, I was the only evaluator and I looked at conclusions from just a few projects, but gradually the team and our scope of work have grown. Each year we now engage in an eight-step process to gather feedback about JSI’s capabilities for U.S. health projects and to explore the successes and failures of some of these programs.
1. Selection: First, we determine the subset of programs to evaluate. The goal is to include both successful projects and less successful ones, so that we cover as wide a range as possible and gather diverse data.
2. Compilation: After selecting our case studies, we work with each project director to gather products from the initiative, including deliverables such as final reports and communication materials.
3. Evaluation: Then, we review these materials to determine which elements we think are effective or ineffective.
4. Collaboration: After determining our own conclusions, we convene a meeting of all the JSI staff members connected to the project to debrief and discuss their thoughts on the project’s implementation.
5. Client Feedback: With the project team’s feedback in mind, we meet with the client to discuss what they liked or didn’t like about the project’s implementation and what changes they would suggest for the future.
6. Reflection: Following this conversation, we return to the project teams to share what the client has said, and together we discuss the lessons we can take away from the project based on this feedback.
7. Aggregation: We then aggregate the lessons learned from all of the projects we’ve evaluated to draw overall conclusions and propose solutions for the JSI U.S. Health Services department.
8. Presentation: All conclusions from the process are presented and discussed broadly with Division staff at a division-wide JSI U.S. Health Services meeting.
Although this process requires time and effort, at JSI we believe that we can’t simply assume our work is always excellent; there are always ways to improve. By intentionally sharing these best practices, we make sure that the work we do department-wide is innovative, creative, and effective, all in a measurable and repeatable way.
Being proactive makes us more reflective of and invested in our work. As a result, we’re discovering JSI’s strengths and weaknesses and are now better equipped to consistently build on successes and intentionally remedy failures.
Additionally, we have found that directly engaging with our clients gives us more than just data about our capabilities: it builds productive working relationships that attune JSI to clients’ needs, regardless of the project, making us a more effective business partner.
I have said that in public health, our goal should not be simply to say that we did a good thing, or even the right thing. Rather, we should strive to say that we did something that had a real impact on the population. A key facet of this impact is quantifying our work and, through evaluation, showing how it was successful and why it should be done again.
By engaging in department-wide internal evaluation, JSI is an even better steward of our funders’ resources, advocate for our clients’ needs, and developer of innovations, to the benefit of clients across the United States and in 66 other countries around the world.