This post is a short conversation with Jamie Gamble (KDE Hub consultant), who spoke at the May 8th webinar on Developmental Evaluation (DE).
Barb Riley (KDE Hub scientific director): Jamie, that was an awesome overview of DE on May 8th; thank you. In the webinar, you described different ‘Developmental Evaluation situations’. Since then, the Hub shared with you a bit more background on MHP-IF projects, including pandemic adaptations. Might you be able to paint a bit of a picture of what DE could look like in even a small subset of examples?
Jamie: One likely situation is the modification of content and/or mode of delivery in response to COVID-19; for example, in-person services that are now being offered online, or adjusted to accommodate physical distancing. This could also mean introducing new material that supports mental health challenges heightened by the pandemic. In any of these cases, a developmental evaluation approach to the new activity can help those working on it learn quickly and adjust. A little up-front work, followed by a mini-review after the new activity, can help sharpen the thinking and keep everyone up to speed on what is being learned and what the next steps will be.

In the lead-up, the more you can articulate what you are doing and why, the better. For example, if you are trying a virtual workshop for the first time, you may be thinking: “We know that x was an essential part of what we offered before, and we know that will be challenging online; here is what we are going to try in response.” The trick here is not to develop some big, clunky logic model or elaborate documentation, but to be clear-eyed and explicit about the strategy and the assumptions you have about how and why it will work in a certain way.

The next step is to debrief what happened soon after. Some prefer to do this immediately, while it’s fresh; others find a bit of time to think about it helpful. There is no hard and fast rule, but don’t wait too long. It’s helpful to have a bit of structure for these debriefs, since ideally you start to build them into your evaluative practice over time. The What? So What? Now What? questions introduced in the DE webinar slides are one example of a structure; after-action reviews are another. Here are some questions that you might include:
- We thought x might happen in this way. What did we see that confirmed our assumptions? What did we see that challenged some of our assumptions? What, if anything, surprised us?
- Is there any other evidence that we have, or could gather, that would help us better understand what worked and what didn’t?
- What does this mean we might need to do more of, less of, change, adapt, stop, or create the next time we do this?
It’s also good to include a brief process check: Did this mini-DE cycle work for us? Should we do it again, and if so, is there anything we’d do differently?
Another common situation is tailoring a program to meet the needs of a different audience, for example adapting mental health supports to the unique needs of newcomers or the LGBTQ+ community. This could be an organization or initiative with experience serving different audiences that is now expanding its services, or an organization with experience serving a particular community that is now expanding the scope of its mental health promotion supports. In either case, the wisdom of program developers lies in blending their understanding of a unique context, the fit of evidence-informed practice in that situation, and the adaptations to content and approach that help achieve the best results for those being served. These adaptations are best guided by a good understanding of principles and boundaries, and over time by watching, gathering evidence, and learning about the optimum mix. You might ask:
- What are the essential principles that guide our work in this area? To what extent and in what ways are we able to successfully work within these principles? Are there times when they are particularly challenging, and if so, what can we learn from that (about the principles or what we should be doing)?
- To what extent and in what ways are we applying evidence-based practice? Are there times when we deviate from that, and if so, what are the implications? How is that practice best applied in our context?
These are two common situations, and of course there are others. What are ways you are thinking about applying DE? Let us know, and we can explore these in future posts.
Barb: Based on your experience working with different groups on DE, how might you suggest we support DE efforts with MHP-IF projects?
Jamie: Here are three ideas for supporting DE:
- Make it a team activity. Shared sensemaking, debate, dialogue and critical thinking help develop a shared understanding of what is being learned and its relevance to the work. This also helps build evaluation as a practice and capability within the team.
- Keep it simple. The more data gathering and analysis can be integrated into the natural activity of a project and the functioning of an organization, the better. If it starts to feel like a burden, it tends to stop happening. From time to time, you may need to roll up your sleeves for a deeper dive, but it’s better to save those for key moments.
- Remind people throughout about the nature of adaptive work. This could include managers, board members, community members, the people the project serves, partners, funders, etc. From time to time, update them on what is being learned and where progress is being made. This helps people stay with you on the journey.
Barb: Super helpful advice, Jamie; thank you! Thanks also for working with the Hub on other DE supports. For those wanting to explore some of these DE supports: check out the Hub website for resource annotations on DE, and visit the online forum on DE. The forum can be a prime place for learning from each other: sharing your experiences in applying DE and raising any questions that come up. The Hub and Jamie will be right there with you!