Hi, my name is Nick Amland, and I’m a field fellow for Dimagi. I spent the last 8 months in Tanzania working on a CommCare project, and now I’m back in the US leading our Active Data Management (ADM) initiative. This post describes our current approach and poses two questions to the group about effective performance monitoring.
Our motivation for ADM stems from our experience over the last few years: the most successful projects use active feedback loops in which somebody actively uses the data we collect to improve field operations. We’ve also seen a lot of the data we collect go unused. This is not surprising, because most of the health programs we work with were not designed to expect real-time data. ADM strives to help partners make active use of the data collected and achieve evidence-based change.
Part of my desire to drive this initiative comes from my work in Tanzania, where we had a fully employed and incentivized CHW workforce, our own employed supervisors, and still struggled with how best to utilize all of the data we had available.
Approach: Monitoring and follow-up action are two distinct elements of one process. Monitoring means reviewing reports and data analysis, which in turn informs follow-up action. Each relies on the other, and they must be closely linked for evidence-based change and performance improvement to occur. However, we identified a significant gap that creates a barrier to effective performance monitoring and follow-up action.
Several factors create this gap, including poorly designed reports and data requirements, busy project managers, and unclear prioritization and expectations of effort. Our approach concentrates on closing this gap by making it easier for a project manager to access, correctly interpret, and act on data: we use short, routine reports that explicitly link analysis to follow-up action, and we work with partners to integrate this process into their existing supervisory structure.
To say a little more about our effort to link report analysis to follow-up action: we tackle this problem in three main ways, by simplifying report design, establishing clear follow-up, and setting metric targets.
Design Simplicity: In each of our report prototypes, design simplicity is paramount. Everyone knows complexity kills, and our working environment amplifies this effect. If a project manager needs more than two minutes to interpret a report, we believe the chance of follow-up action drops sharply. Simplifying a report’s design increases the chance that a project manager reviews and acts on the information presented.
Clear Follow-Up: In addition to design simplification, we provide a list of follow-up items. This helps to more explicitly link the information to action. For example, a graph or chart presenting aggregated data might be insightful, but the step from this graph to action isn’t direct enough. It is a step removed.
A graph or chart requires both more effort and a correct interpretation to lead to meaningful action. We believe this one step decreases the chance a project manager will act. The name of an individual CHW with specific follow-up action is the most actionable piece of information. As the primary output of our report prototypes, we generate a list of CHWs with associated follow-up actions as a way to incrementally address weak performance.
However, we realize that identifying specific follow-up items accomplishes only half of this process. We intend to track these items from open to close in order to confirm action follows the identification of these issues.
Question 1: Is individual or group follow-up the best way to act on information regarding performance? For instance, if you had a performance distribution of all CHWs, would you follow-up individually or as a group?
Question 2: To generate our follow-up items, we’re selecting both the top and bottom performing CHWs in numerous metric categories. The bottom will receive supportive supervision and the top will receive positive reinforcement. Do you foresee any issues in doing this?
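For concreteness, the top-and-bottom selection described above could be sketched as follows. This is a minimal illustration; the metric, names, and cut-off are invented:

```python
def select_for_followup(scores: dict[str, float], k: int = 3):
    """Given {chw_name: metric_value}, return the bottom-k CHWs
    (for supportive supervision) and the top-k (for positive
    reinforcement), each as a list of names."""
    ranked = sorted(scores, key=scores.get)   # ascending by metric value
    return ranked[:k], ranked[-k:][::-1]      # (bottom, top best-first)


# Invented example: client visits this month per CHW.
visits = {"A": 2, "B": 14, "C": 7, "D": 21, "E": 5}
bottom, top = select_for_followup(visits, k=2)
# bottom == ["A", "E"]; top == ["D", "B"]
```

Running this per metric category yields the short, named follow-up list our reports center on, rather than a full distribution.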
Metric Targets: Without an expectation or target, monitoring is severely limited and can lead to unclear, even misguided, follow-up action. We’re strongly encouraging our partners to set targets for the level of activity they expect (e.g. designate CHWs to visit x clients every y days). This adds context to the data analysis, which project managers can use for an easier, more appropriate, and more meaningful interpretation of the data.
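Once a target is set, the comparison per CHW is simple arithmetic. A minimal sketch (the target number and visit counts are invented for illustration):

```python
def flag_below_target(visit_counts: dict[str, int], target: int) -> dict[str, int]:
    """Return CHWs whose visit count for the reporting period falls
    below the agreed target, mapped to the size of the shortfall."""
    return {chw: target - n for chw, n in visit_counts.items() if n < target}


# Invented target: 10 client visits per CHW per month.
period_visits = {"A": 12, "B": 4, "C": 10, "D": 7}
shortfalls = flag_below_target(period_visits, target=10)
# shortfalls == {"B": 6, "D": 3}
```

The shortfall figure gives the supervisor an immediate sense of how far off target each flagged CHW is, not just who missed it.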
Next Steps: The next major milestones for ADM are report standardization and automation. All CommCare projects differ slightly, but the concept of mobile CHWs managing clients and submitting forms is common to all of them. We want to standardize the metrics to enable a degree of automation. So we’re currently testing our report prototypes with partners in India, Africa, and the Middle East, and through an iterative process we have started refining the prototypes in response to project manager feedback.
Once we validate our approach, we want to automate it. We plan to automatically generate and disseminate our reports on a weekly and/or monthly basis. To accommodate different goals from project to project, we’ve started designing a user-configurable report builder. The report builder is designed around a simplified user interaction and a small number of inputs: the metrics a particular project manager values and the associated metric targets.
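To make the "low number of inputs" idea concrete, the builder's configuration might reduce to something like the sketch below. The field names and values are our guess at a shape, not the actual design:

```python
from dataclasses import dataclass


@dataclass
class ReportConfig:
    """The minimal inputs a project manager supplies (hypothetical)."""
    metrics: list[str]        # e.g. ["forms_submitted", "clients_visited"]
    targets: dict[str, int]   # per-metric target for the reporting period
    frequency: str            # "weekly" or "monthly"
    recipients: list[str]     # email addresses for automated dissemination


cfg = ReportConfig(
    metrics=["clients_visited"],
    targets={"clients_visited": 10},
    frequency="monthly",
    recipients=["pm@example.org"],
)
```

Everything else about the report (layout, follow-up list generation, delivery) would stay standardized, which is what makes automation across different projects tractable.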
I've attached a sample of our monthly report. I encourage comments, questions, and cross-examinations.
--
You received this message because you are subscribed to the Google Groups "ict4chw" group.
To post to this group, send email to ict...@googlegroups.com.
To unsubscribe from this group, send email to ict4chw+u...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/ict4chw?hl=en.
1) How have partners responded so far?
2) Are there managers in place to use this information, or is finding the appropriate person to send the information to part of the challenge?
3) In the report builder, what set of metrics will users be able to choose from?
4) How do you determine what the top and bottom performing CHWs in each category are? Are you exploring various approaches?
Response to topic 2:
Yeah, we've thought a lot about displaying the entire performance distribution because it gives the project manager the best opportunity to use judgement and contextual knowledge. However, we're sticking to our motto of simplicity and trying to avoid the paralysis of staring at a huge data set. Imagine a project with 50-100 CHWs: the full performance distribution for even one metric would be enormous. We want our reports to be simpler, more targeted, and smarter. There may be an implicit sacrifice in doing this, but we think it is well worth it if it increases report utilization.
Hey Ray,

Thanks for the response. You're not the first person to bring up distributed or supervisor-wise reporting. During our iterative feedback stage, multiple partners have mentioned the same thing, and this resonates with us. At this point, we're really concentrating on the first step, which is to create a generalized report. This is an improvement, but we do realize it's only incremental. We've started to articulate a project plan for ADM, which includes setting future milestones and functionality. We're going to integrate this, and I may contact you again to get some more of your thoughts on this topic specifically.

Regarding "on the go" reporting, we do plan to automatically disseminate the ADM reports (haven't chosen an end format, but likely PDF) via email, which could be available on a smartphone. For a slightly different use case, we've also started thinking about SMS-based reporting. My colleague Ryan Hartford has done some great SMS-based reporting work with JSI and VillageReach in the logistics space.