Active Data Management and Effective Performance Monitoring


Nick Amland

Nov 14, 2011, 11:44:07 AM
to ict4chw

Hi, my name is Nick Amland, and I’m a field fellow for Dimagi.  I spent the last 8 months in Tanzania working on a CommCare project, and now I’m back in the US leading our Active Data Management (ADM) initiative.  This post will describe our current approach as well as pose two of our current questions to the group about effective performance monitoring.


Our motivation for ADM stems from our experience over the last few years: the most successful projects we've seen use active feedback loops, in which somebody actively uses the data we collect to improve field operations.  We've also seen a lot of the data we collect go unused, which is not surprising because most of the health programs we work with are not designed to expect real-time data.  ADM strives to help partners make active use of the data collected and achieve evidence-based change.


Part of my desire to drive this initiative comes from my work in Tanzania, where we had a fully employed and incentivized CHW workforce, our own employed supervisors, and still struggled with how best to utilize all of the data we had available.  


Approach:  Monitoring and follow-up action are two distinct elements of one process.  Monitoring refers to reviewing reports and data analysis, which in turn informs follow-up action.  Each relies on the other, and the two must be closely linked for evidence-based change and performance improvement to occur.  But we identified a significant gap that creates a barrier to effective performance monitoring and follow-up action.


Numerous factors create this gap, including poorly designed reports and data requirements, busy project managers, and unclear prioritization and expectations of effort.  Our approach concentrates on closing this gap in two ways: making it easier for a project manager to access, correctly interpret, and act on data using short, routine reports that explicitly link analysis to follow-up action; and working with partners to integrate this process into their existing supervisory structure.


We tackle the problem of linking report analysis to follow-up action in three primary ways: simplifying report design, establishing clear follow-up, and setting metric targets.


Design Simplicity: In each of our report prototypes, design simplicity is paramount.  Everyone knows complexity kills, but our working environment amplifies this effect.  If a project manager needs more than 2 minutes to interpret a report, we believe the chance of follow-up action decreases.  Simplifying a report's design increases the chance of a project manager reviewing and acting on the information presented.


Clear Follow-Up:  In addition to simplifying the design, we provide a list of follow-up items.  This links the information to action more explicitly.  For example, a graph or chart presenting aggregated data might be insightful, but the step from that graph to action isn't direct enough.  It is a step removed.

A graph or chart requires both more effort and a correct interpretation to lead to meaningful action.  We believe this extra step decreases the chance a project manager will act.  The name of an individual CHW with a specific follow-up action is the most actionable piece of information.  As the primary output of our report prototypes, we generate a list of CHWs with associated follow-up actions as a way to incrementally address weak performance.


However, we realize that identifying specific follow-up items accomplishes only half of this process.  We intend to track these items from open to close in order to confirm that action follows the identification of these issues.

Question 1: Is individual or group follow-up the best way to act on information regarding performance?  For instance, if you had a performance distribution of all CHWs, would you follow up individually or as a group?

Question 2: To generate our follow-up items, we’re selecting both the top and bottom performing CHWs in numerous metric categories.  The bottom will receive supportive supervision and the top will receive positive reinforcement.  Do you foresee any issues in doing this?


Metric Targets: Without an expectation or target, monitoring is severely limited and leads to unclear and potentially misguided follow-up action.  We're strongly encouraging our partners to set targets for the level of activity they expect (e.g. each CHW should visit x clients every y days).  This adds context that project managers can use for an easier, more appropriate, and more meaningful interpretation of the data.


Next Steps: The next major milestones for ADM are report standardization and automation.  All CommCare projects differ slightly, but the concept of mobile CHWs managing clients and submitting forms is a common thread.  We want to standardize the metrics to enable a level of automation.  So we're currently testing our report prototypes with partners in India, Africa, and the Middle East.  Through an iterative process, we have started to refine our prototypes in response to project manager feedback.


Once we validate our approach, we want to automate it.  We plan to automatically generate and disseminate our reports on a weekly and/or monthly basis.  To accommodate different goals from project to project, we've started designing a user-configurable report builder.  The report builder is designed around simplified user interaction and focuses on a small number of inputs about the metrics a particular project manager values and their associated targets.


I've attached a sample of our monthly report.  I encourage comments, questions, and cross-examinations.


Thanks,
Nick
sample_monthly_report.pdf

Ben Birnbaum

Nov 16, 2011, 3:12:27 PM
to ict...@googlegroups.com
Hi Nick,

Thank you for the great post.  I think the ADM initiative is a great idea.  Here are a few questions:

1) How have partners responded so far?
2) Are there managers in place to use this information, or is finding the appropriate person to send the information to part of the challenge?
3) In the report builder, what set of metrics will users be able to choose from?
4) How do you determine what the top and bottom performing CHWs in each category are?  Are you exploring various approaches?

Of course, only answer the subset of questions you'd like (including the empty subset).

-Ben

--
You received this message because you are subscribed to the Google Groups "ict4chw" group.
To post to this group, send email to ict...@googlegroups.com.
To unsubscribe from this group, send email to ict4chw+u...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/ict4chw?hl=en.

Heather Zornetzer

Nov 16, 2011, 4:16:49 PM
to ict4chw
Hi Nick (and all on the list),

GREAT post -- much appreciated as this is something that our group
struggles with in all of our projects where we collaborate with new
users of ICT tools.

Another questions (if it's ok to post before you've answered Ben's):

I'd like to know a little more about your experience (and that of
others on the list) with the use of different formats for visualizing
information (data). For example, even simple graphs and charts have
been enormously challenging for us to integrate into decision maker
information-use on a regular basis.

thanks, and great work - your post has given me tons to think about in
terms of helping to more simply codify a training plan with our local
collaborators about their active data management in a CommCare
implementation just launching here in Nicaragua.

cheers, Heather

--
Heather Zornetzer, MS, MPH
Coordinadora de Programa, TIC para Salud Pública  |  Program
Coordinator, ICT for Health
Instituto de Ciencias Sostenibles  |  Sustainable Sciences Institute
Managua, Nicaragua

Nick Amland

Nov 16, 2011, 4:38:19 PM
to ict...@googlegroups.com
Hey Ben,

Hope all is well!  Thanks for the questions.  My responses are inline. 

1) How have partners responded so far?

We've shared these report prototypes with all of our current CommCare partners.  They've responded very positively thus far and can see the value in these reports.  A number of comments have complimented the simplicity of the report, which is something we aimed for.  It's great to get that kind of feedback.  Additionally, we've also heard a lot of "what if you did this...".  This iterative feedback has been great for refining our approach and identifying the right metrics to concentrate on.
 
2) Are there managers in place to use this information, or is finding the appropriate person to send the information to part of the challenge?

This is a challenge.  Many projects aren't designed to make optimal use of real-time data, and sometimes the monitoring task hasn't been specifically delegated to one person.  We're working with partners to plan for active data management, so that the expectation is set during the project design phase.
 
3) In the report builder, what set of metrics will users be able to choose from?

In our current approach, we plan to restrict the user's ability to choose the actual output metrics.  Instead, we give the user the flexibility to determine the priority of the inputs.  For example, a project has 5 forms in its application, but the project manager only wants trend analysis (i.e. graphs) for 3 of them, or only wants those 3 forms to show up in the table.  Through this functionality, a user can decide what is highlighted in a report based on his/her preferences.  This simplifies user interaction and reduces complexity - a primary goal.
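To make the idea concrete, here is a hypothetical sketch of what such a configuration might look like (all names and fields are illustrative assumptions, not the actual report builder):

```python
# Hypothetical report-builder configuration (all names illustrative).
# The output metrics are fixed; the project manager only prioritizes
# which forms are highlighted in trend graphs and tables.
report_config = {
    "project": "example-project",
    "forms": ["registration", "followup", "referral", "closure", "survey"],
    "trend_graphs": ["registration", "followup", "referral"],  # 3 of 5 forms
    "table_forms": ["registration", "followup", "referral"],
    "schedule": "monthly",
}

def forms_to_graph(config):
    # Only forms that exist in the project *and* were prioritized get graphs.
    return [f for f in config["forms"] if f in config["trend_graphs"]]
```

The point of the design is that the manager never edits the metrics themselves, only which inputs the report emphasizes.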
 
4) How do you determine what the top and bottom performing CHWs in each category are?  Are you exploring various approaches?

We determine high- and low-performing CHWs by comparing them to high/low threshold metric targets.  A CHW will appear in the corresponding high or low grouping until their performance no longer meets the threshold.  This way, we have a living table of follow-up actions that doesn't have to be updated manually.
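A minimal sketch of this threshold-based grouping, with made-up names, metrics, and thresholds (not Dimagi's actual implementation):

```python
# Illustrative threshold-based grouping (metric names and thresholds are
# made up).  The lists regenerate from current data each time the report
# runs, which is what makes the follow-up table "living".

def group_chws(metrics, low_threshold, high_threshold):
    """Split CHWs into low and high performers for one metric.

    metrics: dict mapping CHW name -> metric value (e.g. forms submitted).
    Returns (low, high) lists of (name, value) pairs.
    """
    low = [(name, v) for name, v in metrics.items() if v <= low_threshold]
    high = [(name, v) for name, v in metrics.items() if v >= high_threshold]
    low.sort(key=lambda pair: pair[1])                 # worst first
    high.sort(key=lambda pair: pair[1], reverse=True)  # best first
    return low, high

forms_submitted = {"Asha": 4, "Neema": 22, "Juma": 15, "Grace": 2}
low, high = group_chws(forms_submitted, low_threshold=5, high_threshold=20)
# low  -> supportive supervision; high -> positive reinforcement
```

A CHW in the middle band (like Juma above) simply doesn't appear, keeping the report short.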

Thanks,
Nick

Nick P. Amland
US Mobile: 206.605.7378

Nick Amland

Nov 17, 2011, 1:29:55 AM
to ict...@googlegroups.com
Hey Heather, 

Good question.  We're also trying to figure out the best approach to data visualization.  It's a daunting task!  There are literally thousands of ways to display the same data.  One thing we've recognized is that any graph requires interpretation, which increases the level of effort a manager needs to invest.  This prompted us to concentrate on itemized follow-up actions.  So, in our current report, we have really simple graphs which do provide the opportunity for analysis, but our focus is on the itemized follow-up actions.

However, I think one important prerequisite to effective performance monitoring is an expectation of effort.  Regardless of how well you design a graph, you still need someone to look at it with the expectation of analyzing it and acting upon any gained understanding.

FYI - Someone recently recommended the data visualization books by Edward Tufte, a Yale University professor and pioneer in data visualization.  I'm planning to do some more research to learn about suggested best practices in this area.

Thanks,
Nick

Nick P. Amland
US Mobile: 206.605.7378



kieran sharpey - schafer

Nov 17, 2011, 1:38:53 AM
to ict...@googlegroups.com
Hi Nick,

First of all these reports look great and well done on a great initiative.

Your approach seems spot on and seems to complement the idea that one of the main benefits of tech is that the actual PMs & CHWs can spend more time making decisions on the work they are experts in, and less time faffing with data.  The simplicity and follow-up focus of the ADM reports seem to target exactly that.

To add some thoughts on your questions:
Question 1: Is individual or group follow-up the best way to act on information regarding performance?  For instance, if you had a performance distribution of all CHWs, would you follow-up individually or as a group?
I would guess this would depend on their current (pre-tech) methods of supervision and feedback.  For instance, one group we worked with previously always had a 'Friday Meeting' where the CHWs as a group discussed general issues about the job.  Once the tech was put in, they just added the summaries to their meeting to aid some of the discussion.  That said, you could certainly see arguments for both approaches, so it would seem to depend on the person / organisation doing the follow-up.  But that's just speculation - keen to hear your thoughts.

Question 2: To generate our follow-up items, we’re selecting both the top and bottom performing CHWs in numerous metric categories.  The bottom will receive supportive supervision and the top will receive positive reinforcement.  Do you foresee any issues in doing this?
Hmm, maybe there's value in reporting the performance of all the CHWs?  You could still rank them so that the 'best' and 'worst' are evident.  Or maybe you have a specific reason for *not* doing that?

I'm just thinking that the PM / coordinator will know more of the external factors influencing the volume and speed of each CHW's performance - so they might have a better idea than the system / metric about what a realistic target number is for each user.  E.g. one user may be off in a rural location, so X is quite a good submission rate, whilst X for an urban user may be low.

Don't know if I'm on the right path here, or if this is helpful, but I do like the challenge to think about how people will use the data and take ownership. So thanks for the great topic!

Best wishes,
Kieran
 



--
--------------------------------
Kieran Sharpey-Schafer
skypeid: kieran.sharpey.schafer
I'm in Malawi for 2011 so you can find me on mobile: +265 88 163 6918

Ray Brunsting

Nov 17, 2011, 10:11:59 AM
to ict...@googlegroups.com
Hi Nick,

Thanks for the great posting.  Your posting and responses have been very interesting and helpful.

Background to my feedback: I work with a Guatemalan NGO, TulaSalud (www.tulasalud.org), that has recently expanded its network of CHWs from 60 to 125.  The CHWs are distributed over a wide geographic area, with supervisors in place to monitor the activity of ~20 CHWs each.  Historically, the supervisors have visited each CHW about once a month to review the (mostly paper-based) records of each CHW.  We are currently working on a CommCare pilot project, to learn more about how CommCare applications can augment and improve what we currently have in place.  Our current focus is on maternal and infant health.

As we continue to scale things up, we will need both centralized and distributed performance monitoring.  I can see how reports similar to your example will be very helpful for centralized performance monitoring.  We would certainly be willing to try them out and provide you with feedback as part of our current pilot project.

What I find myself thinking about these days is how we can enable the supervisors to facilitate distributed performance monitoring.  Group supervisors will likely benefit from a more detailed report, as Kieran suggested ("Hmm maybe there's value for reporting the performance of all the CHWs? You could still rank them so that the 'best' and 'worst' are evident? Or maybe you have a specific reason for *not* doing that?").  While the supervisors could look at the detailed data submitted by each CHW, and may be doing some of this anyway, they would certainly benefit from an ADM report that includes performance metrics for every CHW they monitor.  Also, considering that supervisors are normally on the road and likely using an Android-based phone, it would be great if they could access real-time reports from their phones (i.e. a live 'dashboard' web page that renders well on a mobile device, in addition to a scheduled PDF report).

Regarding your initial questions...
Question 1: Is individual or group follow-up the best way to act on information regarding performance?  For instance, if you had a performance distribution of all CHWs, would you follow-up individually or as a group?
For centralized monitoring, I suspect that group follow-up is the best way.  For distributed monitoring, individual follow-up is likely the best way.  You'll likely need separate reports for each case --- and perhaps that is your longer term intention.

Question 2: To generate our follow-up items, we’re selecting both the top and bottom performing CHWs in numerous metric categories.  The bottom will receive supportive supervision and the top will receive positive reinforcement.  Do you foresee any issues in doing this?
Great idea.  In a distributed monitoring model, supervisors may prefer an ordered list of all CHWs rather than one listing only the top and bottom performers.

Again, thanks for the great post.  Hopefully this feedback is helpful.

Ray Brunsting, Tula Foundation / www.tula.org



Nick Amland

Nov 17, 2011, 10:51:56 AM
to ict...@googlegroups.com
Hey Kieran,

Thanks for the post. 

Response to topic 1:
The project manager will definitely have the best sense on whether group or individual follow-up is best.

I worked on a CommCare project in TZ that also had a weekly meeting.  Somewhat uniquely, the individual CHWs called out in the report were pointed out during the meeting in front of everyone.  This was a decision by the project manager and a testament to the strong relationship between the project manager and the CHWs, as well as among the CHWs themselves.  It was a very supportive community.  So, the low performers were recognized in front of the group and understood they needed to improve in X or Y, while the high performers were congratulated in front of the group.  This style of supportive supervision seemed to work out really well and gave the project manager targeted actions to call out.

Response to topic 2:
Yeah, we've thought a lot about displaying the entire performance distribution because it gives the project manager the best opportunity to use judgement and contextual knowledge.  However, we're sticking to our motto of simplicity and trying to avoid the paralysis of staring at a huge data set.  Imagine a project with 50-100 CHWs.  The full performance distribution for one metric would be enormous.  We want our reports to be simpler, more targeted, and smarter.  There may be an implicit sacrifice in doing this, but we think it's well worth it if it increases report utilization.

Thanks,
Nick

Nick P. Amland
US Mobile: 206.605.7378



kieran sharpey - schafer

Nov 17, 2011, 1:30:09 PM
to ict...@googlegroups.com
> Response to topic 2:
> Yeah, we've thought a lot about displaying the entire performance distribution because it gives the project manager the best opportunity to use judgement and contextual knowledge.  However, we're sticking to our motto of simplicity and trying to avoid the paralysis of staring at a huge data set.  Imagine a project with 50-100 CHWs.  The full performance distribution for one metric would be enormous.  We want our reports to be simpler, more targeted, and smarter.  There may be an implicit sacrifice in doing this, but we think well worth if this increases report utilization.

Great response and I'm totally sold.  I like the thinking, as it's focused on making something actually happen in the real world rather than the theoretically perfect case!  Let us know how you find the different groups employ it.

Best wishes,
Kieran

Nick Amland

Nov 17, 2011, 2:01:42 PM
to ict...@googlegroups.com
Hey Ray,

Thanks for the response.

You're not the first person to bring up distributed or supervisor-level reporting.  During our iterative feedback stage, multiple partners have mentioned the same thing, and this resonates with us.  At this point, we're really concentrating on the first step, which is to create a generalized report.  This is an improvement, but we realize it's only incremental.  We've started to articulate a project plan for ADM which includes setting future milestones and functionality.  We're going to integrate this, and I may contact you again to get more of your thoughts on this topic specifically.

Regarding "on the go" reporting, we do plan to automatically disseminate the ADM reports (we haven't chosen the end format, but likely PDF) via email, which could be available on a smartphone.  For a slightly different use case, we've also started thinking about SMS-based reporting.  My colleague Ryan Hartford has done some great SMS-based reporting work with JSI and VillageReach in the logistics space.  We'll be looking to learn from them.


Thanks,
Nick

Nick P. Amland
US Mobile: 206.605.7378




Eduardo Jezierski

Nov 17, 2011, 5:34:18 PM
to ict...@googlegroups.com
Hi - I have to say, from our experience, having transparent reporting metrics 'on the go' actually improved completeness and quality.

One idea is to link that performance monitoring to indirect measures / achievements (a bit of 'gamification', if I am allowed to use the buzzword once) that prize positive outliers, etc.

We haven't rolled out distributed review / performance feedback tools explicitly, but I would strongly encourage formally researching and testing them.  Peer-to-peer feedback and accountability is strong; at least in the area of large-scale SMS data reporting tools, the value/usefulness of the data and its influence on a particular worker's behavior grew stronger the closer you got - i.e. imitation, peer pressure, and prizing positive outliers in tiny groups (10 folks) changed behavior more than centralized mandates and reports of feedback.

~ ej

Nick Amland

Nov 18, 2011, 1:14:45 PM
to ict...@googlegroups.com
Hi Eduardo,

Thanks for your reply.

By linking "performance monitoring to indirect measures/achievements that prize positive outliers", are you talking about performance-based incentives?  

We implemented performance-based incentives in our project in TZ.  However, we tied the incentive to a single metric (form submission), and we think this might have skewed CHW activity toward just satisfying that performance requirement.  So, an incentive needs to be carefully designed and ideally not tied to just one metric; perhaps a few, or a hybrid performance metric that could be weighted to account for all facets of performance.

Thanks,
Nick

Nick P. Amland
US Mobile: 206.605.7378



alvin....@gmail.com

Nov 18, 2011, 4:47:15 PM
to ict...@googlegroups.com
Dear Nick,

Interesting experience in TZ.  We're also interested in designing incentives here in Manila.

Our past experiences with incentives have come to naught.  First we gave them targets - they just manipulated the data to meet the targets.  Then we gave them forms, and they just filled in the forms as required, with poor data quality.

Perhaps the incentive should be for data quality?  Timeliness and completeness are fairly easy to measure with mobile, but we're stumped on accuracy.  Even form validation can't assure accuracy.

Any ideas will be greatly appreciated.

Sent from my BlackBerry® wireless handheld


Matt Theis

Nov 18, 2011, 11:55:03 PM
to ict...@googlegroups.com
Nick,

Great post.  I recently arrived in India to help with a number of existing and future CommCare deployments, and the healthcare structure here seems very relevant to Ray's point and a number of projects here:
- ASHA/AWW (like a CHW) uses CommCare
- Supervisors (ANM or project coordinator) have some way to get data on CHW performance.  It seems like ADM-type information delivered via phone (be it SMS, GPRS, or an HTTP call on an Android) could be really powerful.

In addition, I think there are two basic types of data to consider that would be useful in a distributed report:
1) Performance indicators - like follow-up rate, number of visits, etc. (very much like what ADM does now) - so the supervisor can manage health worker performance

but also:

2) Programmatic indicators - not just looking at the number of forms received and when, but whether the information in those forms can tell us anything about the quality of healthcare being delivered; for instance, that a person with condition X was given treatment Y.  This information could then be used to make sure that not only are CHW visits happening, but that the visits are implementing the desired protocol.

Matt

Nick Amland

Nov 19, 2011, 12:13:55 PM
to ict...@googlegroups.com
Hi Alvin,

It sounds like you've had a trying experience with performance-based incentives.  

One way we tried to measure data quality and identify intentionally falsified data was by implementing a household verification process.  The main responsibility of our CHWs was to follow up with households, and the number of household follow-up form submissions was tied to our performance-based incentive.

So, we devised a random visit ID system to ensure that the CHWs were actually visiting the households to fill out the form.  The form would generate a random visit ID, which the CHW would write in a notebook kept at the client's house.  During the CHW's next visit, they'd have to enter this visit ID into the form.  The two visit IDs could be cross-checked to see if they matched.  The system worked, but one challenge was getting all of the CHWs to fully understand the process.
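For illustration, the cross-check could be sketched like this (all names here are hypothetical; this is not the actual CommCare implementation):

```python
# Sketch of the random visit-ID verification described above (illustrative).
# Visit 1: the form generates a random code which the CHW writes in the
# household's notebook.  Visit 2: the CHW enters the code from the notebook,
# and it is cross-checked against the code issued last time.
import secrets

# household_id -> visit ID issued at the previous visit
# (stands in for the project's real data store)
issued_ids = {}

def start_visit(household_id):
    """Generate a short random code for the CHW to write in the
    household's notebook at the end of this visit."""
    visit_id = secrets.token_hex(2).upper()  # e.g. "3FA9"
    issued_ids[household_id] = visit_id
    return visit_id

def verify_visit(household_id, entered_id):
    """On the next visit, cross-check the code the CHW reads from the
    notebook against the one issued last time."""
    return issued_ids.get(household_id) == entered_id
```

A CHW who never reached the house cannot know the code in the notebook, so a mismatch flags the submission for review.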

Another thing we tested was using an algorithm to detect odd or unexpected patterns in the submitted data.  We're working with a PhD student from the University of Washington who has developed a data quality / data assurance metric produced by an algorithm he created.  We'll be looking for ways to test this and potentially integrate it into our ADM reports.

Thanks,
Nick

Nick P. Amland
US Mobile: 206.605.7378



Nick Amland

Nov 19, 2011, 12:31:15 PM
to ict...@googlegroups.com
Hi Matt,

Thanks for the post. 

We've talked a lot about the distinction between CHW performance monitoring and programmatic monitoring.  Right now, we're focusing on CHW performance monitoring because its metrics can be standardized across projects, whereas programmatic monitoring will be very project specific.

Also, one goal of ADM is to enable routine follow-up action by project management, and I have questions about how actionable programmatic metrics can be on a routine basis.  You mention an example of a programmatic metric: a patient was suffering from X and received treatment Y.  Is this treatment given by the CHW or by a health facility?  If by the health facility, this would be great "outcome"-related data to collect from an M&E perspective, but it doesn't seem directly actionable unless the CHW didn't refer the patient or referred them incorrectly.

Thanks,
Nick

Nick P. Amland
US Mobile: 206.605.7378




Alvin Marcelo

Nov 19, 2011, 5:25:12 PM
to ict...@googlegroups.com
Hi Nick,

Interesting method!  We should try our hand at this.  I presume the number in the notebook persists in the house.  What if it gets lost?

Could taking a picture of the house (assuming the phone is capable of that), or a GPS reading, or cell-site geodata perform this function to some degree?

alvin

Alvin B. Marcelo, MD, FPCS    www.alvinmarcelo.com
Voicemail: +1-301-534-0795    GPG 0x99CBC54C

Kola OLADELE

Nov 21, 2011, 12:24:48 PM
to ict...@googlegroups.com
Hi Nick and list members,

I read with great interest your very useful post - your experience has stimulated a lot of thinking here.
To avoid complexity, I suggest that you look at analysing program data using the LQAS (Lot Quality Assurance Sampling) methodology.  LQAS helps to classify 'lots' (e.g. health districts) into acceptable and unacceptable performance/quality.  It has been used variously to assess health worker performance, measure coverage of health interventions, etc.  These articles will make good reading:
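For readers new to LQAS, the core classification step can be sketched as follows (the decision rule below is a hypothetical placeholder; real rules come from published LQAS tables for a chosen sample size and upper/lower performance thresholds):

```python
# Illustrative LQAS classification sketch (decision rule is hypothetical).
# A small random sample is drawn from each 'lot' (e.g. a health district),
# and the lot is classified by comparing the number of sampled individuals
# meeting the standard against a pre-chosen decision rule.

def classify_lot(successes, sample_size, decision_rule):
    """Classify a lot from a small random sample.

    successes: sampled individuals meeting the standard.
    decision_rule: minimum successes required to call the lot acceptable.
    """
    if not 0 <= successes <= sample_size:
        raise ValueError("successes must be between 0 and sample_size")
    return "acceptable" if successes >= decision_rule else "unacceptable"

# e.g. sample 19 households per district; require at least 13 meeting the
# coverage standard (hypothetical rule) to classify the district acceptable
classify_lot(successes=15, sample_size=19, decision_rule=13)  # "acceptable"
classify_lot(successes=9, sample_size=19, decision_rule=13)   # "unacceptable"
```

The appeal for low-resource settings is that the sample per lot stays tiny while still supporting a defensible accept/reject decision.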

Valadez, J.J., B. Weiss, C. Leburg and R. Davis. Assessing Community Health Programs: A Trainer's Guide and A Participant's Manual and Workbook. London: Teaching Aids At Low Costs (TALC). September 2002. (Translated: French, Spanish) (second edition 2007)

Valadez, J.J. and Bamberger, M. Monitoring and Evaluating Social Programs in Developing Countries. Washington, DC: The World Bank, Economic Development Institute, Economic Development Series. 1994.

Valadez, J.J. Assessing Child Survival Programs in Developing Countries: Testing Lot Quality Assurance Sampling. Cambridge, MA: Harvard University Press. 1991.

Susan Robertson and J.J. Valadez."Global review of health care surveys using lot quality assurance sampling (LQAS), 1984-2004." Social Science and Medicine, 63, 2006: 1648-1660 http://www.ncbi.nlm.nih.gov/pubmed/16764978

Cheers!
'Kola
 

Nick Amland

Nov 22, 2011, 2:06:34 PM
to ict...@googlegroups.com
Hi Kola,

Thanks for your post!  

I'm not familiar with the LQAS methodology.  I'll take a look at the literature you've referenced here.  I'm particularly interested in the performance classification of health districts.

Thanks,
Nick

Nick P. Amland
US Mobile: 206.605.7378


