
Evaluation 1 of "The Governance Of Non-Profits And Their Social Impact: Evidence From A Randomized Program In Healthcare In DRC" (anonymous evaluator)

Published on May 29, 2023

Summary Metrics

Overall assessment

Answer: 65/100 

90% CI: (55, 74)

Quality scale rating

“On a ‘scale of journals’, what ‘quality of journal’ should this be published in?” (Note: 0 = lowest/none, 5 = highest/best)

Answer: 3.8

90% CI: (3.0, 4.1)

See here for a more detailed breakdown of the evaluators’ ratings and predictions.

Written Report

This paper uses a randomized controlled trial to show that a World Bank program introducing performance pay, auditing, and feedback raised operating efficiency and reduced infant mortality in non-profit health centers in the DRC. Treatment and control health centers received equal increases in funding. A matched difference-in-differences analysis comparing treatment and control health centers against those not in the experiment shows that increased funding alone increases the number of employees and services offered but does not affect efficiency or infant mortality. Effects appear only after about seven quarters, suggesting that the “feedback” component of the program was important for teaching health center managers how to meet the incentivized performance targets.
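To fix ideas, the matched difference-in-differences comparison can be sketched roughly as follows (my notation, not necessarily the authors’ exact specification):

$$Y_{it} = \alpha_i + \gamma_t + \beta\,\left(\text{InExperiment}_i \times \text{Post}_t\right) + \varepsilon_{it}$$

where $Y_{it}$ is an outcome for health center $i$ in quarter $t$, $\text{InExperiment}_i$ marks centers inside the experiment, and the comparison group is the matched set of centers outside it. Estimated on the control arm against matched outside centers, $\beta$ captures the effect of the funding increase alone, since control centers received the extra funds without the performance-pay, auditing, and feedback components.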

This is a very nicely executed paper on an important and policy-relevant topic, and it was a pleasure to read. RCTs this big are somewhat rare, and obtaining access to administrative outcome data can be difficult. To combine both of these features in a context as understudied as the DRC is a real achievement. My congratulations to the authors. I think the paper is very good, and I hope the comments that follow can be useful in helping the authors clarify and improve the paper even further.

My main comment is that I would have liked more discussion of the details of the treatment, especially early on in the paper. The intro refers to “performance-based incentives and feedback” without mentioning exactly which players are receiving the incentives and the feedback. The recent literature on the personnel economics of the state (e.g., Finan, Olken, and Pande 2017) shows that there are lots of choke points in the bureaucracy where service delivery can break down – administrators, managers, and frontline providers can each fail to perform as desired. It’s important that your readers know which of these you are taking aim at, and why you think that’s the right part of the system to target.

  • By the end of the paper I had learned that the incentives and feedback were provided to the managers (owners?) of the health centers. It would be helpful to have more background about these people, their world, and their incentives.

  • You characterize the health centers as non-profits; does this mean they are funded by donors? Are they Western or local organizations? Given that the state seems quite interested in their operation, would it make sense to think of them as government contractors? Locating the readers in the economic decisions faced by these operators in the status quo can help us interpret the program’s effect more clearly. You note that non-profits and for-profits are different, but I think it would be nice to be explicit about exactly who has different incentives, how they differ, and what predictions that leads to.

  • I would have liked more detail earlier on in the intro about exactly which actors are getting the performance-based pay and the feedback, and why they are the ones who need it. Is it the managers, the owners, the frontline employees? The interpretation of the paper depends crucially on this. I’d love to see a discussion of it in the intro.

  • The characterization of the treatment itself confused me a bit. The paper groups “auditing and feedback” together, and suggests that the auditing in question is an audit of the feedback, meant to strengthen the reliability of the information which the health centers receive as feedback (p. 1). But page 11 gives the impression that the auditing in question is being done on the information provided by the health centers to the central authority (the “community verification system”). [What incentives do communities have to report malfeasance, if their community health center will get more funding for reporting inflated numbers?] Page 11 also mentions a procurement and contracting reform that seems to have been part of the treatment. I think the paper would be well-served by a much more nitty-gritty, hard-headed explanation of the full accountability treatment and how each element of it fits into the theory of change.

  • Along these lines, I think the paper would benefit from a fuller picture of the possible set of mechanisms (even while I recognize that the experiment wasn’t set up to really nail down mechanisms). For example: You note an increase in the number of services performed – what are the different ways that could happen? Is it that demand increases because quality increases? Is it that nurses and doctors are more likely to show up to work? Is it that nurses are going out to drum up demand among customers? Is it that services that are already happening are more likely to be recorded? Anything you can do to distinguish between these mechanisms is of course welcome, but even just enumerating the possible mechanisms would be really helpful – remember the readers know next to nothing about the context!

It's great that you report outcomes from administrative data, but given that the treatment incentivized outcomes reported by the health centers themselves, a bit more work is in order to convince the reader that the data are reliable.

  • It sounds like the main outcomes you look at are NOT those on which the incentives were based (which is good!), but you should make this clearer. I didn’t find that explicitly stated until Appendix D!

  • It would be nice to know something about the outcomes that were incentivized. Even though it makes sense not to make these your main outcomes, showing effects on them would be an important “manipulation check” to understand if the treatment works in the expected way. (Going back to the paper again, I see that it looks like the best you can do on this is Figure A4/Table A8. But even that is helpful, and I’m glad you’ve included it.)

  • You mention the audits and counter audits of these incentivized outcomes – it’d be nice to see how prominent fraud was (if, of course, you have access to the data – you may not).

  • As a further “manipulation check” on how the program operated: it would be nice to see how the funds disbursed evolved over time, or perhaps what fraction of centers qualified for funding over time (and also for control centers if you can). This could help tell the story about how crucial the feedback/learning element of the program was – if you have the data.

  • I would love to see how the program affected performance on metrics that were NOT related to the incentivized ones. You cite Holmstrom and Milgrom, but it would be really great to explicitly measure whether multitasking is a problem here.

Finally, I think a basic cost-benefit analysis would be really helpful here.

  • What’s the total cost of the program? How much more expensive is it to give out conditional cash (with auditing and procurement reforms) than unconditional cash?

  • How much does it cost to save each life? How does this compare with other interventions in the literature?
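As a rough sketch of the calculation I have in mind (with placeholder quantities the authors would need to supply):

$$\text{Cost per life saved} = \frac{C_{\text{conditional}} - C_{\text{unconditional}}}{\text{infant deaths averted}}$$

where $C_{\text{conditional}}$ is the total cost of the conditional program (including auditing, verification, and administration) and $C_{\text{unconditional}}$ is the cost of disbursing the same funds unconditionally. Even an approximate figure would let readers benchmark the program against standard cost-effectiveness estimates for other child-health interventions.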

Small comments – mostly on framing, most of them simply a matter of taste.

  • You note that “a large share” of health services in poor countries are provided by non-profits – how big is that share? I realize it may be hard to get an extremely authoritative answer due to data limitations, but anything you can do to show what a big part of the sector you’re speaking to would be helpful.

  • This feature is really cool: the control health centers receive on average the same funding as the treatment ones, but the funding is unconditional. I think it’s worth highlighting more.

  • You’re really interested in employees’ “motivation,” and how they might get “overwhelmed” by the P4P structure. I get it; that’s the focus of the Huillery and Seban paper, and they have to figure out what’s behind a surprising negative result. But I think you don’t need to emphasize this so much, and your data don’t really allow you to speak to motivation anyway; it seems like enough to say that employees need both feedback and incentives to succeed, and neither is sufficient on its own.

  • The DiD on the “outside group” is really clever; I wish more RCTs did this. I love Table 6. One small caveat: I don’t think you can say that “governance alone improves operating efficiency” (p. 30) because you don’t see any health centers that get governance WITHOUT funding – it may be that funding is a necessary condition for the governance to work.

  • You’re focused on drawing the contrast between P4P in a social context vs. a for-profit context, but I think a lot of people will be interested in this paper who care about P4P in the (social) context of public economics, especially education. My read of the evidence on P4P in education is that it seems to work pretty well in poor countries (e.g., Muralidharan 2011) and that a big part of the effect is the selection of teachers (e.g., Leaver et al. 2021; Andrabi and Brown 2021). There’s also work showing, similarly to your paper, that P4P is often not enough on its own (Mbiti et al. 2019). I think including a brief discussion of how your work relates to the P4P literature in education, and what makes the health context different, would expand your audience.

  • I think you do an admirable job motivating why the effects of this “bundled” treatment are interesting, even though ideally we would want to know which pieces of the bundle matter most. You may also be interested in another recent paper that achieved big gains by bundling a lot of things together (also in the education space, from Guinea-Bissau): Fazzio et al. 2021.

  • There’s nothing inherently wrong with having a treatment group so much bigger than the control group, but since it’s so unusual you may want to explain why – were there originally multiple treatment arms? Were there just not very many eligible health centers relative to funding available?

Evaluator details

How long have you been in this field?
11 years

How many proposals and papers have you evaluated?
25
