How to Avoid the Dark

Last semester, PlutoShouldBeAPlanet, NJDevilsRed17, shepardspy, and I had a conversation through the Reply field about aid programs that fail to achieve their worthy objectives, partly because of bad experimental design and partly because the agencies running them don't take the time and effort to accurately measure the results.

A man checks his phone to confirm that the charity GiveDirectly has transferred a cash grant to his account.

You’ll recall that the aid workers decided to give cash grants directly to recipient families instead of spending the same amount of money to educate them about good nutrition in the hope that the families would spend the money to feed their kids healthier foods.

By and large, the families did not spend the money on leafy greens. What did the experimenters learn?

You’ll also remember that, in setting up the assignment, I used the example of Thomas Edison eliminating hundreds of candidates for an appropriate filament before he discovered a material (carbonized fiber, not a metal alloy) that would shine brightly without burning out when current passed through it.

Edison was a very skilled experimenter, more skilled, it seems, than the aid agency that ran the “cash grant” experiment. Given enough time and chances, maybe the aid agency could find the right formula to improve child health, but it would waste a lot of money and time if it failed to track its results carefully.

Pluto observed that:

Researchers couldn’t really determine that poor people didn’t know what to do with the money, but truthfully, it seems like they really couldn’t prove anything. In order to have a good assessment, they would have to consult each individual family and ask what they did with the money and why. If I were looking at this hypothesis before they even tested it, I would think that it was ridiculous and untestable. The whole thing just seems a bit ridiculous.

It’s a conundrum that gets played out too often in social service settings.

Well-meaning aid agencies, here or abroad, that don’t design quantifiable, testable programs never really know whether the money they’re spending is having a positive impact. I like the audacity of giving the money directly to the people who need it, but the energy the agency saved by not having to run a program should have gone into carefully measuring the results of the experiment.

Remember, the original question was whether the hypothesis was proved or disproved.

We can only say that a hypothesis has been proved or disproved when the evidence clearly supports or refutes it. Badly designed experiments can’t actually prove anything.

Suppose someone other than Edison, me for example, had tried to invent the lightbulb. I might have given dental floss one try as a filament and failed to produce light for more than a second. The right conclusion would be that dental floss IN THE WAY I PREPARED IT was not a good filament. But that would not be reason enough to eliminate dental floss entirely as a candidate. Right?

Maybe giving money to the aid recipients makes sense, but only if they’re given the right amount, or after the right preparation, or with stipulations, or with a reward program that lets them earn more money if their kids’ health improves, or . . .

. . . or maybe they should have bought healthy food and given THAT to the families!

The point is, too many students declare their hypotheses “unprovable” when they fail to “find the right sources” on their first search. HOW the search (the experiment) is conducted is vital to its success.

Before you toss away the idea that first excited you, at least give your Professor a chance to help you refine your research process. Conferences have a way of illuminating your own best ideas.
