A question about experimental methods
[Mar. 19th, 2011|02:06 pm]
Been reading a book on experimental methods (nerd!), and been thinking about Jamie's degu, and about the waste of having first-year bio students do a lot of dissections that won't be relevant to where many of them end up (even within biology, so I heard from a bio TA).
These thoughts have collected into a hunch about a way to make animal testing more efficient, in the sense that the same amount of testing could be done at the cost of fewer animals.
Say experimenters A and B are both measuring the effects of different treatments on, say, the red blood cell count of mice. They both need their own control group (no treatment) of mice to compare their treatments against. But the control group in each experiment is, in a lot of ways, the same: a group of untreated mice which we take blood from.
If A and B find out about each other's work (likely), they can compare the conditions in which the mice were raised and sacrificed. If those conditions are sufficiently similar, they can compare the red blood cell counts from each experiment to see if the control groups had effectively the same counts.
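To make that comparison concrete, here's a minimal sketch (in Python, with entirely made-up numbers) of one way A and B might check their control groups against each other, using Welch's two-sample t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical red-blood-cell counts (10^6 cells/uL) from A's and B's
# control groups; both simulated from the same distribution here.
control_a = rng.normal(loc=9.0, scale=0.5, size=12)
control_b = rng.normal(loc=9.0, scale=0.5, size=12)

# Welch's t-test: compares the two means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(control_a, control_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# Caveat: a non-significant result is only weak evidence of similarity.
# An equivalence test (e.g. TOST) with a pre-specified margin is the
# sounder way to claim two groups are "effectively the same".
```

Failing to reject "the groups differ" is not the same as showing "the groups are the same", which is part of why the leap below is questionable.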
The questionable leap:
Are experimenters A and B then able to pool their control groups together into a larger, blocked, control group? Could they both use this meta-group to get more statistical power in the comparison between the (meta-)control and each of their treatments?
If so, each experimenter could be more certain about the effects of their treatment without having to increase the sample size of either experiment.
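A rough sketch of where that extra certainty would come from, under the (big) assumption that the control groups really are exchangeable: the standard error of the treatment-vs-control difference shrinks as the control sample grows, which translates into narrower confidence intervals and more power. All numbers here are made up for illustration.

```python
import math

sigma = 0.5    # assumed common standard deviation of RBC counts
n_treat = 12   # A's treatment group
n_ctrl = 12    # A's own control group
n_pooled = 24  # A's and B's controls combined

def se_diff(n_t, n_c, s=sigma):
    """Standard error of the difference between two group means."""
    return s * math.sqrt(1 / n_t + 1 / n_c)

print(f"SE with own control:    {se_diff(n_treat, n_ctrl):.4f}")
print(f"SE with pooled control: {se_diff(n_treat, n_pooled):.4f}")

# The pooled control gives a smaller standard error for the same
# treatment-group size -- the hoped-for gain, if pooling is valid.
```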
What if A and B published their raw data, specifically the data about the control groups and their methods of measurement, into a repository for animal testers?
Now say experimenter C is measuring the effects of a different treatment, but on the same variable. Could she then...
- Match her measurement methods to what A and B did or...
- Use a smaller control group and check that for equality with A and B's controls...
...and use the collective AB or ABC control group as the control for her experiment?
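C's second option might look something like this sketch: a one-way ANOVA across the control blocks as a crude homogeneity screen before pooling. All data here are simulated, and again this only fails to find a difference rather than proving equality.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical control-group RBC counts from the repository (A, B),
# plus C's own smaller control group.
ctrl_a = rng.normal(9.0, 0.5, size=12)
ctrl_b = rng.normal(9.0, 0.5, size=12)
ctrl_c = rng.normal(9.0, 0.5, size=6)   # C's smaller control

# One-way ANOVA across the blocks: a large p-value is (weak) evidence
# that the blocks are consistent enough to pool.
f_stat, p_value = stats.f_oneway(ctrl_a, ctrl_b, ctrl_c)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

if p_value > 0.05:  # crude screen, not a proof of equality
    meta_control = np.concatenate([ctrl_a, ctrl_b, ctrl_c])
    print(f"pooled n = {meta_control.size}")
```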
I suspect there are already a lot of standardized procedures in place for this kind of testing, so sufficiently similar conditions might come up often.
But how often is the same variable measured in different experiments? And:
- Would the decay in statistical strength from applying a control to a similar but ultimately independent experiment be too much to actually save any animals?
- Would experimenters toss outlier control groups in favour of the associated meta-control, and would this cause an under-reporting of the inter-block variance of the meta-control?
- Would experimenters alter their experiments in order to be able to use a published meta-control, even if it was detrimental to the independent experiment?
I feel like there's potential here, and that experimenters working independently on similar experiments are wasting information that could be shared. But I'm wary about the validity of using that information. Different experiments are different; the whole point of an experiment is to try something unique. Could such a repository validly link control groups together at all? Would people try anyway, to make their experiments seem more significant? And then there are statistical assumptions like independence: what happens to those?
This needs more thought.
I need to know more about: Bayesian Statistics, Resampling.