Think of it like this: maybe 10 years ago you thought "hey, why are we writing software for Windows when we could be writing it for Mac / Linux where it's so much nicer and easier?" And then you remember that most everyone else uses Windows and you know that if you want to make an impact, you have to use those platforms, until the balance slowly shifts.
The same goes for medical research: if you want others to be able to use and build on your result, it needs to be comparable and reproducible. That is, others have to be able to run your methods (read: software) on their animal models (read: OS).
So when he says:
Perhaps the researchers have come to resemble their favored species: So complacent and sedentary in their methods, so well-fed on government grants, that any flaws in the model have gone unnoticed, sliding by like wonky widgets on a conveyor belt.
It's kind of grating. It's not just "easier" and "cheaper" -- it's fundamentally more useful to other scientists and that's worth acknowledging.
"it's fundamentally more useful to other scientists and that's worth acknowledging."
But at the end of the day the goal isn't just to produce results that are logically consistent with the results of other scientists; it's also to discover drugs that can be used to cure real human diseases. A mouse monoculture model is really bad for this: there are lots of drugs that would work great in humans but never get tested because they don't work in mice, and likewise there are tons of drugs we waste millions of dollars testing solely because they were effective in mice. If the results of mouse studies are less predictive of what will work in humans than anecdotal reports from people, then the scientific model of drug development has effectively become a worse source of new ideas than 'alternative medicine', which is problematic for obvious reasons. I have no idea what percentage of drugs with a history of ethnobotanical usage end up demonstrating real efficacy, but I can guarantee it's a hell of a lot better than the 10,000-to-1 figure this article cites as the rate at which drugs successfully go from petri dish to mice to humans.
So then I think we have to ask ourselves: why are we spending our money on mice when actually going out into the rainforest and talking with indigenous people could be literally two orders of magnitude more cost effective? And, much more importantly, why are we genociding indigenous cultures in favor of 'scientific progress' when, empirically, preserving traditional knowledge is so much more conducive to real progress and understanding? Hundreds of thousands of people throughout history were willing to die to figure out which drugs work and which don't with the very limited tools they had, so ignoring what they have to teach us is a tragic waste, especially since we probably aren't going back to sacrificing millions of people in the name of scientific research any time soon.