This is a personal perspective on what I think is one of the fundamental problems with the whole paradigm of how we regulate randomised trials: the mindless following of instructions rather than thinking about what it takes to do a trial well.
The scientific case for the MoreTrials campaign to do more trials, better, is built around three fundamental problems with ICH-GCP, which have previously been described here. First, it focuses on things that don’t really matter at the expense of the few key principles that do matter for doing a trial well, dressed up in important-sounding but vague concepts like the need to collect “high-quality data.” Second, it has a one-size-fits-all mentality, which makes life easier for the checkers, but nobody else. Third is the complete failure over the last two decades to update ICH-GCP to keep pace with change, change largely driven by technology.
The starting point, then, in developing a fundamentally better replacement for ICH-GCP, one that promotes rather than hinders the conduct of randomised trials, is to set out the conditions for doing a trial well. I think there might be only three that really matter:
- Identify an important question
- Design a randomised experiment to answer that question reliably
- Conduct that experiment keeping the participants safe.
No mention of abstract concepts like “an international ethical and scientific quality standard”, and no recipe book of do this, then do that…
No, the problem is that it’s really difficult to do this well: to do a trial that means things will never be the same again, to answer a really important question so well and so clearly that doctors and anybody else immediately change what they do. There are examples of this.
Clot busters for treating heart attacks and not giving steroids after major head injury are two examples that stand out. I want to write that there are many more; sadly, there are not, and bad regulation is at least partly to blame for that.
To do a trial really well requires hard work: time and attention, often with blood, sweat and tears. It doesn’t start with a recipe book; it starts with a blank screen. The trials that I’ve seen first-hand done really well don’t get conceived and completed in a few weeks or months. They don’t pluck arbitrary metrics out of the air, like “time to first patient recruited”, and then gamify the whole enterprise to achieve them at the expense of the things that matter.
Contrast this with what I see on the research ethics committee that I chair.
We usually see three or four new randomised trials each month. We do see some really good trials, both from academic groups and from industry, but the majority of trials we review don’t start with that blank sheet of paper with just those three headings above: identify, design, conduct. No, instead, you pull out the last similar protocol and tweak it.
If you work in a large organisation, you might ask the stats group to write the section on statistics and the safety department the section on monitoring. (The quality people, in my experience, will then add five pages of people who need to sign things off before you can do anything else, even without being asked. When I was in big pharma, I didn’t know most of the people who were signing off my studies.)
You then end up with what we affectionately call a bit of a “pig’s breakfast”: we find ourselves reviewing a trial with five, maybe six, “primary outcomes”, and nobody sees the irony of that! A camel, when we set out to design a racehorse.
At one meeting I asked a CRO, whom the pharma company had appointed to run the trial, why they had chosen a particular primary outcome measure, and he replied, “Sorry, I don’t know, I’m not from the company.” It sounds like what the bankers said back in 2009: “Sorry, we’ve made the system so complex nobody understands it.”
And along with all of this comes writing down the key information that people who are thinking of taking part need to know, the main thing that ethics committees look at. Most reasonable people would agree that this participant information leaflet might be three, maybe four, pages, because, as our participant panel in Oxford regularly tells us:
“If you make it any longer than 3-4 pages people won’t bother to read it.”
Again, this is hard to write, and it involves staring at a blank screen while thinking, if you’re like me, “I’ll make a start after I make another coffee.” No, instead, what happens in most instances is that the last information leaflet is dusted off and, again, tweaked a little, because you can feel reassured that it covers everything, that the legal department has made sure that every eventuality is covered, “just in case.” And we end up with a participant information leaflet that is 25-30 pages long, and nobody does anything about it. (I’ve repeatedly asked the Health Research Authority in the UK to do something about it, but they just make matters worse by creating even more formulaic guidance, which means the 50-page leaflet can’t be far away.) And we call it ethics!
It’s like the cynic walking around a gallery, tutting at a piece of modern art, saying “I could have painted that.” No, doing a trial really well takes hard work, blood, sweat and tears. Regulation, and specifically ICH-GCP, needs to promote doing trials well rather than getting in the way. No, it’s most certainly not painting by numbers.