The National Academies of Sciences, Engineering, and Medicine are hosting a series of workshops looking at how real-world data can be converted to reliable real-world evidence to improve patient outcomes. Professor Sir Rory Collins spoke at the recent workshop and argued that, rather than placing greater reliance on observational data, it would be far better to make it much easier to do randomised trials properly. You can watch the talk here.
He pointed out that the starting point is to be aware of the limitations of observational studies in identifying the causal effects of treatments on health outcomes. The strength of observational studies in detecting causal associations is limited to large effects of treatments on health outcomes that would otherwise be rare. However, because observational studies are prone to various sources of confounding and bias, they may yield associations between treatment and health outcomes that are precise (i.e. have small random errors because of a large sample size) but are not causal. For this reason, observational studies cannot be relied upon to demonstrate the true effects of treatments: they may well give results that are precise, in the statistical sense, but wrong. This led the FDA to issue a warning in 2016 about the risk of observational studies generating incorrect or unreliable conclusions.
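This precise-but-wrong failure mode is easy to demonstrate. The sketch below is invented for illustration (it is not from the talk): an unmeasured confounder, labelled "frailty" here, both makes patients less likely to receive a treatment and worsens their outcomes, so a naive observational comparison estimates a sizeable benefit for a treatment whose true effect is exactly zero.

```python
import random
import statistics

random.seed(0)

# Invented illustration: the treatment has a TRUE effect of zero on the
# outcome. "Frailty" is an unmeasured confounder that both makes patients
# less likely to be treated and raises their outcome (e.g. event rate).
def naive_observational_estimate(n):
    treated, untreated = [], []
    for _ in range(n):
        frailty = random.random()                 # unmeasured confounder
        is_treated = random.random() > frailty    # frailer -> less often treated
        outcome = frailty + random.gauss(0, 0.2)  # outcome driven by frailty only
        (treated if is_treated else untreated).append(outcome)
    return statistics.mean(treated) - statistics.mean(untreated)

# With a very large sample the estimate is precise (small random error) but
# systematically wrong: it suggests the treatment lowers the outcome by
# roughly a third of a unit, even though its true effect is zero.
estimate = naive_observational_estimate(200_000)
print(round(estimate, 3))
```

Adjusting for the confounder would remove the bias in this toy example, but in real observational data the important confounders are often unknown or unmeasured, which is exactly the problem that randomisation solves.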
Therefore, what is needed to realise the full potential of real-world evidence is to make it much easier to do randomised trials to detect plausible moderate beneficial or adverse effects of treatments on common health outcomes.
Making randomised trials much easier to do is necessary because, over the last 20 years or so, they have become much more difficult to do as a result of increased regulation and related bureaucracy. This has increased the obstacles, delays and costs of trials, which has distorted the research agenda and reduced creative collaboration between industry and academia. The result is that industry develops fewer new treatments and academia runs fewer trials of existing therapies.
Collins then went on to propose that the key obstacle, and cause of these problems, is the International Council for Harmonisation (ICH). ICH's lack of representativeness and transparency are covered here, and the lack of evidence for its competence is shown by the proven failure of the ICH-GCP guideline and the contradictory text proposed in last year's amendment. The key problems with the ICH-GCP guideline will be very familiar to readers of this website, and include a failure to focus on the key scientific principles of randomised trials that are critical for generating reliable results. Further, ICH-GCP is not even working for the registration trials of new drugs that it was created to support, with such trials suffering from unsustainable costs, wasteful practices and poor quality. These large increases in the cost of trials as a result of ICH-GCP have coincided with the exponential growth of the CRO market in recent decades.
However, the problems with ICH-GCP extend well beyond registration trials because it is applied much more widely than originally intended. Examples include the new European Union Clinical Trials Regulation (which applies to all trials of medicinal products conducted in Europe and is expected to be implemented in 2019) and the Gates Foundation, which requires all grant holders doing clinical trials to comply with ICH-GCP (most of these trials are in resource-poor low- and middle-income countries). ICH is now undertaking a major revision of ICH-GCP (GCP Renovation, which you can read about here), but, due to a continued lack of transparency, it is still not clear how key stakeholders – such as academic trialists, non-industry funders, patients and the public – can be actively involved in this process.
These problems with ICH-GCP and other related trials regulations (and the frequent over-interpretation of these rules) mean that trials are much more complex and costly than they need to be. This unnecessary complexity and cost stems from a number of regulatory requirements that have no sound evidence base for their value. These include adverse event recording by sites, adverse event and serious adverse event reporting to regulatory authorities, and other related safety monitoring. Such safety reporting might plausibly detect large effects on rare outcomes but, as discussed above, reliably detecting moderate adverse effects on common outcomes requires a randomised comparison. The THRIVE trial, which you can read about here, is an example of a randomised comparison reliably demonstrating serious adverse effects of a commonly used drug (niacin): adverse effects that had not previously been shown by non-randomised analyses of adverse event/serious adverse event reporting in older trials or by routine pharmacovigilance.
Another area that ICH-GCP and related regulations promote without a sound evidence base is monitoring and data verification. This stems from a fundamental misunderstanding: the assumption that a reliable result from a randomised trial depends upon having high-quality data. It does not, a point also made in Rob Califf's talk at the same meeting, which highlighted ICH-GCP's mistaken focus on data precision at the expense of reliability. An example of detailed data checking (outcome adjudication) wasting resources for no material gain is the Heart Protection Study, which you can read about here. That said, some progress has been made on monitoring through the work of CTTI and its subsequent adoption by regulators, particularly the FDA (e.g. risk-based monitoring and the use of central statistical monitoring). More progress is still needed.
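The point that reliability comes from the randomised comparison rather than from the precision of the data can also be shown with a small simulation (again invented for illustration, with made-up numbers). Random measurement error in the outcome adds noise but, because it affects both randomly allocated arms equally, it does not bias the estimated treatment effect.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.5  # invented true benefit of the treatment on the outcome

def randomised_estimate(n, measurement_error_sd):
    # Outcomes in each randomly allocated arm, plus random measurement error.
    control = [random.gauss(0.0, 1.0) + random.gauss(0.0, measurement_error_sd)
               for _ in range(n)]
    treatment = [random.gauss(TRUE_EFFECT, 1.0) + random.gauss(0.0, measurement_error_sd)
                 for _ in range(n)]
    return statistics.mean(treatment) - statistics.mean(control)

clean = randomised_estimate(100_000, measurement_error_sd=0.0)  # perfect data
noisy = randomised_estimate(100_000, measurement_error_sd=1.0)  # heavily mismeasured

# Both estimates sit close to the true effect: random error in the data costs
# a little precision but does not distort the randomised comparison.
print(round(clean, 3), round(noisy, 3))
```

Larger random errors simply call for a somewhat larger trial; they are not an argument for exhaustive site monitoring and source-data verification.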
Collins closed his talk by highlighting the key strengths of randomised trials (in particular, placing greater reliance on comparison with the randomly allocated control group) and the need to develop new, evidence-based guidance built on these key principles of how to do a trial well, rather than current guidelines, particularly ICH-GCP, which focus on things that largely don't matter (non-randomised individual reporting of adverse events, site monitoring, etc.). These key strengths of randomised trials are not new, having been described in previous publications. The goal of MoreTrials is to develop such guidance based upon these key principles of how to do a randomised trial well.