As previously discussed, Quality by Design (QbD) is a proactive approach that identifies, before a randomised trial begins, the errors that matter to its result. This contrasts with ICH-GCP, which devotes considerable time and effort to data checking aimed at ensuring all data are error free. ICH-GCP treats all errors as equally important, whereas QbD sets out to distinguish the errors that matter in a trial from those that do not.
Marc Buyse and colleagues have shown, in a paper in Clinical Trials, that the reliability of a randomised trial's result depends on the type of error introduced. Using actual data from two randomised trials, they ran large numbers of simulations introducing different types of data errors. The results clearly show that a randomised experiment is surprisingly tolerant of random errors, provided the errors are independent of treatment group. By contrast, adding systematic errors to even a small proportion of participants produces a major bias in the estimated treatment effect. This is shown in the following figure:
The results highlight that, whilst every effort should be made to avoid systematic errors, there is no justification for devoting large amounts of resource to detecting and correcting random errors. Yet ICH-GCP emphasises resource-intensive monitoring activities aimed at correcting both types of error, such as 100% Source Data Verification (SDV). SDV is costly and largely useless, since in a properly masked trial the reliability of the result is extremely robust to most of the errors it might uncover.
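The contrast between random and systematic errors can be sketched in a miniature simulation. This is an illustrative toy, not the authors' actual method: the sample size, true effect, error rates, and error magnitudes below are all assumed purely for illustration. Random errors added equally to both arms leave the estimated treatment effect essentially unbiased (they only add noise), whereas a systematic shift affecting a small fraction of one arm moves the estimate itself.

```python
# Toy illustration (assumed parameters throughout): how random vs systematic
# data errors affect the estimated treatment effect in a two-arm trial.
import random

random.seed(42)

N = 5000            # participants per arm (assumed)
TRUE_EFFECT = 1.0   # true mean difference between arms (assumed)

def simulate(error=None):
    """Return the estimated treatment effect under a given error type."""
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    if error == "random":
        # Random errors: large noise added to ~20% of records in BOTH arms,
        # independent of treatment group.
        control = [x + random.gauss(0, 3) if random.random() < 0.2 else x
                   for x in control]
        treated = [x + random.gauss(0, 3) if random.random() < 0.2 else x
                   for x in treated]
    elif error == "systematic":
        # Systematic error: a constant upward shift in only ~5% of one arm.
        treated = [x + 3.0 if random.random() < 0.05 else x for x in treated]
    return sum(treated) / N - sum(control) / N

unbiased = simulate()
random_e = simulate("random")
system_e = simulate("systematic")
print(f"no errors:         {unbiased:.3f}")   # close to the true effect
print(f"random errors:     {random_e:.3f}")   # noisier, but still unbiased
print(f"systematic errors: {system_e:.3f}")   # biased away from the truth
```

Even though the random-error scenario corrupts far more records (20% of both arms) than the systematic one (5% of one arm), only the systematic error biases the estimate, which is the pattern the paper's simulations demonstrate.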
You can read the full paper here.