You can outsource every operational function. You cannot outsource the accountability.
ICH E6(R3) is not approaching. It is in effect. Finalized by ICH in January 2025, adopted by the EMA in July 2025, and published as FDA final guidance in September 2025, the updated Good Clinical Practice guidance formally codifies what was once considered best practice into hard regulatory expectation: Quality by Design (QbD) built into protocol development, centralized monitoring as a formally recognized component of trial oversight, and explicit sponsor accountability that follows the study, not the service contract.
On February 19, Castor hosted Practical ICH E6(R3) Oversight for Your Centralized Monitoring Strategy, a live webinar exploring E6(R3) implementation for Phase 4 and real-world evidence programs that drew 360 clinical research professionals. That number is a signal: the industry is not just aware of these changes, it is urgently looking for practical answers.
Chief Product Officer Lisa Charlton and Director of Delivery Engineering Connor Ladly Fredeen delivered those answers, along with a live platform demo that generated more questions than the session had time to answer.
What E6(R3) actually demands
R3 builds on R2 but goes further. Where R2 introduced risk-based thinking as a concept, R3 embeds it as a structural requirement throughout the entire guideline framework. Sponsors must now define Critical-to-Quality (CtQ) factors at protocol design, set pre-specified Quality Tolerance Limits (QTLs) tied to those factors, and demonstrate continuous, documented monitoring against them. Key Risk Indicators (KRIs), the industry-standard operational complement to QTLs, operate at the site level to surface localized performance issues in real time.
The governing structure runs from CtQ factors to QTLs to documented oversight. On centralized monitoring specifically, E6(R3) Annex 1 (Section 3.11.4.2) formally recognizes it as a core and legitimate oversight approach. The guideline is deliberately flexible, requiring sponsors to implement a risk-proportionate monitoring strategy that may combine on-site, remote, and centralized methods based on trial-specific risks. Traceability across that process is not optional.
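The CtQ-to-QTL flow described above can be sketched in a few lines. This is an illustrative sketch only: the CtQ factor name, metric, and 5% threshold below are invented for the example, not drawn from the guideline or from any Castor product, and a real implementation would sit inside a sponsor's validated quality management system.

```python
from dataclasses import dataclass

@dataclass
class QualityToleranceLimit:
    """A pre-specified, study-level limit tied to a Critical-to-Quality factor.

    The factor name and threshold used below are illustrative only;
    E6(R3) requires sponsors to define their own, per study.
    """
    ctq_factor: str
    metric: str
    limit: float  # breach threshold, expressed as a proportion

def check_qtl(qtl: QualityToleranceLimit, numerator: int, denominator: int) -> dict:
    """Evaluate an observed rate against its QTL and record the outcome.

    Returning a structured record (rather than a bare True/False) supports
    the continuous, documented monitoring trail the guideline expects.
    """
    observed = numerator / denominator if denominator else 0.0
    return {
        "ctq_factor": qtl.ctq_factor,
        "metric": qtl.metric,
        "observed": round(observed, 4),
        "limit": qtl.limit,
        "breached": observed > qtl.limit,
    }

# Illustrative QTL: important protocol deviations should stay under 5% of participants.
qtl = QualityToleranceLimit("Protocol adherence", "important_deviation_rate", 0.05)
result = check_qtl(qtl, numerator=9, denominator=120)  # 9 deviations across 120 participants
# result["breached"] is True: 7.5% observed against a 5% limit
```

The point of the structured output is the audit trail: each evaluation is a documented comparison of an observed value against a pre-specified limit, which is exactly the traceability the paragraph above calls non-optional.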
For a thorough breakdown of the regulatory framework and practical implementation considerations, Castor’s ICH GCP E6(R3) insight brief covers the detail you need before you act.
Understanding what R3 requires is the straightforward part. Finding tools proportionate to your organization’s actual size and risk profile is where the market falls short.
The problem nobody is solving cleanly: the biotech monitoring gap
ICH E6(R3) compliance is not a tiered obligation. The same requirements that apply to a global pharma company with a dedicated Risk-Based Quality Management (RBQM) team apply to a ten-person biotech running clinical trials on a single compound. The tools available in the market to address them, however, were not built with that reality in mind. Lisa Charlton put it plainly:
“The ICH rules apply to everyone, but the tools in the market are fit for purpose for big pharma and enterprise-level support. Sometimes these traditional RBQM tools are sledgehammers to acorns.”
— Lisa Charlton, Chief Product Officer, Castor
The Clinical Research Associate (CRA) is ground zero for that burden. Under R3, CRAs are expected to continuously track the KRIs the sponsor has defined: patient enrollment velocity, screen failure rates, data quality signals like query rates, and safety flags like adverse event patterns. The volume of centralized monitoring work will increase significantly across all trial types and all sponsors, large and small. For a pharma organization with a dedicated RBQM team, that is manageable. For a single-compound biotech where the CRA is also the clinical operations lead, it is a different problem entirely.
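Where QTLs operate at the study level, KRIs surface issues site by site. A minimal sketch of that per-site check, assuming invented site names, metric values, and thresholds (a sponsor would pre-specify real ones in the monitoring plan):

```python
# Hypothetical site-level KRI snapshot; the metric names and thresholds
# are illustrative, not drawn from Castor's product or the E6(R3) text.
sites = {
    "Site 101": {"screen_failure_rate": 0.22, "queries_per_form": 0.8},
    "Site 102": {"screen_failure_rate": 0.55, "queries_per_form": 2.9},
    "Site 103": {"screen_failure_rate": 0.18, "queries_per_form": 0.4},
}

# Per-KRI alert thresholds, as a sponsor might pre-specify them.
thresholds = {"screen_failure_rate": 0.40, "queries_per_form": 2.0}

def flag_sites(sites: dict, thresholds: dict) -> dict:
    """Return, for each site, the list of KRIs that exceed their thresholds."""
    return {
        site: [kri for kri, value in metrics.items() if value > thresholds[kri]]
        for site, metrics in sites.items()
    }

flags = flag_sites(sites, thresholds)
# Site 102 trips both KRIs; the other sites trip none.
```

For a small team, even a check this simple, run on a schedule and logged, is closer to proportionate oversight than an enterprise RBQM deployment, which is precisely the gap the next paragraph describes.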
For a ten-person biotech managing one study, deploying a full-scale RBQM platform is not proportionate oversight. It is the operational weight that crushes the teams it is supposed to help. R3 requires a proportionate approach. For some sponsors, that genuinely means a well-documented manual process. For others, the audit burden makes that untenable. What every sponsor needs is something fast to deploy, study-specific, and proportionate to the actual risk profile. The market has largely ignored that distinction.
The guidance is also unambiguous on where responsibility sits, regardless of what you deploy. As Lisa stated during the session:
“Even if you outsource everything to a CRO, you are still responsible for data integrity and participant safety. And for that, you will always need your own view into the data.”
— Lisa Charlton, Chief Product Officer, Castor
Castor’s answer: built from the data layer up
Connor walked the audience through the technical foundation: a first-party data layer, built to ALCOA+ principles and validated under Castor’s formal SDLC, that unifies event streams from Electronic Data Capture (EDC), electronic Patient-Reported Outcomes (ePRO), eConsent, and randomization into a single auditable source of truth. On top of that sits a custom, study-level dashboard, with each metric annotated to specific ICH E6(R3) sections and backed by human-readable specifications that make traceability demonstrable, not assumed.
The core architectural distinction Connor drew is that this is not an AI agent dropped on top of existing reporting infrastructure. The data layer, the specifications, and the agentic interface are built together from the ground up on the study itself. That matters for auditability, and it matters for regulatory defensibility in a way that bolt-on tools cannot replicate.
What the live demo actually showed
The most forward-looking moment of the session was the live demonstration of QueryLab, Castor’s agentic AI interface built directly on the data layer. Connor asked a plain-language question (“Show me the correlation between enrollment speed and number of queries”) and received a step-by-step human-readable explanation of the underlying logic alongside the full machine-readable code. Every output is auditable. It can be pinned to a dashboard. The logic can be reviewed and independently checked without relying on the system to validate its own work.
No black boxes. That is the point.
The demo also showed deep linking: one click from a flagged protocol deviation in the dashboard directly into the specific participant record in the EDC. The Q&A that followed went long. Attendees wanted to know how far this goes.
The central question in the room was one that every compliance-focused sponsor is quietly asking right now: can we build a monitoring infrastructure that satisfies E6(R3) without the overhead of tools built for a different scale of organization? The answer Castor’s clinical trial solutions demonstrated is yes. What it takes to get there is worth seeing firsthand.
Frequently asked questions
These are real questions submitted by attendees during the live session.
Is there an audit trail for AI-generated insights? Can AI-generated interpretations be disabled in certain regulated environments?
QueryLab is currently in proof-of-concept form, as Connor noted explicitly during the session. That said, every query and action is captured in the audit trail, and the AI’s underlying logic is surfaced as both human-readable specifications and machine-readable code, so any output can be reviewed and verified. The feature can be disabled in environments where it has not yet been formally validated for production use.
Can the data be owned or housed in our cloud versus Castor’s?
Data is currently housed in Castor’s cloud environment, which spans multiple server locations globally to meet varying privacy and encryption requirements. Private server arrangements can be discussed depending on sponsor needs.
Are the dashboard and QueryLab usable in studies that have already been running for years?
The unified data layer is already available across active studies. Building the dashboard is a structured custom development effort. It requires gathering study-specific human inputs to produce the human-readable and machine-readable specifications that define each metric. It is not a feature flag. It is a deliberate engagement.
Can Castor integrate via API with existing TMF or CTMS software?
Yes. Castor is an API-forward platform, and the unified data layer is accessible via API. Integration with existing TMF and CTMS systems can be scoped based on your stack.
Can this solution be used at the sponsor level to filter and manage action items across internal teams?
The dashboard’s task management view surfaces site-level risks and required actions. Customization for specific sponsor personas and team-level filtering is defined during the requirements-gathering phase when building the dashboard.
Is it possible to implement an eCRF designed by a different CRO within Castor’s EDC?
Castor supports standard eCRF designs with built-in flexibility. The dashboards and QueryLab shown in the webinar sit on top of Castor’s unified data layer, so study data would need to flow through the Castor platform for those features to function.
Is there an approval step before changes go live?
Yes. All changes follow the standard SOP-governed process: design, build, and test. Mid-study updates follow the same change control framework required by the guidance.
Who creates the unified data layer?
The unified data layer is a Castor infrastructure investment developed over several years by the Castor engineering team. Sponsors do not build or configure it. It is the foundation on which study-specific dashboards are built.