Early-stage biotech’s AI advantage is a window that closes. Sol Babani and Derk Arts on how to use it.

April 21st, 2026

At the AWS Life Sciences Symposium the previous week, Solomon Babani, Founder and CEO of Symbiosis Advisors, sat through presentation after presentation on AI in clinical development. His takeaway: “There’s a lot of retrofit going on.” Companies were taking existing processes, existing workflows, and asking how to bolt AI onto them. He left mostly disappointed.

That framing opened a live conversation on April 15 between Sol and Derk Arts, MD, PhD, CEO and founder of Castor, on what AI strategy actually looks like for early-stage biotech clinical trials teams. An hour, no slides, no scripted pitch. The session covered why most AI guidance misses the point, where the FDA’s guidance framework actually helps, and what separates the teams that will get this right from the ones that won’t.

The distinction is not budget, team size, or therapeutic area. It’s whether you’re building or retrofitting. Early-stage biotech can build. Most of pharma can’t.

The greenfield window is real

Big pharma is stuck. Studies designed years ago, processes embedded in SOPs that cost more to change than to work around, and organizational immune systems that turn almost any innovation into a retrofit. Early-stage biotech teams are in a completely different position.

“I think it would be almost foolish not to entertain how to modernize your approach. You have the opportunity, you don’t have to worry about undoing legacy systems, legacies, processes, procedures, and all of that, and look for a more efficient way.”

Sol Babani, Founder and CEO of Symbiosis Advisors (14:29)

That window closes the moment a team builds a first study on manual processes. The longer you wait, the more you are retrofitting too. Derk noted that Castor itself is working to remove manual data entry from the process entirely, describing how Castor Catalyst handles source document review and data capture with human oversight on the output rather than at the data entry stage. The goal, he said, is to create a digital trail that makes monitoring a remote and efficient process rather than a site visit.

FDA guidance: useful signal, limited map

Derk’s read on the current FDA AI guidance was specific: positive direction, weak on practical detail. His estimate was that about ninety-eight percent of the innovation actually happening in this space involves teams using existing models, not building their own. The guidance focuses heavily on model development. That maps to a fraction of what most teams are actually doing.

Sol’s answer: stop waiting for detailed guidance and start with a structured decision framework instead. Identify the business problem. Define the specific AI use case. Work through validation requirements and data quality. Estimate cost and timeline. Define what success looks like before you start. That sequence matters. Skipping any step is where implementations go wrong.
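One way to make that sequence operational is to capture it as a literal artifact the team fills in before any build starts. A minimal sketch in Python; the field names are our own shorthand for the five steps, not a Symbiosis Advisors template:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseAssessment:
    """One worksheet per candidate use case (illustrative structure)."""
    business_problem: str          # 1. the operational pain point
    ai_use_case: str               # 2. the specific task AI will perform
    validation_approach: str       # 3. how output quality will be proven
    data_requirements: list[str] = field(default_factory=list)  # 3. sources, quality checks
    cost_and_timeline: str = ""    # 4. estimate before committing
    success_criteria: list[str] = field(default_factory=list)   # 5. defined before starting

    def ready_to_proceed(self) -> bool:
        # Skipping any step is where implementations go wrong,
        # so every field must be filled in before moving ahead.
        return all([self.business_problem, self.ai_use_case,
                    self.validation_approach, self.data_requirements,
                    self.cost_and_timeline, self.success_criteria])
```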

Build versus buy: the compliance variable

For data collection and electronic data capture infrastructure, Sol’s position was unambiguous: in a 21 CFR Part 11 environment, building your own compliance layer introduces validation risk that established vendors have already absorbed. The cost gap between buying sophisticated tools and building from scratch has collapsed. Most biotech teams don’t need to build. They need to choose well.

Human oversight is a methodology, not a checkbox

Sol was clear that human-in-the-loop oversight is not just a regulatory compliance requirement. It is an active design choice that shapes how a team builds its AI workflows from day one. Derk put the practical risk plainly:

“It just produces believable bullshit if you’re not careful. When you do it properly, it doesn’t do that. But if you don’t do it properly, it probably will.”

Derk Arts, MD, PhD, CEO and founder of Castor (39:58)

The right setup requires a well-structured data source, a paid model with appropriate data protection agreements, and a documented review workflow. The shortcut through free public models is not a starting point. It is a risk that looks like one.
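What a documented review workflow means in practice can be shown in a few lines. A sketch of the review gate, with all names hypothetical; the point is that nothing moves downstream without an attributable, timestamped human decision:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ReviewDecision:
    """One documented human review of one AI-generated output."""
    output_id: str
    reviewer: str
    approved: bool
    corrections: str
    reviewed_at: str

@dataclass
class ReviewLog:
    entries: list[ReviewDecision] = field(default_factory=list)

    def gate(self, output_id: str, ai_output: str, reviewer: str,
             approved: bool, corrections: str = "") -> Optional[str]:
        """Release an AI draft downstream only after a named reviewer
        signs off; approved or not, the decision is logged."""
        self.entries.append(ReviewDecision(
            output_id, reviewer, approved, corrections,
            reviewed_at=datetime.now(timezone.utc).isoformat()))
        return ai_output if approved else None

log = ReviewLog()
draft = log.gate("dmp-001", "Generated DMP section ...",
                 reviewer="jdoe", approved=True)
```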

Derk also framed the financial stakes of getting AI right across the industry, noting that failed oncology studies represent billions of dollars in lost investment annually, as context for why modernizing clinical trial infrastructure matters beyond individual efficiency gains.


What’s in the full recording

The session covered more ground than the highlights above capture. Specific moments worth finding:

Sol walks through exactly why CRO economics are a structural obstacle to faster AI adoption in monitoring. A large part of CRO revenue is tied to monitoring visits, and he explains where the conflict actually sits and what biotechs can do to address it in negotiations. (26:00)

Derk breaks down what he calls the four-stakeholder problem in clinical AI: the vendor, the CRO, the sponsor, and the site, each with different definitions of what “saving money” means and different incentives to move toward or resist change. (29:04)

A detailed exchange on the buy versus build decision, including Derk’s specific reasoning for why building 21 CFR Part 11 compliant infrastructure against an existing EDC system would be “quite risky” for most biotech teams, and where Sol draws the line between cases that justify building versus buying. (22:13)

Sol’s five-step decision framework, presented in full with specifics on validation approach, training data requirements, timeline and cost estimation, and how to define what a successful implementation looks like at the end. (37:34)

The full conversation is available on demand. If you’re building your first clinical program or thinking through your AI strategy for an upcoming study, this is 60 minutes worth watching.

Watch the full recording

Frequently asked questions

How should early-stage biotech companies start with AI in clinical operations?

Start by identifying the specific business problem before choosing a tool. Sol Babani recommends a five-step approach: identify the business problem, define the specific AI use case, think through validation and data requirements, estimate cost and timeline, and define what success looks like before you start. Avoid applying AI broadly to all processes at once. The most common mistake is treating AI adoption as a technology project rather than an operational one.

What does human-in-the-loop mean for AI in regulated clinical environments?

Human-in-the-loop is not just a compliance checkbox. It is an active design methodology where a qualified person reviews and approves AI-generated outputs before they enter the clinical record. In a GCP and 21 CFR Part 11 context, traceable human oversight is required. The practical implication: early implementations should build full human review into the workflow, track the outcomes of that review, and use that data to justify a more risk-based approach over time. Teams that skip this step are also skipping the evidence base they will need to reduce review requirements in future studies.

Should early-stage biotech companies build or buy AI tools for clinical development?

For most clinical trial solutions use cases, buy. Building your own 21 CFR Part 11 compliant infrastructure introduces validation risk that established vendors have already absorbed. The cost gap between buying sophisticated AI tools and building from scratch has collapsed significantly. Building makes sense only when the use case is highly proprietary, the data cannot leave the organization, or no commercial solution covers the specific problem. For source document review, data capture, and monitoring support, commercial solutions with validated compliance frameworks are almost always the better choice for an early-stage biotech team.

Joel White’s Q4 CRO breakdown: strong bookings, a sell-off that didn’t match, and the disruption gap nobody is talking about

April 9th, 2026

CRO bookings were up year over year and accelerating. Revenues were recovering across most major players. Delays and cancellations, after a brutal stretch through much of 2025, had moved back to something closer to normal. So why did the stocks take a beating?

That disconnect was the starting point for a forty-five-minute conversation between Joel White, founder and principal at Market Capital Consulting, and Derk Arts, CEO at Castor. Joel spent fifteen years in-house at large and mid-sized CROs before founding his own practice, where he produces the quarterly market analysis that strategy and commercial teams across the sector use to benchmark pricing and track industry performance. His Q4 recap had landed the week before — twenty to thirty pages covering every major public CRO, drug discovery platform, and biopharma equity in the sector. The session was the annotated, live version. For clinical trial technology teams navigating AI-heavy market headlines, the session addressed a question with a specific and useful answer: where is the disruption actually landing, and where is it still narrative?

The clearest finding was that the AI-pocalypse narrative hit CRO stocks not because the numbers were bad, but largely because of how some companies handled questions about it. Bookings are up year over year and accelerating. Revenues are recovering across most of the major players.

Then came the analyst questions about AI strategy. Joel described the Medpace earnings call as a turning point for sentiment. Medpace is the sector’s highest-valuation outlier, significantly smaller than an IQVIA or ICON but priced for future growth. The CEO’s response to questions about AI did not land well. The stock was, in Joel’s words, “absolutely smashed. And still to this day, very depressed.” Contrast that with IQVIA and Fortrea, whose leadership arrived prepared with structured responses that, while not resolving the underlying concern, at least prevented things from getting worse.

“When it comes to some of the doomsday scenarios, for me, I need to start seeing that growth curve somehow reverse when other things are looking good.”

Joel White, Market Capital Consulting — follow Joel’s newsletter on LinkedIn

But the session drew a clear line between two different industries. For CROs running biotech clinical trials, nothing in the Q4 earnings data yet supports the disruption thesis. Joel’s argument runs on basic economic logic: if AI lowers the cost of drug development, more drugs get developed and more trials follow. He put it directly: “I tend to believe that clinical research…is very elastic to the extent that if the cost of development goes down, there will be more things that get developed, that there will be more trials to help de-risk the developments that are already in place.” The structure of CRO contracts reinforces this. Because the overwhelming majority run on fixed-price milestones rather than hourly billing, CROs have a direct financial incentive to adopt efficiency tools regardless of whether sponsors mandate them.

For drug discovery software companies, the picture looks different. Certara, Simulations Plus, and Evotec are at or near all-time lows, with companies explicitly citing seat-based license losses as a primary driver. IQVIA agreed to acquire Charles River’s preclinical platform earlier this year at a price that drew comment in the market for how low it came in. The signal is clear: AI disruption is already visible in drug discovery software. It has not yet appeared in CRO services data.

The session also surfaced a question worth sitting with: when do efficiency gains from decentralized trial models and electronic source integration start showing up as pricing pressure on CROs? Joel’s view was measured. The technology gains are real. But the contract structures, the pace of regulatory adoption, and the gap between trial efficiency and billing models suggest the impact is years away, not quarters.

Derk put the position of regulated electronic data capture and clinical operations software on the disruption timeline directly:

“The type of software that Castor and all of our friends in the space create is going to be the last to go because it’s heavily regulated. It’s the last thing you want to vibe code, basically.”

Derk Arts, CEO at Castor

For clinical trial teams building on regulated platforms, the practical takeaway: the same compliance requirements that slow AI adoption in this space also make the underlying software category more stable. Procurement cycles, validation requirements, and regulatory audit trails don’t move at the pace of a general-purpose AI tool.

The recording covers considerably more than this post captures.

Joel runs through each major public CRO company in detail, including what ICON’s simultaneous accounting investigation announcement meant for investor confidence and why the Medpace CEO’s response carried such outsized consequences. He and Derk get into the drug discovery software sector at length, covering why some of these companies are moving to bring their own drug assets in-house and what that shift might mean for the traditional service model.

There is a specific exchange about whether new trial starts built on more modern decentralized clinical trial technology stacks will look materially different, and what Joel would need to see in the numbers to genuinely change his view on the disruption timeline.

Joel followed up the session with a post-event newsletter piece that takes the CRO-as-investor angle further, including a look at how IQVIA is positioning itself in early-stage biotech funding and what that strategy signals about where large CROs think the market is heading. Worth reading alongside the recording.

Watch the full session on demand

Forty-five minutes of context on where AI disruption in clinical research is actually landing, and where the Q4 data doesn’t yet support the narrative. Built for anyone making technology or investment decisions in the sector.

Watch now

Frequently asked questions

What did Q4 2025 CRO revenue and bookings data actually show?

Q4 showed bookings up year over year and accelerating across the major CROs, with revenues recovering in core direct services rather than just pass-through costs. Delays and cancellations, which had been severely elevated through much of 2025, moved into a more normalized range. ICON was the notable exception, with an internal accounting investigation announced in the same period adding company-specific pressure to broader sector sentiment.

Why did CRO stocks fall despite strong Q4 operational results?

Two factors intersected at the same time. First, a broader investor narrative about AI disrupting all software-as-a-service businesses created sector-wide pressure, catching CROs in the fallout despite their limited software revenue exposure. Second, how individual CEOs responded to AI questions on earnings calls mattered. Companies whose leadership arrived prepared with structured answers fared better than those who appeared caught off-guard. The data itself was not the problem. The narrative around it was.

How does AI disruption affect clinical operations software differently from drug discovery software?

The distinction matters a great deal. Regulated clinical operations software, including clinical trial solutions and eCOA solutions, operates under strict regulatory oversight that significantly limits the pace of AI-driven displacement. Drug discovery software companies, by contrast, are already experiencing measurable disruption in seat-based licensing, with several major players trading near all-time lows as of Q4 2025. The disruption is real. It just has not arrived uniformly across all segments of the industry.

Editorial note: During the live session, Joel mentioned that IQVIA had acquired Charles River’s preclinical platform. For accuracy: the acquisition agreement was announced in late February 2026 and had not yet closed at the time of the session. The body of this post reflects the correct status.

References

  1. White, J. (2026). Q4 2025 CRO and biopharma market update. Market Capital Consulting quarterly newsletter. Available via LinkedIn newsletter.
  2. Castor LinkedIn Live session: “The CRO Rebound and the AI-pocalypse: a Q4 industry post-mortem.” Recorded March 17, 2026. Featuring Joel White (Market Capital Consulting) and Derk Arts (Castor).
  3. IQVIA Holdings. (2026). IQVIA to acquire Charles River Laboratories’ early development services business. Acquisition agreement announced February 2026. IQVIA Investor Relations.
  4. White, J. (2026). Follow-ups on the Q4 recap for CROs and investors. Market Capital Consulting, published via LinkedIn Pulse, March 2026.

What AI replaces in Phase 2/3 data management, and the edit check question your team hasn’t asked yet

April 9th, 2026

There’s a version of the AI-in-clinical-trials conversation that consists mostly of noise. Bold claims, proof-of-concept demos, vendors who say their system handles everything. Then someone asks what the FDA would say, how it fits existing SOPs, or who takes accountability when something goes wrong, and the conversation gets much shorter.

Derk Arts, CEO at Castor, and Alison Bishop, a data management specialist with close to thirty years of experience across small and large clinical research organizations, set out to have a different kind of conversation. One that was, in Derk’s framing, “very specific and very much grounded in reality.” The session ran for about forty-five minutes on March 31 and covered the full data management lifecycle: what AI handles well today, what it is approaching, and where the honest answer is still “not yet.”

The conversation opened where most teams are already starting: document generation. Medical writing, clinical data management plans, database validation protocols. Both Alison and Derk confirmed this is where AI has made the most visible inroads. The technology generates text, so people gravitated toward having it generate text. Alison noted that drafts of the DMP, the SAP (statistical analysis plan), and supporting documents are natural starting points, though all require human review before anything goes anywhere near a sponsor. Both agreed this territory is important and getting better, but it’s not where the session was going to spend its time.

That time went to edit checks.

Derk framed the provocation directly: what if organizations moved away from writing programmed edit checks entirely, and instead deployed an intelligent system to flag problematic data in real time? Hard-coded rules replaced by contextual judgment. Alison’s answer was measured and specific:

“I think there are a number of things that we need to make sure that we work through in order to get there. And obviously sponsors are going to want evidence. They’re going to want proof that what they get at the end is as good as if not better than what they would get with a traditional model.”

Alison Bishop, data management specialist

She outlined what that evidence would look like: run both approaches on a study with known edit check history, compare what each system flags, and build a retroactive benchmark. Derk described how Castor applies this with Castor Catalyst, using real source data and patient journeys to validate AI performance against the historical record. The question most Phase 2/3 teams haven’t asked is not whether AI can flag data issues. It can. The question is whether you can prove it, to a sponsor, in a way that would hold up on inspection.
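The comparison Alison describes reduces to set arithmetic once both systems have run over the same historical study. A minimal sketch, with our own metric names; the historical flags serve as the reference set sponsors asked to see, not absolute ground truth, since items only the AI flagged may be real issues the rules missed:

```python
def retroactive_benchmark(historical_flags: set[str],
                          ai_flags: set[str]) -> dict:
    """Compare AI-flagged data points against a study's known edit
    check history. AI-only flags go to human adjudication rather
    than straight into a precision score."""
    both = historical_flags & ai_flags
    return {
        "agreement": len(both),
        "missed_by_ai": len(historical_flags - ai_flags),
        "ai_only_to_adjudicate": len(ai_flags - historical_flags),
        "recall_vs_history": round(len(both) / len(historical_flags), 3)
                             if historical_flags else 0.0,
    }

print(retroactive_benchmark({"q1", "q2", "q3"}, {"q2", "q3", "q9"}))
# {'agreement': 2, 'missed_by_ai': 1, 'ai_only_to_adjudicate': 1,
#  'recall_vs_history': 0.667}
```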

Query management sits directly downstream of edit checks, and the session moved there next. The same system that identifies a data issue can generate the query. But the more interesting design question is whether it should, and under what conditions. Alison and Derk worked through a risk-stratification model: adverse events and primary endpoints always route through mandatory human review; lower-risk data can move at a different pace once confidence is established. Derk described how confidence scoring in Catalyst serves as the configurable decision point for each query type. Alison put the broader principle in terms that will resonate with anyone who has watched data review backlogs grow:

“AI is taking away the burden of the volume of review… focusing us at the things that are going to have the biggest value.”

Alison Bishop
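A sketch of what that risk-stratification model can look like as a routing rule. The domain names and the 0.90 threshold are illustrative, not Castor Catalyst’s actual configuration:

```python
# Data domains that always route to a person, regardless of confidence.
ALWAYS_HUMAN = {"adverse_event", "primary_endpoint"}

def route_query(domain: str, confidence: float,
                auto_threshold: float = 0.90) -> str:
    """Decide how an AI-generated query moves through the workflow."""
    if domain in ALWAYS_HUMAN:
        return "mandatory_human_review"
    if confidence >= auto_threshold:
        return "auto_release_with_audit_trail"
    return "human_review_queue"

# An AE query is never auto-released, even at very high confidence:
print(route_query("adverse_event", 0.99))           # mandatory_human_review
print(route_query("concomitant_medication", 0.95))  # auto_release_with_audit_trail
```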

The governance question ran through all of it. Regulators will require accountability, system validation, and the ability to explain every AI-driven decision. Alison laid out what a human oversight model would actually need: review gates for the first patients, defined processes for model drift, deviation handling that feeds into existing quality management. The direction the session pointed to is an AI validation plan sitting alongside the standard data management plan, defining upfront which data the system handles autonomously, which routes require human sign-off, and how confidence thresholds are set. Both agreed this is where things are heading.

The recording covers material this post doesn’t have room for.

Derk walks through Castor Catalyst’s visual audit trail in detail (37:17): a replay capability that shows what the AI agent did and why, step by step, in terms a human reviewer can follow. He makes a counterintuitive point that’s worth the watch on its own. An AI creates a more complete audit record than a human doing the same work, because a CRA doing source data verification doesn’t log every step in their reasoning. The AI logs all of it.

The Q&A includes a direct exchange on automation bias (43:11): how you stop human review from becoming a rubber stamp when the AI is right most of the time. It gets into ensemble validation approaches and why the problem isn’t actually new to this industry.

And Alison closes with a use case worth pursuing (42:10): using AI to track clean patients and flag what still needs doing before an interim deliverable. Straightforward in theory. Harder than it sounds when your systems don’t talk to each other.

Watch the full session on demand

Forty-five minutes on where AI is actually landing in clinical trial data management, built for anyone making technology decisions in the space.

Watch now

Frequently asked questions

Can AI replace programmed edit checks in clinical trials?

Possibly, but not without evidence first. Alison’s position in the session was that sponsors will want proof that an AI-driven approach produces outcomes at least as good as traditional edit checks before they adopt it. The practical path is to run both models in parallel on a study with known edit check history, compare what each system flags, and build a retroactive benchmark. Derk described how Castor applies this with Castor Catalyst, using real source data to validate AI performance against the historical record before asking any sponsor to trust it.

How do you prevent human review of AI output from becoming a rubber stamp?

Automation bias (accepting high-quality AI suggestions without scrutiny) was raised in the Q&A. Derk described one approach Castor uses internally: running a separate AI validation step using a different model or model family to check the output of the first. This creates a consensus check rather than relying on a human reviewer to spot errors in a mostly-correct stream. Risk stratification is the other mechanism: routing the highest-stakes data (adverse events, primary endpoints) through mandatory human review, while lower-risk data moves at a different pace. Derk also made the point that this problem isn’t unique to AI. It’s already present when a CRA does source data verification, and at least with AI you have an audit trail of every decision.
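A sketch of the consensus mechanic, under the assumption that each validator is a callable wrapping a different model or model family; the function names are ours, not Castor’s:

```python
from typing import Callable

def consensus_route(finding: str,
                    validators: list[Callable[[str], bool]]) -> str:
    """Ensemble validation: independent checks re-examine the primary
    model's finding; any disagreement escalates to a human, instead of
    asking a reviewer to spot rare errors in a mostly-correct stream."""
    if validators and all(check(finding) for check in validators):
        return "accept_with_audit_trail"
    return "escalate_to_human_review"
```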

Where should clinical data management teams start with AI adoption?

The session framed it as a progression tied to what you can validate and demonstrate to a sponsor or regulator. Document generation (drafting data management plans, SAPs, validation protocols) is where most teams start, with full human review. Edit checks and query management are the next steps, but both require evidence of equivalence or improvement before replacing the traditional approach. The key point Alison made: process redesign matters more than tool selection. Overlaying AI on an existing workflow rarely produces the efficiency gains clinical data management teams are looking for. The gains come from thinking carefully about where human involvement actually needs to be.

References

  1. Castor LinkedIn Live session: “What AI replaces in Phase 2/3 data management — and what it doesn’t.” Recorded March 31, 2026. Featuring Alison Bishop (data management specialist) and Derk Arts (CEO, Castor).

Phase 4 and real-world evidence: not a spectrum, a strategic choice

March 13th, 2026

Phase 4 and real-world evidence are not synonyms for post-approval research. Phase 4 is a specific regulatory milestone: an interventional clinical trial that follows drug approval, often required as a condition of that approval. Real-world evidence spans the entire drug development lifecycle, from natural history studies running before Phase 1 to long-term safety and effectiveness programs active years after a product reaches the market. This piece covers the definitional line between the two, the main types of RWE study and what each is built to answer, where the two approaches genuinely converge, how their infrastructure requirements differ, and how to think about both as part of a coordinated post-approval evidence strategy.

Post-approval, sponsors often manage two distinct evidence streams simultaneously. The Phase 4 program fulfills regulatory commitments made at the time of approval. The real-world evidence program builds the effectiveness and safety story for payers, medical affairs, and long-term label development. Both can be required by regulators. Both generate data that agencies and payers review in their assessments. What separates them is the question each is built to answer and the methodological logic that question demands.

That distinction matters commercially and scientifically. A payer reviewing coverage decisions for a specific patient population needs effectiveness data from real clinical practice, not a controlled trial designed to satisfy a regulatory commitment. A regulator reviewing a post-marketing commitment needs the interventional evidence that commitment specified, not observational data collected under routine care. Getting the right evidence to the right stakeholder requires treating these as separate programs from the start.


The word that derails most conversations: “trial”

There is a reason RWE practitioners react when someone uses the word “trial” in a meeting about observational studies. It signals a category error that runs deeper than terminology. Phase 4 is a clinical trial. An RWE study is not.

Phase 4 comes after approval, but it retains the defining characteristics of the trial: a prospective protocol, a schedule of events with specific visit windows, and often randomized or protocol-assigned treatment. The FDA or EMA may mandate it as a Post-Marketing Requirement, a study or clinical trial required as a condition of drug approval, typically to confirm clinical benefit or address a safety signal identified before approval.[1] For drugs approved through accelerated pathways, failure to complete confirmatory post-marketing studies can trigger regulatory proceedings that may ultimately lead to withdrawal of marketing authorization.[1]

A real-world evidence study follows patients as they are naturally seen in clinical practice, under standard of care. The study does not introduce a treatment as part of the study design. That is the definitional boundary between a clinical trial and an observational study under international GCP standards.[2] The moment you assign a patient to a treatment as part of the study protocol, you have crossed into trial territory. One important nuance: pragmatic clinical trials often look observational in practice, because they allow flexibility in how care is delivered and may draw on routine data. They are still interventional by design, because treatment assignment is part of the protocol.

Two questions, two designs

The clearest way to distinguish Phase 4 from RWE studies is through the question each is built to answer.

Phase 4 asks about efficacy and safety: does this drug work under controlled conditions, in a defined population, measured against a protocol-prescribed endpoint, and what safety signals emerge under those conditions?[3] Participants in a Phase 4 study know they are in a study. Their visits, labs, and assessments are scheduled and tracked according to a rigid protocol. Every data point was planned for in advance.

An RWE study asks about effectiveness and tolerability: how does this drug actually perform when patients are seen as they would normally be seen, without study-imposed visits or procedures, and how well do they tolerate it over time in real clinical practice?[3] The difference between those two questions runs through every design decision that follows.

Four dimensions separate the typical Phase 4 study from the typical RWE study:

| Dimension | Phase 4 | RWE study |
|---|---|---|
| Primary question | Efficacy | Effectiveness |
| Safety characterization | Safety under controlled conditions | Tolerability in real-world clinical practice |
| Patient population | Homogeneous (protocol-defined eligibility criteria) | Heterogeneous (broad clinical practice, fewer exclusions) |
| Study design | Controlled (interventional) | Observational |

This distinction also matters commercially. A drug can clear every Phase 4 commitment and still face skepticism from payers who want to know what the outcomes look like in the actual patient population they cover. That question can only be answered with real-world evidence.

A practical test: Ask whether the study introduces a medical intervention as part of the protocol. If yes, it is a clinical trial regardless of where it sits in the development timeline. If no, and patients are observed under standard of care, it is an observational study.

What RWE studies actually look like

Knowing what RWE is not — a clinical trial — only gets you so far. The more useful question is what it actually is in practice. Real-world evidence is not a single study type. The programs that fall under that umbrella differ significantly in design, regulatory standing, and what they can credibly demonstrate. Understanding those differences is what turns “we should run some RWE” from a vague intention into a fundable, stakeholder-specific program.

Post-Approval Safety Studies (PASS) are among the most common. Mandated by EMA under its formal PASS framework and required by FDA under its Post-Marketing Requirements structure, these studies collect long-term safety data on approved drugs in broader populations than were studied in clinical trials.[1][4] Pregnancy registries are a well-known example: women of childbearing potential are enrolled to track fetal exposure outcomes over time, in patients receiving an approved medication as part of their normal care. The data is observational and uncontrolled by design, and that is precisely what makes it informative for long-term safety surveillance in real patient populations.

Post-Authorization Effectiveness Studies (PAES; EMA’s formal name for the category is post-authorisation efficacy studies) are observational studies required or recommended by EMA after approval to characterize how a medicine performs under real-world conditions. Where PASS addresses safety, PAES addresses effectiveness: how does the drug actually perform across the broader patient population that receives it outside a trial protocol? PAES data directly bridges the gap between efficacy measured in controlled trials and effectiveness in clinical practice, and it is increasingly cited as part of the market access dossier.[4]

Natural history studies document the course of a disease without any intervention. They can be prospective, enrolling participants and following them forward in time, or retrospective, drawing on data already captured in existing medical records. In rare disease drug development, natural history studies often run before or alongside Phase 1 and 2 clinical trials. They answer a question no randomized trial can: what happens to patients with this condition if you do not intervene? That data informs endpoint selection and helps sponsors identify outcomes that are both measurable and meaningful to patients. In some cases, it supports the development of novel endpoints grounded in patient experience, which is relevant to FDA’s patient-focused drug development program.[5][6]

Health Economics and Outcomes Research (HEOR) studies use real-world data to examine the economic and clinical value of a treatment in clinical practice. They capture outcomes including costs, resource utilization, quality of life, and productivity, in patient populations that reflect routine care rather than trial eligibility criteria. Payers increasingly require HEOR evidence as part of the reimbursement and formulary review process, making it an integral part of the evidence generation strategy for most launched products.

External Control Arms (ECAs) use patient data from outside the study as a comparator group in lieu of a concurrent randomized control. The data may come from electronic health records, registries, or prior clinical studies. FDA’s 2023 draft guidance on externally controlled trials addresses this approach and outlines conditions under which it may be appropriate when a concurrent randomized control arm is not feasible.[7]

| Study type | Design | Primary question | Typical use |
|---|---|---|---|
| Phase 4 clinical trial | Interventional, prospective protocol | Does it work (efficacy) under controlled conditions? | Confirmatory PMR, label expansion |
| PASS / post-marketing safety study | Observational, prospective or retrospective | Is it safe in the real-world patient population? | Safety surveillance, regulatory commitment |
| PAES / post-authorization effectiveness study | Observational, prospective or retrospective | Is it effective in real-world clinical practice? | Effectiveness evidence, market access support |
| Natural history study | Observational, longitudinal (prospective or retrospective) | What happens to patients without intervention? | Endpoint development, rare disease, pre-trial planning |
| HEOR study | Observational, typically retrospective | What is the economic and outcomes value in clinical practice? | Reimbursement dossiers, formulary decisions, market access |
| Externally controlled trial | Single-arm trial with external comparator | Does it work vs. real-world comparator patients? | Rare disease, small populations, when randomization is not feasible |

Where Phase 4 and RWE genuinely converge

Most of those study types sit clearly on one side of the interventional/observational line. There is a small category of design approaches where Phase 4 methodology and real-world data genuinely meet. These are deliberate, well-defined choices that draw on real-world data to address specific constraints, not evidence that the distinction between trials and observation has blurred.

Synthetic Control Arms represent the clearest convergence point. A Synthetic Control Arm takes the External Control Arm concept further. Where an ECA draws directly from a real-world patient cohort to create a historical comparator, a Synthetic Control Arm uses advanced statistical methods to construct a comparator group from real-world patient-level data, creating what is sometimes described as a digital twin of the treated population. The comparator is not a group of actual patients who received standard of care alongside the treatment group. It is statistically derived from patient-level real-world data to approximate what that group would have looked like. FDA maintains significant methodological scrutiny over these approaches, and they are appropriate in specific, well-defined circumstances.

The relevant scenarios span multiple phases of development. In Phase 2 proof-of-concept work, a synthetic control arm can generate early efficacy signals in rare or pediatric populations without exposing a control group to an investigational agent when early safety data is still limited. In Phase 3, the clearest cases involve terminal illness, rare disease, or pediatric settings where the ethical or practical barriers to randomization are high and a real-world comparator can credibly substitute for a concurrent control arm. In Phase 4, synthetic controls appear most often in indication expansion programs, long-term safety assessments, and comparative effectiveness work, where sponsors need to generate evidence on new populations or endpoints without running a new full-scale controlled trial. They are not a general alternative to randomization, and FDA’s 2023 draft guidance is explicit about the methodological standards required for this evidence to be accepted.[7]
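The statistical machinery varies, but the simplest member of this family is propensity-score selection of external comparators. A minimal sketch assuming scikit-learn is available; this is a generic illustration of the matching step, not any sponsor’s methodology or an FDA-endorsed recipe, and a real submission would add covariate balance diagnostics and sensitivity analyses:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_external_controls(treated_X: np.ndarray,
                            rwd_X: np.ndarray,
                            n_per_treated: int = 1) -> np.ndarray:
    """Pick real-world patients whose propensity scores best match the
    treated cohort: the core selection step behind an external (and,
    with further modeling, synthetic) control arm."""
    X = np.vstack([treated_X, rwd_X])
    y = np.array([1] * len(treated_X) + [0] * len(rwd_X))
    # Propensity score: P(treated | baseline covariates)
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    treated_ps, rwd_ps = ps[:len(treated_X)], ps[len(treated_X):]

    chosen, available = [], set(range(len(rwd_X)))
    for p in treated_ps:  # greedy 1:k nearest matching without replacement
        for _ in range(n_per_treated):
            if not available:
                break
            best = min(available, key=lambda i: abs(rwd_ps[i] - p))
            chosen.append(best)
            available.remove(best)
    return np.array(chosen)  # row indices into rwd_X
```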

Technology: same appearance, different requirements

Phase 4 and RWE studies make fundamentally different demands on data infrastructure. Both rely on electronic data capture systems and both are moving toward more direct engagement with patients through ePRO and eCOA solutions. But what each program needs from those systems reflects the underlying difference between a controlled trial and an observational study.

Phase 4 needs protocol enforcement. The system has to support a rigid schedule of events, flag missed or out-of-window visits, and maintain the audit trail and data integrity requirements of GCP. eSource integration, where EHR data flows directly into the trial database, is supported by FDA guidance and increasingly used to reduce manual transcription and accelerate data collection, though adoption remains uneven across sites and regions.[8]
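As a sketch of what protocol enforcement means at the data layer (the symmetric window and function name are illustrative, not a specific EDC’s behavior):

```python
from datetime import date, timedelta
from typing import Optional

def visit_window_status(scheduled: date, actual: Optional[date],
                        window_days: int) -> str:
    """Flag visits against a rigid schedule of events. Real protocols
    define per-visit windows, and each flag lands in the audit trail."""
    if actual is None:
        late = date.today() > scheduled + timedelta(days=window_days)
        return "missed" if late else "pending"
    if abs((actual - scheduled).days) <= window_days:
        return "in_window"
    return "out_of_window"

# A Week 4 visit with a +/-3 day window, completed 5 days late:
print(visit_window_status(date(2026, 3, 2), date(2026, 3, 7), 3))  # out_of_window
```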

RWE studies need flexibility. Patients in an observational study do not follow a schedule prescribed by the study. They see their doctor when they see their doctor, and the data system has to accommodate natural variance in visit timing, unscheduled encounters, and, in retrospective studies, data entry from existing medical records. An EDC built for Phase 3 protocol rigidity will create friction for teams running a PASS or a registry study, because the study is designed around how patients actually live, not around a visit window.

Federated data networks represent a different infrastructure model built specifically for RWE data collection. In a federated network, patient data never leaves the institution that holds it. Queries go out to partner sites, analysis runs locally at each site, and only aggregate results return to the coordinating center. No patient-level data is transferred or pooled centrally. FDA’s Sentinel System is the clearest regulatory example at scale: it has operated as a full active surveillance network since 2016, spanning dozens of data partners covering hundreds of millions of covered lives across the US, without patient-level data ever leaving the institutions that hold it.[9]
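The mechanics are easy to sketch. Each site runs the same summary locally and ships back counts only; the coordinating center never sees a patient-level row. A schematic of the pattern, not Sentinel’s actual query protocol:

```python
def local_summary(site_records: list[dict], drug: str, event: str) -> dict:
    """Runs inside each institution; only aggregate counts leave the site."""
    exposed = [r for r in site_records if drug in r["medications"]]
    with_event = [r for r in exposed if event in r["diagnoses"]]
    return {"exposed_n": len(exposed), "event_n": len(with_event)}

def pooled_event_rate(site_summaries: list[dict]) -> float:
    """Coordinating center combines per-site aggregates; no patient-level
    data is ever transferred or pooled."""
    exposed = sum(s["exposed_n"] for s in site_summaries)
    events = sum(s["event_n"] for s in site_summaries)
    return events / exposed if exposed else 0.0

summaries = [
    local_summary([{"medications": {"drug_x"}, "diagnoses": {"event_y"}}],
                  drug="drug_x", event="event_y"),
    {"exposed_n": 120, "event_n": 3},  # aggregate received from another site
]
print(pooled_event_rate(summaries))    # ~0.033
```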

For both decentralized clinical trials and observational studies, the direct-to-patient model is gaining relevance. In RWE especially, the case is straightforward. Pregnancy registries have always needed to reach patients wherever they are, not only at academic medical centers. Oncology and rare disease follow the same logic: patients are often geographically dispersed, often managing complex treatment regimens, and collecting their data from home reduces burden and improves long-term retention.

The strategic frame for post-approval programs

Most sponsors running post-approval programs are operating on both tracks at the same time. A Phase 4 study satisfies a regulatory commitment. RWE studies build the effectiveness story for payers, medical affairs, and long-term label development. The programs serve different stakeholders and answer different questions.

What makes them work together is treating them as exactly what they are: separate programs with separate design requirements. The data strategy, the technology infrastructure, the endpoint selection, and the team running each study all need to reflect the fundamental difference between what a Phase 4 study can prove and what a real-world evidence study can demonstrate.

A Phase 4 study that drifts toward observational methods undermines the clinical trial logic that gives its results regulatory standing. An RWE study forced into a clinical trial framework collects data that no longer reflects how patients actually live. Both programs have real value, but only when they are designed for the questions they are actually built to answer.

Castor supports Phase 4 and real-world evidence study programs with purpose-built data capture designed for the specific requirements of each study type.

See how Castor supports RWE studies

Frequently asked questions

What is the difference between a Phase 4 study and a real-world evidence study?

Phase 4 is a post-approval clinical trial. It involves a prospective protocol, defined visit schedules, and is typically mandated by FDA or EMA as a Post-Marketing Requirement (PMR) to confirm clinical benefit or address a safety signal. A real-world evidence study is observational: it follows patients under standard of care, without introducing a medical intervention as part of the study. Phase 4 measures efficacy and safety under controlled conditions, in a homogeneous, protocol-defined population. RWE studies measure effectiveness and tolerability in real clinical practice, across heterogeneous patient populations that reflect how the drug is actually used.

Can real-world evidence replace a Phase 4 clinical trial?

In most cases, no. Where FDA or EMA has mandated a Phase 4 study as a Post-Marketing Requirement, that commitment specifies a study meeting defined design criteria. RWE can supplement the evidence base, and FDA has accepted real-world data in certain regulatory contexts, particularly for externally controlled trials in rare disease or small populations. A confirmatory Phase 4 study required under accelerated approval cannot be replaced by an observational study.

What are the main types of real-world evidence studies?

The main types include Post-Approval Safety Studies (PASS), which track long-term safety in broader patient populations; Post-Authorization Effectiveness Studies (PAES), which characterize real-world effectiveness after approval and are increasingly required by EMA; natural history studies, which document disease progression without intervention and are particularly valuable in rare disease; Health Economics and Outcomes Research (HEOR) studies, which examine costs, resource utilization, and quality-of-life outcomes for payer and market access purposes; disease and drug registries; retrospective chart review studies; and studies using External Control Arms, where real-world patient data serves as the comparator group in lieu of a concurrent randomized control arm.

What is the difference between an External Control Arm and a Synthetic Control Arm?

An External Control Arm (ECA) uses patient data from outside the study as the comparator group, drawing on electronic health records, registries, or prior clinical studies. A Synthetic Control Arm takes this concept further: it uses advanced statistical methods to construct a comparator group from real-world patient-level data, creating what is sometimes described as a digital twin of the treated population. The comparator is statistically derived rather than drawn directly from a real patient cohort. Both approaches are subject to significant FDA methodological scrutiny and are generally appropriate only in rare disease or small patient populations where randomization is not feasible. FDA’s 2023 draft guidance on externally controlled trials addresses the standards both require.

References

  1. U.S. Food and Drug Administration. Postmarketing Studies and Clinical Trials. FDCA Section 505(o)(3). Consolidated Appropriations Act of 2023, Section 3210, which expanded FDA authority to initiate expedited withdrawal proceedings for accelerated approval products that fail to verify clinical benefit in confirmatory post-marketing studies.
  2. International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH). ICH E6(R3) Guideline for Good Clinical Practice. 2025. Defines clinical trial and establishes the distinction between interventional and observational research.
  3. U.S. Food and Drug Administration. Framework for FDA’s Real-World Evidence Program. December 2018. FDA Center for Drug Evaluation and Research. Addresses the distinction between efficacy measured in controlled trial settings and effectiveness measured through real-world data.
  4. European Medicines Agency. Post-Authorisation Safety Studies (PASS) and Post-Authorisation Efficacy Studies (PAES). EMA Pharmacovigilance and Regulatory Science framework. Available at ema.europa.eu.
  5. U.S. Food and Drug Administration. Rare Diseases: Natural History Studies for Drug Development. FDA Draft Guidance, March 2019. FDA Center for Drug Evaluation and Research / Center for Biologics Evaluation and Research / Center for Devices and Radiological Health.
  6. U.S. Food and Drug Administration. Patient-Focused Drug Development: Incorporating Clinical Outcome Assessments into Endpoints for Regulatory Decision-Making. FDA Guidance for Industry, 2022. CDER/CBER/CDRH.
  7. U.S. Food and Drug Administration. Considerations for the Design and Conduct of Externally Controlled Trials for Drug and Biological Products. FDA Draft Guidance, February 2023. CDER/CBER.
  8. U.S. Food and Drug Administration. Use of Electronic Health Records in Clinical Investigations. FDA Draft Guidance, 2023. CDER.
  9. U.S. Food and Drug Administration. FDA’s Sentinel System. FDA.gov. The full Sentinel System has operated as an active surveillance network since 2016, spanning dozens of data partners covering hundreds of millions of covered lives in the US.


ICH E6(R3) is here: what your centralized monitoring strategy needs right now

February 24th, 2026

You can outsource every operational function. You cannot outsource the accountability.

ICH E6(R3) is not approaching. It is in effect. Finalized by ICH in January 2025, adopted by the EMA in July 2025, and published as FDA final guidance in September 2025, the updated Good Clinical Practice guidance formally codifies what was once considered best practice into hard regulatory expectation: Quality by Design (QbD) built into protocol development, centralized monitoring as a formally recognized component of trial oversight, and explicit sponsor accountability that follows the study, not the service contract.

On February 19, Castor hosted Practical ICH E6(R3) Oversight for Your Centralized Monitoring Strategy, a live webinar exploring E6(R3) implementation for Phase 4 and real-world evidence programs that drew 360 clinical research professionals. That number is a signal: the industry is not just aware of these changes, it is urgently looking for practical answers.

Chief Product Officer Lisa Charlton and Director of Delivery Engineering Connor Ladly Fredeen delivered those answers, along with a live platform demo that generated more questions than the session had time to answer.

What E6(R3) actually demands

R3 builds on R2 but goes further. Where R2 introduced risk-based thinking as a concept, R3 embeds it as a structural requirement throughout the entire guideline framework. Sponsors must now define Critical-to-Quality (CtQ) factors at protocol design, set pre-specified Quality Tolerance Limits (QTLs) tied to those factors, and demonstrate continuous, documented monitoring against them. Key Risk Indicators (KRIs), the industry-standard operational complement to QTLs, operate at the site level to surface localized performance issues in real time.

The governing structure runs from CtQ factors to QTLs to documented oversight. On centralized monitoring specifically, E6(R3) Annex 1 (Section 3.11.4.2) formally recognizes it as a core and legitimate oversight approach. The guideline is deliberately flexible, requiring sponsors to implement a risk-proportionate monitoring strategy that may combine on-site, remote, and centralized methods based on trial-specific risks. Traceability across that process is not optional.
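To make that structure concrete, here is a minimal sketch of how a site-level KRI check and a study-level QTL check might be expressed in code. This is an illustration only, not Castor's implementation: the thresholds and field names are hypothetical, and real QTLs are pre-specified in the protocol from the study's CtQ factors.

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration; real QTLs are pre-specified
# per protocol from the study's Critical-to-Quality factors.
QTL_SCREEN_FAILURE_MAX = 0.30   # study-level Quality Tolerance Limit
KRI_QUERY_RATE_MAX = 0.15       # site-level Key Risk Indicator threshold

@dataclass
class SiteMetrics:
    site_id: str
    screened: int
    screen_failures: int
    data_points: int
    queries: int

def site_kri_flags(sites: list[SiteMetrics]) -> list[str]:
    """Flag sites whose query rate exceeds the KRI threshold."""
    return [
        s.site_id for s in sites
        if s.data_points and s.queries / s.data_points > KRI_QUERY_RATE_MAX
    ]

def qtl_breached(sites: list[SiteMetrics]) -> bool:
    """Check the study-level screen-failure rate against the QTL."""
    screened = sum(s.screened for s in sites)
    failures = sum(s.screen_failures for s in sites)
    return screened > 0 and failures / screened > QTL_SCREEN_FAILURE_MAX

sites = [
    SiteMetrics("NL-01", screened=40, screen_failures=9, data_points=1200, queries=150),
    SiteMetrics("DE-02", screened=25, screen_failures=12, data_points=800, queries=210),
]
print(site_kri_flags(sites))  # ['DE-02']: a localized signal for follow-up
print(qtl_breached(sites))    # True: 21/65 screen failures exceeds the QTL
```

The separation of scope is the design point: a KRI flag prompts site-level follow-up, while a QTL breach is a study-level event that must trigger documented review.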

For a thorough breakdown of the regulatory framework and practical implementation considerations, Castor’s ICH GCP E6(R3) insight brief covers the detail you need before you act.

Understanding what R3 requires is the straightforward part. Finding tools proportionate to your organization’s actual size and risk profile is where the market falls short.

The problem nobody is solving cleanly: the biotech monitoring gap

ICH E6(R3) compliance is not a tiered obligation. The same requirements that apply to a global pharma company with a dedicated Risk-Based Quality Management (RBQM) team apply to a ten-person biotech running clinical trials on a single compound. The tools available in the market to address them, however, were not built with that reality in mind. Lisa Charlton put it plainly:

“The ICH rules apply to everyone, but the tools in the market are fit for purpose for big pharma and enterprise-level support. Sometimes these traditional RBQM tools are sledgehammers to acorns.”

— Lisa Charlton, Chief Product Officer, Castor

 

The Clinical Research Associate (CRA) is ground zero for that burden. Under R3, CRAs are expected to continuously track the KRIs the sponsor has defined: patient enrollment velocity, screen failure rates, data quality signals like query rates, and safety flags like adverse event patterns. The volume of centralized monitoring work is going to increase significantly across all trial types, all sponsors, large and small. For a pharma organization with a dedicated RBQM team, that is manageable. For a single-compound biotech where the CRA is also the clinical operations lead, it is a different problem entirely.

For a ten-person biotech managing one study, deploying a full-scale RBQM platform is not proportionate oversight. It is the operational weight that crushes the teams it is supposed to help. R3 requires a proportionate approach. For some sponsors, that genuinely means a well-documented manual process. For others, the audit burden makes that untenable. What every sponsor needs is something fast to deploy, study-specific, and proportionate to the actual risk profile. The market has largely ignored that distinction.

The guidance is also unambiguous on where responsibility sits, regardless of what you deploy. As Lisa stated during the session:

“Even if you outsource everything to a CRO, you are still responsible for data integrity and participant safety. And for that, you will always need your own view into the data.”

— Lisa Charlton, Chief Product Officer, Castor

 

Castor’s answer: built from the data layer up

Connor walked the audience through the technical foundation: a first-party data layer, built to ALCOA+ principles and validated under Castor’s formal SDLC, that unifies event streams from Electronic Data Capture (EDC), electronic Patient-Reported Outcomes (ePRO), eConsent, and randomization into a single auditable source of truth. On top of that sits a custom, study-level dashboard, with each metric annotated to specific ICH E6(R3) sections and backed by human-readable specifications that make traceability demonstrable, not assumed.

The core architectural distinction Connor drew is that this is not an AI agent dropped on top of existing reporting infrastructure. The data layer, the specifications, and the agentic interface are built together from the ground up on the study itself. That matters for auditability, and it matters for regulatory defensibility in a way that bolt-on tools cannot replicate.
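As a simplified illustration of what building from the data layer up implies, the sketch below shows one normalized, queryable event shape. The field names are invented for this example, not Castor's schema; the point is that EDC, ePRO, eConsent, and randomization events all map into a single source of truth that dashboard metrics can be traced back to.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class StudyEvent:
    """One normalized record in a unified event stream (illustrative)."""
    source: str          # "EDC" | "ePRO" | "eConsent" | "randomization"
    participant_id: str
    event_type: str      # e.g. "form_saved", "survey_completed"
    payload: dict
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

events = [
    StudyEvent("eConsent", "P-001", "consent_signed", {"version": "2.1"}),
    StudyEvent("ePRO", "P-001", "survey_completed", {"survey": "EQ-5D-5L"}),
]

# A dashboard metric annotated to an E6(R3) section then reduces to a
# transparent query over the same auditable stream:
epro_completions = [e for e in events if e.event_type == "survey_completed"]
```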

 

What the live demo actually showed

The most forward-looking moment of the session was the live demonstration of QueryLab, Castor’s agentic AI interface built directly on the data layer. Connor asked a plain-language question (“Show me the correlation between enrollment speed and number of queries”) and received a step-by-step human-readable explanation of the underlying logic alongside the full machine-readable code. Every output is auditable. It can be pinned to a dashboard. The logic can be reviewed and independently checked without relying on the system to validate its own work.

No black boxes. That is the point.
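That transparency is easy to appreciate because the underlying analysis is ordinary statistics anyone can re-run. QueryLab's generated code was not published, so the snippet below is only an approximation of what such auditable output could look like, with illustrative numbers:

```python
# Hypothetical reproduction of "correlation between enrollment speed
# and number of queries": per-site value pairs, Pearson correlation.
from statistics import correlation  # Python 3.10+

enrollment_speed = [4.0, 7.5, 2.0, 9.0, 5.5]  # enrollments/month per site
query_counts =     [12,  30,   6,  41,  18]   # open queries per site

r = correlation(enrollment_speed, query_counts)
print(f"Pearson r = {r:.2f}")  # reviewable logic; no black box
```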

The demo also showed deep linking: one click from a flagged protocol deviation in the dashboard directly into the specific participant record in the EDC. The Q&A that followed went long. Attendees wanted to know how far this goes.

The central question in the room was one that every compliance-focused sponsor is quietly asking right now: can we build a monitoring infrastructure that satisfies E6(R3) without the overhead of tools built for a different scale of organization? The answer the session demonstrated is yes. What it takes to get there is worth seeing firsthand.

 

Frequently asked questions

These are real questions submitted by attendees during the live session.

 

Is there an audit trail for AI-generated insights? Can AI-generated interpretations be disabled in certain regulated environments?

QueryLab is currently in proof-of-concept form, as Connor noted explicitly during the session. That said, every query and action is captured in the audit trail, and the AI’s underlying logic is surfaced as both human-readable specifications and machine-readable code, so any output can be reviewed and verified. The feature can be disabled in environments where it has not yet been formally validated for production use.

Can the data be owned or housed in our cloud versus Castor’s?

Data is currently housed in Castor’s cloud environment, which spans multiple server locations globally to meet varying privacy and encryption requirements. Private server arrangements can be discussed depending on sponsor needs.

Is the dashboard and QueryLab usable in studies that have already been running for years?

The unified data layer is already available across active studies. Building the dashboard is a structured custom development effort. It requires gathering study-specific human inputs to produce the human-readable and machine-readable specifications that define each metric. It is not a feature flag. It is a deliberate engagement.

Can Castor integrate via API with existing TMF or CTMS software?

Yes. Castor is an API-forward platform, and the unified data layer is accessible via API. Integration with existing TMF and CTMS systems can be scoped based on your stack.
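As a rough sketch of what such an integration could look like on the consuming side, the snippet below pulls study events for a downstream TMF/CTMS sync job. The host, endpoint path, and parameters are placeholders, not Castor's documented API; scope the real integration against the vendor's API reference.

```python
import requests

BASE_URL = "https://api.example-edc.com/v1"  # placeholder host, not a real endpoint
TOKEN = "..."                                 # obtained via your auth flow

def fetch_study_events(study_id: str, since: str) -> list[dict]:
    """Pull events recorded after `since` for one study (illustrative)."""
    resp = requests.get(
        f"{BASE_URL}/studies/{study_id}/events",
        params={"since": since},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# A sync job would page through these events and post them to the
# CTMS's own intake endpoint under the same change-control process.
```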

Can this solution be used at the sponsor level to filter and manage action items across internal teams?

The dashboard’s task management view surfaces site-level risks and required actions. Customization for specific sponsor personas and team-level filtering is defined during the requirements-gathering phase when building the dashboard.

Is it possible to implement an eCRF designed by a different CRO within Castor’s EDC?

Castor supports standard eCRF designs with built-in flexibility. The dashboards and QueryLab shown in the webinar sit on top of Castor’s unified data layer, so study data would need to flow through the Castor platform for those features to function.

Is there an approval step before changes go live?

Yes. All changes follow the standard SOP-governed process: design, build, and test. Mid-study updates follow the same change control framework required by the guidance.

Who creates the unified data layer?

The unified data layer is a Castor infrastructure investment developed over several years by the Castor engineering team. Sponsors do not build or configure it. It is the foundation on which study-specific dashboards are built.

On-site ePRO in Action: A Recap of Castor’s Product Spotlight

September 23rd, 2025 by

Remote ePRO on desktop and mobile has been part of Castor for years, but in recent customer conversations one ask kept coming up: can we extend our current assessment solution to a controlled, on-site setting?

 

And that’s what we built. We extended that remote functionality into our core platform so clinicians can access it directly and capture participant data in person, with the same flexibility and compliance as remote ePRO.

 

Our recent Product Spotlight with our product experts Christian (Product Manager) and Dualtagh (Manager Solutions Consulting) detailed exactly how the solution works. But in case you missed it or would like a recap, below is an overview.

 

Flexible data capture for sites and patients

 

We designed our on-site ePRO to reduce burden on sites without relying on dedicated hardware. The functionality is a modular extension of our existing ePRO solution: it doesn’t require an extra app or device, and entries flow into your CDMS alongside your other ePRO data.

 

After starting the on-site session, staff can hand over their device or display a QR code for the participant to continue on theirs. If time in the clinic runs short, progress is saved and the participant can finish up remotely—no duplicate records, no re‑entry. You can switch modes at any time and preserve progress.

 

“The whole point is that there are different completion options with our on-site ePRO. We know that sites often have a pile of devices at study sites,” Christian explains. “Crucially, our solution is device agnostic. It ultimately scales and flexes to the device that you’re using and that you already have rather than adding yet another thing to that pile of devices at your site.”
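Conceptually, the handoff works because there is one saved session per participant, visit, and questionnaire, and the completion mode is just an attribute of that session. The sketch below models the idea; the class and field names are illustrative, not the product’s internals.

```python
from enum import Enum

class Mode(Enum):
    SITE_DEVICE = "site_device"
    PARTICIPANT_DEVICE = "participant_device"  # via QR code
    REMOTE = "remote"                          # emailed link backup

class EproSession:
    """One session per participant/visit/questionnaire, so switching
    devices resumes the same record instead of creating a duplicate."""
    def __init__(self, participant_id: str, visit: str, questionnaire: str):
        self.key = (participant_id, visit, questionnaire)
        self.answers: dict[str, str] = {}
        self.mode = Mode.SITE_DEVICE

    def save_answer(self, question: str, value: str) -> None:
        self.answers[question] = value   # auto-saved, synced to the CDMS record

    def switch_mode(self, new_mode: Mode) -> None:
        self.mode = new_mode             # progress is preserved across modes

session = EproSession("P-001", "baseline", "EQ-5D-5L")
session.save_answer("mobility", "no problems")
session.switch_mode(Mode.PARTICIPANT_DEVICE)  # QR handoff; same answers, same record
```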

 

For studies

 
Our On-site ePRO allows for direct, enhanced data capture at the site from first patient in (FPI). Collecting those ePROs on site at baseline results in fewer gaps before intervention.
 

“Missing, inconsistent, or poor-quality ePRO data—particularly at baseline—ultimately jeopardizes trial endpoints,” Christian recognizes. “And we all want to avoid that ‘missed data on the patient clipboard in the lobby’ syndrome, where they get given the clipboard, enter only parts of the data, then leave, and have to come back or be brought back at another point to enter that data.”

 

For patients

 

If a participant prefers their own device—even in a controlled site environment—we let them use it. Participants choose what’s comfortable in the moment, clinic device or their own, while keeping the option to finish later without starting over.

 

“It’s much more flexible, it’s kind of part of that broader level of support for decentralized and hybrid trials—complementing the remote data capture and creating that seamless data continuum,” Christian says. “So no matter where they are, no matter what devices they have, no matter what point in the trial they’re at, we can still capture that data safely, securely, and consistently.”

 

For data managers

 

On-site ePRO ensures standardized capture under controlled conditions, improving reliability for regulatory review and, in turn, overall compliance tracking. Sites can immediately verify data entry, reduce lag, reduce dropout, and have the data sit alongside all other study data in the same consistent compliance report.

 

Christian concludes, “I think one of the biggest benefits of the module is how wonderfully simple it is. It kinda just works, and leverages our existing assessment technology.”

 

How it works

 

As a site user, you can access the solution via a button in the existing platform, opening up a link in the browser that you can bookmark, or add as an icon on your tablet.

 

After opening the module, you’ll log in and be presented with the on-site administration page, where you select the participant, visit, questionnaire, and the participant’s language. Next you’ll verify the participant’s identity and select how to administer the survey.

 

You’ll be presented with two options: 

 

Using the site device

 

As a participant using the site device, you’ll be presented with a clean and simple interface. You can navigate through the questions, and a progress bar on the left will indicate the completion percentage.

 

“While I’m completing the survey as a participant, that data is auto saving immediately,” Dualtagh explains. “It’s syncing back with the participant’s record within our overarching CDMS. So it’s making sure that data is immediately available for the site to review as well. At the site, and against the record.”

 

When the participant is finished, they confirm that they’ve completed their responses and are presented with instructions for returning the device. When they confirm and hand the device back, the site user is logged out to prevent the participant from seeing any site information.

 

The site user can then log back in and will be taken to the administration page again, where they can select the next participant and move forward.

 

Using the participant’s device

 

When using the participant’s own device, they’ll scan the QR code presented on the site device. 

 

Just like with the site device, the participant is presented with a clean and simple interface they can navigate, and the data auto-saves and syncs back to their record.

 

“It’s just another way in which we can provide that little bit of extra flexibility, for data capturing scenarios where you don’t have that site-based device available,” Dualtagh highlights.

 

Using remote backup

 

When you use any of our remote backup options, the participant is emailed a link to the questionnaire where they can pick up where they left off.

 

Tracking compliance

 

Within our CDMS, the compliance dashboard gives you a quick overview of overall compliance across participants in the study. You can use filters to drill down into compliance over the last 7 days, 30 days, or all time, or, for example, hide the 100% compliance entries.

 

You can use more detailed filters to drill down into data, for example based on specific site statuses, or a compliance percentage window.

 

You can also look at the specific surveys that have been sent, and identify the participants to follow up with to maintain good compliance across your study.
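The filtering logic behind a dashboard like this is straightforward to express. Here is a minimal sketch, with invented field names rather than Castor’s data model, that hides fully compliant participants and surfaces those needing follow-up within a time window:

```python
from datetime import datetime, timedelta, timezone

# participant -> surveys sent, surveys completed, last activity (illustrative)
participants = {
    "P-001": {"sent": 10, "done": 10, "last": datetime.now(timezone.utc)},
    "P-002": {"sent": 8, "done": 5,
              "last": datetime.now(timezone.utc) - timedelta(days=12)},
}

def compliance(p: dict) -> float:
    return p["done"] / p["sent"] if p["sent"] else 1.0

def needs_follow_up(window_days: int = 30, below: float = 1.0) -> list[str]:
    """Participants under the compliance threshold with activity in the
    window; the default hides the 100% entries."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    return [
        pid for pid, p in participants.items()
        if compliance(p) < below and p["last"] >= cutoff
    ]

print(needs_follow_up(30))  # ['P-002']: 62.5% compliant, active in the window
```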

 

“And this really just goes alongside some of our broader functionality for patient reported outcomes,” Dualtagh says. “It sits nicely alongside things like our patient reminders and different notifications for different modalities. So, the ability to remind patients via SMS, via web, via WhatsApp, and these different means to keep them engaged.”

 

Find out more about Castor’s On-site ePRO

 

Want to know more? We’re happy to answer questions or get you set up.

 

For existing customers, studies, and researchers:

 

The On-site ePRO module can be activated by your account manager. Contact them directly or email [email protected].

 

For new customers, studies, and researchers:

 

Contact the Castor team here to get started with Castor ePRO, or email [email protected].

 

Of course, you can also watch the webinar here

Today’s Challenges for Digital Therapeutics

January 5th, 2022 by

DTx growth transcends old barriers, presents new challenges

Recent years have seen transformative technological advances, driven partly by the urgency of the COVID-19 pandemic. The need to evolve has affected many industries, Digital Therapeutics (DTx) included.1 DTx manufacturers face fresh challenges in completing clinical trials and commercializing their products. Thankfully, with careful planning and help from the right allies, DTx manufacturers can adapt successfully.


DTx are evidence-based software programs that allow patients, and remotely their care teams, to prevent, manage, or treat a medical disorder or disease.2 DTx usually focus on chronic and behavior-modifiable conditions—everything from diabetes to insomnia to substance use disorders. DTx push the boundaries of what is possible when healthcare meets tech. For example, Renovia’s leva®, which received FDA Breakthrough Device Designation as a first-line treatment for chronic fecal incontinence, may provide more effective relief than traditional interventions.3 Like other medical interventions, such as medications and medical devices, DTx undergo rigorous testing for approval and use.

DTx market expanding

Grand View Research’s recent report on the DTx market projects expansion at an astonishing 23.1% compound annual growth rate from 2021 to 2028. The following factors can explain this growth:4

  1. As awareness of DTx grows, patients, providers, and payers are now accepting them as valid treatment options.
  2. The pandemic has highlighted humanity’s need for mental health services and convenient and accessible digital health solutions. 
  3. Pandemic-generated urgency changed the pace of regulatory approval. Regulatory requirements were suddenly widened to accommodate new, higher-tech approaches to research and medicine, and these revisions may speed up overall regulatory support for next-generation medicine. 
  4. Increased smartphone usage across the globe means more access to DTx and remote healthcare.    

Emerging challenges

New challenges have replaced previous woes despite growing acceptance and the healthcare industry’s increasing demand for DTx. In May 2021, Castor interviewed Chris Bergman, president of Amalgam Rx, about his thoughts on the future of DTx. Bergman identified previous issues as lack of funding, regulatory ambiguity, and a hesitant market. Current issues, according to Bergman, have shifted to establishing evidence, creating adequate payment and business models, and effectively increasing distribution and scale. A few years ago, DTx were scrambling to navigate regulations and establish themselves as valid healthcare options. Today they are making changes to improve growth and prove efficacy.5

Planning for new challenges

DTx manufacturers can meet today’s challenges through careful planning during the development stage. According to Bergman, DTx manufacturers do well to weigh evidence generation, payment and business models, and distribution at scale before commercializing their products.

Another way to meet new challenges is through strategic alliances. Data management platforms, such as Castor, can fill gaps in DTx manufacturers’ experience in trial development, security, and management. Utilizing innovative tech in clinical trials saves time and money, protects patients’ data, and contributes to trial success—a must in today’s healthcare scene.

The COVID-19 pandemic brought unforeseen changes to the DTx market. Initial challenges such as payer adoption, patient acceptance, and (even) regulatory ambiguities no longer stand at the forefront of challenges for DTx manufacturers. Instead, manufacturers have to deal with how to prove efficacy and ensure product distribution at scale with proper reimbursement. Investing in trial tech, such as Castor products, will help DTx manufacturers meet these challenges. 

 

1Llopis G. Digital Therapeutics are accelerating personalization in healthcare. Forbes. https://www.forbes.com/sites/glennllopis/2020/08/09/digital-therapeutics-are-accelerating–personalization-in-healthcare/?sh=34001c2c2176. Published August 9, 2020. Accessed September 3, 2021.
2Understanding DTx. Digital Therapeutics Alliance. https://dtxalliance.org/understanding-dtx. Accessed August 26, 2021.
3Renovia. October 29, 2021. Renovia receives Breakthrough Device Designation for leva® Digital Therapeutic as first-line treatment for chronic fecal incontinence [press release].
4Digital Therapeutics market size & trends report, 2021-2028. Grand View Research. https://www.grandviewresearch.com/industry-analysis/digital-therapeutics-market. Published April 2021. Accessed September 3, 2021.
5The future of digital therapeutics and the impact on care. The Linus Group. https://www.thelinusgroup.com/blog/digital-therapeutics. Accessed September 3, 2021.

Electronic Patient Reported Outcome (ePRO) Measures: Questionnaires & More

November 10th, 2020 by

Patient reported outcome measures in clinical trials have traditionally been collected on paper, often via surveys: questionnaires that allow data to be collected from a predefined sample in a population [1].


What are patient reported outcome measures?

Patient reported outcome measures, or PROMs, are an easy method for measuring a patient’s health status or health-related quality of life. They capture data at moments in time through medical questionnaires that patients complete independently [2].

By filling in the questionnaires, patients directly report how they perceive their symptoms, daily functioning, and general well-being during the study. PROMs therefore record not only the researcher’s observations and interpretations but also the patients’ perspective on their own health.

Why are patient reported outcomes important?

The broad goal of clinical trials is to improve healthcare and its outcomes for the population. By collecting patient reported outcome data, researchers gain direct insight into the frequency and variety of symptoms as well as the disease’s actual impact on daily life. These findings can later be used to close the gap between clinical research and therapy, ensuring patient-centered, high-quality care.

In the past, surveys were administered on paper, which requires tedious administration and logistics and can also pose a security risk for private health information. Thanks to advancements in digital technology, researchers can now easily and securely collect data electronically using tools like ePRO (electronic Patient Reported Outcomes) or eCOA. Patients can complete secure electronic surveys sent via email, saving time, increasing engagement, and requiring less administration. At the moment, more than 26% of studies in Castor use surveys.


Benefits of eCOA / ePRO

  1. Electronic medical questionnaires are easy to distribute:

    A major benefit is a more efficient and streamlined workflow, equating to time saved for researchers and participants. Often, for example, travel time to the clinic for data collection can be a barrier for participants and negatively impact the study, especially when researching rare diseases or small gene pools [3]. However, researchers should use tools designed and built for medical research, both for security and data compliance.

  2. Electronic surveys are cost-effective, requiring minimal research power to reach people and collect data.

    With the correct electronic data capture (EDC) tool, researchers can send patient questionnaires directly from the system and do not need to import or copy data from paper. Well-designed surveys will collect high-quality, relevant research data, but require careful crafting and evaluation of wording and questions [4].

As discussed above, medical questionnaires need to be well crafted to ensure they are valid and reliable. It is also important to select the correct population sample. As with all study designs, surveys can introduce bias as a result of poor responses or non-response (response and non-response bias).

How to create good clinical outcome assessments


For researchers, the challenging task lies in creating a well-designed patient questionnaire that measures what it claims to measure, i.e., one that is valid. External validity is important for the generalizability of the study: are the inclusion and exclusion criteria properly defined, and can the results be applied to a population [4]? Internal validity relates to the robustness of the study: does it have sufficient statistical power, proper control groups, and the randomization and blinding necessary for clinical trial research [4]? The questionnaire should also be reliable, producing consistent results upon repetition [1].

When generating a patient questionnaire, researchers can use closed-ended or open-ended questions. With closed-ended questions, researchers set the range of answers on a scale or a range of tick-boxes [1]. Open-ended questions, or free text, can enrich quantitative data, and researchers will want to plan in advance how this data will be analyzed [1].

Standardized questionnaires can also be used; see the example below of an EQ-5D questionnaire from Kieran Bond of Aridhia [2]. These widely used forms ensure that a high level of validity and reliability is achieved throughout the research.

Example of an EQ-5D Questionnaire in Castor EDC

Using Castor eCOA / ePRO to send medical questionnaires to patients

With Castor eCOA / ePRO you can create complex surveys in minutes, using more than 21 field types, pre-built templates, and validations. You can also reduce time spent on rebuilding surveys from scratch by reusing existing surveys.

You can choose from tried and tested electronic surveys shared by Castor users in the Castor Form Exchange. Standardized forms, for example, those that measure quality of life, can be easily downloaded and re-used.

By using Castor’s automation engine you can increase participant enrollment, retention, and experience through automated patient engagement. You can also easily manage survey participants through bulk invites, automatic triggers, and a dynamic dashboard.

Researchers can schedule surveys and create emailing schedules to distribute patient questionnaires on certain dates or according to a custom timeline.
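Under the hood, scheduling of this kind reduces to computing send dates relative to an anchor visit. A minimal sketch, with illustrative offsets rather than any Castor-specific configuration:

```python
from datetime import date, timedelta

def survey_schedule(baseline: date, offsets_days: list[int]) -> list[date]:
    """Send dates for a questionnaire relative to a baseline visit,
    e.g. day 0, week 4, and week 12 follow-ups."""
    return [baseline + timedelta(days=d) for d in offsets_days]

for send_on in survey_schedule(date(2025, 3, 1), [0, 28, 84]):
    print(send_on)  # 2025-03-01, 2025-03-29, 2025-05-24
```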

 

Clinical data entry is combined with outbound survey invitations sent to study participants via encrypted email addresses. At the push of a button, researchers can send a clinical outcome assessment to hundreds of participants, monitor its status, and see results directly in the study dashboard.


Check out our webinar on how to build surveys in Castor eCOA / ePRO.

 

 

Sources:

  1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC420179/
  2. http://www.aridhia.com/blog/building-trust-and-improving-participation-in-clinical-trials-using-innovative-electronic-data-capture-platforms/
  3. http://www.bmj.com/content/350/bmj.g7818
  4. https://www.bmj.com/about-bmj/resources-authors/article-types

 

Castor Is Committed to Scalable FAIR Data

October 1st, 2018 by

Success in life sciences research is all about transforming research findings into actionable knowledge. In this context, FAIR stands for Findable, Accessible, Interoperable and Reusable data, four critical elements to improve research infrastructure, making it easier for researchers to collaborate, ultimately improving the quality of healthcare in general.

#FAIRdata is a key topic at The Dutch Techcentre for Life Sciences (DTL)’s 2018 Conference, which we are proud to support. DTL provides a helpful description of each of the four elements on their website:

Findable – Data and metadata should be easy to locate, both by humans and by computer systems. Basic machine-readable descriptive metadata enable the discovery of interesting datasets and services.

Accessible – Stored for the long term so that they can easily be accessed and/or downloaded, with well-defined license and access conditions (open access when possible), whether at the level of metadata or at the level of the actual data.

Interoperable – Ready to be combined with other datasets by humans or computers

Reusable – Ready to be used for future research and to be further processed using computational methods

These FAIR principles are perfectly aligned with Castor’s goal of helping “accelerate medical research by unlocking the potential of every byte of research data.” 

Click here if you would like to learn more about the FAIR data specification.

Concerns over data quality and usability

Over the years, as an MD and a researcher myself, I have become more and more concerned about the quality and the (re-)usability of data. In fact, approximately 85% of medical research data is never re-used due to poor data quality, lack of standardization, and inaccessibility to others. I started Castor EDC in 2012 to address these issues and was happy to learn about the FAIR principles, which were published in 2016. This, along with other important initiatives such as the European Open Science Cloud (EOSC), is fostering global data findability and accessibility.

Open Science is an umbrella term for new technologies and a data-driven, systemic change in how researchers work, collaborate, share ideas, and disseminate and reuse results. It is built on a foundation of core values: knowledge should be reusable, modifiable, and redistributable.

The Commission “High Level Expert Group European Open Science Cloud” chaired by Barend Mons has published a first report on how the EOSC can be realized.

You can learn more about DTL’s vision regarding Open Science here.

Incorporating FAIR principles into Castor EDC

At Castor, one of our main goals for the next few years is to become a pioneering player in the field of Open Science. This means we will prioritize the development of data FAIRification within Castor EDC. By allowing researchers to expose their Castor data in a FAIR manner, research data can be shared easily between research projects worldwide.

At the 2016 BYOD hackathon in Leiden, Netherlands, Castor’s CTO, Sebastiaan Knijnenburg, PhD, and I spent three days learning about the FAIR specifications and trying to implement them into Castor. In just three short days we managed to extend our API and transform Castor into a FAIR data point.

Castor attending the 2016 “Bring Your Own Data (BYOD)” FAIR hackathon in Leiden, Netherlands.

We also managed to implement a Resource Description Framework (RDF) endpoint. We added semantic metadata to a Castor study and allowed the export of this study data in the RDF format. Two other software solution providers, OSSE (Open Source Registry System for Rare Diseases in the EU) and RDRF (Rare Disease Registry Framework) also worked on generating FAIR API endpoints for their software. (Learn more about medical device registry studies here.)

As a result, on the last day, data from a case study in all three systems could be queried and analyzed together, even though the original datasets were developed separately and did not share the same structure.
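That cross-system querying is possible because RDF reduces every statement to a subject-predicate-object triple over shared vocabularies. As a small illustration using the open-source rdflib library, where the study URI and metadata values are placeholders, a dataset can be described with machine-readable metadata like this:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
dataset = URIRef("https://example.org/studies/42")  # placeholder study URI

# Machine-readable descriptive metadata: the "F" and "A" in FAIR.
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Example registry study")))
g.add((dataset, DCTERMS.license,
       URIRef("https://creativecommons.org/licenses/by/4.0/")))

print(g.serialize(format="turtle"))
```

This shared form is what lets data from separately built systems be queried together.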

Every dataset should be FAIR

In my view, every dataset in the world should become FAIR, not just those with funding to pay for FAIR data stewardship. This is why Castor is joining forces with several partners, such as DTL, that support Open Science to create an infrastructure that allows researchers to create semantic data models themselves. They can then actually create FAIR data at the source. Once we get this to work for all the studies in our system, FAIR will really start to shine. By enabling FAIR data at scale, researchers can easily make their clinical research data available for the FAIR research community. This way, both humans and computers will be able to search and filter through a dataset on a semantic level.

That said, semantic modeling is an area we can improve, as it is currently very labor intensive and can only be done with the help of experts. I have some ideas on making the creation of FAIR data accessible for everyone, and I will be working on these ideas in the coming years with FAIR scientists from across the globe.

Start small

As beautiful as fully interoperable, machine-readable data are, just the ability to find and access research data globally will make a big difference. Having the FAIR data points available, with a simple Comma Separated Value (CSV) download distribution for instance, will already be a big improvement in the short term.

The ultimate goal is user-created scalable content

We should work together towards enabling user-created scalable FAIR data. I think that would be the key to success. As soon as researchers start to realize the potential of FAIR (like the European Open Science Cloud), it will make a big difference in their attitude towards sharing data.

Furthermore, once people see the immense savings that a standardized data set can make, it could lead to initiatives that can contribute to making valuable medical data universally available.

Going forward

Showing the world how awesome user-created scalable FAIR data is and how useful it can be is a very important first step.

We at Castor have applied for grant funds to enable us to put more effort into working on scalable FAIR data and to demonstrate its overall benefits.

For additional background on Castor and our efforts to support FAIR data, here is a video completed for the 2016 FAIR hackathon:

 

Castor joins forces with EuroQol to facilitate EQ-5D survey usage

June 27th, 2019 by

The EuroQol Research Foundation created EQ-5D to standardize how health-related measures, such as quality of life and other healthcare evaluations, are collected. This initiative is aligned with our quest to standardize medical research. And today, we are thrilled to announce that these EQ-5D modular versions are now available for Castor EDC users!

Under this partnership, Castor users can now make use of pre-made Castor EDC forms for the EQ-5D surveys. This simplifies the process of accessing and sending these surveys by eliminating the need for screenshot review by EuroQol.

The surveys are available in both Dutch and English, for EQ-5D-3L and EQ-5D-5L. Castor users can obtain these surveys by registering their studies on the EuroQol website. For academic studies, Castor EDC’s EQ-5D modules can be used for free (after registration). For commercial studies, a license fee will be charged according to EuroQol’s user policy.

A demo version of the Castor EQ-5D surveys can be found here. Give it a spin and tell us what you think!


Bridging clinical efficacy and real-world effectiveness in digital therapeutics trials

January 25th, 2022 by

How real-world data capture closes the gap

No matter how conclusive clinical evidence is for medical treatments, developers may struggle to see precisely how their treatment performs in the real world—outside the controlled environment of the clinic or office. Do patients take the medication as expected while immersed in busy schedules? How often do they practice the exercises the therapist prescribes?

Unfortunately, medical treatments proven to work in clinical trials do not always measure up in the real world. According to Simon Makin’s article “A smarter way to treat,” one quarter to one half of the global population doesn’t take medications as recommended. In the U.S., this failure is linked to up to 125,000 deaths and up to $289 billion in costs.1 The gap between efficacy and effectiveness is a significant issue impacting many.

Digital therapeutics provide real-world data capture

Enter digital therapeutics (DTx) and their unique ability to capture real-world data remotely. DTx are software-based, evidence-driven treatments for managing, preventing, and treating a wide range of medical conditions or illnesses.2 Unlike conventional medical interventions such as medications or medical devices, DTx can constantly gather data on patients’ involvement and progress—giving a crystal-clear view of how the treatment is working outside of a clinical setting.


DTx capture data via software on patients’ devices or connected wearables. App data includes any biometrics recorded in the app, patient-reported outcomes, patient progress through software modules, and other data stored in the app, such as clinic visits and test results. While more traditional data collection methods are periodic, real-world data collection can be continuous and ongoing.3 When used effectively, the detail captured improves the data quality and helps demonstrate how users interact with the treatment in their everyday lives.4

Products such as Propeller5 and Catalia Health’s Mabu care platform6,7 illustrate DTx real-world data capture in practice.

Tech helps manage real-world data capture

Tech tools help DTx maximize the impact of the real-world data they capture. Technology can assist in every aspect of handling patient data—from collecting responses from digital patient surveys to capturing, processing, and integrating data from varied sources to securely tracking patients’ consent. Software developed specifically for DTx streamlines data capture and management, making it easier for DTx to get products to market and into patients’ hands. DTx real-world data capture can bridge the gap between how well researchers think their treatment functions (efficacy) and how well their treatment performs (effectiveness) outside of a clinical setting. Regular, remote data capture can transform how effective treatments can be and the type of people they can reach.

 


1Makin S. A smarter way to treat. Nature. 2019;573. https://media.nature.com/original/magazine-assets/d41586-019-02873-1/d41586-019-02873-1.pdf. Accessed August 27, 2021.
2Understanding DTx. Digital Therapeutics Alliance. https://dtxalliance.org/understanding-dtx. Accessed August 26, 2021.
3Sverdlov O, van Dam J, Hannesdottir K, and Thornton-Wells T. Digital therapeutics: An integral component of digital innovation in drug development. Clin Pharmacol Ther. 2018;104(1):72-80. doi:10.1002/cpt.1036
4Van Norman G. Decentralized clinical trials: The future of medical product development? JACC Basic Transl Sci. 2021;6(4):384–387. doi: 10.1016/j.jacbts.2021.01.011
5DTx product case study: Propeller. Digital Therapeutics Alliance. https://dtxalliance.org/products/propeller/. Accessed August 26, 2021.
6The Mabu care insights platform. Catalia Health. http://www.cataliahealth.com/platform-ai/. Accessed September 8, 2021.
7Patient-centered solutions. Catalia Health. http://www.cataliahealth.com/solutions/. Accessed September 8, 2021.

A Digital-First Mindset Shift on eConsent

September 26th, 2023 by

eConsent plays a pivotal role in optimizing modern clinical trials, but the nuances of eConsent adoption for pharmaceutical and medical device companies remain a consideration for implementation teams. Castor CEO and founder Derk Arts, MD, PhD, recently sat down with Leanne Walsh of Northern Light Lifescience to talk about the challenges, processes, considerations, and mindset shifts that study teams must weigh when reviewing eConsent for their trials.

From the conversation between Derk and Leanne, as well as the audience Q&A, it’s clear that the question of eConsent isn’t so much of a “Why?” but a “Why not?” As one attendee noted, “This sounds like people [are] using eConsent in exactly the same way as paper though—in the clinic alongside a face-to-face conversation. If it is used as a tool to provide information to potential participants ahead of a clinic meeting or as a follow-up to the clinic, then it could add more value.”

Here are some of the key themes from the discussion:

eSignature vs. eConsent vs. Digital Signature: Are they the same?

The first barrier to eConsent adoption is getting past regulatory uncertainty. On the sponsor and site ends, especially in terms of signature use, the misconception that eConsent is only the eSignature is still very prevalent. eConsent allows for additional sources of information that the participant can review ahead of time; all of this capacity is contained within the eConsent ecosystem and can happen before a visit to the site. Educating study designers and IRBs to understand that the informed consent process is its own ecosystem, rather than just the signature element, is a crucial step toward unlocking the parts of eConsent that go well beyond eSignature.

An electronic signature is always integral to eConsent. However, eSignature requirements vary by country, impacting eConsent adoption. In countries that do not accept eSignature, Castor’s research indicates that a participant can often sign a paper form while on the video call and then mail in that form. Although this method entails paperwork, you retain two key benefits: the eClinical platform tracks the consent status, and participants can access trial information online at any time.

People’s perceptions are different right down to the basics around a signature and digital signature. Are they the same thing? And do they mean the same in the regulations from the FDA versus in Europe? Vendors should be responsible to convey what it means, clarify those terminologies, and make it simple for the study teams to understand.

– Leanne Walsh, Director, Northern Light Lifescience

Read more from the FDA on Informed Consent for Clinical Trials

Digital vs. analog systems: Can eConsent enhance data retrieval over traditional paper processes?

The shift from analog to digital systems in the consenting process is not merely a trend but a necessity. Picture the traditional paper-based consenting process: physical folders, stacks of papers, and the ever-present risk of misplacing a crucial document. This analog approach, although familiar, presents challenges. As the volume of participants grows, so does the paperwork, leading to an increased risk of data loss.

Building on the insights from the webinar, there is an undeniable “wait and see” approach when it comes to adopting new technologies. This hesitancy is often rooted in the challenges of implementing features and the perceived complexities of digital systems.

But let’s debunk a myth: digital systems, especially in the consenting process, are designed for secure data storage. With everything stored electronically, data accessibility becomes seamless. If the study team needs to pull up a specific patient’s consent form, it is only a click away.

What’s stopping sponsors from engineering their way into accessing clinical data is the concern that all the paper will get digitized in some shape or form and stored somewhere that is completely untraceable […] There’s no way a person would break into a digital system, override whatever access they have, and then review data that they are not supposed to have access to. So I think it is unequivocally true that digital systems are actually safer and better at restricting access to information, or giving access to the right information, than analog systems.

– Derk Arts, CEO & Founder, Castor 

Shifting to a “digital first” mindset creates a larger move toward eConsent adoption among sponsors

The “digital first” approach not only highlights the safety and security benefits of eConsent over paper but also helps study designers and IRBs experience the expanded capabilities of the whole eConsent ecosystem out in the open. Seeing the eConsent capabilities allows clinical staff and study designers to plan how to deliver extra information customized to the specific needs of participants.

Digital systems can enhance the patient experience by allowing researchers to develop tools that are more efficient, interactive, and personalized. They can give study participants enough flexibility and time to be better prepared for the consultation with clinicians, formulate questions, or use the waiting time efficiently if the tools are used in the waiting room.

However, the success of this approach hinges on timely adoption. By embracing the digital first mindset from the get-go, study teams can ensure that eConsent isn’t an afterthought but a foundational pillar of the study design. This proactive approach can catalyze conversations, ensuring that sponsors are aligned and onboard from the outset.

Watch the on-demand recording of eConsent from Sponsor to Site: Navigating Successful eConsent Adoption to take in the full conversation and hear about the impact of eConsent, real-world examples of eConsent evaluation, readiness, and implementation, and potential for future industry adoption. 

Discussion Highlights: 

 

Castor’s product suite ensures security, access, and ease of use for sponsors. Our ongoing innovations keep Castor software and technology poised to meet the expanding needs of clinical trials. Ready to learn more? Let’s chat.

3 Ways eConsent Tackles the Challenges of Modern Clinical Trials

September 28th, 2021 by

Although eConsent struggled to gain momentum and wider acceptance pre-pandemic, it is powered by technology that is regularly used in daily life and is more approachable than one might think. Regardless, some researchers are still hesitant to embrace remote technology. In this article, we’ll explore three areas that researchers cite as obstacles when implementing eConsent: data safety, regulatory compliance, and identity verification. Read on to learn how these issues can be resolved safely and efficiently.

Playing it safe with data

Data integrity, safety, and privacy are of critical importance these days. And with good reason—never before has so much personal data been processed through online services. Since medical data is among the most private there is, clinical trials must adhere to the highest standards of data protection while offering their participants data privacy. Fortunately, there are several ways to accomplish this via eConsent.

In order to keep data safe and secure, opt for an eConsent solution with:

When employing eConsent, investigators should use embedded HIPAA-compliant authorization forms to ensure FDA compliance. In the EU, the GDPR’s most recent guidance requires “an effective audit trail of how and when consent was given, so you can provide evidence if challenged” and “an appropriate cryptographic hash function to support data integrity.” 
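The cryptographic-hash requirement is concrete and easy to illustrate: each audit record can carry a hash computed over its content plus the previous record’s hash, so altering any earlier entry breaks every later hash. A minimal sketch using Python’s standard library, with invented record fields:

```python
import hashlib
import json

def chained_hash(record: dict, prev_hash: str) -> str:
    """Hash a consent audit record together with the previous record's
    hash; tampering with any earlier entry invalidates all later hashes."""
    blob = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

h0 = "0" * 64  # genesis value for the first record
h1 = chained_hash({"event": "consent_signed", "participant": "P-001"}, h0)
h2 = chained_hash({"event": "consent_withdrawn", "participant": "P-001"}, h1)
# Verification replays the chain and compares stored vs. recomputed hashes.
```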

Any video conferencing used as part of eConsent must be secure, traceable, and fully compliant. For example, it should be encrypted and US 21 CFR Part 11 compliant. 

Navigating the regulatory jungle

The regulatory jungle is complex enough to discourage researchers from changing established methods they know are fully compliant (or that they believe are fully compliant). It doesn’t help that different countries and regions have their own regulations around the use of eConsent and acceptance of eSignatures. At the time of writing, the FDA’s most recent guidance was published in December 2016, and no EU regulation or guidance about eConsent in clinical trials exists.

Ensuring regulatory compliance is not hopeless, however. In general, Institutional Review Boards (IRBs) and ethics committees have consistent requirements for paper and electronic consent, such as:

Researchers need to familiarize themselves with their applicable IRB guidelines. (Find a handy overview of eConsent guidelines in twelve different countries here.) When making a submission for an eConsent-based study, it’s important to also address:

Checking IDs at the door

Clinical investigators need to confirm the identity of all participants in a trial according to regulatory requirements. But that doesn’t mean every participant needs to present themselves at the study site—video conferencing to the rescue!

An appropriate, secure video conferencing solution provides investigators with real-time, visual interaction with participants. This allows the study team to verify the identity and, if using a hybrid wet-signature with eConsent, witness the signature of each participant. But the benefits don’t end there—video conferencing allows a clinical researcher to answer questions directly, cementing trust with a participant and increasing retention. Importantly, investigators are able to observe the participant’s behavior and determine if they are capable of offering informed consent and are consenting of their own free will.

Castor eConsent is a flexible, user-friendly, and secure solution for your next trial. If you’re interested in learning more about putting eConsent to work in your next trial, reach out to one of our friendly Castorians here.

EQ-5D in European Trials: When Generic QoL Measures Actually Matter

August 26th, 2025 by

Many European biotechs discover that FDA-focused PRO strategies overlook valuable reimbursement opportunities across European markets. Companies repositioning EQ-5D from “regulatory necessity” to “HTA advantage” often secure faster reimbursement approvals, while acknowledging that the same data contributes minimal value to FDA label claims.

This reality reflects the fundamental misalignment between vendor marketing and regulatory practice: EQ-5D’s value lies in European health technology assessment, not US regulatory acceptance.

The Regulatory Reality Check

Analysis of 735 FDA drug approvals found that 0% included EQ-5D data in product labeling, while only 5% mentioned it in supporting documentation [Shaw et al. 2024]. Meanwhile, European Medicines Agency acceptance reached 5% for labeling support, which is limited but measurably better than the FDA’s complete resistance.

The FDA’s opposition to generic quality-of-life measures stems from fundamental concerns: generic instruments lack sensitivity to detect small therapeutic benefits and cannot distinguish treatment-specific adverse effects. The agency’s preference for disease-specific PRO measures reflects regulatory pragmatism, not methodological bias.

Where EQ-5D Actually Succeeds

EQ-5D’s strength lies in European health technology assessment, not clinical outcome measurement:

German HTA Bodies: Analysis shows strong EQ-5D acceptance in German HTA processes, with IQWiG and G-BA demonstrating systematic usage when quality of life assessment is included [Shaw et al. 2024]. German bodies show notable acceptance for clinical outcome assessment among European regulators.

NICE Guidelines: NICE continues to recommend EQ-5D for cost-utility analysis while maintaining the 2019 position on EQ-5D-5L value sets, requiring mapping to 3L values for consistency [NICE 2019].

French HAS: Recognizes EQ-5D within their health economic evaluation methodology, though specific usage varies by therapeutic area and assessment context [HAS 2020].

Understanding EQ-5D’s Actual Structure

EQ-5D is a simple, five-question static questionnaire. The instrument covers five dimensions: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression, plus the EQ-VAS, which rates overall health from 0 to 100.

No special site training or certification is required for standard administration. The 2-3 minute completion time reflects genuine simplicity, not algorithmic optimization. This simplicity explains both EQ-5D’s broad adoption and its regulatory limitations.
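That simplicity shows in the data shape itself: a completed EQ-5D-5L response is five levels plus a VAS score, conventionally summarized as a five-digit health-state profile. Here is a sketch of that structure; note that converting a profile to a utility index requires a country-specific value set licensed from EuroQol, which is deliberately omitted here.

```python
from dataclasses import dataclass

DIMENSIONS = ("mobility", "self_care", "usual_activities",
              "pain_discomfort", "anxiety_depression")

@dataclass
class EQ5D5LResponse:
    levels: dict[str, int]  # 1 (no problems) .. 5 (extreme problems)
    vas: int                # EQ-VAS: overall health, 0-100

    def profile(self) -> str:
        """Five-digit health state, e.g. '11212'."""
        return "".join(str(self.levels[d]) for d in DIMENSIONS)

resp = EQ5D5LResponse(
    levels={"mobility": 1, "self_care": 1, "usual_activities": 2,
            "pain_discomfort": 1, "anxiety_depression": 2},
    vas=78,
)
print(resp.profile())  # "11212"
```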

When EQ-5D Doesn’t Work

Avoid EQ-5D as primary strategy when:

  • FDA labeling claims are your primary objective (0% success rate)
  • Disease-specific outcome measurement is regulatory requirement
  • Ceiling effects are expected in your patient population
  • Sensitivity to small therapeutic benefits is crucial for approval

Implementation limitations to acknowledge:

  • Generic nature misses condition-specific improvements
  • Statistical analysis challenges affect many studies due to missing data and ceiling effects [Pickard et al. 2007]
  • No special training requirements means limited differentiation from competitor implementations

The Economic Reality

EQ-5D’s true value lies in quality-adjusted life year (QALY) calculations essential for European health technology assessment. The instrument provides standardized utility values across therapeutic areas, enabling cost-effectiveness analysis required by most European reimbursement bodies.

However, this economic value shouldn’t be confused with regulatory acceptance. Analysis shows clinical outcome assessment represents approximately 18% of EQ-5D usage in technology appraisals, with the majority focused on economic evaluation [Shaw et al. 2024].
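The QALY arithmetic itself is simple: utility weighted by time, typically computed as the area under the utility curve. A sketch with illustrative utility values, where one year at full health (utility 1.0) equals one QALY:

```python
def qalys(timepoints_years: list[float], utilities: list[float]) -> float:
    """Area under the utility curve via the trapezoidal rule."""
    pairs = list(zip(timepoints_years, utilities))
    return sum(
        (t1 - t0) * (u0 + u1) / 2
        for (t0, u0), (t1, u1) in zip(pairs, pairs[1:])
    )

# Illustrative: utilities derived from EQ-5D at baseline, 6 and 12 months.
print(qalys([0.0, 0.5, 1.0], [0.62, 0.74, 0.80]))  # ≈ 0.725 QALYs
```

Comparing that area between treatment arms, and dividing the cost difference by the QALY difference, yields the cost-effectiveness ratios HTA bodies evaluate.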

Your Practical Implementation Plan

Immediate Assessment (This Week)

  1. Clarify regulatory objectives: Determine whether your primary need is FDA labeling, European regulatory support, or HTA economic modeling
  2. Review current PRO strategy: Assess whether disease-specific measures are already planned for regulatory endpoints
  3. Evaluate HTA requirements: Identify which European markets require QALY data for reimbursement decisions

Strategic Planning (Next 2-4 Weeks)

  1. HTA body consultation: Engage with NICE, G-BA, or relevant bodies on EQ-5D requirements for your therapeutic area
  2. Platform assessment: Ensure your clinical trial solutions support both EQ-5D data collection and economic analysis
  3. Budget allocation: Plan implementation costs focusing on health economic value rather than regulatory claims
  4. Timeline integration: Coordinate EQ-5D deployment with broader European market access strategy

Implementation Excellence (Following 12-22 Weeks)

  1. HTA-focused deployment: Prioritize data quality for economic modeling over regulatory claim support
  2. Country-specific optimization: Apply appropriate value sets and preference weights by market
  3. Economic analysis preparation: Generate QALY calculations supporting reimbursement submissions
  4. Realistic outcome measurement: Track HTA acceptance rates rather than regulatory approval metrics

Frequently Asked Questions

Why do vendors position EQ-5D as “regulatory accepted” if FDA acceptance is 0%?

Vendor marketing often conflates HTA acceptance with regulatory approval. While EQ-5D has established HTA positioning, particularly with NICE’s continued preference, this differs significantly from regulatory labeling acceptance. The distinction matters for setting realistic expectations and budget allocation.

Should I avoid EQ-5D entirely for US trials?

Not necessarily. EQ-5D can provide valuable health economic modeling data for US payers and HTA bodies like ICER. However, expect zero contribution to FDA labeling claims and plan disease-specific measures for regulatory endpoints.

How do I maximize EQ-5D’s value in European trials?

Focus on health economic evaluation rather than clinical outcome assessment. Ensure your platform supports QALY calculations with country-specific preference weights, and coordinate with HTA bodies early in protocol development.

What’s the most efficient EQ-5D implementation approach?

HTA-optimized implementation (12-16 weeks) provides the highest return on investment by focusing on EQ-5D’s established strengths rather than attempting to overcome its regulatory limitations.

References

[1] Shaw, J.W., et al. (2024). A Review of the Use of EQ-5D for Clinical Outcome Assessment in Health Technology Assessment, Regulatory Claims, and Published Literature. The Patient. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC11039499/

[2] Pickard, A.S., et al. (2007). Psychometric comparison of the standard EQ-5D to a 5 level version in cancer patients. Medical Care, 45(3), 259-263. Available at: https://pubmed.ncbi.nlm.nih.gov/17304084/

[3] Sampson, C. (2022). NICE and the EQ-5D-5L: Ten Years Trouble. PharmacoEconomics – Open, 6, 5-8. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC8807740/

[4] Ciani O, et al. (2023). The Assessment of Patient-Reported Outcomes for the Authorisation of Medicines in Europe: A Review of European Public Assessment Reports from 2017 to 2022. Pharmacoeconomics, 41(11), 1411-1426. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC10627987/

[5] NICE. (2019). Position Statement on Use of the EQ-5D-5L Value Set for England (updated October 2019). Available at: https://www.nice.org.uk/about/what-we-do/our-programmes/nice-guidance/technology-appraisal-guidance/eq-5d-5l

[6] Haute Autorité de Santé (HAS). (2020). Choices in Methods for Economic Evaluation. Available at: https://www.has-sante.fr/jcms/r_1499422/en/methodological-guide-for-health-economic-evaluation

[7] Devlin, N., et al. (2018). Valuing health-related quality of life: An EQ-5D-5L value set for England. Health Economics, 27(1), 7-22. Available at: https://pubmed.ncbi.nlm.nih.gov/28833869/

[8] Janssen, M.F., et al. (2013). Measurement properties of the EQ-5D-5L compared to the EQ-5D-3L across eight patient groups. Quality of Life Research, 22(7), 1717-1727. Available at: https://pubmed.ncbi.nlm.nih.gov/23184421/

[9] EuroQol Research Foundation. (2023). EQ-5D-5L User Guide. Available at: https://euroqol.org/information-and-support/euroqol-instruments/eq-5d-5l/

[10] FDA. (2024). Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. Available at: https://www.fda.gov/media/77832/download

ePRO, eCOA 101: Everything You Need to Know About ePRO and eCOA

July 31st, 2025 by

What is eCOA?

Electronic Clinical Outcome Assessment (eCOA) is the digital transformation of patient outcome measurement in clinical trials. In place of traditional paper forms, eCOA encompasses all electronically captured clinical outcomes data, fundamentally changing how trials collect and manage patient-reported information.

The shift from paper to digital has revolutionized clinical trial data collection, creating new opportunities for real-time monitoring and improved data quality. Modern electronic data capture systems integrate with eCOA platforms to create comprehensive clinical data ecosystems supporting both regulatory submissions and real-world evidence generation.

Understanding ePRO Within the eCOA Framework

Electronic Patient Reported Outcomes (ePRO) represents the patient-centric component within the broader eCOA framework. While these terms are often used interchangeably, ePRO is actually one of four distinct assessment types within the eCOA ecosystem:

  • ePRO (electronic Patient Reported Outcomes): Patient-reported symptoms, quality of life measures, and treatment experiences
  • eClinRO (electronic Clinician Reported Outcomes): Healthcare provider assessments and clinical observations
  • eObsRO (electronic Observer Reported Outcomes): Caregiver or family member observations, particularly important in pediatric or cognitive studies
  • ePerfO (electronic Performance Outcomes): Objective measurements captured through digital tools and devices

ePRO has gained particular prominence in patient journey optimization due to its direct connection to patient experiences and regulatory emphasis on patient-centered drug development. Decentralized clinical trials increasingly rely on ePRO data to capture patient experiences outside traditional clinic settings.
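To make the taxonomy above concrete, here is a minimal sketch of how a platform might tag each captured record with its assessment type. The field names and types are illustrative only, not any vendor's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class AssessmentType(Enum):
    """The four eCOA assessment categories."""
    EPRO = "patient_reported"        # symptoms, QoL, treatment experience
    ECLINRO = "clinician_reported"   # provider assessments and observations
    EOBSRO = "observer_reported"     # caregiver or family observations
    EPERFO = "performance"           # device- or task-based measurements

@dataclass
class Assessment:
    subject_id: str
    instrument: str                  # e.g., a licensed questionnaire name
    assessment_type: AssessmentType
    reporter: str                    # who actually entered the data
    captured_at: datetime            # device timestamp, feeds the audit trail

entry = Assessment("SUBJ-001", "daily-symptom-diary",
                   AssessmentType.EPRO, "patient",
                   datetime(2025, 7, 31, 8, 30))
```

Keeping the reporter and the assessment type as separate fields matters: an eObsRO record completed by a caregiver must never be conflated with an ePRO record completed by the patient.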

The Implementation Reality: Beyond Technology

eCOA implementation involves significantly more complexity than deploying a simple electronic survey. The process requires careful coordination of regulatory compliance, intellectual property management, and operational planning across multiple stakeholders.

Regulatory Framework: Regulatory agencies continue emphasizing electronic data collection approaches in clinical trials, with updated guidance supporting eCOA implementation. The FDA defines Clinical Outcome Assessments as measures that describe or reflect “how a patient feels, functions, or survives,” encompassing the four eCOA categories [FDA 2024][1]. The industry has responded with remarkable growth – the global eCOA solutions market reached $1.94 billion in 2024 and is projected to grow at 16.1% annually to $4.78 billion by 2030 [MarketsandMarkets 2024][2].

Intellectual Property Management: Many clinical trials utilize copyrighted assessment instruments that require specific licensing agreements. These instruments often carry usage restrictions and approval processes that can significantly impact implementation timelines and platform selection decisions.

System Integration Requirements: eCOA platforms must integrate with existing clinical trial infrastructure including EDC systems, clinical trial management systems, and regulatory submission processes. This integration requires sophisticated technical architectures and security protocols that meet clinical research standards.
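As a rough sketch of what one such integration touchpoint can look like, the snippet below pushes a completed assessment to an EDC-style REST endpoint. The URL, payload shape, and authentication scheme are hypothetical stand-ins; a real integration follows the specific vendor's documented API.

```python
import requests

# Hypothetical endpoint and bearer token -- placeholders, not a real API.
EDC_URL = "https://edc.example.com/api/v1/studies/STUDY-01/records"

def push_to_edc(record: dict, token: str) -> None:
    """Send one eCOA record to the EDC; raise on failure so a retry
    queue can pick it up rather than silently dropping data."""
    resp = requests.post(
        EDC_URL,
        json=record,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
```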

Data Integrity and Compliance: Clinical trials require comprehensive audit trails, electronic signatures, and data lineage capabilities that support regulatory inspections. Organizations must ensure their eCOA platforms integrate with electronic consent processes and maintain data integrity throughout the study lifecycle.
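One widely used way to make an audit trail tamper-evident is to chain each entry to a hash of the one before it, so any retroactive edit breaks the chain. The sketch below illustrates the idea only; it is not any particular platform's implementation, and a validated system layers electronic signatures, access controls, and secure storage on top.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(prev_hash: str, actor: str, action: str, change: dict) -> dict:
    """Append-only audit-trail entry, hash-chained to its predecessor."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,        # who made the change
        "action": action,      # e.g., "create", "update", "sign"
        "change": change,      # old and new values, for data lineage
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry

first = audit_entry("0" * 64, "site_user_01", "create",
                    {"field": "pain_score", "old": None, "new": 4})
second = audit_entry(first["hash"], "site_user_01", "update",
                     {"field": "pain_score", "old": 4, "new": 5})
```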

BYOD vs. Provisioned Devices: Strategic Considerations

The choice between Bring Your Own Device (BYOD) and sponsor-provisioned device strategies represents a critical implementation decision with implications for patient experience, data quality, and operational complexity.

Provisioned Device Approach

Sponsor-supplied devices offer controlled environments with standardized hardware, operating systems, and security configurations. This approach provides consistency across all participants but introduces logistical challenges including device distribution, technical support, and patient training requirements.

BYOD Strategy Considerations

BYOD approaches leverage patients’ existing smartphones and tablets, potentially improving engagement through familiar interfaces and integration into daily routines. However, BYOD implementation requires careful attention to device variability, security protocols, and data equivalency validation across different platforms and operating systems.

BYOD adoption is accelerating rapidly, driven by cost-effectiveness and patient familiarity with personal devices. Research demonstrates the importance of patient-centered design and user experience for successful eCOA implementation, with electronic methods showing superior compliance compared to traditional paper-based approaches [Clinical Leader 2024][3]. Organizations considering BYOD strategies should evaluate current regulatory expectations and validation requirements for their specific study contexts.

Licensing and Validation Considerations

Copyright and licensing management represents one of the most complex aspects of commercial eCOA implementations, particularly for studies utilizing established clinical assessment instruments.

Licensing Requirements: Many clinical outcome assessments are copyrighted materials requiring study-specific licenses for electronic deployment. These licensing processes often involve multiple stakeholders and can require technical documentation, translation reviews, and approval workflows that extend implementation timelines.

Validation Processes: Electronic implementations of established instruments may require validation studies to demonstrate equivalence with paper-based versions. These studies ensure that electronic formats maintain the measurement properties and clinical validity of the original instruments.

Multi-Language Considerations: Global clinical trials require linguistic validation processes that extend beyond simple translation. These processes involve cultural adaptation, cognitive testing, and formal validation studies to ensure conceptual equivalence across different populations and languages.

Implementation Planning and Timeline Management

Successful eCOA implementation requires realistic planning that accounts for the various regulatory, technical, and operational requirements involved in electronic data collection deployment.

Timeline Variables: Implementation schedules depend on multiple factors including study complexity, regulatory requirements, licensing needs, translation requirements, and organizational readiness. Early identification of these variables helps establish realistic project timelines and resource allocation.

Change Management: Modifications to electronic systems after initial deployment often require coordination across multiple study components including visit schedules, data transfer processes, site training materials, and regulatory documentation. Planning for potential changes during the implementation process helps minimize disruption to ongoing studies.

Optimization Opportunities: Organizations with established eCOA capabilities may achieve implementation efficiencies through standardized processes, pre-validated instrument libraries, and streamlined approval workflows. These approaches can provide time savings when appropriate for specific study requirements.

Patient Engagement and Data Quality Benefits

The transformation from paper-based assessments to electronic systems addresses fundamental challenges in clinical data collection, particularly patient compliance and data accuracy.

Compliance and Completion Rates: Research has demonstrated that patients are significantly less compliant with paper diaries than previously assumed, with studies showing higher completion rates and more accurate data capture with electronic methods [Stone et al. 2002][4]. Electronic systems provide real-time data validation, immediate feedback to patients, and automated reminders that improve protocol adherence.
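Electronic timestamps are what make compliance measurable at all: an ePRO system records exactly when each entry was made, so true within-window completion rates can be computed, something paper diaries cannot verify. A minimal sketch, assuming a 24-hour completion window per diary day:

```python
from datetime import datetime, timedelta

def diary_compliance(expected_days, entry_timestamps, window_hours=24):
    """Fraction of expected diary days with an entry stamped inside
    the allowed completion window."""
    completed = 0
    for day_start in expected_days:
        day_end = day_start + timedelta(hours=window_hours)
        if any(day_start <= ts < day_end for ts in entry_timestamps):
            completed += 1
    return completed / len(expected_days)

# Three expected diary days; the third entry arrived a day late.
days = [datetime(2025, 7, d) for d in (1, 2, 3)]
stamps = [datetime(2025, 7, 1, 20, 15), datetime(2025, 7, 4, 9, 0)]
print(diary_compliance(days, stamps))  # 0.333...
```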

Patient-Centered Design: Modern eCOA platforms prioritize intuitive user interfaces and patient-facing technology improvements that accommodate diverse patient populations, including considerations for age, technical literacy, and accessibility requirements. This patient-centric approach recognizes that patients are experts in their own experience and should be empowered to provide accurate, meaningful data.

Real-Time Data Quality: Electronic capture enables immediate data validation, range checks, and consistency monitoring that identifies potential issues before they impact study integrity. This real-time capability supports both patient safety monitoring and regulatory submission requirements.
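A hypothetical sketch of such entry-time checks, using two made-up questionnaire fields (a 0-10 pain rating plus an average/worst pain pair):

```python
def edit_checks(response: dict) -> list[str]:
    """Field-level checks fired at data entry, so problems surface to the
    patient or site immediately rather than weeks later at monitoring."""
    issues = []
    # Range check: a 0-10 numeric rating scale answer must stay in range.
    pain = response.get("pain_nrs")
    if pain is None or not 0 <= pain <= 10:
        issues.append("pain_nrs missing or outside 0-10")
    # Consistency check: worst pain should never be below average pain.
    worst, avg = response.get("worst_pain"), response.get("avg_pain")
    if worst is not None and avg is not None and worst < avg:
        issues.append("worst_pain lower than avg_pain")
    return issues

print(edit_checks({"pain_nrs": 12, "worst_pain": 3, "avg_pain": 6}))
```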

Implementation Best Practices and Workflow

Successful eCOA implementation requires systematic planning that addresses both technical and operational considerations throughout the study lifecycle.

Pre-Implementation Planning: Effective eCOA deployment begins with comprehensive assessment of patient populations, study requirements, and operational capabilities. This includes evaluating patient technology access, site infrastructure, and regulatory requirements across all study regions.

User Experience Validation: Testing eCOA interfaces with representative patient populations ensures usability across diverse demographics and technology comfort levels. This validation process should include cognitive interviews and usability testing to identify potential barriers to completion.

Training and Support Infrastructure: Comprehensive training programs for both sites and patients, supported by accessible technical support, are critical for successful adoption. This includes developing multilingual support materials and establishing clear escalation procedures for technical issues.

Data Integration and Monitoring: eCOA platforms must integrate seamlessly with electronic data capture systems and provide real-time monitoring capabilities that support both operational oversight and patient safety monitoring.

Technology Evolution and Future Considerations

Emerging Technologies: The eCOA field continues evolving with advances in mobile technology, wearable devices, and digital therapeutics integration. Organizations are increasingly exploring how these technologies can enhance patient engagement while maintaining regulatory compliance and data integrity.

Regulatory Adaptation: As regulatory agencies gain experience with electronic data collection approaches, guidance continues evolving to address new technologies and implementation approaches. Staying current with regulatory expectations remains essential for successful eCOA deployment.

Industry Standardization: Professional organizations and industry consortiums continue developing best practices and standardized approaches that can improve implementation efficiency and reduce regulatory review timelines across the industry.

 

Looking to implement eCOA solutions for your clinical trials? Discover how Castor’s integrated eCOA platform combines regulatory compliance with patient-centered design to accelerate your clinical research objectives.

Frequently Asked Questions

What is the difference between eCOA and ePRO?

eCOA (Electronic Clinical Outcome Assessment) serves as the umbrella term for all electronically captured clinical outcomes data. ePRO (Electronic Patient Reported Outcomes) represents the patient-reported component within the eCOA framework, alongside eClinRO (clinician-reported), eObsRO (observer-reported), and ePerfO (performance-based) outcomes.

How long does eCOA implementation typically take?

Implementation timelines vary significantly based on study complexity, regulatory requirements, licensing needs, translation requirements, and organizational capabilities. Early identification of these factors and realistic project planning help establish appropriate timelines for specific study contexts.

What are the key considerations for BYOD vs. provisioned devices?

The choice depends on study requirements, patient populations, regulatory considerations, and operational capabilities. BYOD approaches may improve patient engagement through familiar interfaces, while provisioned devices offer controlled environments. Both strategies require careful attention to data quality, security, and regulatory compliance.

How do licensing requirements affect eCOA implementation?

Many clinical assessment instruments are copyrighted materials requiring study-specific licenses for electronic deployment. These licensing processes can involve multiple stakeholders and approval workflows, making early planning and stakeholder engagement important for realistic timeline management.

 

References

  1. U.S. Food and Drug Administration. (2024). Clinical Outcome Assessment (COA) Frequently Asked Questions. Available at: https://www.fda.gov/about-fda/clinical-outcome-assessment-coa-frequently-asked-questions
  2. MarketsandMarkets. (2024). Electronic Clinical Outcome Assessment Solutions Market Growth, Drivers, and Opportunities. Available at: https://www.marketsandmarkets.com/Market-Reports/ecoa-solutions-market-87857774.html
  3. Clinical Leader. (2024). The Rise of Electronic Clinical Outcome Assessments (eCOAs) in the Age of Patient Centricity. Available at: https://www.clinicalleader.com/doc/the-rise-of-electronic-clinical-outcome-assessments-ecoas-in-the-age-of-patient-centricity-0001
  4. Stone, A.A., et al. (2002). Patient non-compliance with paper diaries. BMJ, 324(7347), 1193-1194. Available at: https://www.bmj.com/content/324/7347/1193

Why eCOA Still Fails in Clinical Trials: Practical Strategies to Fix Baseline Data Problems

July 18th, 2025 by

Electronic COAs were meant to protect data quality and capture the patient voice. But missing baseline data, poor site preparation, and unrealistic timelines continue to break studies. In a frank discussion hosted by Castor, Derk Arts (Castor), Katja Rudell (Kielo Research), and Ari Gnanasakthy (RTI Health Solutions) unpacked where the breakdowns really happen — and how to address them practically.

“When you have 40% missing at baseline, you pretty much lost the study.” Ari Gnanasakthy

What Goes Wrong, Repeatedly, in eCOA Clinical Trials

For over 20 years, eCOA has been positioned as a fix for data quality in clinical trials. But under pressure to launch quickly, the fundamentals still fail. Platforms aren’t validated in time. Sites are unprepared to train patients. Devices get stuck in customs. Sponsors overload studies with endpoints without checking whether sites can realistically execute them.

Ari gave a stark example: nearly half of baseline data lost because provisioned devices didn’t arrive on time. That kind of problem can destroy confidence in the entire trial, yet it keeps happening because sponsors focus on checklists rather than operational readiness.

“Defaulting to ePRO is fine. But when execution fails, confidence collapses.”

eCOA Organizational Friction and Clinical Operations

Many of these issues have little to do with the technology itself. They are organizational. Procurement departments pick eCOA vendors without involving scientific leadership. Clinical teams don’t leave enough time between protocol sign-off and first-patient-in. Multiple business units push conflicting priorities: commercial, HTA, regulatory, patient engagement — all pulling studies in different directions.

Katja noted that even well-designed eCOA systems collapse when no one coordinates translations, site training, device shipments, and ongoing data monitoring. When those hand-offs fail, secondary endpoints suffer, leaving massive data gaps that no statistician can fix after the fact.

“Different teams chase different goals. That’s how data gets lost.” Katja Rudell

Practical Steps to Simplify eCOA Implementation

Derk challenged the group to question the myth of unavoidable complexity. In industries like aviation, change and surprises are planned for systematically. Clinical trials tolerate known failure points again and again — customs delays, missing devices, untrained staff — and act surprised every time.

Among the actionable ways forward the panelists pointed to, one stood out: sponsors should push vendors for true contingency planning. There is no excuse for having no Plan B in a study with critical endpoints.

“We act surprised every time these issues happen. That’s on us.” Derk Arts

A Path to Consistency and Patient-Reported Outcomes Success

No one suggested eCOA is fundamentally flawed. Quite the opposite: when done well, it supports high-quality patient-reported outcomes that add enormous value to a study. Ari reminded the audience that many trials do succeed — but those that fail, fail for predictable, preventable reasons.

Sponsors, CROs, and vendors should share lessons across studies, build repeatable playbooks, and train site staff continuously. Patients will only deliver quality data if their participation is practical and realistic — no matter what the protocol says on paper.

“If you expose patients to a product, you owe it to them to ask how they feel about it.” Katja Rudell

Watch the full webinar on-demand here for unfiltered, practical insights you can apply to your next clinical trial.

Building Biotech: From Science to Scale – Strategic Lessons from the Frontline

May 28th, 2025 by

Biotech isn’t for the faint-hearted. As Derk Arts, CEO of Castor, and Professor Thomas Wurdinger discussed in their recent LinkedIn Live session, building a successful biotech company demands more than just groundbreaking science. Their conversation, titled “Building Biotech: From Science to Scale,” peeled back the layers on what truly drives success in early-stage biotechs. Spoiler: it’s less about having the best data and more about narrative, execution, and alignment.

Biotech, they argue, is the survival of the funded. It’s a terrain where the loudest story often drowns out the best science. Wurdinger’s own journey underscores this paradox. From his days as an RNA researcher to founding ThromboDx—a diagnostics company acquired by Illumina—and watching it evolve into Grail, his career exemplifies the rare but critical blend of academic depth and commercial savvy.

“Having the best data doesn’t guarantee funding, while poor data can often get you funded fast.”— Thomas Wurdinger

One of the most telling themes from the conversation is how market sentiment frequently trumps scientific merit. The Illumina-Grail case highlights this reality. Regulatory friction, misaligned expectations, and timing can vaporize billions in value overnight. In such a volatile landscape, a great idea needs more than validation; it needs strategy.

Wurdinger’s transition from the lab bench to the boardroom reveals the critical early decisions that separate promising biotechs from perishable ones. His threefold advice? First, founders must seek strong mentors who can help navigate intellectual property (IP), licensing, and fundraising. Second, never sign licensing or investment documents without independent legal counsel—no matter how friendly the university seems. And third, founders need brutal self-awareness: not everyone is meant to be a CEO, and clinging to titles can stall progress.

The tension between data quality and funding viability also took center stage. Many startups, especially those emerging from academic labs, struggle to convince VCs because their innovation doesn’t fit into the typical biotech investment playbook. Unlike therapeutic programs with clear regulatory and clinical pathways, platform or diagnostic companies must often invent their own roadmap—and articulate that roadmap convincingly to investors.

“Pride is your worst enemy when you’re in a startup.”— Thomas Wurdinger

This is where storytelling matters. Investors aren’t just betting on data. They’re betting on a vision, a team, and a well-crafted narrative that explains why now is the time, why this team is the right one, and why this solution matters. That’s why Wurdinger’s investment fund includes a filmmaker as one of the partners—to help founders construct that narrative arc. Because in biotech, your pitch deck is more than a slide show. It’s the first act of a story that investors must want to see through to its final scene.

While robust data underpins credibility, it’s often not the first thing investors see. Especially in early-stage funding, decisions are made based on the team, the problem-solution fit, and the ability to scale. Once a startup progresses, though, clinical readiness becomes critical. This includes preclinical validation, manufacturing scalability via CDMOs or CROs, and FDA registration planning. Without these, clinical trials are non-starters—and timelines slip fast.

Explore Castor’s tools for decentralized and hybrid trials

Wurdinger also offered pragmatic insight into fundraising phases. Many companies bridge the early-stage “valley of death” with government grants (like Eurostars) and local loans. Family, friends, and early believers play a crucial role, albeit a risky one. But to attract serious venture capital, companies need credible leadership, not just science. A seasoned CEO, clear go-to-market strategy, and defensible IP position are often the deciding factors.

Team-building was another recurrent theme. Founders should avoid perfectionism when assembling their leadership team. Instead, they need people who are aligned, resilient, and pragmatic. Equity dilution is not failure; it’s the price of momentum. The value lies in execution, not in retaining 100% of a stalled startup.

“You can’t get the best people in when you’re a startup and basically a nobody in startup land.”— Thomas Wurdinger

For first-time founders, Wurdinger left a final checklist: seek out mentors who challenge you; retain your own legal counsel; build your story before your data; and hire for your blind spots. Passion fuels the journey, but structure sustains it. No matter how disruptive your science, startups don’t scale themselves. They are built, step-by-step, through smart strategy, clinical readiness, and investor trust.

Platforms like Castor play a pivotal role here. As Wurdinger and Arts both noted, accelerating clinical operations through tech-enabled solutions is one of the few defensible edges in an increasingly competitive biotech ecosystem. With modern EDC systems, decentralized trial capabilities, and scalable workflows, Castor helps bridge the gap from hypothesis to human evidence.

The path from science to scale is long, but it’s navigable. With the right story, the right data, and the right team, even the most complex ideas can become transformative companies.

Try Castor EDC For Yourself

Start designing your own study structure and forms today.

Try For Free