On March 3, 2026, I had the privilege of joining an exceptional group of panelists at the SCRS EU Summit in Amsterdam for a session titled "AI in Daily Site Operations." The room was packed, the conversation was energetic, and the questions from the audience were exactly the kind that tell you a topic has truly arrived. People weren't asking whether AI matters in clinical research. They were asking how to use it responsibly, effectively, and without running afoul of the regulatory frameworks that govern our industry.
I was joined on the panel by Viviënne van de Walle (MD, PhD, Medical Director and Founder of PT&R), Alexandra Gerritsen (MBA, CEO and Founder of UniTriTeam), and Kristof Hadi (PharmD, MSc, Associate Director of Study Site Engagement at Takeda). Each panelist brought a distinct vantage point: investigator site operations, site network management, and sponsor-side strategy. The diversity of perspectives made for a genuinely substantive discussion.
In this post, I want to share some of the key themes from the session, with a particular focus on the regulatory and compliance dimension that formed the backbone of my contributions to the panel.
The question everyone is really asking
Before getting into the regulatory specifics, it’s worth acknowledging the undercurrent running through almost every question in the room: Am I allowed to use AI? And if I use it, will it get me in trouble during an inspection?
These are legitimate concerns, and I appreciate that people are asking them. It reflects a level of regulatory maturity in our industry that is frankly encouraging. But it also reflects how much confusion still exists around what AI regulation actually looks like in clinical trials today, and more importantly, what it doesn’t look like.
Where does AI regulation in clinical trials actually stand?
The regulatory landscape for AI in drug development shifted considerably in early 2025 and into 2026. In January 2025, the FDA issued its first-ever AI draft guidance for drug and biological products, establishing a risk-based credibility assessment framework for AI models used in regulatory decision-making. A year later, in January 2026, the FDA and EMA published joint Guiding Principles of Good AI Practice in Drug Development, a landmark moment of international regulatory harmonization that the industry should not underestimate.
Three principles from FDA’s CDER underpin the emerging regulatory approach: adaptive regulation (iterative and technology-enabling), risk-based regulation (oversight proportionate to the risk the AI model presents), and collaborative regulation (engagement across the ecosystem of stakeholders). That third principle matters. It signals that regulators are not looking to issue edicts from on high. They are inviting dialogue, and the industry should take them up on it.
Not all AI use is the same, and that distinction matters enormously
One of the most important clarifications I offered during the panel was around scope. The FDA’s guidance is explicit: AI used to support regulatory decision-making regarding a product’s safety, effectiveness, or quality is within regulatory scope. AI used for operational efficiency at sites, including scheduling, contract review, drafting SOPs, and administrative task management, is generally outside that scope.
This is a meaningful distinction for sites and sponsors trying to determine how much regulatory infrastructure they need to put in place before they start using AI tools. The answer is: it depends on what the AI is doing and how much influence it has over a decision that could affect patient safety or data integrity.
The FDA has established a practical two-dimensional risk matrix to help make this determination. The first axis is model influence: is the AI one input among many, or is it the sole or primary determinant of an outcome? The second axis is decision consequence: how serious would the impact be if the AI got it wrong? A low-influence, low-consequence use case (think: an AI tool that drafts a scheduling email) looks nothing like a high-influence, high-consequence one (think: an AI model that serves as the primary classifier of adverse events for regulatory reporting). The same technology can sit in very different places on that matrix depending on how it is deployed.
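To make the matrix concrete, here is a minimal sketch in Python of how a team might triage a use case along those two axes. The tier labels, enum names, and mapping are my own simplification for illustration, not FDA terminology:

```python
from enum import Enum


class Influence(Enum):
    """How much the AI output drives the decision."""
    ONE_INPUT_AMONG_MANY = 1
    PRIMARY_DETERMINANT = 2


class Consequence(Enum):
    """How serious a wrong AI output would be."""
    LOW = 1   # e.g., a mis-drafted scheduling email
    HIGH = 2  # e.g., a misclassified adverse event in regulatory reporting


def risk_tier(influence: Influence, consequence: Consequence) -> str:
    """Map a use case onto a simplified version of the two-axis matrix."""
    if influence is Influence.PRIMARY_DETERMINANT and consequence is Consequence.HIGH:
        return "high"
    if influence is Influence.ONE_INPUT_AMONG_MANY and consequence is Consequence.LOW:
        return "low"
    return "medium"


# The two examples from the panel discussion:
print(risk_tier(Influence.ONE_INPUT_AMONG_MANY, Consequence.LOW))   # scheduling email -> "low"
print(risk_tier(Influence.PRIMARY_DETERMINANT, Consequence.HIGH))   # AE classifier -> "high"
```

The point of the sketch is not the code itself but the discipline it encodes: assess influence and consequence separately, for each deployment, before deciding how much validation rigor a tool requires.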
Do you need to validate every AI tool you use?
The short answer I gave in Amsterdam: no, but you do need a risk-based validation strategy.
For low-risk operational AI such as scheduling tools, general drafting assistants, and administrative automation, basic accuracy verification, user acceptance testing, and standard IT/cybersecurity review are generally sufficient. No FDA submission is required. An internal SOP documenting how the tool is used is typically adequate.
For medium-risk applications, including AI-assisted data entry verification and preliminary screening flags, you’ll want documented accuracy testing on representative datasets, defined performance metrics, version control, and human oversight mechanisms. Documentation in the Trial Master File may be warranted, and early sponsor engagement is worth considering.
For high-risk AI, such as endpoint adjudication support or safety signal detection where AI is the primary determinant, a full credibility assessment per the FDA’s seven-step framework is expected. Lifecycle monitoring, independent test data validation, and detailed regulatory documentation all come into play. Critically, the primary validation burden for high-risk AI typically falls on the sponsor or applicant, not the site. However, sites still bear responsibility for understanding how the AI tools in their workflows have been validated, using them per validated procedures, and documenting and reporting any deviations or unexpected behavior.
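For teams that want to operationalize this tiering, here is a sketch of how the controls described above might be encoded as an internal checklist. The control descriptions paraphrase this post, not any regulatory text, and the structure is purely illustrative:

```python
# Illustrative mapping of risk tiers to validation controls, paraphrasing
# the tiers described above. A starting point for an internal SOP
# checklist, not regulatory text.
VALIDATION_CONTROLS: dict[str, list[str]] = {
    "low": [
        "Basic accuracy verification",
        "User acceptance testing",
        "Standard IT/cybersecurity review",
        "Internal SOP documenting how the tool is used",
    ],
    "medium": [
        "Documented accuracy testing on representative datasets",
        "Defined performance metrics",
        "Version control",
        "Human oversight mechanisms",
        "Consider Trial Master File documentation and early sponsor engagement",
    ],
    "high": [
        "Full credibility assessment per the FDA's seven-step framework",
        "Lifecycle monitoring",
        "Independent test data validation",
        "Detailed regulatory documentation (primary burden on sponsor/applicant)",
    ],
}


def checklist(tier: str) -> None:
    """Print the controls expected for a given risk tier."""
    for control in VALIDATION_CONTROLS[tier]:
        print(f"[ ] {control}")


checklist("medium")
```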
The question sites should stop being afraid to ask
Alexandra raised an important point during the session about foundational infrastructure: before you can think about AI implementation, you need to know what processes you’re trying to improve and whether your current data hygiene is good enough to support it. AI amplifies what’s already there. Good processes and clean data get better; messy workflows get messier, faster.
Kristof offered the sponsor's perspective on how AI is influencing site and country selection decisions and performance monitoring, a lens that sites would do well to understand. Sponsors are increasingly leveraging data to make more informed allocation decisions, and sites that are operating on well-validated, clean electronic systems will have a meaningful advantage in that environment.
A few practical takeaways for sites and sponsors
If your site is currently using AI for scheduling, contract review, or SOP drafting, you are almost certainly operating in the operational efficiency category: lower regulatory scrutiny, no FDA submission required. What you should have in place is an internal SOP that governs how the tool is used, basic accuracy or quality checks, and training documentation for your staff.
If your site or sponsor organization is considering AI for anything that touches participant data, endpoint assessment, or safety monitoring, the calculus changes materially. Engage early, document thoroughly, and be clear-eyed about where your AI sits on the risk matrix.
If you are a sponsor integrating AI into any part of your monitoring or oversight function, review the FDA-EMA Guiding Principles closely. The emphasis on GxP adherence, including computer system validation, change control, and audit trails, applies with full force in that context.
Closing thoughts
What struck me most about the SCRS EU session was not any single answer that came from the panel. It was the quality of the questions coming from the floor. Sites are no longer asking whether they should engage with AI. They’re asking how to do it in a way that holds up to scrutiny, protects their patients, and doesn’t create compliance landmines down the road.
That is exactly the right question, and it’s one our industry is well-positioned to answer, especially as regulatory frameworks continue to mature and the dialogue between industry and regulators deepens.
If you’d like to discuss any of the topics raised in this post, or explore how CRIO supports compliant AI integration in clinical trial operations, feel free to reach out.
Marc Wartenberger is the VP of Compliance & Security at CRIO. His extensive work with regulatory bodies, including serving as host for SCDM's regulatory town halls and participating in the SCDM Regulatory Council, has positioned him at the forefront of the dialogue between industry and regulators on the evolving topic of AI in clinical research.