The Challenge
A Health Technology Appraisal submission is one of the highest-stakes documents your organization will produce. The evidence package you submit to NICE or a comparable body will determine whether your therapy reaches patients, and at what price. Getting external review right, and early enough to act on it, is not optional. It is the difference between a submission that has been stress-tested against the agency's lens and one built entirely on internal assumptions.
The real problem is not internal coordination. It is timing and opportunity. Pre-submission meetings with HTA agencies, those rare structured interactions where external reviewers scrutinize your evidence before you formally submit, typically happen late in the development cycle. By the time that meeting takes place, critical evidence generation activities are already complete or nearly so. The feedback you receive is often valid, often difficult to hear, and almost always impossible to fully incorporate. The window to act has closed.
These meetings are also one-shot opportunities. Obtaining a pre-submission scientific advice meeting with NICE or a comparable body requires months of advance planning, extensive cross-functional preparation, and significant organizational resources. You go in with the best package you can assemble at that moment, you receive feedback, and that is it. There is no second meeting before your official submission. The single-use nature of this interaction, combined with the scale of effort required to prepare for it, creates a strategic bottleneck: most evidence packages reach the agency having received either no external input at all, or feedback that arrived too late to make a meaningful difference.
How It's Done Today
Securing a pre-submission meeting with an HTA agency requires months of preparation before a single document lands on a reviewer's desk. Teams must define the scope of the scientific advice request, prepare a detailed briefing document, align internally across medical, HEOR, market access, and regulatory functions, and submit a formal meeting request, often six to twelve months before the meeting itself. The agency sets the agenda, selects the committee members, and determines the format. You prepare extensively for a conversation you do not fully control.
In the weeks before the meeting, the preparation intensifies. Teams run internal rehearsals, anticipate likely lines of questioning, stress-test the cost-effectiveness model, and refine the narrative across all three dossiers. External advisors are often brought in. This is a significant operational undertaking that pulls senior scientific and commercial leadership away from other priorities for weeks at a time.
The meeting itself typically runs a few hours. The committee raises questions. Your team responds. The session concludes with a set of questions and recommendations for the manufacturer to consider. Your team leaves with notes and decides internally what to take forward. That ability to act, however, is constrained by where you are in the development timeline. Trials are enrolled. Model structures are set. Endpoints are locked. The questions get noted, partially addressed where timelines allow, and filed. The next opportunity to receive external agency scrutiny of your evidence is your official submission.
The AI-Enabled Approach
The Simulated Evidence Review Committee was built to dissolve the bottleneck at the core of the problem: the fact that you only get one shot at external scrutiny, and usually too late to change anything meaningful. With this application, you can run as many evidence review sessions as your team needs, at any stage of development, without scheduling a single meeting or waiting months for a slot.
The expert panel is configurable. You define the skills, perspectives, and experience profile of the reviewers that matter most for your asset and submission context. You can adjust how critically they engage with different dimensions of your evidence, dial scrutiny up or down depending on what you need at a given stage, and incorporate contextual nuances that are specific to your situation: the degree of unmet need in the therapeutic area, whether your technology is first-in-class, characteristics of the patient population, comparative evidence landscape, or payer precedent. The committee does not apply a generic lens. It applies the lens you configure.
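To make the configuration surface concrete, a panel definition along these lines can be sketched in a few dataclasses. This is purely illustrative: the class names, fields, and scrutiny scale are hypothetical, not the application's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewerProfile:
    role: str                  # e.g. "health economist"
    expertise: list            # skills and perspectives this reviewer brings
    scrutiny: int = 3          # 1 (light touch) .. 5 (adversarial), dialed per stage

@dataclass
class SessionContext:
    unmet_need: str = "moderate"       # low / moderate / high
    first_in_class: bool = False
    population_notes: str = ""
    comparator_landscape: str = ""

@dataclass
class PanelConfig:
    reviewers: list = field(default_factory=list)
    context: SessionContext = field(default_factory=SessionContext)

# A panel tuned for a first-in-class asset in a high-unmet-need area,
# with the economic model under the heaviest scrutiny.
panel = PanelConfig(
    reviewers=[
        ReviewerProfile("clinical reviewer", ["trial design"], scrutiny=4),
        ReviewerProfile("epidemiologist", ["registry data", "generalizability"]),
        ReviewerProfile("health economist", ["cost-effectiveness modelling"], scrutiny=5),
    ],
    context=SessionContext(unmet_need="high", first_in_class=True),
)
```

The point of a structure like this is that the same submission can be run against differently tuned panels at different stages: light scrutiny during early evidence planning, adversarial scrutiny before the formal submission.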
Critically, this capability is not limited to near-final submissions. You can engage the committee early in your evidence planning, before pivotal data are available, working from your anticipated evidence profile and the value propositions you intend to support. The committee will pressure-test those propositions: whether the direction you are taking is defensible, what evidence would be required to sustain it, and where the likely lines of challenge will come from. That kind of structured external perspective, applied early, is what creates the space to actually act on what you learn.
Before each session, the system independently conducts background research across ten critical domains, drawing on real literature databases and credible online sources. Disease epidemiology, treatment sequencing, NICE precedent, patient perspective evidence, clinical practice context: all of it is researched and synthesized automatically, so the committee begins its work fully informed rather than starting from scratch.
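The research step can be pictured as a simple loop over the configured domains: search each one, then synthesize a briefing note. The function names and the stub search/synthesize callables below are placeholders, not the application's real interface, and only the five domains named above are listed.

```python
RESEARCH_DOMAINS = [
    "disease epidemiology",
    "treatment sequencing",
    "NICE precedent",
    "patient perspective evidence",
    "clinical practice context",
    # ...the remaining domains are handled the same way
]

def run_background_research(domains, search, synthesize):
    """Search each domain, then synthesize a briefing note per domain."""
    briefing = {}
    for domain in domains:
        sources = search(domain)                 # query literature databases / web
        briefing[domain] = synthesize(domain, sources)
    return briefing

# Stub callables stand in for the real retrieval and synthesis steps.
notes = run_background_research(
    RESEARCH_DOMAINS,
    search=lambda d: [f"source on {d}"],
    synthesize=lambda d, s: f"summary of {len(s)} source(s) on {d}",
)
```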
A panel of specialized AI experts then convenes: a clinical reviewer, an epidemiologist, and a health economist. Each independently reviews the submission, leads a structured contribution covering rationale, evidence strengths, methodological concerns, and regulatory alignment, and then actively challenges the other experts' assessments. A dedicated moderator manages the discussion flow, ensures every critical dimension receives sufficient depth, and prevents both premature closure and circular debate.
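The moderator's role described above can be sketched as a control loop: experts contribute in rounds, the moderator tracks which review dimensions have been covered, closes the session once coverage is complete, and also stops when a round produces nothing new (circular debate). Everything here is a hypothetical simplification of that behavior, not the application's implementation.

```python
DIMENSIONS = ["rationale", "evidence strengths",
              "methodological concerns", "regulatory alignment"]

def run_committee(experts, contribute, max_rounds=6):
    """Round-robin discussion moderated for coverage and progress."""
    transcript, covered = [], set()
    for round_no in range(max_rounds):
        progressed = False
        for expert in experts:
            entry = contribute(expert, round_no)      # expert speaks / challenges peers
            transcript.append((expert, entry["text"]))
            new = set(entry["dimensions"]) - covered
            if new:
                covered |= new
                progressed = True
        if covered >= set(DIMENSIONS):
            break    # every dimension has received sufficient depth
        if not progressed:
            break    # no new ground this round: moderator closes the debate
    return transcript, covered

experts = ["clinical reviewer", "epidemiologist", "health economist"]

def stub_contribute(expert, round_no):
    # Stub: each expert addresses one dimension per round.
    idx = (experts.index(expert) + round_no) % len(DIMENSIONS)
    return {"text": f"{expert} on {DIMENSIONS[idx]}",
            "dimensions": [DIMENSIONS[idx]]}

transcript, covered = run_committee(experts, stub_contribute)
```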
What you receive at the end is a complete, professional committee report: an executive summary, section-by-section analysis of the submission, a structured set of questions for the manufacturer, and a full discussion transcript with every expert contribution and moderator decision preserved. The entire process, from document upload to final report, takes 10 to 15 minutes. The output is ready for internal decision-making, regulatory preparation, or submission defense.
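The shape of that deliverable can be summarized in a single structure mirroring the four sections listed above. The field names are hypothetical, chosen only to make the report's contents explicit.

```python
from dataclasses import dataclass

@dataclass
class CommitteeReport:
    executive_summary: str
    section_analysis: dict            # submission section -> committee findings
    questions_for_manufacturer: list  # structured questions, as in a real advice letter
    transcript: list                  # (speaker, contribution) pairs, in order

report = CommitteeReport(
    executive_summary="Overall assessment ...",
    section_analysis={"cost-effectiveness model": "structural assumptions challenged"},
    questions_for_manufacturer=["Justify the chosen comparator."],
    transcript=[("moderator", "Opening the session.")],
)
```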
What It Means for You
- What once required six to nine months of preparation, coordination, and follow-up now completes in 10 to 15 minutes, giving your team a full committee-quality review before a single scheduling email is sent.
- Automated background research across ten domains eliminates the duplicated literature synthesis burden that currently falls on every expert reviewer before each submission cycle.
- Structured peer challenge between a clinical reviewer, epidemiologist, and health economist surfaces methodological flaws and evidence gaps that single-pass or siloed reviews routinely miss.
- Every expert contribution, moderator decision, and committee conclusion is captured in a full discussion transcript, giving you an auditable reasoning trail you can use to defend positions to NICE or internal stakeholders.
- Every submission receives consistent coverage of all critical review dimensions, including study design quality, population generalizability, cost model assumptions, and NICE alignment, regardless of who is available on your team.
- Smaller biotech and medtech teams can now conduct pre-submission reviews with the same rigor and depth as large pharma organizations, without the headcount or coordination overhead.
The Simulated Evidence Review Committee gives your team the expert scrutiny your submission deserves, at the speed your submission timeline demands.
▶ See It in Action
Watch the demo to explore the full Simulated Evidence Review Committee workflow.