
Item 0: Background & Action Plan

This is not a formal BSI question item. It is the foundational context layer for the entire Round 1 Clinical Review — the output of a BSI clarification meeting held on 2026-03-25 and two internal follow-up sessions. Read this before working on any of the seven formal items.

Full background, meeting insights, and regulatory strategy are documented in CLAUDE.md in this folder.


Item 0 status

Full action plan

All action items identified from the BSI meeting (2026-03-25) and the internal debriefs (2026-03-25, 2026-03-26), grouped by formal BSI item.

Cross-cutting — must happen before or alongside all items

These actions affect every formal item. The CER rewrite strategy and regulatory framing must be settled before detailed work on individual items begins.

| # | Action | Owner | Prerequisite for | Status |
|---|--------|-------|------------------|--------|
| X-1 | Settle which MDCG sections and MEDDEV stages apply to each CER section. Jordi reviews MDCG 2020-6 and sends section highlights to Taig via Slack. | Jordi | All items | To do |
| X-2 | Document (with AI assistance) the guidance frameworks: what MDCG and MEDDEV define, which sections apply, and what the MEDDEV stages (0–5) require at each step. | Taig | All items | To do |
| X-3 | Resolve the disease categorization approach. If evidence is structured by disease category (malignant, inflammatory, etc.), this implicitly accepts that device use is disease-specific — which conflicts with the intended-use framing (full ICD-11 distribution). This tension must be resolved with a clear team decision before Item 2 and Item 3 can be written. | Team | Item 2, Item 3 | To do |

Item 2a — Device description & intended purpose

The CER device description is currently not stand-alone. BSI reviewers evaluate it in isolation. Everything about the device must be readable within the CER without cross-referencing the IFU or other documents.

| # | Action | Owner | Status |
|---|--------|-------|--------|
| 2a-1 | Rewrite the device description section of the CER to be fully self-contained. Include: what the device is, what it does, what it does not do, what regulatory category it is, and how it fits into clinical practice. | Taig | In progress |
| 2a-2 | Add the "pin" explanation: the device always outputs a probability distribution over all visible ICD-11 classes. It never produces a binary positive/negative for any specific condition. This is the architectural basis for the clinical evaluation approach. | Taig | To do |
| 2a-3 | Clarify the interface distinction: the API outputs all ICD-11 codes, but the interface that physicians actually see shows a limited, prioritized probability view — not the full list. The CER must accurately describe both layers (API and interface) because the physician's actual experience is of the interface, not the raw API output. | Taig | To do |
| 2a-4 | Add a "how to read this CER" guide at the start of the document. Explain the table structure, what each column means, and how to navigate from a benefit claim to its supporting evidence. BSI reviewers should not have to reverse-engineer the logic. | Taig | To do |
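The two-layer output described in 2a-2 and 2a-3 can be sketched in code. This is an illustrative sketch only: the class labels, probabilities, top-k cut-off, and function name below are invented for illustration, not the device's actual API or real ICD-11 codes.

```python
# Illustrative sketch of the two output layers (2a-2 and 2a-3).
# Labels, probabilities, and the top-k cut-off are invented placeholders.

# API layer: a probability distribution over all supported ICD-11 classes.
# It always sums to 1 and never collapses to a binary positive/negative.
api_output = {
    "icd11_class_a": 0.52,
    "icd11_class_b": 0.27,
    "icd11_class_c": 0.12,
    "icd11_class_d": 0.06,
    "icd11_class_e": 0.03,
}
assert abs(sum(api_output.values()) - 1.0) < 1e-9

def interface_view(distribution, top_k=3):
    """Interface layer: the limited, prioritized view physicians actually see."""
    ranked = sorted(distribution.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]

top = interface_view(api_output)
# top -> [("icd11_class_a", 0.52), ("icd11_class_b", 0.27), ("icd11_class_c", 0.12)]
```

The point the CER must make is exactly this separation: the full distribution is the device output, and the prioritized view is a presentation layer on top of it.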

Item 2b — Clinical benefits, performance & safety vs SotA

The core structure of this section is correct but the reasoning chain is invisible. BSI cannot trace how acceptance criteria were derived from state of the art, and the volume of claims/metrics makes it unauditable.

| # | Action | Owner | Status |
|---|--------|-------|--------|
| 2b-1 | Simplify and consolidate the 7 clinical benefits. Some are near-duplicates (e.g., two benefits that differ only by the word "remotely"). Consolidating does not mean losing specificity — it means reducing noise so the core claims are legible. | Jordi | Needs rework |
| 2b-2 | Establish acceptance criteria for each benefit with explicit traceability to state of the art. The chain must be visible: comparable devices in the literature → what performance they achieve → why a threshold is appropriate for this device. | Jordi | Needs rework |
| 2b-3 | For high-risk conditions (melanoma, malignancies): provide an individual breakdown of acceptance criteria and data meeting those criteria. BSI will specifically audit these. For lower-risk categories, pooling with justification is acceptable. | Jordi | To do |
| 2b-4 | Replace the current means-of-measure / magnitude-of-benefit table with something readable. The current version has so many entries (top-1 accuracy, top-1 accuracy per HCP tier, etc.) that Erin described it as "an endless list." Each claim must have one measurable outcome, a threshold, and evidence. | Jordi | Needs rework |
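The one-claim / one-metric / one-threshold structure that 2b-2 and 2b-4 call for can be sketched as a record type. Every field value below is an invented placeholder, not an actual claim or threshold from the CER.

```python
from dataclasses import dataclass

@dataclass
class BenefitClaim:
    """One row of the simplified benefit table: a single measurable outcome,
    a threshold traceable to state of the art, and the supporting evidence."""
    claim: str
    metric: str       # exactly one measurable outcome per claim
    threshold: float  # acceptance criterion
    sota_basis: str   # comparable devices / literature the threshold derives from
    evidence: str     # study or dataset meeting the criterion

def is_traceable(c: BenefitClaim) -> bool:
    # The chain BSI must be able to follow: SotA -> threshold -> evidence.
    return all([c.claim, c.metric, c.sota_basis, c.evidence])

example = BenefitClaim(
    claim="placeholder benefit claim",
    metric="top-1 accuracy",
    threshold=0.80,  # placeholder value
    sota_basis="placeholder literature reference",
    evidence="placeholder study reference",
)
assert is_traceable(example)
```

A row that fails this check is exactly the kind of entry 2b-4 says should not survive the rework.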

Item 3a — Clinical data analysis

The clinical data exists. The problem is that it is not presented as an analysis — it is presented as a list. BSI needs to see the reasoning: what was done in each study, what its limitations were, what it proves, and how it feeds the acceptance criteria.

| # | Action | Owner | Status |
|---|--------|-------|--------|
| 3a-1 | Add a prose validation strategy narrative at the start of Item 3. This should use the exact terminology of the MDR and MDCG (e.g., "clinical data", "clinical evidence", "clinical benefit", "sufficient clinical evidence"). It should explain the priority hierarchy of evidence types (prospective investigations first, then retrospective, then PMS) and why the combination used is appropriate. | Jordi | Needs rework |
| 3a-2 | Reframe the MRMC (multi-reader, multi-case) studies. Nick stated explicitly that MRMC studies using simulated scenarios are not clinical data — they do not reflect the device in its intended environment (real patients, real clinical settings). MRMC studies must be presented as supporting/corroborating evidence only. Primary clinical evidence must come from real-world studies. | Jordi | Needs rework |
| 3a-3 | Clarify the equivalence argument with the Legacy device in detail. The melanoma study was conducted with Legacy. There is a section in the CER explaining this is the same device with no clinically relevant changes. That section needs to be expanded: provide a list of all changes since Legacy, assess whether each change is clinically relevant, and justify why PMS data and studies from Legacy still apply. | Jordi | Needs rework |
| 3a-4 | For every referenced paper and study: add an analysis entry — not just a citation. What was the methodology? What were the limitations? What devices or populations were included? What did it conclude relevant to this device? Was the version used the Legacy or the current device? | Jordi | To do |
| 3a-5 | Provide all referenced papers as a zip file with a numbered or author+year reference list. BSI cannot follow hyperlinks in PDFs. | Jordi | To do |
| 3a-6 | Use the two real-world studies (including the one with Sakidecha) as primary clinical evidence. These are the closest thing to real-world evidence currently available. They should be prominently positioned, with full analysis of their methodology and findings. | Jordi | To do |

Item 3b — Data sufficiency justification

BSI cannot tell whether the data is sufficient because we have not explained why it is sufficient. This is different from having the data — it requires explicit argumentation.

| # | Action | Owner | Status |
|---|--------|-------|--------|
| 3b-1 | For every evidence set, justify why the sample size is sufficient. Why are 80 cases enough for the melanoma claim? Why is limited paediatric evidence (approximately 20 cases) acceptable? The justification should reference statistical power calculations or established methodological guidance where appropriate. | Jordi | Needs rework |
| 3b-2 | Incorporate 4 years of physician survey data as real-world evidence. Existing surveys should be reviewed and — if they do not already include per-benefit validation questions — new questions should be proposed for future surveys. These surveys are direct evidence from physicians that the clinical benefits have been achieved in real hospital settings. | Jordi | To do |
| 3b-3 | Mine Legacy device market data (PMS, adverse event records, hospital outcome data) as real-world evidence of end-to-end clinical impact. Nick specifically asked for evidence that "when a hospital uses the device, outcomes improve compared to the same hospital before, or compared to what the literature shows." | Jordi | To do |
| 3b-4 | Assess whether the fototipos (skin phototype) study dataset (already prepared per Jordi's March 26 note) can contribute as additional supporting evidence for any specific claims. | Jordi | To do |
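The sample-size justifications in 3b-1 can lean on simple interval arguments. As one possible approach (not necessarily the method the CER will adopt), a Wilson score confidence interval shows what precision n = 80 cases actually buys; the 74-of-80 figure below is invented for illustration.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion, e.g. observed sensitivity."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Invented example: 74 of 80 melanoma cases correctly identified (sensitivity 0.925).
lo, hi = wilson_ci(74, 80)
# The resulting interval (roughly 0.85 to 0.97) is the precision an n = 80
# justification has to defend against the acceptance threshold.
assert 0.84 < lo < 0.86 and 0.96 < hi < 0.97
```

The same calculation makes the paediatric question concrete: at n ≈ 20 the interval widens substantially, which is exactly the limitation the sufficiency argument must acknowledge and justify.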

Item 4 — Usability

The existing answer is likely close to correct. The main gap is traceability.

| # | Action | Owner | Status |
|---|--------|-------|--------|
| 4-1 | Add explicit traceability from the usability claims in the CER to the summative usability study results. BSI needs to be able to verify that the study met its acceptance criteria. A direct reference to the study, its methodology, and its results is required. | Taig | Needs rework |

Item 5 — PMS plan

Saray described this as the simpler item. The gap is procedural traceability.

| # | Action | Owner | Status |
|---|--------|-------|--------|
| 5-1 | Update the PMS plan to reference the relevant SOPs and procedures that govern each surveillance activity. BSI needs to be able to trace from the plan to the actual process. Provide the SOP identifiers and confirm each is in the QMS. | Saray | Needs rework |

Item 6 — PMCF plan

This item requires a two-step fix in a strict order. The CER must first declare acceptable gaps before the PMCF plan can logically flow from those gaps. Attempting to fix the PMCF plan before the CER gaps are declared will result in an incoherent structure.

| # | Action | Owner | Prerequisite | Status |
|---|--------|-------|--------------|--------|
| 6-0 | In the CER: for each identified evidence gap, explicitly state that the gap is acceptable — that despite the gap, the available evidence is still sufficient for validation — and explain why. This is a logical prerequisite for the PMCF plan. | Jordi | None — must happen first | To do |
| 6a-1 | Link every PMCF activity to a specific acceptable gap declared in the CER (per action 6-0). If an activity cannot be linked to a specific gap, it should be removed. | Jordi | 6-0 | Needs rework |
| 6a-2 | Add full methodology detail to each retained PMCF activity: sample size, acceptance criteria, start date, expected duration, and contingency plan for insufficient participation or response rates. | Jordi | 6-0 | Needs rework |
| 6a-3 | Remove PMCF activities that are not directly relevant to closing identified gaps. Erin described the current volume as "overwhelming." Fewer, better-described activities are more convincing than many vague ones. | Jordi | 6-0 | To do |
| 6b-1 | Add a justification explaining how the retained PMCF activities, combined with PMS, will together provide sufficient proactive data to cover the safety and performance of the device over its lifetime — as required by Annex XIV Part B. | Jordi | 6a-1, 6a-2 | Needs rework |
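The 6-0 → 6a-1 ordering amounts to a referential-integrity rule: every retained PMCF activity must point at a gap the CER has already declared acceptable. A minimal sketch, with gap and activity identifiers invented for illustration:

```python
# Invented identifiers throughout; the real gaps and activities live in the CER
# and the PMCF plan.
declared_gaps = {"GAP-1", "GAP-2"}  # gaps the CER declares acceptable (action 6-0)

activities = [
    {"id": "PMCF-01", "gap": "GAP-1", "sample_size": 200,
     "acceptance": "placeholder criterion", "start": "2026-Q4"},
    {"id": "PMCF-02", "gap": "GAP-2", "sample_size": 150,
     "acceptance": "placeholder criterion", "start": "2027-Q1"},
    {"id": "PMCF-03", "gap": None, "sample_size": None,
     "acceptance": None, "start": None},  # not linked to any declared gap
]

# Action 6a-1: activities that cannot be linked to a declared gap are removed.
retained = [a for a in activities if a["gap"] in declared_gaps]
assert [a["id"] for a in retained] == ["PMCF-01", "PMCF-02"]

# Action 6a-2: every retained activity must carry full methodology detail.
assert all(a["sample_size"] and a["acceptance"] and a["start"] for a in retained)
```

Running the check before the gaps exist is meaningless — which is the strict-ordering point this item makes.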

Item 7 — Risk

This item has not been started. The main concern is traceability of occurrence rates, and a possible mismatch between the risk document BSI reviewed and the correct one.

| # | Action | Owner | Status |
|---|--------|-------|--------|
| 7-1 | For each risk line where an occurrence rate is stated: add explicit traceability to the source of that rate — which PMS record, which clinical investigation, or which literature reference was used. BSI needs to be able to verify that rates were appropriately pulled from real data. | Alfonso | To do |
| 7-2 | If the correct risk document is not the one BSI reviewed: provide a clear cross-reference in the response. State explicitly which document contains the relevant risk lines, why it is the applicable document, and identify which specific lines correspond to the ones BSI queried. | Alfonso | To do |

Dependencies and critical path

The following actions block others and should be prioritised:

  1. X-1 and X-2 (guidance framework documentation) — unblock all CER rewriting work
  2. X-3 (disease categorization decision) — unblocks Items 2b and 3a
  3. 3a-1 (prose validation strategy) — structural backbone of Item 3; everything else in Item 3 hangs off this
  4. 6-0 (declare acceptable gaps in CER) — strict prerequisite for all Item 6 PMCF rework
  5. 2a-1 through 2a-4 (device description and "pin" explanation) — foundation of Item 2; the acceptance criteria work in 2b depends on the intended-use framing being settled first
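The critical path above is a dependency graph, and a topological sort makes the required ordering explicit. The graph below encodes only prerequisites stated in this plan, using a representative subset of actions for brevity:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# action -> its prerequisites, as stated in the plan (representative subset)
prereqs = {
    "X-1": set(), "X-2": set(), "X-3": set(),
    "2a-1": {"X-1", "X-2"},          # guidance framework unblocks CER rewriting
    "2b-2": {"X-1", "X-2", "X-3"},   # X-3 unblocks Item 2b
    "3a-1": {"X-1", "X-2", "X-3"},   # X-3 unblocks Item 3a
    "6-0": set(),
    "6a-1": {"6-0"},
    "6a-2": {"6-0"},
    "6b-1": {"6a-1", "6a-2"},
}

order = list(TopologicalSorter(prereqs).static_order())
# Every action appears after all of its prerequisites.
assert order.index("6-0") < order.index("6a-1") < order.index("6b-1")
assert order.index("X-3") < order.index("3a-1")
```

Any work schedule that violates this ordering (for example, reworking PMCF activities before 6-0 declares the gaps) produces exactly the incoherence Item 6 warns about.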
All the information contained in this QMS is confidential. The recipient agrees not to transmit or reproduce the information, neither by himself nor by third parties, through whichever means, without obtaining the prior written permission of Legit.Health (AI Labs Group S.L.)