Item 0: Background & Action Plan
This is not a formal BSI question item. It is the foundational context layer for the entire Round 1 Clinical Review — the output of a BSI clarification meeting held on 2026-03-25 and two internal follow-up sessions. Read this before working on any of the seven formal items.
Full background, meeting insights, and regulatory strategy are documented in CLAUDE.md in this folder.
Item 0 status
Full action plan
All action items identified from the BSI meeting (2026-03-25) and the internal debriefs (2026-03-25, 2026-03-26), grouped by formal BSI item.
Cross-cutting — must happen before or alongside all items
These actions affect every formal item. The CER rewrite strategy and regulatory framing must be settled before detailed work on individual items begins.
| # | Action | Owner | Prerequisite for | Status |
|---|---|---|---|---|
| X-1 | Settle which MDCG sections and MEDDEV stages apply to each CER section. Jordi reviews MDCG 2020-6 and sends section highlights to Taig via Slack. | Jordi | All items | To do |
| X-2 | AI-document the guidance frameworks: what MDCG and MEDDEV define, which sections apply, what the MEDDEV stages (0–5) require at each step. | Taig | All items | To do |
| X-3 | Resolve the disease categorization approach. If evidence is structured by disease category (malignant, inflammatory, etc.), this implicitly accepts that the device use is disease-specific — which conflicts with the intended-use framing (full ICD-11 distribution). This tension must be resolved with a clear team decision before Item 2 and Item 3 can be written. | Team | Item 2, Item 3 | To do |
Item 2a — Device description & intended purpose
The CER's device description is currently not stand-alone, but BSI reviewers evaluate the CER in isolation: everything about the device must be readable within the CER itself, without cross-referencing the IFU or other documents.
| # | Action | Owner | Status |
|---|---|---|---|
| 2a-1 | Rewrite device description section of CER to be fully self-contained. Include: what the device is, what it does, what it does not do, what regulatory category it is, and how it fits into clinical practice. | Taig | In progress |
| 2a-2 | Add the "pin" explanation: the device always outputs a probability distribution over all visible ICD-11 classes. It never produces a binary positive/negative for any specific condition. This is the architectural basis for the clinical evaluation approach. | Taig | To do |
| 2a-3 | Clarify the interface distinction: the API outputs all ICD-11 codes, but the interface that physicians actually see shows a limited, prioritized probability view — not the full list. The CER must accurately describe both layers (API and interface) because the physician's actual experience is of the interface, not the raw API output. | Taig | To do |
| 2a-4 | Add a "how to read this CER" guide at the start of the document. Explain the table structure, what each column means, and how to navigate from the benefit claim to the supporting evidence. BSI reviewers should not have to reverse-engineer the logic. | Taig | To do |
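The two output layers described in 2a-2 and 2a-3 can be sketched in miniature. This is an illustrative assumption only: the function names, the ICD-11 codes, the softmax scoring, and the top-k cutoff are invented for the sketch and are not the real API or interface behaviour.

```python
import math

# Illustrative sketch of the two layers in 2a-2/2a-3.
# All names, codes, and values here are assumptions, not the real API.

def api_output(scores: dict[str, float]) -> dict[str, float]:
    """API layer: a full probability distribution over all visible
    ICD-11 classes -- never a binary positive/negative for any condition."""
    z = sum(math.exp(v) for v in scores.values())
    return {code: math.exp(v) / z for code, v in scores.items()}

def interface_view(dist: dict[str, float], k: int = 3) -> list[tuple[str, float]]:
    """Interface layer: the limited, prioritised view physicians actually
    see -- the top-k classes by probability, not the full list."""
    return sorted(dist.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Hypothetical model scores for three ICD-11 codes
dist = api_output({"2C30": 2.0, "EA80": 1.0, "EK90": 0.1})
top = interface_view(dist, k=2)
```

The point the CER must make is exactly this split: the API always emits the full distribution (the "pin"), while the physician's experience is the prioritised `interface_view`, so both layers need accurate description.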
Item 2b — Clinical benefits, performance & safety vs SotA
The core structure of this section is correct but the reasoning chain is invisible. BSI cannot trace how acceptance criteria were derived from state of the art, and the volume of claims/metrics makes it unauditable.
| # | Action | Owner | Status |
|---|---|---|---|
| 2b-1 | Simplify and consolidate the 7 clinical benefits. Some are near-duplicates (e.g., two benefits that differ only by the word "remotely"). Consolidating does not mean losing specificity — it means reducing noise so the core claims are legible. | Jordi | Needs rework |
| 2b-2 | Establish acceptance criteria for each benefit with explicit traceability to state of the art. The chain must be visible: comparable devices in the literature → what performance they achieve → why a threshold is appropriate for this device. | Jordi | Needs rework |
| 2b-3 | For high-risk conditions (melanoma, malignancies): provide an individual breakdown of acceptance criteria and data meeting those criteria. BSI will specifically audit these. For lower-risk categories, pooling with justification is acceptable. | Jordi | To do |
| 2b-4 | Replace the current means-of-measure / magnitude-of-benefit table with something readable. The current version has so many entries (top-1 accuracy, top-1 accuracy per HCP tier, etc.) that it is described by Erin as "an endless list." Each claim must have one measurable outcome, a threshold, and evidence. | Jordi | Needs rework |
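The target structure in 2b-2 and 2b-4 — each consolidated benefit carrying exactly one measurable outcome, one threshold traced to state of the art, and its supporting evidence — can be sketched as a data shape. Field names and the example values are hypothetical, not the actual claims or thresholds.

```python
from dataclasses import dataclass, field

# Sketch of the consolidated claim structure proposed in 2b-2/2b-4.
# Field names and example values are illustrative assumptions.

@dataclass
class ClinicalBenefitClaim:
    benefit: str            # the consolidated benefit statement
    outcome_metric: str     # exactly one measurable outcome
    threshold: float        # the acceptance criterion
    sota_rationale: str     # why this threshold, traced to state of the art
    evidence_refs: list[str] = field(default_factory=list)

    def is_auditable(self) -> bool:
        """The chain BSI must be able to trace: outcome -> threshold
        rationale -> evidence. Missing any link breaks auditability."""
        return bool(self.outcome_metric and self.sota_rationale and self.evidence_refs)

claim = ClinicalBenefitClaim(
    benefit="Supports earlier referral of suspicious lesions",
    outcome_metric="top-3 sensitivity for malignant categories",
    threshold=0.90,
    sota_rationale="Comparable triage devices report 0.88-0.93 (hypothetical)",
    evidence_refs=["Study-RW-01", "Ref-12"],
)
```

One row of the rewritten table would correspond to one such record; a row whose `is_auditable()` check fails is a row the current "endless list" version would have let through.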
Item 3a — Clinical data analysis
The clinical data exists. The problem is that it is not presented as an analysis — it is presented as a list. BSI needs to see the reasoning: what was done in each study, what its limitations were, what it proves, and how it feeds the acceptance criteria.
| # | Action | Owner | Status |
|---|---|---|---|
| 3a-1 | Add a prose validation strategy narrative at the start of Item 3. This should use the exact terminology of the MDR and MDCG (e.g., "clinical data", "clinical evidence", "clinical benefit", "sufficient clinical evidence"). It should explain the priority hierarchy of evidence types (prospective investigations first, then retrospective, then PMS), and why the combination used is appropriate. | Jordi | Needs rework |
| 3a-2 | Reframe the MRMC (multi-reader, multi-case) studies. Nick stated explicitly that MRMC studies using simulated scenarios are not clinical data — they do not reflect the device in its intended environment (real patients, real clinical settings). MRMC studies must be presented as supporting/corroborating evidence only. Primary clinical evidence must come from real-world studies. | Jordi | Needs rework |
| 3a-3 | Clarify the equivalence argument with the Legacy device in detail. The melanoma study was conducted with Legacy. There is a section in the CER explaining this is the same device with no clinically relevant changes. This section needs to be expanded: provide a list of all changes since Legacy, assess whether each change is clinically relevant, and justify why PMS data and studies from Legacy still apply. | Jordi | Needs rework |
| 3a-4 | For every referenced paper and study: add an analysis entry — not just a citation. What was the methodology? What were the limitations? What devices or populations were included? What did it conclude relevant to this device? Was the version used the Legacy or the current device? | Jordi | To do |
| 3a-5 | Provide all referenced papers as a zip file with a numbered or author+year reference list. BSI cannot follow hyperlinks in PDFs. | Jordi | To do |
| 3a-6 | Use the two real-world studies (including the one with Sakidecha) as primary clinical evidence. These are the closest thing to real-world evidence currently available. They should be prominently positioned, with full analysis of their methodology and findings. | Jordi | To do |
Item 3b — Data sufficiency justification
BSI cannot tell whether the data is sufficient because we have not explained why it is sufficient. This is different from having the data — it requires explicit argumentation.
| # | Action | Owner | Status |
|---|---|---|---|
| 3b-1 | For every evidence set, justify why the sample size is sufficient. Why are 80 cases enough for the melanoma claim? Why is limited paediatric evidence (approximately 20 cases) acceptable? The justification should reference statistical power calculations or established methodological guidance where appropriate. | Jordi | Needs rework |
| 3b-2 | Incorporate 4 years of physician survey data as real-world evidence. Existing surveys should be reviewed and — if they do not already include per-benefit validation questions — new questions should be proposed for future surveys. These surveys are direct evidence from physicians that the clinical benefits have been achieved in real hospital settings. | Jordi | To do |
| 3b-3 | Mine Legacy device market data (PMS, adverse event records, hospital outcome data) as real-world evidence of end-to-end clinical impact. Nick specifically asked for evidence that "when a hospital uses the device, outcomes improve compared to the same hospital before, or compared to what the literature shows." | Jordi | To do |
| 3b-4 | Assess whether the fototipos (skin phototype) study dataset (already prepared per Jordi's March 26 note) can contribute as additional supporting evidence for any specific claims. | Jordi | To do |
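The sample-size justification asked for in 3b-1 can lean on a standard confidence-interval width calculation. A minimal sketch using the normal approximation — the 0.90 sensitivity and 0.07 margin are illustrative assumptions, not the actual acceptance criteria, and an exact binomial method may be preferable at small n:

```python
import math

def n_for_proportion_ci(p: float, margin: float, z: float = 1.96) -> int:
    """Smallest n so the 95% normal-approximation CI for an observed
    proportion p has half-width <= margin: n = z^2 * p*(1-p) / margin^2."""
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

# Illustrative: if expected melanoma sensitivity is around 0.90 and a
# +/-7 percentage-point CI half-width is acceptable, roughly 71 cases
# suffice -- the shape of argument that would support "80 cases are
# enough for the melanoma claim".
n = n_for_proportion_ci(p=0.90, margin=0.07)
```

The same calculation run in reverse (what margin do ~20 paediatric cases actually buy?) is one honest way to frame the limited-paediatric-evidence justification rather than asserting sufficiency outright.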
Item 4 — Usability
The existing answer is likely close to correct. The main gap is traceability.
| # | Action | Owner | Status |
|---|---|---|---|
| 4-1 | Add explicit traceability from the usability claims in the CER to the summative usability study results. BSI needs to be able to verify that the study met its acceptance criteria. A direct reference to the study, its methodology, and its results is required. | Taig | Needs rework |
Item 5 — PMS plan
Saray described this as the simplest of the items. The gap is procedural traceability.
| # | Action | Owner | Status |
|---|---|---|---|
| 5-1 | Update the PMS plan to reference the relevant SOPs and procedures that govern each surveillance activity. BSI needs to be able to trace from the plan to the actual process. Provide the SOP identifiers and confirm each is in the QMS. | Saray | Needs rework |
Item 6 — PMCF plan
This item requires a two-step fix in a strict order. The CER must first declare acceptable gaps before the PMCF plan can logically flow from those gaps. Attempting to fix the PMCF plan before the CER gaps are declared will result in an incoherent structure.
| # | Action | Owner | Prerequisite | Status |
|---|---|---|---|---|
| 6-0 | In the CER: for each identified evidence gap, explicitly state that the gap is acceptable — that despite the gap, the available evidence is still sufficient for validation — and explain why. This is a logical prerequisite for the PMCF plan. | Jordi | None — must happen first | To do |
| 6a-1 | Link every PMCF activity to a specific acceptable gap declared in the CER (per action 6-0). If an activity cannot be linked to a specific gap, it should be removed. | Jordi | 6-0 | Needs rework |
| 6a-2 | Add full methodology detail to each retained PMCF activity: sample size, acceptance criteria, start date, expected duration, and contingency plan for insufficient participation or response rates. | Jordi | 6-0 | Needs rework |
| 6a-3 | Remove PMCF activities that are not directly relevant to closing identified gaps. The current volume is described by Erin as "overwhelming." Fewer, better-described activities are more convincing than many vague ones. | Jordi | 6-0 | To do |
| 6b-1 | Add a justification explaining how the retained PMCF activities, combined with PMS, will together provide sufficient proactive data to cover the safety and performance of the device over its lifetime — as required by Annex XIV Part 5. | Jordi | 6a-1, 6a-2 | Needs rework |
Item 7 — Risk
This item has not been started. The main concern is traceability of occurrence rates, and a possible mismatch between the risk document BSI reviewed and the correct one.
| # | Action | Owner | Status |
|---|---|---|---|
| 7-1 | For each risk line where an occurrence rate is stated: add explicit traceability to the source of that rate — which PMS record, which clinical investigation, or which literature reference was used. BSI needs to be able to verify that rates were appropriately pulled from real data. | Alfonso | To do |
| 7-2 | If the correct risk document is not the one BSI reviewed: provide a clear cross-reference in the response. State explicitly which document contains the relevant risk lines, why it is the applicable document, and identify which specific lines correspond to the ones BSI queried. | Alfonso | To do |
Dependencies and critical path
The following actions block others and should be prioritised:
- X-1 and X-2 (guidance framework documentation) — unblock all CER rewriting work
- X-3 (disease categorization decision) — unblocks Items 2b and 3a
- 3a-1 (prose validation strategy) — structural backbone of Item 3; everything else in Item 3 hangs off this
- 6-0 (declare acceptable gaps in CER) — strict prerequisite for all Item 6 PMCF rework
- 2a-1 through 2a-4 (device description and "pin" explanation) — foundation of Item 2; the acceptance criteria work in 2b depends on the intended-use framing being settled first