
Item 0: Background & Action Plan

This is not a formal BSI question item. It is the foundational context layer for the entire Round 1 Clinical Review, the output of a BSI clarification meeting held on 2026-03-25 and two internal follow-up sessions.

Read this before working on any item

If you are working on any of the seven formal items (Item 1 through Item 7) without having read this page, you are missing critical strategic context that changes how every item must be approached. The BSI meeting revealed expectations and positions that are not written in the formal non-conformity documents.


BSI clarification meeting: 2026-03-25​

On 2026-03-25 at 15:12, a clarification call was held with BSI to discuss the clinical review non-conformities. This was followed by two internal debrief sessions: one the same afternoon (16:22) and one the following morning (2026-03-26 at 11:44). The transcripts of all three meetings are stored in this folder under _resources/.

Attendees​

BSI side:

| Name | Role | Notes |
| --- | --- | --- |
| Erin Preiss (erin.preiss@bsigroup.com) | Clinical Evaluation Specialist | Meeting organiser. The assigned clinical reviewer. Had reviewed the documents before the call. |
| Simon Lidgate (simon.lidgate@bsigroup.com) | Unknown / senior BSI staff | Listed as attendee. Did not speak prominently in the transcript. |
| Nick (surname not confirmed) | Senior clinical reviewer | Not on the calendar invite; joined the call. Gave the most direct and candid feedback. Referenced MEDDEV 2.7.1 Rev 4 explicitly. Issued the warning about likely refusal. |

Our side (from calendar invite):

| Name | Role |
| --- | --- |
| Saray Ugidos | Regulatory / clinical; attended, spoke |
| Alfonso Medela | Attended |
| Jordi Barrachina | Attended, spoke |
| Taig Mac Carthy | Attended, led discussion |

March 26 internal meeting additional attendee:

| Name | Notes |
| --- | --- |
| Ignacio Hernández (ignaciohernandez@legit.health) | Invited to the March 26 internal meeting. Did not speak in the recorded transcript. |

External contacts mentioned:

  • Arancha: an external contact (likely a consultant) whom Saray was planning to call in connection with the clinical review work. Her identity and role are not fully specified in the transcripts.

What BSI told us: the full picture​

1. The situation is very serious​

Danger: Nick stated explicitly that refusal is extremely likely if the identified gaps are not closed. Every item must be treated as if it directly determines whether we get CE marking.

Nick stated explicitly and candidly that:

  • This application has been open for a long time and was previously close to being refused.
  • As of 2026, BSI has a new policy: non-conformities in clinical review can only be closed out via one small round of clarification. There is no indefinite back-and-forth.
  • He raised the option of voluntarily withdrawing the application, getting proper expert help, and resubmitting from scratch. This is not a rhetorical suggestion; he genuinely presented it as a viable alternative.
  • The team declined withdrawal because the device is on the market under the Article 120 (MDR) transitional provisions as a legacy MDD device, so transitioning to MDR is mandatory. Withdrawal is not a real option.

2. The core problem: lack of traceability and a broken storyline​

Both Erin and Nick described the same fundamental issue from different angles. The documents do not tell a coherent, traceable story. Specifically:

  • A reviewer cannot follow the chain: state of the art → clinical benefit definition → performance acceptance criteria → clinical data → how data meets criteria.
  • Statements are made without it being clear how they are supported. This is not just a formatting problem; BSI cannot tell whether the data is actually sufficient because the reasoning is not visible.
  • Nick said: "We're not even sure what we're seeing at this point because of the way it's being presented."
  • Erin described it as: "It's not clear if you make a statement how that statement is supported — and that traceability element is usually what is lacking."

This is the #1 thing to fix across all items. Every claim, every acceptance criterion, every data table must have a visible chain of reasoning.
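One way to make that chain auditable is to capture every claim as a single record whose fields mirror the chain above. A minimal sketch; all field names and example values below are hypothetical illustrations, not content from the actual CER:

```python
# Hypothetical sketch of a per-claim traceability record.
# Every field name and value here is illustrative, not from the CER.
from dataclasses import dataclass, field

@dataclass
class ClaimTrace:
    sota_reference: str        # state-of-the-art source establishing the benchmark
    clinical_benefit: str      # benefit statement derived from the state of the art
    acceptance_criterion: str  # quantitative criterion justified by the SotA
    supporting_data: list[str] = field(default_factory=list)  # studies / PMS sources
    conclusion: str = ""       # how the data meets the criterion

example = ClaimTrace(
    sota_reference="Illustrative SotA paper: clinician top-3 accuracy benchmark",
    clinical_benefit="HCP diagnostic performance improves with device support",
    acceptance_criterion="Top-3 agreement with reference diagnosis >= 80%",
    supporting_data=["Study A (prospective)", "PMS dataset (illustrative)"],
    conclusion="Criterion met in Study A (illustrative figure)",
)

# A reviewer can walk each link of the chain without leaving the record:
# SotA -> benefit -> criterion -> data -> conclusion.
assert example.sota_reference and example.supporting_data and example.conclusion
```

The point of the structure is that no field may be empty: a claim without a SotA anchor or without named supporting data is exactly the "statement with no visible support" that Erin described.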

3. The CER is not stand-alone​

BSI reviews the CER in isolation. They do not cross-reference other documents unless explicitly pointed to. This means:

  • The device description must be inside the CER itself: what the device is, what its output looks like, what the interface shows, what it does NOT do.
  • The explanation of the intended purpose must be inside the CER.
  • The clinical validation strategy (what types of data, why they are appropriate, why the sample sizes are sufficient) must be explained in prose inside the CER.
  • The CER cannot assume the reviewer has read the IFU, the device description annex, or any other document.
  • The CER needs a "how to read this document" section at the start: an explanation of the table structure, what each column means, and how to navigate the evidence. The team discussed this specifically: "We have this wonderful table with certain columns, so here is the explanation of how the table works. Add a short section so that they start by reading that, and everything else becomes clear to them."

3a. The CER must use prose with regulatory language, not just tables​

This is a separate but related insight from the internal meeting. Jordi confirmed that the studies ARE already scored and rated by quality ("the studies are graded by quality"). The data is correct. But it is presented in tables and structured data files (JSON), which BSI cannot easily follow or audit.

What is needed:

  • Prose narrative explaining the validation strategy, not just tables.
  • The prose must use the exact words from the MDR and from the MDCG guidance. Auditors work from a checklist; they look for specific phrases and terminology. If the CER uses different words to describe the same concept, the box doesn't get ticked.
  • The data priority hierarchy (prospective clinical investigations first, then retrospective, then PMS, etc.) must be described in prose, using the MDCG framework, before the tables appear.
  • Saray put it clearly: "Although the work is already done and graded, more prose is required instead of only tables, using the exact words of the MDR and the guidance to demonstrate that the checklist is being followed."

4. The "pin": the device output architecture​

During the meeting, Saray and Taig wanted to explain a key architectural point about the device (they called it "the pin"):

  • The device always outputs the full array of all visible ICD-11 classes as a probability distribution.
  • It never outputs a binary true/false for any specific condition.
  • The clinical benefit claim is therefore: "an HCP using the device and seeing the full probability distribution has better diagnostic performance than an HCP without it."
  • This is why the team believes per-condition evidence is clinically inappropriate; the device is never read as "does this patient have melanoma: yes/no."
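As an illustration of the architectural difference, the two output shapes can be contrasted directly. The class names and probabilities below are hypothetical placeholders, not actual device output:

```python
# Hypothetical illustration of the two output shapes; NOT actual device output.
# The device returns a probability distribution over all visible ICD-11
# classes; it never returns a per-condition binary verdict.
distribution_output = {
    "ED80 (Acne)": 0.41,
    "EA90 (Psoriasis)": 0.22,
    "2C30 (Melanoma)": 0.03,
    "other_classes": 0.34,  # remaining probability mass over all other classes
}
binary_output = {"melanoma": False}  # the shape the device does NOT produce

# The distribution sums to 1 and is read as a ranked differential,
# not as a yes/no answer for any single condition.
assert abs(sum(distribution_output.values()) - 1.0) < 1e-9
top_class = max(distribution_output, key=distribution_output.get)
```

This is the shape the clinical benefit claim is built on: the HCP reads a ranked differential, not a melanoma yes/no flag.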

Nick's response was very important:

  • He acknowledged this architecture.
  • He confirmed that CE marking IS achievable with this kind of evidence approach.
  • However, he stated clearly: the rules do not change whether the device diagnoses, informs, or drives clinical management. MDR clinical data requirements apply in full.
  • With MRMC-style studies (simulated use), CE marking is possible only if accompanied by a very robust post-market study to prove real-world impact at scale.
  • He specifically asked: "Have you looked at a large sample of patients where this device is being used within that described workflow and compared it against standard medical practice?" The answer is that there are two smaller studies, but they are not sufficient on their own.

Practical implication: The "pin" explanation needs to be added to the CER device description with full clarity. But it does not eliminate the need for clinical data. It reframes what evidence is appropriate, and PMCF must include a robust real-world study.

Important nuance from the internal debrief (March 25)

Saray added a clarification that the team had not fully articulated during the BSI meeting. The API does output all ICD-11 codes as a probability distribution, but the interface that physicians actually see does NOT show the full list. It shows a prioritised, limited probability view (a "top CCO"). In Saray's words: "the final form of use is indeed with that top CCO, with that interface they are going to put on it, which always classifies and gives you a limited probability; it does not give you the entire list."

This is a meaningful distinction: the physician's experience is of a ranked short-list, not a raw array of hundreds of conditions. The CER description of the device output must reflect this accurately, both what the API provides and what the interface presents, because the intended use is ultimately what the physician sees and acts on.

Unresolved team tension

Saray also noted that even with the "full ICD list" architecture, the physician's final intended use is always in the context of diagnosing diseases. Nick was explicit that BSI evaluates based on intended use regardless of how the device frames its output. Jordi confirmed: "He was also clear in saying that the way of evaluating does not change despite the intended use."

This tension, between the device's architectural framing (ICD distribution, not disease-specific) and the regulatory reality that clinical evaluation requirements apply in full, is not fully resolved. The CER must address it head-on, not avoid it.

5. MRMC studies are not considered "clinical data" by Nick​

This is a critical and unexpected position. Nick stated:

  • MRMC (multi-reader, multi-case) studies where you show doctors images and ask them to assess with/without the device are not clinical data.
  • The reason: the device is not intended to be used in a simulated environment. It is intended for real patients in real clinical situations.
  • Real clinical data means real patients, real clinical practice, real outcomes.
  • The MRMC studies can still be used as supporting evidence, but they cannot be the primary basis for performance claims.

Practical implication: The CER must explicitly frame the MRMC studies as supporting/corroborating evidence, not as primary clinical evidence. Primary evidence must come from real-world studies, PMS data, and investigations conducted in actual clinical settings.

6. Performance acceptance criteria must be established with traceability to SotA​

A criterion such as ">=80% sensitivity" is not acceptable on its own, and neither is the specific percentage, without:

  1. A clear statement of what the clinical benefit is and why it matters for this device.
  2. Identification of comparable devices or methods (state of the art) in the literature.
  3. A justification for why the chosen threshold is appropriate based on what those comparable devices achieve.
  4. For pooled data across conditions: a risk-based justification for why pooling is appropriate.
  5. For higher-risk conditions (melanoma, malignancies): an individual breakdown of acceptance criteria and evidence. BSI will specifically audit these.

Nick noted it is unusual for a device to claim expertise in as many skin conditions as the device does. He suggested the team consider whether all indications are equally well-supported by data, and whether some should be removed from the claims if data is insufficient.

Unresolved tension around disease categorisation

The team discussed whether to group evidence by disease category (malignant, inflammatory, etc.) as Nick implied. Taig noted the concern: doing so means explicitly structuring the evidence around diseases, which reinforces the framing that the device is "for diseases," conflicting with the intended-use framing based on full ICD probability distributions.

Saray's counter-argument is that the physician's final use is always disease-related anyway, and Nick already confirmed the evaluation rules don't change regardless. For malignant conditions (melanoma, carcinomas), data already exists. The question is whether evidence for inflammatory and other lower-risk categories is also sufficient or needs categorisation.

This needs to be resolved before drafting Item 2 and Item 3 responses.

7. Equivalence with the Legacy device needs more detail​

Erin flagged this as a gap in the CER. The current explanation is too high-level. BSI needs:

  • A comprehensive list of what changes were made to the device since the version used in the clinical investigations.
  • An assessment of whether those changes are clinically relevant.
  • Justification that PMS data from earlier versions is still applicable to the current device.
  • If usability changes were made: evidence that usability was re-tested or that usability did not change in a clinically meaningful way.

8. Links in PDFs do not work​

Erin asked for a zip file of all referenced papers, with a clear reference list (author + year or numbered references). She cannot follow hyperlinks in the PDF documents. All evidence must be provided as physical files.

9. PMCF plan: too many activities, too little explanation​

The PMCF plan has so many activities it is "overwhelming" and difficult to evaluate. What BSI needs:

  • Fewer, better-described activities.
  • Each activity must be explicitly linked to a specific identified gap from the CER.
  • Each activity must include: methodology, sample size, acceptance criteria, timeline (start date, duration), and contingency plan (what happens if recruitment targets are not met).
  • The plan must demonstrate how the activities, combined with PMS, will provide sufficient proactive data to cover safety and performance over the device's lifetime (Annex XIV Part 5).
  • Activities that are not directly relevant to filling identified gaps should be removed.

Critical prerequisite: gaps must first be declared "acceptable" in the CER

Saray articulated the correct logic flow in the March 25 internal debrief: "these gaps that are acceptable (this must be explained well in the CER): it is a gap, but it is acceptable; there is still sufficient evidence to validate it."

The CER must not just identify gaps; it must explicitly state that each gap is acceptable and explain why the existing evidence is still sufficient for validation despite the gap. Only then do PMCF activities flow naturally as the means to close those acceptable gaps over time.

This two-step logic, (1) justify that the gap is acceptable now, and (2) close it through PMCF in the post-market phase, is what gives the structure logical coherence for BSI.

10. Risk (Item 7): need for traceability of rates​

Erin asked about specific risk lines and their occurrence rates. The expectation is:

  • Occurrence rates must be traceable to where they were pulled from (PMS data, clinical investigations, literature).
  • Risk documents must be updated to reflect current evidence.
  • If the risk assessment is in a different document than what was sent, a clear cross-reference must be provided explaining which document, which lines, and why they are applicable.

Regulatory strategy decided​

Decision: combined MDCG + MEDDEV 2.7.1 Rev 4​

After the two internal meetings, the team resolved to use a combined approach:

| Guidance | Purpose in the combined strategy |
| --- | --- |
| MDCG 2020-6 | Document structure: organises the CER into sections that match MDR requirements. State-of-the-art requirements are aligned with MDR Annex XIV. This is the primary document BSI uses to check MDR compliance. |
| MEDDEV 2.7.1 Rev 4 | Process rigor: defines the clinical evaluation process as stages 0–4. The CER must demonstrate that each stage was completed. Defines the search strategy methodology, data appraisal criteria, and evaluation depth. |
| Relationship document | Confirms which sections of MEDDEV 2.7.1 Rev 4 remain applicable under MDR. This is what MDCG 2020-6 itself references; it is not two separate methodologies but one integrated approach that MDCG formally endorses. |

Nick explicitly said the MEDDEV stages are the framework that makes a clinical evaluation work: "Unless you complete stages 0, 1, 2, 3, 4, 5 — they won't work and it will fall down."

The narration analogy​

The analogy to the software development situation is very relevant here and was raised explicitly by Taig in the internal debrief: the software team had done all the development work correctly but had not framed it according to the recognised IEC 62304 design phases. The non-conformity was not about missing work; it was about missing narration. The exact same pattern applies to the clinical evaluation. Taig said: "We did everything right, but we did not frame it according to the design phases... it did not change the product, but it did change how we explain it."

The hypothesis, confirmed by the team, is that reimagining and re-narrating the clinical history according to MEDDEV's stages is the highest-impact action.


Guidance framework documentation​

This section was AI-documented on 2026-03-28 based on the full text of all three guidance documents stored in this folder, cross-referenced with the applicable sections identified by Jordi Barrachina via Slack on 2026-03-26.

Applicable guidance documents​

Jordi identified four indispensable guidance documents for the CER. All four are already referenced in the CER itself:

| Document | Purpose |
| --- | --- |
| MDCG 2020-6 | Primary MDR-era guidance for legacy device clinical evaluation. Defines what evidence is needed, the evidence quality hierarchy, and the clinical evaluation plan checklist. |
| MEDDEV 2.7.1 Rev 4 | Process-level methodology for clinical evaluation stages. Defines how to conduct the literature search, appraise data, analyse evidence, and structure the CER. Only the sections listed in MDCG 2020-6 Appendix I remain applicable under MDR. |
| MDCG 2020-13 | Clinical evaluation assessment report (CEAR) template. This is the template BSI uses when assessing the CER; understanding its structure helps anticipate what BSI is looking for. |
| MDCG 2020-1 | Guidance on clinical evaluation of medical device software (MDR) and performance evaluation of IVD software (IVDR). Directly applicable to our AI-based device. |

How MDCG 2020-6 and MEDDEV 2.7.1 Rev 4 work together​

MDCG 2020-6 is the controlling document. It defines what clinical evidence is required under MDR. MEDDEV 2.7.1 Rev 4 provides the methodological detail for how to execute the clinical evaluation process. MDCG 2020-6 formally endorses specific MEDDEV sections in its Appendix I; sections not listed there are not applicable under MDR.

The practical usage pattern:

  1. MDCG 2020-6 sets the framework: what clinical evidence is needed, the evidence quality hierarchy (Appendix III), the clinical evaluation plan checklist (Appendix II), MDR-specific concepts (GSPRs, well-established technologies, the narrower MDR definition of "clinical data")
  2. MEDDEV 2.7.1 Rev 4 provides execution methodology: how to conduct the literature search, appraise each data set, structure the device description, and check the CER before release
  3. Section 10 substitution rule: where MEDDEV Section 10 references MDD requirements, those must be replaced with the corresponding MDR requirements (Essential Requirements → GSPRs; PMCF references → MDR Annex XIV Part B)

MDCG 2020-6: key content​

Scope: guidance for legacy devices transitioning from MDD/AIMDD to MDR conformity assessment. Not a comprehensive clinical evaluation methodology; it relies on MEDDEV for that. Applies to all legacy devices regardless of risk class.

Section 4: Article 61 exemptions: MDR Article 61(4) requires clinical investigations for Class III and implantable devices, but exemptions exist under Article 61(5) (equivalence), 61(6)(a) (legacy devices), and 61(6)(b) (well-established technologies). All exemptions still require "sufficient clinical data."

Section 5: MDR reinforcements vs MDD: MDR introduces stricter requirements compared to the Directives:

  • Alternative treatment options must be considered in benefit-risk analysis
  • PMS/PMCF data must be incorporated into the clinical evaluation
  • The "level of clinical evidence" must be specified and justified by the manufacturer
  • Equivalence criteria are stricter (see MDCG 2020-5)
  • The definition of "clinical data" (Article 2(48)) is narrower than under MDD

Section 6.3: Data appraisal must use validated tools: the CER must use explicitly named validated appraisal tools (Cochrane RCT tool, MINORS, Newcastle-Ottawa Scale, IMDRF MDCE WG/N56 Appendix F). Complaint/incident ratios (incidents ÷ device sales) are NOT sufficient to prove safety.

Section 6.4: No reliance on future PMCF to fill pre-certification gaps: sufficient clinical evidence must exist PRIOR to MDR certification. PMCF under MDR can confirm conclusions already supported by other evidence, but cannot fill empty pre-market gaps. If evidence is insufficient for an indication, the intended purpose must be narrowed.

Section 6.5c: State of the art must include alternative treatments: for a diagnostic AI, alternatives include human dermatologist assessment, dermoscopy, teledermatology, and equivalent software tools. Novel/innovative device technology may have a rapidly evolving state of the art; this must be acknowledged.

Section 6.5e: Gap bridging and narrowing: if clinical evidence is insufficient for a specific indication, options include systematic literature reviews, PMCF studies, or clinical investigations. If gaps cannot be closed, the intended purpose must be narrowed to match available evidence.

MDCG 2020-6 Appendix III: clinical data quality hierarchy​

This is a 12-level ranked table from strongest to weakest evidence. For Class III devices: minimum acceptable level is Rank 4.

| Rank | Type of evidence | Notes |
| --- | --- | --- |
| 1 | High-quality clinical investigations covering all variants, indications, populations, duration | Gold standard; may not be feasible for WET devices |
| 2 | High-quality clinical investigations with some gaps | Gaps must be justified with risk assessment; PMCF required |
| 3 | High-quality data collection systems (registries) | Must assess data quality, device representation, stratification |
| 4 | Studies with methodological flaws but data still quantifiable and acceptability justifiable | Many peer-reviewed literature sources; minimum for Class III |
| 5 | Equivalence data (reliable/quantifiable) | Must meet stricter MDR equivalence criteria |
| 6 | State of the art evaluation including similar device data | NOT clinical data under MDR; WET devices only |
| 7 | Complaints and vigilance data; curated QMS data | Clinical data per Article 2(48) but low quality |
| 8 | Proactive PMS data (surveys) | Clinical data per Article 2(48) but low quality |
| 9 | Individual case reports | Not high quality; illustrative/supportive only |
| 10 | Compliance to non-clinical common specifications | |
| 11 | Simulated use / animal / cadaveric testing with HCPs | Not clinical data |
| 12 | Pre-clinical and bench testing / standards compliance | Addresses clinically relevant endpoints non-clinically |

Implication for our MRMC studies

MRMC studies (simulated use with images) map to Rank 11: "simulated use testing with healthcare professionals." This is explicitly listed as not clinical data under MDR. This aligns with Nick's statement during the BSI meeting. Our MRMC studies can support but cannot be the primary basis for clinical claims. Primary evidence must come from Ranks 1–4.

MEDDEV 2.7.1 Rev 4: applicable sections​

The following sections are endorsed by MDCG 2020-6 Appendix I and constitute the applicable MEDDEV methodology under MDR.

Section 6.4: who should perform the clinical evaluation​

The clinical evaluation must be conducted by qualified evaluators. Minimum qualifications:

  • A degree in a relevant field AND 5 years documented professional experience; OR
  • 10 years documented professional experience

Required knowledge areas: research methodology and biostatistics; information management (database searches); regulatory requirements; medical writing and systematic review; the device technology; diagnosis and management of the target conditions; medical alternatives and treatment standards; specialist clinical expertise.

Each evaluator must provide a CV and declaration of interests.

Section 8 (Stage 1): identification of pertinent data​

All pre-market and post-market clinical data must be identified. Two sources:

Manufacturer-held data: all clinical investigations, PMS data (PMCF studies, vigilance reports, incident reports, complaints, explant analysis, field safety corrective actions), pre-clinical studies (bench tests, V&V data). All data must be disclosed in full to evaluators.

Literature data: requires a documented literature search protocol (before execution) and search report (after execution). Multiple databases required (MEDLINE/PubMed alone is insufficient; must include EMBASE, Cochrane CENTRAL, and others). The search must identify both (a) clinical data on the device or equivalent, and (b) state-of-the-art data. Full text papers must be obtained; abstracts are insufficient.

Literature search protocol (Annex A5) must include: PICO-structured research questions; databases and justification; search terms and limits; date ranges; inclusion/exclusion criteria; data collection plan; appraisal plan; analysis plan. Recommended methods: PICO, Cochrane Handbook, PRISMA, MOOSE.
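The Annex A5 elements can be held as one structured protocol record so that nothing is omitted before execution. A minimal sketch; every value below is a hypothetical placeholder, not the actual search protocol:

```python
# Hypothetical sketch of an Annex A5 literature search protocol record.
# All values are illustrative placeholders, not the actual protocol.
search_protocol = {
    "research_question_pico": {  # PICO-structured research question
        "population": "patients presenting with skin lesions",
        "intervention": "HCP assessment supported by the device",
        "comparison": "HCP assessment without device support",
        "outcome": "diagnostic accuracy (sensitivity/specificity)",
    },
    # Multiple databases are required; MEDLINE/PubMed alone is insufficient.
    "databases": ["MEDLINE/PubMed", "EMBASE", "Cochrane CENTRAL"],
    "search_terms_and_limits": ["(skin lesion) AND (AI OR CADx)", "human studies"],
    "date_range": ("2015-01-01", "2026-03-01"),   # illustrative dates
    "inclusion_criteria": ["full text obtainable", "clinical setting"],
    "exclusion_criteria": ["abstract only", "non-human"],
    "appraisal_plan": "MEDDEV Section 9 criteria with validated appraisal tools",
    "reporting_method": "PRISMA",
}

# The post-execution search report should mirror this structure and document
# deviations plus final included/excluded counts.
assert len(search_protocol["databases"]) >= 2
```

Keeping the protocol and the later search report field-for-field parallel makes the Stage 1 documentation directly auditable against the Annex A5 checklist.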

Section 9 (Stage 2): appraisal of pertinent data​

Each data set is appraised for methodological quality, relevance, and weighting.

Methodological quality (Section 9.3.1): assessed by examining study design (sample size, power calculation, endpoints, controls, randomisation, blinding, follow-up, adverse event reporting); data processing and statistics; quality assurance (GCP compliance); report quality (disclosure of methods, conflicts of interest).

Relevance (Section 9.3.2): data classified as either "pivotal" (directly demonstrates performance/safety of the device or equivalent) or "other" (supports state of the art, hazard identification, equivalence justification).

Weighting (Section 9.3.3): highest weight goes to well-designed, monitored RCTs with the device under evaluation in its intended purpose. No universal weighting method; evaluators define device-specific criteria. Reasons for rejecting evidence must be documented.

Section 10 (Stage 3): analysis of clinical data​

The goal is determining whether appraised data collectively demonstrate compliance with each relevant GSPR (substituting MDR GSPRs for MDD ERs). The analysis must:

  • Use sound methods (qualitative or quantitative, with justification)
  • Be comprehensive: cover all products/models/sizes, every indication, the entire target population, all intended users, the full duration of use
  • Verify consistency between the CER, manufacturer information materials (IFU, label), and risk management documentation
  • Verify consistency with current state of the art
  • Identify evidence gaps and determine whether additional clinical investigations are needed
  • Determine PMCF needs (residual risks, uncertainties, rare complications, long-term performance)

Annex A3: device description typical contents​

The CER device description must include:

  • Device name, models, sizes, components (including software and accessories)
  • Intended purpose in full: exact medical indications, disease/condition/stage/severity, patient populations, intended users, contraindications
  • How the device achieves its intended purpose, principles of operation
  • Whether it addresses unmet medical needs or has medical alternatives
  • Intended performance: technical performance, clinical benefits, clinical performance and safety claims
  • For predecessor-based devices: predecessor description, modifications, whether still on market
  • If equivalence claimed: equivalent device identification and demonstration

Annex A6: studies lacking scientific validity​

Studies insufficient for demonstrating clinical performance/safety include:

  • Missing elementary information: no methods, no product identification, no patient numbers, no outcomes, no adverse events, no confidence intervals
  • Numbers too small: inconclusive pilot data, anecdotal experience, hypothesis papers, unsubstantiated opinions
  • Improper statistics: multiple comparisons without correction, assumed distributions without testing
  • Inadequate controls: single-arm studies for subjective endpoints, external/historic comparisons when confounding is likely
  • Improper mortality/SAE data collection: no follow-up of lost subjects, no sensitivity analysis
  • Author misinterpretation: conclusions not matching results, ignoring lack of statistical significance
  • Illegal activities: non-compliance with local regulations, EN ISO 14155, or Declaration of Helsinki

Important: a study inadequate for performance claims may still contain usable safety data, and vice versa.

Annex A7.2: conformity with acceptable benefit/risk profile​

The clinical evaluation must demonstrate risks are minimised, acceptable when weighed against benefits, and compatible with a high level of health protection. Key requirements:

  • Benefits must be meaningful and measurable: nature, extent, probability, and duration. For diagnostic devices, benefits include correct diagnosis, earlier diagnosis, identifying likely treatment responders.
  • Benefits must be quantified: magnitude of benefit, variation across population, clinical relevance of endpoint changes, proportion of "responders," duration of effect.
  • Clinical risks must be evaluated: nature, severity, number, and rates of harmful events. For diagnostic devices: risks from false positives (unnecessary treatment) and false negatives (missed treatment, misdiagnosis).
  • Benefit-risk must be evaluated against state of the art: including available medical alternatives (existing devices, human specialist assessment, medicinal products) and their benefit/risk profiles. Data selection must be objective, not selectively favourable to the device.

Annex A7.3: conformity with performance requirements​

The device must achieve the performances intended by the manufacturer under normal conditions of use.

For diagnostic devices specifically, performance data includes:

  • Reproducibility of independent image acquisition (same patient, same machine, different operator/interpreter)
  • Reproducibility of independent image reporting (same images, different interpreter/analyser)
  • Diagnostic sensitivity and specificity for major clinical indications; PPV and NPV according to varying pre-test probabilities
  • Comparisons of new vs previous software versions
  • Normal values by age and gender for all target populations
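The dependence of PPV and NPV on pre-test probability follows directly from Bayes' theorem, which is why a single sensitivity/specificity pair is not enough. A small sketch; the sensitivity, specificity, and prevalence values are illustrative examples only, not device figures:

```python
# Illustrative PPV/NPV calculation from sensitivity, specificity, and
# pre-test probability (prevalence). Example numbers are hypothetical.

def ppv_npv(sens: float, spec: float, prev: float) -> tuple[float, float]:
    """Positive and negative predictive value via Bayes' theorem."""
    tp = sens * prev               # true positives per unit population
    fp = (1 - spec) * (1 - prev)   # false positives
    tn = spec * (1 - prev)         # true negatives
    fn = (1 - sens) * prev         # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# The same test performs very differently at different pre-test probabilities:
for prevalence in (0.01, 0.10, 0.50):
    ppv, npv = ppv_npv(sens=0.90, spec=0.80, prev=prevalence)
    print(f"prevalence={prevalence:.2f}  PPV={ppv:.3f}  NPV={npv:.3f}")
```

At low prevalence the PPV collapses even with good sensitivity and specificity, which is exactly why the guidance asks for predictive values across varying pre-test probabilities rather than a single headline figure.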

Annex A7.4: conformity with undesirable side-effects requirements​

Any undesirable side-effect must be an acceptable risk when weighed against intended performances. Key rules:

  • Clinical data must contain an adequate number of observations for scientifically valid conclusions about side-effects
  • If clinical data is lacking or observations are insufficient, conformity is not fulfilled
  • Statistical guidance: to have 80% probability of observing at least one event at 1% actual rate requires minimum 161 subjects; at 5% rate requires 32; at 10% rate requires 15
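Those subject counts follow from requiring an 80% chance of observing at least one event: solve 1 − (1 − p)^n ≥ 0.8 for n, i.e. n = ⌈ln(0.2)/ln(1 − p)⌉. A minimal sketch:

```python
import math

def min_subjects(event_rate: float, prob_observe: float = 0.8) -> int:
    """Smallest n with P(at least one event) = 1 - (1 - event_rate)**n >= prob_observe."""
    return math.ceil(math.log(1 - prob_observe) / math.log(1 - event_rate))

print(min_subjects(0.01))  # 161 subjects needed at a 1% actual event rate
print(min_subjects(0.05))  # 32 subjects needed at a 5% rate
```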

Annex A10: CER release checklist​

Before releasing a CER, the following must be verified:

  • Readability: can a third party read and understand the report? Sufficient detail on data, assumptions, conclusions?
  • Manufacturer data completeness: is all manufacturer-held clinical data mentioned and summarised?
  • Equivalence (if claimed): demonstration included? All differences disclosed? Justification that differences don't affect performance/safety?
  • PMS/PMCF data: latest data considered and summarised?
  • State of the art: adequately summarised and substantiated? Benefit/risk and side-effects acceptable relative to SotA?
  • Coverage: sufficient evidence for all devices, sizes, models, settings, every indication, entire target population, all users, full duration of use, all forms/stages/severity of condition?
  • GSPR conformity: clearly stated for each relevant GSPR? Discrepancies identified?
  • Consistency: manufacturer information materials match CER contents?
  • Residual risks: all residual risks and uncertainties identified for PMS/PMCF follow-up?
  • Administrative: report dated, evaluator qualifications included, CVs and declarations of interest on file?

Jordi's specific applicable sections from MEDDEV 2.7.1 Rev 4 (via Slack, 2026-03-26)​

Jordi confirmed that the following MEDDEV sections apply; they match MDCG 2020-6 Appendix I exactly:

  • 6.4: Who should perform the clinical evaluation?
  • 8: Identification of pertinent data (Stage 1)
  • 9: Appraisal of pertinent data (Stage 2)
  • 10: Analysis of the clinical data (Stage 3); MDD references must be substituted with MDR equivalents
  • A3: Device description typical contents
  • A4: Sources of literature
  • A5: Literature search protocol key elements
  • A6: Studies lacking scientific validity
  • A7.2: Conformity with acceptable benefit/risk profile
  • A7.3: Conformity with performance requirements
  • A7.4: Conformity with undesirable side-effects
  • A10: CER release checklist

Jordi noted he was surprised because all four guidance documents are already referenced in the CER itself. The gap is not in the references but in how the CER demonstrates compliance with each section's requirements.

MDCG 2020-13: Clinical Evaluation Assessment Report template​

MDCG 2020-13 (July 2020) is the harmonised template that notified bodies use to produce their Clinical Evaluation Assessment Report (CEAR), the formal record of how the NB assessed a manufacturer's CER under MDR. Understanding this template means understanding BSI's inspection checklist: every section of the CEAR is a question the CER must answer.

Structure: the CEAR comprises general sections (A–H), applicable to all assessments, and specific considerations (I–K), triggered by device-specific circumstances.

| Section | What BSI checks |
|---|---|
| A: Administrative | Device identity, UDI-DI, intended purpose, CER dated and signed, evaluator CVs current, evaluator team covers all required expertise areas |
| B: NB Reviewers | BSI's own reviewer qualifications and any additional expert involvement |
| C: Device, CEP, SotA | Device description (including software), classification, CEP against the Annex XIV Part A Section 1a checklist, clinical performance, safety, standards compliance, equivalence (if claimed), state-of-the-art benchmarks, novel features |
| D: Literature review | Search protocol, databases, PICO/PRISMA methods, inclusion/exclusion criteria, both favourable and unfavourable data, full documentation set (protocol + reports + retrieved list + excluded list with reasons + full-text copies) |
| E: Clinical investigations | Whether investigations were conducted, EUDAMED registration, CIP compliance with Annex XV + ISO 14155, study design, validity of conclusions |
| F: PMS/PMCF | PMS Plan, PMS Report, PMCF Plan, PMCF Report, PSUR reviewed; PMCF adequacy or absence justification; CER update schedule |
| G: IFU/SSCP/labelling | Internal consistency with the CER, risk management, and PMS; intended purpose supported by evidence; limitations, contraindications, and warnings adequate; risk information quantified |
| H: Conclusions | Benefit-risk against SotA; all GSPRs addressed; unanswered questions mapped to PMCF; recommendation to the NB decision-maker |

CEP completeness checklist (Annex XIV Part A Section 1a, individually verified by BSI):

  1. GSPRs requiring clinical data support identified
  2. Intended purpose specified
  3. Target groups with indications and contra-indications specified
  4. Clinical benefits with clinical outcome parameters described
  5. Methods for qualitative and quantitative safety examination specified (residual risks, side-effects)
  6. Benefit-risk acceptability parameters specified based on SotA
  7. Benefit-risk issues for specific components addressed (pharmaceutical, tissues)
  8. Clinical development plan with milestones and acceptance criteria, from exploratory through confirmatory to PMCF

Key implications for our CER:

  • The CER must mirror the CEAR structure. BSI fills in the CEAR by reading the CER. If the CER doesn't address a CEAR question, it's an automatic non-compliance.
  • Evaluator CVs must cover all expertise areas. The NB checks that the team collectively covers: research methods, information management, regulatory, device technology, and clinical diagnosis/management of dermatological conditions. A CER written entirely by regulatory staff without a dermatology clinician is a non-compliance.
  • Literature search documentation must be complete. Many CERs fail by providing only the included studies. The protocol, excluded list with reasons, and full-text copies are all mandatory.
  • The equivalence route is a liability for SaMD. AI systems have different training data, architectures, and software versions; any such difference likely breaks equivalence on technical grounds. The safer approach is basing evidence on our own studies.
  • AI is treated as novel. The novelty sub-section in Section C is mandatory. Attempting to characterise an AI diagnostic tool as non-novel will be questioned.
  • Non-compliances must be closed before the box can be ticked. Understanding what BSI checks allows targeted responses that close non-compliances in one round.

MDCG 2020-1: Clinical evaluation of medical device software​

MDCG 2020-1 (March 2020) provides the framework for determining clinical evidence requirements specifically for Medical Device Software (MDSW) under MDR. It is harmonised with IMDRF/SaMD WG/N41FINAL:2017. This is the most directly applicable guidance for our AI-based device.

Core framework: three evidence pillars (not sequential steps, but parallel requirements):

| Pillar | What it means | What we must demonstrate |
|---|---|---|
| Valid Clinical Association (VCA) | The software's output correlates with a real clinical condition | That our AI's outputs (ICD-11 probability distributions, severity scores) are scientifically associated with actual dermatological conditions |
| Technical Performance | The software reliably and accurately generates outputs from inputs | Algorithm accuracy, sensitivity, and specificity across the full range of real-world input variability (skin tones, image conditions, camera types, body locations) |
| Clinical Performance | The software produces clinically relevant outputs in the real intended-use context | Diagnostic accuracy metrics (sensitivity, specificity, PPV, NPV, confidence intervals) validated against a reference standard in the target population |

Valid Clinical Association (VCA): the foundational pillar:

  • Must demonstrate that the output is "clinically accepted or well founded," accepted by the medical community and/or described in peer-reviewed literature
  • For well-characterised diseases (psoriasis, atopic dermatitis, hidradenitis suppurativa), VCA can be established via systematic literature review and professional society guidelines
  • Each specific claimed output (diagnosis, severity grading, disease monitoring) requires separate VCA establishment
  • If VCA gaps exist, new clinical data must be generated before the evaluation can proceed

Generalisability requirement: critical for AI:

  • Defined as "the ability of a MDSW to extend the intended performance tested on a specified set of data to the broader intended population"
  • We must demonstrate performance across: Fitzpatrick skin types, image acquisition conditions, lighting, camera types, body locations, patient demographics, disease severity spectrum
  • Gaps require real-world dataset testing

Prospective vs retrospective studies:

  • If the output directly impacts patient management decisions → prospective study likely required
  • If the tool is an "aid to diagnosis" where a clinician retains final authority → well-designed retrospective study on curated datasets may suffice
  • Retrospective studies are explicitly permissible for software where there is no impact on patient management

Clinical benefit framing: favourable for our device:

  • The document explicitly recognises that for diagnostic software, clinical benefit "may lie in providing accurate medical information on patients, assessed against medical information obtained through other diagnostic options"
  • The final clinical outcome depending on downstream clinician decisions is acknowledged and accepted

Closest worked example: Annex II, example (b): image segmentation MDSW (CT scan detecting organ pathology). VCA established from literature without gaps (anatomy is well-characterised). Technical performance via V&V. Clinical performance via usability plus retrospective data (or prospective if data variability insufficient). This maps almost directly to our situation.

Continuous monitoring obligation:

  • MDSW connectivity enables real-world performance monitoring; this is both an obligation (ongoing PMCF) and an opportunity (RWE feeds back into CER updates)
  • Algorithm performance may degrade over time (data drift, population shift); PMCF must address this

Modular validation:

  • Permitted for modular software, relevant when an AI model is updated/retrained without changing the full system
  • Avoids full-system re-evaluation for every module change

Relationship to formal items​

Item 0 is the background layer. The seven formal BSI items build on it:

| Item | Topic | Primary insight from Item 0 |
|---|---|---|
| Item 1 | CER update frequency | Minor. Follow the cadence language of MDR Articles 61(11) and 86. |
| Item 2 | Device description & clinical benefits vs SotA | Critical. The "pin" explanation, the benefit simplification question, and the SotA acceptance-criteria methodology all apply here. |
| Item 3 | Clinical data | Critical. MRMC studies vs real clinical data, equivalence, data sufficiency justification, and analysis (not just citation) all apply here. |
| Item 4 | Usability | Follow the existing approach; traceability to summative study results is needed. |
| Item 5 | PMS plan | Saray's domain. Traceability of SOPs and procedures. |
| Item 6 | PMCF plan | Critical. Link activities to CER gaps, add methodology detail, reduce volume, add timelines and contingencies. |
| Item 7 | Risk | Trace occurrence rates to PMS/clinical data. Cross-reference the correct risk document if it lives in another file. |

Full action plan​

All action items identified from the BSI meeting (2026-03-25) and the internal debriefs (2026-03-25, 2026-03-26), grouped by formal BSI item.

Cross-cutting: must happen before or alongside all items​

These actions affect every formal item. The CER rewrite strategy and regulatory framing must be settled before detailed work on individual items begins.

| # | Action | Owner | Prerequisite for | Status |
|---|---|---|---|---|
| X-1 | Settle which MDCG sections and MEDDEV stages apply to each CER section. Jordi reviews MDCG 2020-6 and sends section highlights to Taig via Slack. | Jordi | All items | Done |
| X-2 | AI-document the guidance frameworks: what MDCG and MEDDEV define, which sections apply, and what the MEDDEV stages (0–4) require at each step. See Guidance framework documentation below. | Taig | All items | Done |
| X-3 | Resolve the disease categorisation approach. Structuring evidence by disease category (malignant, inflammatory, etc.) implicitly accepts that device use is disease-specific, which conflicts with the intended-use framing (full ICD-11 distribution). This tension had to be resolved with a clear team decision before Item 2 and Item 3 could be written. Resolved: see X-3 Disease categorisation decision. | Taig | Item 2, Item 3 | Done |

Item 2a: Device description & intended purpose​

The CER device description is currently not stand-alone, yet BSI reviewers evaluate the CER in isolation: everything about the device must be readable within the CER without cross-referencing the IFU or other documents.

| # | Action | Owner | Status |
|---|---|---|---|
| 2a-1 | Rewrite the device description section of the CER to be fully self-contained. Include: what the device is, what it does, what it does not do, what regulatory category it is in, and how it fits into clinical practice. | Taig | In progress |
| 2a-2 | Add the "pin" explanation: the device always outputs a probability distribution over all visible ICD-11 classes. It never produces a binary positive/negative for any specific condition. This is the architectural basis for the clinical evaluation approach. | Taig | To do |
| 2a-3 | Clarify the interface distinction: the API outputs all ICD-11 codes, but the interface physicians actually see shows a limited, prioritised probability view, not the full list. The CER must accurately describe both layers (API and interface), because the physician's actual experience is of the interface, not the raw API output. | Taig | To do |
| 2a-4 | Add a "how to read this CER" guide at the start of the document. Explain the table structure, what each column means, and how to navigate from a benefit claim to its supporting evidence. BSI reviewers should not have to reverse-engineer the logic. | Taig | To do |
| 2a-5 | Fix device description wording: must say "covering visible diseases of the skin" (not "coveringas of the skin"). Per the 2026-04-02 internal meeting (`_resources/meeting-notes/2026-04-02-coordinar-envio-documentacion-md101.md`). | Taig | Done |
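A toy sketch of the two layers described in 2a-2 and 2a-3. The ICD-11 codes and probabilities are invented for illustration and are not real device output:

```python
# Hypothetical API output: a probability distribution over ICD-11 classes
# (invented codes and values for illustration only; the real device covers
# all visible ICD-11 classes, with the probabilities summing to 1).
api_output = {"2C30": 0.41, "EA90": 0.33, "EK02": 0.14, "2F20": 0.12}

def interface_view(distribution: dict[str, float], k: int = 3) -> list[tuple[str, float]]:
    """The clinician-facing layer: a prioritised top-k view of the full
    distribution, never a binary positive/negative for one condition."""
    return sorted(distribution.items(), key=lambda kv: kv[1], reverse=True)[:k]

print(interface_view(api_output))  # top-3 classes by probability
```

The point the CER must make is that both layers exist: the API layer carries the full distribution, while the physician experiences only the prioritised view.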

Item 2b: Clinical benefits, performance & safety vs SotA​

The core structure of this section is correct but the reasoning chain is invisible. BSI cannot trace how acceptance criteria were derived from state of the art, and the volume of claims/metrics makes it unauditable.

| # | Action | Owner | Status |
|---|---|---|---|
| 2b-1 | Simplify and consolidate the 7 clinical benefits. Some are near-duplicates (e.g., two benefits that differ only by the word "remotely"). Consolidating does not mean losing specificity; it means reducing noise so the core claims are legible. | Jordi | Needs rework |
| 2b-2 | Establish acceptance criteria for each benefit with explicit traceability to state of the art. The chain must be visible: comparable devices in the literature → the performance they achieve → why a given threshold is appropriate for this device. | Jordi | Needs rework |
| 2b-3 | For high-risk conditions (melanoma, malignancies): provide an individual breakdown of acceptance criteria and of the data meeting those criteria. BSI will specifically audit these. For lower-risk categories, pooling with justification is acceptable. | Jordi | To do |
| 2b-4 | Replace the current means-of-measure / magnitude-of-benefit table with something readable. The current version has so many entries (top-1 accuracy, top-1 accuracy per HCP tier, etc.) that Erin described it as "an endless list." Each claim must have one measurable outcome, a threshold, and evidence. | Jordi | Needs rework |
| 2b-5 | Rename the "Source" column to "Source of Acceptance Criteria", with values "Current study" (was "Sample") and "State of the art" (was "Population"). MD101's Mylène flagged that the original labels were confusing for reviewers. Per the 2026-04-02 MD101 call (`_resources/meeting-notes/2026-04-02-call-legit-health-x-md101.md`) and the internal follow-up (`_resources/meeting-notes/2026-04-02-coordinar-envio-documentacion-md101.md`). | Taig | Done |
| 2b-6 | Add paediatric performance claims as separate rows. For each study that includes paediatric patients, duplicate the relevant performance claims with a new patientPopulation column set to "pediatric" and valueAchieved reflecting only the paediatric sub-sample. Existing claims get "general". Acceptance criteria remain unchanged for now. Per the 2026-04-02 meetings (`_resources/meeting-notes/2026-04-02-coordinar-envio-documentacion-md101.md`). | Jordi | To do |
| 2b-7 | Define the paediatric age cutoff based on the appropriate regulatory standard and document it in the CER. Jordi noted the biggest gap is children under 2 years. Group patients below the cutoff (18 or 21, per the standard) as "pediatric". Per the 2026-04-02 internal meeting (`_resources/meeting-notes/2026-04-02-coordinar-envio-documentacion-md101.md`). | Jordi | To do |
| 2b-8 | Add explanations for unmet acceptance criteria in the CER. When a performance claim's achieved value is below the acceptance threshold, the CER must explain why (e.g., subgroup limitations, acceptable risk). MD101's Mylène specifically asked whether this was explained; the answer was no. Per the 2026-04-02 MD101 call (`_resources/meeting-notes/2026-04-02-call-legit-health-x-md101.md`). | Jordi | To do |

Item 3a: Clinical data analysis​

The clinical data exists. The problem is that it is not presented as an analysis; it is presented as a list. BSI needs to see the reasoning: what was done in each study, what its limitations were, what it proves, and how it feeds the acceptance criteria.

| # | Action | Owner | Status |
|---|---|---|---|
| 3a-1 | Add a prose validation-strategy narrative at the start of Item 3. It should use the exact terminology of the MDR and MDCG (e.g., "clinical data", "clinical evidence", "clinical benefit", "sufficient clinical evidence"), explain the priority hierarchy of evidence types (prospective investigations first, then retrospective, then PMS), and explain why the combination used is appropriate. | Jordi | Needs rework |
| 3a-2 | Reframe the MRMC (multi-reader, multi-case) studies. Nick stated explicitly that MRMC studies using simulated scenarios are not clinical data; they do not reflect the device in its intended environment (real patients, real clinical settings). MRMC studies must be presented as supporting/corroborating evidence only; primary clinical evidence must come from real-world studies. | Jordi | Needs rework |
| 3a-3 | Clarify the equivalence argument with the Legacy device in detail. The melanoma study was conducted with Legacy, and a section in the CER explains this is the same device with no clinically relevant changes. That section needs to be expanded: list all changes since Legacy, assess whether each change is clinically relevant, and justify why PMS data and studies from Legacy still apply. | Jordi | Needs rework |
| 3a-4 | For every referenced paper and study, add an analysis entry, not just a citation: What was the methodology? What were the limitations? What devices or populations were included? What did it conclude that is relevant to this device? Was the version used the Legacy or the current device? | Jordi | To do |
| 3a-5 | Provide all referenced papers as a zip file with a numbered or author+year reference list. BSI cannot follow hyperlinks in PDFs. | Jordi | To do |
| 3a-6 | Use the two real-world studies (including the one with Sakidecha) as primary clinical evidence. These are the closest thing to real-world evidence currently available. They should be prominently positioned, with full analysis of their methodology and findings. | Jordi | To do |
| 3a-7 | Include the NMSC_2025 publication ("The utility and reliability of a deep learning algorithm as a diagnosis support tool in head & neck non-melanoma skin malignancies") as clinical evidence for the malignancy sub-criterion within the diagnostic accuracy benefit (7GH+9VW+1QF). This paper has a public dataset and active patient recruitment, and provides squamous cell carcinoma data that is currently underrepresented in the evidence base. Added per the 2026-04-08 meeting. | Jordi | To do |
| 3a-8 | Review the 4 severity publications (APASI_2025, AUAS_2023, AIHS4_2023, ASCORAD_2022) that Taig added as Pillar 2 (Technical Performance) evidence for benefit 5RB. Verify concordance with the CER and compatibility with the evidence strategy. Added per the 2026-04-08 meeting. | Jordi | To do |

Item 3b: Data sufficiency justification​

BSI cannot tell whether the data is sufficient because we have not explained why it is sufficient. This is different from having the data; it requires explicit argumentation.

| # | Action | Owner | Status |
|---|---|---|---|
| 3b-1 | For every evidence set, justify why the sample size is sufficient. Why are 80 cases enough for the melanoma claim? Why is limited paediatric evidence (approximately 20 cases) acceptable? The justification should reference statistical power calculations or established methodological guidance where appropriate. | Jordi | Needs rework |
| 3b-2 | Design physician survey questionnaires as real-world evidence for the Legacy device benefits. Questions must target the 3 consolidated benefits (diagnostic accuracy, severity, care pathway). Include quantitative questions (e.g., % improvement in referral efficiency) and a control question on whether the response is based on opinion (MDCG 2020-6 Rank 8) or actual statistics (Rank 4). Validate with the first 60 respondents for preliminary statistical validation before sending to all Legacy device clients. See the 2026-04-01 meeting for the survey framework and MDCG 2020-6 §6.2.2 context, and the 2026-04-08 meeting for detailed requirements. | Taig | To do |
| 3b-3 | Mine Legacy device market data (PMS, adverse event records, hospital outcome data) as real-world evidence of end-to-end clinical impact. Nick specifically asked for evidence that "when a hospital uses the device, outcomes improve compared to the same hospital before, or compared to what the literature shows." Reassigned to Taig per the 2026-04-08 meeting. | Taig | To do |
| 3b-4 | Create a new MRMC study with Fitzpatrick phototype 5–6 images. Phase 1 (exploratory): convert images from SAN_2024, BI_2024, and PH_2024 to dark phototypes using Imagen 3 (Vertex AI), send them to the device API, and verify concordance with the existing JSON results. Phase 2 (if concordant): build a new multi-reader, multi-case study combining all 3 studies' clinical cases with dark-skin images, and deploy it for dermatologists to complete. Replaces the original scope of "assess fototipos dataset". See the multireader-multicase repo and the 2026-04-08 meeting. | Taig | To do |
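The kind of justification 3b-1 asks for can be made concrete with a confidence interval on an observed proportion, which shows the precision a given sample size actually buys. A sketch using the Wilson score interval; the 74/80 split is a hypothetical illustration (only the 80-case figure comes from the text above):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval (95% for z=1.96) on a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical: 74 of 80 melanoma cases correctly classified (74 is invented).
lo, hi = wilson_ci(74, 80)
print(f"observed 92.5%, 95% CI ({lo:.3f}, {hi:.3f})")
```

An argument of this form ("with n = 80, the lower 95% confidence bound still exceeds the acceptance threshold") is the explicit reasoning BSI is asking for, rather than the bare case count.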

Item 4: Usability​

The existing answer is likely close to correct. The main gap is traceability.

| # | Action | Owner | Status |
|---|---|---|---|
| 4-1 | Add explicit traceability from the usability claims in the CER to the summative usability study results. BSI needs to be able to verify that the study met its acceptance criteria; a direct reference to the study, its methodology, and its results is required. | Taig | Needs rework |

Item 5: PMS plan​

Saray described this as the simpler item. The gap is procedural traceability.

| # | Action | Owner | Status |
|---|---|---|---|
| 5-1 | Update the PMS plan to reference the relevant SOPs and procedures that govern each surveillance activity. BSI needs to be able to trace from the plan to the actual process. Provide the SOP identifiers and confirm each is in the QMS. | Saray | Needs rework |

Item 6: PMCF plan​

This item requires a two-step fix in a strict order. The CER must first declare acceptable gaps before the PMCF plan can logically flow from those gaps. Attempting to fix the PMCF plan before the CER gaps are declared will result in an incoherent structure.

| # | Action | Owner | Prerequisite | Status |
|---|---|---|---|---|
| 6-0 | In the CER: for each identified evidence gap, explicitly state that the gap is acceptable, that despite the gap the available evidence is still sufficient for validation, and explain why. This is a logical prerequisite for the PMCF plan. | Jordi | None: must happen first | To do |
| 6a-1 | Link every PMCF activity to a specific acceptable gap declared in the CER (per action 6-0). If an activity cannot be linked to a specific gap, it should be removed. | Jordi | 6-0 | Needs rework |
| 6a-2 | Add full methodology detail to each retained PMCF activity: sample size, acceptance criteria, start date, expected duration, and a contingency plan for insufficient participation or response rates. | Jordi | 6-0 | Needs rework |
| 6a-3 | Remove PMCF activities that are not directly relevant to closing identified gaps. Erin described the current volume as "overwhelming." Fewer, better-described activities are more convincing than many vague ones. | Jordi | 6-0 | To do |
| 6b-1 | Add a justification explaining how the retained PMCF activities, combined with PMS, will together provide sufficient proactive data to cover the safety and performance of the device over its lifetime, as required by Annex XIV Part B. | Jordi | 6a-1, 6a-2 | Needs rework |

Item 7: Risk​

This item has not been started. The main concern is traceability of occurrence rates, and a possible mismatch between the risk document BSI reviewed and the correct one.

| # | Action | Owner | Status |
|---|---|---|---|
| 7-1 | For each risk line where an occurrence rate is stated, add explicit traceability to the source of that rate: which PMS record, which clinical investigation, or which literature reference was used. BSI needs to be able to verify that rates were appropriately pulled from real data. | Alfonso | To do |
| 7-2 | If the correct risk document is not the one BSI reviewed, provide a clear cross-reference in the response. State explicitly which document contains the relevant risk lines, why it is the applicable document, and which specific lines correspond to the ones BSI queried. | Alfonso | To do |

Dependencies and critical path​

The following actions block others and should be prioritised:

  1. X-1 and X-2 (guidance framework documentation): unblock all CER rewriting work
  2. X-3 (disease categorization decision): unblocks Items 2b and 3a
  3. 3a-1 (prose validation strategy): structural backbone of Item 3; everything else in Item 3 hangs off this
  4. 6-0 (declare acceptable gaps in CER): strict prerequisite for all Item 6 PMCF rework
  5. 2a-1 through 2a-4 (device description and "pin" explanation): foundation of Item 2; the acceptance criteria work in 2b depends on the intended-use framing being settled first

Action items from the internal meetings (raw)​

The following action items were identified during the internal debrief sessions. They are listed here for traceability; the structured action plan above is the canonical tracking source.

| Owner | Action |
|---|---|
| The group | Use Legacy device market data (4 years) as real-world evidence. Mine PMS data to show how hospital outcomes improved when the device was used. |
| The group | Update physician surveys to include questions validating achievement of each specific benefit claim, per benefit, per hospital. |
| The group | Rewrite the CER to be stand-alone: include the full device description, the output explanation, why the output is a probability distribution over all ICD-11 classes (not binary), and how to read the CER. |
| The group | Add a high-level prose narrative at the start of the CER explaining the overall validation strategy, what types of data were used, why they are appropriate, and how the full evidence chain is structured. |
| The group | Justify data sufficiency explicitly: why 80 cases for melanoma is enough, why limited paediatric evidence is acceptable, etc. |
| The group | Link every PMCF activity to a specific identified gap in the CER. Remove activities that cannot be linked to a gap. |
| Jordi Barrachina | Identify gaps where clinical data evidence is insufficient. Find relevant literature to fill those gaps. Provide analysis (not just citation) of each paper. This is complex expert work; AI helps Jordi draft, but the clinical judgment is Jordi's. |
| The group | Clarify equivalence between the current device and the Legacy device in detail: list changes, assess the clinical relevance of each, and justify the use of historical studies. Note: the melanoma study was done with Legacy, and a section of the CER already explains it is the same device with no changes. This section needs to be expanded and made more prominent, not created from scratch. |
| Saray Ugidos | Handle post-market documentation (PMS plan). Provide relevant SOPs and procedures. Saray described this as the simpler part: "mostly it's just a matter of providing the procedures and that's it" ("más que nada es darle los procedimientos y ya lo"). Saray also planned to contact Arancha (possible external consultant) in connection with this work. |
| Taig Mac Carthy | Create this Item 0 folder with regulatory guidance PDFs (done). Generate AI-assisted markdown documentation of the guidances. Create an action plan for the non-conformities, including the intended-use definition. |
| Jordi Barrachina | Review MDCG 2020-6 to identify the specific sections relevant to the CER rewrite. Send them to Taig via Slack with highlights of which specific points to prioritise for AI prompts. |
| Jordi Barrachina | The dataset for the fototipos (skin phototype) study is already prepared. It could be relevant as additional real-world evidence. Jordi mentioned this in the March 26 meeting. |
| Jordi Barrachina | Include the NMSC_2025 publication (DOI) as clinical evidence for the malignancy sub-criterion. The paper has a public dataset and active patient recruitment, and adds squamous cell carcinoma data (currently underrepresented). Also review the 4 severity publications (APASI_2025, AUAS_2023, AIHS4_2023, ASCORAD_2022) Taig added, for concordance with the CER. Per the 2026-04-08 meeting. |
| Taig Mac Carthy | Items 3b-2, 3b-3, and 3b-4 reassigned from Jordi to Taig per the 2026-04-08 meeting. 3b-4 scope changed from "assess fototipos dataset" to "create a new MRMC study with Fitzpatrick 5–6 images" using the multireader-multicase repo. 3b-2 updated with detailed survey requirements (see meeting notes). |

Reference materials in this folder​

| File | Type | Description |
|---|---|---|
| `_resources/meeting-notes/2026-03-25-bsi-clarification-call.txt` | Plain text | Full auto-transcript of the BSI clarification call on March 25, 2026 (15:12). The primary source of BSI's unwritten expectations. Contains candid, off-the-record-level feedback from both Erin and Nick. |
| `_resources/meeting-notes/2026-03-25-internal-debrief-post-bsi.md` | Markdown | Notes + auto-transcript of the internal debrief held the same afternoon (16:22). Contains Gemini-generated meeting notes in Spanish summarising conclusions, plus the full transcript. |
| `_resources/meeting-notes/2026-03-26-internal-regulatory-strategy.md` | Markdown | Notes + auto-transcript of the internal meeting the following morning (March 26, 11:44). Resolves the regulatory strategy question: which guidance document(s) to follow. |
| `_resources/2026-03-26-bsi-meeting-calendar-invite.png` | Image | Calendar invite for the BSI meeting. Shows attendees from both sides. |
| `_resources/meeting-notes/2026-04-01-clinical-validation-weekly.md` | Markdown | Notes + auto-transcript of the internal meeting (April 1). Decided on the 3-benefit consolidation (Option C). Defined the survey framework for Legacy PMS evidence and identified MDCG 2020-6 §6.2.2 as the regulatory basis. Deadline extension to April 21 confirmed. |
| `_resources/meeting-notes/2026-04-07-internal-risks-pmcf-review.md` | Markdown | Notes + auto-transcript of the internal meeting (April 7). PMCF activity pruning decision: keep all activities. Severity evidence strategy discussion. |
| `_resources/meeting-notes/2026-04-08-internal-meeting-three-tasks.md` | Markdown | Notes + corrected auto-transcript of the internal meeting (April 8). Defines 4 tasks: (1) severity publications added, (2) NMSC_2025 for malignancy (Jordi), (3) new MRMC with dark phototypes (Taig), (4) physician surveys for Legacy RWE (Taig). Items 3b-2/3b-3/3b-4 reassigned from Jordi to Taig. |
| `_resources/meeting-notes/2026-03-24-medical-device-approval-weekly.md` | Markdown | Notes + auto-transcript of the medical device approval weekly (March 24), the day before the BSI call. Prep discussion: clinical evidence strategy for Erin, phototype study proposal, paediatric data justification, performance-claims traceability improvements. |
| `_resources/meeting-notes/2026-03-31-dudas-de-bsi.md` | Markdown | Notes + auto-transcript of the "Dudas de BSI" ("BSI questions") meeting (March 31). Malignancy classification: melanoma = Critical (Class 2B), BCC/SCC = Serious (Class 2A), justifying separate melanoma metrics. Severity gap strategy: keep the benefit, recognise the limitation, address it in PMCF. Claude Code setup for Jordi. |
| `_resources/meeting-notes/2026-04-02-call-legit-health-x-md101.md` | Markdown | Notes of the call with the MD101 consultancy (April 2, 08:59). Key meeting: MD101 advises a total CER rewrite from an accepted template. First mention of the "Source" column confusion (sample vs population). Paediatric population data requirement identified. Benefits vs performance claims distinction clarified for external reviewers. |
| `_resources/meeting-notes/2026-04-02-coordinar-envio-documentacion-md101.md` | Markdown | Notes + auto-transcript of the internal follow-up (April 2, 10:06). Contains the decision to rename the "Source" column to "Source of acceptance criteria", with values "Current study" / "State of the art" instead of "Sample" / "Population". Also: paediatric performance claims plan, paediatric age definition, documentation shipment. |
| `MDCG-2020-6/` | Folder (chunked PDF) | MDCG 2020-6 guidance document, split into 5-page chunks. The primary MDR-era guidance for clinical evaluation. Key sections: Appendix 1 (which MEDDEV 2.7.1 Rev 4 sections remain applicable), Appendix 3 (clinical data quality hierarchy for class III devices), and the checklist for conformity assessment of the evaluation report. |
| `MEDDEV-2-7-1-rev-4/` | Folder (chunked PDF) | MEDDEV 2.7.1 Rev 4 guidance (2016), split into 5-page chunks. Nick explicitly recommended this document. Defines the clinical evaluation process in stages 0–4. |
| `relationship-between-MDCG-2020-6-and-MEDDEV-2-7-1-rev-4/` | Folder (chunked PDF) | Single-page document explaining which sections of MEDDEV 2.7.1 Rev 4 remain applicable under MDR. Confirms the combined strategy: MDCG for structure, MEDDEV for process rigour. |
| `MDCG-2020-13/` | Folder (chunked PDF) | MDCG 2020-13 (31 pages, 7 chunks). The Clinical Evaluation Assessment Report (CEAR) template, the standardised form BSI uses to assess the CER. Understanding its structure reveals exactly what BSI checks and in what order. |
| `MDCG-2020-1/` | Folder (chunked PDF) | MDCG 2020-1 (22 pages, 5 chunks). Guidance on clinical evaluation of medical device software under MDR. Defines the three-pillar evidence framework (VCA + Technical Performance + Clinical Performance) specific to MDSW. The most directly applicable guidance for our AI-based device. |
All the information contained in this QMS is confidential. The recipient agrees not to transmit or reproduce the information, either directly or through third parties, by any means, without the prior written permission of Legit.Health (AI Labs Group S.L.)