Research and planning

Internal working document

This document is for internal use only. It contains analysis, gap identification, and response strategy for Item 3a of the BSI Clinical Review Round 1. It will not be included in the final response to BSI.

1. What BSI is asking

Item 3a says: "Please address all points above. Please ensure all relevant clinical data is identified and provide sufficient analysis (including traceability, details, discussion and justifications)."

The "points above" are the detailed observations in the Item 3 index, which span five areas. BSI is essentially saying: the CER does not provide a standalone, self-contained clinical evaluation — the reviewer could not follow the analysis, verify traceability, or confirm that all clinical data has been adequately assessed. This is a deficiency finding under Annex XIV and Article 61.

The core regulatory concern is Article 61(1): the manufacturer must specify and justify the level of clinical evidence necessary to demonstrate conformity with the relevant GSPRs, and that level must be appropriate to the device's characteristics and intended purpose.

BSI's six observation areas (mapped to regulatory requirements)

| # | Area | Key concern | Regulatory basis |
|---|------|-------------|------------------|
| 1 | Overall analysis | Clinical benefits/performance docs hard to follow; links broken; data pooling unclear; unmet acceptance criteria not discussed | Annex XIV 1(a), 2 |
| 2 | Clinical investigations (MC_DAO, IDEI) | Missing regulatory details: competent authority communication, registration, publication status, protocol deviations | Annex XIV (b), (c) |
| 3 | Evidence sufficiency | Population coverage, traceability to outcomes, methodology justifications, sample sizes | Annex XIV 2, Article 61(1) |
| 4 | Equivalence | High-level assessment; unclear what changed since MDD; contradictory statements about "improvements" | Annex XIV 3 |
| 5 | Clinical literature | No subject-device articles found in literature; unclear if SotA protocol applies to device literature search | Annex XIV (e), Article 2(48) |
| 6 | PMS data | Legacy device marketed since 2020 with 21 contracts and 4,500 reports but no market data in CER | Article 2(48), (51) |

2. What BSI reviewed

  • R-TF-015-003 Clinical Evaluation Report (CER)
  • R-TF-015-001 Clinical Evaluation Plan (CEP)
  • R-TF-015-011 State of the Art
  • Clinical investigation reports (R-TF-015-006 series)
  • R-TF-007-003 PSUR
  • R-TF-007-004 PMS Report
  • The "Clinical Benefits" and "Performance Claims" interactive components (rendered in the QMS site)

3. Relevant QMS documents and findings

3.1. CER — Commercialisation status (lines 427–441)

Line 429: "This product has not been commercialized yet. It is undergoing initial CE mark."

Line 441: "The legacy device has been commercialized since 2020."

These two statements are not contradictory — they refer to different regulatory entities:

  • "This product" = Legit.Health Plus (MDR version, not yet CE-marked under MDR)
  • "The legacy device" = Legit.Health (MDD version, on market since 2020)

However, BSI reads the CER as a single document about a single device, so the distinction is confusing. The CER must make this clearer and, critically, must integrate the legacy market experience data into the clinical evaluation rather than treating it as separate.

3.2. CER — PMS data gap (lines 651–662)

Lines 655–656: "Once on the market...the manufacturer will implement a proactive PMS process." (Future tense only.)

Line 660: "Since this clinical evaluation is performed for the initial CE-mark submission...there are currently no retrospective PMCF data."

This is the critical gap. The CER treats PMS as future-only, completely ignoring the legacy device's 4+ years of market experience. But the data exists:

  • R-TF-007-003 PSUR (lines 62–98): Documents 21 contracts, 4,500+ reports, 500+ practitioners, 1,000+ patients. Reports 7 non-serious incidents in 2023:

    • 4 customer complaints (API deserialization, timeout, algorithm performance mismatch × 2)
    • 3 internal non-conformities (image zoom bias, benign pigmentation scoring, misclassification)
    • Zero serious incidents, zero FSCAs
  • R-TF-007-004 PMS Report (lines 69–98): Confirms zero serious incidents, zero FSCAs. Documents 6 customer complaints (4 classified as non-serious incidents), trend analysis, and CAPA actions.

Root cause: The CER was drafted as if the device had no market history, ignoring that equivalence is claimed with the legacy device. Under Article 2(48), clinical data includes "safety or performance information generated from PMS" — the legacy PMS data is clinical data that must be analysed.

3.3. Clinical investigations — Regulatory details

MC_EVCDAO_2019 (mc-evcdao-2019/r-tf-015-006.mdx)

| Detail | Status | Location |
|--------|--------|----------|
| Ethics committee approval | Exists: CEIM approval February 10, 2020 | CIP r-tf-015-004.mdx, embedded approval PDFs |
| Competent authority (AEMPS) | Planned but not documented: CIP states CIR will be provided to AEMPS, but no record of actual communication | CIP line 191 |
| ClinicalTrials.gov registration | Not found anywhere in documentation | — |
| Publication status | Not found — no mention of whether the study was published in a journal | — |
| Protocol deviations | Partially documented: CIR states "no adverse events/deviations" but separately notes secondary objective (GP comparison) was abandoned due to recruitment difficulties | CIR line 417; CIP line 98 |
| Sample size discrepancy (200 → 105) | Documented: originally 200 planned with 40 melanoma cases; study closed at 105 subjects with 36 melanoma (34.29%). Justification: exceeded melanoma ratio target; impact of low-quality data compensated by DIQA exclusion | CIR line 653 |

IDEI_2023 (idei-2023/r-tf-015-006.mdx)

| Detail | Status | Location |
|--------|--------|----------|
| Ethics committee approval | Not explicitly documented in the CIR — compliance statement references regulatory adherence but no specific approval reference found | CIR line 55 |
| Competent authority | Not found | — |
| ClinicalTrials.gov registration | Partially found: CER line 740 mentions 2 records in ClinicalTrials.gov for IDEI_2023 and COVIDX_2022, but no NCT numbers provided | CER lines 685, 740 |
| Publication status | Not found | — |
| Protocol deviations | Not found | — |
| Sample size | 202 patients recruited (108 pigmented lesions + 96 androgenetic alopecia) — appears adequate for stated objectives | CIR lines 142–147 |

3.4. Acceptance criteria — Met vs. not met

BSI says: "Some acceptance criteria have not been met and further analysis/justification has not been performed."

From the CER and study reports:

  • MC_EVCDAO_2019: All primary acceptance criteria met (AUC 0.8482 > 0.8 threshold; sensitivity 0.7379 and specificity 0.8054)
  • IDEI_2023: AUC 0.7338 (95% CI: [0.5971–0.8554]) for malignancy detection from retrospective images — the point estimate falls below typical acceptance thresholds, although the upper bound of the 95% CI crosses them
  • All 8 pivotal studies: CER lines 956–966 state "all safety objectives...have been met"

Gap: The CER does not contain a per-study acceptance criteria reconciliation table showing which criteria were met and which were not, with justifications for any shortfalls. BSI needs this analysis presented explicitly. The <AcceptanceCriteriaTable> component renders this data but BSI noted the "links to CIs do not work" and the presentation is "difficult to follow."

Key issue — confirmed

The CER was delivered to BSI as a PDF export. The interactive React components (Clinical Benefits, Performance Claims, AcceptanceCriteriaTable) did not render interactively — links were broken and dynamic tables may have been incomplete or illegible. This is a confirmed document delivery problem that likely accounts for a significant portion of BSI's "difficult to follow" and "links do not work" observations.

Action required: All component-rendered data must be presented as static tables in the CER text, not delegated to React components, so the PDF export is self-contained and readable.

3.5. Data pooling and the globalValueOfDevice

BSI says: "It is unclear how/why data has been pooled or what the categories represent."

The globalValueOfDevice computation (weighted average: Σ(achievedValue × sampleSize) / Σ(sampleSize)) exists in packages/ui/src/components/PerformanceClaimsAndClinicalBenefits/types.ts but is not documented anywhere in the CER or CEP (a code sketch of the computation follows the list below). The CER must explain:

  1. Why data is pooled (to derive a global performance estimate across heterogeneous study populations)
  2. How pooling is done (weighted average by sample size, grouped by indication + user group + domain + metric + magnitude + performance subject)
  3. Limitations of this approach (heterogeneity across study designs, populations, and settings)
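
For internal reference, a minimal sketch of the weighted-average pooling described above. The StudyResult shape and function signature are assumptions for illustration only; the authoritative implementation lives in packages/ui/src/components/PerformanceClaimsAndClinicalBenefits/types.ts, where results are first grouped by indication, user group, domain, metric, magnitude, and performance subject.

```typescript
// Minimal sketch (assumed shapes, not the real types) of the pooling formula:
// globalValueOfDevice = Σ(achievedValue × sampleSize) / Σ(sampleSize)

interface StudyResult {
  achievedValue: number; // metric value achieved in one clinical investigation
  sampleSize: number;    // number of subjects contributing to that value
}

function globalValueOfDevice(results: StudyResult[]): number {
  const totalN = results.reduce((n, r) => n + r.sampleSize, 0);
  const weightedSum = results.reduce(
    (sum, r) => sum + r.achievedValue * r.sampleSize,
    0,
  );
  return weightedSum / totalN;
}

// Worked example with two hypothetical studies of 105 and 202 subjects:
// (0.85 × 105 + 0.73 × 202) / (105 + 202) ≈ 0.771
console.log(
  globalValueOfDevice([
    { achievedValue: 0.85, sampleSize: 105 },
    { achievedValue: 0.73, sampleSize: 202 },
  ]).toFixed(3),
); // "0.771"
```

Documenting exactly this formula, the grouping keys, and its limitations in the CER text would close the gap without pointing BSI at the code.
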
Cross-NC connection: Clinical Review Item 2b

This data pooling gap is also flagged in Item 2b as root cause #2: "Data pooling formula undocumented in regulatory documents." The fix must be coordinated — one description of the pooling methodology in the CER, referenced from both the clinical benefits analysis and the performance claims analysis.

3.6. Clinical literature search

BSI says: "§16.4.4 of the CER seems to state that there are no relevant articles identified on the subject device in the literature."

What the CER actually says (lines 740–744):

  • 12 articles found from PubMed (10) and Google Scholar (2) about the device
  • All 12 were excluded because they were "proprietary (internal) company articles describing preclinical (in-silico) and non-clinical results"
  • ClinicalTrials.gov yielded 2 records (IDEI_2023 and COVIDX_2022) — these are already counted as pre-market CIs

BSI's concern: Excluding all 12 articles means the clinical literature search found zero usable articles about the device. BSI questions whether the SotA search protocol (with its PICO framework and appraisal methodology) was also applied to the device-specific literature search, or whether different methods were used.

Gap: The CER does not clearly distinguish:

  1. The SotA literature search (227 records → 64 included, about similar/alternative devices and clinical practice) — documented in R-TF-015-011
  2. The device-specific literature search (15 records → 0 included, about the subject device) — briefly mentioned in CER lines 678–744

The methodology, keywords, and appraisal criteria for the device-specific search should be explicitly described, and any differences from the SotA protocol should be justified.

3.7. Equivalence assessment

BSI says: "§16.24 of the CER seems to state that improvements have been made. It is unclear what these changes are."

CER line 563: "The improvements introduced in Legit.Health Plus — mainly related to software version stabilisation and the consolidation of features..."

CER line 433: "All differences between the two versions are solely documentary..."

These are contradictory: "solely documentary" vs "software version stabilisation and consolidation of features." BSI rightly flags this. The CER must either:

  • Clarify that the changes are purely documentary (remove the "improvements" language), or
  • List what specifically changed and justify that the changes do not impact clinical safety/performance

The equivalence tables (lines 498–557) use "Same" for almost every row, which BSI reads as superficial. More detail is needed on:

  • What specific software changes were made (even if documentary)
  • How these were assessed for impact on performance
  • Reference to any design change records

3.8. Dermatoscopic camera concern

BSI says: "photos taken with dermatoscopic camera only (is this representative of how the device will be used)"

What the studies used:

  • MC_EVCDAO_2019: DermLite Foto X dermatoscope with smartphones (Pixel 3, Galaxy S10, iPhone X)
  • IDEI_2023: Mixed — dermatoscopic images for 87.5% of the retrospective arm; clinical images for 100% of the prospective arm
  • Other studies (BI_2024, PH_2024, SAN_2024): Varied — some use clinical images only

Intended device use: The IFU states the device accepts clinical (non-dermoscopic) images. The device includes DIQA (Dermatology Image Quality Assessment) to validate image quality.

Gap: The CER does not explicitly address whether the validation studies are representative of real-world image acquisition. If most studies used dermatoscopic images but the device is intended for clinical images taken by non-specialist users, there is a representativeness concern. The CER needs a discussion of:

  • Which studies used which image types
  • How performance varies between dermatoscopic and clinical images
  • Why the evidence is representative of intended use

3.9. Population coverage

BSI says: "How do the CIs sufficiently cover all/representative patient populations (age, pigment, sex, etc) and indications"

CER line 840: "Over 800 patients across eight pivotal studies."

Gap: The CER does not provide a demographic breakdown across studies. The PSUR notes GDPR-driven data minimisation (no demographic collection beyond clinical necessity), which is a legitimate regulatory constraint but creates a gap for BSI. The CER needs to:

  • Describe what demographic data IS available from each study
  • Justify any gaps in demographic coverage by reference to GDPR data minimisation principles
  • Discuss Fitzpatrick skin type coverage (critical for AI dermatology devices)
  • Address coverage of malignant/high-risk conditions specifically

4. Gap analysis

| # | BSI concern | What we have | What's missing | Severity |
|---|-------------|--------------|----------------|----------|
| 1 | Clinical benefits hard to follow | React components with programmatic claim assignment | Narrative CER analysis explaining benefits, methodology, limitations; static-friendly tables | High |
| 2 | Performance claims hard to follow, broken links | 148 claims in performanceClaims.ts, dynamic tables | Same as above; PDF-exportable summary; broken link fix | High |
| 3 | Data pooling unclear | globalValueOfDevice computation in code | CER documentation of pooling methodology, justification, limitations | High |
| 4 | Unmet acceptance criteria not discussed | All criteria appear to be met (or borderline with CIs crossing thresholds) | Per-study reconciliation table with explicit met/not-met status and justifications | High |
| 5 | CI regulatory details (AEMPS, registration, publication, deviations) | Ethics approval exists for MC_DAO; IDEI has ClinicalTrials.gov entries; deviation documented in CIP | CER text explicitly stating competent authority communications, NCT numbers, publication status, and protocol deviations for each CI | High |
| 6 | CI methodology justifications | Sample size calculations in CIPs; DIQA quality validation | CER narrative on photo quality removal rationale, dermatoscopic vs clinical images, MC_DAO 200→105 justification, sample size adequacy | High |
| 7 | Population coverage | 800+ patients across 8 studies; limited demographics due to GDPR | Demographic breakdown per study; Fitzpatrick coverage; malignant condition coverage analysis | Medium |
| 8 | Equivalence lacks detail | Equivalence tables exist; "Same" comparisons | Clarify "improvements" vs "documentary changes" contradiction; list specific changes; impact assessment | High |
| 9 | No subject-device literature | 12 articles found but excluded (preclinical/internal) | Explain why excluded articles are not clinical data; clarify protocol differences between SotA and device search | Medium |
| 10 | No PMS data in CER | PSUR and PMS Report document 7 non-serious incidents, 0 serious, 0 FSCAs | Integrate legacy PMS data into CER: complaints, incidents, trend analysis, safety conclusions | Critical |

5. Cross-NC connections

Clinical Review Item 2b — Clinical benefits, performance, safety vs SotA

Item 2b research identified overlapping gaps:

  1. SotA traceability: acceptanceCriteriaStateOfTheArtValue exists in data but provenance chain to specific SotA articles is broken → same gap applies to Item 3a's "traceability to outcomes" concern
  2. Data pooling documentation: globalValueOfDevice undocumented → same as Item 3a gap #3
  3. Top-1 accuracy: Some Top-1 metrics appear below SotA baselines → may be what BSI means by "some acceptance criteria have not been met"
  4. Use environment: 0ZC clinical benefit (remote care) vs intended use environment → tangentially relevant to IDEI study's teledermatology context

Clinical Review Item 3b — Data sufficiency justification

Item 3b asks for justification that "sufficient data in quantity and quality has been analyzed." The research here directly feeds into 3b:

  • Sample size adequacy (800+ patients, formal calculations)
  • Population representativeness
  • Data quality methodology (DIQA)

Technical Review M1.Q1 — IFU performance claims

M1.Q1 research has a cross-NC connection section identifying shared issues:

  • SotA baselines traceability
  • Top-1 accuracy vs IFU claims
  • Data pooling documentation
  • 239 vs 346 ICD-11 category reconciliation

6. Response strategy

Approach: Fix-then-reference

Per the BSI NC CLAUDE.md, the workflow is: analyse → fix the documentation → write the response referencing what was fixed.

Fixes required in the CER (R-TF-015-003)

Fix 1: Integrate legacy PMS data (Critical)

Add a new subsection under "Clinical data generated from risk management and PMS activities" that:

  • Summarises the legacy device's market experience (2020–2024: 21 contracts, 4,500+ reports, 500+ practitioners, 1,000+ patients)
  • Lists all non-serious incidents from the PSUR/PMS Report (7 incidents in 2023)
  • Confirms zero serious incidents and zero FSCAs
  • Analyses trends and CAPA outcomes
  • Draws safety conclusions from the market data
  • References R-TF-007-003 PSUR and R-TF-007-004 PMS Report

Fix 2: Add per-CI regulatory detail table

For each clinical investigation, add explicit documentation of the following (an illustrative layout follows the list):

  • Competent authority communication (or statement that none was required for observational studies under national law)
  • Clinical trial registration status and numbers
  • Publication status
  • Protocol deviations (or explicit "none" statement)
  • Ethics committee approval reference
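
As an illustration only, one possible layout for MC_EVCDAO_2019, populated from the findings in §3.3. The bracketed cells are placeholders pending the answers in §8, not confirmed facts:

| Item | MC_EVCDAO_2019 |
|------|----------------|
| Ethics committee approval | CEIM approval, February 10, 2020 (CIP R-TF-015-004, embedded approval PDFs) |
| Competent authority (AEMPS) | [pending: records of actual communication to be obtained] |
| Clinical trial registration | [pending: NCT number, if any, to be confirmed] |
| Publication status | [pending] |
| Protocol deviations | Secondary objective (GP comparison) abandoned due to recruitment difficulties; no adverse events/deviations otherwise reported |
| Sample size | Closed at 105 of 200 planned subjects; melanoma ratio 34.29% exceeded target |
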

Fix 3: Add acceptance criteria reconciliation

Add a per-study table showing the following (see the illustrative shape after this list):

  • Each acceptance criterion
  • The achieved value
  • Met/not-met status
  • Justification for any borderline or unmet criteria
  • This should be presented as static content (not React components) to ensure readability in PDF
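
An illustrative shape (not final content), using the figures already cited in §3.4. The IDEI_2023 threshold cell is an assumption to be confirmed against the CIP:

| Study | Criterion | Threshold | Achieved | Met? | Justification if not met |
|-------|-----------|-----------|----------|------|--------------------------|
| MC_EVCDAO_2019 | AUC (primary) | > 0.8 | 0.8482 | Yes | n/a |
| IDEI_2023 | AUC, malignancy detection (retrospective images) | [per CIP] | 0.7338 (95% CI 0.5971–0.8554) | Borderline | Point estimate below threshold; upper CI bound crosses it; discuss study design and image provenance |
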

Fix 4: Document data pooling methodology

Add a subsection to the CER explaining the globalValueOfDevice computation (see the sketch in §3.5):

  • Formula and grouping criteria
  • Justification for pooling across studies
  • Limitations (heterogeneity, design differences)
  • Cross-reference to the CEP's study design rationale

Fix 5: Clarify equivalence "improvements" language

The changes between legacy and Plus are confirmed to be minor technical changes (software version stabilisation, feature consolidation) — not purely documentary. This means the CER's line 433 ("solely documentary") is inaccurate and must be corrected.

Action required:

  • Remove the "solely documentary" claim from line 433
  • Create a formal change list comparing legacy to Plus, with each change explicitly assessed for impact on clinical safety and performance (no such document currently exists — it needs to be created)
  • Update the equivalence section to reference the change list and conclude that no change impacts clinical safety/performance

Fix 6: Expand clinical literature discussion

  • Explain why the 12 excluded articles are preclinical/non-clinical and therefore not "clinical data" per Article 2(48)
  • Clarify whether the device-specific search used the same protocol as the SotA search
  • If different, document the differences and justify

Fix 7: Add methodology justification narrative

For each study, add discussion of:

  • Why image quality exclusion is appropriate (DIQA mirrors real-world use because the device itself rejects poor quality images)
  • Dermatoscopic vs clinical image coverage across the study portfolio
  • MC_EVCDAO_2019 sample size rationale (exceeded melanoma ratio target at 105 subjects; statistical power maintained)
  • Overall sample size adequacy across the 800+ patient portfolio

Fix 8: Improve population coverage narrative

  • Compile available demographic data from each study
  • Address Fitzpatrick skin type representation
  • Address malignant/high-risk condition coverage (melanoma, SCC, BCC representation across studies)
  • Justify any demographic gaps with reference to GDPR data minimisation and study design constraints

Fixes required elsewhere

Clinical Benefits and Performance Claims components

The interactive components need to be made BSI-reviewer-friendly:

  • Fix broken links to clinical investigations
  • Consider generating static summary tables that can be included in the CER as a fallback (a generation sketch follows this list)
  • Ensure the PDF export (if provided) renders the data correctly
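
A possible build-step sketch for that static fallback, assuming a simplified Claim shape. The interface below is an assumption for illustration; the real fields live in performanceClaims.ts and types.ts and will differ:

```typescript
// Sketch only: render performance-claims data to a static markdown table so
// the PDF export is self-contained. The Claim interface is a simplified
// assumption, not the real type from performanceClaims.ts.

interface Claim {
  studyId: string;       // e.g. "MC_EVCDAO_2019"
  metric: string;        // e.g. "AUC"
  achievedValue: number;
  sampleSize: number;
}

function toMarkdownTable(claims: Claim[]): string {
  const header = '| Study | Metric | Achieved | n |\n| --- | --- | --- | --- |';
  const rows = claims.map(
    (c) =>
      `| ${c.studyId} | ${c.metric} | ${c.achievedValue.toFixed(4)} | ${c.sampleSize} |`,
  );
  return [header, ...rows].join('\n');
}

// Example: one claim from §3.4 renders as a markdown table with one data row.
console.log(
  toMarkdownTable([
    { studyId: 'MC_EVCDAO_2019', metric: 'AUC', achievedValue: 0.8482, sampleSize: 105 },
  ]),
);
```
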
Document delivery problem

A significant portion of BSI's Item 3 observations may stem from the CER being reviewed as a rendered web page or PDF where React components did not render correctly. This is a cross-cutting issue that affects Items 2a, 2b, 3a, and 3b. We need a strategy for presenting dynamic data to BSI in a format they can review.

7. Risk assessment

| Risk | Impact | Mitigation |
|------|--------|------------|
| PMS data gap is the most visible deficiency — BSI explicitly states "no discussion of data from the market...is found" | If not fixed, BSI will escalate: this is a direct violation of Article 2(48) (clinical data includes PMS) | Priority 1: integrate legacy PMS data into CER before responding |
| Equivalence "improvements" contradiction could undermine the entire equivalence claim | If BSI concludes the devices are NOT equivalent, all legacy clinical data becomes inapplicable | Clarify language immediately; remove "improvements" or provide detailed impact assessment |
| Interactive component rendering issues could make our response unintelligible to BSI | If BSI cannot read the clinical benefits/performance data, they will escalate regardless of content quality | Provide static summary tables alongside or instead of component references |
| MC_EVCDAO_2019 200→105 could be read as an underpowered study | BSI may question whether the reduced sample size maintains statistical validity | Document that the melanoma ratio (34.29%) exceeded the target (20%), maintaining statistical power for the primary endpoint |

8. Open items

Resolved

| # | Item | Resolution |
|---|------|------------|
| 4 | Document delivery format | Confirmed: PDF export. React components did not render interactively. This is a confirmed root cause for "links don't work" and "hard to follow" observations. |
| 5 | Fitzpatrick data | Confirmed: some studies have Fitzpatrick data. Need to identify which ones and compile the coverage analysis. |
| — | Equivalence: "improvements" vs "documentary" | Confirmed: minor technical changes (not purely documentary). A formal change list with impact assessment needs to be created. |

Pending — questions for Jordi

See question-for-jordi.mdx for the full list. Summary:

  1. AEMPS communications: AEMPS was notified, but we need the actual records (letters, acknowledgements, classification decisions) for each study.
  2. ClinicalTrials.gov NCT numbers: CER references 2 registrations but no NCT numbers are provided. Need the numbers and registration status for all studies.
  3. Publication status: Whether any of the 9 CIs have been published in peer-reviewed journals.
  4. Fitzpatrick details: Which specific studies have Fitzpatrick data, and whether we can obtain the breakdown.