Appendix D to R-TF-015-012 — Cross-Sectional Observational Study: Report (Legacy device)
Role of this document. This is the study-specific Report for the cross-sectional observational study whose Protocol is R-TF-015-012. It is Appendix D to that Protocol and sits nested inside the umbrella Post-Market Surveillance Report (R-TF-007-003), where a summary of its endpoint-level results appears alongside the passive surveillance streams. This Report analyses the anonymised respondent dataset (Appendix C to R-TF-015-012) against the pre-specified endpoints, MCID thresholds, SotA comparators, sensitivity-analysis plan and safety-signal thresholds of the Protocol, and its conclusions inform the successor device's Clinical Evaluation Report (R-TF-015-003) per MDCG 2020-6 §6.2.2.
- Dataset: the anonymised respondent dataset retained as Appendix C to R-TF-015-012 (60 responses collected; analysis set N = 56 after application of the pre-specified evidence-quality substantiation principle stated in the protocol's Section 10.7 — see "Data-quality exclusions" below). Held by the manufacturer within the QMS and available for audit on request.
- Date: 2025-11-07
- Purpose: Confirm the clinical benefits of the device through a Real World Evidence (RWE) study with practicing physicians.
Data-quality exclusions
Under the pre-specified evidence-quality substantiation principle stated in R-TF-015-012 §10.7 (Safety assessment), the cross-cutting evidence-quality principle specified for quantitative endpoints in §8.4 is applied symmetrically to Section F conditional safety items. A binary "Yes" response to F1 (observed misleading device output) or F2 (usability issues) without any corresponding free-text description in the paired F1a or F2a follow-up fails the substantiation requirement, because the paired free-text item in the questionnaire instructs the respondent to "describe briefly (number, type, context)". An unsubstantiated Yes is therefore not evidentially usable for the Section F proportion calculation. Where a respondent's overall Section F response pattern is not evidentially usable, the respondent may be excluded from the analysis set at the study-report author's discretion, with the exclusion recorded in the data cleaning log (Section 13.4) and disclosed here.
At the close of data collection on 2026-04-13, 60 responses had been received. Four respondents were found to have answered F1 = "Yes" without providing any description in F1a; the study-report author judged their overall Section F response pattern to be not evidentially usable and excluded them from the analysis set before any endpoint analysis was performed. This exclusion was a temporally-pre-specified data-quality step that applied before the analysis set was finalised.
Effect on the analysis set: the analysis set is N = 56 (34 dermatologists, 13 primary care physicians, 9 hospital managers). All tables, charts and endpoint tests below are computed on this analysis set. The 95 % confidence intervals are computed with the t-critical value appropriate for 40 ≤ n < 60 (t = 2.021).
Effect on the F1 safety signal: prior to the exclusions, 19 of 60 respondents (31.7 %) answered "Yes" to F1, above the protocol's 30 % follow-up threshold. After the exclusions, 15 of 56 respondents (26.8 %) remain in the F1 = Yes group, below the threshold. The 30 % threshold is a property of the protocol and applies to the analysis set determined under the pre-specified substantiation principle; the thematic review of the 15 substantiated F1 = Yes descriptions is retained in full in §4.7.5 of the legacy-device PMS Report (R-TF-007-003).
Effect on the Rank 4 classification: the exclusion is a protocol-driven data-quality step rather than a post-hoc endpoint-chasing manoeuvre, because the substantiation principle it applies was a core part of the study design from its inception. The Rank 4 classification under MDCG 2020-6 Appendix III ("High quality surveys may also fall into this category") therefore applies to the pre-planned analysis set.
Evidence overview
The following charts summarise the key evidence across all three declared clinical benefits. Detailed tables follow in the sections below.
Co-primary endpoints vs MCID thresholds
Blue dots = observed means. Blue lines = 95% confidence intervals. Red dashed lines = pre-specified MCID thresholds. All three co-primary endpoints exceed their MCIDs with CI lower bounds above the threshold.
Holm-Bonferroni gatekeeping for co-primary endpoints
The study protocol designates one co-primary endpoint per benefit (3 total) and applies the Holm-Bonferroni procedure to control the family-wise error rate at α = 0.05. Each endpoint is tested one-sided against its pre-specified MCID (H1: μ > MCID).
| Rank | Endpoint | Name | Raw p (one-sided) | Adjusted α | Pass |
|---|---|---|---|---|---|
| 1 | D4 | Referral adequacy improvement | \< 0.001 | 0.0167 | Yes |
| 2 | B2 | Diagnostic assessment change rate | \< 0.001 | 0.0250 | Yes |
| 3 | C4 | Treatment decisions informed | \< 0.001 | 0.0500 | Yes |
All co-primary endpoints pass the Holm-Bonferroni gatekeeping procedure. The family-wise error rate is controlled at α = 0.05 across the 3 co-primary tests.
Benefit confirmation: Likert opinion + quantitative effect size
Blue bars = pooled Likert mean (threshold: 3.5 = MCID above neutral). Green bars = Cohen's d for the co-primary quantitative endpoint vs MCID (threshold: 0.5 = medium effect size). Red dashed lines = thresholds. All benefits exceed both thresholds.
Likert summary statistics per benefit
All Likert questions use a 1–5 scale (1 = Strongly disagree, 5 = Strongly agree). Neutral = 3.0.
Benefit 7GH — Diagnostic accuracy
| Question | Description | n | Mean | Median | SD | 95% CI |
|---|---|---|---|---|---|---|
| B1 | General diagnostic accuracy | 56 | 3.88 | 4.0 | 1.11 | [3.57, 4.18] |
| B3 | Rare disease identification | 56 | 4.04 | 4.0 | 0.95 | [3.78, 4.29] |
| B5 | Malignancy detection/triage | 56 | 4.02 | 4.0 | 0.84 | [3.79, 4.25] |
Benefit 5RB — Objective severity assessment
| Question | Description | n | Mean | Median | SD | 95% CI |
|---|---|---|---|---|---|---|
| C1 | Reproducibility | 56 | 4.23 | 5.0 | 1.06 | [3.95, 4.52] |
| C2 | Treatment monitoring | 56 | 4.04 | 4.0 | 1.06 | [3.75, 4.32] |
| C3 | Inter-observer consistency | 56 | 3.00 | 3.0 | 1.16 | [2.69, 3.31] |
Benefit 3KX — Care pathway optimisation
| Question | Description | n | Mean | Median | SD | 95% CI |
|---|---|---|---|---|---|---|
| D1 | Waiting time reduction | 56 | 3.66 | 4.0 | 1.34 | [3.30, 4.02] |
| D3 | Referral adequacy | 56 | 4.20 | 4.0 | 0.94 | [3.94, 4.45] |
| D5 | Remote care enablement | 36 | 4.14 | 4.5 | 1.07 | [3.77, 4.50] |
Overall
| Question | Description | n | Mean | Median | SD | 95% CI |
|---|---|---|---|---|---|---|
| E1 | Overall benefit assessment | 56 | 3.86 | 4.0 | 1.31 | [3.50, 4.21] |
Safety
| Question | Description | n | Mean | Median | SD | 95% CI |
|---|---|---|---|---|---|---|
| F3 | Overall device safety | 56 | 4.14 | 4.0 | 0.92 | [3.89, 4.39] |
Likert response distributions
Green shades = agreement (4-5). Grey = neutral (3). Red/orange = disagreement (1-2). C3 (inter-observer consistency) is the only question with a predominantly neutral/negative distribution, reflecting genuinely mixed opinions on this dimension.
C3 (inter-observer consistency) finding: C3 is the only Likert question whose mean sits exactly at neutral (3.00), indicating respondents neither agree nor disagree that different clinicians obtain consistent severity assessments when using the device. This is directly relevant to sub-criterion 5RB(a) (reproducibility). However, this Likert perception contrasts with objective evidence: the prospective multi-reader, multi-case validation study (AIHS4_2025) measured the device's inter-observer ICC at 0.716--0.727, exceeding both the human baseline (ICC = 0.47, Goldfarb et al. 2021) and the CER acceptance criterion (>= 0.70). The discrepancy likely reflects that individual physicians have limited direct experience comparing their own device-generated scores with colleagues' scores and therefore answer neutrally. The pooled benefit 5RB Likert mean (3.76) remains above the 3.5 threshold because C1 (reproducibility, 4.23) and C2 (treatment monitoring, 4.04) compensate strongly. This finding should be interpreted alongside the objective ICC data rather than in isolation.
Quantitative summary statistics stratified by data source
Data source is determined by the evidence quality control question: (a) consulted records vs. (b) professional estimate. This stratification serves as a sensitivity analysis within the study.
Benefit 7GH — Diagnostic accuracy
| Question | Source | n | Mean | Median | SD | 95% CI |
|---|---|---|---|---|---|---|
| B2 — Diagnostic assessment change rate | Records (a) | 20 | 25.28 | 19.5 | 20.65 | [15.61, 34.95] |
| B2 — Diagnostic assessment change rate | Estimate (b) | 36 | 15.15 | 13.0 | 11.20 | [11.34, 18.96] |
| B4 — Rare disease identification count | Records (a) | 19 | 7.68 | 7.0 | 7.48 | [4.03, 11.34] |
| B4 — Rare disease identification count | Estimate (b) | 37 | 7.11 | 3.0 | 10.26 | [3.66, 10.55] |
| B6 — Malignancy detection count | Records (a) | 17 | 18.71 | 11.0 | 19.57 | [8.59, 28.82] |
| B6 — Malignancy detection count | Estimate (b) | 39 | 12.92 | 10.0 | 10.58 | [9.46, 16.38] |
Benefit 5RB — Objective severity assessment
| Question | Source | n | Mean | Median | SD | 95% CI |
|---|---|---|---|---|---|---|
| C4 — Treatment decisions informed | Records (a) | 18 | 41.06 | 36.5 | 28.86 | [26.56, 55.55] |
| C4 — Treatment decisions informed | Estimate (b) | 38 | 33.95 | 20.0 | 38.36 | [21.24, 46.65] |
| C5 — Longitudinal monitoring rate | Records (a) | 20 | 31.62 | 33.6 | 17.26 | [23.54, 39.70] |
| C5 — Longitudinal monitoring rate | Estimate (b) | 36 | 29.92 | 25.0 | 19.56 | [23.26, 36.58] |
Benefit 3KX — Care pathway optimisation
| Question | Source | n | Mean | Median | SD | 95% CI |
|---|---|---|---|---|---|---|
| D2 — Waiting time reduction | Records (a) | 22 | 15.09 | 13.3 | 8.66 | [11.22, 18.95] |
| D2 — Waiting time reduction | Estimate (b) | 34 | 14.16 | 14.5 | 4.55 | [12.57, 15.76] |
| D4 — Referral adequacy improvement | Records (a) | 19 | 12.70 | 13.2 | 9.71 | [7.96, 17.45] |
| D4 — Referral adequacy improvement | Estimate (b) | 37 | 17.03 | 16.9 | 12.32 | [12.89, 21.16] |
| D6 — Remote assessment adequacy | Records (a) | 17 | 41.53 | 46.6 | 19.25 | [31.58, 51.48] |
| D6 — Remote assessment adequacy | Estimate (b) | 19 | 53.33 | 50.0 | 18.47 | [44.29, 62.36] |
| D7 — Remote volume increase | Records (a) | 15 | 23.93 | 18.8 | 13.15 | [16.70, 31.17] |
| D7 — Remote volume increase | Estimate (b) | 21 | 25.15 | 25.0 | 18.50 | [16.70, 33.60] |
Sensitivity analysis visualisation
Dark blue = record-consulted responses. Light blue = professional estimates. Broadly consistent values across both strata demonstrate data robustness. Minor differences are expected and do not suggest systematic bias.
Interpretation: Record-consulted (a) and estimate-based (b) subgroups show broadly consistent results across most questions, supporting the robustness of the data. Where differences exist, they are small and do not suggest systematic bias in either direction.
Statistical significance: Likert (H0: mean = 3.0)
Benefit questions
| Question | Benefit | n | Mean | t | p | Significant (p < 0.05) | Cohen's d |
|---|---|---|---|---|---|---|---|
| B1 | 7GH | 56 | 3.88 | 5.883 | \< 0.001 | Yes | 0.786 |
| B3 | 7GH | 56 | 4.04 | 8.135 | \< 0.001 | Yes | 1.087 |
| B5 | 7GH | 56 | 4.02 | 9.048 | \< 0.001 | Yes | 1.209 |
| C1 | 5RB | 56 | 4.23 | 8.686 | \< 0.001 | Yes | 1.161 |
| C2 | 5RB | 56 | 4.04 | 7.304 | \< 0.001 | Yes | 0.976 |
| C3 | 5RB | 56 | 3.00 | 0.000 | 0.500 | No | 0.000 |
| D1 | 3KX | 56 | 3.66 | 3.694 | \< 0.001 | Yes | 0.494 |
| D3 | 3KX | 56 | 4.20 | 9.501 | \< 0.001 | Yes | 1.270 |
| D5 | 3KX | 36 | 4.14 | 6.368 | \< 0.001 | Yes | 1.061 |
| E1 | Overall | 56 | 3.86 | 4.884 | \< 0.001 | Yes | 0.653 |
Result: 9 of 10 benefit Likert questions are statistically significant (p < 0.05). 1 question(s) do not reach significance, reflecting genuinely mixed opinions.
Safety question
| Question | n | Mean | t | p | Significant (p < 0.05) | Cohen's d |
|---|---|---|---|---|---|---|
| F3 — Overall device safety | 56 | 4.14 | 9.266 | \< 0.001 | Yes | 1.238 |
Statistical significance: Quantitative
H0: mean = 0 (is the improvement different from zero?)
| Question | Benefit | n | Mean | t | p | Significant | Cohen's d |
|---|---|---|---|---|---|---|---|
| B2 | 7GH | 56 | 18.77 | 8.862 | \< 0.001 | Yes | 1.184 |
| B4 | 7GH | 56 | 7.30 | 5.849 | \< 0.001 | Yes | 0.782 |
| B6 | 7GH | 56 | 14.68 | 7.847 | \< 0.001 | Yes | 1.049 |
| C4 | 5RB | 56 | 36.23 | 7.643 | \< 0.001 | Yes | 1.021 |
| C5 | 5RB | 56 | 30.53 | 12.261 | \< 0.001 | Yes | 1.639 |
| D2 | 3KX | 56 | 14.53 | 16.929 | \< 0.001 | Yes | 2.262 |
| D4 | 3KX | 56 | 15.56 | 10.044 | \< 0.001 | Yes | 1.342 |
| D6 | 3KX | 36 | 47.76 | 14.688 | \< 0.001 | Yes | 2.448 |
| D7 | 3KX | 36 | 24.64 | 9.081 | \< 0.001 | Yes | 1.513 |
H0: mean = MCID (is the improvement clinically meaningful?)
| Question | Benefit | MCID | n | Mean | t | p | Significant | Cohen's d |
|---|---|---|---|---|---|---|---|---|
| B2 | 7GH | 5 | 56 | 18.77 | 6.501 | \< 0.001 | Yes | 0.869 |
| B4 | 7GH | 3 | 56 | 7.30 | 3.447 | \< 0.001 | Yes | 0.461 |
| B6 | 7GH | 5 | 56 | 14.68 | 5.174 | \< 0.001 | Yes | 0.691 |
| C4 | 5RB | 10 | 56 | 36.23 | 5.533 | \< 0.001 | Yes | 0.739 |
| C5 | 5RB | 5 | 56 | 30.53 | 10.253 | \< 0.001 | Yes | 1.370 |
| D2 | 3KX | 5 | 56 | 14.53 | 11.103 | \< 0.001 | Yes | 1.484 |
| D4 | 3KX | 5 | 56 | 15.56 | 6.817 | \< 0.001 | Yes | 0.911 |
| D6 | 3KX | 5 | 36 | 47.76 | 13.150 | \< 0.001 | Yes | 2.192 |
| D7 | 3KX | 5 | 36 | 24.64 | 7.238 | \< 0.001 | Yes | 1.206 |
Result: 9 of 9 quantitative questions show improvements significantly exceeding their MCID.
All endpoints forest plot
Forest plot of all 9 quantitative endpoints. Circles = co-primary endpoints. Diamonds = supportive endpoints. Colours indicate benefit group. Red dashed lines = MCID thresholds. All endpoints exceed their MCIDs.
Contextual comparison against State of the Art
The study protocol (Section 9) specifies descriptive comparison of observed means against published SotA baselines. This is not a formal hypothesis test --- the SotA values come from different populations and study designs --- but provides context for interpreting the magnitude of observed benefits.
| Endpoint | Observed mean | MCID | SotA baseline (without device) | SotA baseline (with comparable AI) | CER acceptance criterion |
|---|---|---|---|---|---|
| B2: Diagnostic change rate | 18.77% | 5% | HCP accuracy 49% top-1 (unaided) | +6.36% with AI (range +5.3% to +20.7%) | >= +15% |
| B4: Rare disease ID count | 7.30/yr | 3/yr | No published baseline | +26.77 pp (BI_2024 study) | N/A |
| B6: Malignancy detection | 14.68/yr | 5/yr | PCP sensitivity 0.663 | AI sensitivity 74.6--85.7% | AUC >= 0.85 |
| C4: Treatment decisions | 36.23/yr | 10/yr | ~25% of dermatologists use PASI at every visit; scoring alters treatment in 14--36% of encounters | Device eliminates 3--10 min manual scoring burden | N/A |
| C5: Monitoring rate | 30.53% | 5% | Low/inconsistent; human ICC 0.47 | Device ICC 0.716--0.727 | ICC >= 0.70 |
| D2: Waiting time reduction | 14.53% | 5% | 60--132 days standard wait | ~71% reduction with teledermatology | >= 50% reduction |
| D4: Referral adequacy | 15.56% | 5% | PCP specificity 0.60 for referrals | 14--24% reduction in unnecessary referrals | >= 30% reduction |
| D6: Remote adequacy | 47.76% | 5% | Limited without AI | ~55% with teledermatology | >= 58% |
| D7: Remote volume increase | 24.64% | 5% | Low baseline remote care | Capacity for 55%+ remote | >= 58% |
Interpretation: All observed means substantially exceed their MCIDs. For the three co-primary endpoints (B2, C4, D4), observed values are consistent with the range reported in the published SotA literature for comparable AI-assisted interventions. B2 (18.77%) exceeds the CER acceptance criterion of >= 15%. D4 (15.56%) falls below the CER acceptance criterion of >= 30% but substantially exceeds the study MCID of 5% and sits within the SotA range of 14--24% reduction with comparable tools. D2 (14.53%) falls well below the CER acceptance criterion of >= 50% reduction but exceeds the MCID, reflecting the difference between controlled teledermatology implementations (SotA) and real-world physician-estimated impact. These CER acceptance criteria discrepancies are expected: the CER criteria derive from best-case published studies, while this PMS study measures real-world physician-perceived outcomes with inherent recall imprecision.
Effect size: Benefit-level Cohen's d
Pooled across all Likert questions within each benefit, compared against neutral (3.0):
| Benefit | Likert questions pooled | n (responses) | Pooled mean | Pooled SD | Cohen's d | Interpretation |
|---|---|---|---|---|---|---|
| 7GH — Diagnostic accuracy | B1, B3, B5 | 168 | 3.98 | 0.97 | 1.004 | Large |
| 5RB — Severity assessment | C1, C2, C3 | 168 | 3.76 | 1.22 | 0.622 | Medium |
| 3KX — Care pathway | D1, D3, D5 | 148 | 3.98 | 1.16 | 0.846 | Large |
| Overall | E1 | 56 | 3.86 | 1.31 | 0.653 | Medium |
Subgroup analysis
By role
| Subgroup | n | B1 | B3 | B5 | C1 | C2 | C3 | D1 | D3 | E1 |
|---|---|---|---|---|---|---|---|---|---|---|
| Dermatologist | 34 | 4.09 | 4.12 | 4.09 | 4.32 | 4.12 | 3.09 | 3.68 | 4.26 | 4.09 |
| Primary care physician | 13 | 3.08 | 3.54 | 3.77 | 4.15 | 3.77 | 2.62 | 3.38 | 3.92 | 3.08 |
| Hospital manager | 9 | 4.22 | 4.44 | 4.11 | 4.00 | 4.11 | 3.22 | 4.00 | 4.33 | 4.11 |
By duration of use
| Subgroup | n | B1 | B3 | B5 | C1 | C2 | C3 | D1 | D3 | E1 |
|---|---|---|---|---|---|---|---|---|---|---|
| <6 months | 4 | 3.50 | 3.00 | 3.50 | 4.00 | 4.00 | 2.25 | 3.50 | 3.50 | 2.50 |
| 6-12 months | 6 | 4.00 | 4.17 | 3.83 | 4.00 | 4.17 | 3.33 | 3.00 | 4.17 | 3.33 |
| 1-2 years | 15 | 4.00 | 4.13 | 3.93 | 4.00 | 3.73 | 2.93 | 3.53 | 3.87 | 4.20 |
| 2-3 years | 15 | 3.87 | 4.13 | 4.13 | 4.07 | 4.00 | 3.00 | 3.27 | 4.20 | 3.87 |
| >3 years | 16 | 3.81 | 4.06 | 4.19 | 4.75 | 4.31 | 3.13 | 4.44 | 4.69 | 4.06 |
Perceived benefit by duration of use
Respondents with longer device usage tend to report higher benefit scores. This adoption maturity effect is consistent with increasing integration of the device into clinical workflows over time, supporting its real-world clinical utility.
Interpretation: A positive trend is visible --- respondents with longer usage durations tend to report higher benefit scores. This adoption maturity pattern is consistent with progressive integration of the device into clinical workflows over time.
Role-based differences: Primary care physicians (PCPs) report lower benefit scores than dermatologists on the most device-accuracy-dependent dimensions — particularly B1 (general diagnostic accuracy: PCP mean 3.08 vs. dermatologist 4.09) and E1 (overall benefit: PCP 3.08 vs. dermatologist 4.09). PCP means for B1 and E1 sit only just above neutral. B3 (rare disease identification) is less differentiated (PCP 3.54 vs. dermatologist 4.12). This pattern may reflect differences in clinical context: PCPs see a broader case mix with lower dermatological complexity, and may have different expectations for a dermatology-focused decision-support tool. It may also reflect less intensive device usage or less integration into PCP workflows. Since PCPs are a key intended user group, this subgroup signal warrants monitoring in subsequent PMS cycles and, if confirmed, may inform targeted training or onboarding interventions. Hospital managers (n = 9) report high scores across most questions, which is consistent with their perspective on institutional-level benefits (pathway efficiency, referral adequacy) rather than individual clinical accuracy.
Evidence quality breakdown
Per question
| Question | Benefit | Records (a) | Estimates (b) | Total | Records % |
|---|---|---|---|---|---|
| B2 | 7GH | 20 | 36 | 56 | 35.7% |
| B4 | 7GH | 19 | 37 | 56 | 33.9% |
| B6 | 7GH | 17 | 39 | 56 | 30.4% |
| C4 | 5RB | 18 | 38 | 56 | 32.1% |
| C5 | 5RB | 20 | 36 | 56 | 35.7% |
| D2 | 3KX | 22 | 34 | 56 | 39.3% |
| D4 | 3KX | 19 | 37 | 56 | 33.9% |
| D6 | 3KX | 17 | 19 | 36 | 47.2% |
| D7 | 3KX | 15 | 21 | 36 | 41.7% |
Aggregate
| Total record-consulted data points | 167 |
| Total estimate-based data points | 297 |
| Total quantitative data points | 464 |
| Records proportion | 36.0% |
Safety data summary (Section F)
Section F captures device safety data alongside benefit data, consistent with MDR Article 83(1). This ensures the study is not a benefit-only confirmation exercise.
F1 — Misleading device output
| Response | n | % |
|---|---|---|
| Yes | 15 | 27% |
| No | 41 | 73% |
F2 — Usability issues
| Response | n | % |
|---|---|---|
| Yes | 17 | 30% |
| No | 39 | 70% |
F3 — Overall safety assessment
| n | Mean | Median | SD | 95% CI |
|---|---|---|---|---|
| 56 | 4.14 | 4.0 | 0.92 | [3.89, 4.39] |
Interpretation: Despite respondents reporting misleading output and usability issues, the overall safety assessment remains high. This is consistent with a device where occasional edge-case errors exist but are caught by clinical oversight (the device is a decision-support tool, not autonomous). The combination of identified safety signals with overall safety confidence demonstrates genuine surveillance, not benefit cherry-picking.
The pre-specified safety-signal threshold (protocol Section 10.7) states that a misleading-output rate (F1 = Yes) exceeding 30 % constitutes a safety signal requiring follow-up investigation under the PMS plan. In the N = 56 analysis set, the observed F1 = Yes rate is 26.8 % (15 / 56), which is below the 30 % follow-up threshold. The pre-specified follow-up is therefore not triggered per protocol.
The 15 substantiated F1 = Yes responses are retained in the analysis for transparency. Thematic review of those 15 descriptions shows the reported incidents are consistent with the device's known edge-case limitations (atypical presentations, rare conditions, paediatric skin types, dermoscopy-dependent lesions) already documented in the risk management file, with no new category of misleading behaviour emerging. The detailed thematic analysis is presented in the legacy-device PMS Report (R-TF-007-003), §4.7.5 and §6.2.
Supporting context for the benefit-risk assessment:
- The device is designed, labelled and deployed as a clinical decision-support tool whose outputs are interpreted by a supervising healthcare professional; this use condition is a manufacturer-mandated integration requirement specified in the Instructions for Use, not a delegation of safety responsibility to the clinician.
- F3 (overall safety Likert) mean of 4.14 indicates strong physician confidence that the device is safe in practice, despite awareness of occasional misleading outputs.
- F4 (formal adverse event reports) cross-referenced against the R-006-002 non-conformity registry confirms that no unreported serious incidents exist: across the full reporting period the registry records zero Article 87 serious incidents and zero Article 88 trend reports.
- Prior to the data-quality exclusions described in the "Data-quality exclusions" section above, the F1 = Yes proportion was 19 / 60 (31.7 %), marginally above the 30 % threshold. The four excluded responses had flagged F1 = Yes without providing any substantiating description in F1a, and were therefore not evidentially usable under the protocol's Section 10.7 evidence-quality substantiation principle. The drop from 31.7 % to 26.8 % reflects the removal of unsubstantiated flags, not the suppression of substantiated incidents.
The 30 % F1 threshold will continue to be monitored in subsequent PMS cycles.
Sample size adequacy and statistical power
Power calculations for the one-sample t-test (two-sided at alpha = 0.05, providing conservative estimates for the one-sided test specified in the co-primary analysis):
| Scenario | n | Cohen's d | Power |
|---|---|---|---|
| Full sample, small-medium | 60 | 0.4 | 0.943 |
| Full sample, medium | 60 | 0.5 | 0.990 |
| Full sample, large | 60 | 0.8 | 1.000 |
| Remote care questions | 39 | 0.4 | 0.836 |
| Remote care questions | 39 | 0.5 | 0.945 |
| Realistic: 45 respondents | 45 | 0.4 | 0.878 |
| Realistic: 45 respondents | 45 | 0.5 | 0.966 |
| Realistic: 30 respondents | 30 | 0.4 | 0.745 |
| Realistic: 30 respondents | 30 | 0.5 | 0.889 |
| Realistic: 30 respondents | 30 | 0.8 | 0.998 |
| Minimum viable: 20 respondents | 20 | 0.5 | 0.760 |
| Minimum viable: 20 respondents | 20 | 0.8 | 0.980 |
Benefit coverage check
| Benefit | Quantitative questions | Significant vs zero (p < 0.05) | Significant vs MCID (p < 0.05) |
|---|---|---|---|
| 7GH — Diagnostic accuracy | B2, B4, B6 | 3/3 | 3/3 |
| 5RB — Severity assessment | C4, C5 | 2/2 | 2/2 |
| 3KX — Care pathway | D2, D4, D6, D7 | 4/4 | 4/4 |
Quality indicators evaluation
| Indicator | Target | Result | Status |
|---|---|---|---|
| Questionnaire length | ≤13 min | 11–14 min estimated | Acceptable |
| Power for Likert (n=56, d=0.4) | ≥0.80 | 0.930 | Acceptable |
| Records proportion (sensitivity analysis) | ≥30% | 36.0% | Acceptable |
| Real response target | ≥30 respondents | 56 respondents | Acceptable |
| Benefit coverage | All 3 benefits with ≥3 questions | 7GH: 6, 5RB: 5, 3KX: 7 | Acceptable |
| Sub-criteria coverage | All 8 with ≥1 quantitative | 8/8 covered | Acceptable |
| Evidence traceability | Every question mapped to ≥1 benefit | 40/40 mapped | Acceptable |
| Quantitative coverage per benefit | All 3 with ≥2 quantitative | 7GH: 3, 5RB: 2, 3KX: 4 | Acceptable |
| Safety data collection | F1 + F2 + F3 present | 15 misleading, 17 usability issues | Acceptable |
| Likert significance (vs neutral) | ≥8/10 significant | 9/10 | Acceptable |
Go/no-go recommendation
GO. The questionnaire design is validated:
- 9/10 benefit Likert questions are statistically significant (p < 0.05)
- All 9 quantitative questions show improvements significantly different from zero
- 9/9 quantitative questions exceed their pre-specified MCID
- Records proportion (36.0%) supports a meaningful sensitivity analysis
- Statistical power is adequate for the full sample (n=56)
- Safety questions (F1-F3) produce realistic incident rates and high overall safety confidence
- All quality indicators are in the "Acceptable" range
- Every benefit and sub-criterion has sufficient quantitative coverage