
R-TF-012-029 Software Architecture Description

Introduction​

Purpose​

This document describes the software architecture for the medical device software, including the high-level structure, key software components, interfaces, and design decisions. The architecture provides the framework for implementing the software requirements specified in the Software Requirements Specification (SRS).

Scope​

This architecture description covers:

  • High-level software architecture and system decomposition
  • Software elements and their responsibilities
  • Interfaces between software elements and external systems
  • SOUP (Software of Unknown Provenance) components
  • Data flow and control flow
  • Architectural decisions and rationale

Architectural Overview​

What is this for?​

The device offers a workflow to perform an analysis of images presenting skin conditions as specified in the Intended Use.

The workflow offered by the device to the integrators consists of the following main steps:

  1. The user decides whether to perform a diagnosis support or severity assessment.
  • Diagnosis Support: The user uploads up to 3 images of the skin condition for analysis.
  • Severity Assessment: The user collects required subjective information from the patient (e.g., itch intensity, sleep disturbance) and uploads 1 image of the skin condition for analysis.
  2. The device analyzes the provided information by calling the experts associated with the involved AI models and returns the results to the user.

Diagnosis support​

This endpoint allows the user to upload up to 3 images of a skin condition for analysis. The device validates the images for quality and domain relevance before calling the expert associated with the Diagnosis Support AI model. The expert processes the images and returns a response containing:

  • An aggregated ICD probability distribution computed from the set of images (SRS-Q3Q).
  • A per-image ICD diagnosis with their corresponding explainability heat maps (SRS-0AB).
  • The model sensitivity and specificity metrics (SRS-1XR).
  • An entropy score that allows the user to assess the uncertainty of the AI model's predictions (SRS-58W).
  • The following clinical indicators:
    • High priority referral (SRS-71I)
    • The indicator of malignancy in the report (SRS-8HY)
    • The indicator of the image presenting a pigmented lesion (SRS-D08)
    • The indicator of the presence of a condition (SRS-JLM)
    • The indicator of needing an urgent referral (SRS-KAS)

Severity assessment​

The severity assessment workflow involves the user collecting subjective information from the patient (e.g., itch intensity, sleep disturbance) and uploading 1 image of the skin condition. The device validates the image for quality and domain relevance before calling the requested experts that are associated with the Severity Assessment AI models. The experts process the information and return a response containing the requested assessments.

The severity assessment allows the calculation of the following clinical signs and outputs:

  • Binary classification outputs for presence/absence detection of clinically significant features:
    • Wound characteristics: perilesional erythema, wound edge morphology (damaged, delimited, diffuse, thickened, indistinguishable), perilesional maceration, exudate types (fibrinous, purulent, bloody, serous), biofilm-compatible tissue, affected tissue depth (intact skin, dermis-epidermis, subcutaneous, muscle, bone), wound bed tissue composition (necrotic, closed, granulation, epithelial, slough).
  • Segmentation-based quantitative outputs for two-dimensional surface area measurements:
    • Percentage coverage: relative area of detected features normalized to total lesion or anatomical region (e.g., erythema percentage, granulation tissue percentage, wound closure percentage, hair loss percentage, nail lesion percentage, pigmentary alteration percentage).
    • Absolute measurements (when calibration available): surface area in cm², perimeter in cm, maximum length and width dimensions.
    • Segmentation masks: pixel-level delineation with class labels (e.g., background, lesion, healthy tissue) enabling spatial analysis and longitudinal tracking.
    • Applicable segmentation targets include: wound bed area, erythema extent, granulation tissue, biofilm/slough, necrosis, maceration, orthopedic material exposure, bone/cartilage/tendon exposure, hair loss, nail lesions, hypopigmentation/depigmentation, hyperpigmentation.
  • Object detection and counting outputs for discrete lesion enumeration:
    • Absolute counts: number of detected lesions or structures (e.g., inflammatory nodules, abscesses, draining/non-draining tunnels, papules, pustules, cysts, comedones, nodules, hives, hair follicles).
    • Bounding box annotations: spatial coordinates for each detected object with class labels and confidence scores.
    • Density metrics: lesions per unit area when combined with surface area quantification.
  • Multi-class classification outputs for lesion staging and phenotyping:
    • Wound stage classification (Stage 0, I, II, III, IV) according to standardized protocols.
    • Hurley staging (I, II, III) for hidradenitis suppurativa severity.
    • Inflammatory activity status (active vs. quiescent disease).
    • Acne severity grading (IGA scale 0-4: Clear, Almost Clear, Mild, Moderate, Severe).
    • Phenotype identification (e.g., follicular, inflammatory, mixed patterns).

Main functions​

The software implements the following main functions and requirements:

  • Generate an interpretative probability distribution of possible ICD categories by analysing images.
  • Quantitative assessment of dermatological visual sign intensity.
  • Quantitative assessment of dermatological structural features and lesion morphometry.
  • Assess image adequacy on ingestion.
  • Expose the device's functionality through a versioned, network-accessible API.

Main interfaces​

RESTful API​

Overview​

The API Gateway exposes endpoints organized into the following categories:

  • Authentication - User login and token management
  • Clinical - Medical diagnosis and severity assessment
  • Device - Device information and health checks
  • Internal - System monitoring (not included in public API schema)
Authentication​
POST /auth/login​

Authenticate a user with username and password (OAuth2 Password Grant).

Request

Headers:

Header       | Required | Description
Content-Type | Yes      | application/json

Body parameters:

{
  "username": "string", // Required, min length: 1
  "password": "string"  // Required, min length: 1
}

Response

Status: 200 OK

{
  "access_token": "string", // JWT access token
  "token_type": "bearer",   // Token type (default: "bearer")
  "expires_in": 3600        // Token expiration time in seconds
}

Error Responses:

Status           | Description
401 Unauthorized | Invalid username or password
403 Forbidden    | Account is locked
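
As an illustration, a minimal Python client using the httpx SOUP could authenticate as follows; the base URL and credentials are placeholders, not actual deployment values:

import httpx

BASE_URL = "https://device.example.com"  # placeholder host, not the real deployment

# Authenticate with the OAuth2 password grant and keep the returned JWT.
response = httpx.post(
    f"{BASE_URL}/auth/login",
    json={"username": "integrator-user", "password": "secret"},
)
response.raise_for_status()
body = response.json()
access_token = body["access_token"]
print("Token type:", body["token_type"], "- expires in", body["expires_in"], "seconds")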
Clinical​

⚠️ All clinical endpoints require authentication

POST /clinical/diagnosis-support​

Provide one or more base64-encoded images of the suspected lesion from different perspectives and receive a report with possible diagnoses.

Request

Headers:

Header        | Required | Description
Authorization | Yes      | Bearer <access_token>
Content-Type  | Yes      | application/json

Body parameters:

{
  "images": ["base64_encoded_image_string"], // Required, 1-5 images
  "extra_parameters": {
    // Optional
    "common": {
      // Common parameters for all experts
      "key": "value"
    },
    "expert_specific": {
      // Parameters per expert
      "expert_name": {
        "key": "value"
      }
    }
  }
}

Response

Status: 200 OK

{
  "study_aggregate": {
    "findings": {
      "hypotheses": [
        {
          "identifier": "string", // e.g., "psoriasis"
          "concepts": [
            {
              "name": "string", // e.g., "Psoriasis"
              "code": "string", // e.g., SNOMED CT ID "44054006"
              "terminology_system": "string" // e.g., "SNOMED CT"
            }
          ],
          "probability": 0.85, // 0-1 probability
          "explanation": {
            "attention_map": "base64_image" // Saliency heatmap (optional)
          }
        }
      ],
      "risk_metrics": {
        "any_condition_probability": 0.95,
        "malignant_condition_probability": 0.02,
        "pigmented_condition_probability": 0.15,
        "urgent_referral_probability": 0.03,
        "high_priority_referral_probability": 0.08
      }
    }
  },
  "image_analyses": [
    {
      "findings": {
        "hypotheses": [...],
        "risk_metrics": {...}
      }
    }
  ]
}

Error Responses:

Status                  | Description
401 Unauthorized        | Missing or invalid authentication token
403 Forbidden           | User account is not active
503 Service Unavailable | Control Plane unavailable
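
For illustration, the sketch below shows how an integrator might call this endpoint with a single image using httpx; the base URL, file name, and token are placeholders:

import base64
import httpx

BASE_URL = "https://device.example.com"  # placeholder host
ACCESS_TOKEN = "eyJ..."                  # JWT obtained from POST /auth/login

# Encode one image of the suspected lesion as a base64 string.
with open("lesion_front.jpg", "rb") as image_file:
    image_b64 = base64.b64encode(image_file.read()).decode("ascii")

response = httpx.post(
    f"{BASE_URL}/clinical/diagnosis-support",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"images": [image_b64]},
    timeout=60.0,  # model inference may take several seconds
)
response.raise_for_status()
report = response.json()

# Print the ranked diagnostic hypotheses from the study-level aggregate.
for hypothesis in report["study_aggregate"]["findings"]["hypotheses"]:
    print(hypothesis["identifier"], hypothesis["probability"])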
POST /clinical/severity-assessment​

Provide a base64-encoded image of the suspected lesion and the list of experts to consult, and receive a report with the severity assessment of visual signs.

Request

Headers:

Header        | Required | Description
Authorization | Yes      | Bearer <access_token>
Content-Type  | Yes      | application/json

Body parameters:

{
  "image": "base64_encoded_image_string", // Required
  "experts": ["expert_identifier"],       // Required, min 1 expert
  "extra_parameters": {
    // Optional
    "common": {
      "key": "value"
    },
    "expert_specific": {
      "expert_name": {
        "key": "value"
      }
    }
  }
}

Response

Status: 200 OK

{
  "image_analysis": {
    "findings": [
      {
        "sign_identifier": "erythema",
        "overall_severity_summary": "moderate erythema",
        "overall_severity_score": 0.6,
        "intensity": {
          "grade": 2.5,
          "grading_scale": "Legit.Health",
          "confidence": 0.92
        },
        "extent": {
          "regions": [
            {
              "mask": "base64_image",
              "threshold": 0.5,
              "confidence_map": "base64_image"
            }
          ],
          "annotated_image": "base64_image"
        },
        "count": {
          "total": 5,
          "detections": [
            {
              "identifier": "papule",
              "bounding_box": {
                "x": 100,
                "y": 150,
                "width": 50,
                "height": 40
              },
              "confidence": 0.88
            }
          ],
          "annotated_image": "base64_image"
        }
      }
    ]
  }
}

Error Responses:

Status                  | Description
401 Unauthorized        | Missing or invalid authentication token
403 Forbidden           | User account is not active
503 Service Unavailable | Control Plane unavailable
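
A sketch of a severity assessment call follows; the expert identifier "erythema" is illustrative and may not match the identifiers accepted by the deployed device:

import base64
import httpx

BASE_URL = "https://device.example.com"  # placeholder host
ACCESS_TOKEN = "eyJ..."                  # JWT obtained from POST /auth/login

with open("lesion.jpg", "rb") as image_file:
    image_b64 = base64.b64encode(image_file.read()).decode("ascii")

response = httpx.post(
    f"{BASE_URL}/clinical/severity-assessment",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "image": image_b64,
        "experts": ["erythema"],  # illustrative expert identifier
    },
    timeout=60.0,
)
response.raise_for_status()

for finding in response.json()["image_analysis"]["findings"]:
    print(finding["sign_identifier"], finding["overall_severity_score"])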
Device​

⚠️ All device endpoints require authentication

GET /device/about​

Get device information including legal details and manufacturer information.

Request

Headers:

Header        | Required | Description
Authorization | Yes      | Bearer <access_token>

Body parameters: None

Response

Status: 200 OK

{
  "device_name": "string",
  "description": "string",
  "version": "2.3.1.0",
  "release_date": "2024-01-15",
  "manufacturer": {
    "name": "string",
    "description": "string",
    "country_code": "ES",
    "website": "https://example.com"
  },
  "udi": "string",
  "alternate_identifiers": [
    {
      "system": "GMDN",
      "code": "47905",
      "issuing_authority": "GMDN Agency",
      "description": "string"
    }
  ],
  "standards": [
    {
      "standard_name": "ISO 14971",
      "standard_version": "2019"
    }
  ],
  "regulatory_clearances": [
    {
      "regulation": "EU MDR 2017/745",
      "risk_class": "Class IIa",
      "notified_body": "string",
      "authorization_id": "string",
      "authorization_date": "2024-01-01"
    }
  ],
  "claims": [
    {
      "description": "string",
      "reference": "string"
    }
  ],
  "contact_email": "support@example.com",
  "instructions_url": "https://example.com/ifu"
}

Error Responses:

Status                  | Description
401 Unauthorized        | Missing or invalid authentication token
403 Forbidden           | User account is not active
503 Service Unavailable | Control Plane unavailable
GET /device/health​

Check whether the device is operational.

Request

Headers:

Header        | Required | Description
Authorization | Yes      | Bearer <access_token>

Body parameters: None

Response

Status: 200 OK

The response schema is determined by the GetDeviceHealthUseCase implementation.

Error Responses:

Status                  | Description
401 Unauthorized        | Missing or invalid authentication token
403 Forbidden           | User account is not active
503 Service Unavailable | Control Plane unavailable
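
As a brief illustration, both device endpoints can be queried with the same bearer token (base URL and token are placeholders):

import httpx

BASE_URL = "https://device.example.com"       # placeholder host
headers = {"Authorization": "Bearer eyJ..."}  # valid JWT from POST /auth/login

about = httpx.get(f"{BASE_URL}/device/about", headers=headers)
about.raise_for_status()
print(about.json()["device_name"], about.json()["version"])

health = httpx.get(f"{BASE_URL}/device/health", headers=headers)
print("Device operational:", health.status_code == 200)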

Global system views​

Architecture diagram​

The following diagram illustrates the high-level software architecture of the medical device deployed in an AWS environment:

Connections​

The medical device exposes a RESTful API over HTTPS for integrators to interact with its functionalities. This API is the interface through which external systems communicate with the software items known as "experts", which implement the different AI models.

Dataflow​

The following diagram illustrates the high-level data flow within the medical device software architecture:

The following sequence diagram illustrates the interactions between the main software items during a typical user request:

Updatability / Patchability​

The device update process is the following:

  • The development team creates a new software version using the four-digit scheme (W.X.Y.Z) defined in GP-012. Digit X is incremented for major releases or significant feature additions (digit W is reserved for exceptional, fundamental changes), digit Y for minor feature improvements, and digit Z for bug fixes and patches.
  • The new software version is deployed to the production environment after passing all required verification and validation activities.
  • The API version follows an independent scheme (vA.B) decoupled from the software version. API major/minor increments reflect contract changes, and compatibility is documented in the API gateway to keep clients aligned with supported versions.

Security Use Cases​

Controlled access to the API

Integrators access the medical device API using secure, credential-based authentication. Each integrator is assigned unique username and password pairs, which they use to log in via the API's authentication endpoint. Upon successful login, the API issues a signed JWT (JSON Web Token) that grants temporary access to clinical endpoints. This token is valid for a limited period; once it expires, the integrator must re-authenticate to obtain a new JWT and continue accessing the API. This mechanism ensures that only authorized users can interact with the medical device API, and that access is regularly validated.
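
The sketch below illustrates how an integrator might cache the JWT and re-authenticate once it expires; it is an assumption about client-side handling, not part of the device itself, and all names and values are placeholders:

import time
import httpx

BASE_URL = "https://device.example.com"  # placeholder host
CREDENTIALS = {"username": "integrator-user", "password": "secret"}  # placeholders

_token = None
_token_expiry = 0.0


def get_token() -> str:
    """Return a cached access token, re-authenticating when it has expired."""
    global _token, _token_expiry
    if _token is None or time.monotonic() >= _token_expiry:
        response = httpx.post(f"{BASE_URL}/auth/login", json=CREDENTIALS)
        response.raise_for_status()
        body = response.json()
        _token = body["access_token"]
        # Renew slightly before the reported expiry to avoid using a stale token.
        _token_expiry = time.monotonic() + body["expires_in"] - 30
    return _token


def call_clinical_endpoint(path: str, payload: dict) -> dict:
    response = httpx.post(
        f"{BASE_URL}{path}",
        headers={"Authorization": f"Bearer {get_token()}"},
        json=payload,
    )
    response.raise_for_status()
    return response.json()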

Controlled access to user database​

Access to the DynamoDB table containing user records is protected through AWS IAM (Identity and Access Management) and strict service-level authorization. The API service does not use long-lived credentials; instead, it assumes a dedicated IAM role that grants the minimum permissions required to interact with the DynamoDB table. AWS issues temporary, automatically rotated credentials to the API each time the role is assumed, ensuring that database access is both short-lived and tightly scoped.

Only the API can access the DynamoDB table. External systems, integrators, and users have no direct connection to DynamoDB; all operations must pass through the API’s authenticated and audited endpoints. Every request made by the API to DynamoDB is cryptographically signed using AWS Signature Version 4, guaranteeing request integrity and preventing unauthorized calls. This mechanism ensures that user data is only accessible through authorized backend components, that privileges are minimized, and that access is continuously validated via AWS security controls.
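
A minimal sketch of this access pattern is shown below, assuming the service runs under an assumed IAM role whose temporary credentials are exposed through the standard AWS environment; table and attribute names are placeholders, not the actual schema:

import asyncio
import aioboto3

session = aioboto3.Session()  # picks up the temporary credentials of the assumed role


async def get_user(username: str):
    # Every call made through the SDK is signed with AWS Signature Version 4.
    async with session.client("dynamodb") as dynamodb:
        response = await dynamodb.get_item(
            TableName="users",                  # placeholder table name
            Key={"username": {"S": username}},  # placeholder key schema
        )
        return response.get("Item")


if __name__ == "__main__":
    print(asyncio.run(get_user("integrator-user")))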

Controlled access to audit records database​

Audit records are stored in a dedicated DynamoDB table that is protected through AWS IAM and a strict separation of privileges. The API service is granted access to audit storage using a specialized IAM role that provides write-only permissions. When the API needs to record an event, it temporarily assumes this audit-writer role, and AWS issues short-lived credentials that allow the service to store audit entries securely.

Each audit write operation must be executed through the API itself, since no external system or integrator has direct access to the audit table. All writes to DynamoDB are signed using AWS Signature Version 4, ensuring the authenticity and integrity of every audit record. The API cannot modify or delete existing audit entries, which guarantees immutability and preserves a reliable chronological trail of system activity. This mechanism ensures that audit data remains tamper-resistant, access is tightly controlled, and all operations involving audit logs are authenticated and traceable.

Logical architecture overview​

Medical device software system​

The medical device software system is composed of the following main software elements:

  • API interface, which exposes the device's functionalities to integrators through a RESTful API.
  • Control plane, which orchestrates the device's internal operations, manages requests, and coordinates interactions with the experts.
  • Medical report builder for transforming dermatology-expert analysis results into canonical reports.
  • Medical report exporter for exporting generated medical reports in standard formats like JSON.
  • Experts, which are individual software items that implement specific AI models for diagnosis support, severity assessment, and other non-clinical functionalities.

The following diagram illustrates the high-level decomposition of the medical device software system and the relationships between its elements:

API Interface​

The API Interface software item provides an HTTP-based entry point that allows integrator applications to access the medical device's clinical services. It exposes RESTful endpoints for authentication, diagnosis support, severity assessment, and device information retrieval. All requests and responses use JSON format, and protected endpoints require JWT-based authentication. The API Interface delegates clinical processing to the Control Plane and records all API calls for audit purposes.

The API Interface is composed of the following software units:

  1. HTTP Routes: FastAPI router modules (auth, clinical, device, internal) that define endpoint handlers for authentication (/auth/login), clinical analysis (/clinical/diagnosis-support, /clinical/severity-assessment), device information (/device/about, /device/health), and internal status monitoring (/internal/status).

  2. Authentication Dependencies: Middleware components that extract and validate JWT bearer tokens from incoming requests and verify that the authenticated user is active in the system.

  3. Use Cases: Application layer components (LoginUseCase, DiagnosisUseCase, SeverityAssessmentUseCase, GetDeviceLegalInfoUseCase, GetDeviceHealthUseCase, GetDeviceStatusUseCase, RecordCallUseCase) that orchestrate business operations by coordinating domain services and outbound adapters.

  4. Domain Services: The AccountSecurityService implements account lockout policies, managing failed login attempts and determining when to lock accounts based on configurable thresholds.

  5. Outbound Adapters: Implementations of port interfaces including:

    • HTTPControlPlaneClient: HTTP client adapter that communicates with the Control Plane for clinical analysis requests and device information.
    • DynamoDBUserRepository, DynamoDBLockoutRepository, DynamoDBCallRecordRepository: Persistence adapters for user accounts, lockout records, and API call audit logs.
    • Argon2AuthenticationService: Password hashing and verification using the Argon2 algorithm.
    • JwtTokenService: JWT generation and validation for access tokens.
  6. Call Recording Middleware: A Starlette middleware (CallRecorderMiddleware) that intercepts all API requests and responses, capturing metadata (timing, authentication context, HTTP details, client information) and asynchronously persisting audit records (a simplified sketch follows this list).

  7. Dependency Injection Container: A Dependency Injector container (Container) that wires together all services, repositories, use cases, and configuration, ensuring proper initialization and dependency management.
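
The call-recording behaviour described in item 6 can be sketched as follows; the persistence coroutine is a stand-in for the DynamoDB adapter used by the real middleware, and names are illustrative:

import time
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request


async def persist_call_record(record: dict) -> None:
    # Stand-in for the asynchronous DynamoDB persistence used in production.
    print("audit:", record)


class CallRecorderSketch(BaseHTTPMiddleware):
    """Simplified illustration of a call-recording Starlette middleware."""

    async def dispatch(self, request: Request, call_next):
        started = time.monotonic()
        response = await call_next(request)
        await persist_call_record({
            "method": request.method,
            "path": request.url.path,
            "status_code": response.status_code,
            "client_host": request.client.host if request.client else None,
            "duration_ms": round((time.monotonic() - started) * 1000, 2),
        })
        return response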

SOUPs​

The API Interface leverages the following SOUP items:

  • FastAPI: Web framework for building the HTTP API with automatic validation and OpenAPI documentation.
  • Uvicorn: ASGI server for running the FastAPI application.
  • Authlib: JWT encoding and decoding for access token generation and validation.
  • Argon2-cffi: Secure password hashing using the Argon2 algorithm.
  • HTTPX: Asynchronous HTTP client for communication with the Control Plane.
  • aioboto3: Asynchronous AWS SDK client for DynamoDB operations.
  • Pydantic: Data validation and settings management for request/response models and configuration.
  • Dependency Injector: Dependency injection framework for wiring application components.

Interfaces with other software items​

The API Interface interacts with the following software items and external systems:

  • Control Plane: Receives delegated requests for diagnosis support, severity assessment, and device information via HTTP calls. The Control Plane orchestrates downstream processing through expert modules and report generation.
  • Users Database (DynamoDB): Stores user account information including credentials and organization membership.
  • Lockouts Database (DynamoDB): Maintains account lockout state for security enforcement.
  • API Calls Database (DynamoDB): Persists audit records of all API calls for traceability and compliance.
  • Integrator Application: External client applications that consume the API endpoints using JSON over HTTP with JWT bearer authentication.

The following figure depicts the API Interface architecture and its relations with other software items.

Control Plane​

The Control Plane is the orchestration layer of the medical device backend. It receives requests from the API interface and coordinates the internal workflow needed to produce a clinical report. Its primary responsibility is to route user input through the appropriate expert services and report-building pipeline, returning a fully structured canonical report to the caller.

The Control Plane software item comprises the following software units:

  1. REST HTTP Controller: Exposes HTTP endpoints for clinical workflows. The controller receives incoming requests from the API interface, validates input using Pydantic data models from the shared interface models library, and delegates processing to the appropriate use case. It also provides health check endpoints that verify connectivity to downstream services.

  2. Diagnosis Workflow Use Case: Orchestrates the complete diagnosis workflow. It receives user diagnosis input, forwards it to the Expert Orchestrator to obtain expert analysis results, and then sends those results to the Report Builder to generate the final canonical diagnosis report.

  3. Severity Workflow Use Case: Orchestrates the complete severity assessment workflow. It follows the same pattern as the diagnosis workflow, coordinating between the Expert Orchestrator and Report Builder to produce a canonical severity report.

  4. Expert Orchestrator Gateway: An outgoing adapter that communicates with the Expert Orchestrator microservice via HTTP. This gateway abstracts the details of the HTTP transport and provides a domain-oriented interface for invoking expert analysis on user input.

  5. Report Builder Gateway: An outgoing adapter that communicates with the Report Builder microservice via HTTP. This gateway abstracts the HTTP communication required to transform expert results into canonical report structures.

  6. HTTP Client: A shared infrastructure component that manages persistent HTTP connections with retry logic and exponential backoff. It is used by the outgoing gateways to perform HTTP requests to downstream services (a simplified retry sketch follows this list).

  7. Dependency Injection Container: Manages the lifecycle of all application components using the dependency-injector library. It wires configuration, gateways, use cases, and infrastructure components, ensuring proper initialization and shutdown of resources.

  8. Settings Configuration: Loads application settings from environment variables using pydantic-settings. This includes service metadata, server configuration, downstream service URLs, and HTTP client parameters.
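
The retry-with-exponential-backoff behaviour of the shared HTTP client (item 6) can be sketched as follows; function and parameter names are illustrative, not the actual implementation:

import asyncio
import httpx


async def post_with_backoff(
    client: httpx.AsyncClient,
    url: str,
    payload: dict,
    max_attempts: int = 3,
    base_delay: float = 0.5,
) -> httpx.Response:
    """Retry transient failures (network errors, 5xx) with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            response = await client.post(url, json=payload)
            if response.status_code < 500:
                return response
        except httpx.TransportError:
            pass  # network-level failure; retry below
        if attempt < max_attempts - 1:
            await asyncio.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...
    raise RuntimeError(f"{url} still failing after {max_attempts} attempts")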

SOUPs​

The Control Plane directly uses the following SOUP items:

  • FastAPI: A modern Python web framework used to implement the REST API endpoints.
  • Uvicorn: An ASGI server used to run the FastAPI application.
  • Pydantic and pydantic-settings: Used for data validation, serialization, and configuration management.
  • dependency-injector: A dependency injection framework used to manage component wiring and lifecycle.
  • httpx: An asynchronous HTTP client library used for communication with downstream microservices.
  • legithp-interface-models: A shared internal library that provides canonical data models for user input, expert results, and reports.

Interfaces with other software items​

The Control Plane interfaces with the following software items:

  • API Interface: The Control Plane receives HTTP requests from the API interface at the /clinical-workflows/diagnosis and /clinical-workflows/severity endpoints.
  • Expert Orchestrator: The Control Plane invokes the Expert Orchestrator via HTTP to process user input and obtain clinical analysis results from expert AI/ML models.
  • Report Builder: The Control Plane sends expert results to the Report Builder via HTTP to generate structured canonical reports.

The following figure depicts the Control Plane architecture and relations with other software items.

Report Builder​

The Report Builder software item is responsible for transforming expert analysis results into canonical clinical reports. It receives structured outputs from AI/ML experts (via the Control plane) and constructs standardized diagnosis and severity reports that can be consumed by downstream components such as the Report exporter or the API interface. The module applies domain rules, aggregates per-image findings into study-level assessments, calculates risk metrics, and optionally enriches reports with standardized medical terminology codes (ICD-11, SNOMED-CT).

The Report Builder is composed of the following software units:

  1. Generate Diagnosis Report Use Case (generate_diagnosis_report.py): Orchestrates the generation of diagnosis reports. It invokes the derivation service to translate expert results into domain aggregates, optionally enriches hypotheses with medical terminology codes via the Terminology Repository Port, and delegates to the mapper to produce the canonical DTO output.

  2. Generate Severity Report Use Case (generate_severity_report.py): Orchestrates the generation of severity reports from sign-based expert results. It invokes the derivation service and mapper to produce the canonical severity report DTO.

  3. Diagnosis Report Derivation Service (diagnosis_report_derivation.py): Translates DiagnosisExpertResults into domain aggregates. It creates differential diagnoses with ranked hypotheses, extracts risk assessment metrics, and constructs per-image analyses including technical assessments.

  4. Severity Report Derivation Service (severity_report_derivation.py): Translates SeverityExpertResults into domain aggregates. It extracts sign assessments (intensity, extent, count), maps segmentation and detection results, and builds the SeverityReport aggregate.

  5. Domain Aggregates (core/domain/aggregates/): DiagnosisReport and SeverityReport are the aggregate roots that encapsulate report business logic, enforce invariants, and provide methods for querying risk levels and assessable image counts.

  6. Domain Entities (core/domain/entities/): Include DifferentialDiagnosis, Hypothesis, RiskAssessment, ImageAnalysis, SignAssessment, SignIntensity, SignExtent, LesionCount, and TechnicalAssessment. These represent the clinical concepts and findings within a report.

  7. Domain Factories (core/domain/factories/): HypothesisFactory creates ranked hypotheses from model predictions; RiskAssessmentFactory constructs risk assessments from metric dictionaries.

  8. ACL Mappers (core/application/acl/mappers/): DiagnosisReportMapper and SeverityReportMapper translate domain aggregates into canonical DTOs (CanonicalDiagnosisReport, CanonicalSeverityReport) defined in the shared interface models library. The mappers also handle conversion of technical assessments, saliency maps, and terminology codes.

  9. Terminology Repository Port and Adapter (terminology_port.py, json_terminology_adapter.py): Defines an abstract port for medical terminology lookups and provides a JSON file–based adapter that loads ICD-11 and SNOMED-CT mappings from terminology.json.

  10. REST Inbound Adapter (infrastructure/adapters/inbound/rest/): Exposes HTTP endpoints (/reports/diagnosis, /reports/severity) via FastAPI routers. The adapter receives expert results as JSON payloads, invokes the appropriate use case, and returns canonical report DTOs.

  11. Dependency Injection Container (di.py): Configures and wires application dependencies using the dependency-injector library, including the terminology repository and use cases.

SOUPs​

The Report Builder directly uses the following SOUP items:

  • FastAPI: Web framework used to expose the REST API endpoints.
  • Uvicorn: ASGI server used to run the FastAPI application.
  • Pydantic / pydantic-settings: Used for data validation, settings management, and request/response serialization.
  • dependency-injector: Provides the dependency injection container for wiring application components.
  • httpx: HTTP client library (declared as a dependency, available for potential outbound HTTP calls).

Additionally, the Report Builder depends on internal monorepo libraries:

  • legithp-interface-models: Provides shared data models for expert results and canonical reports.
  • legithp-essentials: Provides utility functions such as JSON file reading.

Interfaces with other software items​

The Report Builder interfaces with other software items as follows:

  • Control plane: The Control plane invokes the Report Builder over HTTP, sending DiagnosisExpertResults or SeverityExpertResults payloads and receiving canonical reports. The Control plane uses an HttpReportBuilderGateway adapter to communicate with the Report Builder's REST endpoints.
  • Expert Orchestrator / Experts: The Report Builder does not communicate directly with expert components; instead, it receives aggregated expert results from the Control plane.
  • Report Exporter / API Interface: The canonical reports produced by the Report Builder are returned to the Control plane, which may forward them to the Report exporter or directly to the API interface for delivery to end users.
  • Terminology Data: The Report Builder reads medical terminology mappings from a JSON file (terminology.json) stored locally in the data/ directory.

The following figure depicts the Report Builder architecture and its relations with other software items.

Detector experts architecture​

The Detector expert is a specialized software item that performs object detection on medical images, identifying and localizing clinical findings such as lesions, signs, or anatomical structures. Each detector expert is deployed as an independent HTTP service that exposes a REST API, receives image data, runs a trained deep-learning detection model, and returns structured detection results comprising bounding boxes, confidence scores, class labels, and optionally cropped regions for each detected object. These results are subsequently consumed by the Expert Orchestrator or Control plane for aggregation into clinical assessments and reports.

The detector expert is built on the shared legithp-expert framework library and follows a layered architecture with clear separation between application logic, domain entities, and infrastructure concerns. The main software units are:

  1. REST API layer (rest_app.py, server.py): Defines the FastAPI application factory and Uvicorn server entry point. It instantiates the inference model, registers prediction endpoints (e.g., /detect), and exposes standard operational endpoints (/health, /ready, /model/info, /system/info, /system/resources).

  2. ExpertService (framework platform): A builder class provided by legithp-expert that assembles the FastAPI application with lifecycle management, model loading, health checks, and monitoring infrastructure. It receives an inference model implementation and handler functions for prediction endpoints.

  3. Inference model adapter (YoloInferenceModel): Implements the InferenceModelPort interface and encapsulates loading, device management (CPU/GPU), and inference execution for a YOLO-based detection model. It preprocesses images (color-space conversion), invokes the underlying Ultralytics YOLO model, and returns raw detection results.

  4. Object detector adapter (YoloObjectDetector): Implements the ObjectDetector protocol. It receives raw model outputs and converts them into domain DetectionCollection entities, handling both standard and oriented bounding boxes.

  5. Detection use case (DetectObjectsUseCase): Orchestrates the end-to-end detection workflow: decoding the input image, invoking the object detector, optionally cropping detected regions, rendering an annotated visualization, and mapping domain detections to the interface output model (DetectionAssessment).

  6. Rendering and cropping adapters (OpenCVBoundingBoxRenderer, AlwaysCropStrategy): Infrastructure components that draw bounding boxes on images and extract cropped sub-images for each detection.

  7. Domain mapper (DomainDetectionMapper): An Anti-Corruption Layer adapter that converts domain detection entities to the shared legithp-interface-models output structures (DetectedObject, DetectionResult).

  8. Dependency Injection container (DetectionContainer): Wires together all infrastructure adapters and application services using the dependency-injector library, enabling testability and configuration-driven composition.

  9. Configuration (get_settings, DetectionSettings, YoloParameters): Loads service-level and model-level parameters from environment variables (via Pydantic Settings), including inference thresholds, cropping enablement, server host/port, and model weights location.
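
A minimal sketch of this environment-driven configuration, assuming pydantic-settings and illustrative field names (the actual DetectionSettings and YoloParameters classes may differ), could look like:

from pydantic_settings import BaseSettings, SettingsConfigDict


class DetectionSettingsSketch(BaseSettings):
    """Illustrative environment-driven configuration for a detector expert."""

    model_config = SettingsConfigDict(env_prefix="DETECTOR_")

    confidence_threshold: float = 0.25                    # minimum detection confidence
    enable_cropping: bool = True                          # return cropped regions per detection
    weights_uri: str = "s3://models/detector/weights.pt"  # placeholder weights location
    host: str = "0.0.0.0"
    port: int = 8000


# Setting DETECTOR_CONFIDENCE_THRESHOLD=0.4 in the environment overrides the default.
settings = DetectionSettingsSketch()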

SOUPs​

The detector expert relies on the following external libraries:

  • FastAPI / Uvicorn – HTTP framework and ASGI server for exposing the REST API.
  • Ultralytics (YOLO) – Provides the YOLO model loading, inference, and post-processing routines.
  • PyTorch – Underlying deep-learning runtime for model execution.
  • OpenCV (opencv-contrib-python-headless) – Image processing and rendering of annotated bounding boxes.
  • NumPy / Pillow – Array manipulation and image I/O.
  • Pydantic / Pydantic-Settings – Data validation and environment-based configuration.
  • dependency-injector – Dependency injection container.
  • httpx – HTTP client (used in test clients and for potential upstream calls).
  • boto3 / azure-storage-blob – Cloud storage access for model weights (S3 or Azure Blob).

Interfaces with other software items​

  • Expert Orchestrator / Control plane: The detector expert is invoked via its HTTP REST API (POST /detect) by the Expert Orchestrator or Control plane, which supplies a base64-encoded clinical image and receives a DetectionAssessment response containing the list of detected objects, annotated image, and inference metadata.
  • Shared data models (legithp-interface-models): Input (ExpertInput) and output (DetectionAssessment, DetectedObject, BoundingBox) types are defined in the shared library to ensure contract consistency across all experts and consumers.
  • Model weights storage: At startup, the detector expert loads pre-trained YOLO weights from a configurable storage backend (local filesystem, AWS S3, or Azure Blob Storage) through the ModelWeightsProviderPort abstraction.
  • Report builder: The DetectionAssessment output, including the annotated image and structured detections, may be forwarded to the Report builder for inclusion in patient-facing or clinician-facing reports.

Figure 3.3.X depicts the detector expert architecture and its relations with other software items.

Segmenter experts architecture​

The segmenter expert is a software item responsible for producing pixel-level segmentation masks that quantify the spatial extent of specific clinical signs (e.g., hyperpigmentation, erythema, wound necrosis) in medical images. It receives an encoded image via an HTTP interface, executes a deep learning segmentation model, and returns structured results including probabilistic masks, binary masks, and coverage percentages. These outputs are consumed by the Expert Orchestrator and ultimately by the Report Builder to derive clinical metrics and populate severity reports.

Each concrete segmenter expert (e.g., hyperpigmentation_segmenter, erythema_segmenter) is implemented as a standalone microservice that reuses a shared base module (segmenter_base). The service exposes a single prediction endpoint and delegates all inference logic to the common infrastructure provided by the legithp_expert framework.

The main software units composing a segmenter expert are:

  1. Entry point (main.py): Bootstraps the HTTP server by loading configuration settings and launching Uvicorn via the expert platform's server runner. It defines the host, port, and worker count for the service.

  2. Application factory (app.py): Creates and configures the FastAPI application by invoking the shared factory from segmenter_base. It binds the segmenter-specific configuration (endpoint path, model weights location, segmentation label) to the generic service skeleton.

  3. Configuration module (config.py): Provides segmenter-specific parameters such as the API endpoint path (e.g., /segmenter/hyperpigmentation), the S3 key for model weights, and the segmentation label used in output DTOs. Configuration is loaded from environment variables.

  4. Segmenter factory (factory.py): Assembles the complete service by wiring the model adapter, prediction pipeline, use case, and prediction handler into a FastAPI application. It registers the prediction endpoint and attaches health-check and monitoring infrastructure provided by the platform.

  5. Dependency injection container (composition.py): Configures all dependencies (model adapter, pipeline, image codec, mask renderer, domain-to-DTO mapper) using the dependency_injector library. The container is parameterized with model configuration and rendering style.

  6. Model adapter (model.py): Wraps the neural network architecture (created via segmentation_models_pytorch) and integrates it with the platform's inference port. It handles device placement, weight loading, and health checks.

  7. Prediction pipeline (pipeline.py): Implements the SegmentationPipelinePort interface. It preprocesses images (padding to square, resizing, normalization), invokes the model, and postprocesses outputs (sigmoid activation, resizing back to original dimensions, thresholding, percentage calculation); a sketch of the postprocessing steps follows this list.

  8. Segment image use case (legithp_expert/modules/segmentation): Orchestrates the end-to-end prediction flow: image decoding, pipeline execution, mask rendering, and mapping of domain entities to the SegmentationAssessment interface DTO.

  9. Request handler (handlers.py): Provides the curried handler function expected by the expert platform. It connects incoming HTTP requests to the segment image use case.
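
The postprocessing steps of the prediction pipeline (item 7) can be sketched as follows, assuming a single-channel raw output and illustrative function and key names:

import numpy as np
import cv2


def postprocess_mask(logits: np.ndarray, original_hw: tuple, threshold: float = 0.5) -> dict:
    """Turn raw model output into probabilistic and binary masks plus coverage."""
    # Sigmoid activation -> per-pixel probabilities in [0, 1].
    probabilities = 1.0 / (1.0 + np.exp(-logits.astype(np.float32)))

    # Resize back to the original image dimensions (cv2 expects width, height).
    height, width = original_hw
    probabilities = cv2.resize(probabilities, (width, height),
                               interpolation=cv2.INTER_LINEAR)

    # Threshold to a binary mask and compute the coverage percentage.
    binary_mask = (probabilities >= threshold).astype(np.uint8)
    coverage_percentage = 100.0 * float(binary_mask.mean())

    return {
        "probabilistic_mask": probabilities,
        "binary_mask": binary_mask,
        "coverage_percentage": coverage_percentage,
    }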

SOUPs​

SOUPs and third-party libraries directly used by this software item include:

  • PyTorch – Deep learning framework for model inference and tensor operations.
  • segmentation-models-pytorch – Provides pre-built segmentation architectures (e.g., DeepLabV3+, U-Net++) and encoder backbones.
  • torchvision – Image transformations (resizing) during preprocessing.
  • OpenCV – Image manipulation for mask resizing and rendering.
  • NumPy – Array operations for mask processing and percentage calculation.
  • FastAPI – HTTP framework exposing the REST prediction endpoint.
  • Uvicorn – ASGI server running the FastAPI application.
  • Pydantic – Data validation and serialization for request/response models.
  • dependency-injector – Dependency injection container for wiring components.

Interfaces with other software items​

  • Expert Orchestrator: Invokes the segmenter expert via HTTP POST requests to the prediction endpoint. The orchestrator passes an ExpertInput DTO containing a base64-encoded image and receives a SegmentationAssessment DTO with segmentation results.
  • Report Builder: Consumes the SegmentationAssessment output (via the orchestrator) to extract SignExtent entities containing probabilistic masks, binary masks, and coverage percentages for inclusion in severity reports.
  • Cloud Storage (S3 / Azure Blob): Model weights are retrieved from cloud storage at service startup. The storage adapter is configured via environment variables and managed by the expert platform's model weights provider.
  • Expert Platform (legithp_expert): Provides the foundational infrastructure including the ExpertService builder, model loading lifecycle, health endpoints, and the segmentation module with domain entities, use cases, and rendering components.
  • Interface Models (legithp_interface_models): Defines the canonical DTOs (ExpertInput, SegmentationAssessment, SegmentedObject) shared between the segmenter expert and its consumers.

The following diagram depicts the segmenter expert architecture and its relations with other software items.

Classifier experts architecture​

The classifier expert software item is a sign-severity classification service that determines a continuous severity score from clinical images. Each classifier expert is deployed as a standalone HTTP service exposing a REST endpoint for severity prediction. It receives a base64-encoded image from the Expert Orchestrator or Control plane, preprocesses the image, runs inference through a trained deep-learning classification model, and returns a structured severity score. Multiple classifier expert instances exist in the system—each targeting a specific clinical sign (e.g., crusting, desquamation, erythema, pustule)—but all share the same underlying architecture and codebase through a common classifier_base shared module.

The main software units of the classifier expert software item are:

  1. Application entry point and server runner (main.py): Bootstraps the HTTP server using the Uvicorn ASGI server. Reads environment-based configuration (host, port, number of workers) and launches the FastAPI application.

  2. Application factory (app.py): Creates the FastAPI application instance by delegating to the shared create_classifier_service() factory. Configures logging and initializes the service with the classifier-specific configuration.

  3. Configuration provider (config.py): Defines the classifier-specific parameters including the endpoint path, model weights location (S3 key), and model configuration (architecture, encoder backbone, number of classes, image size, classification label, postprocessing type). Each concrete classifier expert provides its own configuration pointing to its specific model.

  4. Service factory (classifier_base.factory): Instantiates the dependency injection container, wires together model, pipeline, and use-case components, and registers the prediction endpoint on the FastAPI router. Returns a fully configured FastAPI application.

  5. Dependency injection container (classifier_base.composition): Constructs and provides instances of the classification model adapter, classification pipeline, and classification use case, ensuring consistent dependency management throughout the request lifecycle.

  6. Classification use case (classifier_base.application.handlers.ClassifyImageUseCase): Application-layer orchestrator that decodes incoming base64-encoded images via the image codec, invokes the classification pipeline, and serializes the result into the response model (ClassificationResponse).

  7. Prediction pipeline (classifier_base.infrastructure.ml.pipeline.ClassificationPipeline): Coordinates preprocessing, inference, and postprocessing. Preprocessing includes image padding (to square), resizing, and normalization. Postprocessing converts raw model logits to a severity score through configurable strategies (e.g., softmax-weighted mean, sigmoid, argmax); a sketch of the softmax-weighted strategy follows this list.

  8. Model architecture (classifier_base.infrastructure.ml.model.ClassificationModelArchitecture): Defines the neural network topology. Uses a pretrained encoder backbone (e.g., EfficientNet-B2) extracted from segmentation_models_pytorch, removes the decoder/segmentation head, and adds adaptive average pooling plus a fully connected classification layer. Handles ImageNet normalization internally.

  9. Model adapter (classifier_base.infrastructure.ml.model.ClassificationModelAdapter): Platform adapter wrapping the raw PyTorch model. Extends PytorchModelAdapter and exposes predict() for synchronous inference and health_check() for startup validation.

  10. Image codec (legithp_expert.modules.computer_vision.Base64ImageCodec): Decodes incoming base64-encoded image strings into an internal Image domain object suitable for further processing.
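
As an illustration of the softmax-weighted-mean postprocessing strategy named in item 7, the sketch below converts raw logits into a continuous severity score; the class semantics (higher index means higher severity) are an assumption:

import torch


def softmax_weighted_severity(logits: torch.Tensor) -> float:
    """Softmax-weighted mean of class indices as a continuous severity score.

    `logits` is assumed to have shape (num_classes,), where higher class
    indices correspond to increasing severity.
    """
    probabilities = torch.softmax(logits, dim=0)
    class_indices = torch.arange(logits.shape[0], dtype=probabilities.dtype)
    return float((probabilities * class_indices).sum())


# Example: five severity grades, model leaning towards grades 2-3.
print(round(softmax_weighted_severity(torch.tensor([0.1, 0.5, 2.0, 1.8, 0.2])), 2))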

SOUPs​

The classifier expert directly uses the following third-party libraries:

  • FastAPI / Uvicorn: Web framework and ASGI server providing the HTTP REST interface.
  • Pydantic: Data validation and settings management via environment variables and typed configuration classes.
  • PyTorch / torchvision: Deep-learning framework for model definition, tensor manipulation, and inference; torchvision provides image transformation utilities (resizing).
  • segmentation_models_pytorch: Supplies pretrained encoder backbones used as feature extractors for classification.
  • NumPy: Numerical array processing for padding and tensor conversion during preprocessing.
  • Pillow (PIL): Image decoding and manipulation underpinning the base64-to-image conversion.
  • boto3: AWS SDK used for downloading model weights from S3 cloud storage at service startup.

Interfaces with other software items

The classifier expert interacts with the following components:

  • Expert Orchestrator: Invokes the classifier expert over HTTP by posting ExpertInput payloads (containing a base64-encoded image) and receives ClassificationResponse payloads with the computed severity score. The Orchestrator treats all classifier experts uniformly through their standardized REST API.
  • Model weights storage (AWS S3): At startup, the service retrieves pretrained model weights from S3 using the path specified in its configuration.
  • Shared libraries (legithp_expert, legithp_interface_models): The classifier expert depends on the expert core framework for infrastructure components (server lifecycle, model adapters, GPU management, configuration) and on interface models for input/output data contracts.

The following figure depicts the classifier expert architecture and its relations with other software items.

Dynamic behavior of architecture​

User requests a diagnosis support report via API​

This section describes the interactions occurring between the main components when a user creates and starts a new diagnosis support request via the API interface. The following sequence diagram illustrates the dynamic behavior:

The main functions used here are:

  • Use the login endpoint to authenticate and obtain a JWT token.
  • Use the create diagnosis request endpoint to submit a new request with up to 3 images.

User requests a severity assessment report via API​

This section describes the interactions occurring between the main components when a user creates and starts a new severity assessment request via the API interface. The following sequence diagram illustrates the dynamic behavior:

The main functions used here are:

  • Use the login endpoint to authenticate and obtain a JWT token.
  • Use the create severity assessment request endpoint to submit a new request with 1 image and patient info.

Justification of architectural decisions​

System architecture capabilities​

Performance​

The layered microservice architecture enables horizontal scaling of individual experts and the report builder, allowing the system to handle high throughput and low latency requirements. Experts can be deployed on GPU-enabled instances to accelerate inference times for models. The use of asynchronous HTTP communication between components minimizes blocking and maximizes resource utilization.

This layered architecture also ensures that performance-critical paths, such as model inference, are isolated within dedicated services. This allows for targeted optimizations (e.g., batching requests, model quantization) without impacting other system components.

User safety​

The architecture does not interact directly with end users, so it does not provide user-facing safety features. However, the modular design allows for rigorous testing and validation of individual components (e.g., experts, report builder) to ensure accurate and reliable outputs. The use of well-defined data contracts and DTOs minimizes the risk of data corruption or misinterpretation.

Software security​

The source code of the system is stored in a private monorepo with controlled access; it is peer-reviewed before merging and analyzed by GitHub's code quality gate. Each microservice exposes only the necessary endpoints, reducing the attack surface.

Adaptability​

The modular microservice architecture allows for easy addition, removal, or modification of experts and other components. New experts can be developed and deployed independently without affecting existing services. The use of shared libraries (e.g., legithp_expert, legithp_interface_models) promotes code reuse and consistency across services. The dependency injection pattern used throughout the services enhances testability and allows for easy swapping of implementations.

Network architecture capabilities​

Interoperability​

The use of standard HTTP/REST interfaces for communication between components ensures interoperability with external systems and services. The API interface exposes well-documented endpoints that can be easily integrated with third-party applications. The use of shared data models (legithp_interface_models) ensures consistent data representation across all components.

Communication reliability, data integrity, and confidentiality​

The architecture employs HTTPS for secure communication between components, ensuring data confidentiality and integrity during transmission. Each microservice can implement its own retry and timeout mechanisms to handle transient network failures, enhancing overall communication reliability. Authentication and authorization mechanisms (e.g., JWT tokens) are used to control access to sensitive endpoints.

SOUP integration​

The SOUPs are listed in the SOUP sections above. Each SOUP has been evaluated for compatibility, security, and performance to ensure safe integration into the system. The modular architecture allows for easy replacement or upgrading of SOUP components as needed.

All the information contained in this QMS is confidential. The recipient agrees not to transmit or reproduce the information, neither by himself nor by third parties, through whichever means, without obtaining the prior written permission of Legit.Health (AI Labs Group S.L.)